Zimki is growing - excellent.
These are my thoughts on why a grid of utility-based providers of an open source platform is necessary, and on the divide in IT between competitive advantage (CA) and cost of doing business (CODB).
No service is ever 100% available, hence the widespread use of the n+x redundancy model, from stand-alone (n+0) to the simple cluster (n+1) and beyond.
This extends not just to hardware but to providers. Hence the greater the number of providers in a cloud and the more distributed the application, the greater the resilience. In an n+1 case, one utility provider would act as the primary source and another as the secondary.
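To make the n+1 case concrete, here is a minimal sketch in Python - the provider names and the is_available / deploy interface are purely hypothetical, since no such standard API exists - of an application falling over from a primary utility provider to a secondary one:

```python
# A minimal sketch of n+1 failover across utility providers.
# Provider names and the is_available()/deploy() interface are
# hypothetical - invented here purely for illustration.

class Provider:
    def __init__(self, name):
        self.name = name
        self.up = True

    def is_available(self):
        return self.up

    def deploy(self, app):
        print(f"deploying {app} to {self.name}")

# Primary and secondary utility providers (n+1).
providers = [Provider("utility-a"), Provider("utility-b")]

def deploy_with_failover(app):
    for provider in providers:        # try providers in priority order
        if provider.is_available():
            provider.deploy(app)
            return provider
    raise RuntimeError("no provider available")

providers[0].up = False              # simulate failure of the primary
deploy_with_failover("my-app")       # falls over to utility-b
```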
By creating a utility IT environment where applications & data can simply switch from one provider to another, you enable two things to happen.
Firstly, you create a competitive utility IT environment - where price & quality of service (QoS) can be compared.
Secondly, you should be able to create a greater level of resilience at the same price.
As far as I am aware, it is estimated that only 15% of company data centre capacity is utilised - in other words, a company pays for roughly 6.7x the capacity it actually uses. A utility provider running at high utilisation, even charging a 100% markup on raw cost, can supply the same used capacity at roughly 2x raw cost - so for the same price each company can access three equivalent utility data centres (n+2). Assuming perfect balancing of supply and demand, and discounting any economies of scale, an n+1 model will therefore give more resilience at less cost on a utility basis.
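A rough back-of-the-envelope calculation makes the claim explicit - the 15% utilisation and 100% markup figures are the assumptions stated above, not measured data:

```python
# Back-of-the-envelope comparison of owned vs utility data centre cost.
# Assumptions from the text: 15% utilisation of owned capacity,
# and a utility provider charging a 100% markup on raw cost.

workload = 1.0         # units of capacity actually used
utilisation = 0.15     # fraction of owned capacity in use
markup = 1.0           # utility provider's markup on raw cost (100%)

owned_cost = workload / utilisation       # pay for ~6.7 units to use 1
utility_cost = workload * (1 + markup)    # pay 2 units per unit used

print(f"owned data centre cost:   {owned_cost:.2f}")    # ~6.67
print(f"one utility provider:     {utility_cost:.2f}")  # 2.00
print(f"providers for same price: {owned_cost / utility_cost:.1f}")
# ~3.3, i.e. three providers (n+2) for the price of one owned centre
```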
This requires simple transfer between providers and open competition. If we spend over $2 trillion p.a. on IT, and the majority of that IT is CODB with little or no strategic value, then such cost savings will become critical.
This is why I believe there will be a shift towards a cloud of utility providers, competing on price & QoS, with simple transfer of applications and data between providers. In the same way, it is likely that most CODB apps will become generic and exist somewhere on the cloud.
Price & competitiveness will force the issue. As the ubiquity of IT increases, so more of it will shift towards CODB.
Scarcity is the key to differentiation and a source of advantage, not ubiquity.
Everyone has ERP/CRM etc. These systems are not a source of differentiation, and the focus should be on making them "as cheap as chips".
In the long run, the utility model is most likely to be the only one left standing, but the driving force will be a greater understanding that the majority of IT is CODB, and that price / QoS are the critical issues.
The days of added business value are limited for the majority of systems.
The minority which is genuinely novel and new, and therefore can be seen as a source of competitive advantage, will most likely shift to worth based development (WBD) methods - where reward is directly related to a metric of business value. This is all the more achievable when some of the risks (such as hosting / operating costs) are removed - however, it still requires a change in mindset to distinguish CA from CODB.
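As a purely illustrative sketch of the WBD idea - the fee structure and all the numbers here are invented, not any real contract:

```python
# A toy worth-based development reward: the supplier takes a small
# base fee plus a fixed share of the measured business value
# delivered. All figures are invented for illustration only.

def wbd_reward(measured_value, base_fee=10_000, value_share=0.10):
    """Reward = base fee + a share of the business value metric."""
    return base_fee + value_share * measured_value

# If the new service generates 500k of measurable value, the supplier
# earns 10k + 50k = 60k; if it generates nothing, just the base fee.
print(wbd_reward(500_000))   # 60000.0
print(wbd_reward(0))         # 10000.0
```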
The transitional stage (the movement from CA to CODB) is ripe for open sourcing, to avoid the cost of transition associated with being left on a non-standard product - however, timing on this will always be critical.
So in general (and this is what we discussed back at Euro Foo'04, a rehash of a report I wrote back in 2002) - the ideal is roughly:-
Characteristics :-
- novel and new
- relevant
- potential & measurable fast return
- uncertain
Conclusion: possible source of CA
Approach: build using a WBD [minimise risk, share reward] method. If it becomes successful and competitors appear to be building equivalents, open source the entire service, allow all competitors to copy it, and attempt to establish it as the standard product. [Avoidance of the cost of transition]
Characteristics :-
- you've heard of it or worked on before
- lots of companies have it
- a generic term exists to describe it
- often called "strategic"
- you believe you need it
- considered as "best practice"
Conclusion: most likely CODB
Approach: "cheap as chips" - use a generic product run on a utility service, and avoid customisation.
This is based on my comments on Carr's blog about Google whacking the IT industry.