Friday, November 19, 2010

All in a word.

In my previous post, I provided a more fully fledged version of the lifecycle curve that I use to discuss how activities change. I've spoken about this for many years but I thought I'd spend a little time focusing on a few nuances.

Today, I'll talk about the *aaS misconception - a pet hate of mine. The figure below shows the evolution of infrastructure through different stages. [The stages are outlined in the previous post]

Figure 1 - Lifecycle


I'll note that service bureaus started back in the 1960s and we have a rich history of hosting companies dating from well before the current "cloud" phenomenon. This causes a great deal of confusion over who is and who isn't providing cloud.

The problem is the use of the *aaS terms such as Infrastructure as a Service. Infrastructure clouds aren't just about Infrastructure as a Service, they're about Infrastructure as a Utility Service.

Much of the confusion has been caused by the great renaming of utility computing to cloud, which is why I'm fairly consistent on the need to return to Parkhill's view of the world (The Challenge of the Computer Utility, 1966).

Cloud exists because infrastructure has become ubiquitous and well defined enough to support the volume operations needed for provision of a commodity through utility services. The commodity part of the equation is vital to understanding what is happening and it provides the distinction between a VDC (virtual data centre) and cloud environments.

If you're building an infrastructure cloud (whether public or private) then I'll assume you've got multi-tenancy, APIs for creating instances, utility billing and you are probably using some form of virtualisation. Now, if this is the case then you're part of the way there, so go check out your data centre.
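
Before the checklist below, a quick aside to make the "utility billing" part concrete: a minimal sketch of metered, pay-per-use billing. The names and the rate are my own hypothetical stand-ins, not any particular provider's API.

```python
# Sketch of utility billing: meter consumption per tenant and charge
# per unit used, like electricity. Rate and names are assumptions.

from collections import defaultdict

RATE_PER_VM_HOUR = 0.05            # $/hr for a standard instance (assumed)

usage_hours = defaultdict(float)   # tenant -> metered VM-hours

def meter(tenant, vm_hours):
    """Record consumption as it happens - no up-front capacity purchase."""
    usage_hours[tenant] += vm_hours

def bill(tenant):
    """Charge only for what was consumed: the essence of a utility."""
    return usage_hours[tenant] * RATE_PER_VM_HOUR

meter("acme", 3 * 24 * 10)                 # 3 instances, 24 hrs/day, 10 days
print(f"acme owes ${bill('acme'):.2f}")    # -> acme owes $36.00
```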

IF :-
  • your data centre is full of racks or containers each with volumes of highly commoditised servers
  • you've stripped out almost all physical redundancy because frankly it's too expensive and only exists because of legacy architectural principles born of the high MTTR (mean time to repair) for replacement of equipment
  • you're working on the principle of volume operations and provision of standardised "good enough" components with defined sizes of virtual servers
  • the environment is heavily automated
  • you're working hard to drive even greater standardisation and cost efficiencies
  • you don't know where applications are running in your data centre and you don't care
  • you don't care if a single server dies (a sketch of this principle follows below)

... then you're treating infrastructure like a commodity and you're running a cloud.
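
To make the automation and "don't care if a single server dies" points concrete, here's a minimal sketch of the kind of reconciliation loop implied: dead servers aren't repaired, they're destroyed and replaced from the pool. FakeCloud and its methods are hypothetical stand-ins for a real provider's API.

```python
# Commodity failure handling: cull the dead, top the pool back up.
import random

class FakeCloud:
    """In-memory stand-in for a provider's provisioning API (hypothetical)."""
    def __init__(self):
        self._instances = {}   # instance id -> healthy?
        self._next_id = 0

    def create_instance(self, size="standard"):
        # one standard "good enough" size, per the checklist above
        self._next_id += 1
        self._instances[self._next_id] = True
        return self._next_id

    def destroy_instance(self, instance_id):
        self._instances.pop(instance_id, None)

    def list_instances(self):
        return dict(self._instances)   # copy, safe to iterate while mutating

    def inject_failures(self, rate=0.05):
        # simulate commodity hardware dying now and then
        for iid in self._instances:
            if random.random() < rate:
                self._instances[iid] = False

def reconcile(cloud, desired_count):
    """One control-loop pass: bin the dead, replace from the pool.
    Note what's absent: no repair tickets, no per-server care, no
    concern for *which* physical box anything lands on."""
    for iid, healthy in cloud.list_instances().items():
        if not healthy:
            cloud.destroy_instance(iid)          # don't fix it, bin it
    shortfall = desired_count - len(cloud.list_instances())
    for _ in range(shortfall):
        cloud.create_instance(size="standard")

if __name__ == "__main__":
    cloud = FakeCloud()
    for tick in range(5):
        cloud.inject_failures()
        reconcile(cloud, desired_count=100)
        print(f"tick {tick}: {len(cloud.list_instances())} healthy instances")
```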

The economies of scale you can make with your cloud will vary according to size; this is something you'll have to accept. But when dealing with scale you should be looking at :-
  • operating not on the basis of servers but of racks or containers i.e. when enough of a rack is dead you pull it out and replace it with a new one
  • your TCO (incl. hardware/software/people/power/building ...) for providing a standard virtual server is probably somewhere between $200 and $400 per annum, and you're trying to make it less (a rough worked example follows below).
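
By way of illustration, here's a rough back-of-the-envelope version of that TCO sum. Every figure below is an assumption of mine rather than a real breakdown; the point is the shape of the calculation, not the numbers.

```python
# Illustrative arithmetic only - all inputs are made-up assumptions.
server_capex = 2500               # commodity server cost, $ (assumed)
amortisation_years = 3            # written off over three years (assumed)
vms_per_server = 8                # standard "good enough" virtual servers (assumed)
overheads_per_server = 400        # power/cooling/space/network, $/yr (assumed)
people_software_per_server = 250  # staff & software amortised over volume, $/yr (assumed)

hardware_per_year = server_capex / amortisation_years
tco_per_server = hardware_per_year + overheads_per_server + people_software_per_server
tco_per_vm = tco_per_server / vms_per_server
print(f"TCO per standard virtual server: ${tco_per_vm:.0f} per annum")
# -> about $185 per annum with these made-up inputs, i.e. the $200-$400
#    band is plausible, and every term in the sum is a target for reduction
```
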
Obviously, you might make compromises for reasons of short-term educational barriers (i.e. to encourage adoption). Examples include: the ability to know where an application is running, the ability to move an application from one server to another, or even a highly resilient section to cope with the many legacy systems built on old architectural principles such as scale-up and N+1. Whilst these are valuable short-term measures and many niche markets will be carved out based upon such capabilities, they incur costs and ultimately aren't needed.

Cost and variability are what you want to drive out of the system ... that's the whole point about a utility. Anyway, rant over until next week.

Sunday, November 14, 2010

IT Extremists

The problem with any transition is that inevitably you end up with extremists; cloud computing and IT are no exception. I thought I'd say a few words on the subject.

I'll start by highlighting some points regarding the curve which I use to describe the underlying transition (evolution) behind cloud. I'm not going to simplify the graph quite as much as I normally do, but then I'll assume this isn't the first time readers have seen it.

Figure 1 - Lifecycle



The points I'll highlight are :-
  1. IT isn't one thing, it's a mass of activities (the blue crosses)
  2. All activities are undergoing evolution (commonly known as commoditisation) from innovation to commodity.
  3. As activities shift towards more of a commodity, the value is in the service and not the bits. Hence the use of open source has natural advantages, particularly in the provision of a marketplace of service providers.
  4. Commoditisation of an activity not only enables innovation of new activities (creative destruction), it can accelerate the rate of innovation (componentisation) of higher order systems and even accelerate the process of evolution of all activities (increase communication, participation etc).
  5. Commoditisation of an activity can result in increased consumption of that activity through price elasticity, a long tail of unmet demand, increased agility and co-evolution of new industries. These are the principal causes of Jevons' paradox (a toy calculation follows this list).
  6. As an activity evolves between different stages, risks occur including disruption (including previous relationships, political capital & investment), transition (including confusion, governance & trust) and outsourcing risks (including suitability, loss of strategic control and lack of pricing competition).
  7. Benefits of the evolution of an activity are standard and include increased efficiencies (including economies of scale, balancing of heterogeneous demand etc), the ability of users to focus on core activities, increased rates of agility and tighter linking between expenditure and consumption.
  8. Within a competitive ecosystem, adoption of a more evolved model creates pressure for others to adopt (Red Queen Hypothesis).
  9. The process of evolution is itself driven by end user and supplier competition.
  10. The general properties of an activity change as it evolves from innovation (i.e. dynamic, deviates, uncertain, source of potential advantage, differential) to more of a commodity (i.e. repeated, standard, defined, operational efficiency, cost of doing business).
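
To put some toy numbers on point 5: with a constant-elasticity demand curve, a fall in price increases consumption, and where elasticity exceeds 1 the total spend rises as well. The curve and its parameters below are purely illustrative assumptions, not measurements.

```python
def demand(price, k=1000.0, elasticity=1.5):
    """Constant-elasticity demand: consumption Q = k * P**(-e)."""
    return k * price ** (-elasticity)

for price in (1.00, 0.50, 0.25):   # utility provision drives price down
    q = demand(price)
    print(f"price {price:.2f}: consumption {q:7.0f}, total spend {price * q:7.0f}")

# price 1.00: consumption    1000, total spend    1000
# price 0.50: consumption    2828, total spend    1414
# price 0.25: consumption    8000, total spend    2000
# Consumption grows faster than price falls, so total spend rises:
# cheaper compute means more compute used, not less money spent.
```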

The above is a summary of some of the effects; however, I'll use this to demonstrate the extremist views that appear in our IT field.

Private vs Public Cloud: in all other industries which have undergone this transition, a hybrid form (i.e. public + private) appeared and then the balance between the two extremes shifted towards more public provision as marketplaces developed. Whilst private provision didn't (in general) achieve the efficiencies of public provision, it could be used to mitigate transitional and outsourcing risks. Cloud computing is no exception: hybrid forms will appear purely for reasons of balancing benefits vs risks, and over time the balance between private and public will shift towards public provision as marketplaces form. Beware ideologists saying cloud will develop as just one or the other; history is not on their side.

Commoditisation vs Innovation: the beauty of commoditisation is that it enables and accelerates the rate of innovation of higher order systems. The development of commodity provision of electricity resulted in an explosion of innovation in things which consumed electricity. This process is behind our amazing technological progress over the last two hundred years. Beware those who say commoditisation will stifle innovation; history says the reverse.

IT is becoming a commodity vs IT isn't becoming a commodity: IT isn't one thing, it's a mass of activities. Some of those activities are becoming a commodity and new activities (i.e. innovations) are appearing all the time. Beware those describing the future of IT as though it's one thing.

Open Source vs Proprietary: each technique has a domain in which it has certain advantages. Open source is peculiarly powerful in accelerating the evolution of an activity towards being a commodity. The two approaches are not mutually exclusive, i.e. both can be used. However, as activities become provided through utility services, the economics of the product world don't apply, i.e. most of the wealthy service companies in the future will primarily be using open source and happily buying up both open source and proprietary groups. This is diametrically opposed to the current product world, where proprietary product groups buy up open source companies. Beware the open source vs proprietary viewpoint and the application of old product ideas to the future.

I could go on all night and pick on a mass of subjects including Agile vs Six Sigma, Networked vs Hierarchical, Push vs Pull, Dynamic vs Linear ... but I won't. I'll just say that in general, where there exist two opposite extremes, the answer normally involves a bit of both.