Today, I'll talk about the *aaS misconception - a pet hate of mine. The figure below shows the evolution of infrastructure through different stages. [The stages are outlined in the previous post]
Figure 1 - Lifecycle
I'll note that service bureaus started back in the 1960s and we have a rich history of hosting companies dating from well before the current "cloud" phenomenon. This causes a great deal of confusion over who is and who isn't providing cloud.
The problem is the use of the *aaS terms such as Infrastructure as a Service. Infrastructure clouds aren't just about Infrastructure as a Service, they're about Infrastructure as a Utility Service.
Much of the confusion has been caused by the great renaming of utility computing to cloud, which is why I'm fairly consistent on the need to return to Parkhill's view of the world (The Challenge of the Computer Utility, 1966).
Cloud exists because infrastructure has become ubiquitous and well defined enough to support the volume operations needed for provision of a commodity through utility services. The commodity part of the equation is vital to understanding what is happening, and it provides the distinction between a VDC (virtual data centre) and a cloud environment.
If you're building an infrastructure cloud (whether public or private) then I'll assume you've got multi-tenancy, APIs for creating instances and utility billing, and you're probably using some form of virtualisation. If this is the case then you're part of the way there, so go and check out your data centre.
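To make those provisioning APIs concrete, here's a minimal sketch of what creating an instance and checking metered usage might look like from the consumer's side. The endpoint, token, instance size and billing fields are all hypothetical, not any particular provider's API:

```python
import requests  # widely used third-party HTTP client

API = "https://cloud.example.com/v1"            # hypothetical provider endpoint
HEADERS = {"Authorization": "Bearer MY_TOKEN"}  # hypothetical credentials

# Ask for a standardised, "good enough" instance size - no custom hardware,
# no choice of placement; the provider decides where it runs.
resp = requests.post(f"{API}/instances", headers=HEADERS,
                     json={"size": "standard-1", "image": "ubuntu-lts"})
resp.raise_for_status()
print("instance id:", resp.json()["id"])

# Utility billing: you're metered on consumption, not on servers owned.
usage = requests.get(f"{API}/billing/usage", headers=HEADERS).json()
print("instance hours this month:", usage["instance_hours"])
```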
IF :-
- your data centre is full of racks or containers each with volumes of highly commoditised servers
- you've stripped out almost all physical redundancy because frankly it's too expensive; it only ever existed because of legacy architectural principles born of the high MTTR for replacing equipment
- you're working on the principle of volume operations and provision of standardised "good enough" components with defined sizes of virtual servers
- the environment is heavily automated
- you're working hard to drive even greater standardisation and cost efficiencies
- you don't know where applications are running in your data centre, and you don't care
- you don't care if a single server dies (see the sketch after this list)
... then you're treating infrastructure like a commodity and you're running a cloud.
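To illustrate what "heavily automated" and "don't care if a single server dies" mean in practice, here's a minimal sketch of the kind of control loop involved. The inventory structure and function names are hypothetical, and real schedulers are far more involved:

```python
# Hypothetical inventory of servers and their liveness, e.g. from heartbeats.
servers = {"srv-01": True, "srv-02": False, "srv-03": True}

def reschedule_instances(server_id):
    """Hypothetical: restart the dead server's instances elsewhere in the pool."""
    print(f"rescheduling instances off dead server {server_id}")

# The automation treats servers as disposable: no ticket, no repair visit,
# the workload simply moves and the dead box stays where it is.
for server_id, alive in servers.items():
    if not alive:
        reschedule_instances(server_id)
```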
The economies of scale you can make with your cloud will vary according to size; that's something you've come to accept. But when dealing with scale you should be looking at :-
- operating not on the basis of servers but of racks or containers, i.e. when enough of a rack is dead you pull it out and replace it with a new one
- your TCO (incl. hardware, software, people, power, building ...) for providing a standard virtual server is probably somewhere between $200 and $400 per annum, and you're trying to make it less (a rough sanity check follows this list)
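As a back-of-envelope check on that figure, here's the sort of arithmetic involved. Every number below is an illustrative assumption, not data from this post:

```python
# Illustrative assumptions only - adjust for your own data centre.
server_cost = 3000             # USD, commodity server
amortisation_years = 3         # hardware written off over three years
power_cooling_per_year = 400   # USD per server per year
staff_building_per_year = 350  # share of people, building, network per server
vms_per_server = 6             # standard "good enough" virtual servers per host

annual_server_tco = (server_cost / amortisation_years
                     + power_cooling_per_year
                     + staff_building_per_year)
per_vm = annual_server_tco / vms_per_server
print(f"annual TCO per virtual server: ${per_vm:.0f}")  # -> $292
```

Your mix of assumptions will differ, but at volume the answer should land somewhere in that range and keep falling.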
Obviously, you might make compromises to deal with short-term educational barriers (i.e. to encourage adoption). Examples include: the ability to know where an application is running, to move an application from one server to another, or even a highly resilient section to cope with the many legacy systems built on old architectural principles such as Scale-up and N+1. Whilst these are valuable short-term measures, and there will be many niche markets carved out based upon such capabilities, they incur costs and ultimately aren't needed.
Cost and variability are what you want to drive out of the system ... that's the whole point about a utility. Anyway, rant over until next week.