Friday, July 30, 2010

Private vs Public clouds

I thought this argument had been settled a long time ago; it seems not. So, once more, dear friends, I will put on my best impression of a stuck record. First, what is the difference between a public and a private cloud?
  • A public cloud (the clue is in the name) is open to the public.
  • A private cloud (the clue is in the name) is private to some set of people.
Naturally, there are shades of grey. For example, the set of people for which a private cloud is private might be one person, a department, a company, a community, a nation or some collection of the above. It is common to use a variety of notations (community, government etc.) to distinguish these constraints on use, i.e. which subset of people is allowed to use it.

There is another side to this which is your relationship to the provider. It is either :-
  • external to you and therefore controlled, operated and run by another party.
  • internal to you which means it is controlled, operated and run by yourself.
Now, once again there are shades of grey because it is perfectly possible for a community of companies to build a shared cloud environment. Examples of the notation include :-
  • AWS offers a public cloud which is external to everyone but Amazon.
  • Eucalyptus offers technology for a company to build a private cloud which is internal to that company.
You could write a list with examples of each but there is little point as no-one uses this notation. Instead in common parlance we tend to use the term public cloud with a single counterpoint of private cloud to mean a cloud where an organisation makes up the subset of private users and the cloud is provided internally to that organisation. Now we have our bearings on the terms, this leaves a question ...

Why use a private cloud?

A private cloud (using the common meaning) is one that you control and operate. It hence overcomes - or at least creates the illusion of overcoming - many transitional risks such as governance, trust & security of supply. However, it does so at the potential loss of the economies of scale found in public clouds combined with additional costs such as planning, administration and management.

The choice over whether to use one type of cloud or another is always one of benefits vs risks (whether disruption, transition or outsourcing risks). A hybrid cloud strategy simply refers to using a combination of both public and private clouds to maximise benefits for a given appetite of risk.

Naturally, the actual risk can change with a variety of events. For example, the formation of competitive cloud marketplaces with easy switching between multiple providers reduces outsourcing risks (e.g. lack of pricing competition, loss of strategic control, lack of second sourcing options).

For a consumer of cloud services, the ideal scenario is multiple providers of the same service, the option to implement your own environment and no loss of strategic control or dependency on a single vendor. For this scenario to happen, the technology must be open sourced and hence the technology owners must first realise that in this cloud world value isn't generated through bits of software but instead through services.

In the same way that it took a book company to disrupt the hosting world by offering commoditised infrastructure services, a hosting company is now trying to do the same to the world of technology vendors through open source. This is just the start and whilst OpenStack is currently focused on the infrastructure layer, expect it to move up the computing stack in short order.

There are four companies who in my mind exemplify this whole commodity approach - Rackspace, Amazon, Google and Canonical. I expect they will become titans in this space.

-- 5th January 2017

In 2017, there still exists a surprising view that private cloud is more than transitional. Whilst Amazon, Google and Canonical have emerged as major players (in the case of Amazon, a Titan), the story of OpenStack and Rackspace was less rosy. A mix of poor strategic gameplay led to OpenStack becoming firmly entrenched in the private cloud space and though it moved up the stack a bit, it never moved into the platform space. That charge has been left to Cloud Foundry, which is now facing off against Amazon's effort - Lambda. Rackspace lost the public cloud battle but it has re-invented itself around Amazon.

Monday, July 26, 2010

OSCON 2010

I thoroughly enjoyed the OSCON cloud summit and the talk that I gave at OSCON - the audiences were fantastic and the organisation was superb (huge thanks to Edd, Allison and the O'Reilly crew for making this happen).

I'm really proud to have played my small part in this event as the MC for the day, along with John Willis.

I haven't yet talked a great deal about my research, but the keynote at OSCON gives a taste of it - so I thought I'd link to it here. Those who know me also know that this has been a hobby horse of mine over the last decade. It's finally good to spend some focused time on it, though of course these ideas are far from new.

A couple of final notes :-

  • Utility services are just a domain within the commodity phase of an activity's evolution. There are constraints which will prevent a commodity being provided through services. I sometimes plot on the graph a wider "services" stage, however for the sake of simplicity I've left this out.
  • The stages of lifecycle are approximate only i.e. this is where products appear, this is where utility services generally appear etc.
  • Multiple activities can be bundled into a single product. For example the iPhone is a combination of different activities from personal communication to digital recorder to web surfing to time keeper to ... the list is quite long. These activities are all evolving and being implemented by others, which forces Apple to focus on two areas :- the bundling of new innovative activities into the iPhone and application innovation through the App Store. The former is expensive and risky. The latter requires development of a strong ecosystem, ideally with users being allowed to create and distribute their own applications. The manner in which Apple manages this is less than ideal and they now face severe disruption from Android. As there is also little exploitation of the wider manufacturers' ecosystem, Apple has cornered itself into creating highly costly & risky innovations with weak leveraging. IMHO, they are in trouble and this should become painfully clear in the next five years unless they change.
  • The ILC model is generally applicable. I picked examples from cloud providers but equally I could have discussed Canonical with Ubuntu. Canonical ruthlessly commoditises activities to provide a stable core and I'd strongly argue that Rackspace & Canonical point to the future direction of IT.
  • Open source is the natural end state for any activity described by software which is ubiquitous and well defined. This doesn't mean that open source can't be used earlier, of course it can and there are numerous tactical advantages of doing so, along with benefits such as increased collaboration. However, what I am saying is that by the time an activity has reached the commodity phase then only open source makes sense. Those who have been questioning whether "cloud is the death of open source" have a poor understanding as to what is actually happening.
  • Open core is in general a tactical anomaly. On the one hand, if successful, it will cause widespread distribution (driving an activity towards more of a commodity) and yet it attempts to generate revenue through proprietary elements which is against the natural state that open core is forcing activities towards. A number of companies have used this approach successfully and have even been bought for huge sums by large companies. However, it still remains a tactical anomaly which attempts to achieve both the benefits of open and closed by being both.
  • The S-Curves I use are not time based. If you follow the evolution of an activity through specific phases of its lifecycle and plot adoption against time, you will derive a set of non-uniform S-Curves for Rogers' diffusion of innovation. It's important to realise that the accelerators I mentioned (open source, participation, network effects) along with others I didn't mention (communication mechanisms, co-evolution etc) alter the speed at which an activity evolves. Whilst this doesn't impact the S-Curves I use, it does compact Rogers' curves of more recent innovations when compared to earlier diffusions.
  • The speed at which an activity moves across the profile graph (i.e. through its lifecycle) depends upon the activity.
  • None of these ideas are new. The nearest to new is company profile which I've been refining in the last year from earlier work (between '04-'07) and this refinement is simply a formalisation of already existing concepts. If you watched the video and thought, "that's new", then my only advice is be concerned.
  • On the question of science, the models presented (S-Curve, Profile) are part of a general hypothesis on the evolution of business activities. Whilst data exists, there is neither the volume of evidence nor independent observation to validate beyond this. Furthermore, whilst the models show some usefulness and can be falsified, they are not predictive (and hence this cannot be considered scientific but remains firmly within the field of philosophy). The reason for this is that in order to generate the graphs and avoid asymptotic behaviour, a definition of commodity is required. The consequence is that an activity can only be plotted in terms of relative historical position i.e. after it has become a commodity. This means all positions of activities which have not become a commodity are uncertain (as per one of the axes of the graph) and therefore approximations. The models do not create a crystal ball and the future is one information barrier we can't get past. Even though the new patterns of organisation are testable, it should always be remembered that fitness does not guarantee survival.

That's enough for now, I'll expand the topic sometime later.

Monday, July 19, 2010

OpenStack

There have been many attempts to create open source ecosystems around cloud computing over the last couple of years. Most have either not fully adopted the largest public ecosystem (EC2), not been truly open source (using an open core model instead) or lacked the experience of large scale cloud operations.

The recent announcement of OpenStack changes this. It is entirely open sourced technology for building and running a cloud, supported by an ecosystem of large companies and agencies (including NASA and Rackspace), with provision of the EC2 & S3 APIs and the experience of running a large cloud installation.

This is fantastic news. If you want my view on how this will turn out, well it's rather simple.

OpenStack's move further consolidates the ecosystem around EC2 / S3 which is not only good news for Amazon but also helps propel Rackspace's position as a real thought leader in this space. It's worth noting that the EC2 / S3 API might be supplanted over time, especially as OpenStack builds a marketplace of providers, unless Amazon becomes more open with it. The icing on the cake will be if Rackspace itself (which will use the OpenStack technology) provides the EC2 / S3 APIs, in which case the growth and consolidation around Rackspace's efforts and any providers of OpenStack will become immense.

This is also surprisingly good news for Eucalyptus if they move towards an entirely (or at least more) open approach. In such circumstances, the probability is we're going to end up with a straightforward "clash of the titans" between Eucalyptus and OpenStack to become the Apache of Cloud Computing.

Don't be surprised if Eucalyptus even go so far as to adopt some of OpenStack's work. Marten Mickos is an astute businessman and there are many ways they can turn this to their advantage. However, in general it's not a good time to be any other infrastructure cloud technology vendor, as Simon Crosby makes clear with his "VMWare Redwood = DeadWood" post.

VMWare's position in the infrastructure space is looking unsurprisingly shaky for the future but then they already know of the oncoming disruption, as they made clear with this interview. Why else do you think that VMWare has been busily acquiring into the platform space? RabbitMQ is also increasingly looking like a great purchase for them.

As for RedHat's cloud strategy - they must be feeling increasingly lonely, as if no-one wants to invite them to the party. On the other hand, this is good news for Ubuntu, because of both UEC (powered by Eucalyptus) and OpenStack's involvement with the Ubuntu community. Don't be surprised if Ubuntu launches a "powered by OpenStack" version.

Best of all, it's great for the end users as they will see real choice and further standardisation of a messy industry in the infrastructure space. Of course, the real beauty is that once this happens we can finally start consolidating and standardising the platform space.

Overall, I'm very bullish about OpenStack and its focus on the Amazon APIs. There is a long road ahead but this has potential.

Tuesday, July 13, 2010

From Slime Mold to Neurons

I wasn't going to write much about clouds, being focused on my new area of research but I could hardly let James' post go unchallenged.

Before I critique the post, I need to go through some basic genetics for those of you who are new to that subject.

DNA is the accepted means of providing genetic instructions used in the development and functioning of all known living organisms. There are exclusions, such as RNA viruses (which are considered not to be living organisms) and forms of non-DNA based inheritance from topology, methylation etc. (epigenetics).

DNA doesn't operate in isolation, for example the same DNA sequence in a human produces a multitude of specialised cells. It instead acts in combination with both the environment it exists within and the environments it has existed within. Hence it is more correct to say that DNA contains genetic information that influences the phenotype (characteristics) of an organism.

To keep things simple, I'll ignore the multitude of RNA types (from messenger to transport), the issues of expression, the terminology of genes and 3D geometry and take a few chunky liberties in the description of how DNA works.

In principle, DNA consists of a long double helix sequence of four basic nucleotides (the base pairs) known as C, G, A and T. Different sections of this sequence (referred to as genes) are transcribed and translated into protein structures which affect the operation of the cell. Each three letter word (a codon) of the genetic sequence (i.e. CGT or GAT, giving 64 possible combinations) is translated to an amino acid (of which there are 22 standard).
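For the programmers reading, a toy sketch of that translation step might help. It's purely illustrative (written in TypeScript, not anything biological), it ignores RNA, strands and start/stop codons, and it only lists a handful of the 64 codons:

// Tiny fragment of the standard genetic code (DNA coding-strand form).
const codonTable: Record<string, string> = {
  ATG: "methionine", TGG: "tryptophan", TTT: "phenylalanine",
  AAA: "lysine", GGC: "glycine", // ...the remaining codons omitted
};

// Read the sequence three letters (one codon) at a time and map each codon to an amino acid.
function translate(dna: string): string[] {
  const aminoAcids: string[] = [];
  for (let i = 0; i + 3 <= dna.length; i += 3) {
    aminoAcids.push(codonTable[dna.slice(i, i + 3)] ?? "unknown");
  }
  return aminoAcids;
}

// translate("ATGTGGAAA") -> ["methionine", "tryptophan", "lysine"]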

The entire complexity of life is built upon such simple subsystems which in turn are part of ever more complex systems - cell structures that are part of cells that are part of organs etc. Without this component structure, the level of complexity in living organisms would not have been feasible. It's worth noting that the agility of complex structures to evolve is dependent upon the organisation of their subsystems.

So, what has this to do with cloud?

Well, if you take an example such as Amazon's Web Services, the complexity of the many systems that users have developed with cloud services is based upon the provision of simple, standard subsystems for storage, compute resources and networks.

There is some limited variability in the types of subsystems (for example, the size of Amazon instances) and the introduction of the new Cluster Compute Instance, but these are the genetic analogy to amino acids, which are then used to build more complex protein structures. Your deployment scripts (whether you use a system such as RightScale or another) are your DNA, which is then transcribed and translated into the deployment of basic instances to create the complex structures you require.

So, back to James' post. My objection to the post is that whilst you, as a user, can create a slime mould or a neuron or a million other cellular analogies with these basic components, the key is how YOU combine these common and well defined (i.e. commodity-like) components.

James, however, implies in his post that we need to see alternative cloud models, not just the "slime mold model cloud" but "more complex topologies" with the "emergence of more topologically calibrated and therefore feature rich clouds". In principle he is arguing for more configuration of these basic subsystems.

Whilst I agree that some additional basic subsystems (e.g. the cluster compute instance) are needed, I'd argue against the principle of wide ranging diversity in the underlying subsystems. Whilst such a richness of diversity does create benefits for technology vendors - which company hasn't fallen for the "competitive advantage of customizing a CRM system" gag - it will not create the "wealth" and diversity in higher order user created systems. Instead it will lead to a grindingly slow sprawl that creates further lock-in issues, moves us away from competitive marketplaces and ends up with users spending vastly more time building and configuring stuff which really doesn't matter.

There are always edge cases, however in general the range of subsystems we require are fairly limited and it's from these we can build all the different type of systems (or cells) we want.

If there is anything that should be learned from biological analogies, it is from such "modest entities", such simple subsystems that complexity and diversity is created. We've learned this lesson throughout time, from the electronic to the industrial revolution to the works of Vitruvius.

Unfortunately, as the old adage goes - "the one thing you learn from history, is we never learn from history".

One final note: analogies between cloud computing and biological systems are generally weak at best - my above example is no exception. I use it purely to continue in the same spirit as the original discussion and to try and highlight the core issue of diversity in the subsystems vs diversity in what is built with stable subsystems. I don't recommend comparing cloud computing to biology, unless you want endless arguments.

One very final note: computer models are based on simple arithmetic and hence are bound by Gödel's incompleteness theorems, never being both complete and certain. As activities provided through software tend towards being ubiquitous and well defined, they will tend towards being a good enough component, like a defined brick with standardised interfaces. The components themselves will have some inherent non-linear qualities (i.e. the halting problem) which is why the design for failure paradigm is so important.

Biological components are also only linear at a superficial level [e.g. of interfaces such as this codon encoding for this amino acid or a specific cell structure having certain functions or a specific cell type having a defined function] and on an individual level [behind the interface] they are highly non-linear and cannot be described by simple arithmetic means nor modelled with certainty by a computer. For these reasons, biological systems have evolved highly complex regulatory systems (such as the human immune system) which even today we barely understand. However, we're acutely aware that a major function of it is to control rogue cells. This level of complexity is far beyond the capabilities of modern computing and is also filled with numerous information barriers (the uncertainty principle is only one of many) which prevent us from achieving anything more than approximation.

However there are many useful concepts in biological systems (Red Queen, Ecosystems, Co-evolution etc) along with certain derived concepts, such as design for failure which have value in the computing world - just be careful on pushing the analogies too far.

--- Update 23 April 2014
Added [emphasis] to clarify certain points

Wednesday, June 02, 2010

A life less cloudy

For 18 months I ran the cloud strategy at Canonical. It was a real pleasure to work with some amazing people and play a small part in Canonical's journey. Today, Ubuntu has become the dominant cloud operating system and leads the field in the provision of hybrid clouds, combining public use on Amazon EC2 with private cloud technology.

However, I've been in the cloud space for five years, I've got my long service medal and it's time to scratch another itch but I thought I'd wrap up with some general thoughts.

Definition of Cloud
Given that 200 years after the industrial revolution we still can't agree on a good enough definition of that, it was never likely the industry was going to agree on cloud. The problem with cloud is that it's not a thing but a transition of certain IT activities from a product to a more service based economy provided through large scale utilities. This change was predicted by Douglas Parkhill in his 1966 book, The Challenge of the Computer Utility. He made the comparison to the electricity industry and talked of a future of public, private, community and government utilities providing computer resource online, on demand, with elastic supply and charged on a utility basis. These characteristics permeate cloud today but just don't try and get everyone to agree.

Why now?
A number of factors were needed before this change could occur. Firstly, you needed the concept which Parkhill kindly provided us with. Secondly, you needed the technology to achieve this service provision but we've had that for the best part of a decade. Thirdly, you needed those IT activities to be well defined and ubiquitous enough to be suitable for the volume operations needed to support a utility business. This has also happened over the last decade.

Lastly, you needed a change in attitude and a willingness of business to adopt these new models of provision. This change in business attitude started with Strassmann in the 1990s when he pointed out that spending on IT amounted to little more than an arms race with dubious links to any value created. Nick Carr developed these concepts further and this wisdom has now percolated throughout the business world.

Is everything suitable for the cloud?
The answer to this is "it depends". To understand why, we must first appreciate that IT consists of many ubiquitous and well defined activities that are suitable for cloud provision. However, IT also contains many new activities that are neither ubiquitous nor well defined and hence aren't suitable for provision as cloud services. This of course doesn't mean that such novel activities can't be built on cloud services.

There is a difference between that which can be provided as a cloud service and that which can be built upon cloud services. However, don't get comfortable yet, because activities have a lifecycle. Business activities undergo a constant transition from innovation to commodity. In short, that which can be built upon cloud services will eventually become that which is provided as cloud services. To make matters worse, this is all connected. Activities (even practices) are connected together, they're all undergoing transition from one state to another and this is happening in your business. You can actually map this out (something we did at Canonical) and use it to your advantage but yes, it's often a complex mess.

Benefits & Risks.
The benefits and risks of cloud have been listed countless times before so I'll avoid going through these again and just point out some highlights.

First, cloud is fundamentally about enabling and increasing rates of user innovation (a combination of creative destruction and componentisation). Don't get bogged down into the cost saving arguments because whilst cloud will increase efficiency, a number of factors will explode consumption.

Second, you don't have a choice over cloud. Your company is competing in an ecosystem with others who will adopt cloud computing. The benefits of cloud will create pressure for you to adopt in order to remain competitive (this is the "Red Queen Hypothesis").

Third, you do have a choice over how to implement cloud. The hybrid model of public & private service is a standard supply chain trick of balancing benefits vs risks. Expect lots of this and ignore anyone who tells you that private cloud isn't cloud computing.

Fourth, it's not about virtual data centres. This one is a tough nut to crack since lots of people have spent vast sums on virtual data centres and they want them to be infrastructure clouds. The problem is simply one of economics. VDCs are based upon concepts of resilience which have their origin in the physical product world and come with a wide range of overheads. Infrastructure clouds are based upon the economics of volume operations with vast numbers of cheap & reliable enough VMs and resilience created through design for failure at the management layer.

Fifth, providers ain't your buddies. In a commodity world of utility services, the market, the exchanges, brokers, standardisation and assurance bodies (providing price vs QoS comparisons) are your buddies. Whilst some providers get this, others will try and convince you of their innovative cloud features. These are the enemies of portability & easy switching and the fastest way to end up paying more than you need.

Sixth, security, auditing and governance have to change in the cloud world and it's not going to be the same as the perimeterised, defence in depth approach of old.

There's a whole host of topics to talk about from the future formation of exchanges, the creation of rating agencies, the introduction of insurance and derivative instruments, the componentisation effects of the platform space, the growth in importance of orchestration, the …. but this has been done to death for the last five years.

I'll mention one last thing. The secret to Ubuntu's success in the cloud is execution.

We understood our environment and rather than fighting battles which couldn't be won, Canonical chose to adopt a dominant future approach in the market, provide users with choice and real technology and then combine this with a clear story. Hence Canonical launched a hybrid cloud strategy, making it easy for users to build with Ubuntu in the public cloud computing space combined with simple and easily installed technology for creating private clouds that matched the same APIs.

By combining these approaches, management tools could be used across both public and private clouds with machine images that could run in both environments. Users had their first real opportunity to exploit a hybrid cloud environment, choosing for themselves the right balance of public and private with an open source technology that limited lock-in and took advantage of both the Ubuntu and the Amazon EC2 ecosystems. Since its launch in April '09, this approach has gone from strength to strength and has created a beach-head for Canonical to extend into the orchestration, platform and marketplace fields. This is all part of our battle plan and it should be interesting to see how it turns out.

However, the strategy wasn't new; what was critical was the way in which Canonical's excellent engineers achieved it.
Despite the various prognostications on the disruptive nature of cloud, it's old hat. The strategies around cloud were formulated many, many years ago. The concepts and ideas around cloud, orchestration, platforms, the economics, the barriers to adoption and how to play in this game are all well known. Even the map of the landscape I used in Canonical to help us understand where we needed to attack was based upon an earlier version from Fotango. Nothing is that new.

If you're looking for that great strategy to bring your company into the cloud, be aware that whatever you think of is almost certainly being done by others and has been contemplated to the nth degree by many more. Today, success in cloud is all about partnerships, execution, tactical moves and timing in a great game of shogi that is already well advanced.

The time for strategists in the cloud space is over and it's all about how the game is played.

Don't get me wrong, the game is still an exciting place. There are all manner of tactical plays, twists and turns and efforts to help standardise and form this future industry. However, strategy is my passion; it's where my heart lives.

Which is why I have joined CSC's Leading Edge Forum to pursue a new challenge at the very centre of change. There are a number of (what I consider important) experiments I need to run on organisational change and this is one of those rare opportunities I'll get to run them. Change and I have some unfinished business.

Saturday, May 01, 2010

VMForce, Zimki and the cloud.

Before discussing VMForce, I first want to remind people of the tale of Zimki.

Zimki was one of the first platform as a service offerings, initially built and released in 2005 under the name libapi (liberation API).

With Zimki, a user could sign onto this hosted service and create entire applications (including all client and server side components) in one language - JavaScript. You developed inside the platform, consuming the services it provided through APIs, and the user had no need to be aware of physical machines. The service was charged on a utility basis covering JavaScript operations, storage and network, and billing could be broken down to the individual JavaScript function. Billing was calculated in a parallel mechanism (think network taps) and didn't interfere with any operations.

The use of JavaScript as the base language was made for several reasons. First, it was widely known. Second, it had been contained within the browser for almost a decade. Thirdly, with one common language there was a reduced potential for translation errors between client and server side code. Lastly ALL communication was AJAX (i.e. asynchronous) and objects were passed as JSON.

The platform provided additional primitives to the language for storage of objects (data) through a NoSQL object store, along with templating systems and simple conversion of functions to web services (APIs). Building a system in Zimki was simple with entire applications being written and released in a matter of hours. However, this wasn't some sort of magic but instead a normal consequence of componentisation and a standardised platform delivered as a service.

The entire platform had an extensive API set for its management, monitoring and development which was used both by the web based development tools and the local development environment. All user written functions could be exposed as an API by adding zimki.publishPath([path], [function name]) to your code base. Since all code and data were stored as objects, moving entire applications from one hosted Zimki environment to another was relatively trivial and was demonstrated several times in '06/'07. You could even move high load functions from one system to another.
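To make that concrete, here's a rough sketch of what publishing a function looked like. Only the publishPath(path, function name) call comes from the description above; the handler, the path and the declaration of the platform-provided zimki object are illustrative guesses (written as TypeScript-flavoured JavaScript, not recovered Zimki code):

// Assumed declaration: zimki was provided by the platform, not imported by the user.
declare const zimki: { publishPath: (path: string, functionName: string) => void };

// A user-written server side function; objects moved around as JSON.
function greet(params: { name: string }) {
  return { message: "hello " + params.name };
}

// One extra line exposes the function as a web-callable API (hypothetical path and name).
zimki.publishPath("/greet", "greet");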

The entire system had also been constructed to allow for the rapid development of shared code bases & components amongst its users. Zimki was far from perfect but it was highly usable and was rapidly growing.

During 2006, it was announced that Zimki would be entirely open sourced in 2007 to enable other companies to run their own Zimki installations and for other providers to set-up. The purpose of this action was to create a competitive marketplace of Zimki providers (multiple public sources) as well as enabling hybrid clouds (the use of public and private provision).

The system was going to be GPLv3'd precisely because of the "SaaS" loophole. The intention was to allow multiple providers to make operational improvements to the code base for reasons of service competition whilst ensuring portability through a separately established assurance authority with a trademarked compliance stamp. As part of this, an exchange was to be established.

The use of open source, extensive APIs (covering management, development and monitoring), all objects and code freely portable and an assurance authority was specifically designed to create a competitive marketplace.

Everything was looking good. Zimki was backed by a well resourced company, Fotango, for which I was the acting CEO. We'd recorded 16 highly profitable quarters, had significant cash reserves and were ready to take on this market. Unfortunately, the parent company Canon had reasonably decided that this utility computing world was not core to its objectives and it had a different focus. The consequence of this was that Zimki was never open sourced, the service was terminated and the company outsourced.

Had Canon taken another path then it could well have become one of the largest cloud providers in today's market, rivaling those of later entrants such as Google (with AppEngine), Microsoft (with Azure) and more recently VMForce. But that's innovation for you, it's a gamble and outcomes are uncertain. Bizarrely, the termination of Zimki reinforced the importance of portability and a choice of providers in the PaaS space.

Probably the most important lesson learned from Zimki was that lock-in can be created in multiple forms, including lack of access to code & data, high exit costs, additional services and management tools. The only viable way of preventing this and creating a marketplace with competition in service provision, is if the entire service is open sourced and built with the ideas of portability from the beginning.

So five years later, we have the announcement of VMForce providing a PaaS offering based around Java. Obviously it will provide a much faster rate of development and deployment (a normal consequence of componentisation) along with all the other benefits of a cloud service. This, however, isn't anything new or pioneering; it's a consequence of a standardised platform, which we showed back in 2006. Unfortunately the system won't be open sourced, it runs on a proprietary platform and there exist many additional services around it. These are all ample opportunities for lock-in.

So we get the standard benefits and we get lock-in, when we could just get the benefits. Unfortunately this won't happen and we won't see freely competitive marketplaces until the underlying reference models (i.e. the entire system) are open sourced and people realise that open standards are necessary for portability but not sufficient for it.

We knew this in 2005, nothing has altered my opinion since then.

Instead, I expect we will hear lots of open talk without the essential ingredient of actually being open sourced. So my thoughts on the VMForce announcement, nothing new and the same old problems. I'm unimpressed so far but at least it gives us another Java platform to play with.

For reference, this is an old Zimki Presentation from 2006, with the talk notes included as text. There are a couple of flaws in the concepts which were later refined, but the basics are all there.

Monday, April 12, 2010

Use cloud and get rid of your sysadmin.

Following on from my Cloud Computing Myths post.

The principal argument behind cloud getting rid of sysadmins is one of "pre-cloud a sysadmin can manage a few hundred machines, in the cloud era with automation a sysadmin can manage tens of thousands of virtual machines". In short, since system admins will be able to manage a two orders of magnitude greater number of virtual machines, we will need fewer of them.

Let's first be clear about what automation means. At the infrastructure layer of the computing stack there is a range of systems, commonly known as orchestration tools, which allow for basic management of a cloud, automatic deployment of virtual infrastructure, configuration management, self-healing, monitoring, auto-scaling and so forth. These tools take advantage of the fact that in the cloud era, infrastructure is code and is created, modified and destroyed through APIs.

Rather than attempting to create specialised infrastructure, the cloud world takes advantage of a bountiful supply of virtual machines provided as standardised components. Hence scaling is achieved not through provision of an ever more powerful machine but deployment of vastly more standardised virtual machines.

Furthermore, the concept of a machine also changes. We're moving away from the idea of a virtual machine image for this or that, to one of a basic machine image plus all the run time information you require to configure it. The same base image will become a wiki, a web server or part of an n-tier system.

All of these capabilities allow for more ephemeral infrastructure, rapidly changing according to need with rapid deployment and destruction. This creates a range of management problems and hence we have the growth of interest in orchestration tools. These tools vary from specifically focused components to more general solutions and include Chef, ControlTier, CohesiveFT, Capistrano, RightScale, Scalr and the list goes on.

A favourite example of mine, simply because it acts as a pointer towards the future, is PoolParty. Using a simple syntax for describing infrastructure deployment, PoolParty synthesises the core concepts of this infrastructure change. For example, deploying a system no longer means a long architectural review and planning process, an RT ticket requesting some new servers with an inevitable wait, or the installation, racking and configuration of those servers followed by change control meetings.

Deploying a system becomes in principle as simple as :-

Pool "my_application" do
Cloud "my_application_server" do
Using EC2
Instance 1...1
Image_id "xxxxx"
Autoscale
end

Cloud "my_database_server" do
Using EC2
Instances 1...1
Image_id "xxxxx"
end

end

It is these concepts of infrastructure as code and automation through orchestration tools when combined with a future of computing resources provided as larger components (pre-built racks and containers) which have led many to assume that cloud will remove the roles of many sysadmins. This is a weak assumption.

A historical review of computing resource usage shows that it is price elastic. In short, as the cost of providing a unit of compute resource falls, demand increases, leading to today's proliferation of computing.

Now, depending upon who you talk to, the inefficiency of computer resources in your average data centre runs at 80-90%. Adoption of private clouds should (ignoring the benefits of using commodity hardware) provide a 5x reduction in price per unit. Based upon historical precedents, you could expect this to be much higher in public cloud and lead to a 10-15x increase in consumption as the long tail of applications that companies desire becomes ever more feasible.

Of course, this ignores transient applications (those with a short life time such as weeks, days or hours), componentisation (e.g. self service and use of infrastructure as a base component), co-evolution effects and the larger economies of scale potentially available on public providers.

Given Moore's law, the current level of wastage, a standard VM / physical server conversion rate, greater efficiencies in public provision, increasing use of commodity hardware and the assumption that expenditure on computing resources will remain flat (any reductions in cost per unit being compensated for by increases in workload), it is entirely feasible that within 5-7 years these effects could lead to a 100x increase in virtual infrastructure (i.e. the number of virtual servers compared to current physical servers). It's more than possible that in five years' time every large marketing department will have its own 1,000 node Hadoop cluster for data processing of consumer behaviour.
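As a loose back-of-envelope sketch of how those multipliers could compound (every figure below is an assumption lifted from the surrounding paragraphs, not data):

// Rough sketch only: illustrative numbers, not a forecast.
const currentPhysicalServers = 2000;   // e.g. the company mentioned in the P.P.S. below
const vmsPerPhysicalServer = 8;        // assumed VM : physical server conversion rate
const demandGrowth = 12;               // within the 10-15x consumption increase suggested above

const futureVirtualServers = currentPhysicalServers * vmsPerPhysicalServer * demandGrowth;
// = 192,000 virtual servers, i.e. roughly the 100x ("200,000") figure discussed here.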

So, we come back to the original argument which is "pre-cloud a sysadmin can manage a few hundred machines, in the cloud era with automation a sysadmin can manage tens of thousands of virtual machines". The problem with this argument is that if cloud develops as expected then each company will be managing two orders of magnitude more virtual machines which means there'll be at least as many sysadmins as there are today.

Now whilst the model changes when it comes to platform and software as a service (and there are complications here which I'll leave to another day), the assumption that cloud will lead to fewer system administrators is another one of those cloud myths which hasn't been properly thought through.

P.S. The nature of the role of a sysadmin will change and their skillsets will broaden, however if you're planning to use cloud to reduce their numbers then you might be in for a nasty shock.

P.P.S. Just to clarify, I've been asked by a company which runs 2,000 physical servers whether this means that in 5-7 years they could be running 200,000 virtual servers (some of which will be provided by private and most on public clouds, ideally through an exchange or brokers). This is exactly what I mean. You're going to need orchestration tools just to cope and you'll need sysadmins to be skilled in these and managing a much more complex environment.

Friday, April 09, 2010

Common Cloud Myths

Over the last three years, I've spent an increasingly disproportionate amount of my time dealing with cloud myths. I thought I'd catalogue my favourites by bashing one every other day.

Cloud is Green

The use of cloud infrastructure certainly allows for more efficient provision of infrastructure through matching supply to demand. In general :-

1. For a traditional scenario where every application has its own physical infrastructure, each application requires a capacity of compute resources, storage and network which must exceed its maximum load and provide suitable spare capacity for anticipated growth. This situation is often complicated by two factors. First, most applications contain multiple components and some of those components highly under-utilise physical resources (for example load balancing). Second, due to the logistics of provisioning physical equipment, the excess capacity must be sufficiently large. At best, the total compute resources required will significantly exceed the sum of all the individual peak application loads and spare capacity.

2. The shared infrastructure scenario covers networks, storage and compute resources (through virtualisation). Resource requirements are balanced across multiple applications with variable loads and the total spare capacity held is significantly reduced. In an optimal case the total capacity can be reduced to a general spare capacity plus the peak of the sum of the application loads, rather than the sum of the individual peaks (a small numeric sketch follows point 4 below). Virtual Data Centres, provisioning resources according to need, are an example of shared infrastructure.

3. In the case of a private cloud (i.e. a private compute utility), the economics are close to those of the shared scenario. However, there is one important distinction in that a compute utility is about commodity infrastructure. For example, virtual data centres provide highly resilient virtual infrastructure which incurs significant costs, whereas a private cloud focuses on rapid provision of low cost, good enough virtual infrastructure.

At the nodes (the servers providing virtual machines) of a private cloud, redundant power supplies are seen as an unnecessary cost rather than a benefit. This ruthless focus on commodity infrastructure provides a lower price point per virtual machine but necessitates that resilience is created in the management layer and the application (the design for failure concept). The reasoning for this is the same reasoning behind RAID (redundant array of inexpensive disks). By pushing resilience into the management layer and combining larger numbers of lower cost, less resilient hardware you can actually enable higher levels of resilience and performance for a given price point.

However, the downside is that you can't just take what has existed on physical servers and plonk it on a cloud and expect it to work like a highly resilient physical server. You can however do this with a virtual data centre.

This distinction and focus on commodity provision is the difference between a virtual data centre and a private cloud. It's a very subtle but massively important distinction because whilst a virtual data centre has the benefit of reducing educational costs of transition in the short term (being like existing physical environments), it's exactly these characteristics that will make it inefficient compared to private clouds in the longer term.

4. In the case of a public cloud infrastructure (a public compute utility), the concepts are taken further by balancing variable demands of one company for compute resources against another. This is one of many potential economies of scale that can lead to lower unit costs. However unit cost is only one consideration here, there are transitional and outsourcing risks that need to be factored in which is why we often use hybrid solutions combining both public and private clouds.
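A tiny numeric sketch of the difference between points 1 and 2 above might help (all load figures are invented purely for illustration): with dedicated infrastructure each application is sized for its own peak, whereas shared infrastructure only needs to cover the peak of the combined load.

// Hourly load samples for three hypothetical applications.
const appLoads = [
  [10, 80, 20, 10],   // app A peaks at 80
  [60, 10, 15, 20],   // app B peaks at 60
  [15, 20, 70, 25],   // app C peaks at 70
];

// Dedicated infrastructure: capacity must cover the sum of the individual peaks.
const sumOfPeaks = appLoads
  .map(load => Math.max(...load))
  .reduce((a, b) => a + b, 0);                                 // 80 + 60 + 70 = 210

// Shared infrastructure: capacity only needs to cover the peak of the combined load.
const peakOfSum = Math.max(...appLoads[0].map((_, hour) =>
  appLoads.reduce((total, load) => total + load[hour], 0)));   // max(85, 110, 105, 55) = 110

// 210 vs 110 units before spare capacity is added - the pooling gain described in point 2.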

The overall effect of moving through these different stages is that the provision of infrastructure becomes more efficient and hence we have the "cloud is green" assumption.

I pointed out, back in 2008 at IT@Cork, that this assumption ignored co-evolution, componentisation and price elasticity effects.

By increasing efficiency and reducing cost for provision of infrastructure, a large number of activities which might once not have been economically feasible become economically feasible. Furthermore, the self-service nature of cloud not only increases agility by enabling faster provision of infrastructure but accelerates user innovation through provision of standardised components (i.e. the infrastructure equivalent of a brick). This latter effect can encourage the co-evolution of new industries in the same manner that the commoditisation of electronic switching (from the innovation of the Fleming valve to complex products containing thousands of switches) led to digital calculators and computers which in turn drove further commoditisation and demand for electronic switching.

The effect of these forces is that whilst infrastructure provision may become more efficient, the overall demand for infrastructure will outstrip these gains precisely because infrastructure has become a more efficient and standardised component.

We end up using vastly more of a more efficient resource. Lo and behold, cloud turns out not to be green.

The same effect was noted by William Stanley Jevons in the 1850s, when he "observed that England's consumption of coal soared after James Watt introduced his coal-fired steam engine, which greatly improved the efficiency of Thomas Newcomen's earlier design"

Thursday, March 25, 2010

Cloud computing made simple

It has been a truly amazing year since we embarked on our "cloud" journey at Ubuntu, hence I thought I'd review some of the highlights.

We started the journey back in 2008 when Mark Shuttleworth announced our commitment to providing cloud technology to our users. At that time, the cloud world was already in a state of growing confusion, so we adopted an approach of :-

  • make the cloud simple.
  • focus on one layer of the computing stack (infrastructure) to begin with.
  • give our users real technology not promises.
  • help drive standardisation (a key requirement of this shift towards a service world) by adopting public defacto standards.
  • work with leading partners in this growing industry.
  • provide open source systems to avoid lock-in issues.
  • actively work to mitigate risks and concerns over cloud by giving our users options.

Hence, in April '09 as part of Ubuntu 9.04 we launched our hybrid cloud strategy.

Our approach was based around the adoption of Amazon EC2 / S3 & EBS as the public defacto standard rather than the creation of some new APIs (there's too many already).

We provided Ubuntu images for use on Amazon EC2 (public cloud) and the technology to build your own private cloud (known as Ubuntu Enterprise Cloud) that matched the same APIs of Amazon. We also added management tools which could cross both public and private domains because of our adoption of a standard API set.

For 9.10 we significantly improved the robustness and ease of setting up a private cloud (Mark built his own several node system in under 25 mins from bare metal). We provided the base for an application store, improved the management capabilities of Landscape and the features of UEC grew extensively. We also launched training, consultancy and support services for the cloud and a JumpStart program to help companies move into the cloud quickly.

During this time we've worked closely with many partners, I'll mention a few (more details can be found on the Ubuntu Cloud site) :-

  • Eucalyptus whose open source technology we adopted into the distribution as a core part of Ubuntu Enterprise Cloud.
  • Intel's Cloud Builder program to provide best practices on how to create a private cloud using UEC. I'd strongly recommend reading the whitepaper.
  • RightScale & CohesiveFT to provide best of breed public management tools alongside our own Landscape system.
  • Dell, who will offer a range of pre-built clouds using a series of ‘blueprint’ configurations that have been optimised for different use cases and scale. These will include PowerEdge-C hardware, UEC software and full technical support.

In one year, we've made cloud simple for our users. We've brought our "Linux for Humans" philosophy into the cloud by getting rid of complexity, confusion and myth.

If you want to get into cloud, then we offer :-

  • Simple Choices: You can have either private, public or hybrid (i.e. public + private) infrastructure clouds.
  • Simple Setup: If you want to build a private cloud, then Ubuntu makes the set-up ridiculously easy. You can be up and running with your own cloud in minutes. Along with our community documentation covering CD installation and more advanced options, you can also find detailed information on how to build a cloud through Intel's cloud builder program. However, if building a cloud still sounds too daunting then Dell offers pre-built, fully configured and supported private clouds.
  • Simple Management: You can use the same tools for both your private and public clouds because we've standardised around a common set of APIs. There's no need to learn one set of systems for private and another for public.
  • Simple Bursting: Since we provide common machine images which run on both public and private cloud offerings combined with standardised APIs, then the process of moving infrastructure and combining both private and public clouds is ... simpler.
  • Enterprise Help: If you still need help then we offer it, including 24x7 support and a jumpstart program to get your company into the cloud.
  • Open source: UEC, the Ubuntu machine images and all the basic tools are open sourced. We're committed to providing open source systems and following through on a genuine open source approach i.e. the system is open source and free and so are all the security patches and version upgrades.

The results of this year have been very encouraging. We recently estimated that there are now over 7,000 private clouds built with UEC, however with 7% of users in our annual Ubuntu User Survey saying that they have built a UEC cloud, the true figure might be very much higher. It was great to hear that almost 70% of users felt Ubuntu was a viable platform for the cloud but there were several surprising statistics including :-

  • 54% were using the cloud in some form or another (software, platform or infrastructure). However, given the fuzziness of the term cloud, this can only be seen as a signal of intent to use online services.
  • Only 10% had used public cloud providers (such as Amazon) for infrastructure. What was quite remarkable was that given the relatively recent availability of UEC, almost as many people had built private clouds as had used public cloud providers.
  • 60% felt that the use of private cloud was more important to their organisation, 25% thought that both private and public was of equal importance whilst only 15% felt that public cloud was the most important.

Whilst this survey was targeted at Ubuntu users, the data we receive from external sources suggests that Ubuntu is becoming the dominant operating system for consumers of the infrastructure cloud space. Even a simple ranking of search terms using Google's Insight around cloud computing shows how significant a player Ubuntu is.

Whilst this is great news, what really pleases me is that we're making cloud simple and real for organisations and listening to what they need. We're getting away from the confusion over cloud, the tireless consultant drivel over whether private cloud is cloud computing and the endless pontifications and forums debating vague futures. Instead, we're giving real people, real technology which does exactly what they want.

Over the next year we're going to be tackling issues around creating competitive marketplaces (i.e. more choice for our users), simplifying self-service IT capabilities and orchestration and providing a wide range of open source stacks and platforms to use in the cloud.

We're going to continue to drive down this path of commoditisation by providing common workloads for the cloud (the same as we've been doing for server) and helping businesses to standardise that which is just cost of doing business.

Regardless of any attempts to badge "cloud" as just a more advanced flavour of virtualisation or describe it as "not real yet" by various late vendors, we will be doing our best to bust the various "cloud" myths and push the industry towards competitive marketplaces of computer utilities through defacto standardisation.

Commoditise! Commoditise! Commoditise!

I'm also delighted about our partners' successes, with RightScale passing the million server mark, Amazon's continual growth and leadership with the introduction of spot markets, Dell's outstanding move to make cloud mainstream, Intel's push to make cloud easier & Eucalyptus' continued adoption and the appointment of Marten Mickos as CEO.

P.S. if you want to keep track of what's happening with Ubuntu in the cloud, a good place to start is our cloud blog or following our twitter feed, ubuntucloud.

Sunday, March 14, 2010

Cloud rant

For the last two posts, I've had a pop at the term "cloud", so I'd now like to explain my reasoning in more detail.

First, as always, some background information.

What we know
The transition from a product to a services world in I.T. is very real. It's happening now because of the confluence of concept, suitability, technology and a change in business attitude. Overall it's driven by the process of commoditisation and it is the very ubiquity of specific I.T. activities that makes them potentially suitable for volume operations. I say potentially because volume operations is only viable for activities which are both well defined and ubiquitous.

Fortunately ubiquity has a relationship to certainty (or in other words how well understood, defined and therefore certain an activity is) which is why these activities are suitable for provision on the basis of volume operations through large computer utilities.

Lifecycle
For many years I've talked about lifecycle and the evolution of business activities. Any activity goes through various stages from its first innovation (as per the use of computer resources in the Z3 in 1941) to custom built examples to products which describe the activity. Usually, the activity ends up becoming a ubiquitous and well defined commodity (assuming there are no natural limits or constraints).

During this lifecycle, as the activity becomes more defined in the product stage, service models can often arise (for example the rental model of the managed hosting industry for computing infrastructure or the early subscription like models of electricity provision). As the activity becomes more of a commodity the existence of these service models tends to lead to the rise of utility services (as with electricity provision).

An essential requirement for the growth of the utility model (and the type of volume operations necessary to support it) is that consumers view that what's provided is a commodity. It's little more than a cost of doing business and a standardised version is good enough.

The latter is why a change of attitude is critical in development of the utility service model. If, for example, consumers still view the activity as creating some form of competitive advantage (whether true or not), they are unlikely to adopt a utility model of standard services.

The change in attitude
Over the last decade the attitude of business towards certain I.T. activities has changed dramatically. Recently, a group of 60-odd CIOs & Architects highlighted that many of the I.T.-related activities they undertook were commonplace and well defined, particularly across their industry & geography.

Now that doesn't mean they did things the same way, quite the reverse.

Taking just one activity, ERP, all of these companies agreed that whilst they gained no competitive advantage from ERP, it was an essential cost of doing business. They also agreed that they all had their own customised processes for ERP which they invested heavily in. The shock was their agreement that these different processes provided no differential benefit. It was estimated that for this one activity alone, across this small group of companies, $600 million p.a. was spent maintaining differences which provided no value.

By reducing customisation through the provision of ERP as standardised services, each company would benefit significantly in terms of cost savings. Standardisation of processes, and the removal of the costs associated with customisation, is seen as one of the major benefits of the transition from an "as a Product" to an "as a Service" world.

Let's be clear here: "as a Service" was considered shorthand for "provision of a commodity activity through standardised services via a competitive marketplace of computer utilities". These companies were not looking for an outsourcing arrangement for a highly customised service tailored to their needs; they were looking for a commodity to be treated as a commodity.

The concepts of computer utilities offering elastic and infinite supply on demand, provision of activities through services and commoditisation are all tightly coupled.

The benefits & risks of "cloud"
The benefits of this shift towards service provision via large computer utilities have been discussed extensively for the last 40 years :-

  • economies of scale (volume operations)
  • focus on core activities (outsourcing to a service provider)
  • pay per use (utility charging)
  • increased consumer innovation (componentisation)

However, one critical benefit that gets missed is standardisation itself.

The risks associated with this change (ignoring the disruptive effect on the product industry) can be classified into transitional risks (related to the change in business relationship) and generic outsourcing risks. I've categorised these below for completeness.

Transitional Risks

  • Confusion over the new models.
  • Trust in the new providers.
  • Transparency of relationships.
  • Governance of these new models (including auditing, security & management).
  • Security of supply.

Outsourcing Risks

  • Suitability of the activity for service provision.
  • Vendor lock-in (& exit costs).
  • The availability of second sourcing options.
  • Pricing competition.
  • Loss of strategic control.

Transitional risks can be mitigated through standard supply chain management techniques. For example, with electricity we often combine both public and private sources of provision (a hybrid option). However, outsourcing risks require the formation of competitive marketplaces to be mitigated. Whilst the latter is a genuine concern for companies, even without these marketplaces the benefits of this change are still attractive.

The problem with the term "Cloud"
This change in I.T. is all about standardised service provision of commodity activities through a competitive marketplace of computer utilities. The notion of utility conjures up easily understood and familiar models.

Few would have a problem in understanding how access to a standardised form of electricity through a marketplace has allowed for a huge range of new innovations built upon consuming electricity. Few would have a problem in understanding how the providers themselves have sought new innovative ways of generating electricity. Few would ever consider electricity itself as a form of innovation, to most it comes from a plug and it is critical that it is standardised.

This is a really important point because our companies consist of value chains that are full of components which are evolving to become commodity and utility-like services. This commoditisation enables new higher order systems to be created and new businesses to form (e.g. electricity enabled radio and television) but it also destroys old businesses. The key here is that whilst commoditisation enables innovation it also destroys the past. The two are different.

The problem with the term "cloud", beyond being fuzzy, is that it is often used to describe this change in I.T. as something new and innovative. The term helps disguise a fundamental shift towards a world where the bits don't matter and it's all about services. The term allows for all manner of things to be called cloud, many of which have little to do with the standardisation of an activity and its provision through utility services.

You could easily argue the term is misleading as it encourages customisation and distracts the consumer from what should be their focus - standardised services through a competitive marketplace of computer utilities.

Alas, as I said we ALL have to use the term today because of its momentum.

At Ubuntu we focus on commodity provision of activities (common workloads), on providing our users with the technology to build a private computer utility (a private "cloud", as part of a hybrid strategy), on the adoption of the de facto standards of EC2 & S3, and we also provide all the technology as open source to encourage the formation of competitive markets.
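To make the hybrid point concrete, here is a minimal sketch (in Python, using the boto library) of how the same EC2-style client code can be pointed either at a public provider or at a private, EC2-compatible utility. It is not our official tooling; the private endpoint, port and path are placeholders typical of a Eucalyptus-style install and would need adjusting for a real deployment.

```python
# Minimal sketch: one EC2-compatible client, two possible utilities.
# The private endpoint details below are placeholders, not real values.
import boto
from boto.ec2.regioninfo import RegionInfo

def connect(target, access_key, secret_key):
    if target == "public":
        # Amazon EC2 - the de facto public endpoint.
        return boto.connect_ec2(access_key, secret_key)
    # A private utility exposing the same EC2 API (e.g. a Eucalyptus-based
    # install); port 8773 and the service path are the usual defaults.
    region = RegionInfo(name="private", endpoint="cloud.internal.example")
    return boto.connect_ec2(access_key, secret_key, region=region,
                            is_secure=False, port=8773,
                            path="/services/Eucalyptus")

# Because both ends speak the same de facto standard, the consuming code
# is identical regardless of where the instances actually run.
conn = connect("public", "MY_ACCESS_KEY", "MY_SECRET_KEY")
for reservation in conn.get_all_instances():
    for instance in reservation.instances:
        print("%s %s" % (instance.id, instance.state))
```

The point of the sketch is simply that a common, de facto API keeps the switching cost between the public and private halves of a hybrid strategy close to zero.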

We use the term "cloud" because it's what customers, analysts and others expect to hear. This of course doesn't stop us from explaining what is really happening and busting the "cloud" myths that exist.

Saturday, March 13, 2010

Is your cloud a poodle?

Since we're fond of replacing meaningful concepts such as commoditisation, lifecycle, categorisation and computer utilities with bland terms like "cloud", I thought I'd follow the trend on to its next logical conclusion - Poodle Computing.

The shift of I.T. activities from being provided "as a Product" to being provided "as a Service" through large computer utilities has an obvious next step - the formation of competitive marketplaces. These marketplaces will require standardisation of what is after all a commodity (i.e. ubiquitous and well defined enough to be suitable for service provision through volume operations) and the ability of consumers to switch easily between and consume resources across multiple providers (which in turn requires multiple providers, access to code & data, interoperability of providers and overall low exit costs).
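As a rough illustration of what "easy switching" means at the consumer's end, the sketch below uses Apache Libcloud (an open source abstraction library; the credentials are placeholders) so that the same consumer code runs against whichever provider sits behind the driver. This is only an assumption about how a consumer might insulate themselves, not a statement about how any marketplace will actually work.

```python
# Rough sketch of consumer-side provider switching via a common abstraction
# layer (Apache Libcloud). Credentials below are placeholders.
from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver

def list_running_nodes(provider, key, secret):
    # The calling code is identical whichever computer utility is behind
    # the driver - only the Provider constant changes.
    driver_cls = get_driver(provider)
    driver = driver_cls(key, secret)
    return [node.name for node in driver.list_nodes()]

# Switching provider becomes a one-line change for the consumer,
# e.g. Provider.EC2 today, some other Provider.* constant tomorrow.
print(list_running_nodes(Provider.EC2, "MY_KEY", "MY_SECRET"))
```

Low exit costs, in code terms, look like this: the provider choice is a parameter, not something baked into every call site.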

I won't bore you with the mechanics of this and the eventual formation of brokerages & exchanges, I covered this subject extensively in 2007 when I made my "6 years from now you'll be seeing job adverts for computer resource brokers" prediction.

However, in this future world of brokerages and fungible compute resources (or fungitility as I jokingly called it) the consumer will become ever more distanced from the source of provision. This will be no different to the many other forms of utilities where vibrant exchange markets exist and what the consumer purchases often has gone through the hands of brokers. You don't actually know which power station generated the electricity you consume.

So this brings me to the title of the post. As the consumer and the source become more distanced, it reminds me of Peter Steiner's cartoon "On the Internet, nobody knows you're a dog".

On that basis, what sort of dog flavour of computing resource will you be consuming?

By introducing the concept of "dog computing" to cover this "cloud of clouds" world (hey, they're both meaningless) then the marketing possibilities will become endless and a lot more fun.

I can see the conversation now, walking into a lean and mean sales organisation and saying to the CEO that they are using "Poodle Computing". Shouldn't they be using our brand new "Pitbull Computing" or at least upgrading to "Springer Spaniel"?

We could always call things what they are (computer utilities & competitive markets of computer utilities) but I suspect we will end up with "cloud of clouds", "cloud exchanges" and an OTC market of ominous sounding "cloudy futures".

Friday, March 12, 2010

What is Cloud?

Before we can discuss this term, a bit of history and background is needed.

Activities

All business activities undergo a lifecycle; they evolve through distinct stages including :-

  • the first introduction of a new activity (its innovation)
  • the custom built examples replicating this activity (the copying phase)
  • the introduction of products which provide that activity (the product stage, including numerous rounds of feature differentiation which are also unfortunately called product innovation)
  • the activity becoming more of a commodity (ubiquitous, well-defined and with no qualitative differentiation). In certain circumstances that commodity can be provided through utility services.

It should be noted that the characteristics of an activity change as it moves through its life-cycle. As a commodity it's of little strategic value (or differentiation) between competitors, whereas in its early stages it can often be a source of competitive advantage (a differential).

Information Technology

At any one moment in time, I.T. consists of a mass of different activities at different stages of their life-cycle. Some of those activities are provided through discrete software applications (an example might be ERP), other activities relate to the use of platforms (developing a new system using RoR or provisioning of a large database etc) whilst others relate to the provision of infrastructure (compute resource, storage, networks).

You can categorise these activities into a computing stack of infrastructure, platform and software. Of course you can go higher up the stack to describe the processes themselves and beyond, however for this discussion we will just keep it simple.

What's happening in IT today?

Many activities in I.T. that were once innovations but more recently have been provided as products (with extensive feature differentiation) have now become so ubiquitous and so well defined that they have become little more than a commodity that is suitable for service provision. You can literally consider that chunks of the "computing stack" are moving from an "as a Product" to an "as a Service" world.

This change is the reason why we have the "Infrastructure as a Service", "Platform as a Service" and whatever else "as a Service" industries. Of course, there are many higher order layers to the stack (e.g. processes) but any confusion around the "as a Service" term generally only occurs because we never used to describe these activities with the "as a Product" term.

Had we categorised the previous software industry in terms of "Software as a Product", "Platform as a Product" etc, then the change would have been more obvious.

Why now?

This change requires more than just activities being suitable for utility service provision. It also requires the concept of service provision, the technology to achieve this and a change in business attitude i.e. a willingness of business to adopt these new models. Whilst the concept is old (more on this later), and the technology has been around for some time (yes, it has matured in the last decade but that's about all), both the suitability and change of business attitude are relatively new.

Thanks to the work of Paul Strassman (in the 90's) and then Nick Carr (in the 00's), many business leaders have recognised that not all I.T. is a source of advantage. Instead much of I.T. is a cost of doing business which is ubiquitous and fairly well defined throughout an industry.

It was quite refreshing to recently hear a large group of CIOs, who all spent vast amounts of money maintaining highly customised CRM systems, comment that actually they were all doing the same thing. These systems provided no strategic value and no differential; in reality what they wanted was standardised, low cost services charged on the basis of actual consumption for what is essentially a cost of doing business. They also wanted this to be provided through a marketplace of service providers with easy switching between them.

This is quite a sea change from a decade ago.

The change from an "as a Product" to an "as a Service" world is happening today because we have the concept, technology, suitability and, most importantly, this changing business attitude.

An old Concept

The concept of utility service provision for I.T. is not new but dates back to the 1960's. Douglas Parkhill, in his 1966 book - "The Challenge of the Computer Utility" - described a future where many computing activities would be provided through computer utilities analogous to the electricity industry. These computer utilities would have certain characteristics; they would :-

  • provide computing resources remotely and online
  • charge for the use of the resources on the basis of consumption i.e. a utility basis
  • provide elastic & "infinite" supply of resources
  • benefit from economies of scale
  • be multi-tenanted

Douglas noted that these computer utilities would take several forms as per the existing consumption of other utilities. These forms included (but are not limited to) public, private & government utilities. He also noted that eventually we would see competitive markets of computer utilities where consumers could switch providers, consume resources across multiple providers (i.e. a federated use) and consume all manner of hybrid forms (e.g. private and public combinations).

One final note: the term utility means a metered service where the charge is based upon consumption. That charge might be financial or it could be in any other currency (e.g. access to your data).
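As a trivial, made-up illustration of that final point, utility charging reduces to a small calculation over metered consumption; the meters and rates below are invented purely for the example.

```python
# Trivial sketch of utility (metered) charging: the bill is purely a
# function of measured consumption. Rates and usage figures are invented.
RATES = {
    "instance_hours": 0.10,      # per hour of compute
    "storage_gb_months": 0.15,   # per GB stored per month
    "data_transfer_gb": 0.08,    # per GB transferred out
}

def utility_charge(usage):
    # No licence fee, no upfront capacity - charge only for what was used.
    return sum(RATES[meter] * amount for meter, amount in usage.items())

print(utility_charge({"instance_hours": 720,
                      "storage_gb_months": 50,
                      "data_transfer_gb": 120}))   # -> 89.1
```

The same structure holds whether the "currency" is money or, as noted above, something like access to your data.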

The Cloud Term

Between 1966 and 2007, the general school of thought grew to be :-

  • I.T. wasn't one thing. Many aspects of I.T. created little or no differential value and were simply a cost of doing business (Strassman, 90s)
  • There is a correlation between ubiquity of I.T. and its strategic value (differentiation). The more ubiquitous I.T. was, the less strategic value it created. (Nick Carr, '02)
  • I.T. activities could be categorised into rough groupings such as software, platform and infrastructure (the actual terms used have changed over time but this concept is pre-80's)
  • Certain I.T. activities would be provided through computer utilities as per other utility industries (Parkhill & McCarthy, 60's)
  • There were several forms that these computer utilities could take including public, private, government and all manner of combinations in between. (Douglas Parkhill, 1966)
  • We would see the formation of competitive marketplaces with switching and federation of providers.
  • These computer utilities had certain common characteristics including utility charging, economies of scale, elastic and "infinite" supply etc. (Douglas Parkhill, 1966)
  • Whilst all activities have a lifecycle which they evolve along through the process of commoditisation, the shift from an "as a Product" to an "as a Service" world would require several factors (i.e. the concept, the technology to achieve this, the suitability of activities for service provision and a change in business attitude).

Back between '05 and '07, there was a pretty crystal clear idea of what was going to happen :-

A combination of factors (concept, suitability, technology and a change in business attitude) was going to drive those I.T. activities which were common, well defined and a cost of doing business from being provided "as products" to being provided "as services" through large computer utilities. The type of services offered would cover different elements of the computing stack, there would be many different forms of computer utility (public, private & government) and eventually we would see competitive marketplaces with easy switching and consumption across multiple providers.

In '05, James Duncan, myself and many others were starting to build Zimki - a computer utility for the provision of a JavaScript based "Platform as a Service" - for precisely these reasons. The concepts of federation, competitive markets, exchanges and brokerages for service provision of a commodity were well understood.

Unfortunately in late '07 / early '08, the term "Cloud" appeared and the entire industry seemed to go into a tailspin of confusion. During '08, the "Cloud" term became so prevalent that if you mentioned "computer utility" people would tell you that they weren't interested but could you please tell them about "this thing called cloud".

So, what is Cloud?

The best definition for cloud today is NIST's. Using five essential characteristics (including elasticity, measured service etc), four deployment models (private, public, government etc) and three service models (application, platform, infrastructure), it nearly packages all the concepts of computer utility, the shift from product to services and the different categories of the computing stack into one overall term - "cloud".

In the process it wipes out all the historical context, trainwrecks the concept of a competitive marketplace with switching and federation, eliminates the principal idea of commoditisation and offers no explanation of why now. It's an awful, mechanistic definition which only helps you call something a cloud without any understanding of why.

However, that said, NIST has done a grand job of trying to clean up the mess of 2008.

In that dreadful year, all these well understood concepts of computer utilities, competitive marketplaces, the lifecycle of activities, categorisation of the computing stack and commoditisation were put in a blender, spun at 30,000 rpm and the resultant mishmash was given the name "cloud". It was poured into our collective consciousness along with the endless blatherings of "cloudy" thought leaders over what it meant (I'm as guilty of this as many others).

To be brutal, whilst the fundamentals are sound (commoditisation, computer utilities, the change from products to services etc), the term "Cloud" was nothing more than a Complete Load Of Utter Drivel. It's a sorry tale of confusion and a meaningless, generic term forced upon a real and meaningful change.

My passionate dislike for the term is well known. It irks me that for such an important shift in our industry, I have to use such a term and then spend most of my time explaining the fundamental concepts behind what is going on, why this change is happening and undoing the various "cloud" myths that exist.

Being pragmatic, I'm fully aware that this term has enough momentum that it's going to stay. Shame.