Monday, April 12, 2010

Use cloud and get rid of your sysadmin.

Following on from my Cloud Computing Myths post.

The principal argument behind cloud getting rid of sysadmins is one of "pre-cloud a sysadmin can manage a few hundred machines, in the cloud era with automation a sysadmin can manage tens of thousands of virtual machines". In short, since system admins will be able to manage two orders of magnitude more virtual machines, we will need fewer of them.

Let's first be clear what automation means. At the infrastructure layer of the computing stack there are a range of systems, commonly known as orchestration tools, which allow for basic management of a cloud, automatic deployment of virtual infrastructure, configuration management, self-healing, monitoring, auto-scaling and so forth. These tools take advantage of the fact that in the cloud era, infrastructure is code and is created, modified and destroyed through APIs.
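
As a minimal sketch of what "infrastructure through APIs" looks like in practice (using the Python boto library against EC2; the AMI id is a placeholder and credentials are assumed to be available in the environment), a virtual machine can be created and destroyed entirely in code :-

import boto

# connect to EC2 (credentials picked up from the environment)
conn = boto.connect_ec2()

# create a virtual machine from a base image - infrastructure as code
reservation = conn.run_instances('ami-xxxxx', min_count=1, max_count=1,
                                 instance_type='m1.small')
instance = reservation.instances[0]

# ... use it, then destroy it just as programmatically
conn.terminate_instances([instance.id])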

Rather than attempting to create specialised infrastructure, the cloud world takes advantage of a bountiful supply of virtual machines provided as standardised components. Hence scaling is achieved not through provision of an ever more powerful machine but through deployment of vastly more standardised virtual machines.

Furthermore, the concept of a machine also changes. We're moving away from the idea of a virtual machine image for this or that, to one of a basic machine image plus all the run time information you require to configure it. The same base image will become a wiki, a web server or part of an n-tier system.
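
A rough sketch of the same idea (again using the Python boto library; the base AMI and packages are purely illustrative assumptions) - one generic image, handed different run time information at boot, becomes whatever role is required :-

import boto

conn = boto.connect_ec2()
BASE_IMAGE = 'ami-xxxxx'   # one generic base image

# configured at boot time into different roles via user data
web_server = conn.run_instances(
    BASE_IMAGE, instance_type='m1.small',
    user_data="#!/bin/sh\napt-get -y install apache2").instances[0]

wiki = conn.run_instances(
    BASE_IMAGE, instance_type='m1.small',
    user_data="#!/bin/sh\napt-get -y install mediawiki").instances[0]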

All of these capabilities allow for more ephemeral infrastructure, rapidly changing according to need with rapid deployment and destruction. This creates a range of management problems and hence we have the growth of interest in orchestration tools. These tools vary from specifically focused components to more general solutions and include Chef, ControlTier, CohesiveFT, Capistrano, RightScale, Scalr and the list goes on.

A favourite example of mine, simply because it acts as a pointer towards the future, is PoolParty. Using a simple syntax for describing infrastructure deployment, PoolParty synthesises the core concepts of this infrastructure change. For example, deploying a system no longer means a long architectural review and planning process, an RT ticket requesting some new servers with an inevitable wait, then the installation, racking and configuration of those servers followed by change control meetings.

Deploying a system becomes in principle as simple as :-

Pool "my_application" do
Cloud "my_application_server" do
Using EC2
Instance 1...1
Image_id "xxxxx"
Autoscale
end

Cloud "my_database_server" do
Using EC2
Instances 1...1
Image_id "xxxxx"
end

end

It is these concepts of infrastructure as code and automation through orchestration tools when combined with a future of computing resources provided as larger components (pre-built racks and containers) which have led many to assume that cloud will remove the roles of many sysadmins. This is a weak assumption.

A historical review of computing resource usage shows it is price elastic. In short, as the cost of provisioning a unit of compute resource has fallen, demand has increased, leading to today's proliferation of computing.

Now, depending upon who you talk to, the inefficiency of computer resources in your average data centre runs at 80-90%. Adoption of private clouds should (ignoring the benefits of using commodity hardware) provide a 5x reduction in price per unit. Based upon historical precedents, you could expect this to be much higher in public cloud and to lead to a 10-15x increase in consumption as the long tail of applications that companies desire becomes ever more feasible.

Of course, this ignores transient applications (those with a short lifetime such as weeks, days or hours), componentisation (e.g. self-service and use of infrastructure as a base component), co-evolution effects and the larger economies of scale potentially available to public providers.

Given Moore's law, the current level of wastage, a standard VM / physical server conversion rate, greater efficiencies in public provision, increasing use of commodity hardware and the assumption that expenditure on computing resources will remain flat (any reduction in cost per unit being compensated by an increase in workload), it is entirely feasible that within 5-7 years these effects could lead to a 100x increase in virtual infrastructure (i.e. the number of virtual servers compared to current physical servers). It's more than possible that in five years' time every large marketing department will have its own 1,000 node Hadoop cluster for data processing of consumer behaviour.
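
As a back of the envelope illustration only (every factor below is an assumption pulled from the argument above, not a measurement), the multiplication runs roughly as follows :-

# rough, illustrative factors - all assumptions, not measurements
utilisation_gain = 5    # removing the 80-90% wastage noted earlier
commodity_gain   = 2    # cheaper, commodity / public provision
moores_law_gain  = 10   # ~5-7 years of falling cost per unit of compute
flat_spend       = 1    # expenditure stays constant, so gains become extra workload

growth = utilisation_gain * commodity_gain * moores_law_gain * flat_spend
print(growth)   # ~100x more virtual servers for the same spend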

So, we come back to the original argument which is "pre-cloud a sysadmin can manage a few hundred machines, in the cloud era with automation a sysadmin can manage tens of thousands of virtual machines". The problem with this argument is that if cloud develops as expected then each company will be managing two orders of magnitude more virtual machines which means there'll be at least as many sysadmins as there are today.

Now whilst the model changes when it comes to platform and software as a service (and there are complications here which I'll leave to another day), the assumption that cloud will lead to fewer system administrators is another one of those cloud myths which hasn't been properly thought through.

P.S. The nature of the role of a sysadmin will change and their skillsets will broaden, however if you're planning to use cloud to reduce their numbers then you might be in for a nasty shock.

P.P.S. Just to clarify, I've been asked by a company which runs 2,000 physical servers whether this means that in 5-7 years they could be running 200,000 virtual servers (some of which will be provided by private and most on public clouds, ideally through an exchange or brokers). This is exactly what I mean. You're going to need orchestration tools just to cope and you'll need sysadmins to be skilled in these and managing a much more complex environment.

Friday, April 09, 2010

Common Cloud Myths

Over the last three years, I've spent an increasingly disproportionate amount of my time dealing with cloud myths. I thought I'd catalogue my favourites by bashing one every other day.

Cloud is Green

The use of cloud infrastructure certainly allows for more efficient provision of infrastructure through matching supply to demand. In general :-

1. In a traditional scenario where every application has its own physical infrastructure, each application requires a capacity of compute resources, storage and network which must exceed its maximum load and provide suitable spare capacity for anticipated growth. This situation is often complicated by two factors. First, most applications contain multiple components and some of those components often highly under-utilise physical resources (for example load balancing). Second, due to the logistics of provisioning physical equipment, the excess capacity must be sufficiently large. At best, the total compute resources required will significantly exceed the sum of all the individual peak application loads and spare capacity.

2. The shared infrastructure scenario covers networks, storage and compute resources (through virtualisation). Resource requirements are balanced across multiple applications with variable loads and the total spare capacity held is significantly reduced. In an optimal case the total capacity can be reduced to a general spare capacity plus the peak of the sum of the application loads, rather than the sum of the individual peaks (a short sketch after this list makes the comparison concrete). Virtual Data Centres, provisioning resources according to need, are an example of shared infrastructure.

3. In the case of a private cloud (i.e. a private compute utility), the economics are close to those of the shared scenario. However, there is one important distinction in that a compute utility is about commodity infrastructure. For example, virtual data centres provide highly resilient virtual infrastructure which incurs significant cost, whereas a private cloud focuses on rapid provision of low cost, good enough virtual infrastructure.

At the nodes (the servers providing virtual machines) of a private cloud, redundant power supplies are seen as an unnecessary cost rather than a benefit. This ruthless focus on commodity infrastructure provides a lower price point per virtual machine but necessitates that resilience is created in the management layer and the application (the design for failure concept). The reasoning for this is the same reasoning behind RAID (redundant array of inexpensive disks). By pushing resilience into the management layer and combining more lower-cost, less resilient hardware, you can actually enable higher levels of resilience and performance for a given price point.

However, the downside is that you can't just take what has existed on physical servers and plonk it on a cloud and expect it to work like a highly resilient physical server. You can however do this with a virtual data centre.

This distinction and focus on commodity provision is the difference between a virtual data centre and a private cloud. It's a very subtle but massively important distinction because whilst a virtual data centre has the benefit of reducing educational costs of transition in the short term (being like existing physical environments), it's exactly these characteristics that will make it inefficient compared to private clouds in the longer term.

4. In the case of public cloud infrastructure (a public compute utility), the concepts are taken further by balancing the variable demands of one company for compute resources against another's. This is one of many potential economies of scale that can lead to lower unit costs. However, unit cost is only one consideration here; there are transitional and outsourcing risks that need to be factored in, which is why we often see hybrid solutions combining both public and private clouds.
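
A crude way to see the difference between scenarios 1 and 2 is to compare the sum of the individual peaks against the peak of the summed load. The load profiles below are invented purely for illustration :-

# invented hourly load profiles for three applications (units of compute)
app_a = [2, 3, 9, 2, 2, 1]
app_b = [1, 8, 2, 2, 1, 1]
app_c = [3, 2, 2, 2, 9, 2]

# scenario 1: dedicated infrastructure - every application carries its own peak
dedicated = sum(max(app) for app in (app_a, app_b, app_c))

# scenario 2: shared infrastructure - capacity follows the peak of the combined load
shared = max(a + b + c for a, b, c in zip(app_a, app_b, app_c))

print(dedicated, shared)   # 26 vs 13, before adding any general spare capacity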

The overall effect of moving through these different stages is that the provision of infrastructure becomes more efficient and hence we have the "cloud is green" assumption.

I pointed out, back in 2008 at IT@Cork, that this assumption ignored co-evolution, componentisation and price elasticity effects.

By increasing efficiency and reducing the cost of provisioning infrastructure, a large number of activities which might once not have been economically feasible become economically feasible. Furthermore, the self-service nature of cloud not only increases agility by enabling faster provision of infrastructure but accelerates user innovation through provision of standardised components (i.e. the infrastructure equivalent of a brick). This latter effect can encourage the co-evolution of new industries in the same manner that the commoditisation of electronic switching (from the innovation of the Fleming valve to complex products containing thousands of switches) led to digital calculators and computers, which in turn drove further commoditisation of and demand for electronic switching.

The effect of these forces is that whilst infrastructure provision may become more efficient, the overall demand for infrastructure will outstrip these gains precisely because infrastructure has become a more efficient and standardised component.

We end up using vastly more of a more efficient resource. Lo and behold, cloud turns out not to be green.

The same effect was noted by William Stanley Jevons in 1865, when he "observed that England's consumption of coal soared after James Watt introduced his coal-fired steam engine, which greatly improved the efficiency of Thomas Newcomen's earlier design".

Thursday, March 25, 2010

Cloud computing made simple

It has been a truly amazing year since we embarked on our "cloud" journey at Ubuntu, hence I thought I'd review some of the highlights.

We started the journey back in 2008 when Mark Shuttleworth announced our commitment to providing cloud technology to our users. At that time, the cloud world was already in a state of growing confusion, so we adopted an approach of :-

  • make the cloud simple.
  • focus on one layer of the computing stack (infrastructure) to begin with.
  • give our users real technology not promises.
  • help drive standardisation (a key requirement of this shift towards a service world) by adopting public defacto standards.
  • work with leading partners in this growing industry.
  • provide open source systems to avoid lock-in issues.
  • actively work to mitigate risks and concerns over cloud by giving our users options.

Hence, in April '09 as part of Ubuntu 9.04 we launched our hybrid cloud strategy.

Our approach was based around the adoption of Amazon EC2 / S3 & EBS as the public defacto standard rather than the creation of some new APIs (there are too many already).

We provided Ubuntu images for use on Amazon EC2 (public cloud) and the technology to build your own private cloud (known as Ubuntu Enterprise Cloud) that matched Amazon's APIs. We also added management tools which could cross both public and private domains because of our adoption of a standard API set.
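
As an illustrative sketch of what that standard API set buys you (the endpoint, credentials and port below are placeholders; the pattern is the point), the same Python boto client can talk to Amazon EC2 or to a UEC / Eucalyptus front end :-

import boto
from boto.ec2.regioninfo import RegionInfo

# public cloud: Amazon EC2 (credentials from the environment)
public = boto.connect_ec2()

# private cloud: a UEC / Eucalyptus front end exposing the same API
private = boto.connect_ec2(
    aws_access_key_id='my-uec-key', aws_secret_access_key='my-uec-secret',
    region=RegionInfo(name='uec', endpoint='cloud.internal.example.com'),
    is_secure=False, port=8773, path='/services/Eucalyptus')

# the same calls work against either
for conn in (public, private):
    print(conn.get_all_images())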

For 9.10 we significantly improved the robustness and ease of setting up a private cloud (Mark built his own several-node system in under 25 minutes from bare metal). We provided the base for an application store, improved the management capabilities of Landscape and extended the features of UEC considerably. We also launched training, consultancy and support services for the cloud and a JumpStart program to help companies move into the cloud quickly.

During this time we've worked closely with many partners, I'll mention a few (more details can be found on the Ubuntu Cloud site) :-

  • Eucalyptus whose open source technology we adopted into the distribution as a core part of Ubuntu Enterprise Cloud.
  • Intel's Cloud Builder program to provide best practices on how to create a private cloud using UEC. I'd strongly recommend reading the whitepaper.
  • RightScale & CohesiveFT to provide best of breed public management tools alongside our own Landscape system.
  • Dell, who will offer a range of pre-built clouds using a series of ‘blueprint’ configurations that have been optimised for different use cases and scale. These will include PowerEdge-C hardware, UEC software and full technical support.

In one year, we've made cloud simple for our users. We've brought our "Linux for Humans" philosophy into the cloud by getting rid of complexity, confusion and myth.

If you want to get into cloud, then we offer :-

  • Simple Choices: You can have either private, public or hybrid (i.e. public + private) infrastructure clouds.
  • Simple Setup: If you want to build a private cloud, then Ubuntu makes the set-up ridiculously easy. You can be up and running with your own cloud in minutes. Along with our community documentation covering CD installation and more advanced options, you can also find detailed information on how to build a cloud through Intel's Cloud Builder program. However, if building a cloud still sounds too daunting then Dell offers pre-built, fully configured and supported private clouds.
  • Simple Management: You can use the same tools for both your private and public clouds because we've standardised around a common set of APIs. There's no need to learn one set of systems for private and another for public.
  • Simple Bursting: Since we provide common machine images which run on both public and private cloud offerings combined with standardised APIs, then the process of moving infrastructure and combining both private and public clouds is ... simpler.
  • Enterprise Help: If you still need help then we offer it, including 24x7 support and a jumpstart program to get your company into the cloud.
  • Open source: UEC, the Ubuntu machine images and all the basic tools are open sourced. We're committed to providing open source systems and following through on a genuine open source approach i.e. the system is open source and free and so are all the security patches and version upgrades.

The results of this year have been very encouraging. We recently estimated that there are now over 7,000 private clouds built with UEC; however, with 7% of users in our annual Ubuntu User Survey saying that they have built a UEC cloud, the true figure might be very much higher. It was great to hear that almost 70% of users felt Ubuntu was a viable platform for the cloud, but there were several surprising statistics including :-

  • 54% were using the cloud in some form or another (software, platform or infrastructure). However, given the fuzziness of the term cloud, this can only be seen as a signal of intent to use online services.
  • Only 10% had used public cloud providers (such as Amazon) for infrastructure. What was quite remarkable was that given the relatively recent availability of UEC, almost as many people had built private clouds as had used public cloud providers.
  • 60% felt that the use of private cloud was more important to their organisation, 25% thought that both private and public was of equal importance whilst only 15% felt that public cloud was the most important.

Whilst this survey was targeted at Ubuntu users, the data we receive from external sources suggests that Ubuntu is becoming the dominant operating system for consumers in the infrastructure cloud space. Even a simple ranking of search terms using Google Insights around cloud computing shows how significant a player Ubuntu is.

Whilst this is great news, what really pleases me is that we're making cloud simple and real for organisations and listening to what they need. We're getting away from the confusion over cloud, the tireless consultant drivel over whether private cloud is cloud computing and the endless pontifications and forums debating vague futures. Instead, we're giving real people, real technology which does exactly what they want.

Over the next year we're going to be tackling issues around creating competitive marketplaces (i.e. more choice for our users), simplifying self-service IT capabilities and orchestration, and providing a wide range of open source stacks and platforms to use in the cloud.

We're going to continue to drive down this path of commoditisation by providing common workloads for the cloud (the same as we've been doing for server) and helping businesses to standardise that which is just cost of doing business.

Regardless of any attempts to badge "cloud" as just a more advanced flavour of virtualisation or describe it as "not real yet" by various late vendors, we will be doing our best to bust the various "cloud" myths and push the industry towards competitive marketplaces of computer utilities through defacto standardisation.

Commoditise! Commoditise! Commoditise!

I'm also delighted about our partners' successes, with RightScale passing the million server mark, Amazon's continual growth and leadership with the introduction of spot markets, Dell's outstanding move to make cloud mainstream, Intel's push to make cloud easier & Eucalyptus' continued adoption and the appointment of Marten Mickos as CEO.

P.S. if you want to keep track of what's happening with Ubuntu in the cloud, a good place to start is our cloud blog or following our twitter feed, ubuntucloud.

Sunday, March 14, 2010

Cloud rant

For the last two posts I've had a pop at the term "cloud", so I'd now like to explain my reasoning in more detail.

First, as always, some background information.

What we know
The transition from a product to a services world in I.T. is very real. It's happening now because of the confluence of concept, suitability, technology and a change in business attitude. Overall it's driven by the process of commoditisation and it is the very ubiquity of specific I.T. activities that makes them potentially suitable for volume operations. I say potentially because volume operations are only viable for activities which are both well defined and ubiquitous.

Fortunately ubiquity has a relationship to certainty (or in other words how well understood, defined and therefore certain an activity is) which is why these activities are suitable for provision on the basis of volume operations through large computer utilities.

Lifecycle
For many years I've talked about lifecycle and the evolution of business activities. Any activity goes through various stages from its first innovation (as per the use of computer resources in the Z3 in 1941) to custom built examples to products which describe the activity. Usually, the activity ends up becoming a ubiquitous and well defined commodity (assuming there are no natural limits or constraints).

During this lifecycle, as the activity becomes more defined in the product stage, service models can often arise (for example the rental model of the managed hosting industry for computing infrastructure or the early subscription like models of electricity provision). As the activity becomes more of a commodity the existence of these service models tends to lead to the rise of utility services (as with electricity provision).

An essential requirement for the growth of the utility model (and the type of volume operations necessary to support it) is that consumers view that what's provided is a commodity. It's little more than a cost of doing business and a standardised version is good enough.

The latter is why a change of attitude is critical in development of the utility service model. If, for example, consumers still view the activity as creating some form of competitive advantage (whether true or not), they are unlikely to adopt a utility model of standard services.

The change in attitude
Over the last decade the attitude of business towards certain I.T. activities has changed dramatically. Recently, a group of 60-odd CIOs & Architects highlighted that many of the I.T. related activities they undertook were commonplace and well defined, particularly across their industry & geography.

Now that doesn't mean they did things the same way, quite the reverse.

Taking just one activity, ERP: all of these companies agreed that whilst they gained no competitive advantage from ERP, it was an essential cost of doing business. They also agreed that they all had their own customised processes for ERP which they invested heavily in. The shock was their agreement that these different processes provided no differential benefit. It was estimated that for this one activity alone, across this small group of companies, $600 million p.a. was spent maintaining differences which provided no value.

By reducing customisation through the provision of ERP as standardised services, each company would benefit significantly in terms of cost savings. Standardisation of processes and removing the costs associated with customisation is seen as one of the major benefits of the transition from an "as a Product" to an "as a Service" world.

Let's be clear here, "as a Service" was considered shorthand for "provision of a commodity activity through standardised services via a competitive marketplace of computer utilities". These companies were not looking for an outsourcing arrangement for a highly customised service for their needs, they were looking for a commodity to be treated as a commodity.

The concepts of computer utilities offering elastic and infinite supply on demand, provision of activities through services and commoditisation are all tightly coupled.

The benefits & risks of "cloud"
The benefits of this shift towards service provision via large computer utilities have been discussed extensively for the last 40 years :-

  • economies of scale (volume operations)
  • focus on core activities (outsourcing to a service provider)
  • pay per use (utility charging)
  • increased consumer innovation (componentisation)

However, one critical benefit that gets missed is standardisation itself.

The risks associated with this change (ignoring the disruptive effect on the product industry) can be classified into transitional risks (related to the change in business relationship) and generic outsourcing risks. I've categorised these below for completeness.

Transitional Risks

  • Confusion over the new models.
  • Trust in the new providers.
  • Transparency of relationships.
  • Governance of these new models (including auditing, security & management).
  • Security of supply.

Outsourcing Risks

  • Suitability of the activity for service provision.
  • Vendor lock-in (& exit costs).
  • The availability of second sourcing options.
  • Pricing competition.
  • Loss of strategic control.

Transitional risks can be mitigated through standard supply chain management techniques. For example, with electricity we often combine both public and private sources of provision (a hybrid option). However, outsourcing risks require the formation of competitive marketplaces to mitigate them. Whilst the latter is a genuine concern for companies, even without these marketplaces the benefits of this change are still attractive.

The problem with the term "Cloud"
This change in I.T. is all about standardised service provision of commodity activities through a competitive marketplace of computer utilities. The notion of utility conjures up easily understood and familiar models.

Few would have a problem in understanding how access to a standardised form of electricity through a marketplace has allowed for a huge range of new innovations built upon consuming electricity. Few would have a problem in understanding how the providers themselves have sought new innovative ways of generating electricity. Few would ever consider electricity itself as a form of innovation, to most it comes from a plug and it is critical that it is standardised.

This is a really important point because our companies comprise value chains that are full of components which are evolving to become commodity and utility-like services. This commoditisation enables new higher order systems to be created and new businesses to form (e.g. electricity enabled radio and television) but it also destroys old businesses. The key here is that whilst commoditisation enables innovation it also destroys the past. The two are different.

The problem with the term "cloud" beyond being fuzzy, is it often used to describe this change in I.T. as something new and innovative. The term helps disguise a fundamental shift towards a world where the bits don't matter and it's all about services. The term allows for all manner of things to be called cloud, many of which have little to do with the standardisation of an activity and its provision through utility services.

You could easily argue the term is misleading as it encourages customisation and distracts the consumer from what should be their focus - standardised services through a competitive marketplace of computer utilities.

Alas, as I said we ALL have to use the term today because of its momentum.

At Ubuntu we focus on commodity provision of activities (common workloads), providing our users with the technology to build a private computer utility (nee "Cloud", as part of a hybrid strategy), the adoption of the defacto standard of EC2 & S3 and we also provide all the technology as open source to encourage the formation of competitive markets.

We use the term "cloud" because it's what customers, analysts and others expect to hear. This of course doesn't stop us from explaining what is really happening and busting the "cloud" myths that exist.

Saturday, March 13, 2010

Is your cloud a poodle?

Since we're fond of replacing meaningful concepts such as commoditisation, lifecycle, categorisation and computer utilities with bland terms like "cloud", I thought I'd follow the trend on to its next logical conclusion - Poodle Computing.

The shift of I.T. activities from being provided "as a Product" to being provided "as a Service" through large computer utilities has an obvious next step - the formation of competitive marketplaces. These marketplaces will require standardisation of what is after all a commodity (i.e. ubiquitous and well defined enough to be suitable for service provision through volume operations) and the ability of consumers to switch easily between and consume resources over multiple providers (which in turn requires multiple providers, access to code & data, interoperability of providers and an overall low exit cost).

I won't bore you with the mechanics of this and the eventual formation of brokerages & exchanges, I covered this subject extensively in 2007 when I made my "6 years from now you'll be seeing job adverts for computer resource brokers" prediction.

However, in this future world of brokerages and fungible compute resources (or fungitility as I jokingly called it) the consumer will become ever more distanced from the source of provision. This will be no different to the many other forms of utilities where vibrant exchange markets exist and what the consumer purchases often has gone through the hands of brokers. You don't actually know which power station generated the electricity you consume.

So this brings me to the title of the post. As the consumer and the source become more distanced, it reminds me of Peter Steiner's cartoon "On the Internet, nobody knows you're a dog".

On that basis, what sort of dog flavour of computing resource will you be consuming?

By introducing the concept of "dog computing" to cover this "cloud of clouds" world (hey, they're both meaningless) then the marketing possibilities will become endless and a lot more fun.

I can see the conversation now, walking into a lean and mean sales organisation and saying to the CEO that they are using "Poodle Computing". Shouldn't they be using our brand new "Pitbull Computing" or at least upgrading to "Springer Spaniel"?

We could always call things what they are (computer utilities & competitive markets of computer utilities) but I suspect we will end up with "cloud of clouds", "cloud exchanges" and an OTC market of ominous sounding "cloudy futures".

Friday, March 12, 2010

What is Cloud?

Before we can discuss this term, a bit of history and background is needed.

Activities

All business activities undergo a lifecycle, they evolve through distinct stages including :-

  • the first introduction of a new activity (its innovation)
  • the custom built examples replicating this activity (the copying phase)
  • the introduction of products which provide that activity (the product stage, including numerous rounds of feature differentiation which are also unfortunately called product innovation)
  • The activity becoming more of a commodity (ubiquitous, well-defined and with no qualitative differentiation). In certain circumstances that commodity can be provided through utility services.

It should be noted that the characteristics of an activity change as it moves through its life-cycle. As a commodity it's of little strategic value (or differentiation) between competitors, whereas in its early stages it can often be a source of competitive advantage (a differential).

Information Technology

At any one moment in time, I.T. consists of a mass of different activities at different stages of their life-cycle. Some of those activities are provided through discrete software applications (an example might be ERP), other activities relate to the use of platforms (developing a new system using RoR or provisioning of a large database etc) whilst others relate to the provision of infrastructure (compute resource, storage, networks).

You can categorise these activities into a computing stack of infrastructure, platform and software. Of course you can go higher up the stack to describe the processes themselves and beyond, however for this discussion we will just keep it simple.

What's happening in IT today?

Many activities in I.T. that were once innovations but more recently have been provided as products (with extensive feature differentiation) have now become so ubiquitous and so well defined that they have become little more than a commodity that is suitable for service provision. You can literally consider that chunks of the "computing stack" are moving from an "as a Product" to an "as a Service" world.

This change is the reason why we have the "Infrastructure as a Service" to "Platform as a Service" to whatever else "as a Service" industries. Of course, there are many higher order layers to the stack (e.g. processes) but any confusion around the "as a Service" term generally only occurs because we never used to describe these activities with the "as a Product" term.

Had we categorised the previous software industry in terms of "Software as a Product", "Platform as a Product" etc, then the change would have been more obvious.

Why now?

This change requires more than just activities being suitable for utility service provision. It also requires the concept of service provision, the technology to achieve this and a change in business attitude i.e. a willingness of business to adopt these new models. Whilst the concept is old (more on this later), and the technology has been around for some time (yes, it has matured in the last decade but that's about all), both the suitability and change of business attitude are relatively new.

Thanks to the work of Paul Strassmann (in the 90's) and then Nick Carr (in the 00's), many business leaders have recognised that not all I.T. is a source of advantage. Instead much of I.T. is a cost of doing business which is ubiquitous and fairly well defined throughout an industry.

It was quite refreshing to recently hear a large group of CIOs, who all spent vast amounts of money maintaining highly customised CRM systems, comment that actually they were all doing the same thing. These systems provided no strategic value, no differential, and in reality what they wanted was standardised, low cost services charged on an actual consumption basis for what is essentially a cost of doing business. They also wanted this to be provided through a marketplace of service providers with easy switching between them.

This is quite a sea change from a decade ago.

The change from a "as a Product" to an "as a Service" world is happening today because we have the concept, technology, suitability and most importantly this changing business attitude.

An old Concept

The concept of utility service provision for I.T. is not new but dates back to the 1960's. Douglas Parkhill, in his 1966 book - "The Challenge of the Computer Utility" - described a future where many computing activities would be provided through computer utilities analogous to the electricity industry. These computer utilities would have certain characteristics; they would :-

  • provide computing resources remotely and online
  • charge for the use of the resources on the basis of consumption i.e. a utility basis
  • provide elastic & "infinite" supply of resources
  • benefit from economies of scale
  • be multi-tenanted

Douglas noted that these computer utilities would take several forms as per the existing consumption of other utilities. These forms included (but are not limited to) public, private & government utilities. He also noted that eventually we would see competitive markets of computer utilities where consumers could switch providers, consume resources across multiple providers (i.e. a federated use) and consume all manner of hybrid forms (e.g. private and public combinations).

One final note: the term utility means a metered service where the charge is based upon consumption. That charge might be financial or it could be in any other currency (e.g. access to your data).

The Cloud Term

Between 1966 and 2007, the general school of thought grew to be :-

  • I.T. wasn't one thing. Many aspects of I.T. created little or no differential value and were simply a cost of doing business (Strassmann, 90s)
  • There is a correlation between ubiquity of I.T. and its strategic value (differentiation). The more ubiquitous I.T. was, the less strategic value it created. (Nick Carr, '02)
  • I.T. activities could be categorised into rough groupings such as software, platform and infrastructure (the actual terms used have changed over time but this concept is pre-80's)
  • Certain I.T. activities would be provided through computer utilities as per other utility industries (Parkhill & McCarthy, 60's)
  • There were several forms that these computer utilities could take including public, private, government and all manner of combinations in between. (Douglas Parkhill, 1966)
  • We would see the formation of competitive marketplaces with switching and federation of providers.
  • These computer utilities had certain common characteristics including utility charging, economies of scale, elastic and "infinite" supply etc. (Douglas Parkhill, 1966)
  • Whilst all activities have a lifecycle which they evolve along through the process of commoditisation, the shift from an "as a Product" to an "as a Service" world would require several factors (i.e. the concept, the technology to achieve this, the suitability of activities for service provision and a change in business attitude.)

Back between '05 and '07, there was a pretty crystal clear idea of what was going to happen :-

A combination of factors (concept, suitability, technology and a change in business attitude) was going to drive those I.T. activities which were common, well defined and a cost of doing business from being provided "as products" to being provided "as services" through large computer utilities. The type of services offered would cover different elements of the computing stack, there would be many different forms of computer utility (public, private & government) and eventually we would see competitive marketplaces with easy switching and consumption across multiple providers.

In '05, James Duncan, myself and many others were starting to build Zimki - a computer utility for the provision of a JavaScript based "Platform as a Service" - for precisely these reasons. The concepts of federation, competitive markets, exchanges and brokerages for service provision of a commodity were well understood.

Unfortunately in late '07 / early '08, the term "Cloud" appeared and the entire industry seemed to go into a tailspin of confusion. During '08, the "Cloud" term became so prevalent that if you mentioned "computer utility" people would tell you that they weren't interested but could you please tell them about "this thing called cloud".

So, what is Cloud?

The best definition for cloud today is NIST's. Using five essential characteristics (including elasticity, measured service etc), four deployment models (private, public, government etc) and three service models (application, platform, infrastructure), it neatly packages all the concepts of computer utility, the shift from product to services and the different categories of the computing stack into one overall term - "cloud".

In the process it wipes out all the historical context, trainwrecks the concept of a competitive marketplace with switching and federation, eliminates the principle idea of commoditisation and offers no explanation of why now. It's an awful mechanistic definition which only helps you call something a cloud without any understanding of why.

However, that said, NIST has done a grand job of trying to clean up the mess of 2008.

In that dreadful year, all these well understood concepts of computer utilities, competitive marketplaces, the lifecycle of activities, categorisation of the computing stack and commoditisation were put in a blender, spun at 30,000 rpm and the resultant mishmash was given the name "cloud". It was poured into our collective consciousness along with the endless blatherings of "cloudy" thought leaders over what it meant (I'm as guilty of this as many others).

To be brutal, whilst the fundamentals are sound (commoditisation, computer utilities, the change from products to services etc), the term "Cloud" was nothing more than a Complete Load Of Utter Drivel. It's a sorry tale of confusion and a meaningless, generic term forced upon a real and meaningful change.

My passionate dislike for the term is well known. It irks me that for such an important shift in our industry, I have to use such a term and then spend most of my time explaining the fundamental concepts behind what is going on, why this change is happening and undoing the various "cloud" myths that exist.

Being pragmatic, I'm fully aware that this term has enough momentum that it's going to stay. Shame.

Wednesday, January 13, 2010

Mystic Me 3.0

It's time for a spot of bleary eyed crystal ball gazing.

Last year, my predictions were fairly reasonable with 7 hits covering the commercial release of PLED TVs to our beloved government economists saying that 2010 would be worse than expected.

The jury is still out on house prices [Update : the December 2009 figures showed the first annual increase in house prices - 2.5% - since May 2008.] whilst we await the Land Registry report but alas two of the predictions were wide of the mark. The FTSE 100 failed to drop below 3,500, only hitting 3,512 - no cigar there then - and Yahoo wasn't sold.

So, with the usual added vagueness, looseness of terms and general get out clauses, yawn with delight for :-

Mystic Me Predictions for 2010.

  1. The number of mergers & acquisitions in the cloud computing and open source industries will reach fever pitch, surpassing previous years.
  2. The first examples of people trading on variability in cloud infrastructure prices and the early formation of brokerage concepts will appear.
  3. There will be no let-up in end user confusion surrounding cloud computing as would be thought leaders will embark on an orgy of term redefinition. Expect lots and lots of heated debates on how cloud isn't cloud computing isn't utility computing.
  4. The distorted creative destruction meme of modern society (i.e. "out with the old, in with the new") will get ahead of our desire to consume technology. Despite many predicting the death of the book, paperbacks will have a surprisingly good year.
  5. RPI in the UK will rise sharply and the FTSE 100 will drop below 3,000 during the year. Judging by past performance, the MPC will keep interest rates low because they're barking mad.
  6. Under howls of protest, banks will be given more taxpayers cash. This will be despite being bailed out, given free cash through quantitative easing and then splashing lots of dosh on bonuses. The tired arguments that "no-one saw this second crisis" coming and that we "can't let the banking system fail" will be trotted out to order.
  7. Despite independent estate agent surveys suggesting that house prices have risen a gazillion percent in the last minute, Land Registry house prices will continue to drop in the U.K.
  8. Environmental forecasters will be befuddled by Arctic summer ice disappearance exceeding the worst predictions of current climate models.
  9. There will be legal attempts to claim and quantify ownership of social networks as company IP.
  10. The new Doctor Who will be pants and the attempts to spice it up and make it more gritty will look rather sad.

Wednesday, December 30, 2009

The king was in his counting house ...

... handing out our money.

This was the year that Mervyn King & Alistair Darling managed to spectacularly fritter away billions of taxpayers' money.

I was never opposed to lending money to banks but quantitative easing (QE, a dishonest way of printing cash and giving it away in truckloads to the usual cronies) was disgraceful. If you're going to print money then at least have some direct investment, don't just hope that the export economy and money supply will magically solve our problems. 

QE combined with low interest rates may be bully for banks, shareholders and homeowners by creating an influx of cheap foreign capital, but in a mainly import-led economy it will hit the cost of raw materials and inbound goods whilst squeezing the spending of savers. The net effect is a trade: inflation and a weakening internal economy in exchange for maintaining stock and house prices. Great for banks, the wealthy and those in unsustainable debt, but it sucks for ordinary people and pensioners who were not responsible for this mess. As I've said many times before, this will just make the recession deeper and longer. However, it's a bit like boiling frogs - throw them into hot water and they try to get out, but put them into tepid water and slowly raise the temperature and they won't notice. In this case, the frogs are called savers.

The amount of money being used to boil our own is huge. The tally to date is that the taxpayer has been exposed to £1 trillion of potential debt through cash injections, state guarantees, quantitative easing and other interventions. As a result, the taxpayer is expected to lose anywhere between ten and a hundred billion. All of this is to prop up an industry which will spend the next few years trying not to pay tax because of "losses".

Why is it that when the taxpayer acts as a lender of last resort, we have to make a loss into the bargain? When the hard up resort to loan sharks, you never hear tales of some financial wheeze where money is given away.

Of course, it's different because we couldn't let the banks fail - despite no-one explaining why not. Still, that doesn't mean we have to be 'soft'; being the lender of last resort should be a time of piracy. For some reason the city, unlike the poor, got let off the hook.

We could (and should) have demanded equity equal to any loans plus the loan capital plus punitive interest rates, but we didn't. Where's our pound of flesh and 2000% APR?

We could (and should) have invested heavily in social housing, bought out the building industry when it was on its knees and grown our state-owned banks by providing liquidity into the economy.

We didn't. We're not going to. Our institutions are soft.

What did happen was that Mervyn & Darling were cheered by the financial giants like a pub landlord who has wiped the tab clean for his heaviest drinkers. Naturally, the taxpayer got lumbered with the bill and the underlying causes of the mess (huge debt, delusional valuation, excessive gambling, economic instability) remain unresolved.

Expect more bad news to come.

At least Darling has got a dubious excuse in trying to mess things up for the Conservatives. If only some of the largesse had been spent on things that really matter, like combating global warming (which, from the Copenhagen Accord, laughably only gets £60 billion a year by 2020).

A wasted opportunity but then that's how I feel about New Labour - a decade of disappointment. 

Whilst the noughties have been personally good for me, in general the decade failed to live up to expectations. Unless of course you consider that WAGs, MySpace house parties, Wii Fit, 4x4s, an endless war on terror, draconian legislation reducing civil liberties, excessive celebrity and a highly materialistic and self-serving environment are the pinnacle of human nature.

To summarise the noughties, you'd have to say "nought for the environment, nought for social mobility and lots of noughts for bankers".

On a positive note, Doctor Who was utterly brilliant.

[Update - Nov '12 - Mervyn is still in power but will apparently be leaving in 2013. Depressing how things turned out.  Some typo's and tidy ups needed in this post ... cleaned up]

Monday, December 14, 2009

Where is Amazon heading?

There is something that I've always found confusing about Amazon's cloud strategy.

The development of EC2 & S3 makes good sense given the suitability of these activities for large scale volume operations (an activity that Amazon, as a book-seller, specialises in).

The growth of an ecosystem around these core services and the provision of these services through APIs are ideal. The solving of some of the transitional educational barriers to cloud (such as persistency through EBS) seems spot on and ... well, the list goes on.

However, I've never quite understood why Amazon chooses to just cannibalise its own ecosystem (the creation of a Hadoop service when Cloudera existed, the creation of auto-scaling when many alternatives existed) rather than buying out some of those groups. I understand why you'd acquire those capabilities but I'd have mixed in acquisition because it sends a strong signal for others to join the party. There's a negative feedback loop here which could be easily avoided.
[By 2017, despite grumbling of "Amazon's eating our business model" ... they continue to be able to play the game. The negative feedback loop doesn't seem to be as big as I had anticipated]

Given that, I can't be sure of where Amazon is going to head - more copying, a shift to acquisition, or a bit of both?

Other than the eventual need to move into the platform space, the moves towards a spot market could suggest that Amazon might attempt to set itself up as the computing exchange for basic infrastructure resources. To do this, it would also need to define itself as the industry standard (not just the defacto) probably through an IETF certification route and hence encourage other providers to adopt this standard. When this might happen (if at all) is tricky because it depends so much on how competitors play the game.
[I never did understand their plan, I still don't.  Most seemed hell bent on oblivion which itself was odd.]

Fortunately for Amazon, it already has several alternative open source implementations to support any standardisation claim and these open source technologies (such as Ubuntu Enterprise Cloud) provide a quick means for providers to get started.
[At the time of writing Ubuntu was starting to take over the cloud space]

There is huge future value in the exchange, brokerage and trading businesses for computing resources. It's what I was intending to go after with Zimki, all those years back but that was in the platform space.

I'm not going to make any predictions for now, I'll leave that until early January. However, if I was a betting man then I wouldn't be surprised if, over a variable amount of time, Amazon :-
[e.g. next 3 to 15 years, not immediately. I must remember to put dates with predictions, as fairly useless otherwise]
  • Goes for IETF standardisation of EC2 & S3
    [Hasn't happened]
  • Allows reserved instances to be traded on its spot market hence creating a basic form of commodity exchange
    [this happened]
  • Enters the platform layer of the computing stack with provision of a development platform
    [this happened with Lambda]
  • Allows other providers (who adopt the EC2 / S3 standard) to sell instances on the exchange
    [Hasn't happened]
  • Exits the infrastructure provision business by selling on the EC2 / S3 services (at a phenomenal premium) whilst keeping the exchange and any established brokerage business i.e. keep the meta data (which is useful for mining an ecosystem but allow others to provide hence overcoming monopoly issues by creating a market)
    [Hasn't happened but I'm still expecting the monopoly issues to raise its head]
--- 8th April 2017
[Added some additional commentary]

Mystic Meg Epic Fail

I hate predictions.

Don't get me wrong, I don't mind the "oh, it's already happening but I'll pretend it's new" type of predictions because you're guaranteed to look good.

I can happily quote that "the cloud market will grow", "standards, portability and interoperability will become increasingly important" and "the platform layer will be a major market" with full knowledge that these are safe bets.

Problem is, these aren't really predictions and I've got a big mouth. Hence, I tend to make predictions which tend to explode rather nastily.

For example, back in 2002 I was predicting a financial meltdown in 2005 due to the massive growth in debt. Did it happen? Nope. I was out by a couple of years, but that's the point of prediction: the when is vastly more important than the what.

That said, I can happily get the what wrong as well. Hence back in January 2009 when the FTSE was at 4,608, growing rapidly and many were talking about a rebound - I had to go and predict that it would drop to 3,500 within the year. Did it? Nope, it got close at 3,512 but never quite made it (back to the drawing board with my economic model again).

However, I'd be safe talking about cloud wouldn't I? Turns out that I get that wrong too. Hence back in 2007, I was predicting that "six years from now, you'll be seeing job adverts for computer resource brokers".

Earlier this year, I realised that prediction was going to be spectacularly wrong and happen much sooner. Eventually, I even admitted as much.

Adding salt to a fresh wound is Amazon's announcement of a fully fledged spot market.

I suspect it won't take long for someone to offer spread betting on the Amazon spot price or for some form of OTC derivative to mitigate against fluctuation in price and cover the risk of paying the full on-demand price (because of failure to buy). Of course, this would work a lot better if users could resell reserved instances on the spot market, providing the basis for a commodity exchange.

Opening up the spot market to the resell of instances between consumers will enable market pricing, making reserved instances more attractive. This will provide Amazon itself with future capacity planning information.

An alternative would be for users to resell reserved instances back to Amazon for sale on the spot market. However, this depends upon a quartet of objective, offers, availability and pricing.

For example, if revenue is the main objective, then there are scenarios (especially in the early days) where an increased revenue will be generated by selling a smaller number of instances at a higher spot price, leaving unfulfilled demand and capacity. It should be remembered that this is not market pricing but Amazon pricing.
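
A toy example (the demand figures are invented) of why a revenue objective doesn't automatically mean selling every spare instance :-

# invented demand curve: spot instances that would sell at each price (cents/hour)
demand_at_price = {10: 1000, 20: 700, 30: 400}

revenue = {price: price * instances for price, instances in demand_at_price.items()}
print(revenue)   # {10: 10000, 20: 14000, 30: 12000}
# selling fewer instances at 20 cents beats selling out at 10 cents,
# leaving both unfulfilled demand and spare capacity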

Under a revenue objective, the conditions where it will be viable for Amazon to increase capacity on the spot market by the re-purchase of reserved instances (presuming Amazon isn't playing a double booking game with reserved instances, which are in essence a forward contract) will be limited.

It all depends upon this quartet and the only thing that I'm sure of, is that my prediction is out by a few years.

Ouch ... damn, how I hate predictions.

Friday, December 11, 2009

Cloud Camp Frankfurt

A few months ago I provided an introductory talk on cloud computing at Cloud Camp Frankfurt. I was asked to be vendor neutral, so it is light on Ubuntu Enterprise Cloud.

They've put the video of my talk up, so I thought I'd provide some links. Please note, it is split into two parts.

Cloud Computing - Part I

Cloud Computing - Part II

There are more videos on the Cloud Camp Frankfurt site, they're worth watching as the event was a blast.

Monday, December 07, 2009

Old yet new ...

I'm just comparing two of my talks, both on cloud computing, and if anyone has time I'd like some feedback.

The first is my recent talk from OSCON in 2009 covering "What is cloud computing and why IT matters", the second is my talk from OSCON in 2007 covering "Commoditisation of IT"

They both cover the same topic matter but with a different viewpoint (N.B. terms have changed since the 2007 talk but I'd like some feedback on style & content.)

Both are 15 minutes long but which was better and more importantly, why?

OSCON 2009: What is cloud computing and why IT matters

OSCON 2007: Commoditisation of IT