Cloud "my_application_server" do
Instance 1...1
Image_id "xxxxx"
Autoscale
end
Cloud "my_database_server" do
Using EC2
Instances 1...1
Image_id "xxxxx"
end
end
Over the last three years, I've spent an increasingly disproportionate amount of my time dealing with cloud myths. I thought I'd catalogue my favourites by bashing one every other day.
Cloud is Green
The use of cloud infrastructure certainly allows for more efficient provision of infrastructure through matching supply to demand. In general :-
1. In the traditional scenario, where every application has its own physical infrastructure, each application requires a capacity of compute resources, storage and network which must exceed its maximum load and provide suitable spare capacity for anticipated growth. This situation is often complicated by two factors. First, most applications contain multiple components, some of which highly under-utilise physical resources (for example, load balancing). Second, due to the logistics of provisioning physical equipment, the excess capacity held must be sufficiently large. At best, the total compute resource required will significantly exceed the sum of all the individual peak application loads plus spare capacity.
2. The shared infrastructure scenario covers networks, storage and compute resources (through virtualisation). Resource requirements are balanced across multiple applications with variable loads and the total spare capacity held is significantly reduced. In the optimal case, the total capacity can be reduced to a general spare capacity plus the peak of the sum of the application loads (see the sketch after this list). Virtual Data Centres, provisioning resources according to need, are an example of shared infrastructure.
3. In the case of a private cloud (i.e. a private compute utility), the economics are close to those of the shared scenario. However, there is one important distinction: a compute utility is about commodity infrastructure. For example, virtual data centres provide highly resilient virtual infrastructure which incurs significant cost, whereas a private cloud focuses on the rapid provision of low cost, good enough virtual infrastructure.
At the nodes (the servers providing virtual machines) of a private cloud, redundant power supplies are seen as an unnecessary cost rather than a benefit. This ruthless focus on commodity infrastructure provides a lower price point per virtual machine but necessitates that resilience is created in the management layer and the application (the design for failure concept). The reasoning for this is the same reasoning behind RAID (redundant array of inexpensive disks): by pushing resilience into the management layer and combining many lower cost, less resilient components, you can actually enable higher levels of resilience and performance for a given price point.
However, the downside is that you can't just take what has existed on physical servers, plonk it on a cloud and expect it to work like a highly resilient physical server. You can, however, do this with a virtual data centre.
This distinction and focus on commodity provision is the difference between a virtual data centre and a private cloud. It's a very subtle but massively important distinction because whilst a virtual data centre has the benefit of reducing educational costs of transition in the short term (being like existing physical environments), it's exactly these characteristics that will make it inefficient compared to private clouds in the longer term.
4. In the case of a public cloud infrastructure (a public compute utility), the concepts are taken further by balancing the variable demands of one company for compute resources against another's. This is one of many potential economies of scale that can lead to lower unit costs. However, unit cost is only one consideration here: there are transitional and outsourcing risks that need to be factored in, which is why we often use hybrid solutions combining both public and private clouds.
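To make the first two scenarios concrete, here is a minimal sketch in Python with invented figures (the application names, loads and headroom factor are all assumptions for illustration). It shows why dedicated infrastructure must be sized to the sum of the individual peaks, whereas shared infrastructure need only be sized to the peak of the summed load:

app_loads = {
    "web":      [40, 90, 30, 20],   # load per time period (arbitrary units)
    "database": [60, 30, 80, 50],
    "batch":    [10, 10, 10, 95],
}

HEADROOM = 1.25  # spare capacity factor for anticipated growth (assumed)

# Traditional scenario: each application is sized for its own peak.
dedicated = sum(max(loads) * HEADROOM for loads in app_loads.values())

# Shared scenario: size the pool for the peak of the summed load.
combined = [sum(period) for period in zip(*app_loads.values())]
shared = max(combined) * HEADROOM

print(f"dedicated capacity: {dedicated:.0f}")  # (90 + 80 + 95) * 1.25 ~ 331
print(f"shared capacity:    {shared:.0f}")     # peak of sums = 165 * 1.25 ~ 206

With these made-up numbers the shared pool needs roughly a third less capacity, and the gap widens as the application loads become less correlated.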
The overall effect of moving through these different stages is that the provision of infrastructure becomes more efficient and hence we have the "cloud is green" assumption.
I pointed out, back in 2008 at IT@Cork, that this assumption ignored co-evolution, componentisation and price elasticity effects.
By increasing efficiency and reducing the cost of provisioning infrastructure, a large number of activities which might once not have been economically feasible become so. Furthermore, the self-service nature of cloud not only increases agility by enabling faster provision of infrastructure but accelerates user innovation through the provision of standardised components (i.e. the infrastructure equivalent of a brick). This latter effect can encourage the co-evolution of new industries in the same manner that the commoditisation of electronic switching (from the innovation of the Fleming valve to complex products containing thousands of switches) led to digital calculators and computers, which in turn drove further commoditisation and demand for electronic switching.
The effect of these forces is that whilst infrastructure provision may become more efficient, the overall demand for infrastructure will outstrip these gains precisely because infrastructure has become a more efficient and standardised component.
We end up using vastly more of a more efficient resource. Lo and behold, cloud turns out not to be green.
The same effect was noted by William Stanley Jevons in 1865, when he "observed that England's consumption of coal soared after James Watt introduced his coal-fired steam engine, which greatly improved the efficiency of Thomas Newcomen's earlier design".
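A toy calculation makes the rebound effect concrete. All the numbers here are invented for illustration, and the elasticity formula is a deliberate simplification: the point is simply that when demand is sufficiently price elastic, an efficiency gain increases total consumption rather than reducing it:

unit_cost = 1.00        # cost per server-hour before cloud (arbitrary)
demand = 1_000          # server-hours consumed at that cost

efficiency_gain = 0.5   # cloud halves the unit cost (assumed)
elasticity = 2.0        # % demand change per % price change (assumed > 1)

new_cost = unit_cost * (1 - efficiency_gain)
# With elasticity e, demand scales roughly as (old price / new price) ** e.
new_demand = demand * (unit_cost / new_cost) ** elasticity

print(f"unit cost fell:   {unit_cost:.2f} -> {new_cost:.2f}")
print(f"consumption rose: {demand} -> {new_demand:.0f} server-hours")
# 1000 -> 4000: more efficient, yet far more total resource consumed.

With an elasticity above one, the growth in demand more than cancels the efficiency gain, which is the heart of the objection to "cloud is green".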
It has been a truly amazing year since we embarked on our "cloud" journey at Ubuntu, hence I thought I'd review some of the highlights.
We started the journey back in 2008 when Mark Shuttleworth announced our commitment to providing cloud technology to our users. At that time, the cloud world was already in a state of growing confusion, so we adopted an approach of :-
Hence, in April '09, as part of Ubuntu 9.04, we launched our hybrid cloud strategy.
Our approach was based around the adoption of Amazon EC2 / S3 & EBS as the de facto public standard rather than the creation of some new APIs (there are too many already).
We provided Ubuntu images for use on Amazon EC2 (public cloud) and the technology to build your own private cloud (known as Ubuntu Enterprise Cloud) that matched Amazon's APIs. We also added management tools which could cross both public and private domains because of our adoption of a standard API set.
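As a rough illustration of what that standard API set enables, the following sketch uses the boto library to target either Amazon EC2 or a private UEC (Eucalyptus) endpoint with the same client code. The hostname and credentials are placeholders, and the port and service path shown are assumptions based on typical UEC front-end configurations:

from boto.ec2.connection import EC2Connection
from boto.ec2.regioninfo import RegionInfo

def connect(access_key, secret_key, private_endpoint=None):
    """Connect to public EC2, or to a private UEC cloud if an
    endpoint is given -- the same API either way."""
    if private_endpoint is None:
        return EC2Connection(access_key, secret_key)  # public Amazon EC2
    return EC2Connection(
        access_key, secret_key,
        is_secure=False,
        region=RegionInfo(name="eucalyptus", endpoint=private_endpoint),
        port=8773,
        path="/services/Eucalyptus",  # typical UEC front-end path (assumed)
    )

# Identical calls work against either cloud.
conn = connect("ACCESS_KEY", "SECRET_KEY", private_endpoint="uec.example.com")
for image in conn.get_all_images():
    print(image.id, image.location)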
For 9.10 we significantly improved the robustness and ease of setting up a private cloud (Mark built his own several-node system from bare metal in under 25 minutes). We provided the base for an application store, improved the management capabilities of Landscape and extended the features of UEC. We also launched training, consultancy and support services for the cloud and a JumpStart program to help companies move into the cloud quickly.
During this time we've worked closely with many partners, I'll mention a few (more details can be found on the Ubuntu Cloud site) :-
In one year, we've made cloud simple for our users. We've brought our "Linux for Human Beings" philosophy into the cloud by getting rid of complexity, confusion and myth.
If you want to get into cloud, then we offer :-
The results of this year have been very encouraging. We recently estimated that there are now over 7,000 private clouds built with UEC; however, with 7% of users in our annual Ubuntu User Survey saying that they have built a UEC cloud, the true figure might be very much higher. It was great to hear that almost 70% of users felt Ubuntu was a viable platform for the cloud but there were several surprising statistics including :-
Whilst this survey was targeted at Ubuntu users, the data we receive from external sources suggests that Ubuntu is becoming the dominant operating system for consumers in the infrastructure cloud space. Even a simple ranking of search terms around cloud computing using Google Insights shows how significant a player Ubuntu is.
Whilst this is great news, what really pleases me is that we're making cloud simple and real for organisations and listening to what they need. We're getting away from the confusion over cloud, the tiresome consultant drivel over whether private cloud is cloud computing and the endless pontifications and forums debating vague futures. Instead, we're giving real people real technology which does exactly what they want.
Over the next year we're going to be tackling issues around creating competitive marketplaces (i.e. more choice for our users), simplifying self-service IT capabilities and orchestration, and providing a wide range of open source stacks and platforms to use in the cloud.
We're going to continue to drive down this path of commoditisation by providing common workloads for the cloud (the same as we've been doing for server) and helping businesses to standardise that which is just cost of doing business.
Regardless of any attempts by various late vendors to badge "cloud" as just a more advanced flavour of virtualisation or to describe it as "not real yet", we will be doing our best to bust the various "cloud" myths and push the industry towards competitive marketplaces of computer utilities through de facto standardisation.
Commoditise! Commoditise! Commoditise!
I'm also delighted about our partners' successes, with RightScale passing the million server mark, Amazon's continual growth and leadership with the introduction of spot markets, Dell's outstanding move to make cloud mainstream, Intel's push to make cloud easier and Eucalyptus' continued adoption and the appointment of Marten Mickos as CEO.
P.S. if you want to keep track of what's happening with Ubuntu in the cloud, a good place to start is our cloud blog or following our twitter feed, ubuntucloud.
Since we're fond of replacing meaningful concepts such as commoditisation, lifecycle, categorisation and computer utilities with bland terms like "cloud", I thought I'd follow the trend on to its next logical conclusion - Poodle Computing.
The shift of I.T. activities from being provided "as a Product" to being provided "as a Service" through large computer utilities has an obvious next step - the formation of competitive marketplaces. These marketplaces will require standardisation of what is, after all, a commodity (i.e. ubiquitous and well defined enough to be suitable for service provision through volume operations) and the ability of consumers to switch easily between and consume resources over multiple providers (which in turn requires multiple providers, access to code & data, interoperability between providers and an overall low exit cost).
I won't bore you with the mechanics of this and the eventual formation of brokerages & exchanges, I covered this subject extensively in 2007 when I made my "6 years from now you'll be seeing job adverts for computer resource brokers" prediction.
However, in this future world of brokerages and fungible compute resources (or fungitility as I jokingly called it), the consumer will become ever more distanced from the source of provision. This will be no different to the many other forms of utilities where vibrant exchange markets exist and what the consumer purchases has often gone through the hands of brokers. You don't actually know which power station generated the electricity you consume.
So this brings me to the title of the post. As the consumer and the source become more distanced, it reminds me of Peter Steiner's cartoon "On the Internet, nobody knows you're a dog".
On that basis, what sort of dog flavour of computing resource will you be consuming?
By introducing the concept of "dog computing" to cover this "cloud of clouds" world (hey, they're both meaningless) then the marketing possibilities will become endless and a lot more fun.
I can see the conversation now, walking into a lean and mean sales organisation and saying to the CEO that they are using "Poodle Computing". Shouldn't they be using our brand new "Pitbull Computing" or at least upgrading to "Springer Spaniel"?
We could always call things what they are (computer utilities & competitive markets of computer utilities) but I suspect we will end up with "cloud of clouds", "cloud exchanges" and an OTC market of ominous sounding "cloudy futures".
Before we can discuss this term, a bit of history and background is needed.
Activities
All business activities undergo a lifecycle, evolving through distinct stages: from innovation, through custom-built examples, to products and eventually to commodity and utility provision.
It should be noted that the characteristics of an activity change as it moves through its lifecycle. As a commodity it is of little strategic value (or differentiation) between competitors, whereas in its early stages it can often be a source of competitive advantage (a differential).
Information Technology
At any one moment in time, I.T. consists of a mass of different activities at different stages of their lifecycle. Some of those activities are provided through discrete software applications (an example might be ERP), other activities relate to the use of platforms (developing a new system using RoR or provisioning a large database etc) whilst others relate to the provision of infrastructure (compute resource, storage, networks).
You can categorise these activities into a computing stack of infrastructure, platform and software. Of course you can go higher up the stack to describe the processes themselves and beyond, however for this discussion we will just keep it simple.
What's happening in IT today?
Many activities in I.T. that were once innovations but more recently have been provided as products (with extensive feature differentiation) have now become so ubiquitous and so well defined that they have become little more than a commodity that is suitable for service provision. You can literally consider that chunks of the "computing stack" are moving from an "as a Product" to an "as a Service" world.
This change is the reason why we have the "Infrastructure as a Service", "Platform as a Service" and whatever else "as a Service" industries. Of course, there are many higher order layers to the stack (e.g. processes) but any confusion around the "as a Service" term generally only occurs because we never used to describe these activities with the "as a Product" term.
Had we categorised the previous software industry in terms of "Software as a Product", "Platform as a Product" etc, then the change would have been more obvious.
Why now?
This change requires more than just activities being suitable for utility service provision. It also requires the concept of service provision, the technology to achieve this and a change in business attitude i.e. a willingness of business to adopt these new models. Whilst the concept is old (more on this later), and the technology has been around for some time (yes, it has matured in the last decade but that's about all), both the suitability and change of business attitude are relatively new.
Thanks to the work of Paul Strassmann (in the 90s) and then Nick Carr (in the 00s), many business leaders have recognised that not all I.T. is a source of advantage. Instead, much of I.T. is a cost of doing business which is ubiquitous and fairly well defined throughout an industry.
It was quite refreshing to recently hear a large group of CIOs, who all spent vast amounts of money maintaining highly customised CRM systems, comment that actually they were all doing the same thing. These systems provided no strategic value, no differential; in reality what they wanted was standardised, low cost services charged on an actual consumption basis for what is essentially a cost of doing business. They also wanted this to be provided through a marketplace of service providers with easy switching between them.
This is quite a sea change from a decade ago.
The change from a "as a Product" to an "as a Service" world is happening today because we have the concept, technology, suitability and most importantly this changing business attitude.
An old Concept
The concept of utility service provision for I.T. is not new but dates back to the 1960s. Douglas Parkhill, in his 1966 book - "The Challenge of the Computer Utility" - described a future where many computing activities would be provided through computer utilities analogous to the electricity industry. These computer utilities would have certain characteristics, they would :-
Douglas noted that these computer utilities would take several forms, as per the existing consumption of other utilities. These forms included (but are not limited to) public, private & government utilities. He also noted that eventually we would see competitive markets of computer utilities where consumers could switch providers, consume resources across multiple providers (i.e. a federated use) and consume all manner of hybrid forms (e.g. private and public combinations).
One final note: the term utility means a metered service where the charge is based upon consumption. That charge might be financial or it could be in any other currency (e.g. access to your data).
The Cloud Term
Between 1966 and 2007, the general school of thought grew to be :-
Back between '05 and '07, there was a crystal clear idea of what was going to happen :-
A combination of factors (concept, suitability, technology and a change in business attitude) was going to drive those I.T. activities which were common, well defined and a cost of doing business from being provided "as products" to being provided "as services" through large computer utilities. The type of services offered would cover different elements of the computing stack, there would be many different forms of computer utility (public, private & government) and eventually we would see competitive marketplaces with easy switching and consumption across multiple providers.
In '05, James Duncan, myself and many others were starting to build Zimki - a computer utility for the provision of a JavaScript based "Platform as a Service" - for precisely these reasons. The concepts of federation, competitive markets, exchanges and brokerages for service provision of a commodity were well understood.
Unfortunately in late '07 / early '08, the term "Cloud" appeared and the entire industry seemed to go into a tailspin of confusion. During '08, the "Cloud" term became so prevalent that if you mentioned "computer utility" people would tell you that they weren't interested but could you please tell them about "this thing called cloud".
So, what is Cloud?
The best definition of cloud today is NIST's. Using five essential characteristics (including elasticity, measured service etc), four deployment models (private, community, public and hybrid) and three service models (software, platform, infrastructure), it neatly packages all the concepts of computer utility, the shift from products to services and the different categories of the computing stack into one overall term - "cloud".
In the process it wipes out all the historical context, trainwrecks the concept of a competitive marketplace with switching and federation, eliminates the principal idea of commoditisation and offers no explanation of why now. It's an awful, mechanistic definition which only helps you call something a cloud without any understanding of why.
However, that said, NIST has done a grand job of trying to clean up the mess of 2008.
In that dreadful year, all these well understood concepts of computer utilities, competitive marketplaces, the lifecycle of activities, categorisation of the computing stack and commoditisation were put in a blender, spun at 30,000 rpm and the resultant mishmash was given the name "cloud". It was poured into our collective consciousness along with the endless blatherings of "cloudy" thought leaders over what it meant (I'm as guilty of this as many others).
To be brutal, whilst the fundamentals are sound (commoditisation, computer utilities, the change from products to services etc), the term "Cloud" was nothing more than a Complete Load Of Utter Drivel. It's a sorry tale of confusion and a meaningless, generic term forced upon a real and meaningful change.
My passionate dislike for the term is well known. It irks me that for such an important shift in our industry, I have to use such a term and then spend most of my time explaining the fundamental concepts behind what is going on, why this change is happening and undoing the various "cloud" myths that exist.
Being pragmatic, I'm fully aware that this term has enough momentum that it's going to stay. Shame.
It's time for a spot of bleary eyed crystal ball gazing.
Last year, my predictions were fairly reasonable with 7 hits, covering everything from the commercial release of PLED TVs to our beloved government economists saying that 2010 would be worse than expected.
The jury is still out on house prices whilst we await the Land Registry report [Update: the December 2009 figures showed the first annual increase in house prices - 2.5% - since May 2008] but alas two of the predictions were wide of the mark. The FTSE 100 failed to drop below 3,500, only hitting 3,512 - no cigar there then - and Yahoo wasn't sold.
So, with the usual added vagueness, looseness of terms and general get out clauses, yawn with delight for :-
Mystic Me Predictions for 2010.
I hate predictions.
Don't get me wrong, I don't mind the "oh, it's already happening but I'll pretend it's new" type of predictions because you're guaranteed to look good.
I can happily quote that "the cloud market will grow", "standards, portability and interoperability will become increasingly important" and "the platform layer will be a major market" with full knowledge that these are safe bets.
Problem is, these aren't really predictions and I've got a big mouth. Hence, I tend to make predictions which explode rather nastily.
For example, back in 2002 I was predicting a financial meltdown in 2005 due to the massive growth in debt. Did it happen? Nope. I was out by a couple of years but that's the point of prediction: the when is vastly more important than the what.
That said, I can happily get the what wrong as well. Hence back in January 2009 when the FTSE was at 4,608, growing rapidly and many were talking about a rebound - I had to go and predict that it would drop to 3,500 within the year. Did it? Nope, it got close at 3,512 but never quite made it (back to the drawing board with my economic model again).
However, I'd be safe talking about cloud wouldn't I? Turns out that I get that wrong too. Hence back in 2007, I was predicting that "six years from now, you'll be seeing job adverts for computer resource brokers".
Earlier this year, I realised that prediction was going to be spectacularly wrong and happen much sooner. Eventually, I even admitted as much.
Adding salt to a fresh wound is Amazon's announcement of a fully fledged spot market.
I suspect it won't take long for someone to offer spread betting on the Amazon spot price, or for some form of OTC derivative to mitigate fluctuations in price and cover the risk of paying the full on-demand price (because of failure to buy). Of course, this would work a lot better if users could resell reserved instances on the spot market, providing the basis for a commodity exchange.
Opening up the spot market to the resell of instances between consumers will enable market pricing, making reserved instances more attractive. This will provide Amazon itself with future capacity planning information.
An alternative would be for users to resell reserved instances back to Amazon for sale on the spot market. However, this depends upon a quartet of objective, offers, availability and pricing.
For example, if revenue is the main objective, then there are scenarios (especially in the early days) where more revenue will be generated by selling a smaller number of instances at a higher spot price, leaving both demand and capacity unfulfilled. It should be remembered that this is not market pricing but Amazon pricing.
Under a revenue objective, the conditions where it will be viable for Amazon to increase capacity on the spot market by the re-purchase of reserved instances (presuming Amazon isn't playing a double booking game with reserved instances, which are in essence a forward contract) will be limited.
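A toy model illustrates the point about revenue versus market pricing. The bids below are invented; the sketch simply contrasts a price that clears all available capacity with the price that maximises the provider's revenue:

bids = [10.0, 9.0, 5.0, 4.0, 3.0, 2.5, 2.0, 1.5]  # one instance each, cents/hr
capacity = 8                                       # instances available

# Market clearing: a price low enough to sell all capacity.
clearing_price = sorted(bids)[0]
clearing_revenue = clearing_price * min(len(bids), capacity)

# Revenue objective: pick the price that maximises price * units sold.
best_price, best_revenue = max(
    ((p, p * sum(1 for b in bids if b >= p)) for p in bids),
    key=lambda pair: pair[1],
)

sold = sum(1 for b in bids if b >= best_price)
print(f"clear the market: {clearing_price} x {capacity} = {clearing_revenue}")
print(f"maximise revenue: {best_price} x {sold} = {best_revenue}")
# Revenue is higher selling fewer instances at a higher price.

In this example the revenue-maximising price sells only two of the eight available instances, leaving both demand and capacity unfulfilled - exactly the scenario described above.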
It all depends upon this quartet and the only thing that I'm sure of, is that my prediction is out by a few years.
Ouch ... damn, how I hate predictions.
A few months ago I provided an introductory talk on cloud computing at Cloud Camp Frankfurt. I was asked to be vendor neutral, so it is light on Ubuntu Enterprise Cloud.
They've put the video of my talk up, so I thought I'd provide some links. Please note, it is split into two parts.
There are more videos on the Cloud Camp Frankfurt site, they're worth watching as the event was a blast.
I'm just comparing two of my talks, both on cloud computing, and if anyone has time, I'd like some feedback.
The first is my recent talk from OSCON in 2009 covering "What is cloud computing and why IT matters", the second is my talk from OSCON in 2007 covering "Commoditisation of IT"
They both cover the same topic matter but with a different viewpoint (N.B. terms have changed since the 2007 talk but I'd like some feedback on style & content.)
Both are 15 minutes long but which was better and more importantly, why?
OSCON 2009: What is cloud computing and why IT matters
OSCON 2007: Commoditisation of IT