Wednesday, December 30, 2009

The king was in his counting house ...

... handing out our money.

This was the year that Mervyn King & Alistair Darling managed to spectacularly fritter away billions of taxpayers' money.

I was never opposed to lending money to banks, but quantitative easing (QE, a dishonest way of printing cash and giving it away in truckloads to the usual cronies) was disgraceful. If you're going to print money then at least make some direct investment; don't just hope that the export economy and money supply will magically solve our problems.

QE combined with low interest rates may be bully for banks, shareholders and homeowners by creating an influx of cheap foreign capital, but in a mainly import-led economy it will hit the cost of raw materials and inbound goods whilst squeezing the spending of savers. The net effect is a trade-off: inflation and a weakening internal economy in exchange for maintaining stock and house prices. Great for banks, the wealthy and those in unsustainable debt, but it sucks for ordinary people and pensioners who were not responsible for this mess. As I've said many times before, this will just make the recession deeper and longer. However, it's a bit like boiling frogs - throw them into hot water and they try to get out, but put them into tepid water and slowly raise the temperature and they won't notice. In this case, the frogs are called savers.

The amount of money being used to boil our own is huge. The tally to date is that the taxpayer has been exposed to £1 trillion of potential debt through cash injections, state guarantees, quantitative easing and other interventions. As a result, the taxpayer is expected to lose anywhere between ten and a hundred billion pounds. All of this is to prop up an industry which will spend the next few years trying not to pay tax because of "losses".

Why is it that when the taxpayer acts as a lender of last resort, we have to make a loss into the bargain? When the hard up resort to loan sharks, you never hear tales of some financial wheeze where money is given away.

Of course, it's different because we couldn't let the banks fail - despite no-one ever explaining why not. Still, that doesn't mean we had to be 'soft'; being the lender of last resort should be a time of piracy. For some reason the City, unlike the poor, got let off the hook.

We could (and should) have demanded equity equal to any loans plus the loan capital plus punitive interest rates, but we didn't. Where's our pound of flesh and 2000% APR?

We could (and should) have invested heavily in social housing, bought out the building industry when it was on its knees and grown our state-owned banks by providing liquidity into the economy.

We didn't. We're not going to. Our institutions are soft.

What did happen was that Mervyn & Darling were cheered by the financial giants like a pub landlord who has wiped the tab clean for his heaviest drinkers. Naturally, the taxpayer got lumbered with the bill and the underlying causes of the mess (huge debt, delusional valuations, excessive gambling, economic instability) remain unresolved.

Expect more bad news to come.

At least Darling has got a dubious excuse in trying to mess things up for the Conservatives. If only some of the largesse had been spent on things that really matter, like combating global warming (which, from the Copenhagen Accord, laughably only gets £60 billion a year by 2020).

A wasted opportunity but then that's how I feel about New Labour - a decade of disappointment. 

Whilst the noughties were personally good for me, in general they failed to live up to expectations. Unless of course you consider that WAGs, MySpace house parties, Wii Fit, 4x4s, an endless war on terror, draconian legislation reducing civil liberties, excessive celebrity and a highly materialistic and self-serving environment are the pinnacle of human nature.

To summarise the noughties, you'd have to say "nought for the environment, nought for social mobility and lots of noughts for bankers".

On a positive note, Doctor Who was utterly brilliant.

[Update - Nov '12 - Mervyn is still in power but will apparently be leaving in 2013. Depressing how things turned out. Some typos and tidy-ups were needed in this post ... cleaned up]

Monday, December 14, 2009

Where is Amazon heading?

There is something that I've always found confusing about Amazon's cloud strategy.

The development of EC2 & S3 makes good sense given the suitability of these activities for large scale volume operations (an activity that Amazon, as a book-seller, specialises in).

The growth of an ecosystem around these core services and the provision of these services through APIs are ideal. The solving of some of the transitional educational barriers to cloud (such as persistency through EBS) seems spot on and ... well, the list goes on.

However, I've never quite understood why Amazon chooses to just cannibalise its own ecosystem (the creation of a Hadoop service when Cloudera existed, the creation of auto-scaling when many alternatives existed) rather than buying out some of those groups. I understand why you'd want to acquire those capabilities, but I'd have mixed in acquisition because it sends a strong signal for others to join the party. There's a negative feedback loop here which could easily be avoided.
[By 2017, despite grumbling of "Amazon's eating our business model" ... they continue to be able to play the game.  The negative feedback loop doesn't seem to be as big as I had anticipated]

Given that, I can't be sure of where Amazon is going to head - more copying, a shift to acquisition, or a bit of both?

Beyond the eventual need to move into the platform space, the moves towards a spot market could suggest that Amazon might attempt to set itself up as the computing exchange for basic infrastructure resources. To do this, it would also need to define itself as the industry standard (not just the de facto one), probably through an IETF certification route, and hence encourage other providers to adopt this standard. When this might happen (if at all) is tricky because it depends so much on how competitors play the game.
[I never did understand their plan, I still don't.  Most seemed hell bent on oblivion which itself was odd.]

Fortunately for Amazon, it already has several alternative open source implementations to support any standardisation claim and these open source technologies (such as Ubuntu Enterprise Cloud) provide a quick means for providers to get started.
[At the time of writing Ubuntu was starting to take over the cloud space]

There is huge future value in the exchange, brokerage and trading businesses for computing resources. It's what I was intending to go after with Zimki all those years back, but that was in the platform space.

I'm not going to make any predictions for now, I'll leave that until early January. However, if I was a betting man then I wouldn't be surprised if, over a variable amount of time, Amazon :-
[e.g. next 3 to 15 years, not immediately. I must remember to put dates with predictions, as fairly useless otherwise]
  • Goes for IETF standardisation of EC2 & S3
    [Hasn't happened]
  • Allows reserved instances to be traded on its spot market hence creating a basic form of commodity exchange
    [this happened]
  • Enters the platform layer of the computing stack with provision of a development platform
    [this happened with Lambda]
  • Allows other providers (who adopt the EC2 / S3 standard) to sell instances on the exchange
    [Hasn't happened]
  • Exits the infrastructure provision business by selling on the EC2 / S3 services (at a phenomenal premium) whilst keeping the exchange and any established brokerage business i.e. keeps the meta data (which is useful for mining an ecosystem) but allows others to provide the services, hence overcoming monopoly issues by creating a market
    [Hasn't happened but I'm still expecting the monopoly issue to rear its head]
--- 8th April 2017
[Added some additional commentary]

Mystic Meg Epic Fail

I hate predictions.

Don't get me wrong, I don't mind the "oh, it's already happening but I'll pretend it's new" type of predictions because you're guaranteed to look good.

I can happily quote that "the cloud market will grow", "standards, portability and interoperability will become increasingly important" and "the platform layer will be a major market" with full knowledge that these are safe bets.

Problem is, these aren't really predictions and I've got a big mouth. Hence I tend to make predictions which explode rather nastily.

For example, back in 2002 I was predicting a financial meltdown in 2005 due to the massive growth in debt. Did it happen? Nope. I was out by a couple of years, but that's the point of prediction: the when is vastly more important than the what.

That said, I can happily get the what wrong as well. Hence back in January 2009 when the FTSE was at 4,608, growing rapidly and many were talking about a rebound - I had to go and predict that it would drop to 3,500 within the year. Did it? Nope, it got close at 3,512 but never quite made it (back to the drawing board with my economic model again).

However, I'd be safe talking about cloud wouldn't I? Turns out that I get that wrong too. Hence back in 2007, I was predicting that "six years from now, you'll be seeing job adverts for computer resource brokers".

Earlier this year, I realised that prediction was going to be spectacularly wrong and happen much sooner. Eventually, I even admitted as much.

Adding salt to a fresh wound is Amazon's announcement of a fully fledged spot market.

I suspect it won't take long for someone to offer spread betting on the Amazon spot price, or for some form of OTC derivative to mitigate fluctuations in price and cover the risk of paying the full on-demand price (because of failure to buy). Of course, this would work a lot better if users could resell reserved instances on the spot market, providing the basis for a commodity exchange.
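To make the hedging idea concrete, here's a toy sketch (all prices and contract terms are hypothetical) of how a simple OTC-style price cap would change the cost of consuming spot instances:

```python
# Hypothetical numbers: hedging spot-price exposure with a simple
# price cap (an OTC-style contract paying out the excess whenever
# the spot price rises above an agreed strike).
strike = 0.15    # $/instance-hour cap agreed with the counterparty
premium = 0.02   # $/instance-hour paid for the protection

spot_prices = [0.08, 0.12, 0.22, 0.35]   # observed hourly spot prices

# Unhedged: pay the spot price every hour.
unhedged_cost = sum(spot_prices)

# Hedged: the payout (spot - strike, when positive) caps the
# effective price at the strike; the premium is paid each hour.
hedged_cost = sum(min(s, strike) + premium for s in spot_prices)

print(round(unhedged_cost, 2), round(hedged_cost, 2))  # 0.77 0.58
```

In this particular price path the cap pays for itself; in a calm market the consumer would simply lose the premium, which is the usual trade made by anyone hedging price risk.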

Opening up the spot market to the reselling of instances between consumers will enable market pricing, making reserved instances more attractive. It will also provide Amazon itself with future capacity planning information.

An alternative would be for users to resell reserved instances back to Amazon for sale on the spot market. However, this depends upon a quartet of objectives, offers, availability and pricing.

For example, if revenue is the main objective, then there are scenarios (especially in the early days) where an increased revenue will be generated by selling a smaller number of instances at a higher spot price, leaving unfulfilled demand and capacity. It should be remembered that this is not market pricing but Amazon pricing.
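The scenario can be sketched with a toy calculation; the demand curve and capacity figures below are entirely hypothetical:

```python
# Illustrative numbers only: a hypothetical demand curve showing why,
# under a revenue objective, a seller might offer fewer instances at a
# higher spot price rather than clear all spare capacity.
demand = {0.05: 1000, 0.10: 700, 0.15: 400, 0.20: 150}  # price -> instances wanted
capacity = 800                                          # spare instances available

best_price, best_revenue = None, 0.0
for price, wanted in demand.items():
    sold = min(wanted, capacity)       # can't sell more than capacity
    revenue = price * sold
    if revenue > best_revenue:
        best_price, best_revenue = price, revenue

# Revenue peaks at $0.10 with only 700 of 800 instances sold, leaving
# both unfulfilled demand (from lower-price bidders) and idle capacity.
print(best_price, best_revenue)
```

The point is simply that a price set to maximise the seller's revenue is not a market-clearing price, which is why it is Amazon pricing rather than market pricing.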

Under a revenue objective, the conditions where it will be viable for Amazon to increase capacity on the spot market by the re-purchase of reserved instances (presuming Amazon isn't playing a double booking game with reserved instances, which are in essence a forward contract) will be limited.

It all depends upon this quartet, and the only thing that I'm sure of is that my prediction is out by a few years.

Ouch ... damn, how I hate predictions.

Friday, December 11, 2009

Cloud Camp Frankfurt

A few months ago I provided an introductory talk on cloud computing at Cloud Camp Frankfurt. I was asked to be vendor neutral, so it is light on Ubuntu Enterprise Cloud.

They've put the video of my talk up, so I thought I'd provide some links. Please note, it is split into two parts.

Cloud Computing - Part I

Cloud Computing - Part II

There are more videos on the Cloud Camp Frankfurt site, they're worth watching as the event was a blast.

Monday, December 07, 2009

Old yet new ...

I'm just comparing two of my talks, both on cloud computing, and if anyone has time, I'd like some feedback.

The first is my recent talk from OSCON in 2009 covering "What is cloud computing and why IT matters", the second is my talk from OSCON in 2007 covering "Commoditisation of IT".

They both cover the same topic matter but with a different viewpoint (N.B. terms have changed since the 2007 talk but I'd like some feedback on style & content.)

Both are 15 minutes long but which was better and more importantly, why?

OSCON 2009: What is cloud computing and why IT matters

OSCON 2007: Commoditisation of IT

Private or Public clouds?

There is ample evidence to suggest that many common and well-defined activities in I.T. are shifting from a product to a service-based economy. Naturally this change creates a broad range of risks including :-

  • the risk of doing nothing as competitors gain advantage from economies of scale through volume operations, utility charging, the ability to focus on core activities and a faster speed to market through componentisation.
  • transitional risks including confusion, security of supply, trust in new providers, transparency and governance.
  • outsourcing risks including suitability, vendor lock-in, pricing competition, second sourcing options and loss of strategic control.

For any organisation, it is a case of balancing the risk of not using the cloud against the risks of using it. There appear to be two general schools of thought on this subject.

The first school states that whilst the outsourcing risks will be solved by the formation of competitive utility computing markets (with easy switching between providers) these markets do not exist today. Hence, whilst the movement to public clouds is considered inevitable (bar for the largest companies and governments), we're still in a time of transition. Private clouds can help solve many of these transitional risks whilst preparing for a future movement towards public cloud services.

Whilst the first school accepts that private clouds have a role, it emphasises the importance of standards and of reducing barriers to education, and promotes a hybrid model of both public and private clouds. It encourages a compromise between economies of scale and transitional risks during this time of change.

I've been an advocate of this first school of thought for many years (since before 2006).

The second school of thought states that private clouds aren't cloud computing and advocates adoption of public clouds. It dismisses the transitional phase and talks of continuous innovation in the provision of what is fundamentally a commodity (commonplace, well defined and hence suitable for service provision). It is almost purist by nature, sometimes describing public clouds as true cloud computing and finding little distinction between private clouds and virtualisation platforms.

I don't subscribe to this second school of thought. Basic economic sense and risk management would suggest that in this time of transition, organisations will attempt to gain some of the benefits whilst mitigating the dangers of this phase. Hence, for the next few years I'd expect the cloud industry to be dominated by hybrid models.

After which I'd expect it to become more slanted towards public provision (as competitive markets form) but nevertheless hybrid models will continue to have a role.

Sunday, December 06, 2009


Over the years, I've often discussed the ideas of physical and human (intellectual and social) capital within organisations. I thought I'd cover some old ground again.

Organisations simply exist at the intersection of a network of people and a mass of activities undertaken. Remove this and you're left with what an organisation really is: nothing, bar any remaining residual capital.

The act of people interacting with activities creates several forms of capital, three of the most interesting are physical, intellectual and social. All of these forms of capital are susceptible to the ravages of commoditisation.

We've already experienced the effects of commoditisation on physical assets. For example, the news industry was once able to use physical assets (large and expensive printing machinery) to control the activity of publishing - not only what was published but by whom. In days past, if you wanted to be a journalist your options were limited.

Today, the digitisation of content and the spread of the means of mass communication have changed the rules and commoditised this activity. The barriers to entry have been severely diminished and anyone can publish. This means news organisations have been forced to seek other means of differentiation, value and ultimately control.

Equally, many forms of intellectual capital have slowly been commoditised. Whereas in the past you needed direct access to a lawyer for the arcane knowledge of how to write a will, today you can download the forms online.

All manner of knowledge has been neatly codified, commodified (given a value for access) and ultimately commoditised (become standardised, commonly available, relentlessly driven to a lower cost) through market forces.

Access to knowledge can (and has been) an important mechanism of control for some organisations. The commoditisation of such knowledge diminishes this means of control. As a journalist may find they are less dependent on a news company in order to publish, a budding lawyer may find it easier to access the knowledge they need without a law firm.

Obviously both types of organisations still provide the benefits created by the internal ecosystem of a network of people and a mass of different activities (i.e. rapid access to certain skills, specialists and supporting structures). However, both types of organisation will also have social capital - interactions, reputation and relationships with others.

Hence a journalist or lawyer may choose to work with one particular organisation because it can offer access to the right people, provides a prestigious network and has a high amount of social capital.

I mention this because many social network tools are currently busy codifying relationships between people. Furthermore, some are also trying to identify and provide measurable value in those networks (the act of commodification). Could this onslaught also lead to the commoditisation of business networks?

Will we see a future where we can buy and sell access to a social network? How will this impact organisations who depend upon their networks and use access as a means of control? Will companies also attempt to control and own this network more tightly?

These are just some of the questions I suspect we will be facing over the next few years.

Friday, December 04, 2009


I rarely watch T.V. series; however, FlashForward was recommended to me. I have to say it's outstanding.

The premise of the story is fairly straightforward. The entire world experiences an event which causes everyone to see the "future", six months ahead.

Of course, it's not quite that simple. The "future" seen is that of an alternative world (from the many-worlds principle) and therefore doesn't necessarily represent the character's current future but instead another timeline where the character may have made other choices in their past.

This creates a continuous drama of whether the future visions are right or wrong and an obvious illusion of choice where people try to change (or make happen) a future which is not necessarily theirs.

This paradox is illustrated by one particular character, Dr. Olivia Benford. In the "current" world Olivia is married to a recovering alcoholic, Mark Benford. However, in her "future" vision she is with another of the characters, Dr. Lloyd Simcoe. The paradox is that the "future" vision is that of another timeline, hence there are three obvious possibilities for this other timeline. Either :-

  • She is married to Mark (as per the "current" timeline) but has an affair with Lloyd
  • The interpretation of the vision is incorrect (i.e. Lloyd is just staying in the house for some other reason)
  • Her timeline is different in the other world, i.e. she attended Harvard, met Lloyd and was never married to Mark (or divorced him due to his drinking etc.)

This is the deliciousness of the series which has many examples of these paradoxes woven into the plot. You're kept guessing whether the visions being seen are a possible future of the current timeline or not.

All the time, the characters in the "current" world are making choices based upon the belief that their "future" visions are correct and in some cases they are even trying to make them happen (even when those visions aren't possible because of other past choices).

Of course, there is also the subplot of what caused the event, whether it will happen again and several other plot lines. I hope they don't mess this up like Lost, which became a repetitive bore.

For the time being though, it is utterly brilliant.

Monday, November 30, 2009

The U.S. Patent system makes me laugh ...

It seems that Microsoft is seeking a patent for data migration in the cloud, something which we provided in Zimki back in 2007 and which had been publicly talked about by various people for many years before that (though in those days it was called utility computing).

Well, at least the patent adds more weight to the idea that when Azure launches, it will be with a variety of ISPs, a buy-your-own Azure container and, I'd hazard a guess, the illusion of an open marketplace based upon open standards.

The battle for Helms Deep approaches.

As for the U.S. patent system, well, I would normally argue that patents should be "limited in duration to a timeframe in which society could be reasonably expected to independently create such an innovation" - except of course I'm from the U.K., where we already have a more robust view on patenting software.

So please, by all means keep on hampering your technology sector and turn it into a legal quagmire. I'm obviously hoping that the U.K. won't follow suit.


For many many years, I've talked about the evolution of activities (from innovation to commodity), how this enables further innovation (componentisation) and why organisations compete in ecosystems (Red Queen Hypothesis). I've also hypothesised an S-Curve of ubiquity vs certainty to describe this evolutionary change, demonstrated techniques to manage such activity life-cycle, shown how Gartner's Hype Cycle can be derived and catalogued the underlying interconnection between Enterprise 2.0, cloud, SOA and web 2.0.

Rather than bore you with the details again, I thought I'd concentrate on a couple of diagrams to explain many of the structural aspects of this.

Figure 1 provides an overview of how activities change from innovation to commodity and, more importantly, how techniques, focus and strategy change.

Figure 1 - Lifecycle (click on image for higher resolution)

All business activities exist somewhere on this S-Curve and all of them are moving from innovation to commodity. Hence any organisation can be characterised as a network of people interacting with a network of constantly evolving activities, with the organisation itself simply being the intersection between activities and people. You can actually visualise this, though I tend to focus on the network of activities rather than people. By mapping out lines of business against state of evolution, I've found useful tricks for managing activities along with patterns for competing with others.
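As a rough illustration of the mapping idea, here's a toy sketch; the activities and stage labels are hypothetical examples, and a real map would obviously be far richer:

```python
# A toy sketch of mapping a line of business against state of
# evolution. Activities and stage labels are hypothetical examples.
activities = {
    "novel recommendation engine": "innovation",   # new, uncertain, changing fast
    "CRM system":                  "product",      # maturing, competing products
    "computing infrastructure":    "commodity",    # well defined, ubiquitous
    "power":                       "commodity",
}

# Treatment follows the stage of evolution: build the novel in-house,
# consume the commodity as a utility service.
treatment = {
    "innovation": "build in-house; expect constant change",
    "product":    "use off-the-shelf products",
    "commodity":  "consume as a utility service",
}

for activity, stage in activities.items():
    print(f"{activity} ({stage}): {treatment[stage]}")
```

The useful trick is not the listing itself but that the appropriate technique for managing each activity falls out of where it sits on the curve.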

Learning how to manage activities at their different life-cycle stages - and hence knowing how to drive innovation, leverage emerging activities and commoditise that which is the cost of doing business - is essential for any company. This is an act of balancing the old innovation paradox between survival today (efficiency through co-ordination, coherence & hence order) and survival tomorrow (innovation of new activities, hence deviation, serendipity & disorder).

Since I've often talked about the organisational details of managing life-cycle (the use of pioneers, colonisers and town planners), I thought I'd instead just concentrate on the basic structure. Figure 2 provides the structural elements that are important:-
  • Core services : the common services that the organisation provides; in the I.T. world, this is the area where SOA, cloud and other "service" concepts are most relevant.
  • Ecosystem : this is the ecosystem that an organisation creates around its core services including workforce, partners, community, channels, customers etc.
  • Innovation at "the edge": in general, the larger the ecosystem, the greater the potential for innovation to occur. The concept of innovation at the edge simply refers to expanding the ecosystem as widely as possible.

Figure 2 - Ecosystem (click on image for higher resolution)

In a typical example (e.g. Salesforce & Amazon), the company providing the core services enables and encourages a wider ecosystem to develop around it.

By monitoring new activities (i.e. innovations), and the early adoption of new but similar activities (copying is one of the many signals of success), the company can look to leverage any innovation for the benefit of the wider ecosystem. Methods which can help enable this vary from the provision of an application store to increasing awareness of innovations through the use of Enterprise 2.0 techniques.

By monitoring signals of wider adoption (i.e. ubiquity) and the early formation of standards (increased definition and certainty of the activity), the company can elect to drive an activity towards commoditisation and provision as a core service. This completes the circle and encourages further innovation in the ecosystem through componentisation.

Quite simply, the structure is designed to feed off and accelerate the normal process of evolution of activities (technical or otherwise). This results in a situation where apparent innovation (others innovating for you), customer focus (leveraging consumption data to deliver what people want) and efficiency (economies of scale) can all increase with the size of the ecosystem.

I mention this because the centralist approach is to provide all activities at the centre as opposed to "crowdsourcing" both creation and identification of innovation to a wider ecosystem built around a core of common services. Whilst centralisation is not an invalid approach, it is generally highly ineffective when competing against a company which creates a broad ecosystem. This is shown in figure 3.

Figure 3 - Competitive Landscape (click on image for higher resolution)

So with this in mind, I turn to the U.K. Government cloud efforts. It's worth remembering that U.K. Gov I.T. currently employs over 35,000 people and outsources nearly 65% of its budget (from a total figure of £16bn+). This pretty much makes the internal part of U.K. Gov I.T. equivalent to Google PLUS Amazon, whilst the outsourcing part is far bigger.

The approach of providing standardised & centralised core services for commodity activities (such as computing infrastructure) is fine. However, as there is no actual competitive utility computing market, an approach of outsourcing to a group of vendors would be unwise (it's worth noting that both Google & Amazon opted to build in-house). Fortunately, one set of standards - EC2 & S3 - is emerging.

Given this, a sensible strategy would be to adopt these emerging standards, consume conforming external services and where needed build using an open sourced technology (there are several to choose from, Ubuntu Enterprise Cloud would be one). Design for a mix of private / public (hybrid) with a near future view of using multiple public providers.  Develop any needed new elements in-house whilst outsourcing standard components i.e. data centre floor space, power provision etc.

The second area of concern is the Government Application Store. Whilst providing a centralised mechanism of application fulfilment is a sensible approach, the concern should be how those applications are developed and provided to the store. Whilst some applications have become commoditised enough to be provided as centrally managed applications or services, it is absolutely essential that the application store should encourage a wider ecosystem (especially at local government level and within communities) for the development of new applications.

A fairly sound approach would be to combine the above with mechanisms for monitoring the ecosystem and encouraging the wider adoption of successful activities. The alternative centralist nightmare would be one where the core (i.e. some form of centralised council) attempts to predict the future whilst defining and developing the activities to be adopted (a dysfunctional programme management approach). This will lead to the normal flurry of massive development costs, vendor dependency and lock-in.

So why should I care? Well, I live in the U.K. and our Government exists in a wider competitive ecosystem of national governments. The role of U.K. Gov I.T. is to get this stuff right, the same as with any other Government.

Thursday, October 22, 2009

Ubuntu Live Cloud Roadshow.

Over the next few weeks, I'm speaking at a couple of Canonical events on the subject of cloud computing. The first event is in New York, aimed at large businesses and I'll be looking at the question of public or private clouds.

The agenda for the evening covers Canonical's cloud strategy and an introduction to cloud computing in general. Both John Willis and I will be speaking there.

There are a few tickets still available if you're interested in coming along.

Thursday, October 15, 2009

Trading Amazon Instances.

The problem with predictions is not the knowing of what but the knowing of when.

I spectacularly failed to predict the current financial crisis having expected it to occur several years earlier. I'm also expecting that my 2007 prediction of when cloud computing brokers will appear (along with exchanges in computing resources, futures and swaps in resources) will also fail.

This became clear as soon as Amazon introduced reserved instances. The only thing preventing the formation of a trading market in EC2 instances is that reserved instances are not transferable. As soon as that changes, we will quickly move from reserved instances and the spot price to a commodity exchange and the usual paraphernalia of derivatives and spread betting.
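To illustrate what such an exchange involves once instances are transferable, here's a minimal sketch of order matching; the prices and the trade-at-the-ask rule are hypothetical simplifications of how a real exchange would work:

```python
# A minimal sketch of the commodity exchange that transferable
# reserved instances would enable. All prices are hypothetical,
# in $/instance-hour.

def match(bids, asks):
    """Cross the highest bids against the lowest asks; trade at the ask."""
    bids = sorted(bids, reverse=True)   # best (highest) bid first
    asks = sorted(asks)                 # best (lowest) ask first
    trades = []
    while bids and asks and bids[0] >= asks[0]:
        trades.append(asks[0])          # one instance changes hands
        bids.pop(0)
        asks.pop(0)
    return trades

asks = [0.06, 0.08, 0.12]   # holders reselling unused reserved capacity
bids = [0.10, 0.09, 0.05]   # buyers bidding for capacity

print(match(bids, asks))    # two trades cross; the 0.12 ask goes unfilled
```

Once orders cross like this, the traded prices themselves become the market data on which futures, swaps and the rest of the paraphernalia can be built.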

Whilst I predicted that "six years from now, you'll be seeing job adverts for computer resource brokers", it now seems likely to happen earlier.

Wednesday, October 14, 2009

Why cloud vendors lie ...

I was a last-minute speaker on IPExpo's schedule last week. Despite a severe bout of speaker's nerves, the talk went well. Unfortunately, it wasn't recorded and hence I've made my own video.

Why Cloud vendors lie and how they steal your money
(approx. 43 mins)

Sunday, September 27, 2009

If Sauron was the Microsoft Cloud Strategist

Back in March 2008, I wrote a post which hypothesised that a company, such as Microsoft, could conceivably create a cloud environment that meshes together many ISPs / ISVs and end consumers into a "proprietary" yet "open" cloud marketplace and consequently supplant the neutrality of the web.

This is only a hypothesis, and the strategy would have to be straight out of the "Art of War", attempting to misdirect all parties whilst the groundwork is laid. Now, I have no idea what Microsoft is planning, but let's pretend that Sauron was running their cloud strategy.

In such circumstances, you could reasonably expect an attack on internet freedom based upon the following actions :-

Development of the Ecosystem (from my Nov'08 post)

    When the Azure services platform is launched, we will see the creation of an ecosystem based upon the following concepts:-

    1. build and release applications to Microsoft's own cloud environment providing Azure and the Azure Services.
    2. build and release applications to a number of different ISPs providing Azure and specific Azure Services (i.e. SQL, .Net and Sharepoint services).
    3. purchase server versions of Azure and specific Azure services for your own infrastructure.
    4. buy a ready made scalable "Azure" container cloud for all those large data centre needs of yours.

    Since the common component in all of this will be the Azure platform itself, migration between all these options will be as easy as pie through the Windows Azure Fabric Controller.

Growth of the Ecosystem

The Azure platform will benefit from componentisation effects rapidly increasing the development speed of applications. Given the large pre-installed base of .NET developers, these effects would encourage rapid adoption especially as a marketplace of providers exists with portability between them (overcoming one of the adoption concerns). However, MSFT will purposefully retain specific elements of the Azure Services (those with strong network effects through data & meta data - user profiles etc). This will create accusations of lock-in.

To combat this, the entire set of services will be based upon open standards (for the data but not the meta data) and support will be given to an open source equivalent of the Azure platform (i.e. Mono). The argument will be presented strongly that the Azure market is open, based upon open standards with a fledgling open source alternative. Hence, into this mix will be launched :-

  1. A wide variety of ISV applications running on the Azure Platform.
  2. A strong message of openness, portability and support for open standards (particularly for the reasons of interoperability)
  3. A fledgling open source equivalent.

In order to achieve the above and to allow for the development of the future strategy, the office suite of tools must be based upon open standards. Both Mr. Edwards & I share concerns in this space.


With the growth of the Azure marketplace and applications built in this platform, a range of communication protocols will be introduced to enhance productivity in both the office platform (which will increasingly be tied into the network effect aspects of Azure) and Silverlight (which will be introduced to every device to create a rich interface). Whilst the protocols will be open, many of the benefits will only come into effect through aggregated & meta data (i.e. within the confines of the Azure market). The purpose of this approach, is to reduce the importance of the browser as a neutral interface to the web and to start the process of undermining the W3C technologies.

The net effect

The overall effect of this approach would be to create the illusion of an open marketplace on Azure which is rapidly adopted because of the componentisation effects created and pre-existing skills base. Into this marketplace will be provided beneficial protocols for communication which are again "open". Despite its popularity, no accusation of monopoly can be levelled because users are free to choose providers and a fledgling open source equivalent exists.

However, the reality would be to create a market that has surrendered technological direction to one vendor with new protocols designed to undermine the W3C. All participants (whether ISPs, ISVs, consumer or manufacturers) will find themselves dependent upon the one vendor because of the strong network effects created through data (including aggregated and meta data effects).

Following such a strategy, it could be Game, Set and Match to MSFT for the next twenty years and the open source movement will find itself crushed by this juggernaut. Furthermore, companies such as Google, which depend upon the neutrality of the interface to the web, will find themselves seriously disadvantaged.

Now, I'm not saying this is going to happen but I have my own personal views. I warned about the dangers of a proprietary play in this space back in 2007 and the importance of open source in protecting our freedoms. I'm mentioning this again because I keep on hearing people say that "open source has won".

The problem is that we might be fighting the wrong fight, and the battle for middle earth has already begun.

Tuesday, September 22, 2009

Is the enterprise ready for the cloud?

There are many known risks associated with the cloud, some are transitional in nature (i.e. related to the transformation of an industry from a product to a service based economy) whilst others are general to the outsourcing of any activity. These always have to be balanced against the risk of doing nothing and the loss of competitive effectiveness and/or efficiency.

You'll find discussion of these risks in various posts and presentations I've made over the years. The forces behind this change are not specific to I.T. but generic, and have affected (and will affect) many industries.

In this post, I want to turn the clock back and discuss again the organisational pressures that cloud creates because for some reason it doesn't get mentioned enough.

The tactics and methodologies to be used with any activity depend not upon the type of activity but upon its lifecycle. For example, there is no project management method applicable to all types of activities, despite the desire of many organisations to simplify this complex problem to a unified approach. Using agile everywhere is about as daft as using PRINCE2 everywhere; single policies simply aren't effective.

The shift towards cloud and the further commoditisation of I.T. will actually exacerbate this problem of single policies by highlighting the extremes and differences between managing an innovation and a commodity. Many organisations are simply not geared up to the realisation that they are complex adaptive systems rather than linear, machine-like ones.

People often ask the question whether "the cloud is ready for the enterprise" but the bigger question which is missed is whether "the enterprise is ready for the cloud". In many cases, the answer is no.

An example of the problems that the cloud can create is the shift to a variable model of charging and the move away from capital expenditure. Whilst it makes intuitive and obvious sense to pay for what you use, I'll use the example of worth based development to show where it goes wrong.

Back in 2003, I was extensively using agile development techniques for new projects to overcome the normal conflict between the client and the developers over what was in or not in the specification. It should never be forgotten that the process of developing a new project is also a process of discovery for the client. Requirements change continuously as more is discovered and agile is specifically designed to cope with a dynamic environment. However, it does have a weakness.

The client gets more involved in the project, the cost of change reduces (due to test driven development) and the client gets more of what they wanted. The weakness is that in most cases the client has little or no understanding of what the value of the system is going to be, other than the amount they have to spend on it. In response to this, I introduced a concept known as worth based development.

The first step was to sit down with the client and work out a measure of worth for the system (i.e. machines sold, leads created, new customers found, improved forecasting etc). Once we had an agreed measure of worth, we used models to calculate the likely range of potential values that such a system would create.

If we calculated that the potential was high enough and the risks acceptable then we would offer to build and operate the system in return for an element of the measurable value created. This was a no-win, no-fee mechanism of development and goes far beyond "pay for what you use" and into "pay a fraction of what you get out".

The effect of this approach was twofold :-

  1. If we didn't believe the project was likely to succeed then we wouldn't work on this basis and would instead charge on a more traditional model (hours billed etc). This gave the client valuable information about their project.

  2. If we both agreed, then immediately both parties were focused on maximising the value of the system rather than developing an arbitrary set of capabilities. If an opportunity arose that could maximise value, it was easy for both parties to accept it.

Charging based upon a measure of the value created sounds obvious but in practice it was a spectacular failure.

In one case, we built a system which created leads for highly expensive equipment. The measure of value here was leads (i.e. we didn't control the sales process, which was entirely separate). We worked with the client, built the system and switched it on - it was a massive success.

In a few months, we had created tens of thousands of leads. Our charge was around 10 euros per lead and the client had racked up a sizeable bill. At this point I received a phone call.

The client asked me to switch the service off. I asked why and whether they were happy with all the leads. The answer came back that they were delighted with the leads, lots of which were turning into sales but could we still switch the system off because they had run out of budget.

Now this perplexed me because a single unit of the equipment sold for thousands of euros and the client was selling to a good percentage of the leads (often with multiple units). Working with the client, we quickly showed them how the additional revenue vastly outstripped the cost, the system was simply generating profit and we had the figures to prove it.
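The arithmetic we walked the client through can be sketched with hypothetical figures (the real numbers were the client's own and are not reproduced here):

```python
# Worth-based charging: made-up figures for a lead-generation system.
leads = 20_000             # leads generated over a few months
fee_per_lead = 10.0        # euros charged per lead
conversion_rate = 0.05     # fraction of leads that became sales
revenue_per_sale = 5_000   # euros per unit sold

cost = leads * fee_per_lead                           # what we billed
revenue = leads * conversion_rate * revenue_per_sale  # what the client made
print(f"cost: {cost:,.0f}, revenue: {revenue:,.0f}, profit: {revenue - cost:,.0f}")
```

Even with conservative assumptions, revenue dwarfs the per-lead fee, which is exactly why switching the system off made no commercial sense.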

The problem however wasn't profit, they could see it was creating it. The problem was that the cost had exceeded the allocated budget and the client would have to go through either another planning cycle or an approval board to get more funds. Both options would take months.

I was stunned: the client was asking me to switch off a profit-making system because it had been too successful and the cost had surpassed some arbitrary budget figure. I asked if they really meant it; they answered "yes" and stated that they had no choice.

Even in a case where direct additional revenue and profit could be proved, the budgetary mechanisms of the organisation were not capable of dealing with variable costs because they'd never been designed that way. The situation is worse in the case of the utility charging model of cloud providers because a direct measure of worth cannot always be shown. This problem of variable costs vs fixed budgeting & long planning cycles is going to re-occur for some organisations.

Not all enterprises are ready for the cloud.

Tuesday, September 08, 2009

Platforms and all that jazz ...

Back in early 2006/7 the transition of I.T. activities from a product to service world was often described using three distinct layers - software, framework and hardware. These layers had specific acronyms :-

  • SaaS (Software as a Service)
  • FaaS (Framework as a Service)
  • HaaS (Hardware as a Service).

The terms separated the boundary between what a user was concerned with and what the provider was concerned with.

In the case of HaaS, the provider was concerned with hardware provision (as virtual machines) and the user was concerned with what was built on that hardware (the operating system, any installed framework components such as databases, messaging systems and all code & data).

In the case of FaaS, the provider was concerned with the framework provided (and all the underlying subsystems such as the operating system, the hardware etc) whilst the user was concerned with what they developed in that framework.

I summarised these concepts and the overall stack in a much later blog post during 2007, "The SEP Boundary and more rough thoughts"

Of course, the plethora of 'aaS' terms was foolish, especially since the industry had quickly embarked on what can only be described as the 'aaS' wars, a constant renaming of everything. Robert Lefkowitz (in Jul'07) warned that this was going to lead to a whole lot of aaS. He was right.

Today, those three central layers of the stack have settled on software (though this often yo-yos to application and back again), platform and infrastructure. However, this seems to have created its own problem.

Platform is being used at all layers of the stack, so we hear of cloud platforms for building IaaS and many other mangled uses of the concepts. The term infrastructure itself has many different meanings. Several SaaS offerings (despite many calling this layer applications, no-one wants to use the acronym AaaS) are described as core infrastructure and of course everything is built with software.

That which is, and still remains very simple, has become fraught with confusion. Life seemed so much simpler back in 2007.

This is why I argued that the CCIF needed to focus on a common taxonomy: we desperately need to talk about the same thing. Now, don't get me wrong, I'm more than aware of the pitfalls of a mechanistic definition of cloud and its inability to impart a wider understanding of the change that is occurring, but confusion is a much greater foe.

So, I strongly urge you to adopt the NIST definition for everyday practical use.

Monday, September 07, 2009

The weekly dose ...

Last week, Botchagalupe (irl John Willis) and I finally got around to doing the first of what promises to be a weekly podcast on cloud computing.

After meandering through the dangerous ground of sports fanaticism, we seem to have hit upon a weekly format of :-

  • The big question
  • What's Hot!
  • What's getting the most hype?

We're going to continue it this week but I'd welcome suggestions for the questions we should be discussing. Leave a comment or ping me on twitter.

Sunday, September 06, 2009

Is Cloud Computing Green?

The short answer is yes, the long answer is no.

The short answer deals with the better utilisation of computer resources within the data centre, the potential for allocating virtual resources to more energy efficient services and the reduction of the massive sprawl of physical resources (and all the energy consumed in manufacturing, cooling and distribution). Overall, cloud computing offers the potential for more energy efficient virtual resources.

The long answer concerns componentisation. The shift of common and well defined I.T. activities from products to standard components provided as online services should lead to a dramatic explosion in innovation.

Standardisation always creates this potential.

If you consider writing an application today, the reason it's a relatively quick process is that we have relatively standardised and stable components such as frameworks, databases, operating systems, CPUs and memory. Imagine how long it would take to write a new application if you first had to start by designing the CPU. This is componentisation in action: the rate of evolution of a system is directly related to the organisation of its subsystems.

Cloud computing is all about providing standard components as services (it's pure volume operations). The problem of course is that we will end up consuming more of these standard components because it's so easy to do so (i.e. in old speak, there is less yak shaving) and it becomes easier to build new and more exciting services on these (standing on the shoulders of giants).

We might end up providing more efficient virtual resources but we will end up consuming vastly more of them.
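This is essentially the Jevons paradox: per-unit efficiency gains are swamped by growth in consumption. A toy illustration with made-up numbers:

```python
# Jevons paradox sketch: per-unit efficiency improves, yet total energy
# rises because consumption grows faster. Figures are illustrative only.

energy_per_unit_before = 1.0   # arbitrary energy units per virtual machine
efficiency_gain = 2.0          # cloud makes each VM twice as efficient
units_before = 1_000           # virtual machines consumed before cloud
consumption_growth = 5.0       # ease of use drives 5x more consumption

total_before = units_before * energy_per_unit_before
total_after = (units_before * consumption_growth) * (energy_per_unit_before / efficiency_gain)
print(total_before, total_after)  # 1000.0 2500.0 - greener per unit, worse in total
```

Halve the energy per unit, quintuple the units consumed, and total energy use still rises two and a half times.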

In the short term, cloud computing will appear to be more green, in the long term it will turn out not to be. However, that's to be expected, our entire history of industrial progress continues to be about the constant creation of ever more complex and ordered systems and the use of stable subsystems simply accelerates this process, whether they be bricks, pipes, resistors, capacitors, databases or whatever.

Whichever way you cut it, our constantly accelerating process of creating greater "perceived" order and the constant reduction of entropy (within these systems and the future higher ordered systems that will be created) ultimately requires one external factor - a greater energy input.

Cloud computing will be no different and our focus will have to be on the source of that energy.

Thursday, September 03, 2009

The cloud computing standards war

Over the last couple of years, I've consistently talked about the necessity for standards in the cloud computing space and the oncoming war that this will create. This was not some insightful prediction but simply the re-application of old lessons learned from the many industries which have undergone a transformation to a service world.

That standards war is now in full swing.

The principal arguments for standards come from componentisation theory (the acceleration of innovation through the use of standardised subsystems) and the need for marketplaces with portability between providers (solving the lack of second sourcing options and competitive pricing pressures). The two main combatants at the infrastructure layer of the stack are shaping up to be Amazon with the EC2 API and VMware with the vCloud API.

Much of the debate seems to be focused on how "open" the standards are; however, there's a big gotcha in this space. Whilst open standards are necessary for portability and the formation of markets, they are not sufficient. What we really need are standards represented through open source reference models, i.e. running code.

The basic considerations are :-

  • A specification can be controlled, influenced and directed more easily than an open source project.
  • A specification can easily be exceeded providing mechanisms of lock-in whilst still retaining compliance to a 'standard'.
  • A specification needs to be implemented and depending upon the size and complexity of the 'standard' this can create significant adoption barriers to having multiple implementations.
  • Open source reference models provide a rapid means of implementing a 'standard' and hence encourage adoption.
  • Open source reference models provide a mechanism for testing the compliance of any proprietary re-implementation.
  • Adoption and becoming de facto are key to winning this war.

So, in the war of standards whilst the vCloud API has sought approval from the DMTF and formed a consortium of providers, the Amazon EC2 API has widespread usage, a thriving ecosystem and multiple open source implementations (Eucalyptus, Nimbus and Open Nebula).

There appears to be a lot of FUD over the intellectual property rights around APIs and a lot of noise over vCloud adoption. You should expect this to heat up over the next few months because these early battles are all about mindshare and will heavily influence the outcome.

However, whilst VMware has struck boldly, it has exposed a possible Achilles heel. The only way of currently implementing vCloud is with VMware technology; there is no open source reference model. If Reuven is right and Amazon does 'open up' the API, then Amazon has a quick-footed route to IETF approval (multiple implementations) and can turn the tables on vCloud by labelling it a "proprietary" only solution.

Of course, VMware could pre-empt this and go the open source route, or even attempt to co-opt the various open source clouds into adopting its standard. I'd be surprised if they weren't already trying to do this.

This space is going to get very interesting, very quickly.

Wednesday, September 02, 2009

That Tesla feeling ...

When it comes to the modern electricity industry, Nikola Tesla is undoubtedly one of the most significant figures in its history. His work pioneered the formation of alternating current electrical power systems, including the A/C motor and multi-phase systems for electricity distribution. There are none who would compare in terms of contribution.

By contrast, Thomas Edison was vehemently opposed to the A/C system, even going so far as to publicly electrocute animals to demonstrate its dangers. However, the average person today would likely cite Edison as the "father of modern electricity", in much the same way they might credit him as the inventor of the electric light bulb (as opposed to Joseph Swan).

First, be in no doubt that Edison made enormous contributions to these and many other fields. The question that really needs to be asked is: how did Edison become so strongly associated with a field when the contribution of others was equally as large, if not greater?

I often nickname such a situation a "Tesla moment": an incident in time where the noise generated by others far outweighs the actual contributions made. Nikola Tesla's entire life seems to have been spent on the wrong side of a prolonged Tesla moment. In my view, he has never really received the recognition he deserves.

So why do I mention this? Well, to some extent Canonical has had its own trivial Tesla moment and to be honest this irks me. Canonical, for those of you who don't know, is the company that sponsors and supports Ubuntu - the world's fastest growing Linux distribution. In the cloud computing space, Canonical has made some bold moves, including :-

  • The first distribution to select KVM as its official hypervisor.
  • Launch of officially supported images on Amazon EC2 (2008).
  • Integration of Eucalyptus into the distribution (April'09, Ubuntu Enterprise Cloud). We're the first and only distribution to provide users with a simple way of creating their own clouds matching the Amazon EC2 API (the de facto standard for infrastructure provision in the cloud).
  • The introduction of support, training and consultancy services targeted at building private clouds.
  • The introduction of officially supported machine images which will run both on a public and private cloud environment across different hypervisors.

We've been working in the background with a number of different partners and we have several announcements aimed for the next release of Ubuntu. Overall, a lot of work has gone into making Ubuntu Server Edition an easy way to get started with cloud computing.

So, you can guess my disappointment that Canonical was not included in the list of 85 vendors shaping the cloud.

Obviously we need to create more noise however we're not going to do this by adding vapour, there's enough already in the cloud world.

Since UEC is freely available, open sourced and doesn't require subscriptions for security updates and patches, there are many people building ubuntu clouds with whom we have no contact. Hence, I'd like to hear from you and how the community is using UEC.

Ping me on twitter.

Tuesday, September 01, 2009

Cloud definitions ... will it ever end?

At Oscon, I highlighted the problem with many of today's cloud definitions and the attempts to pigeonhole cloud computing as a discrete technology, distribution or billing mechanism.

Such definitions unnecessarily narrow the richness of the field since Cloud computing represents a transition, a shift of many I.T. activities from a product to a service based economy caused by a quartet of concept, suitability, technology and a change in business attitude.

Whilst you can certainly describe some of the mechanics of cloud computing (NIST's latest definition does an excellent job of this), the mechanics do not provide the entire picture. It would be like describing the industrial revolution as :-

"The industrial revolution is a model for enabling convenient, consistent and high speed production of goods that can be rapidly adapted to meet consumer needs. The model promotes availability of goods and is composed of several essential characteristics (use of machinery, higher production volumes, standardised quality, centralised workforce) ..."

Such a definition would fail to capture the fundamental transition at play, the history and the potential consequences of the change. It would fail to provide any understanding of what is happening.

An example of this lack of understanding, and hence confusion, is the latest debate over what is or is not cloud. This has recently been re-ignited by Amazon's launch of VPC ("virtual private cloud").

In a service economy, a service can be provided by any party and the total of the services provided to an organisation may include both internal and external resources. For example, a company with its own cloud (provided with its own infrastructure, as in the case of an UEC environment) may combine these internal resources with external resources from a provider such as Amazon EC2. This is commonly called the hybrid model.

However, this is a service world and we shouldn't confuse provision with consumption. For example, the company in question is freely at liberty to consume such resources internally (as a private cloud) or to provide those combined resources to others (as a public cloud).

A quick glance at the electricity industry will show you that there exists a wide mix of internal and external resource providers and different models for consumption and provision. I can build my own power generator, top up with the national grid or even sell back to the national grid. The same myriad of different options exists within cloud computing.

Amazon's recent announcement is important because it further enables one of these options. For example, a company could choose to build a private cloud (for internal consumption) which consists of both internal (as in UEC) and external (as in Amazon's VPC) resources. Of course, there are many other ways that this same result can be achieved but none have Amazon's momentum in this field.
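The hybrid model described above is easy to sketch: serve demand from internal capacity first and burst the remainder to an external provider. A toy illustration (the function and figures are mine, not any vendor's API):

```python
def allocate(demand, internal_capacity):
    """Split demand between internal cloud capacity and an external provider."""
    internal = min(demand, internal_capacity)   # fill internal resources first
    external = demand - internal                # burst the overflow externally
    return internal, external

# A day's demand in server-hours against 100 servers of internal capacity
for demand in (60, 100, 250):
    print(demand, allocate(demand, 100))
```

The same function describes both consumption models: whether the combined pool is then consumed privately or offered publicly is a separate decision, which is precisely the provision/consumption distinction made above.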

The debate over whether it is or is not cloud computing is simply ... immaterial.

Friday, August 14, 2009

Cloud Computing ... Deja Vu

A new birth always has about it an aura of excitement that can be matched by few other spectacles. This is true whether the birth is that of a new being, a new world or a new idea. The excitement arises not so much from the mere fact of birth but rather from the uncertainty and the element of doubt as to the future that always surround a novel event. In this connection, workers in the field of computers are now becoming increasingly excited about the birth of a remarkable new method for the distribution and utilization of computer power. This method has been given a variety of names including 'computer utility'.

Regardless of the name, however, the development of this method does open up exciting new prospects for the employment of computers in ways and on a scale that would have seemed pure fantasy only five years ago.

Even now the subject of computer utilities is very much in the public eye, as evidenced by many articles in both the popular and technical press, prognostications by leading industrial and scientific figures and growing signs of interest on the part of governments everywhere.

The word 'utility' in the term 'computer utility' has, of course, the same connotation as it does in other more familiar fields such as in electrical power utilities or telephone utilities and merely denotes a service that is shared among many users, with each user bearing only a small fraction of the total cost of providing that service. In addition to making raw computer power available in a convenient economical form, a computer utility would be concerned with almost any service or function which could in some way be related to the processing, storage, collection and distribution of information.

A computer utility differs fundamentally from the normal computer service bureau in that the services are supplied directly to the user in his home, factory or office with the user paying only for the service that he actually uses.

The computer utility is a general purpose public system that includes features such as :-

  1. Essentially simultaneous use of the system by many remote users.
  2. Concurrent running of different multiple programs.
  3. Availability of at least the same range of facilities and capabilities at the remote stations as the user would expect if he were the sole operator of a private computer.
  4. A system of charging based upon a flat service charge and a variable charge based on usage.
  5. Capacity for indefinite growth, so that as the customer load increases, the system can be expanded without limit by various means.

In addition to the general-purpose public form, there are countless other possible shapes that a computer utility might take. These include private general-purpose systems, public special-purpose systems, public and private multi-purpose systems and a whole hierarchy of increasingly complex general-purpose public systems extending all the way to national systems.

As generally envisaged, a computer public utility would be a general purpose public system, simultaneously making available to a multitude of diverse geographically distributed users a wide range of different information processing services and capabilities on an on-line basis.

The public / private division is reflected in our experience with older utilities, communication, gas, electric power etc. In fact, historically, many of our present public utilities began as limited subscriber or private ventures. Even today, despite the fantastic growth of public systems, many organizations continue to operate their own private power plants or internal communication systems.

It is necessary to consider each application of computer utility separately on its merits and balance off in each case the gains and losses resulting from the adoption of the utility concept.

A number of important considerations tend to improve the cost/effectiveness picture.

  1. Reduced solution time for engineering and scientific problems.
  2. A capability for an organisation to provide faster service to its customers.
  3. Reduced user capital equipment and facility investments.
  4. Better utilization of computer resources.

Extracts from Douglas Parkhill, The challenge of the computer utility, 1966. (thanks to Tom Wasserman for pointing me in this direction)
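Parkhill's fourth feature (a flat service charge plus a variable charge based on usage) is precisely the billing model of today's cloud providers. A minimal sketch with made-up tariffs:

```python
def utility_bill(flat_charge, rate_per_unit, units_used):
    """Parkhill-style utility bill: flat service charge plus metered usage."""
    return flat_charge + rate_per_unit * units_used

# Hypothetical tariff: $5 flat service charge, $0.10 per compute-hour
print(utility_bill(5.0, 0.10, 0))    # the flat charge applies even when idle
print(utility_bill(5.0, 0.10, 730))  # roughly a month of continuous usage
```

That a 1966 description maps so directly onto modern utility computing pricing is, of course, the point of the post.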

Friday, August 07, 2009

Open Clouds

Whilst there are many organisations attempting to define standards for the cloud, my view has always been that these standards will emerge through the marketplace. What is critically important is to protect the notion of what is and what isn't an open cloud. This is why I actively support the Open Cloud Initiative (OCI), which was founded by Sam Johnston.

The OCI doesn't try to tell you what cloud computing is or isn't; it doesn't even try to tell you what is or isn't an open cloud. What the OCI does is state: this is our definition of an open cloud (of which there are various forms), and here are the trademarks you may use to identify your cloud with one of our definitions. Sam is not saying you must follow his standards or the whims of a committee; instead, he provides a means for end-users to recognise a cloud as being truly open.

The market will decide if Sam's approach will be a success or not but I fully support him in this action.

Benefit Busters

I was really excited to hear about a program on Channel 4 which was going to look into how "the government is attempting to revolutionise the benefits system".

Promising an "all out attack" and a "no nonsense Yorkshire lass", I was imagining how those MPs were going to squirm.

Imagine my disappointment to discover that instead of hitting some of the biggest piggies in the country, it'll instead focus on the most vulnerable members of our society ... yawn.

According to the BBC, the amount of benefit fraud in the UK was around £2.6 billion in 2007, approximately 2% of a £130 billion (or thereabouts) yearly budget.

If the investment houses aren't making a better than 2% profit on the £175 billion quantitative easing program, I'd be gobsmacked.

This is the sort of benefit we can ill afford. Get your act together, C4.

Why open source clouds are essential ...

I've covered this particular topic over the last four years at various conference sessions around the world. However, given some recent discussions I thought it is worth repeating the story.

"Cloud computing" (today's terminology for an old concept) represents a combination of factors that are accelerating the transition of common IT activities from a product to a service based economy. It's not per se a specific technology but a result of concept, suitability of activities, change in business attitude and available technology (for more information, see my most recent video from OSCON 2009).

The risks associated with this transformation are well known. For example, the risk of doing nothing and the need to remain competitive (see Red Queen Hypothesis part I and part II). This needs to be balanced against standard outsourcing risks (for example: lack of pricing competition & second sourcing options, loss of strategic control, vendor lock-in & suitability of activities for outsourcing) and transitional risks related to this transformation of industry (for example: trust, transparency, governance, security of supply).

These transitional and outsourcing risks create barriers to adoption. However, whilst the transitional risks are, by nature, short lived, the outsourcing risks are not. The outsourcing risks can only be solved through portability, easy switching between providers and the formation of a competitive marketplace, which in turn depends upon the formation of standards in the cloud computing field. If you want to know more about second sourcing, go and spend a few hours with anyone who has experience of manufacturing & supply chain management, because this is where the cloud is heading.

Now when it comes to standards in the cloud space, it's important to recognise that there will be different standards at the various layers of the computing stack (application, platform and infrastructure). People often talk about portability between different layers, but each layer is built upon subsystems from the lower layer; you can't just make those magically disappear. You're no more likely to get portability between the Azure platform and EC2 than you are to get portability from a programming language to bare metal (i.e. you need the underlying components).

At each layer of the stack, if you want portability, you're going to need common environments (defined through defacto standards), multiple providers and easy switching between them. For example, portability between one Azure environment and another.

In the above example, Azure would represent the "standard". However, if a marketplace emerges around a proprietary standard then in effect the entire market hands over a significant element of strategic control to the vendor of that standard.

The use of an open standard (i.e. in this case an open source implementation, including APIs and open data formats) is an important mechanism in creating a free marketplace without vendor control. We learnt this lesson from the network wars and the eventual dominance of TCP/IP.

As I've often pointed out, the standard has to be running code for reasons of semantic interoperability. Documented standards (i.e. the principle) are useful, but they are not sufficient in the cloud world because of the complexity involved in describing an environment (such as a platform). Even if you could describe such an environment, it would create significant barriers to implementation.

To achieve the goal of a free market (i.e. free from constraint by one vendor), you have to solve both the issue of semantic interoperability and that of freedom from constraint. This means the standard has to be an expression and not a principle, and the only way to remove the constraint is for the standard to be implemented as an open source reference model (i.e. running code).
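As a toy illustration of why a document alone can't deliver semantic interoperability (everything here is hypothetical): suppose a written spec says "list instances in sorted order" but never pins down the collation. Two implementations that each faithfully "follow the spec" can then disagree, and only running code settles the question:

```python
# Hypothetical spec: "ListInstances returns instance IDs in sorted order."
# The document never says whether sorting is case-sensitive, so two
# implementations that each "follow the spec" can disagree.

def provider_a(ids):
    return sorted(ids)                 # byte-order sort

def provider_b(ids):
    return sorted(ids, key=str.lower)  # case-insensitive sort

ids = ["i-Web01", "i-db02", "i-App03"]
print(provider_a(ids))  # ['i-App03', 'i-Web01', 'i-db02']
print(provider_b(ids))  # ['i-App03', 'i-db02', 'i-Web01']

# A running open source reference model settles the ambiguity: its
# observable behaviour, not the prose, defines the standard.
```

Both providers pass a documentation-based conformance review, yet an application depending on the ordering breaks when it switches between them; that is the gap a reference implementation closes.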

This does, however, lead to a licensing question: if you created an open source reference model for use as a standard, how would you license it? It is important to remember that the intention of a standard is to encourage portability (i.e. limit feature differentiation) but not to limit competition (i.e. to allow differentiation on price vs service quality).

GPLv3 has an important loophole (which I strongly supported and continue to support) known as the "SaaS Loophole" which achieves this goal.

Whilst GPLv3 prevents redistribution of code changes without releasing the modifications, it does allow a provider to offer the system as a service with proprietary improvements. GPLv3 thereby encourages competition in the cloud space by allowing providers to operationally "improve" any system and provide it as a service.

In a world where the standard is provided as such an open source reference model (ideally under GPLv3), you'll also need the creation of an assurance industry to give end users confidence that providers still match the standard (despite any competitive modifications for operational improvements). This is how you create a truly competitive marketplace and, by encouraging diversity in operations, overcome the most dangerous risk of all: systemic failure in the cloud.

We have already staked the ground with Ubuntu Enterprise Cloud, our intention is to continue to push this and create truly competitive markets in the cloud using the only viable mechanism - open source.  Of course, this is at the infrastructure layer of the computing stack. Our attention will shortly turn towards the platform.

Tuesday, August 04, 2009

Happy days are here again ...

Having arrived back from Dublin, I discover the local media is all aflutter with tales of huge banking bonuses. In my view this is great news, as it means the banks must be doing well and so we can finally stop the continued bail-out.

There is no need for any more of the £125 billion quantitative easing scheme and the purchase of gilts at hyper inflated prices. Obviously some banks have been making a nice little earner on this but they've got cash now, they're loaded and so they don't need it.

We can stop the planned £600 billion buy-out and insurance of toxic debt - the unfortunately named asset protection scheme. The one asset it won't protect is taxpayer funds and so with the banks awash with cash it's time to end this idea.

Obviously the $400bn black hole heading towards the private equity industry won't need any government funds because the banks have cash and they funded most of these shenanigans.

The generous lines of credit, the chunky loans - well this can all stop. With the banks in such good shape then I'd expect to see a wholesale reversal on the flow of funds as taxpayers want every penny back with a decent return to boot.

Trebles all around in my view.

Unfortunately I suspect that the trebles have already been drunk by a select few who are playing a lavish game of financial roulette insured by the average person on the street. From what I understand, most of the profits are coming from investment banking operations rather than any meaningful growth in lending to the business sector. As the Fed has been discovering, its recent use of taxpayers' funds to improve liquidity has been gamed to generate handsome profits in these investment operations.

The taxpayer can only fund such an illusion of recovery for so long. Eventually we'll have to wake up and face the horrid truth, especially as the abyss that is the OTC market starts to swallow up what's left of yesteryear's fortunes.

I suspect we'll once again see that last bastion of the financial industry, a shabby bunch of fortune tellers who'll be wheeled out to explain that no-one saw it coming.

Friday, July 31, 2009

So little time ...

Alas, I've not been blogging recently due to a hectic travel, work and personal schedule. However, for those of you who enjoy my talks, here's the video of my OSCON keynote.

Wednesday, July 08, 2009


There has been some recent discussion about Chrome as the Browser Operating System. This prompted a friend to ask a question - why would Google want to compete in the O/S world?

This might actually be a necessity.

As the I.T. industry shifts towards a service world, the "traditional" O/S will remain a vital but potentially less visible component (normal effect of componentisation) as a higher level layer of the computing stack (such as the browser on the client side and the platform on the server) becomes increasingly dominant. You can see this effect occurring today with many people simply living their working lives through the browser with applications being built in online platforms and mashup environments.

This, in itself, shouldn't impact Google as its search business depends upon the neutrality of the browser and access to data. What might impact Google is the rise to dominance of an alternative to the browser. For Google, there exists a dangerous scenario in which a proprietary environment such as Azure and Silverlight becomes dominant and the open protocols of the "free web" are slowly extended.

As with all scenario planning, this is simply conjecture and it depends upon the intentions of different players, economic factors, technology and attitudes. Nevertheless, if you're planning for the future, it's a scenario which can't be ignored.

The potential threat was first highlighted during the ODF / OOXML debacle and as Mr Edwards states, this could prompt a "wholesale, across the boards replacement of W3C technologies".

At the very least, he's right to highlight the possibility.

When Chrome first launched, it appeared as though Google was trying to influence other browsers to follow the route of becoming the future "cloud O/S". It's debatable now whether Google has changed tack and decided it has to make this happen itself.

Friday, July 03, 2009

On tiredness ...

Terms often have relative meanings. Pre-Harry (our newborn), my definition of tiredness was :-

bleary eyed and a feeling of exhaustion.

Post-Harry, tiredness means :-

waking up to discover I've put my iPhone in the fridge and the butter in my briefcase.

It's impossible to describe how all the tiredness, messiness, anxiety, stress, the "I can't cope!" moments and uncertainty are paid off in a single smile - but they are.

Tuesday, June 30, 2009

Our cloud offering is ... a stamp.

I was all excited about Red Hat's entry into the cloud computing space until I read the press release.
Here's my problem ...

Today, a whole range of public cloud providers offer Red Hat as a virtual machine operating system (along with every other operating system). Tomorrow, Red Hat will certify some of those cloud providers.

Either I'm missing the point, or this is just the same as before but with a stamp of approval from Red Hat. Is it me, or does that just seem incredibly arrogant? We can fix the cloud by applying our magic pixie dust stamp of approval.

I've worked extremely hard with the Ubuntu team and Eucalyptus to provide users with their own open source cloud system for building private clouds. We've created official images which run both on your private cloud and a public provider (i.e. Amazon EC2) and an ecosystem of management tools (RightScale, CohesiveFT).

We've provided real products for real users and our aim has been to provide an open source reference model for cloud computing at the infrastructure layer of the stack.

Whilst I agree that many enterprise concerns over the cloud can be resolved by such a common substrate, this needs to be far more than just saying "run our operating system in your VMs and here's our stamp of approval".

I'm seriously disappointed. However, disappointment collapses into despair at the sight of the following statement:

"How can a CIO be sure that an application written for a Google cloud will work with Salesforce, Amazon, or another cloud?"

[Hint: You can't, they're completely different levels of the computing stack]

I keep getting this horrible feeling that they haven't got to grips with the changes in the environment, but that doesn't make sense because the changes are plainly visible. Every time I look at the landscape it gets me confused ... what on earth are they up to? Have they found some new area of value I just don't get?

Monday, June 22, 2009

Black is the new ... black.

My friend James is reported to have said :-

"as far as the enterprise is concerned, cloud [computing] is the new VMWare, and IBM wants a fat slice this time".

According to another friend of mine, James Urquhart, it looks like the new slice IBM is getting is based upon .... VMWare.

Numb Nuts ...

Numb Nuts are those people whose mental faculties have been somewhat pacified by the onset of an easy culture. It would be unfair to describe them as feeble-minded; it's more an unwillingness to think.

In the "cloud" world there are many examples of second sourcing failure through proprietary technology (e.g. Zimki, CogHead & Virtual Iron). Equally there are many Numb Nuts who still believe that proprietary is the way forward.

A femtogram of thought would show that in the service world, competition and value should be around services whilst the bits are free.

If you're building a cloud or using a cloud, ask yourself where is the open source reference implementation of this? Where are my second sourcing options?

If you don't have this, then check your face in the mirror each day and look for the tell-tale sign of "Numb Nut" appearing in bold lettering across your forehead.

As the habitual users of Virtual Iron have discovered, it'll happen soon enough.

[El Reg talking about Virtual Iron]
"anyone that built their hosting infrastructure on now totally in the shit".

[MasterMark speaking about Coghead 'going tits up']
"that left its customers essentially in the unfortunate position of being 'shit out of luck'."

P.S. Before anyone whines that Amazon EC2 is proprietary - try Eucalyptus, Globus Nimbus or any of the other open source re-implementations of the API.

P.P.S. Before anyone huffs that their proprietary system is backed by a big company, in my view "financial safety" is either open sourced or much bigger than Lehman Brothers i.e. revenues above $20 billion and a headcount higher than 28,000 - 'nuff said.

Monday, June 15, 2009

The cloud isn't just vapour.

I was recently asked whether I thought cloud was vapour, whether open sourced cloud systems could solve many of the adoption concerns over cloud computing, whether it was likely that such systems would appear, what standards would they support and which distribution would make the first move? I know I haven't been blogging much recently but I was surprised by the questions.

First, regardless of whatever definition you use to describe cloud computing (a fairly hopeless task in my view), the term merely identifies an underlying shift of I.T. from a product to a service based economy. It's a consequence of :-
  1. Certain I.T. activities becoming suitable for service provision through volume operations (i.e. those activities are well defined and ubiquitous)
  2. The existence of mature enough technology to support this (Popek et al wrote the book on virtualisation back in 1974)
  3. A change in business attitude towards I.T. (Carr and others, namely Strassmann, pointed out that much of I.T. is simply seen as a cost of doing business.)
  4. The concept of utility computing provision (i.e. the provision of suitable I.T. resources much like other utility providers, as forecast by McCarthy back in the 1960's.)
Take away any of these elements and cloud computing wouldn't represent the upheaval and the disruption that it does today. As for trying to precisely define it, try first coming up with a short and punchy description of industrial revolution without hand-waving and referring to numerous tomes. The problem with cloud computing is that it isn't one thing, it's a transition caused by many factors.

I do believe that open source reference models (or what I used to call open sourced standards) are key to the development of the cloud industry because of the second sourcing concerns of enterprises. I strongly believe that private and hybrid clouds (using both private and public resources) will help develop this industry in the short term. Open source should also dominate this change as it is the only viable route to utility computing marketplaces with competition based upon services rather than lock-in. I've not changed my tune since 2006, I see no reason to change now.

During this time of transition (which is after all what is happening) standards will be incredibly important. Despite all the noise we already have a defacto standard at the infrastructure layer of the computing stack - it's called the Amazon EC2 API.
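To make the defacto standard point concrete, here's a sketch of what "matching the EC2 API" means in practice. The signing scheme below follows the published EC2 query API signature version 2 (HMAC-SHA256 over a canonicalised query string); the endpoints, access key and secret are made up for illustration, and a real request would also carry a Timestamp or Expires parameter. The point is that the same client code serves both a public and a private EC2-compatible cloud, with only the endpoint changing:

```python
import base64
import hashlib
import hmac
from urllib.parse import quote

def sign_ec2_query(host, path, params, secret_key):
    """Build a signature-version-2 signed query string for an EC2-style API."""
    params = dict(params, SignatureMethod="HmacSHA256", SignatureVersion="2")
    # Canonicalise: sort parameters by byte order, percent-encode per RFC 3986.
    canonical = "&".join(
        f"{quote(k, safe='-_.~')}={quote(v, safe='-_.~')}"
        for k, v in sorted(params.items())
    )
    string_to_sign = f"GET\n{host.lower()}\n{path}\n{canonical}"
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(),
                      hashlib.sha256).digest()
    signature = base64.b64encode(digest).decode()
    return f"{canonical}&Signature={quote(signature, safe='-_.~')}"

request = {"Action": "DescribeInstances", "Version": "2009-04-04",
           "AWSAccessKeyId": "EXAMPLEKEY"}  # hypothetical credentials

# Same request, same code path -- only the endpoint differs.
public = sign_ec2_query("ec2.amazonaws.com", "/", request, "secret")
private = sign_ec2_query("cloud.internal.example", "/", request, "secret")
```

Because the host is part of the string to sign, the two requests carry different signatures but an identical query, which is exactly why an open source re-implementation of the API gives you a second sourcing option.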

As for when open source systems will appear that match such emerging standards, the first truly credible system was released almost a year ago. It's called Eucalyptus and it's backed by a commercial company.

As for which distribution would make the first move: well, Ubuntu Server Edition has included Eucalyptus since 9.04. Building a private cloud using open source technology that matches the EC2 API is almost as easy as apt-get install, and it's going to get easier. We call this concept Ubuntu Enterprise Cloud and you can find the details here.

It's also worth noting that Ubuntu Server Edition is provided as official images on both UEC and Amazon EC2, so you can run the same base image in both environments. Matching standards and using uniform images brings us a step closer to portability between environments and, most importantly, simplifies the process of bursting. It takes us a step further away from the dangers of the cloud net neutrality style argument that I highlighted at E-Tech'07.

It's also essential to build ecosystems around open source cloud computing which is why we work with companies like RightScale and CohesiveFT as well as our own tools like Landscape. Everything we do is around openness and freedom in the cloud computing space. This is not some ideological pursuit but simply a realisation that the future cloud markets will depend upon such openness to form. There is plenty of revenue opportunity without the need to tie people down in an old product mentality.
In short :-

  • You can already build clouds with open source technology matching the emerging standard of EC2.
  • You can already use single images across both private and public environments.
  • The technology is entirely open sourced and there is no lock-in to a proprietary framework or solution.
  • It's free.
  • It's supported.
  • It's already in a distribution, go check out Ubuntu Server Edition.
  • You can already use a range of different management tools.

Ubuntu already dominates the Linux desktop market and, from the reports I've seen recently, we're going great guns in the server market as well. As far as I'm aware, we're the only distribution which provides you with a simple means of creating an open source private cloud and images spanning both private and public environments. Maybe we should get a bit better at shouting about it.

Well, I'm going to be speaking at Velocity and then I have a keynote at OSCON. I was thinking of a tag line for what we've been doing and a friend of mine, Alexis Richardson, chipped in with the following :- "Ubuntu is Cloud for Human Beings".