Friday, August 14, 2009

Cloud Computing ... Deja Vu

A new birth always has about it an aura of excitement that can be matched by few other spectacles. This is true whether the birth is that of a new being, a new world or a new idea. The excitement arises not so much from the mere fact of birth but rather from the uncertainty and the element of doubt as to the future that always surround a novel event. In this connection, workers in the field of computers are now becoming increasingly excited about the birth of a remarkable new method for the distribution and utilization of computer power. This method has been given a variety of names including 'computer utility'.

Regardless of the name, however, the development of this method does open up exciting new prospects for the employment of computers in ways and on a scale that would have seemed pure fantasy only five years ago.

Even now the subject of computer utilities is very much in the public eye, as evidenced by many articles in both the popular and technical press, prognostications by leading industrial and scientific figures and growing signs of interest on the part of governments everywhere.

The word 'utility' in the term 'computer utility' has, of course, the same connotation as it does in other more familiar fields such as in electrical power utilities or telephone utilities and merely denotes a service that is shared among many users, with each user bearing only a small fraction of the total cost of providing that service. In addition to making raw computer power available in a convenient economical form, a computer utility would be concerned with almost any service or function which could in some way be related to the processing, storage, collection and distribution of information.

A computer utility differs fundamentally from the normal computer service bureau in that the services are supplied directly to the user in his home, factory or office with the user paying only for the service that he actually uses.

The computer utility is a general purpose public system that includes features such as:

  1. Essentially simultaneous use of the system by many remote users.
  2. Concurrent running of multiple different programs.
  3. Availability of at least the same range of facilities and capabilities at the remote stations as the user would expect if he were the sole operator of a private computer.
  4. A system of charging based upon a flat service charge and a variable charge based on usage.
  5. Capacity for indefinite growth, so that as the customer load increases, the system can be expanded without limit by various means.

In addition to the general-purpose public form, there are countless other possible shapes that a computer utility might take. These include private general-purpose systems, public special purpose systems, public and private multi-purpose systems and a whole hierarchy of increasingly complex general-purpose public systems extending all the way to national systems.

As generally envisaged, a computer public utility would be a general purpose public system, simultaneously making available to a multitude of diverse geographically distributed users a wide range of different information processing services and capabilities on an on-line basis.

The public / private division is reflected in our experience with older utilities, communication, gas, electric power etc. In fact, historically, many of our present public utilities began as limited subscriber or private ventures. Even today, despite the fantastic growth of public systems, many organizations continue to operate their own private power plants or internal communication systems.

It is necessary to consider each application of computer utility separately on its merits and balance off in each case the gains and losses resulting from the adoption of the utility concept.

A number of important considerations tend to improve the cost/effectiveness picture.

  1. Reduced solution time for engineering and scientific problems.
  2. A capability for an organisation to provide faster service to its customers.
  3. Reduced user capital equipment and facility investments.
  4. Better utilization of computer resources.

Extracts from Douglas Parkhill, The Challenge of the Computer Utility, 1966. (thanks to Tom Wasserman for pointing me in this direction)

Friday, August 07, 2009

Open Clouds

Whilst there are many organisations attempting to define standards for the cloud, my view has always been that these standards will emerge through the marketplace. What is critically important is to protect the notion of what is and what isn't an open cloud. This is why I actively support the Open Cloud Initiative (OCI) which was founded by Sam Johnston.

The OCI doesn't try to tell you what cloud computing is or isn't; it doesn't even try to tell you what is or isn't an open cloud. What the OCI does is state: this is our definition of an open cloud (of which there are various forms), and here are the trademarks which you may use to identify your cloud with one of our definitions. It is not saying you must follow its standards or the whims of a committee; instead, it provides a means for end-users to recognise a cloud as being truly open.

The market will decide whether Sam's approach will be a success, but I fully support him in this action.

Benefit Busters

I was really excited to hear about a program on Channel 4 which was going to look into how "the government is attempting to revolutionise the benefits system".

Promising an "all out attack" and a "no nonsense Yorkshire lass", I was imagining how those MPs were going to squirm.

Imagine my disappointment to discover that instead of hitting some of the biggest piggies in the country, it'll instead focus on the most vulnerable members of our society ... yawn.

According to the BBC, the amount of benefit fraud in the UK was around £2.6 billion in 2007, approximately 2% of a £130 billion (or thereabouts) yearly budget.

If the investment houses aren't making a better than 2% profit on the £175 billion quantitative easing program, I'd be gobsmacked.

These are the sort of benefits we can ill afford. Get your act together, C4.

Why open source clouds are essential ...

I've covered this particular topic over the last four years at various conference sessions around the world. However, given some recent discussions, I thought it was worth repeating the story.

"Cloud computing" (today's terminology for an old concept) represents a combination of factors that are accelerating the transition of common IT activities from a product to a service based economy. It's not per se a specific technology but a result of concept, suitability of activities, change in business attitude and available technology (for more information, see my most recent video from OSCON 2009).

The risks associated with this transformation are well known. For example, the risk of doing nothing and the need to remain competitive (see Red Queen Hypothesis part I and part II). This needs to be balanced against standard outsourcing risks (for example: lack of pricing competition & second sourcing options, loss of strategic control, vendor lock-in & suitability of activities for outsourcing) and transitional risks related to this transformation of industry (for example: trust, transparency, governance, security of supply).

These transitional and outsourcing risks create barriers to adoption; however, whilst the transitional risks are transitional by nature (i.e. short lived), the outsourcing risks are not. The outsourcing risks can only be solved through portability, easy switching between providers and the formation of a competitive marketplace, which in turn depends upon the formation of standards in the cloud computing field. If you want to know more about second sourcing, go spend a few hours with anyone who has experience of manufacturing & supply chain management, because this is where the cloud is heading.

Now when it comes to standards in the cloud space, it's important to recognise that there will be different standards at the various layers of the computing stack (application, platform and infrastructure). People often talk about portability between different layers, but each layer is built upon subsystems from the lower layer; you can't just make those magically disappear. You're no more likely to get portability between the Azure Platform and EC2 than you are to get portability from a programming language to bare metal (i.e. you need the underlying components).

At each layer of the stack, if you want portability, you're going to need common environments (defined through de facto standards), multiple providers and easy switching between them. For example, portability between one Azure environment and another.
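
As a sketch of what that switching looks like in practice (the provider names, endpoints and client below are entirely hypothetical, not a real API): when two providers expose the same common environment, moving between them is a configuration change rather than a code change.

    # Hypothetical providers of the same common environment; only the
    # target configuration differs, the application bundle does not.
    PROVIDERS = {
        "provider-a": {"endpoint": "https://api.provider-a.example/v1", "key": "KEY_A"},
        "provider-b": {"endpoint": "https://api.provider-b.example/v1", "key": "KEY_B"},
    }

    def deploy(application_bundle, provider_name):
        """Deploy the same bundle to whichever provider is chosen."""
        config = PROVIDERS[provider_name]
        print("deploying %s to %s" % (application_bundle, config["endpoint"]))
        # ... a real client for the common environment would be invoked here ...

    # Switching providers is a one-word change:
    deploy("myapp-1.0.tgz", "provider-a")
    deploy("myapp-1.0.tgz", "provider-b")

If the environments differ, every one of those calls becomes a porting exercise and the switching cost reappears.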

In the above example, Azure would represent the "standard". However, if a marketplace emerges around a proprietary standard then in effect the entire market hands over a significant element of strategic control to the vendor of that standard.

The use of an open standard (i.e. in this case an open source implementation, including APIs and open data formats) is an important mechanism in creating a free marketplace without vendor control. We learnt this lesson from the network wars and the eventual dominance of TCP/IP.

As I've often pointed out, the standard has to be running code for reasons of semantic interoperability. Documented standards (i.e. the principle) are useful but they are not sufficient in the cloud world because of the complexity involved in describing an environment (such as a platform). Even if you could describe such an environment, it would create significant barriers to implementation.

To achieve the goal of a free market (i.e. free from constraint by one vendor), you have to solve both the issue of semantic interoperability and that of freedom from constraint. This means the standard has to be an expression and not a principle, and the only way to solve the constraint problem is for the standard to be implemented as an open source reference model (i.e. running code).

This does, however, lead to a licensing question: if you created an open source reference model for use as a standard, how would you license it? It is important to remember that the intention of a standard is to encourage portability (i.e. limit feature differentiation) but not to limit competition (i.e. to allow differentiation on price vs service quality).

GPLv3 has an important loophole (which I strongly supported and continue to support) known as the "SaaS Loophole", which achieves this goal.

Whilst GPLv3 prevents redistribution of code changes without releasing the modifications, it does allow a provider to offer the system as a service with proprietary improvements. GPLv3 encourages competition in the cloud space by allowing providers to operationally "improve" any system and provide it as a service.

In a world where the standard is provided as such an open source reference model (ideally under GPLv3), you'll also need the creation of an assurance industry to provide end users with assurance that providers still match the standard (despite any competitive modifications for operational improvement). This is how you create a truly competitive marketplace and, by encouraging diversity in operations, overcome the most dangerous risk of all: systemic failure in the cloud.
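
To make that assurance idea concrete, here is a minimal sketch (the endpoints and checks are placeholders, not a real conformance suite): run the same checks against the open source reference model and against a provider, and only certify the provider if the observable behaviour matches.

    def run_check(endpoint, check):
        # A real harness would drive the provider's API here; this stub just
        # records what would be exercised and returns the observed behaviour.
        print("running '%s' against %s" % (check, endpoint))
        return "ok"

    CONFORMANCE_CHECKS = [
        "launch an instance from a standard image",
        "attach and detach a storage volume",
        "list running instances through the standard API",
    ]

    def assure(reference_endpoint, provider_endpoint):
        """Certify a provider only if its behaviour matches the reference model."""
        for check in CONFORMANCE_CHECKS:
            if run_check(provider_endpoint, check) != run_check(reference_endpoint, check):
                return False  # the provider has drifted from the standard
        return True

    print(assure("https://reference.example", "https://some-provider.example"))

The point is that certification is against observable behaviour rather than the provider's internal code, so operational improvements remain possible without breaking the standard.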

We have already staked the ground with Ubuntu Enterprise Cloud; our intention is to continue to push this and create truly competitive markets in the cloud using the only viable mechanism - open source. Of course, this is at the infrastructure layer of the computing stack. Our attention will shortly turn towards the platform.
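
At that infrastructure layer the effect is already visible: Ubuntu Enterprise Cloud is built on Eucalyptus, which exposes an EC2-compatible API, so the same tooling can be pointed at a private UEC installation or at a public EC2-style provider. A rough sketch using the boto library (the credentials, host, port, path and image id below are placeholders for whatever your own installation provides):

    import boto
    from boto.ec2.regioninfo import RegionInfo

    # Placeholder credentials and endpoint: substitute whatever your own
    # UEC / Eucalyptus installation (or any EC2-compatible provider) gives you.
    region = RegionInfo(name="eucalyptus", endpoint="my-uec-frontend.example")
    conn = boto.connect_ec2(aws_access_key_id="ACCESS_KEY",
                            aws_secret_access_key="SECRET_KEY",
                            is_secure=False,
                            region=region,
                            port=8773,                    # Eucalyptus' usual API port
                            path="/services/Eucalyptus")  # typical Eucalyptus path

    # These are the same calls you would make against any EC2-compatible
    # endpoint; only the connection details change.
    for image in conn.get_all_images():
        print(image.id)
    conn.run_instances("emi-12345678", instance_type="m1.small")

The significance is not the specific library but that the de facto standard here is defined by running code which anyone can implement, inspect and operate.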

Tuesday, August 04, 2009

Happy days are here again ...

Having arrived back from Dublin, I discover the local media is all aflutter with tales of huge banking bonuses. In my view this is great news, as it means the banks must be doing well and so we can finally stop the continued bail-out.

There is no need for any more of the £125 billion quantitative easing scheme and the purchase of gilts at hyper inflated prices. Obviously some banks have been making a nice little earner on this but they've got cash now, they're loaded and so they don't need it.

We can stop the planned £600 billion buy-out and insurance of toxic debt - the unfortunately named asset protection scheme. The one asset it won't protect is taxpayer funds and so with the banks awash with cash it's time to end this idea.

Obviously the $400bn black hole heading towards the private equity industry won't need any government funds because the banks have cash and they funded most of these shenanigans.

The generous lines of credit, the chunky loans - well this can all stop. With the banks in such good shape, I'd expect to see a wholesale reversal in the flow of funds as taxpayers want every penny back with a decent return to boot.

Trebles all around in my view.

Unfortunately I suspect that the trebles have already been drunk by a select few who are playing a lavish game of financial roulette insured by the average person on the street. From what I understand, most of the profits are coming from the investment banking operations rather than any meaningful growth in lending to the business sector. As the Fed has been discovering, its recent use of taxpayers' funds to improve liquidity has been gamed to generate handsome profits in these investment operations.

The taxpayer can only fund such an illusion of recovery for so long. Eventually we'll have to wake up and face the horrid truth, especially as the abyss that is the OTC market starts to swallow up what's left of yesteryear's fortunes.

I suspect we'll once again see that last bastion of the financial industry, a shabby bunch of fortune tellers who'll be wheeled out to explain that no-one saw it coming.