Sunday, September 27, 2009

If Sauron was the Microsoft Cloud Strategist

Back in March 2008, I wrote a post which hypothesised that a company, such as Microsoft, could conceivably create a cloud environment that meshes together many ISPs / ISVs and end consumers into a "proprietary" yet "open" cloud marketplace and consequently supplant the neutrality of the web.

This is only a hypothesis and the strategy would have to be straight out of the "Art of War" and attempt to misdirect all parties whilst the groundwork is laid. Now, I have no idea what Microsoft is planning but let's pretend that Sauron was running their cloud strategy.

In such circumstances, you could reasonably expect an attack on internet freedom based upon the following actions :-

Development of the Ecosystem (from my Nov'08 post)

    When the Azure services platform is launched, we will see the creation of an ecosystem based upon the following concepts:-

    1. build and release applications to Microsoft's own cloud environment providing Azure and the Azure Services.
    2. build and release applications to a number of different ISPs providing Azure and specific Azure Services (i.e. SQL, .NET and SharePoint services).
    3. purchase server versions of Azure and specific Azure services for your own infrastructure.
    4. buy a ready-made scalable "Azure" container cloud for all those large data centre needs of yours.

    Since the common component in all of this will be the Azure platform itself, migration between all these options will be as easy as pie through the Windows Azure Fabric Controller.

Growth of the Ecosystem

The Azure platform will benefit from componentisation effects rapidly increasing the development speed of applications. Given the large pre-installed base of .NET developers, these effects would encourage rapid adoption especially as a marketplace of providers exists with portability between them (overcoming one of the adoption concerns). However, MSFT will purposefully retain specific elements of the Azure Services (those with strong network effects through data & meta data - user profiles etc). This will create accusations of lock-in.

To combat this, the services will be based upon open standards (for the data but not the meta data) and support will be given to an open source equivalent of the Azure platform (i.e. Mono). The argument will be presented strongly that the Azure market is open, based upon open standards and with a fledgling open source alternative. Hence, into this mix will be launched :-

  1. A wide variety of ISV applications running on the Azure Platform.
  2. A strong message of openness, portability and support for open standards (particularly for reasons of interoperability).
  3. A fledgling open source equivalent.

In order to achieve the above and to allow for the development of the future strategy, the office suite of tools must be based upon open standards. Both Mr. Edwards and I share concerns in this space.

Capture

With the growth of the Azure marketplace and the applications built on this platform, a range of communication protocols will be introduced to enhance productivity in both the office platform (which will increasingly be tied into the network effect aspects of Azure) and Silverlight (which will be introduced to every device to create a rich interface). Whilst the protocols will be open, many of the benefits will only come into effect through aggregated & meta data (i.e. within the confines of the Azure market). The purpose of this approach is to reduce the importance of the browser as a neutral interface to the web and to start the process of undermining the W3C technologies.

The net effect

The overall effect of this approach would be to create the illusion of an open marketplace on Azure, one which is rapidly adopted because of the componentisation effects created and the pre-existing skills base. Into this marketplace will be provided beneficial protocols for communication which are, again, "open". Despite its popularity, no accusation of monopoly can be levelled because users are free to choose providers and a fledgling open source equivalent exists.

However, the reality would be a market that has surrendered technological direction to one vendor, with new protocols designed to undermine the W3C. All participants (whether ISPs, ISVs, consumers or manufacturers) will find themselves dependent upon that one vendor because of the strong network effects created through data (including aggregated and meta data effects).

Following such a strategy, it could be Game, Set and Match to MSFT for the next twenty years, and the open source movement would find itself crushed by this juggernaut. Furthermore, companies such as Google that depend upon the neutrality of the interface to the web would find themselves seriously disadvantaged.

Now, I'm not saying this is going to happen but I have my own personal views. I warned about the dangers of a proprietary play in this space back in 2007 and the importance of open source in protecting our freedoms. I'm mentioning this again because I keep on hearing people say that "open source has won".

The problem is that we might be fighting the wrong fight, and the battle for Middle-earth has already begun.

Tuesday, September 22, 2009

Is the enterprise ready for the cloud?

There are many known risks associated with the cloud, some are transitional in nature (i.e. related to the transformation of an industry from a product to a service based economy) whilst others are general to the outsourcing of any activity. These always have to be balanced against the risk of doing nothing and the loss of competitive effectiveness and/or efficiency.

You'll find discussion of these risks in various posts and presentations I've made over the years. The forces behind this change are not specific to I.T. but generic, and they have affected (and will continue to affect) many industries.

In this post, I want to turn the clock back and discuss again the organisational pressures that cloud creates because, for some reason, they don't get mentioned enough.

The tactics and methodologies that should be used with any activity depend not upon the type of activity but upon its lifecycle. For example, there is no single project management method applicable to all types of activities, despite the desire of many organisations to simplify this complex problem into a unified approach. Using agile everywhere is about as daft as using PRINCE2 everywhere; single policies simply aren't effective.

The shift towards cloud and the further commoditisation of I.T. will actually exacerbate this problem of single policies by highlighting the extremes and differences between managing an innovation and managing a commodity. Many organisations are simply not geared up to dealing with the realisation that they are complex adaptive systems rather than linear, machine-like ones.

People often ask whether "the cloud is ready for the enterprise", but the bigger question, which is usually missed, is whether "the enterprise is ready for the cloud". In many cases, the answer is no.

An example of the problems that the cloud can create is the shift to a variable model of charging and the move away from capital expenditure. Whilst it makes intuitive and obvious sense to pay for what you use, I'll use the example of worth-based development to show where it goes wrong.

Back in 2003, I was extensively using agile development techniques for new projects to overcome the normal conflict between the client and the developers over what was in or not in the specification. It should never be forgotten that the process of developing a new project is also a process of discovery for the client. Requirements change continuously as more is discovered and agile is specifically designed to cope with a dynamic environment. However, it does have a weakness.

The weakness is that whilst the client gets more involved in the project, the cost of change is reduced (due to test-driven development) and the client gets more of what they wanted, in most cases the client still has little or no understanding of what the value of the system is going to be, other than the amount they have to spend on it. In response to this, I introduced a concept known as worth-based development.

The first step was to sit down with the client and work out a measure of worth for the system (i.e. machines sold, leads created, new customers found, improved forecasting etc). Once we had an agreed measure of worth, we used models to calculate the likely range of potential values that such a system would create.

If we calculated that the potential was high enough and the risks acceptable then we would offer to build and operate the system in return for an element of the measurable value created. This was a no-win, no-fee mechanism of development and goes far beyond "pay for what you use" and into "pay a fraction of what you get out".

This approach had several effects :-

  1. If we didn't believe the project was likely to succeed then we wouldn't work on this basis and would instead charge on a more traditional model (hours billed etc). This in itself told the client something valuable about their project.

  2. If we both agreed, then immediately both parties were focused on maximising the value of the system rather than developing an arbitrary set of capabilities. If an opportunity arose that could maximise value, then it was easy for both parties to accept it.

Charging based upon a measure of the value created sounds obvious, but in practice it failed spectacularly.

In one case, we built a system which created leads for highly expensive equipment. The measure of value here was leads (i.e. we didn't control the sales process, which was entirely separate). We worked with the client, built the system and switched it on - it was a massive success.

In a few months, we had created tens of thousands of leads. Our charge was around 10 euros per lead and the client had racked up a sizeable bill. At this point I received a phone call.

The client asked me to switch the service off. I asked why, and whether they were happy with all the leads. The answer came back that they were delighted with the leads, lots of which were turning into sales, but could we still switch the system off because they had run out of budget.

Now this perplexed me because a single unit of the equipment sold for thousands of euros and the client was selling to a good percentage of the leads (often with multiple units). Working with the client, we quickly showed them how the additional revenue vastly outstripped the cost; the system was simply generating profit and we had the figures to prove it.
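
To make the imbalance concrete, the back-of-the-envelope arithmetic looked roughly like this. The figures below are invented placeholders rather than the client's actual numbers, so treat it as a sketch of the calculation, not the calculation itself:

    # Illustrative comparison of the lead charge against the revenue generated.
    # All figures are hypothetical placeholders, not the client's real numbers.

    leads_generated = 20000        # "tens of thousands" of leads
    charge_per_lead = 10           # roughly 10 euros charged per lead
    conversion_rate = 0.05         # assumed fraction of leads that turned into a sale
    revenue_per_sale = 5000        # "thousands of euros" per unit sold

    total_charge = leads_generated * charge_per_lead
    total_revenue = leads_generated * conversion_rate * revenue_per_sale

    print(f"Cost of leads:      {total_charge:,.0f} EUR")
    print(f"Revenue from sales: {total_revenue:,.0f} EUR")
    print(f"Net gain:           {total_revenue - total_charge:,.0f} EUR")

On any reasonable set of assumptions, the net gain dwarfs the charge, which was precisely the point we made to the client.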

The problem, however, wasn't profit; they could see the system was creating it. The problem was that the cost had exceeded the allocated budget, and the client would have to go through either another planning cycle or an approval board to get more funds. Both options would take months.

I was stunned and asked whether they were really telling me to switch off a profit-making system because it had been too successful and its cost had surpassed some arbitrary budget figure. They answered "yes" and stated that they had no choice.

Even in a case where direct additional revenue and profit could be proved, the budgetary mechanisms of the organisation were not capable of dealing with variable costs because they'd never been designed that way. The situation is worse in the case of the utility charging model of cloud providers, where a direct measure of worth cannot always be shown. This problem of variable costs vs fixed budgeting & long planning cycles is going to recur for some organisations.

Not all enterprises are ready for the cloud.

Tuesday, September 08, 2009

Platforms and all that jazz ...

Back in early 2006/7, the transition of I.T. activities from a product to a service world was often described using three distinct layers - software, framework and hardware. These layers had specific acronyms :-

  • SaaS (Software as a Service)
  • FaaS (Framework as a Service)
  • HaaS (Hardware as a Service).

The terms marked the boundary between what a user was concerned with and what the provider was concerned with.

In the case of HaaS, the provider was concerned with hardware provision (as virtual machines) and the user was concerned with what was built on that hardware (the operating system, any installed framework components such as databases, messaging systems and all code & data).

In the case of FaaS, the provider was concerned with the framework provided (and all the underlying subsystems such as the operating system, the hardware etc) whilst the user was concerned with what they developed in that framework.

I summarised these concepts and the overall stack in a later blog post during 2007, "The SEP Boundary and more rough thoughts".
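
One way to picture that boundary of concern is as a simple mapping of who looks after what in each model. This is a minimal Python sketch of the idea only; the layer contents are my own shorthand, not a formal taxonomy:

    # A rough sketch of where the provider/user boundary sits in each model.
    # The layer lists are illustrative shorthand, not a formal taxonomy.

    stack = {
        "HaaS": {"provider": ["hardware (as virtual machines)"],
                 "user": ["operating system", "framework (databases, messaging)", "code & data"]},
        "FaaS": {"provider": ["hardware", "operating system", "framework"],
                 "user": ["code & data built within the framework"]},
        "SaaS": {"provider": ["hardware", "operating system", "framework", "application"],
                 "user": ["configuration & data"]},
    }

    for layer, concern in stack.items():
        print(f"{layer}: provider handles {', '.join(concern['provider'])}; "
              f"user handles {', '.join(concern['user'])}")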

Of course, the plethora of 'aaS' terms was foolish, especially since the industry had quickly embarked on what can only be described as the 'aaS' wars, a constant renaming of everything. Robert Lefkowitz (in Jul'07) warned that this was going to lead to a whole lot of aaS. He was right.

Today, those three central layers of the stack have settled on software (though this often yo-yos to application and back again), platform and infrastructure. However, this seems to have created its own problem.

Platform is being used at all layers of the stack, so we hear of cloud platforms for building IaaS and many other mangled uses of the concepts. The term infrastructure itself has many different meanings. Several SaaS offerings (despite many calling this layer applications, no-one wants to use the acronym AaaS) are described as core infrastructure and of course everything is built with software.

That which is, and remains, very simple has become fraught with confusion. Life seemed so much simpler back in 2007.

This is why I argued that the CCIF needed to focus on a common taxonomy, because we desperately need to talk about the same thing. Now, don't get me wrong, I'm more than aware of the pitfalls of a mechanistic definition of cloud and its inability to impart a wider understanding of the change that is occurring, but confusion is a much greater foe.

So, I strongly urge you to adopt the NIST definition for everyday practical use.

Monday, September 07, 2009

The weekly dose ...

Last week, Botchagalupe (irl John Willis) and I finally got around to doing the first of what promises to be a weekly podcast on cloud computing.

After meandering through the dangerous ground of sports fanaticism, we seem to have hit upon a weekly format of :-

  • The big question
  • What's Hot!
  • What's getting the most hype?

We're going to continue it this week but I'd welcome suggestions for the questions we should be discussing. Leave a comment or ping me on twitter.

Sunday, September 06, 2009

Is Cloud Computing Green?

The short answer is yes, the long answer is no.

The short answer deals with the better utilisation of computer resources within the data centre, the potential for allocating virtual resources to more energy-efficient services and the reduction of the massive sprawl of physical resources (and all the energy consumed in manufacturing, cooling and distribution). Overall, cloud computing offers the potential for more energy-efficient virtual resources.

The long answer concerns componentisation. The shift of common and well defined I.T. activities from products to standard components provided as online services should lead to a dramatic explosion in innovation.

Standardisation always creates this potential.

If you consider writing an application today, the reason why it's a relatively quick process is because we have relatively standardised and stable components such as frameworks, databases, operating systems, CPUs, memory etc. Imagine how long it would take to write a new application if you first had to start by designing the CPU. This is componentisation in action; the rate of evolution of a system is directly related to the organisation of its subsystems.
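
To make that concrete, here is a deliberately trivial sketch: a working, data-backed service in a handful of lines, possible only because the database, the HTTP stack and the operating system are standardised components that someone else has already built. The code is purely illustrative:

    # A trivial service assembled entirely from standardised components:
    # sqlite3 for storage and the standard library HTTP server for the interface.
    # Nobody here had to design a CPU, an operating system or a database first.

    import sqlite3
    from http.server import BaseHTTPRequestHandler, HTTPServer

    db = sqlite3.connect(":memory:", check_same_thread=False)
    db.execute("CREATE TABLE hits (path TEXT)")

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            db.execute("INSERT INTO hits VALUES (?)", (self.path,))
            count = db.execute("SELECT COUNT(*) FROM hits").fetchone()[0]
            self.send_response(200)
            self.end_headers()
            self.wfile.write(f"hit number {count}\n".encode())

    if __name__ == "__main__":
        HTTPServer(("localhost", 8080), Handler).serve_forever()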

Cloud computing is all about providing standard components as services (it's pure volume operations). The problem of course is that we will end up consuming more of these standard components because it's so easy to do so (i.e. in old speak, there is less yak shaving) and it becomes easier to build new and more exciting services on these (standing on the shoulders of giants).

We might end up providing more efficient virtual resources but we will end up consuming vastly more of them.
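
A simple, and entirely hypothetical, piece of arithmetic shows how this rebound works: even a large efficiency gain per virtual resource is swamped if consumption grows faster.

    # Hypothetical rebound arithmetic: each resource becomes more efficient,
    # but total consumption grows faster, so total energy use still rises.

    energy_per_unit = 1.0        # arbitrary energy units per virtual resource today
    efficiency_gain = 0.40       # assume 40% less energy per resource in the cloud
    units_consumed = 1000        # resources consumed today
    consumption_growth = 3.0     # assume consumption grows threefold

    energy_before = units_consumed * energy_per_unit
    energy_after = (units_consumed * consumption_growth) * (energy_per_unit * (1 - efficiency_gain))

    print(f"Energy before: {energy_before:.0f}")   # 1000
    print(f"Energy after:  {energy_after:.0f}")    # 1800, despite the efficiency gain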

In the short term, cloud computing will appear to be more green; in the long term it will turn out not to be. However, that's to be expected: our entire history of industrial progress continues to be about the constant creation of ever more complex and ordered systems, and the use of stable subsystems simply accelerates this process, whether they be bricks, pipes, resistors, capacitors, databases or whatever.

Whichever way you cut it, our constantly accelerating process of creating greater "perceived" order and the constant reduction of entropy (within these systems and the future higher ordered systems that will be created) ultimately requires one external factor - a greater energy input.

Cloud computing will be no different and our focus will have to be on the source of that energy.

Thursday, September 03, 2009

The cloud computing standards war

Over the last couple of years, I've consistently talked about the necessity for standards in the cloud computing space and the oncoming war that this will create. This was not some insightful prediction but simply the re-application of old lessons learned from the many industries which have undergone a transformation to a service world.

That standards war is now in full swing.

The principal arguments behind standards come from componentisation theory (the acceleration of innovation through the use of standardised subsystems) and the need for marketplaces with portability between providers (solving the lack of second sourcing options and competitive pricing pressures). The two main combatants at the infrastructure layer of the stack are shaping up to be Amazon with the EC2 API and VMware with the vCloud API.

Much of the debate seems to be focused on how "open" the standards are; however, there's a big gotcha in this space. Whilst open standards are necessary for portability and the formation of markets, they are not sufficient. What we really need are standards represented through open source reference models, i.e. running code.

The basic considerations are :-

  • A specification can be controlled, influenced and directed more easily than an open source project.
  • A specification can easily be exceeded, providing mechanisms of lock-in whilst still retaining compliance with a 'standard'.
  • A specification needs to be implemented and, depending upon the size and complexity of the 'standard', this can create significant barriers to having multiple implementations.
  • Open source reference models provide a rapid means of implementing a 'standard' and hence encourage adoption.
  • Open source reference models provide a mechanism for testing the compliance of any proprietary re-implementation.
  • Adoption and becoming de facto are key to winning this war.

So, in the war of standards, whilst the vCloud API has sought approval from the DMTF and formed a consortium of providers, the Amazon EC2 API has widespread usage, a thriving ecosystem and multiple open source implementations (Eucalyptus, Nimbus and OpenNebula).
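
Running code is also why the EC2 API is so well placed: the same client library can talk either to Amazon or to one of those open source implementations simply by being pointed at a different endpoint. A minimal sketch using the boto library (the credentials, endpoint and image id below are placeholders, and the Eucalyptus port and path reflect its usual defaults):

    # Sketch: one EC2 API client, two possible providers behind it.
    # Credentials, endpoint and image id are placeholders.

    from boto.ec2.connection import EC2Connection
    from boto.ec2.regioninfo import RegionInfo

    def connect(access_key, secret_key, endpoint=None):
        if endpoint is None:
            # Amazon's own EC2 service
            return EC2Connection(access_key, secret_key)
        # A private cloud exposing the same API (e.g. a Eucalyptus front end)
        region = RegionInfo(name="private", endpoint=endpoint)
        return EC2Connection(access_key, secret_key, region=region,
                             is_secure=False, port=8773, path="/services/Eucalyptus")

    conn = connect("ACCESS_KEY", "SECRET_KEY")        # or connect(..., endpoint="cloud.example.com")
    reservation = conn.run_instances("ami-00000000")  # the same call, whichever provider answers
    print(reservation.instances)

The point is not the code itself but that multiple working implementations of the same API already exist, which is exactly what a specification alone cannot guarantee.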

There appears to be a lot of FUD over the intellectual property rights around APIs and a lot of noise over vCloud adoption. You should expect this to heat up over the next few months because these early battles are all about mindshare and will heavily influence the outcome.

However, whilst VMware has struck boldly, it has exposed a possible Achilles heel. The only way of currently implementing vCloud is with VMware technology; there is no open source reference model. If Reuven is right and Amazon does 'open up' the API, then Amazon have a quick-footed route to IETF approval (multiple implementations) and can turn the tables on vCloud by labelling it a "proprietary-only" solution.

Of course, VMware could pre-empt this and go for an open source route, or even attempt to co-opt the various open source clouds into adopting their standard. I'd be surprised if they weren't already trying to do this.

This space is going to get very interesting, very quickly.

Wednesday, September 02, 2009

That Tesla feeling ...

When it comes to the modern electricity industry, Nikola Tesla is undoubtedly one of the most significant figures in its history. His work pioneered alternating current electrical power systems, including the AC motor and multi-phase systems for electricity distribution. There are none who would compare in terms of contribution.

By contrast, Thomas Edison was vehemently opposed to the AC system, even going so far as to publicly electrocute animals to demonstrate its dangers. However, the average person today would likely regard Edison as the "father of modern electricity", in much the same way they might credit him as the inventor of the electric light bulb (as opposed to Joseph Swan).

First, be in no doubt that Edison made enormous contributions to these and many other fields. The question that really needs to be asked is: how did Edison become so strongly associated with a field when the contributions of others were equally as large, if not greater?

I often nickname this situation a "Tesla moment": a point in time where the noise generated by others far outweighs the actual contributions made. Nikola Tesla's entire life seems to have been spent on the wrong side of a prolonged Tesla moment. In my view, he has never really achieved the recognition he deserves.

So why do I mention this? Well, to some extent Canonical has had its own trivial Tesla moment and, to be honest, this irks me. Canonical, for those of you who don't know, is the company that sponsors and supports Ubuntu - the world's fastest-growing Linux distribution. In the cloud computing space, Canonical has made some bold moves, including :-

  • The first distribution to select KVM as its official hypervisor.
  • Launch of officially supported images on Amazon EC2 (2008).
  • Integration of Eucalyptus into the distribution (April'09, Ubuntu Enterprise Cloud), making us the first and only distribution to provide users with a simple way of creating their own clouds that match the Amazon EC2 API (the de facto standard for infrastructure provision in the cloud).
  • The introduction of support, training and consultancy services targeted at building private clouds.
  • The introduction of officially supported machine images which will run in both public and private cloud environments across different hypervisors.

We've been working in the background with a number of different partners and we have several announcements aimed for the next release of Ubuntu. Overall, a lot of work has gone into making Ubuntu Server Edition an easy way to get started with cloud computing.

So, you can guess my disappointment that Canonical was not included in the list of 85 vendors shaping the cloud.

Obviously we need to create more noise. However, we're not going to do this by adding vapour; there's enough of that already in the cloud world.

Since UEC is freely available, open sourced and doesn't require subscriptions for security updates and patches, there are many people building Ubuntu clouds with whom we have no contact. Hence, I'd like to hear from you about how the community is using UEC.

Ping me on twitter.

Tuesday, September 01, 2009

Cloud definitions ... will it ever end?

At OSCON, I highlighted the problem with many of today's cloud definitions and the attempts to pigeonhole cloud computing as a discrete technology, distribution or billing mechanism.

Such definitions unnecessarily narrow the richness of the field, since cloud computing represents a transition, a shift of many I.T. activities from a product to a service based economy, caused by a quartet of concept, suitability, technology and a change in business attitude.

Whilst you can certainly describe some of the mechanics of cloud computing (NIST's latest definition does an excellent job of this), the mechanics do not provide the entire picture. It would be like describing the industrial revolution as :-

"The industrial revolution is a model for enabling convenient, consistent and high speed production of goods that can be rapidly adapted to meet consumer needs. The model promotes availability of goods and is composed of several essential characteristics (use of machinery, higher production volumes, standardised quality, centralised workforce) ..."

Such a definition would fail to capture the fundamental transition at play, the history and the potential consequences of this change. It would fail to provide any understanding of what is happening.

An example of this lack of understanding and hence confusion would be the latest debate over what is or is not cloud. This has recently been re-ignited with Amazon's launch of VPC ("virtual private cloud").

In a service economy, a service can be provided by any party and the total of the services provided to an organisation may include both internal and external resources. For example, a company with its own cloud (provided with its own infrastructure, as in the case of a UEC environment) may combine these internal resources with external resources from a provider such as Amazon EC2. This is commonly called the hybrid model.

However, this is a service world and we shouldn't confuse provision with consumption. For example, the company in question is at liberty to consume such resources internally (as a private cloud) or to provide those combined resources to others (as a public cloud).

A quick glance at the electricity industry will show you that there exists a wide mix of internal and external resource providers and different models for consumption and provision. I can build my own power generator, top up with the national grid or even sell back to the national grid. The same myriad of different options exists within cloud computing.

Amazon's recent announcement is important because it further enables one of these options. For example, a company could choose to build a private cloud (for internal consumption) which consists of both internal (as in UEC) and external (as in Amazon's VPC) resources. Of course, there are many other ways that this same result can be achieved but none have Amazon's momentum in this field.
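
A sketch of that combination, with provision and consumption modelled separately (the class names and fields below are mine, purely for illustration):

    # Illustration only: provision (who supplies the resources) is modelled
    # separately from consumption (who is allowed to use them).

    from dataclasses import dataclass

    @dataclass
    class ResourcePool:
        name: str
        provision: str    # "internal" (e.g. a UEC environment) or "external" (e.g. Amazon VPC)

    @dataclass
    class Cloud:
        pools: list
        consumption: str  # "private" (internal use only) or "public" (offered to others)

    # A private cloud built from both internal and external provision
    hybrid = Cloud(pools=[ResourcePool("own UEC environment", "internal"),
                          ResourcePool("Amazon VPC resources", "external")],
                   consumption="private")

    print([(p.name, p.provision) for p in hybrid.pools], hybrid.consumption)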

The debate over whether it is or is not cloud computing is simply ... immaterial.