Thursday, July 19, 2007

Competition, not greed, is good.

Utility computing takes the idea of utility energy provision and applies it to the world of IT, so that companies buy computing resources in much the same way that they buy electricity - charged according to metered usage.

This market is growing, and will continue to grow, in three distinct areas:

SaaS (Software as a Service): where entire applications are provided on a utility basis.

FaaS (Framework as a Service): where an application development and deployment environment or framework is provided on a utility basis.

HaaS (Hardware as a Service): where raw virtual machines are provided on a utility basis.

An ideal situation for any company is one where multiple providers of the same service exist (known as common service providers, or CSPs) and where the company can switch between them. Switching between CSPs will often be based upon price and quality of service. As long as no lock-in exists and there are multiple providers of the same service, a competitive utility computing market should form.
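The switching behaviour described above can be sketched in a few lines of Python. Everything here - the `CSP` class, `choose_csp`, the example providers and their prices - is hypothetical, purely to illustrate how price and quality of service drive the choice once providers are interchangeable:

```python
from dataclasses import dataclass

@dataclass
class CSP:
    """A hypothetical common service provider offering the same standard service."""
    name: str
    price_per_cpu_hour: float  # the price component of the switching decision
    uptime_sla: float          # the quality-of-service component, e.g. 0.999

def choose_csp(providers, min_sla):
    """Pick the cheapest provider that meets the required quality of service.

    With multiple interchangeable providers and no lock-in, re-running this
    choice whenever prices change is exactly the competitive pressure the
    post describes."""
    eligible = [p for p in providers if p.uptime_sla >= min_sla]
    if not eligible:
        raise ValueError("no provider meets the required SLA")
    return min(eligible, key=lambda p: p.price_per_cpu_hour)

market = [
    CSP("alpha", price_per_cpu_hour=0.10, uptime_sla=0.999),
    CSP("beta",  price_per_cpu_hour=0.08, uptime_sla=0.995),
    CSP("gamma", price_per_cpu_hour=0.12, uptime_sla=0.9999),
]
print(choose_csp(market, min_sla=0.999).name)  # prints "alpha"
```

Note that relaxing the SLA requirement immediately changes the winner on price - which is the whole point of a market with no lock-in.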

This idea of a competitive utility computing market is viable at the SaaS, FaaS and HaaS levels. Each level is complementary to the others - a SaaS application can be built upon a FaaS environment, which can in turn run on HaaS infrastructure. Each level is about achieving greater economies of scale and greater reduction of risk than any individual company can achieve alone.

The issues and needs of a competitive utility computing market are also the same at each level - portability, multiple providers and agreed standards - and each level solves the same class of problems: single-source failure, access to resources, pricing competition, efficiency and exit costs.

In today's world the fastest way to achieve a standard is not through committee, conversation or whitepapers but through the release and adoption of running code - not just a paper standard but an operational means of achieving it. Hence such utility computing standards will only be achieved through the use of open source. This is the only way to achieve the level of interoperability required for easy transfer of code, data and processes without any one CSP being strategically disadvantaged relative to an owner of the standard (e.g. a product vendor).

Furthermore, since the open source software model minimises lock-in to a utility computing service by encouraging other providers to enter the market and host an equivalent platform, it provides the pressure for a competitive utility computing market to form, with competition based upon price (for example CPU, bandwidth and storage), quality (SLA, TTFR) and capacity.

For background reading on why a marketplace is more efficient than a monopoly, I'd suggest Adam Smith (1723-1790): "Monopoly ... is the great enemy to good management."

Or alternatively, for a less well versed argument, I'll be talking about this at OSCON.

4 comments:

Bert said...

While I certainly agree that the traditional standards bodies will fail here, I'm not certain OS offers the only path to a solution.

In some areas, for instance what you identify as HaaS, I'd argue no new standards are needed. We at 3tera were able to develop our AppLogic system without introducing any new APIs because we took as a design criterion that folks needed to be able to move their apps in and out of the system.

Other areas, like SaaS, are already political and messy. Facebook and Salesforce have shown that developers' primary concern is access to their user base.

swardley said...

Good comment.

The issue here is "move their apps in and out" of where?

EC2's lock-in doesn't come from being unable to move apps in and out of EC2 - it comes from the fact that there is no Google EC2, no IBM EC2, no somebody-else's EC2. There is an exit cost to that movement.

How providers manage the underlying infrastructure - whether it's virtualisation or something else - is of no interest to me from a user's point of view.

All I want is freedom of movement.

At the SaaS level, all I want to know is that my data (including configuration) will run with this provider or that, and that I can transfer it easily to another.

At the FaaS level, all I want to know is that my data and application will run with this provider or that, and that I can transfer it easily to another.

At the HaaS level, it's machine images.

What is key with standards is knowing that if my machine image is running on this HaaS environment, or my app is running on this FaaS environment, or I'm using this SaaS, then it conforms to a standard which enables me to switch to another provider and it will still work - no surprising glitches, no setup, no exit fees and no uncertainties.

At the HaaS level, the standards needed are few. At the FaaS level, there are standards per framework. At the SaaS level there would be standards per type of application.
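As a toy illustration of what a shared standard at one level buys you, here is a minimal sketch of checking a machine image descriptor against a common HaaS standard. The descriptor fields shown are invented for illustration - this is not any real specification - but the idea is that any provider accepting the same fields can run the same image:

```python
# Hypothetical field set a shared HaaS standard might require of every
# machine image descriptor (invented for illustration, not a real spec).
REQUIRED_FIELDS = {"image_format", "architecture", "cpu", "memory_mb", "disk_gb"}

def conforms(descriptor: dict) -> bool:
    """True if the descriptor carries every field the shared standard requires.

    Conformance is what makes switching providers a non-event: the same
    descriptor runs anywhere, so exit cost is near zero."""
    return REQUIRED_FIELDS.issubset(descriptor)

image = {
    "image_format": "raw",
    "architecture": "x86_64",
    "cpu": 2,
    "memory_mb": 2048,
    "disk_gb": 20,
}
print(conforms(image))  # prints "True"
```

A descriptor missing required fields would fail the check - which is exactly the "surprising glitch" a standard exists to prevent.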

This all requires open standards. Providers at any level are unlikely to hand over strategic control of their business by adopting a proprietary standard, whilst users and businesses need to know that they have choice and can switch.

In an ideal world, I choose which level of the stack I want to operate at, and I have multiple providers of the same standard at that level.

Bert said...

I've found lock-in to be more an issue of vendor intention than of standards or open source.

Since I know more about HaaS (being a co-founder of 3tera), I'll pick on EC2 for a moment. EC2 uses the open source Xen hypervisor to host VMs and yet creates lock-in in a very traditional sense. The way in which you build images is specific to EC2. The way your VM accesses network resources and data is specific to EC2. And, for any application beyond a simple VM, you need to develop code that coordinates your images by writing to the AWS APIs, and that code is specific to EC2.

However, none of that is essential to building a utility computing service. Those layers of lock-in were choices made by the team as they built the service. Were they to open source the code tomorrow, EC2 would have only slightly less lock-in, because your apps would still be tied to the EC2 platform.
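One common mitigation for the provider-specific coordination code described here is to keep it behind a neutral interface, so that only a thin adapter is tied to any one CSP. A minimal sketch - the interface and both provider classes are hypothetical stubs, not real APIs:

```python
from abc import ABC, abstractmethod

class ComputeProvider(ABC):
    """Hypothetical neutral interface: application code depends on this,
    never on any one provider's API."""

    @abstractmethod
    def launch(self, image_id: str) -> str:
        """Start an instance from an image; return an instance handle."""

class EC2Provider(ComputeProvider):
    def launch(self, image_id: str) -> str:
        # Stand-in for provider-specific API calls; only this adapter
        # would need rewriting to leave the provider.
        return f"ec2-instance-of-{image_id}"

class OtherProvider(ComputeProvider):
    def launch(self, image_id: str) -> str:
        return f"other-instance-of-{image_id}"

def deploy(provider: ComputeProvider, image_id: str) -> str:
    # The application sees only the interface, so switching provider
    # means passing a different adapter - not rewriting the app.
    return provider.launch(image_id)
```

The lock-in hasn't vanished - it has been confined to the adapter, which is what keeps the exit cost small.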

We made different choices. We maintained the existing infrastructure model of load balancers, NAS, firewalls etc. and created a visual editor that lets you define infrastructure for your app exactly as you would have built it before - except the system creates it for you. This required extra work on our part but ensures there's no code for users to write, and it generates a portable descriptor for the app.

We also took the approach of enabling hosting providers to build services instead of building our own, specifically so developers would have a choice of vendor. A single command is all it takes to migrate distributed apps between data centers - as close to zero cost as we could make it. And, if someone decides they don't like our model, they can take their existing images and move them to traditional colo. No lock-in.
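The "portable descriptor" idea - the whole application infrastructure expressed as data that any conforming provider can instantiate - might look something like the sketch below. The format shown is invented for illustration; it is not 3tera's actual descriptor:

```python
import json

# A hypothetical portable app descriptor: the entire infrastructure
# (firewall, load balancer, VMs, storage) captured as plain data rather
# than provider-specific code.
app_descriptor = {
    "name": "webapp",
    "components": [
        {"type": "firewall",      "rules": ["allow tcp/443"]},
        {"type": "load_balancer", "backends": ["web1", "web2"]},
        {"type": "vm", "name": "web1", "image": "webserver", "cpu": 1},
        {"type": "vm", "name": "web2", "image": "webserver", "cpu": 1},
        {"type": "nas", "size_gb": 100},
    ],
}

def export_descriptor(descriptor: dict) -> str:
    """Serialise the descriptor for transfer to another data centre.

    Any provider that reads the same format can recreate the distributed
    app - which is what keeps the migration cost near zero."""
    return json.dumps(descriptor, sort_keys=True)

restored = json.loads(export_descriptor(app_descriptor))
print(restored == app_descriptor)  # prints "True"
```

Because the descriptor round-trips losslessly, "a single command to migrate" reduces to export at one provider and import at another.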

Any VC reading this is probably crossing our name off their potential investment list right now, but these are decisions we made because we feel they are in the best interest of our users and therefore in our best interest as well.

My point is that open source alone isn't sufficient to ensure mobility. It requires vendors who believe that mass adoption requires mobility and who seek to foster it instead of merely being dragged along.

Of course, that's just one geek's opinion!

swardley said...

Good comments again - this is appreciated Bert.

"Were they to open source the code tomorrow, EC2 would have only slightly less lock-in, because your apps would still be tied to the EC2 platform."

Absolutely you would - in much the same way, though higher up the stack, that a Java application is tied to the JVM. The issue is not being tied to one machine environment, one framework or one app, but whether there exist multiple providers of the same standard.

"none of that is essential to building a utility computing service."

Agreed - virtualisation is a technology which can be used to build such a service, but it is not in itself utility computing.

However, what you have created is not just virtualisation but the equivalent of JAR files for infrastructure at the HaaS level.

Rather than VCs crossing your name off the list: if you are willing to consider the open source route (and overcome ISPs' concerns about strategic control), then there are ways to rapidly increase adoption, disrupt alternative models at the HaaS level and create multiple viable businesses for yourself - providing a marketplace in which FaaS and SaaS can develop further.

If you are unsure about this, I'd take a look at the JBoss story - and then, if you wish, I'd be more than willing to discuss with you how this should be possible.

Regarding "requires vendors who believe that mass adoption requires mobility" - actually a marketplace is the only way that small and niche ISPs will be able to compete against the larger players entering their space.

You have a potential treasure trove, and I believe open source is your key to unlocking it - if you have the right strategy and plan.

Of course, that's just one businessman's opinion!