Tuesday, November 04, 2008

Interoperability is not enough

Sam Ramji said: "My team's focus has been on making sure that this platform treats open source development technologies as first-class citizens", and here are some of the quoted examples:

  • A developer using the Eclipse IDE can write a C# application that runs on Windows Azure
  • Gallery, the leading PHP photo application, can access Windows Azure cloud storage
  • A blog engine hosted on Windows Azure can authenticate users with OpenID.
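
To make the storage example concrete, here is a minimal sketch (in Python rather than PHP, with a purely hypothetical account and endpoint, and authentication omitted): interoperability means any stack that speaks HTTP can read from the storage service, but the code is still written against one provider's service.

```python
# A hypothetical sketch of the storage example above: any language that
# speaks HTTP can fetch an object from a cloud blob store. The account
# name and URL are placeholders, not real Azure endpoints, and
# authentication has been omitted for brevity.
from urllib.request import urlopen

ACCOUNT = "example-account"  # hypothetical storage account
BLOB_URL = f"https://{ACCOUNT}.blob.example.net/photos/holiday.jpg"


def fetch_blob(url: str) -> bytes:
    """Fetch a single object over plain HTTP.

    Interoperable, yes -- but the URL scheme and semantics are still
    those of one particular provider's service.
    """
    with urlopen(url) as response:
        return response.read()


if __name__ == "__main__":
    photo = fetch_blob(BLOB_URL)
    print(f"fetched {len(photo)} bytes")
```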

All of this is excellent news, but as I've said many times before, what we require is portability (or what I used to call patration).

In a service world, lock-in is not solved by interoperability alone. You require portability of your code and data from a framework provided by one large computing vendor to another, or to your own machines. That is my basic minimum for being happy with a cloud service.

This can be achieved with either a closed stack adopted by many providers or an entirely open framework. In the cloud computing world, the frameworks (Azure, Google App Engine, Zembley, Jaxer, ReasonablySmart, 10Gen etc.) are the potential standards that would allow for portability and interoperability between these providers, and that is what you need in order to overcome the current lack of second sourcing options.
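
As a rough illustration of what portability demands, consider a sketch along the following lines (the interface and class names are entirely hypothetical, not any real provider's API): the application codes against a provider-neutral interface, and a second source is simply another implementation of that same interface, whether an alternative provider or your own machines.

```python
# A hypothetical sketch of portability via second sourcing: application
# code depends only on a provider-neutral interface, so the provider
# behind it can be swapped without rewriting the application.
from abc import ABC, abstractmethod
from pathlib import Path


class BlobStore(ABC):
    """Provider-neutral storage interface the application codes against."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...


class LocalBlobStore(BlobStore):
    """One implementation: storage on your own machines."""

    def __init__(self, root: str) -> None:
        self.root = Path(root)
        self.root.mkdir(parents=True, exist_ok=True)

    def put(self, key: str, data: bytes) -> None:
        (self.root / key).write_bytes(data)

    def get(self, key: str) -> bytes:
        return (self.root / key).read_bytes()


class InMemoryBlobStore(BlobStore):
    """A stand-in for an alternative provider offering the same service."""

    def __init__(self) -> None:
        self._blobs: dict[str, bytes] = {}

    def put(self, key: str, data: bytes) -> None:
        self._blobs[key] = data

    def get(self, key: str) -> bytes:
        return self._blobs[key]


def archive_photo(store: BlobStore, name: str, image: bytes) -> None:
    """Application logic written against the interface, not a provider."""
    store.put(name, image)


if __name__ == "__main__":
    # The same application code runs against either implementation.
    for store in (LocalBlobStore("/tmp/photos"), InMemoryBlobStore()):
        archive_photo(store, "holiday.jpg", b"\xff\xd8...")
        assert store.get("holiday.jpg").startswith(b"\xff\xd8")
```

The point of the sketch is not the code but the second implementation: without an alternative provider of the same service, the common interface buys you nothing.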

In the service world, specifications and open standards are not enough; standards need to be actual pieces of operational code. Whilst a "standard" can be a closed technology, that obviously makes every participant in the marketplace dependent upon the technology vendor who owns that "standard".

If you're going to compete on service, compete on service, but don't try to convince us either that a proprietary technology is open because it uses some open standards, or that a proprietary technology doesn't create lock-in in the service world.

There will always be some CIOs who rush headlong into a gilded cage, but I suspect most will be considering how to deal with second sourcing issues.

2 comments:

Neil Mosafi said...

It's a good aim but it's way too early in the lifecycle of cloud computing for this to be an issue, I think.

Innovation will slow down if everyone has to agree a standard first. Let portability come once the bulk of innovation has taken place... people will notice the commonalities between the cloud service providers and come up with a solution, whatever the case.

I can't see cloud computing being used for mission-critical applications for a long time. Until then, it's probably worth the risk to move non-critical apps to a single-vendor cloud, even if just to make more room in your private data centre for those mission-critical apps! As long as you can get at your data in some raw format, it shouldn't be a problem.

Also, like you say, allowing people to host their own "private clouds" using proprietary technology, or allowing other hosting providers to run the stack (such as Rackspace hosting Microsoft's Azure), would be suitable.

Cheers
Neil

swardley said...

Thanks for the comment Neil.

"Innovation will slow down if everyone has to agree a standard first" - it depends upon whether you are talking about provider or client innovation.

The provision of common activities as standard internet components should accelerate innovation, not slow it down. This is a basic tenet of componentisation theory: the rate of evolution of any system is dependent upon the organisation of its subsystems.

Whilst "allowing people to host their own 'private clouds' using proprietary technology" is suitable and the issue of portability can be resolved using a proprietary stack, you need to be aware of the impact of this in terms of strategic control, pricing competition and hence second sourcing. The problem with a proprietary stack at the framework layer, is that whilst it will solve the issues of portability and benefit from rapid innovation through componentisation, it does mean that all parties become dependant upon that framework. If large enough it can create a powerful method of controlling the future development of the internet.

Lastly, portability requires much more than simple access to raw data. Downloading terabytes of data from a proprietary stack gets me precisely terabytes of data, which is often pointless unless I can recreate the context in which that data existed.

Portability requires an alternative provider of the same service and interoperability of the services.