Saturday, August 28, 2010

The bets ...

Every now and then I make bets about the future, a gamble on the actions of individual players.  I thought I'd write down some of the outstanding bets I have, obviously leaving out the counterparty & how much it is for (which is normally a cup of tea or a pint of beer). Naturally these are fairly reckless [I like to gamble] and whilst I don't expect to win, I do expect to get the general direction of travel roughly right.

[2006] : The Open Source Bet.
By the end of 2015, it will be accepted wisdom that open source provides the only viable mechanism of creating competitive markets in the utility computing world. It will dominate this space and be considered the norm.

[2006] : The JS Bet.
By the end of 2015, JavaScript will be seen as a rapidly growing enterprise language for developing entire applications, both client and server side.

[2007] : The Broker Bet.
By the end of 2013, we will see adverts for computer resource brokers.
[LOST - alas it looks like it will be end of 2014 / 2015. Close but no cigar, a fail is a fail.]

[2008] : The Three Step Bet.
By the end of 2013, Oracle will have bought Sun in order to get into the cloud space, RedHat will have bought Novell and Microsoft will have bought Canonical.
[LOST - Oracle did buy Sun but RedHat wavered on Novell, without which the triggers for a Microsoft / Canonical deal were less likely to fire.]

[2009] : The Escape Button Bet.
By the end of 2014, VMWare will have divided into two companies, one focused on existing virtualisation technology (IaaS Group) and the other focused on PaaS technology. The IaaS group will have been sold off to another company.

[2010] : The Rackspace Bet.
By Aug 2013, Rackspace will be providing EC2/S3 APIs alongside its own.
[LOST to +James Watters. James argued the team at Rackspace were culturally against such adoption]

Added Friday 25th March 2011
Rather than creating new lists, I thought I'd extend this list.

[2011] : The Apple Bet (@GeorgeReese & @mthiele10)
By the end of 2017, Apple will have or will be in the process of filing for protection under chapter 11 of the U.S. bankruptcy code.

Added Friday 30th July 2012
[2012] : The Public Public bet (@lmacvittie)
By the end of 2016 the common view (as in held more often than alternatives) will be that the future is hybrid as in public / public and that private cloud will be seen as niche.

[2012] The Inertia bet (@jeffsussna)
By mid 2014, a competitive market based around an open source technology which contains at least four IaaS providers (each with >100K servers) that have initiated a price war with AMZN will exist or by 2020 IBM, HP and Dell's server infrastructure business will be in severe decline verging on collapse (unless they have previously exited those businesses).

Added Friday 6th August 2012
[2012] The Modified VMWare Bet (@botchagalupe)
By the end of Feb 2014, VMWare will have :-
1) Acquired Basho or an equivalent company providing distributed storage capabilities to Riak CS
2) Launched an open source IaaS offering.
[LOST to +John Willis. Well, VMWare did acquire Virsto and has adopted a closer relationship with OpenStack, providing support for it, but ... a fail is a fail.]

Added Monday 15th October 2012
[2012] The Amazon Two Factor Market Bet (@jeffsussna)
If GCE (Google Compute Engine) launches at scale with success then by end of 2015 Amazon's AWS consoles will have extended to enable you to control your GCE environment.

Added Friday 19th July 2013
[2013] The Dead Duck Bet (@cloudbzz, @EdLeafe, @kylemacdonald)
By end of July 2016, either OpenStack will be flourishing through provision of a massive market of AWS clones or the project and its differentiate-from-Amazon strategy will be widely seen as a dead duck.

For clarification:-
1. AWS clone does not mean that all AWS services are provided but instead the market is focused solely on providing as many AWS compatible services as possible. The differentiation strategy will have been abandoned or the project will fail.

2. The counter bet is that OpenStack won't be a dead duck nor will there be a massive market of AWS clones built on OpenStack. In this circumstance, the differentiation strategy of OpenStack will be seen as successful.

Added Monday 9th September 2013
[2013] The Highly Aggressive Smart Phone and Tablet Collapse Bet
(@reverendted, @pwaring )
By the end of 2016, numerous press articles will have proclaimed the future death of smart phones and tablets.  By the end of 2018 both categories will be widely viewed as 'dead markets' and expected to become rapidly niche by end of 2021. NB. The original dates were 2018, 2020 and 2025 respectively but in this case I took a hyper-aggressive stance rather than my normal reckless one.

[2013] The It's Alive! Bet (@crankypotato)
By the end of 2023, research labs will have an entirely biological clock including display with no mechanical or electronic components.

Added Saturday 13th September 2014
[2014]  The "I'm not allowed to drive here anymore" Bet (@h_ingo and @lhochstein)
By end of 2030, at least one city with a population in excess of 1 million will have outlawed people driving cars.

[2014]  The "Silicon Valley isn't what it used to be!" Bet (@quentynblog)
By end of 2030, Silicon Valley will not be commonly ranked in the top 5 most innovative places in the world.

[2014] The Amazon doesn't lose Bet (@littleidea)
By end of 2025, neither MSFT nor Google will have wiped out AMZN, which will remain a dominant player in cloud infrastructure. 'Wiped out' will mean less than 15% of market share by use. 'Dominant' will mean in the top 2 of public infrastructure providers by use.

For clarification
* 15% then I buy the tea
* Still in the top 2, then @littleidea buys the tea
* 15% but out of top 2 then we just hang out drinking tea

[2014] The Disruption Bet (@crankypotato)
By end of 2030, the period 2025 to 2030 will be commonly considered (by the general press, population and business press) as far more 'disruptive' than the period 2010 to 2020.

[2014] The Big Data's gone horribly wrong Bet [@thinkinnovation]
By end of 2023, Big Data will have experienced its first casualties with more than five Big Data (product) vendors having gone bust or been sold in a fire sale. For clarification, each vendor must be a name well known to experts in the field in 2014.

[2014] The IoT didn't save us Bet [@ZakKissel]
By end of 2030, many of today's large Tech company leaders in IoT will be reported (in the common tech press) as facing severe disruption by new entrants. For clarification: 'many' will mean four or more companies regarded as tech leaders in the field in 2014; 'large' will mean greater than 5,000 employees.

Wednesday, August 18, 2010

Arguably, the best cloud conference in the world?

For those of you who missed the OSCON Cloud Summit, I've put together a list of the videos and speakers. Obviously this doesn't recreate the event, which was an absolute blast, but at least it'll give you a flavour of what was missed.

Welcome to Cloud Summit [Video 14:28]
Very light introduction into cloud computing with an introduction to the speakers and the conference itself. This section is only really relevant for laying out the conference, so can easily be skipped.
With John Willis (@botchagalupe) of opscode and myself (@swardley) of Leading Edge Forum.

Scene Setting
In these opening sessions we looked at some of the practical issues that cloud creates.

Is the Enterprise Ready for the Cloud? [Video 16:39]
This session examines the challenges that face enterprises in adopting cloud computing. Is it just a technology problem or are there management considerations? Are enterprises adopting cloud, is the cloud ready for them and are they ready for it?
With Mark Masterson (@mastermark) of CSC.

Security, Identity – Back to the Drawing Board? [Video 25:12]
Is much of the cloud security debate simply FUD or are there some real consequences of this change?
With Subra Kumaraswamy(@subrak) of Ebay.

Cloudy Operations [Video 22:10]
In the cloud world new paradigms and memes are appearing :- the rise of the “DevOps”, “Infrastructure == Code” and “Design for Failure”. Given that cloud is fundamentally about volume operations of a commoditized activity, operations become a key battleground for competitive efficiency. Automation and orchestration appear key areas for the future development of the cloud. We review current thinking and who is leading this change.
With John Willis (@botchagalupe) of opscode.

The Cloud Myths, Schemes and Dirty Little Secrets [Video 17:38]
The cloud is surrounded by many claims but how many of them stand up to scrutiny? How many are based on fact and how many are simply wishful thinking? Is cloud computing green, will it save you money, will it lead to faster rates of innovation? We explore this subject and look at the dirty little secrets that no-one wants to tell you.
With Patrick Kerpan (@pjktech) of CohesiveFT.

Curing Addiction is Easier [Video 18:41]
Since Douglas Parkhill first introduced us to the idea of competitive markets of compute utilities back in the 1960s, the question has always been when would this occur? However, is a competitive marketplace in the interests of everyone and do providers want easy switching? We examine the issue of standards and portability in the cloud.
With Stephen O’Grady (@sogrady) of Redmonk.

Future Setting
In this section we heard from leading visionaries on the trends they see occurring in the cloud and the connection and relationships to other changes in our industry.

The Future of the Cloud [Video 29:00]
Cloud seems to be happening now but where is it going and where are we heading?
With J.P. Rangaswami (@jobsworth) of BT.

Cloud, E2.0 – Joining the Dots [Video 30:04]
Is cloud just an isolated phenomenon, or is it connected to many of the other changes in our industries?
With Dion Hinchcliffe (@dhinchcliffe) of Dachis.

The Questions
The next section was a Trial by Jury where we examined some of the key questions around cloud and open source.

What We Need are Standards in the Cloud [Video 45:17]
We put this question to the test, with prosecution Benjamin Black (@b6n) of FastIP, defence Sam Johnston (@samj) of Google and trial by a Jury of John Willis, Mark Masterson, Patrick Kerpan & Stephen O’Grady

Are Open APIs Enough to Prevent Lock-in? [Video 43:21]
We put this question to the test, with prosecution James Duncan (@jamesaduncan) of Joyent, defence George Reese (@georgereese) of Enstratus and trial by a Jury of John Willis, Mark Masterson, Patrick Kerpan & Stephen O’Grady

The Debates
Following the introductory sessions, the conference focused on two major debates. The first of these covered the “cloud computing and open source question”. To introduce the subject and the panelists, there were a number of short talks before the panel debated the impact of open source on cloud and vice versa.

The Journey So Far [Video 10:59]
An overview of how “cloud” has changed in the last five years.
With James Urquhart (@jamesurquhart) of CISCO.

Cloud and Open Source – A Natural Fit or Mortal Enemies? [Video 8:44]
Does open source matter in the cloud? Are the two complementary or antagonistic?
With Marten Mickos (@martenmickos) of Eucalyptus.

Cloudy Futures? The Role of Open Source in Creating Competitive Markets [Video 8:43]
How will open source help create competitive markets? Do “bits” have value in the future and will there be a place for proprietary technology?
With Rick Clark (@dendrobates) of OpenStack.

The Future of Open Source [Video 9:34]
What will cloud mean for open source development and for Linux distributions? Will anyone care about the distro anymore?
With Neil Levine (@neilwlevine) of Canonical.

The Debate – Open Source and the Cloud
 [Video 36:24]
Our panel of experts examined the relationship between open source and cloud computing.
With Rick Clark, Neil Levine, Marten Mickos & James Urquhart

The Future Panel followed the same format: first an introduction to the experts who would then debate where cloud is going to take us.

The Government and Cloud [Video 10:27]
The role of cloud computing in government IT – an introduction to the large G-Cloud and App Store project under way in the UK; what the UK public sector hopes to gain from a cloud approach, an overview of the proposed technical architecture, and how to deliver the benefits of cloud while still meeting government’s stringent security requirements.
With Kate Craig-Wood (@memset_kate) of Memset.

Infoware + 10 Years [Video 10:38]
Ten years after Tim created the term infoware, how have things turned out and what is the cloud’s role in this?
With Tim O'Reilly (@timoreilly) of O'Reilly Media.

The Debate – A Cloudy Future or Can We See Trends? [Video 50:12]
The panel of experts examine what's next for cloud computing and what trends they foresee.
With Kate Craig-Wood, Dion Hinchcliffe, Tim O’Reilly & JP Rangaswami

So, why "arguably the best cloud conference in the world?"

As a general conference on cloud, the standard and quality of the speakers was outstanding. The speakers made the conference; they gave their time freely and were selected from a wide group of opinion leaders in this space. There were no vendor pitches and no paid-for speaking slots, hence the discussion was frank and open. The audience themselves responded marvellously with a range of demanding questions.

It is almost impossible to pick a best talk from the conference because they were all great talks. There are real gems of insight to be found in each and every one and each could easily be the keynote for most conferences. In my opinion, if there is a TED of cloud, then this was it.

Overall, the blend of speakers and audience made it the best cloud conference that I've ever attended (and I've been to 50+). This also made my job as a moderator simple.

I'm very grateful to have been part of this and so my thanks goes to the speakers, the audience, the A/V crew who made life so easy and also Edd Dumbill (@edd), Allison Randal (@allisonrandal), Gina Blaber (@ginablaber) and Shirley Bailes (@shirleybailes) for making it happen.

Finally, huge thanks to Edd and Allison for letting me give a version of my Situation Normal, Everything Must Change talk covering cloud, innovation, commoditisation and my work at LEF.

Wednesday, August 04, 2010

Islands in the sky

I'm often asked how will the cloud develop to which I'll answer -"imperfectly, very imperfectly".

I was reminded of this through a long discussion with Benjamin Black, hence I thought I'd write something to explain my general thoughts on the problem. First, let me apologise as this will be a long post. Second, we need to start by recapping some basic concepts about risks. The barriers to adoption in cloud cover three basic forms of risk :-

Disruption Risks : Change to existing business relationships combined with issues around political capital and previous enterprise investment. It's often difficult to let go of that which we have previously invested in.

Transitional Risks: These risks are related to the shift from a world of products to a world of services and they include confusion over the models, trust in the service providers, governance of this service world, transparency from the providers and security of supply. Many of the transitional risks can be mitigated with a hybrid (private + public) cloud approach, a standard supply chain management technique. This approach has been used in many industries which have undergone a similar change, for example in the early decades of power generation it was common to combine public generation with private generators. Even today most data centres mix a variety of public suppliers with backup generators and UPS systems. Fortunately, these transitional risks are relatively short lived.

Outsourcing Risks: These cover lack of pricing competition between the new providers, lack of second sourcing options between providers, loss of strategic control to a specific technology vendor, lock-in and unsuitability of the activity for such service provision (i.e. it's not ubiquitous or well defined enough for such volume operations based service provision). The outsourcing risks can be reduced through the formation of a competitive marketplace of providers with easy switching between them and ideally the option to bring service provision back in-house. The outsourcing risks are long term.

For a competitive market to form, you need easy switching which means portability. The basic ingredients of portability include a choice of providers, access to your code and data from any provider and semantic interoperability between providers i.e. both the origin and destination providers need to understand your code and data in the same way. There is limited value in having access to your code and data if no other provider understands it and operates to provide the same functionality e.g. getting access to your data in salesforce is great but what do you do with it?

In such circumstances, there does exist a weaker form of syntactic interoperability, which means both providers can exchange data but the end result may not function in the same way and your data may not retain its original meaning. Often, this is where we see translation systems to convert from one system to another with the usual abundance of translation and semantic errors.

The ideal situation is therefore semantic interoperability, which generally means a common reference model (i.e. running code) which providers either operate or conform to. Unfortunately, common reference models come with their own risks.

Let us suppose you have a marketplace of providers offering some level of service at a specific level of the computing stack (SPI Model) and these providers operate to a common reference model. The model provides APIs and open data formats, giving you access to your code and data. You therefore have a choice in providers, access to your data and semantic interoperability between them. You have portability. BUT, if that common reference model is owned by a vendor (i.e. it's proprietary code) then that market is not free of constraint but instead controlled by the vendor. All the providers & consumers in that marketplace hand over a significant chunk of strategic control and technology direction to the vendor, who is also able to exert a tax on the market through license fees.

To reduce this loss of strategic control and provide a free market (as in free of constraints), then that common reference model must not be controlled by one party. It has to be open sourced. In such an environment, competition is all about operational efficiency and price vs QoS rather than bits. This makes intuitive sense for a service world, which is why I'm pleased OpenStack is following that route and I hope it will become the heart of a market of AWS clones. Obviously, you'll need different common reference models at different layers of the computing stack. Whilst only one is probably needed for infrastructure, you will need as many as there are competitive application marketplaces (CRM, ERP etc) in the software layer of the SPI model.

Before anyone cries the old lie of standardisation hampers innovation, it's worth remembering that utility service provision (which is what cloud is really about) requires volume operations which in turn requires a ubiquitous and well defined activity. Whilst the common reference models certainly won't be perfect in the beginning, they don't need to be, they only have to create "good enough" components (such as a defined virtual machine). They will improve and evolve over time but the real focus of innovation won't be on how good these "good enough" components are but instead what is built with them. This concept, known as componentisation, is prevalent throughout our industrial history and shows one consistent theme - standardisation accelerates innovation.

So everything looks rosy … we'll have the economics benefits of cloud (economies of scale, increased agility, ability to focus on what matters), competitive marketplace based around multiple providers competing on price vs QoS, the options to use providers or install ourselves or to mitigate risks with a hybrid option, "open" API & data formats giving us access to our code and data, open sourced common reference models providing semantic interoperability, "good enough" components for ubiquitous and well defined activities which will cause an acceleration of innovation of new activities based upon these components … and so on.

Think again.

In all likelihood, we're going to end up with islands in the cloud, marketplaces built around specific ways of implementing a ubiquitous and well defined activity. Don't think of "good enough" components but instead a range of different "good enough" components all doing roughly the same thing. Nuts? It is.

Hence, in the infrastructure layer you're likely to see islands develop around :-
  • EC2/S3 (e.g. core of AWS) including the open source implementations such as OpenStack, Eucalyptus and OpenNebula.
  • vCloud principally provided through VMWare technology.
  • a Microsoft infrastructure based environment.
  • OpenStack's own APIs, particularly if Rackspace implements them.
All of these will be providing their own versions of "good enough" units of virtual infrastructure. Within those islands you'll head towards multiple service providers or installations, a competitive marketplace with switching between installations and semantic interoperability based upon a common reference model. The open source projects such as OpenStack are likely to form assurance industries (think Moody's-style rating agencies, compliance bodies) to ensure portability between providers by comparison to the common reference model, whereas the proprietary technologies are likely to develop certification bodies (e.g. VMWare Express).

Between islands there will be only syntactic interoperability (with exceptions such as OpenStack which will try to span multiple Islands), which will mean that you'll require translation of systems from one island to another. Whilst management tools will develop (and already have started) to cover multiple islands and translation between them, this process is imperfect and a constant exercise in chasing different APIs and creating a lowest common denominator (as per libcloud). Of course, I wouldn't be surprised if the libcloud folk were hoping that as a community develops around them, then the providers will offer libcloud as a native API. Such command & conquer strategies rarely succeed.
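To make the "lowest common denominator" problem concrete, here is a minimal, hypothetical Python sketch (the provider names and operation sets are illustrative, not libcloud's actual API): a cross-island tool can only safely expose the intersection of what every island understands, so each extra island shrinks the portable surface.

```python
# Hypothetical sketch of the lowest-common-denominator effect across
# cloud "islands". Each provider class lists the operations its island
# supports; a multi-cloud abstraction can only expose the intersection.

class ProviderA:
    """Island A: create, destroy and snapshot operations."""
    operations = {"create_node", "destroy_node", "snapshot_node"}

class ProviderB:
    """Island B: create, destroy and resize, but no snapshots."""
    operations = {"create_node", "destroy_node", "resize_node"}

def portable_surface(*providers):
    """Return the operations every provider understands - the only
    surface a cross-island tool can offer without translation."""
    common = set.intersection(*(p.operations for p in providers))
    return sorted(common)

print(portable_surface(ProviderA, ProviderB))
# -> ['create_node', 'destroy_node']  (snapshot and resize are lost)
```

Each island-specific feature (snapshots on A, resizing on B) falls outside the portable surface, which is exactly why tools chasing multiple APIs tend towards the lowest common denominator rather than fully exploiting any one island.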

Given this complexity and since there will be multiple service providers within an island, it's likely that consumers will tend to stick within one island. If we're lucky, some of these islands might die off before the problem becomes too bad.

Of course, these base components could affect the development of higher order layers of the computing stack and you are likely to see increasing divergence between these islands as you move up the stack. Hence, the platform space on the vCloud island will differ from the platform space on the EC2 / S3 island. We will see various efforts to provide common platforms across both, but each will tend towards the lowest common denominator between the islands and never fully exploit the potential of any. Such an approach will generally fail compared to platforms dedicated to that island, especially if each island consists of multiple providers hence overcoming those general outsourcing risks (lack of second sourcing options etc). Maybe we'll be lucky.

So, the future looks like multiple cloud islands, each consisting of many service providers complying to the standard of that island - either vCloud, EC2/S3 or whatever. Increasing divergence in higher order systems (platforms, applications) between the islands and whilst easy switching between providers on an island is straightforward, shifting between islands requires translation. This is not dissimilar to the linux vs windows worlds with applications and platforms tailored to each. The old style of division will just continue with a new set of dividing lines in the cloud. Is that a problem?

Yes, it's huge if you're a customer.

Whilst cloud provides more efficient resources, consumption will go through the roof due to effects such as componentisation, long tail of unmet business demand, co-evolution and increased innovation (Jevons' paradox). Invariably one of the islands will become more price efficient i.e. there is no tax to a technology vendor who collects their annual license and upgrade fee through a drip feed process. It's this increased dependency combined with price variance which will result in operational inefficiencies for one competitor when compared to another who has chosen the more efficient island. The problem for the inefficient competitor will be the translation costs of moving wholesale from one island to another. This is likely to make today's translations look trivial and in all probability will be prohibitive. The inefficient competitor will be forced therefore to compete on a continual disadvantage or attempt to drive the technology vendor to reduce their taxation on the market.

The choices being made today (many are choosing islands based upon existing investment and political choices) will have significant long term impacts and may come to haunt many companies.
It's for these reasons, that I've recommended to anyone getting involved in cloud to look for :-
  1. aggressively commoditised environments with a strong public ecosystem.
  2. signals that multiple providers will exist in the space.
  3. signals that providers in the space are focused on services and not bits.
  4. an open source reference implementation which provides a fully functioning and operating environment.
In my simple world, VMWare is over-engineered and focuses on resilient virtual machines rather than commodity provision. It's ideal for a virtual data centre but we're talking about computing utilities, and it also suffers from being a proprietary stack. Many of the other providers offer "open" APIs but as a point of interest, APIs can always be reverse engineered for interoperability reasons and hence there is no such thing as a "closed" API.

The strongest and most viable island currently resides around EC2 / S3 with the various open source implementations (such as UEC), especially since the introduction of Rackspace & NASA's service-focused OpenStack effort.

I don't happen to agree with Simon Crosby that VMWare's latest cloud effort Redwood == Deadwood. I agree with his reasoning for why it should be, I agree that they're on shaky grounds in the longer term but unfortunately, I think many companies will go down the Redwood route for reasons of political capital and previous investment. IMHO I'm pretty sure they'll eventually regret that decision.

If you want my recommendation, then at the infrastructure layer get involved with OpenStack. At the platform layer, we're going to need the same sort of approach. I have high hopes for SSJS (having been part of Zimki all those years back), so something like Joyent's Smart platform would be a step in the right direction.

---  Added 19th August 2013

Gosh, this is depressing. 

Three years and 15 days later Ben Kepes (a decent chap) writes a post on how we're coming to terms with what are basically "islands in the clouds".

OpenStack followed a differentiation road (which James Duncan and I raised as a highly dubious play to the Rackspace Execs at the "OpenStack" party in July at OSCON 2010). They didn't listen and we didn't get the market of AWS clones. In all probability, if AWS compatibility had been the focus back in 2010 then the entire market around OpenStack could well have been much larger than AWS by now. But we will never know, and today OpenStack looks like it has almost given up the public race and is heading for a niche private role.

In his article, Ben states that companies never wanted "cloud bursting" - a term which seems to be a mix of 'live' migration (a highly dubious and somewhat fanciful goal to aim for which is more easily managed by other means) combined with the ability to expand a system into multiple environments.

Dropping the 'live' term, then both can be achieved easily enough with deployment and configuration management tools. One of the reasons why I became a big fan of Chef in '08/'09 (and not just because of my friend Jesse Robbins). This sort of approach is simple if you have multiple providers demonstrating semantic interoperability (i.e. providing the same API and the same behaviour) as your cost of re-tooling and management is small. It becomes unnecessarily more complex with more islands.
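The point about re-tooling cost can be sketched in a few lines of hypothetical Python (the function and endpoint names are made up for illustration, not a real tool's API): when two providers demonstrate semantic interoperability, expanding into a second provider is just a second call with the same recipe and no translation step.

```python
# Hypothetical sketch: with semantic interoperability (same API, same
# behaviour), one deployment recipe works against any provider; only
# the endpoint changes, so the cost of re-tooling is near zero.

def deploy(endpoint, recipe):
    """Apply a configuration recipe to a provider speaking the common
    API. In reality this would be a config management run (e.g. Chef)."""
    return {"endpoint": endpoint, "applied": list(recipe)}

recipe = ["install web server", "configure app", "start service"]

# Expanding a system into a second environment is simply a second call.
primary = deploy("provider-one.example.com", recipe)
burst = deploy("provider-two.example.com", recipe)

# Both providers end up applying the identical recipe.
assert primary["applied"] == burst["applied"]
```

With more islands, each additional environment needs its own translated recipe instead of a second call, which is where the unnecessary complexity creeps in.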

Anyway, that aside the one comment I'll make on Ben's post is the goal was never "cloud bursting" but instead second sourcing options and balancing of buyer / supplier relationship. Other than that, a good but depressing post.