In 1976, Space Aliens worried about future business synergy created Amazon EC2 out of spare non-existent and infinite computer resources using quantum time dilation tunnelling effects ... or was it?
There are many myths about EC2; my favourite is the one about it being built to sell Amazon's spare capacity. It's always a good idea to talk to the people involved.
For the origins of Amazon EC2 - read here
For how it got built - read here
'Nuff said.
Wednesday, November 28, 2012
Competition, Strategy and Execution ... an OCI question.
I was asked a question recently, why did the OCI (Open Cloud Initiative) not demand an open source reference model? The answer is ... it does.
What OCI doesn't demand is that all implementations of a "standard" have to be open sourced; it allows for operational improvements and hence service competition between providers. For example, I might take an open source model for IaaS, make it work better somehow for my own IaaS and decide to keep those improvements proprietary.
Such competition based upon operational efficiency (as opposed to feature differentiation) is fine in a utility market, in fact it's even highly desirable in terms of reducing the probability of common failures. However, the market needs to ensure that semantic interoperability (a necessity for switching) between the providers is maintained. For this you need an assurance mechanism around a core model.
If the core model is open (i.e. the open version is a full, faithful and interoperable reference model) and the assurance system is built around this then you should get a free market (as in unconstrained). This is in contrast to a captured market which is based upon a proprietary reference model and hence is under the influence of a single vendor.
For example, take Cloud Foundry which provides an open source PaaS implemented by a number of providers. This is on the way to creating a competitive free market of providers based around an open source reference model. However, you still need a mechanism of assurance that semantic interoperability is maintained (i.e. innovation is constrained in some manner to operational improvements rather than differentiation, which itself limits switching between providers). Hence things like Cloud Foundry Core, which provide such an assurance mechanism, are critically important to the game.
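To make the assurance idea concrete, here is a minimal sketch of the kind of check a body like Cloud Foundry Core might perform: comparing a provider's declared capabilities against a mandatory core set. The manifest format and capability names are hypothetical, invented purely for illustration; this is not the actual Cloud Foundry Core mechanism.

```python
# Sketch of a core-compliance check: a provider must support every core
# capability before it can be certified as part of the market. All names
# below are hypothetical, for illustration only.

CORE_CAPABILITIES = {
    "runtime:ruby18", "runtime:ruby19", "runtime:java6",
    "service:mysql", "service:redis", "service:mongodb",
}

def is_core_compliant(provider_name, declared_capabilities):
    """Return True if the provider supports every core capability."""
    missing = CORE_CAPABILITIES - set(declared_capabilities)
    if missing:
        print("%s is NOT core compliant, missing: %s"
              % (provider_name, sorted(missing)))
        return False
    print("%s is core compliant" % provider_name)
    return True

# Extra capabilities are fine (operational improvement), missing core
# capabilities are not (they break switching between providers).
is_core_compliant("provider-a", ["runtime:ruby18", "runtime:ruby19",
                                 "runtime:java6", "service:mysql",
                                 "service:redis", "service:mongodb",
                                 "service:rabbitmq"])
is_core_compliant("provider-b", ["runtime:ruby19", "service:mysql"])
```

The design point is that providers compete above the core (operational efficiency, extra services) while the core itself stays uniform, which is what keeps switching possible.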
Alas, knowing how to play the game (e.g. create a market based on an open reference model, allow operational competition and create assurance) is merely necessary but not sufficient to create a functioning market. There's also the thorny issue of execution.
Whereas commoditisation is a consequence of competitive action (user and supply competition) of ALL actors and does not depend upon the execution of specific actors, the questions of centralisation (few players) or decentralisation (a broad market) is commonly a consequence of the play BETWEEN actors and it does depend upon execution by those different actors.
Hence whilst OCI embodies the principles of creating an unconstrained competitive market based on an open source reference model with operational competition between the players - that's only part of the battle. Whether such a competitive market will actually form, or instead a more centralised environment emerges which does not espouse any of the OCI values, depends upon how well the game is played.
In other words, any strategy or principle no matter how good or benign becomes relatively meaningless without good execution and good game play.
Monday, November 19, 2012
Monopolies, Commoditisation and Centralisation
Just read a good article by Mark Thiele on Why Monopolies and Commoditization would pollute the cloud. It reminds me of my 2007 talk when I said "open source would be needed to counter the threat of monopolies in this space".
However that was then, this is now and there's a world of difference between how the game should be played and execution. The article has a number of assumptions which need challenging. So let's go through each ...
"Cars have been around for over 100 years now, they must be commodity by now, right?"
Here the article is assuming that the process of commoditisation is time based, like diffusion (i.e. adoption over time). In reality it depends upon the actors in the market and competition. Hence evolution from the genesis of something to commodity is governed by ubiquity (how widespread something is, driven by user / demand competition) and certainty (how feature complete something is, driven by supplier / supply competition). See graph below.
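For readers without the graph to hand, a rough sketch of its shape can be drawn as below. The logistic curve and the stage boundaries are illustrative assumptions, not fitted data; the substantive point is that both axes are competition-driven measures and neither is time.

```python
# Illustrative sketch of the evolution graph: ubiquity (how widespread,
# driven by user / demand competition) against certainty (how feature
# complete, driven by supplier / supply competition). No time axis.
import numpy as np
import matplotlib.pyplot as plt

certainty = np.linspace(0.0, 1.0, 200)
ubiquity = 1.0 / (1.0 + np.exp(-12.0 * (certainty - 0.5)))  # illustrative S-curve

plt.plot(certainty, ubiquity)
plt.xlabel("Certainty (supply-side competition)")
plt.ylabel("Ubiquity (demand-side competition)")
for x, stage in [(0.05, "Genesis"), (0.3, "Custom built"),
                 (0.6, "Product (+rental)"), (0.9, "Commodity (+utility)")]:
    plt.axvline(x, color="grey", linestyle=":", linewidth=0.8)
    plt.text(x + 0.01, 0.05, stage, rotation=90, fontsize=8)
plt.title("Evolution: ubiquity vs certainty (no time axis)")
plt.show()
```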
The nut and bolt took almost 2,000 years to go from genesis (first creation) to late product and commodity (starting with Maudslay's screw cutting lathe). Electricity took 1400 years from the Parthian Battery to Westinghouse / Tesla and A/C utility provision. Telephony about 100 years and computing infrastructure about 65+ years. So, you cannot assume some linear relationship with time.
Secondly, in the above examples (Nut and Bolts, Electricity, Computing Infrastructure, Telephony) these systems have become components of something else (e.g. Machines, Consumer Electronics, Big Data System, Smart Phones).
In industries where the car is a component of some other value chain (e.g. Taxis, Fleets, On Demand hire services etc) then it would be worth looking at whether standardisation has or is occurring.
As the article points out, the car itself has many commodity components, but if the system you're examining is the top level system then you're always going to have branding and perceptions of other values which impact it. Gary Dahl's "Pet Rock" is probably the best loved example of associating a brand value with what remained a commodity ... it wasn't sold as a rock (which it was) but as a "Pet Rock".
A comparison of computing infrastructure to cars is one of comparing a component to a higher order system which has other values (i.e. personal branding, status symbol etc). If the article were to compare like for like then it would probably end up with a different conclusion, i.e. how many users of Amazon EC2 care or even know what hardware Amazon EC2 runs on? It's an invisible component, far down the value chain. The same can be asked of how many users know or care what specification of nuts and bolts is used to build their car.
"If we allow a few companies to push the technology to a true commodity business model"
First, it's not companies that drive things towards a commodity model but the interaction of users (demand competition) and suppliers (supply competition). The question of evolution (which for activities we call commoditisation) is separate from the question of centralisation / decentralisation and the two shouldn't be conflated.
It would have been relatively trivial for the hardware manufacturers to create a price war in the IaaS space around 2008-2010 in order to fragment the market by increasing demand (computing infrastructure is elastic) beyond the capability of one vendor to supply. The fact they didn't is their own fault and also one of the major factors why we might see centralisation.
In general:-
- The process of evolution (driven by demand and supply competition) is not time based but depends upon the overall interactions of ALL actors (users and suppliers). It is an inevitable process of competition.
- The question of centralisation / decentralisation varies with a number of different economic forces but it usually depends upon the actions and gameplay of SPECIFIC actors (suppliers). Often you will find that companies suffer from inertia to change (due to past success) and hence new entrants into a market that is commoditising can quickly capture it. This doesn't need to be the case though; the issue is usually one of executive failure by past giants and an inability to react.
Let me be absolutely clear here, commoditisation does not mean centralisation. There's a myth that it does, often used to create strawman arguments. These two issues of commoditisation and centralisation are entirely different things.
However, it's probable that commoditisation of various IT activities (nee cloud) will lead to centralisation due to failure of competitors within this space. You can't assume that commoditisation and centralisation occur hand in hand but in the case of IaaS it's likely.
Whilst "open source would be needed to counter the threat of monopolies in this space" still holds true, the actions since 2007 (particularly on differentiation) means the industry didn't counter the threat in the IaaS space. This didn't have to be the case, learning those lessons from HTTP and a more focused and earlier attack on becoming the open source AWS clone would have changed this. Unfortunately rather than a strong play, competitors have played a weak game and a period of monopoly / oligopoly looks destined to be the result.
Hence the shift towards utility services is occurring (driven by actions of all players), open source is the route to creating competitive markets (in the final state) but due to Amazon playing the game well and most competitors having poor gameplay (with the possible exception of Google) then we're likely to see a period of monopoly / oligopoly in IaaS (of Amazon and Google) for far longer than was necessary.
Fortunately, some (such as CloudStack, Eucalyptus and OpenStack groups like CloudScaling) appear to know how to play the game. So, I take the view that this won't be a permanent state of affairs and it will eventually work itself out in the IaaS space. We will eventually get those competitive markets based around open source IaaS or in the worst case scenario Government regulation. The downside, it'll take far longer than it needed to, by about 5-15 years (unless of course Amazon or Google decide to open up first or CloudStack suddenly becomes the focal point of intense competition e.g. VMware making a huge IaaS play based upon it).
We will eventually get there but as I've said, centralisation / decentralisation all depends upon how well the actors played the game and let's be frank - most haven't played it well. Luckily for all of us, CloudFoundry is playing a very smart game in the platform space, so that area seems less of a problem.
Whilst "open source would be needed to counter the threat of monopolies in this space" still holds true, the actions since 2007 (particularly on differentiation) means the industry didn't counter the threat in the IaaS space. This didn't have to be the case, learning those lessons from HTTP and a more focused and earlier attack on becoming the open source AWS clone would have changed this. Unfortunately rather than a strong play, competitors have played a weak game and a period of monopoly / oligopoly looks destined to be the result.
Hence the shift towards utility services is occurring (driven by actions of all players), open source is the route to creating competitive markets (in the final state) but due to Amazon playing the game well and most competitors having poor gameplay (with the possible exception of Google) then we're likely to see a period of monopoly / oligopoly in IaaS (of Amazon and Google) for far longer than was necessary.
Fortunately, some (such as CloudStack, Eucalyptus and OpenStack groups like CloudScaling) appear to know how to play the game. So, I take the view that this won't be a permanent state of affairs and it will eventually work itself out in the IaaS space. We will eventually get those competitive markets based around open source IaaS or in the worst case scenario Government regulation. The downside, it'll take far longer than it needed to, by about 5-15 years (unless of course Amazon or Google decide to open up first or CloudStack suddenly becomes the focal point of intense competition e.g. VMware making a huge IaaS play based upon it).
We will eventually get there but as I've said, centralisation / decentralisation all depends upon how well the actors played the game and let's be frank - most haven't played it well. Luckily for all of us, CloudFoundry is playing a very smart game in the platform space, so that area seems less of a problem.
"Innovation would be stifled"
Quite the opposite. "Innovation" is one of those awful words which means many different things. There are two aspects of innovation here - operational efficiency and the creation of higher order systems - and the two are not the same.
The commoditisation of any pre-existing activity which acts as a component leads to rapid increases in the genesis of higher order systems :-
- From Nuts and Bolts to Machines
- Electricity to Radio / Hollywood / Consumer Electronics / Computing
- Computing to Big Data and all the changes we're seeing today.
Hence commoditisation always increases innovation (as in genesis of higher order systems); this effect is known as componentisation (Herbert Simon).
Equally, innovation behind the interface doesn't stop i.e. electricity was standardised to an interface but innovation still occurs in the means of production (e.g. operational improvements and efficiency or new means of generation). This is the same with all manner of industries from finance to manufacturing (see float glass method of producing glass etc).
You cannot therefore state that commoditisation inhibits, limits or stifles innovation when historical evidence shows it not only allows for innovation in production but accelerates and enables innovation of higher order systems. Each of the major ages - industrial, mechanical, electrical, internet - is associated with the commoditisation of pre-existing activities.
"Drivers that make unique IT solutions critical"
The article is absolutely spot on that even in a mass market of commodity components there are always niches - same with electricity, finance, most manufacturing etc. You'd expect the same with computing. There will be highly profitable but small niches.
"There are just too many ways to (in this case) build that car"
One of the most important parts of any of the common cycles of changes (i.e. ages) is the disruption of past giants stuck behind inertia barriers due to past success. Disruption comes in two forms - there's the unpredictable, unforeseeable (e.g. a change in characteristic of a product such as disk drives) and then there's the predictable (e.g. a shift of an activity from product & rental services to commodity and utility services).
Both cause disruption, the former because it's unseen and hard to defend against, the latter because it's seen but the company is unable to react due to inertia (which is often embedded in the culture of the organisation).
We've seen this in every major age and there is nothing which suggests that cloud, which was predicted by Douglas Parkhill in his 1966 book The Challenge of the Computer Utility, will be any different.
The list of companies who believed that their industry would not be commoditised is long and glorious from the gas lamp companies of the past to almost certainly modern infrastructure companies today.
The problem for all is that their offering is just a component.
In summary
Mark's article is interesting and adds to the debate, but makes some hefty assumptions in my view and may even have fallen victim to many of the spreading myths. However, it's definitely well worth a read and a welcome change from some of the strawman arguments and schoolyard diatribe that I've been exposed to of late. It's a refreshingly sensible article.
Its general premise on the danger of monopolies is one I wholeheartedly agree with. The reason this danger exists though is not one of commoditisation itself but instead executive failure of competitors - from differentiation strategies to failure to effectively execute and in many cases failure to act.
Big Data
"Big Data" is occurring due to the increase in new activities producing un-modelled data combined with the cost of thinking about whether to store data exceeding the cost of storing. To be put it simply, it's cheap (or should be) to store everything.
It's like my "stuff" box at home but on steroids. I throw all sorts of bits and pieces into my "stuff" box because it's cheaper than taking the time to sort out what I should keep. I also do so on the assumption that "there's value or maybe there will be value in that stuff". In general this turns out to be a delusion, it's mainly junk but I can't help thinking that "there's a pony in that field".
Eventually the box becomes full and I'm faced with a choice. Buy a bigger box (scale-up), buy another box (scale-out) or just bin it anyway. I tend to do the latter. What I don't need is a bigger or better or more distributed "box" but instead a better "algorithm" for sorting out what has value and what doesn't.
A lot of "Big Data" seems to be about better "boxes" where in my honest opinion it should be focused on better "algorithms / models". I'm not against storing everything, especially when it's cheap to store data (i.e. distributed system built with commodity components etc) as you never know what you might find. However, that shouldn't be the emphasis.
Oh, as for my "stuff" box, StorageBod humorously raised the idea of using the attic. Looks like I've got an even bigger "stuff" box now, though I'm not sure that helps me. I'll have to decide whether I fill the attic with lots of "stuff" boxes or use it as a free for all. Maybe I'll need a cataloguing system?
Of course if I fill up my attic with stuff then I'll probably end up with some salesman telling me stories about how "Mr Jones found a lost lottery ticket" or "Ms Jones found an old master's" in their attics. I'll probably end up spending a shed load of cash on the "Attic Drone Detection, Investigation, Cataloguing and Treasure Seeking (ADDICTS)" system.
I know there's a pony in that field somewhere, I'm sure of it. Otherwise I wouldn't just put this stuff in a box marked "stuff" - would I?
Wednesday, November 14, 2012
Hi Ho Silver Lining
Having a plan to create a federated market based upon a common open source reference model is not something new - it's a pretty common idea (I gave the presentation linked above in 2007 and it wasn't a new concept then). But having a plan is not enough, execution matters.
In the platform space, Cloud Foundry seems to be leading that charge. They've even recently released a Cloud Foundry Core which enables them to provide users with some level of assurance between the different Cloud Foundry providers. This is all good stuff and yes there is competition with Google App Engine combined with the open source AppScale. However the approach of VMware towards creating a marketplace is enlightened in my view. They've got a good shot at making a large competitive market happen.
In the infrastructure space, the road has been more rocky with lots of mis-steps. However, I'm fairly bullish about CloudStack. Their focus on not differentiating from AWS but instead co-opting it (an AWS clone is fairly uniformly what I hear customers ask for), combined with membership of the ASF and the release of CloudStack 4.0, are all positives in my view. It bodes well for the future assuming they can grow and build a vibrant community.
The technology is also used in production in various places (30,000+ servers cited in one case), ISPs are coming online ... again, all good stuff. By not differentiating they also buy themselves time in the expected AWS vs GCE war as they can grow a large and vibrant ecosystem around the AWS ecosystem. Ultimately, if they can grow fast enough they might even exceed it. They have competition (Eucalyptus, OpenNebula, OpenStack etc) in this space but CloudStack seem to be taking a good shot at making this market happen.
Back in OSCON 2007 when I gave my keynote on federated markets and open source, I was concerned that these future industries would be dominated by proprietary environments. Companies had to pick up the idea of building federated markets around open source, they had to start working together and they had to execute. With projects like CloudFoundry and CloudStack, I'm less concerned these days about the long term. Both projects seem to understand where they need to go and neither has made a serious mis-step (e.g. failing to open source, going down an open core route, differentiating when they shouldn't, trying to be all things to all people, confused messaging on focus, major losses in community, unchecked issues around a collective prisoner dilemma).
They're both playing a reasonably good game, backed by well funded companies and are executing with a vision of creating a market in my opinion. For me, they are the silver lining in a cloud that at one point threatened a dark future for open source and consumers alike. This makes me happy - as in closer to 3 rules happy.
Thursday, November 08, 2012
On myths ..
Oh, I'm hearing lots of myths being spread about cloud ... again. Let me see if I can't nail a few.
On Amazon
AWS wasn't about selling spare capacity, it was built as a stand alone service.
On Utility
All activities would appear to evolve, the end state is one of commodity and utility provision. What is happening today is simply a shift from products to utility services. We happen to call this "Cloud".
On Centralisation
Utility does not mean centralisation (or consolidation to a few providers), the two are entirely different and governed by different economic forces. A utility service can be provided by a large number of different providers in a market.
On Open Source
A market of utility compute providers needs to solve the issue of semantic interoperability. In practice, to create a "free" (as in non constrained) rather than captured market, the technology must be provided through an open source reference model.
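To illustrate what that switching looks like in practice, here is a minimal sketch using Apache Libcloud's common abstraction; the credentials and the Eucalyptus endpoint are placeholders.

```python
# The same client code runs against different providers via a common
# abstraction - which is what semantic interoperability buys the user.
from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver

def list_running_nodes(provider, key, secret, **kwargs):
    """List node names using whichever driver the provider argument selects."""
    driver_cls = get_driver(provider)
    driver = driver_cls(key, secret, **kwargs)
    return [node.name for node in driver.list_nodes()]

# The same function works against Amazon EC2 ...
print(list_running_nodes(Provider.EC2, "ACCESS_KEY", "SECRET_KEY"))

# ... and against an open source EC2-compatible provider such as Eucalyptus,
# by pointing at a different endpoint (host and port are placeholders).
print(list_running_nodes(Provider.EUCALYPTUS, "ACCESS_KEY", "SECRET_KEY",
                         host="euca.example.com", port=8773))
```

The user's code is identical across providers; only the driver selection and endpoint change, which is exactly the property a free market needs for switching.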
Open Stack is guaranteed to win
So, can we assume that OpenStack will win because it plans to create a federated market based upon a common open source reference model ... err No. There's that little issue of execution.
Whilst an open source approach has all the right advantages, you cannot assume that this will be OpenStack for many reasons :-
- There are multiple competitors to OpenStack - CloudStack and Eucalyptus to name two. Each has the potential to build a market and competing ecosystem.
- The rate of innovation, customer focus and efficiency grows with the size of AWS's ecosystem, so critical to competition is building a bigger ecosystem. Without visibly co-opting the EC2 / S3 / EBS APIs, OpenStack puts itself at a disadvantage.
- It is likely that a price war will develop between AWS and GCE which will only increase demand (Jevons' Paradox; see the sketch after this list). If a competing ecosystem is not in place at this time, it will get left further behind.
- The scale advantage is not about efficiency alone but rate of innovation and customer focus. With big enough ecosystems then Amazon and Google can continually outstrip new competitors i.e. they will continually be further ahead and increasingly so.
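To make the Jevons' Paradox point concrete, here is a toy calculation: a minimal sketch assuming a constant price elasticity of demand greater than one. The elasticity value and volumes are invented for illustration, not market data.

```python
# Toy illustration of Jevons' Paradox in a price war: when demand is
# elastic (elasticity > 1), cutting price grows both demand and total spend.
ELASTICITY = 1.5  # assumed: a 1% price cut raises demand by 1.5%

def demand_after_price_cut(base_demand, price_cut_fraction):
    """Constant-elasticity demand response to a price cut."""
    price_ratio = 1.0 - price_cut_fraction
    return base_demand * price_ratio ** (-ELASTICITY)

base_hours = 1000000  # compute hours consumed at the old price (illustrative)
for cut in (0.1, 0.3, 0.5):
    hours = demand_after_price_cut(base_hours, cut)
    spend = hours * (1.0 - cut)  # total spend relative to the old unit price
    print("%.0f%% price cut -> %.2fx demand, %.2fx total spend"
          % (cut * 100, hours / base_hours, spend / base_hours))
```

Under these assumptions a 50% price cut nearly trebles demand and still grows total spend, which is why a price war expands rather than shrinks the market - and why a competing ecosystem needs to exist before the war starts.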
If you had asked me back in 2010 whether OpenStack would win this game - I would have said that in all probability ... yes. Rick Clark was key in the project and he knew the game. However, ask me that same question today and you'll get a different answer.
This isn't because the approach of building a competitive market around an open source reference model is wrong, but because of the execution of this project. Which is why I say they desperately need a Benevolent Dictator to get things sorted fast. Someone like Lew Tucker.
Rackspace going ALL - IN with OpenStack
I was alerted by a good friend of mine, Benjamin Black, that Rackspace had announced it was going "ALL - IN" with OpenStack and that it was going to compete with Amazon on service not scale.
Ok, this is potentially great news for OpenStack but that depends upon the play and intention at hand.
If Rackspace believes that there are enough companies setting up or wanting to set up as utility providers of infrastructure around OpenStack, then the move can make a great deal of sense. By enabling other companies to set up, Rackspace's focus would be on growing the entire ecosystem without being a dominant player in that market. This is actually essential if you want to try and become the exchange and / or marketplace for a broad utility market.
So let us assume that the focus in Rackspace has become :-
- build OpenStack into the reference model for a huge ecosystem (i.e. bigger than AWS)
- manoeuvre to become the exchange / marketplace / assurance body for that ecosystem
... then that's grand. It's a bold move but a good play.
By doing so, it would also make it easier for Rackspace to co-opt OpenStack competitors where such action is beneficial as it removes the whole differentiation and on ramp to Rackspace argument. It may also mean that Rackspace will push the technology even faster as they increasingly depend upon a broad ecosystem of utility providers. It will also enable them to introduce some form of certification for OpenStack (much as Google has done with CTS) in order to overcome the collective prisoner dilemma (everyone within the group differentiating). This latter part is required for assurance reporting across multiple providers (and an essential part of an exchange).
So the models for Rackspace would become :-
- Focus on growing the ecosystem rapidly
- Build a certification and ratings agency (e.g. a Moody's model) to ensure compliance of members offerings to OpenStack (essential for switching)
- Build a marketplace for a market of OpenStack providers (e.g. a uSwitch model)
- Build a computing exchange (e.g. where the real money will be)
Add into this some service and technical support revenue (through helping companies get going with OpenStack) and this would all be very reasonable. By also growing OpenStack in the enterprise and helping companies build their own private OpenStack clouds (whether sensible or not), there is the potential to grow future customers of this market by providing a natural route for transition.
Whilst the play is obvious and has been repeated umpteen times over the years (in 2007 we were talking federated markets etc), it's potentially a good move because no-one has yet effectively built that market, marketplace, exchange and assurance body. Of course, it'll bring them straight into a collision course with RightScale, Enstratus, Canonical and others who have been gearing up for this space.
It's going to be a huge uphill battle - you've got AWS vs GCE to contend with, you'll need to move fast, you'll need to encourage OpenStack members to bet big as utility providers, you'll need to co-opt competitors, you'll need to manage the potential conflicts and you'll need to get that market setup within the next 12 months.
However, it gives some hope.
Of course, I'm assuming that this is what they're actually planning. If instead their plan is to get enterprises building OpenStack as an on ramp to Rackspace services which they'll "differentiate" on some spurious basis rather than competing on scale, with little or no focus on the ecosystem, marketplace, exchange etc ... then ... oh well.
So, the dead duck might just have made a murmur or alternatively it was gas escaping ... can't tell which yet. Will they successfully achieve this? Will they be able to climb that mountain?
Well, if you want my personal opinion ... no. Looking at what has happened and the choices they've made, I take the view that they lack the force of leadership necessary to play this game. Of course, that's assuming they're even playing the right game.
Wednesday, November 07, 2012
These US elections are more complex than I realised.
The internet is all agog with talk of Nate Silver and how he got the election right. So, I went to have a look and he seems to have called the race at 313 (Obama) / 225 (Romney). That seems very impressive to me.
However, I hate to be picky but whilst the prediction was close it doesn't seem to be actually right. It seems the result will end up 332 / 206 when Florida calls (assuming Obama wins). I've been told that actually Nate predicted a broader range and that 313 / 225 was the average - so he was hedging.
That's ok then. Still, it's very impressive and yes the twitter verse is flowing with #natesilverfacts
Now, as impressive as Nate Silver's prediction was, it seems that Drew Linzer has been predicting an Obama win of 332 / 206, with 90%+ certainty and the right range, since June.
Hang on - 332 / 206 - that's what seems to be happening. That's no hedge, that's just oh wow. Has Drew Linzer really nailed it? Since June?
Every state, every forecast - on the money. That's real wow. That's mega mega wow with wow sauce on.
That's more than just impressive that's so impressive that there must be ... wait ...
Where's the #drewlinzerfacts?
Hint : There aren't any.
Now, both Nate Silver and Drew Linzer have certainly made exceptional predictions here and despite the hedging on the overall count on Nate's part, his overall predictions on % vote for each candidate squeaked past Drew i.e. Nate Silver was more accurate in 26 States whereas Drew was more accurate in 24 States.
But why the silence on Drew Linzer? If Florida goes the way expected then :-
#NateSilver can beat the sun in a staring contest but only Drew Linzer can make it run and hide #drewlinzerfacts
OK, this must be some sort of special US Election thing that I'm not getting, seeing as I'm a Brit. I'm a huge fan of people who stick their necks out, don't hedge and use data. Linzer is a star.
Tuesday, November 06, 2012
On OpenStack and Dead Ducks ...
I received a message recently claiming that I only referred to OpenStack as a dead duck because it disagreed with my hypothesis on evolution, which was itself unscientific and quasi religious.
This is a very misguided view, so I thought I'd better respond.
On the hypothesis of evolution.
Back between 2005-2007, I collected a reasonable amount of data (around 6,000 data points from telephones to televisions to engineering components to electricity to banking instruments to ... a long list) covering a hundred+ years and discovered a process of how things evolve (as opposed to how things diffuse). This process which is driven by user and supply competition is described in the diagram below which covers both the correlation and causation of the change.
During 2009-2011, I used a variety of prediction tests to confirm the consequences of the model. I'm happy to now call this a weak hypothesis, supported by historical data and prediction tests, and even published as part of a peer reviewed journal article.
Does this mean it is true? Of course not. It means it's a weak hypothesis. The model describes the journey of any activity (a thing we do) from genesis to custom built examples to product (and rental services) to commodity (and utility services) - such as the evolution of computing infrastructure from the Z3 to EC2 (and its equivalents) today.
Graph of Evolution
For those still in confusion, the above is NOT a diffusion curve (there is no time axis) though it happens to have an S-Curve shape. When the Z3 was created, the act of using computing infrastructure was rare and poorly understood. For Amazon EC2 to appear, the act of using computing infrastructure had to be both widespread (ubiquitous) and well understood (certain) in order to support the volume operations that utility provision requires.
Of course both the genesis of an activity and utility provision of a pre-existing activity diffuse but diffusion and evolution are not the same.
The Market Today
When we talk about activities (as opposed to practices and data), we commonly refer to this process of evolution by the term "commoditisation". This is exactly what is happening with computing infrastructure today, it is being commoditised to utility services as exemplified by Amazon EC2.
To counter the hypothesis, you'd have to demonstrate that infrastructure was somehow becoming less of a commodity and that, rather than growing, utility services would suddenly decline and we would return to a world governed by individually bought products. I have yet to find an example of this throughout history and no reason to suspect that this model will not hold today, i.e. utility services for computing infrastructure are here to stay.
I should caveat that there are certain marketing mechanisms and abuses of market power (i.e. oligopoly and reduced competition) which can give the appearance of something "de-commoditising", along with an issue of substitution. But before blindly accepting the opinion that something which has never happened before will now happen, I would ask for one iota of evidence. Simply demonstrate to me that utility services in infrastructure are not growing. Simply explain to me why utility services for infrastructure are not the future and how this transition towards cloud is a mere blip which will soon reverse. The market says otherwise, history says otherwise and I see no data which supports the alternative opinion.
Instead I see 6,000 data points plus today's market which say they are wrong. Now, to call me religious for basing a hypothesis on data (both current and past), modelling and prediction tests is farcical. To do so because I won't accept their unsupported, un-evidenced opinion that the entire history of human development is wrong ... well.
The hypothesis states that utility services will grow and dominate computing infrastructure, I see no evidence that this will not happen.
On players
Now, if you happen to agree that the market for computing infrastructure is shifting towards utility services then it becomes a question of who will win that market, will anyone win, what standards will develop and if so how many?
I say standards develop because the natural end state of utility services is provision of fairly standardised components, commonly through de facto standards followed later by de jure. This is an essential part of creating a competitive utility market and appears common in all utility services. The answers to these questions depend upon the actions of the players in the market as it forms.
Currently Amazon dominates the infrastructure as a service market and the APIs it provides can be considered de facto. This is not a permanent situation; it is possible for other players to build a larger ecosystem and to supplant Amazon as the dominant player. An example threat to Amazon may well be Google Compute Engine.
At this moment in time however, Amazon is leading the pack and appears to be visibly growing faster than those around it. The APIs it provides have been adopted by others (Eucalyptus, CloudStack and even OpenStack) and so whilst this part of the race is not over, it looks like Amazon is a good bet.
On OpenStack
Obviously Amazon has multiple zones and regions but to counter it you could attack its weakness of being a single point of failure (a single company) by playing a competitive market game with multiple providers. In order to do so, you would have to solve the issues of semantic interoperability and this in practice can only be done with an open source reference model.
However, the ecosystem around Amazon provides it with extraordinary benefits in terms of innovation, customer focus and efficiency. Hence a smart play would be to co-opt the ecosystem rather than attempt to build from scratch (i.e. to differentiate away from it). You could differentiate later once your ecosystem was big enough but it seems naive to do this early.
When Rick Clark left Canonical, joined Rackspace and was instrumental in the creation of OpenStack - I fully supported this move of building a competitive marketplace around an open source reference model which co-opted the biggest ecosystem. However that idea appeared to be short lived as the ideas of differentiation quickly surfaced.
Today, I see considerable problems with OpenStack which I've listed before. My opinion is the project has been hampered by poor leadership, poor strategic play, a lack of major investment by the companies involved, slow development, weak technology and an unnecessary focus on differentiation. It does however have a visible and engaged community.
With the coming price war likely to be initiated between Google Compute Engine and AWS, OpenStack needs to have in place, within the next year, a massive scale competitive market of providers with strong technology. I hope that they achieve this but I see little evidence of what is needed. Instead I see further potential issues around a collective prisoner dilemma (where members differentiate within the group itself).
So do I believe that in the next year a benevolent dictator will emerge and resolve these issues, that the technology will become rock solid, that members will invest the billions needed to build a competitive market at scale ... er no. Which is why I hold the opinion that OpenStack is a wasted opportunity and hence a dead duck.
So, what if OpenStack fails, will the shift towards utility provision of infrastructure continue? Yes, well, at least that's the hypothesis.
But what if OpenStack does manage to create a competitive utility market at scale, will the shift towards utility provision of infrastructure continue? Yes, well, at least that's the hypothesis.
This is why the comment that I called OpenStack a dead duck because it "disagreed with my hypothesis on evolution which was unscientific and quasi religious" is misguided whether deliberately so or not. Oh, I'm being too kind ...
The process of evolution is independent of any individual players success or any particular person's opinion. It is simply a consequence of competition.
A final few words
I realise that people have their own pet opinions (often they call them "theories" when they really shouldn't) and despite scant or more commonly no supporting evidence they argue vociferously that their idea will happen. If you're one of those then "bully for you", you're obviously omnipotent though that's not the word that comes to mind.
Yes, I have opinions (e.g. OpenStack) which I state clearly as opinions. No-one can predict the interactions of actors in this space, you can only make informed guesses and yes, my opinions are often wrong. For more on the unpredictability of actors' actions, Hayek is worth a read.
Yes, I also have areas of research (e.g. evolution) and this depends upon collection of data, causation, correlation and prediction tests. Evolution is not time based (i.e. no crystal ball again) and it doesn't depend upon specific actors' actions but instead competition between actors. No, it isn't "right" or "true" or "absolute", it's just the best model I have to explain market phenomena that are clearly visible for everyone to see.
I would happily dump the model of evolution if someone could finally provide me with a better model and no, I don't count hand wavy concepts with no data, cause, correlation, test or historical examples even if you do believe you're omnipotent. I'm a skeptic.
Saturday, October 20, 2012
Something for the future
A list of companies in two groups. I'm putting this here in order to return to the list in 2020 and explain why I had the companies listed as such.
It's a prediction test (useful for me, hence I'm putting it somewhere public) but probably not useful for anyone else.
Group 1
- Amazon
- Samsung
- BP
- Baidu
- China Telecom
- EMC
- Lloyds Bank
- ARM
- NetFlix
- Ebay
- Yahoo
- Intel
- BAE Systems
- Lenovo
- Salesforce
- Time Warner
- Huawei
- Canonical
- Citrix
- Fastly
- Bromium
- Opscode
- Juniper Networks
Group 2
- DuPont
- GSK
- WalMart
- Microsoft
- Berkshire Hathaway
- Goldman Sachs
- Barclays Bank
- RedHat
- Walt Disney
- IBM
- Cable and Wireless
- Canon
- SAP
- HP
- CouchBase
- Oracle
- Cisco
- Puppet Labs
- Apple
- Rackspace
- Dell
- Nokia
- Zynga
- PistonCloud
Friday, October 19, 2012
On Open Source, Standards, Clouds, Strategy and Open Stack
On Standards and Open Source
The issue of standards and in particular open standards is a hotly debated topic. The recent UK Government consultation on open standards was embroiled in much of the politics of the subject, with even a media exposé revealing that the chair of a consultation meeting was a member of a paid lobbyist group. The rumours and accusations of ballot stuffing of ISO meetings with regards to Microsoft's OOXML adoption as an open standard are also fairly widespread. The subject matter is also littered with confusing terms, from FRAND (fair, reasonable and non discriminatory licensing) being promoted as an "open" standard despite it being IP encumbered by definition.
In general, the principle of standards is about interoperability. In practice, it appears to be more of a battleground for control of a developing market. Standards themselves can also create new barriers to entry into a market due to onerous costs of implementation. There are also 17 different definitions of what an "open standard" is, varying from international bodies to governments. Of these the OSI definition is probably the most respected. Here, open standards are defined as those which have no intentional secrets, are not restricted by patents or other technology, have no dependency on execution of a license agreement, and are freely and publicly available and royalty free.
When I talk about evolution of systems (whether activities, practices or data), these are all about meeting specific needs, such as providing a web site or reporting the carbon dioxide emissions of a country. However standards refer to generic needs that apply across many stages of evolution – the need for things to work together and the need to be able to switch between solutions. These generic needs change in implementation as the underlying system evolves.
For example, in the world of software products then standards can provide interoperability between products and hence the potential to replace one product with another. The standard is normally articulated as a principle of working (i.e. how data is transferred and interpreted) in a document and the expression of this standard (i.e. the code itself) is left up to the product vendor.
This process is often imperfect as one vendor’s interpretation and expression of a principle might not match another’s. However since the switching between solutions is not usually time critical for the end user then some level of imperfection is acceptable i.e. you own the product and if you decide to switch you can migrate at your own pace. Though it should be noted that the process of migration is often a fraught one due to the imperfections.
In the world of utility services, the switching time can be critical and immediate such as the termination of a service. Here imperfections in migration are extremely undesirable and effective switching requires semantic interoperability between the services. In other words, any code and data is understood in the same way between the providers. In practice this can only be achieved if both providers are running the same system or strictly conforming to a reference model (an expression of code) rather than their interpretation of a documented principle.
Switching is an essential part of any competitive market and with utility services then the principle in a standards document is not enough and a reference model (i.e. running code, the expression) is required. If that market is to be unconstrained (i.e. free) then that expression needs to be open itself.
Hence open standards in the product world can simply be open documented principles but in the utility world they require open source reference models (i.e. running code). This simple fact is counter to how standards have been used in the past battles between products and hence the confusion, debates and arguments over the terms are unsurprising.
On Clouds, Open Source and Standards
So, we come to the cloud, which is simply the evolution of a range of IT activities from a product and product rental model to one of commodity and utility services. If we are going to see competitive free markets in this space, then open source reference models are essential. But the value to the end user is the market, not whether this system is more open or not.
An example of this is in the IaaS space. If you ask some of the big users of AWS what they want, they often reply with multiple AWS clones and rarely with another IaaS with another API. To create a marketplace of AWS clones you're going to need to start with an open source system that provides the EC2/S3/EBS APIs with multiple implementations (i.e. providers of such).
I first raised this issue at Web 2.0 in 2006 and by 2007 the whole debate of "open APIs" was kicking off; it has raged ever since. The argument today goes that EC2/S3/EBS are not "open". However APIs are simply principles; they cannot be owned, only your expression of them can be. This has been re-affirmed in several court cases over the last few years.
This debate has already started to switch to "But the process of creating EC2/S3/EBS isn't open" ... well neither is the process for the development of Android and the end user doesn't care. As Benjamin Black once said "Solve user problems or become irrelevant".
The user problem is a competitive free market of multiple utility providers offering the defacto standard (which in case you haven't realised is EC2/S3/EBS) and not a plethora of multiple APIs basically doing the same thing, with super APIs attempting to manage this mess under various cries of being more "Open".
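As a concrete sketch of what that de facto standard means for a user, the same boto client code can talk to AWS or to an EC2-compatible clone simply by changing the endpoint. The clone's hostname, port and path below are placeholders of the kind an EC2 clone such as Eucalyptus typically exposes.

```python
# Same client library, same calls, different endpoint - the practical
# meaning of EC2 as a de facto standard. Credentials are placeholders.
import boto
from boto.ec2.regioninfo import RegionInfo

def connect(key, secret, endpoint=None):
    """Connect to AWS EC2, or to an EC2-compatible clone if given an endpoint."""
    if endpoint is None:
        return boto.connect_ec2(key, secret)
    region = RegionInfo(name="clone", endpoint=endpoint)
    return boto.connect_ec2(key, secret, region=region, is_secure=False,
                            port=8773, path="/services/Eucalyptus")

aws = connect("ACCESS_KEY", "SECRET_KEY")
clone = connect("ACCESS_KEY", "SECRET_KEY", endpoint="cloud.example.com")

# Identical calls against either provider - that is semantic interoperability.
for conn in (aws, clone):
    reservations = conn.get_all_instances()
    print([i.id for r in reservations for i in r.instances])
```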
Matt Asay does a good job hitting the point home with his post of "Whose cloud is the open sourciest ... who cares?". Let us be crystal clear here, Matt isn't against open source in the cloud, he (like most of us) understands its essential importance. However, Matt also understands that what needs to be focused on is the user need for competitive markets.
Open source is absolutely essential for creating a competitive free market but the focus should be on solving users need i.e. creating the market.
The focus should not be on differentiation of stuff that doesn't matter, whether because of some belief that you can out innovate the biggest ecosystem in a utility space or because you view it as an on ramp to your own public services. The focus shouldn't be on protecting existing industries (something which concerns me with the EU communication on Cloud). To reiterate for the umpteenth time over umpteen years, the focus should be on adapting to this new world and using open source to create a utility market which meets users' needs.
Mark Shuttleworth nailed this point many years ago in "Innovation and Open Stack : Lessons from HTTP".
On Strategic Play and Open Stack
So when it comes to playing this game, especially because of the powerful effects that ecosystems can create, then the smart strategic play is to build an open source equivalent to EC2/S3/EBS (i.e. all the core AWS features) and create a market place of multiple providers. The goal is to co-opt, sure you can differentiate later when your ecosystem is bigger but at this moment in time co-opt and build a bigger ecosystem through a competitive market.
But how do we know we can match the APIs? Well, the beauty of competitive markets is that they allow for assurance services and exchanges, i.e. there is value in the test scripts which ensure that an API is faithful. So, build the test scripts, use those to build your open source IaaS and allow people to create a Moody's / rating agency style business as your market forms.
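A minimal sketch of what such a test script might look like, using boto and Python's unittest. The endpoints are placeholders and the assertions are a tiny slice of what a real assurance body would run; this is illustrative, not any existing certification suite.

```python
# Sketch of an API faithfulness test: run the same operation against the
# reference implementation and a candidate provider, check behaviour matches.
import unittest
import boto
from boto.ec2.regioninfo import RegionInfo

def ec2_connection(endpoint):
    """Connect to an EC2-compatible endpoint (all values are placeholders)."""
    region = RegionInfo(name="test", endpoint=endpoint)
    return boto.connect_ec2("ACCESS_KEY", "SECRET_KEY", region=region,
                            is_secure=False, port=8773,
                            path="/services/Eucalyptus")

class Ec2FaithfulnessTest(unittest.TestCase):
    REFERENCE = "reference.example.com"   # open source reference model
    CANDIDATE = "candidate.example.com"   # provider seeking certification

    def test_describe_instances_shape(self):
        """Both endpoints must return reservations with the same structure."""
        for endpoint in (self.REFERENCE, self.CANDIDATE):
            conn = ec2_connection(endpoint)
            for reservation in conn.get_all_instances():
                for instance in reservation.instances:
                    # Every faithful implementation must populate these fields
                    # with the same identifiers and state values as EC2.
                    self.assertTrue(instance.id.startswith("i-"))
                    self.assertIn(instance.state,
                                  ("pending", "running", "shutting-down",
                                   "terminated", "stopping", "stopped"))

if __name__ == "__main__":
    unittest.main()
```

The value lies less in any single test than in the accumulated suite: whoever owns a trusted, comprehensive set of such scripts is well placed to become the rating agency for the market.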
Whilst CloudStack and Eucalyptus have made steps in this direction, Open Stack (and more likely the Rackspace people involved) seems more reluctant. "Anything but Amazon" seems to be the watch cry and the adoption of EC2/S3/EBS in Open Stack appears to have been something the community forced upon it.
Differentiation on APIs etc is a pointless hangover to the product world. Adoption of APIs and differentiation on Price vs Quality of Service is the way forward. Being an AWS clone and creating a market around such isn't "giving in" but instead it's "competing on what users rather than vendors want".
I would like to see Open Stack succeed; there are many talented people (and friends) involved. But I have my doubts because I feel it has wasted an opportunity and has been very poorly led in my view, despite its success in marketing.
The battle in IaaS is going to heat up between AWS and GCE in the next year or so. Open Stack during that time needs to create a large competing market and an equivalent ecosystem, which will only be hampered if it doesn't co-opt AWS. Hence it has probably twelve months or so to get the technology rock solid, deliver multiple large scale implementations (by large I mean a minimum $500 million capital spend for each installation), overcome the potential collective prisoner dilemma issue (of everyone differentiating rather than contributing to core) and form the competitive market.
If it fails, then my view is the good ship Open Stack will become the Mary Celeste 2.0 and that will be a terrible waste of effort.
In my view, Open Stack needs a strong benevolent dictator (like a Mark Shuttleworth for Ubuntu). An individual who is willing to do what's in the interest of the end user and community and ride roughshod over others where necessary. Of course, the essential part is benevolence and it's easy to fall foul here. With Open Stack, my advice is they need to focus on engineering quality and build the best open source AWS equivalent there is. Anything else (especially when the words differentiate and innovate are used to describe it) should be unceremoniously dumped off a high cliff for the time being. Along with this should go the "every API, every hypervisor" and "be all things to everyone" concepts - focus, focus and more focus on one thing, e.g. the best AWS clone. At a push, you could make an argument for AWS and GCE as your native APIs ... but I'd advise against it.
I'm glad we have an Open Stack Foundation, as the chances of that benevolent dictator emerging have grown. I'm hoping someone like Lew Tucker will step up to the plate.
However, where would I place my bet? Currently in the three horse race between Open Stack, Cloud Stack and Eucalyptus then the latter two have been playing a better game in my view though this race is far from over. If forced to choose one, well that's splitting hairs between the likes of Eucalyptus and Cloud Stack but I'd probably bet on Cloud Stack. They have a good focus, they're part of the ASF, they have a well funded backer and they have numerous production deployments at scale. It is however, too early to tell ... the next 12 months are critical.
The issue of standards, and in particular open standards, is a hotly debated topic. The recent UK Government consultation on open standards was embroiled in the politics of the subject, even including a media exposé of the chair of a consultation meeting as a member of a paid lobbyist group. The rumours and accusations of ballot stuffing at ISO meetings over Microsoft's OOXML adoption as an open standard are also fairly widespread. The subject is further littered with confusing terms, such as FRAND (fair, reasonable and non-discriminatory licensing) being promoted as an "open" standard despite being IP-encumbered by definition.
In general, the principle of standards is about interoperability. In practice, it appears to be more of a battleground for control of a developing market. Standards themselves can also create new barriers to entry into a market due to onerous costs of implementation. There are also some 17 different definitions of what an "open standard" is, varying from international bodies to governments. Of these, the OSI definition is probably the most respected: open standards are defined as those which have no intentional secrets, are not restricted by patents or other technology, have no dependency on the execution of a license agreement, and are freely and publicly available and royalty free.
When I talk about the evolution of systems (whether activities, practices or data), these are all about meeting specific needs, such as providing a web site or reporting the carbon dioxide emissions of a country. Standards, however, refer to generic needs that apply across many stages of evolution – the need for things to work together and the need to be able to switch between solutions. How these generic needs are implemented changes as the underlying system evolves.
For example, in the world of software products, standards can provide interoperability between products and hence the potential to replace one product with another. The standard is normally articulated as a principle of working (i.e. how data is transferred and interpreted) in a document, and the expression of this standard (i.e. the code itself) is left up to the product vendor.
This process is often imperfect, as one vendor's interpretation and expression of a principle might not match another's. However, since switching between solutions is not usually time critical for the end user, some level of imperfection is acceptable i.e. you own the product and, if you decide to switch, you can migrate at your own pace. It should be noted, though, that the process of migration is often fraught precisely because of these imperfections.
In the world of utility services, the switching time can be critical and immediate, such as on the termination of a service. Here imperfections in migration are extremely undesirable, and effective switching requires semantic interoperability between the services. In other words, any code and data must be understood in the same way by both providers. In practice this can only be achieved if both providers are running the same system or strictly conforming to a reference model (an expression of code) rather than to their own interpretation of a documented principle.
Switching is an essential part of any competitive market, and with utility services the principle in a standards document is not enough: a reference model (i.e. running code, the expression) is required. If that market is to be unconstrained (i.e. free) then that expression itself needs to be open.
Hence open standards in the product world can simply be open documented principles, but in the utility world they require open source reference models (i.e. running code). This simple fact runs counter to how standards were used in past battles between products, and hence the confusion, debates and arguments over the terms are unsurprising.
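To make the distinction between a documented principle and an expression concrete, here's a deliberately toy sketch (everything in it is invented for illustration). Both vendors faithfully follow the same documented principle, "return the creation time of the resource", yet their outputs don't interoperate:

# Toy illustration (all names invented): two vendors implement the same
# documented principle, "return the resource creation time", differently.
from datetime import datetime, timezone

def vendor_a_created(resource):
    # Vendor A interprets the principle as epoch seconds (an integer).
    return int(resource["created"].timestamp())

def vendor_b_created(resource):
    # Vendor B interprets the same principle as an ISO 8601 string.
    return resource["created"].isoformat()

resource = {"created": datetime(2012, 10, 1, tzinfo=timezone.utc)}
print(vendor_a_created(resource))  # 1349049600
print(vendor_b_created(resource))  # 2012-10-01T00:00:00+00:00
# Both are "compliant" with the document, yet the outputs are not
# interchangeable. Switching vendors means translation, not migration.

A shared reference model (running code) removes this gap; a shared document does not.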
On Clouds, Open Source and Standards
So we come to the cloud, which is simply the evolution of a range of IT activities from a product and product rental model to one of commodity and utility services. If we are going to see competitive free markets in this space, then open source reference models are essential. But the value to the end user is the market, not whether one system is more open than another.
An example of this is the IaaS space. If you ask some of the big users of AWS what they want, they often reply with multiple AWS clones and rarely with another IaaS with yet another API. To create a marketplace of AWS clones, you're going to need to start with an open source system that provides the EC2/S3/EBS APIs and has multiple implementations (i.e. multiple providers of it).
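As a rough sketch of what that user demand means in practice (the clone endpoint below is hypothetical, and I'm assuming the boto3 EC2 client purely for illustration), switching providers in such a market should mean changing nothing but the endpoint and credentials:

# Sketch only: if every provider faithfully implements the EC2 API, the
# same client code runs against any of them. The clone URL is invented.
import boto3

def list_instances(endpoint_url, key, secret):
    ec2 = boto3.client(
        "ec2",
        endpoint_url=endpoint_url,   # the only provider-specific detail
        region_name="us-east-1",
        aws_access_key_id=key,
        aws_secret_access_key=secret,
    )
    reservations = ec2.describe_instances()["Reservations"]
    return [i["InstanceId"] for r in reservations for i in r["Instances"]]

# Same code, different members of the market of clones.
list_instances("https://ec2.us-east-1.amazonaws.com", "AKIA...", "...")
list_instances("https://api.example-aws-clone.com", "key", "secret")  # hypothetical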
I first raised this issue at Web 2.0 in 2006, and by 2007 the whole debate over "open APIs" was kicking off; it has raged ever since. The argument today goes that EC2/S3/EBS are not "open". However, APIs are simply principles; they cannot be owned, only your expression of them can be. This has been re-affirmed in several court cases over the last few years.
This debate has already started to switch to "But the process of creating EC2/S3/EBS isn't open" ... well, neither is the process for the development of Android, and the end user doesn't care. As Benjamin Black once said, "Solve user problems or become irrelevant".
The user problem is a competitive free market of multiple utility providers offering the de facto standard (which, in case you haven't realised, is EC2/S3/EBS) - not a plethora of APIs basically doing the same thing, with super APIs attempting to manage the mess under various cries of being more "Open".
Matt Asay does a good job of driving the point home with his post "Whose cloud is the open sourciest ... who cares?". Let us be crystal clear here: Matt isn't against open source in the cloud; he (like most of us) understands its essential importance. But Matt also understands that the focus needs to be on the user need for competitive markets.
Open source is absolutely essential for creating a competitive free market, but the focus should be on solving the users' need i.e. creating the market.
The focus should not be on differentiation of stuff that doesn't matter, whether because of some belief that you can out-innovate the biggest ecosystem in a utility space or because you view it as an on-ramp to your own public services. Nor should the focus be on protecting existing industries (something which concerns me with the EU communication on Cloud). To reiterate for the umpteenth time over umpteen years, the focus should be on adapting to this new world and using open source to create a utility market which meets users' needs.
Mark Shuttleworth nailed this point many years ago in "Innovation and Open Stack: Lessons from HTTP".
On Strategic Play and Open Stack
So when it comes to playing this game, especially given the powerful effects that ecosystems can create, the smart strategic play is to build an open source equivalent to EC2/S3/EBS (i.e. all the core AWS features) and create a marketplace of multiple providers. The goal is to co-opt: sure, you can differentiate later when your ecosystem is bigger, but at this moment in time you co-opt and build a bigger ecosystem through a competitive market.
But how do we know we can match the APIs? Well, the beauty of competitive markets is that they allow for assurance services and exchanges i.e. there is value in the test scripts which ensure that an API is faithful. So build the test scripts, use those to build your open source IaaS, and allow people to create a Moody's-style rating agency business as your market forms.
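A minimal sketch of what one such test might look like (the endpoint and image id here are hypothetical, and again I'm assuming a boto3-style client; a real assurance suite would cover the whole API surface in the same way):

# Fidelity test sketch: run an instance lifecycle against a candidate
# EC2-compatible endpoint and assert it behaves as the reference does.
import boto3

def test_instance_lifecycle(endpoint_url, image_id):
    ec2 = boto3.client("ec2", endpoint_url=endpoint_url,
                       region_name="us-east-1",
                       aws_access_key_id="test",
                       aws_secret_access_key="test")

    # RunInstances must return a reservation containing our instance.
    run = ec2.run_instances(ImageId=image_id, MinCount=1, MaxCount=1)
    instance_id = run["Instances"][0]["InstanceId"]
    assert instance_id.startswith("i-"), "ids must look like EC2's"

    # DescribeInstances must report the instance we just launched.
    desc = ec2.describe_instances(InstanceIds=[instance_id])
    state = desc["Reservations"][0]["Instances"][0]["State"]["Name"]
    assert state in ("pending", "running")

    # TerminateInstances must acknowledge the state change.
    term = ec2.terminate_instances(InstanceIds=[instance_id])
    assert term["TerminatingInstances"][0]["InstanceId"] == instance_id

test_instance_lifecycle("https://api.example-aws-clone.com", "ami-12345678")  # hypothetical

Publish the scripts, publish the scores, and the rating agency business follows.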
Whilst CloudStack and Eucalyptus have made steps in this direction, Open Stack (or more likely the Rackspace people involved) seems more reluctant. "Anything but Amazon" seems to be the watch cry, and the adoption of EC2/S3/EBS in Open Stack appears to have been something the community forced upon it.
Differentiation on APIs etc. is a pointless hangover from the product world. Adoption of the APIs and differentiation on price vs quality of service is the way forward. Being an AWS clone and creating a market around it isn't "giving in"; it's competing on what users, rather than vendors, want.
I would like to see Open Stack succeed; there are many talented people (and friends) involved. But I have my doubts, because I feel it has wasted an opportunity and, in my view, has been very poorly led despite its success in marketing.
The battle in IaaS is going to heat up between AWS and GCE in the next year or so. In that time Open Stack needs to create a large competing market and an equivalent ecosystem, which will only be hampered if it doesn't co-opt AWS. Hence it probably has twelve months or so to get the technology rock solid, achieve multiple large scale implementations (by large I mean a minimum $500 million capital spend for each installation), overcome the potential collective prisoner's dilemma issue (of everyone differentiating rather than contributing to the core) and form the competitive market.
If it fails, then my view is that the good ship Open Stack will become the Mary Celeste 2.0, and that will be a terrible waste of effort.
In my view, Open Stack needs a strong benevolent dictator (a Mark Shuttleworth, as with Ubuntu). An individual who is willing to do what's in the interest of the end user and the community, and to ride roughshod over others where necessary. Of course, the essential part is benevolence, and it's easy to fall foul here. With Open Stack, my advice is to focus on engineering quality and build the best open source AWS equivalent there is. Anything else (especially when the words differentiate and innovate are used to describe it) should be unceremoniously dumped off a high cliff for the time being. Along with this should go the "every API, every hypervisor" and "be all things to everyone" concepts - focus, focus and more focus on one thing i.e. the best AWS clone. At a push, you could make an argument for AWS and GCE as your native APIs ... but I'd advise against it.
I'm glad we have an Open Stack Foundation, as the chances of that benevolent dictator emerging have grown. I'm hoping someone like Lew Tucker will step up to the plate.
However, where would I place my bet? In the current three horse race between Open Stack, CloudStack and Eucalyptus, the latter two have been playing a better game in my view, though this race is far from over. If forced to choose one, well, that's splitting hairs between the likes of Eucalyptus and CloudStack, but I'd probably bet on CloudStack. They have a good focus, they're part of the ASF, they have a well funded backer and they have numerous production deployments at scale. It is, however, too early to tell ... the next 12 months are critical.
Monday, October 15, 2012
At last ... a great definition for cloud computing
I'm not a fan of the term 'cloud computing', nor of the umpteen definitions of it. I don't like NIST's mechanistic definition of 'cloud computing', which misses the nuances, and so I prefer to stick with 'computer utilities' (as described by Parkhill in his 1966 book).
A definition of 'cloud computing' has to consider the economic changes due to the evolution of computing infrastructure (a technology activity) towards more of a utility, but at the same time it has to be mindful of niches and of the different organisational and security requirements (resulting in various forms of hybrid environments) during the transition. Many of these won't last (it is a transition, after all) but they need to be considered.
Somehow, in all this time, I've missed this wonderfully simple definition of 'cloud computing', given by Ramnath K. Chellappa in 1997 at INFORMS: a "computing paradigm where the boundaries of computing will be determined by economic rationale rather than technical limits alone".
All I can say is that this definition is almost perfect in its simplicity and at the same time incredibly sophisticated in its nuance (and vagueness). It also happens to be the first known definition of 'cloud computing' (dating from 1997) and, as far as I'm concerned, it has been downhill ever since.
Tuesday, October 09, 2012
Some trivia questions on cloud computing
... just for amusement.
Questions
1. In which year was the future of computing being provided by public, private and mixed utilities, like "electricity", first published in a book?
2. Which came first, a utility based IaaS or a utility based PaaS?
3. Was Amazon EC2 built on selling Amazon's spare capacity?
4. When did Amazon start working on the concept of EC2?
5. In which year was the idea of future utility markets, federated grids and the role of open source in cloud computing first publicly presented?
Answers
1. 1966, Douglas Parkhill, The Challenge of the Computer Utility.
2. Utility based IaaS. The first utility based PaaS (Zimki) was publicly launched at D.Construct, 14 days after the launch of the best known utility based IaaS, EC2, on the 25th August 2006.
3. No. The myth that Amazon EC2 was built on Amazon's spare capacity is one of those unquenchable and totally untrue rumours.
4. 2003, though implementation of the idea started in 2004. A good bit of background on this can be found in Benjamin Black's post.
5. Whoot! I'd like to claim that it was me in 2006, in an earlier version of the talk I repeated at OSCON in 2007, but that would be completely untrue (see http://blip.tv/swardley/commoditisation-of-it-419213).
The reality is that these ideas were fairly common by 2007, and I don't know when they actually started. Some of the federation ideas certainly date back to the 1960s, and many of the concepts above were described in this 2003 paper on Xenoservers by Ian Pratt et al.
There are many earlier publications, normally around the notion of markets of common service providers (CSPs). You can also bet your bottom dollar that many academics were looking into this issue between 1995 and 2005.
So I'm afraid this was a trick question, and the answer is ... no idea, but earlier than people normally think.
Comments
The point I want to get across is that the concepts of cloud computing are much older than many realise, that there are still many myths (such as the Amazon spare capacity story), and that we're in our 7th year of commercial applications. Normally these changes take 8-12 years to become widespread and mainstream, hence expect this over the next year or so. If you're just getting up to speed with cloud computing then, to be honest, you're perilously close to being a laggard.
Friday, October 05, 2012
Don't try to out innovate Amazon
Amazon is famous for its two factor market and platform plays. The modus operandi of Amazon in general is :-
1. Find an activity which is so well understood and ubiquitous that it is suitable for provision as a commodity (ideally a utility service). Examples would be an online marketplace, infrastructure, certain data sets etc.
2. Provide a platform to exploit this, i.e. expose it through public APIs.
3. Enable an ecosystem to build on the platform. This could be an ecosystem of providers and consumers (a two factor market), consumers of an API (e.g. developers building higher order systems), or ideally both.
4. Mine the meta data of that ecosystem for new information on trends (i.e. things evolving in the market), by examining consumption of the API. You don't have to look at others' data to do this, just the consumption rates (a toy sketch of this follows below). Focus the efforts of the company on this activity, e.g. use a press release process. In other words, get everyone to write press releases before building anything. Since you can't write a press release for something not invented yet, this has the effect of concentrating people on commoditising pre-existing acts (which you can write a press release for).
5. Commoditise those new trends, either through copying or acquisition, in order to provide new component services that both feed the ecosystem and encourage it to innovate more.
Rinse and repeat this cycle.
This model is far from new. The basics are: get others to Innovate, Leverage the ecosystem to spot trends, and Commoditise to component services - ILC for short. It enables the company to appear highly innovative (everyone else is doing the "innovation" of the novel and new), highly customer focused (mining meta data and using a press release process to give people what they want) and highly efficient (economies of scale) all at the same time, and the ability to do all three increases as the ecosystem grows.
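As a toy illustration of the Leverage step (all the feature names and numbers below are invented): you don't need to look inside anyone's data, you just aggregate consumption per component and flag whatever is growing abnormally fast as a candidate for commoditisation:

# Toy illustration of spotting trends from API consumption alone.
# Feature names and counts are invented; only call volumes are used.
from collections import defaultdict

# (feature, month, call_count) - e.g. aggregated from API gateway logs.
usage = [
    ("image-resize", "2012-08", 1000), ("image-resize", "2012-09", 1100),
    ("video-encode", "2012-08", 200),  ("video-encode", "2012-09", 900),
    ("pdf-generate", "2012-08", 50),   ("pdf-generate", "2012-09", 55),
]

by_feature = defaultdict(dict)
for feature, month, count in usage:
    by_feature[feature][month] = count

# Anything growing fast is a candidate to commoditise into a new
# component service, feeding the ecosystem's next round of innovation.
for feature, months in by_feature.items():
    growth = months["2012-09"] / months["2012-08"]
    if growth > 2.0:
        print(feature, "grew", growth, "x - candidate to commoditise")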
So, when you come up against Amazon in your industry, here are a simple Don't and Do.
Don't try to out-innovate Amazon : you're not actually fighting Amazon over innovation; you're fighting the entire ecosystem that has built upon its platform and is doing much of the innovation (in terms of new activities). It's worth remembering that some of Amazon's ecosystems contain hundreds of thousands of developers and companies. If you're going to try this alone then you'll need an enormous R&D group to compete on these terms, and if you haven't got one then the reality is you'll just get spanked. This is despite Amazon, if my sources are correct, not having a traditional R&D group. It wouldn't surprise me if, every time Amazon hears a company say "We're going to out-innovate Amazon", they cross it off their list of competitors to watch and mark it "RIP". The only time it's really worth fighting on these terms is when you have an equivalent size of ecosystem (or you're pretty confident of quickly getting one) combined with the ability to exploit it. In which case you're not really trying to out-innovate Amazon; you're focused on getting your ecosystem to out-innovate their ecosystem.
Do try to co-opt and out-commoditise Amazon : critical to this game is building a bigger ecosystem, and one way is to exploit Amazon's main weakness of being a single provider. So try to build a competing market of equivalent providers, enabling customers to easily switch between its members. Co-opt Amazon's ecosystem as much as possible. Provide the systems as open source and don't fall into the trap of all the members trying to differentiate (the collective prisoner's dilemma issue). Once your ecosystem is big enough, you can use it to out-innovate Amazon and its ecosystem.
-- 18th April 2016
Reminded of this today. People are still getting spanked by this. Sad really.
Added "rinse and repeat" to make it clear that this is a cycle.