Saturday, August 31, 2013

Punctuated equilibriums and boiling frogs

For many years, I've talked about the power of punctuated equilibriums in our economic and technological systems.  These occur during an economic state known as 'war', which is caused by the commoditisation of a pre-existing act after a relatively more 'peaceful' period of product competition.

These punctuated equilibriums are dangerous to companies because of :-
  • the inertia to change that is created by successful business models built during the more peaceful competitive state.
  • the likelihood we will underestimate the speed of change due to the previous more peaceful and slower changing competitive state.
  • the exponential nature of the change.  This exponential growth is driven by numerous factors including compound forces of adaptation i.e. the benefits of cloud (efficiency, agility in building higher order systems, new sources of wealth creation) encourage competitors to adapt, and the more our competitors adapt the greater the pressure on us to adapt becomes just to retain our relative competitive position (this is known as the Red Queen effect).
These days with cloud, I see all the signals of a punctuated equilibrium plus all the usual consequences of people misunderstanding exponential change.  They believe that they have time because the market of cloud is just a "small fraction of the computing market" etc. 

Hence, I provide the following as a sobering reminder. This is an estimate of AWS growth updated with various analyst statements (NB. Amazon is very careful not to publish actual figures; this is based upon analysis of Q4 figures and calculating a forward revenue i.e. by the end of 2015, I calculate that AWS forward revenue will be over $15 billion p.a.) along with AWS revenue as a percentage of worldwide server revenue.  Currently this stands at around 3% but these figures are not as important as the growth rate.  Assuming Amazon continues on the estimated growth rate of the past six years then it'll probably represent somewhere between 30-50% of worldwide server revenues during 2017.
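To make the exponential point concrete, here is a minimal sketch of the compound arithmetic. The starting revenue, the growth rate and the size of the server market below are illustrative assumptions chosen to match the rough percentages above, not Amazon's actual figures:

```python
# Illustrative compound-growth projection of AWS revenue as a share of
# worldwide server revenue. All numbers are assumptions for the sketch,
# NOT Amazon's published figures.
aws_revenue = 3.0       # assumed forward revenue, $bn p.a. (start of 2013)
growth_rate = 1.0       # assumed annual growth rate (doubling each year)
server_market = 100.0   # assumed worldwide server revenue, $bn p.a.

for year in range(2013, 2018):
    share = 100 * aws_revenue / server_market
    print(f"{year}: ~${aws_revenue:.0f}bn forward revenue "
          f"({share:.0f}% of server revenue)")
    aws_revenue *= 1 + growth_rate
```

A linear model would add a fixed amount each year; compounding instead takes a ~3% share to nearly half the market in four steps, which is the delta of "future shock".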

Such a rapid change would not be unusual as we've experienced punctuated equilibriums in many forms of systems - environmental, economic, biological and technological.  However, this might be a bit of a shock to those applying a linear model of growth (hence the delta of "future shock").  In fact, the only thing shocking is that people don't understand basic concepts like exponential change.

In the cloud world this translates to companies thinking the effects are minor and rather than seeing the changes as a threat to the company, they often believe that they have time to exploit and manage the transition to this new revenue stream. It's like frogs discussing how the cold water they are in has warmed up and it feels quite pleasant and certainly manageable. 

So, if exponential growth is not something you're truly familiar with, then in order to do my bit for the community - please watch the following Dr Bartlett video on Arithmetic, Population and Energy as recommended by a good friend of mine, Artur Bergman.

[9th April 2015 - the video is no longer available]

-- 29th January 2016

The model is spiked. Actual revenue in 2015 did not exceed $8Bn but $7.8Bn. The actual run rate for Amazon by the end of 2015 was $10Bn (assuming this includes growth). I had AWS down for >$15.5Bn revenue in 2016 (i.e. each year after 2015). Oh, well.

Saturday, August 24, 2013

Mapping and playing games

Personally, I find mapping a useful tool for determining predictable changes (and manipulating the environment accordingly) and anticipating possible but ultimately uncertain changes by narrowing the range of the likely.  Maps are never perfect but they provide a view.

I thought I'd explain this with two past examples - Fotango and Canonical - where I've used mapping.  I won't go through the details of mapping (I've covered this many times before) but instead how I used it.

My Fotango Story

Almost a decade ago when I was CEO of Fotango (a profitable and growing company), we had a problem on the horizon. We needed to find new external sources of revenue to rebalance our portfolio. James Duncan and I mapped out the environment on a whiteboard and used this to determine our play. For reference, a copy of that map (which I've simplified and tidied up) is provided in figure 1.

Figure 1 - Map of Fotango

By examining the value chains associated with Fotango (the more faded circles and lines in the diagram), we had determined that:-

1) Platforms were going to become ubiquitous and be provided through more utility-like services.

2) That customers and past vendors would have inertia to the change (black bars) which we could reduce by creating a competitive market of providers and use of an open source play.

3) That the marketplace would allow for formation of exchanges, assurance mechanisms (reporting on providers as per a Moody's rating agency) and a u-switch like model. 

4) That an ecosystem (the large shaded circle) could be built around the platform play and exploited through an ILC like model to identify new sources of future value.

5) That raw infrastructure (compute) would move to a utility.  Though we had already been running our own precursor to a private cloud for some time (known as Borg), we could exploit the movement of a new entrant into this space to reduce our capital expenditure cost. I suspected it was going to be Google who made the first move; the following year we discovered it was Amazon.

We built our model around this and some of the other anticipated effects (change of practices etc) and Zimki, almost certainly the first modern PaaS, was launched publicly (2006) just as Amazon EC2 revealed its beta. We announced our plans to open source, create competitive markets and focus on building higher order systems.  Amazon's move meant we started to immediately reposition to run on Amazon.

Everything was rosy, it was rapidly growing and the timing was spot on.  Alas the parent company had been persuaded that the future wasn't the stuff we were working on - utility computing, 3D printing, impact of mobile phones on cameras - but instead outsourcing IT, big ERP implementations and televisions.

In another universe, if a different route had been taken then Canon might today be a rival to Amazon in the cloud and have sewn up the 3D printing industry.  But the same "if only" can be said of Xerox, Kodak and many others. We will just never know whether those decisions cost significant future losses of market cap since Fotango and Zimki were killed off and we can't replay the past.

I lost. C'est la vie. 

Fortunately a useful lesson in political capital was learned and the mapping technique survived even if Zimki didn't.

My Canonical Story

After leaving Fotango and working on various pet projects, I met with Mark Shuttleworth. Actually, a very smart troublemaking friend of mine introduced us under false pretences.  Mark wanted a designer, I said "I know nothing about Design" and so we decided to just have a chat.  What I explained to Mark was the principles of evolution and how this applied to the computing industry.  Mark asked me to come work with Canonical on strategy and so I joined in 2008.

The first five people I met in Canonical when I told them I was there to work on "Cloud" were pretty dismissive. "It's a fad" etc was actually fairly normal in the industry.  Fortunately, I had a great boss in Steve George and quickly met some smart folks in Rick Clark, Soren Hansen, Nic Barcet, Gustavo Niemeyer and John Pugh.  They were all open minded to the concept of evolution and change, so we plotted a plan.

The problem for Canonical (and Ubuntu) was that though it was strong on the desktop for a Linux-based OS, we were fighting RedHat on the server market and they owned it.  So, we mapped out the environment (on a huge whiteboard though interspersed with some content-intense presentations of mine), had various arguments over how to play the game and roughly settled on an approach.  Figure 2 is a simplified summary of that process, a map that I've cleaned up (the original was actually far more complex and messy) and added some modern terms so it makes more sense for today (devops etc).

Figure 2 - Canonical Map

The principle play was:-

1) We accepted that RedHat owned the server OS (for the time being) but that didn't matter, we were going to own the future and let the industry catch up to us.

2) We would focus on being the dominant guest OS on any compute utility by building a cut down version of Ubuntu which anyone could use. Nic just made this happen with his 'just enough' concepts and support of others in the wider community who had led the charge. 

3) Companies would have inertia to the change so we would acquire an open source private cloud equivalent which matched the dominant player in a transitional hybrid (public + private) play. Hence our early involvement with Eucalyptus which John Pugh and Mark managed to hook in 2008 and Rick and Soren worked furiously on.

4) We knew that new practices would appear and so we would look at bringing those toolsets into Ubuntu (e.g. Chef) and modify Landscape (our management tool) appropriately.  In fact, Gustavo went much further and took this into the whole area of JuJu which is something I had serious doubts about at the time due to education barriers. In retrospect, I think Gustavo was right and I was clearly wrong here.

5) We knew that we needed to own any future platform plays and push application development onto Ubuntu. In fact the latter was already happening, the community was strong and the desktop and community team with people like Jono Bacon were doing an incredible job. However, we needed to get everyone in the cloud building on Ubuntu.

Steve would always (and rightly so) twist my arm over revenue stream. He would certainly give me numerous challenges on my approach to server.  As I pointed out there were multiple different revenue streams possible in the future (from support to brokerage to assurance to private cloud installations etc). However, my focus was to own the future, use it as a door opener (opportunities multiply as they are seized) and then build revenue streams.  I always take the view it's better to own the future and then monetize it rather than own a route to monetization but not the future.  Steve was always supportive and both he and Mark are exceptional in terms of understanding opportunity and game play.  Any inertia that might have been, quickly evaporated when Mark made it clear that we were focused on cloud.

We didn't get every step right (I certainly had some bust ups over my insistence on Chef for example) and I certainly made a bucketful of errors.  But today, Ubuntu is without question the dominant OS in cloud.  The team at Canonical have done a sterling job in creating that future.

Let us be clear, Canonical (a relatively small company) literally stole the future from RedHat and all the other giants in the field and this was done with very modest investment and barely a shot fired by the competitors. The ability for Canonical to help itself to the prize whilst giants were sleeping around it was stunning but then I've often seen this scenario.

Now, some people seem to think I had a big part in this and so I want to correct that view as my role was minor. I was motivated to do this because of some very kind comments from Jorge Castro at OSCON which though appreciated overstated my influence.  My role was providing a way of viewing the landscape, sensing where to attack, exploiting the battlefield and evangelism.  Today, I teach all sorts of companies and Government organisations the same sort of techniques for mapping and certainly my maps have become a lot better over time.

However, this is just viewing the board and though it helps in playing the game what matters is how well you play the game and execute.  Mark and the group above and many more (which I haven't mentioned) within Canonical made it happen.  They are the truly exceptional ones and you should be in no doubt that Canonical is an outstanding place to work with some incredible talent. 

As for RedHat.  I kindly warned them to buy Novell and force an acquisition by IBM.  They didn't. They must by now realise the dangerous threat that Canonical has become and that the cost of reclaiming the future will be huge.  But that's not the fault of Red Hat's engineering talent nor their culture nor their community nor inertia nor their ability to execute, it was their executive management team that was outplayed.  They might have great engineering skills but in terms of executive management they had poor chess players.

C'est la vie.

Update 25th August 2013

I've been asked two questions. First does culture matter more than strategic play?  The answer to this is very complex and I'll have to do a number of posts to explain why.   The short answer is that in certain parts of the economic cycle then culture matters vastly more whilst in others strategic play appears to matter more. Where IT is at the moment, then for most companies, strategic play trumps.

The second question was what would I do if I was RedHat? I'll leave that to the reader to make their own suggestions but I would start by thinking about and mapping the landscape.  I'm not a great fan of ad-hoc plays without good situational awareness nor am I a fan of relying on "everyone else is doing this" or differentiation plays when not appropriate.  As a clue, my first step would be to embrace Cloud Foundry as the platform play.

Friday, August 09, 2013

Why Google Glass will change the world.

I've only played with Google Glass a few times (not owning a set) and it takes only a few seconds of use to realise there exist multiple killer applications which will change the way we interact with the world.

For me, Google Glass is more 1.0 and a predecessor to more powerful systems where your entire field of vision becomes interactive through printed and transparent electronics over the lens. In this latter case a whole host of other potential applications becomes immediately clear such as annotation of objects and people of interest in your field of view. However, this is all rather obvious and inevitable stuff but even in its current form there is a long list of killer apps i.e. multiple "where to attack" that a company might build a business in. Of these, one of my favourite examples is the second opinion model which actually takes advantage of the current display characteristics of Google Glass.

Before explaining how it works, I thought I'd explain why it's a killer application. I'm a great fan of mapping competitive environments through user needs and using this to predict market changes and opportunities (i.e. where to attack) by finding better ways of meeting user needs. The ideal scenario for me is to find a universal but poorly met need and there is one abundant example today which Google Glass solves.

To explain, I want you to think back to the last time you were going to buy a car or rent a home or were fixing something or in fact any time when a second opinion would have been useful. You probably actually phoned someone, had to describe the situation and they may have given you some advice on this but the process would most likely have been tiresome. Trying to explain over the phone a particular car and get their advice on what's a good price, what should you look for etc is never easy. Try asking someone whether a particular thing you've seen in an auction is a fake?

It's much easier if you could show them and they can talk you through it. I've tried this in the past using skype on the phone but whilst that's better it's less than ideal. The core component of the killer app in Google Glass is hangout. When I create a hangout I can see and hear the person on the hangout whilst they can see what I'm looking at. Now obviously if we had full view interaction then they could point to areas of my field of vision that I should take an interest in but even in its current form the floating window of a person who can see what I'm looking at is highly effective and bizarrely reassuring. They can guide and direct me.

Now, ideally whatever situation I'm in I'd like an expert at hand. Burst pipe, instant plumber available who can see what I can see and give me advice. Travelling on a plane and the pilot and co-pilot are taken out by a mysterious ailment (I've obviously watched too many disaster movies) then instant pilot available. Is this car a good buy? Is this antique a fake? Funny looking boil on my leg should I go to the hospital?

There's an enormous list of situations where second opinions are good to have especially from someone who knows what they're talking about. And that's the killer app. A connection to a personal assistant with a long list of available experts on a wide range of topics who can create a 1 to 1 hangout for me with someone who is knowledgeable about what I'm looking at.

This one thing alone will change the way we interact with the world and stop me buying lemon cars, fake Picassos and pressing the wrong button on an airplane. As for the boil, I suppose I'll call NHS Direct but I bet it would be easier if I could just show them.

--- 9th Sept 2013

I was asked for other examples of "where to attack" with Glass. To be honest, the list is huge, there's lots of potential and a hangout is just one. A few of the more obvious examples include :-

Interpretation of audio / visual events. Hear a bird singing, a song, a foreign language spoken, the roar of a motor car or see some impressive building or some other event then click here to interpret and identify.

Augmentation / Annotation of objects in the field of view. See something you like then click to buy it and variations of this form including in-field translation of text i.e. "What do these hieroglyphs mean?" will be a thing of the past. If my partner sees a present which she thinks our son might like (e.g. a new toy) then I expect her not only to be able to send me a photo but leave a virtual note on it. So, when I go into some other toyshop then my glasses will identify it and I can add my own views on suitability etc.

Streaming interpretation of audio / visual events. Having a conversation on some subject? Don't worry, Glass will be constantly streaming relevant information on the discussion to your field of view i.e. "Who was in the Rolling Stones?" will be a thing of the past, the answer will be available immediately. Think MindMeld combined with Google Glass. Watching football will never be the same again.

Remote viewing and control. Worried you've left the house without turning the cooker off or setting the fish tank onto "automated feed" mode? Quickly transport your vision back to home and reset / change what you need.

Augmentation / annotation of location. Need information on where you are, history, culture, practices or need a taxi (or a self driven car i.e. the future "utility" taxi) to your location or simply want to leave a virtual message at this space for others (think a virtual "I was here", "The building is unsafe, do not enter" note) then Glass will have a solution for that.

Basically there's a mass of new activities related to augmentation, annotation, interpretation, remote viewing, remote control based upon audio, visual and location information. No-one should be in any doubt that Google Glass will change the world.

And whilst the above is huge it is but peanuts compared to what is coming and the rise of intelligent software agents.  The combo of this with Glass will create true marvels of incredible use.

Few will care that privacy will further evaporate. In ten years time as I wander through a local craft store with my Glasses identifying something it calculates that my Mother would like for her birthday based upon its discussion with her Glasses then privacy won't be top of my thoughts.

What I'll be thinking about is the suggestion that my Glasses will make that if I take a twenty minute stroll (the weather is good, I need the exercise) up to this shop (directions provided) then an acquaintance has seen a better version (my Glasses asked their Glasses) and I could also stop and have a chat with them at the coffee shop next door. As my Glasses will point out they're working for company XYZ and are producing some product relevant to my research.

Today's browser based / smart phone world will seem like a bad memory in a decade. Like flared trousers or mullets.

Tuesday, August 06, 2013

Why would Bezos buy the Washington Post?

Well, there's two parts to the business - the news gathering and distribution but also the printing business.

There's all the usual suspects - a content strategy, a future news publishing platform play, buying influence or a trophy. However there is also the simple issue that large scale printing facilities cost a fortune to set up. Washington Post's smaller printing facility (which apparently was sold in 2010) cost around $230 million to build. Its printing facility in Springfield is said to be a monster.

Well, as I said over a decade ago by 2020 large volume printed electronics should become a massive business and by 2030 most objects with macro physical and micro electronic features will be printed. It's all part and parcel of commoditisation of the manufacturing process. So I don't share the view that printing is dead. I take the view it has barely got started and it'll make cloud look like chicken feed.

So, along with a news business, Bezos has acquired a massive printing facility with capability, skills and know how at just about the right time to get prepared for a printing revolution in electronics? Oh, I see I've already commented on Buffett buying up newspapers with big printing facilities as well. Do they know something the rest of the industry hasn't figured out?

Is this their intention? No idea. It's pure speculation.

Still, there's money to be made in printing. How else do you think we're going to get ubiquitous computing without electronics flowing off the back of massive printers measured in many km per hour?

I've a really old and out of date report on this from 2006. It's fairly useless now (other than general interest) but if you want to know more on the subject then I'd look at the work that Kate Stone is doing or Bruce Kahn.

Anyway, my view is the news business is interesting and there is lots of overlap with Amazon's current properties. As for the printing business, well if the time, tech and capabilities are all right then this could be the bargain of the century. Why buy a newspaper to do this? Unless you wanted to clearly signal your intentions to competitors by buying up or building printing capabilities then it's the best way I know to silently slip into the field.

Of course, with Google's purchase of Motorola most people are talking about the patent portfolio and their latest range of smart phones with the customisations available in the Moto X. What people might not realise is that Motorola also has some serious history in printed electronics at large scale. We might actually be watching the carve up of the future of manufacturing by two tech giants.

For those who have never seen flexographic, offset or gravure printing of electronics then this video [removed link, now defunct 30/03/2015] might pique your interest or the following CNET article.

But then again, maybe the analysts are right and Bezos just bought a newspaper and it's all about the content, influence, a trophy, a news platform play or a bit of them all. I wouldn't however simply ignore the printing facility though.

Monday, August 05, 2013

The interface doesn't matter.

Excellent article by @somic on a "Response To Simon Wardley: Innovation in Interface Implementations" and well worth a read.

It's an intelligent article with little for me to take umbrage with, other than the "ideological footing" comment as very little of what I write comes from a particular ideology but instead from pragmatism about the eventual evolution of acts to more of a commodity. Other than that minor quibble, it's a well reasoned article (a blessing from the various Simon is a buffoon comments I get to read) but there are a couple of points worth raising.

Point 1 - Don't mix product and utility

The article states that "customers want differentiation". Differentiation in a function for a product like "cars"? Absolutely we want this.


"customers want differentiation" - in function for a utility like "electricity"? Really? Do I really want the interface to be changing from one socket to another? A different frequency, voltage and socket type? I don't think so. 

The two contexts are different, you cannot simply compare products versus utility. This alas is the flaw with the analogy given in the article. If we want to compare utility versus utility and use cars as the example then we would have to roll the clock forward to a time when cars are more of a utility i.e. a world of self driving cars where I just jump in and say where I want to go (a more automated form of today's taxis). 

Will I care about the diameter of the steering wheel in such a world … nope. It's an invisible component to me in a utility, I only care that I can jump in and say where I want to go. This act of saying where I want to go is the interface.

Will I care if someone changes the interface because of a desire to innovate and so when I jump in and say "Cowper Street" it returns back "Please tap out your directions on the driver's head rest in Morse code" … yep. I'll care quite a bit.

Will I care if different cars use different co-ordination / mapping systems such that a direction in one car will end up at a different location to another - you betcha. I'll tend to get quite annoyed at this one behavioural change.

I'll definitely be writing posts that say the interface doesn't matter, please limit innovation to above the interface (would I like a fast route or a scenic one) or below it (operational efficiency of car) but leave the interface as standard and just adopt the dominant de facto please.

Yes, we could have abstraction layers in which case I could walk around with a piece of kit which translates my speech to Morse where necessary and my directions to the right co-ordination / mapping system. Would it make me happy - nope. I'd think of it as a bloody waste of good engineering time and effort with no obvious benefit other than pandering to desires to innovate at the level of the interface.

Point 2 - We shouldn't care about the interface.

As the article states "Has the generic IaaS interface changed in any significant way? Not really - no one is innovating in the interface any more because a lot of interesting work there has already been done"

Perfect. I couldn't agree more and would reiterate this has been the case for some time. However, if the above is true then why on earth can we not just adopt one interface - the dominant de facto of the market. If the interface doesn't matter then we should simply adopt AWS APIs and co-opt their ecosystem because after all, this is a battle of ecosystems and co-opting is the right play.

Sunday, August 04, 2013

Can OpenStack dominate IaaS?

tl;dr : only if it embraces ecosystem over egosystem.

To understand why, we need to cover some basics regarding the market as it is and examine how to win the space.

First, the basics :-
  • Users (as in companies) are flocking to AWS because let's face it - it's useful. Many aspects of IT are evolving from a world of products (and rental services) to commodity (and utility) and this process is driven by competition (both supply and demand). As I've said for the last eight years, this is unavoidable.
  • Despite any inertia that we may have to the change from past practices and past business models, competition not only forces the process of change, it also forces us to adapt and adopt. The reason for this is that utility provision is not just about efficiency but also agility in building higher order systems which are new sources of wealth. As competitors become more efficient, agile and able to extract new sources of wealth then the pressure on us mounts - this is the Red Queen effect.
  • That mounting pressure has a network effect, as more competitors adapt then the pressure on us to adapt also increases. This is why a trickle becomes a flood with respect to these changes and the speed of change occurs more rapidly than we expect. This is known as a Punctuated Equilibrium and it's why Amazon has exponential growth.
  • An ecosystem model known as ILC (innovate-leverage-commoditise) can be built around utility services by exploitation of consumption information. Under this model, a company creates a utility which allows others to innovate. As those innovations diffuse and evolve this pattern can be spotted through consumption. Hence the company can use the ecosystem to spot new and successful change. This can then be commoditised into new components to grow the ecosystem.

    Under a well run ILC model then a company can simultaneously appear to be :-
    1. highly innovative, as others are actually taking the high risk gamble of innovation for it and successful changes are being included as components.
    2. highly customer focused, as data on consumption in the ecosystem is being used to determine successful changes.
    3. highly efficient, through volume operations and economies of scale in provision of the core utility components
    4. highly stable revenue, by being a first mover to industrialise a component to utility services the company gains large (volume operations) but well established revenue streams.
    5. maximum wealth generation, by being a fast follower to any spreading successful change then the company maximises new benefits without incurring risky and costly R&D.
All five of the above factors will increase simultaneously with the size of the ecosystem which can in turn increase in a super linear fashion to the physical size of the company.
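The Red Queen dynamic in the basics above can be shown with a toy simulation: each firm's chance of adapting rises with the fraction of competitors that have already adapted. This is a sketch only - the firm count, starting adopters and pressure coefficients are made-up parameters, not a calibrated market model:

```python
# Toy model of the Red Queen effect: the more competitors that have
# adapted, the greater the pressure on the rest to adapt. All the
# parameters here are illustrative assumptions.
firms = 1000
adapted = 10                   # early adopters
base_pressure = 0.01           # background chance of adapting each year

for year in range(1, 11):
    pressure = base_pressure + 0.9 * adapted / firms
    adapted += round((firms - adapted) * pressure)
    print(f"year {year:2d}: {adapted:4d} of {firms} firms adapted")
```

The output traces an S-curve: tens of firms a year at first, hundreds a year mid-way through, purely because pressure compounds with adoption - the trickle-becomes-a-flood of a punctuated equilibrium in miniature.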

Ok, now we have some basics lets understand the situation:-
  • Cloud is simply about the shift from product to utility. Infrastructure as a Service is no different. Companies have inertia but we are experiencing a punctuated equilibrium hence the change is likely to be very rapid as we're all forced to adapt.
  • Amazon is playing what looks very like an ILC model and it has a dominant ecosystem which is growing exponentially in terms of revenue. As AWS gets bigger it will become more innovative, more efficient and more customer focused, with more stable revenue and more wealth opportunities. It is literally chewing up the future market and it's way beyond the 800 lb gorilla in the room.
At first glance it looks a pretty dire situation and to be frank it is. This situation can be fought against by exactly the same mechanisms I described six years ago but it's just much more difficult to play the game today.

The way you play the game is as follows :-
  • Companies have a concern which is about second sourcing options and buyer / supplier relationship. What they want is a competitive market of IaaS providers. This concern is your friend and so embrace it.
  • The ecosystem effects can be neutralised by co-opting the ecosystem i.e. building a market of AWS clones. This will also make it easier for existing AWS users to use the market. NB. 
    1. you don't have to implement all of AWS but the most commonly used aspects of it to begin with. 
    2. innovation needs to focus behind the interface (APIs) on operational efficiency.
    3. you are only temporarily beholden to Amazon. As the ecosystem around the market grows to exceed the ecosystem around Amazon then the market of AWS clones actually takes control of the AWS APIs and can set the direction.
  • Compute demand is elastic but building data centres is time and resource constrained. Hence by introducing a price war, a market can increase demand beyond the ability of a competitor to supply and hence naturally fragment a market in its favour.
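That last play - elastic demand outrunning a supply-constrained competitor - can be sketched numerically. All the figures below (demand, elasticity, capacity, growth rates) are invented for illustration:

```python
# Toy sketch of the price-war play: compute demand is elastic, but any
# single provider can only build data centres so fast. All figures are
# illustrative assumptions.
demand = 100.0             # total market demand (arbitrary units)
elasticity = 2.0           # assumed: each 10% price cut lifts demand ~20%
price_cut = 0.20           # sustained 20% price cut per year
incumbent_capacity = 90.0  # incumbent can serve most of today's demand
capacity_growth = 0.3      # but can only add ~30% capacity per year

for year in range(1, 6):
    demand *= 1 + elasticity * price_cut
    incumbent_capacity *= 1 + capacity_growth
    overflow = max(0.0, demand - incumbent_capacity)
    print(f"year {year}: demand {demand:.0f}, incumbent capacity "
          f"{incumbent_capacity:.0f}, overflow to rest of market {overflow:.0f}")
```

Because demand grows faster than any one provider can build, the overflow - the share available to everyone else - keeps growing, which is how a price war naturally fragments the market in the market's favour.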
Now, ideally this game would have been played many, many years ago (2008 was ideal). But it wasn't. However, it's still possible to play it today (just about). What is needed is:-
  • a large player willing to spend billions on building an AWS clone around OpenStack.
  • consolidation of OpenStack providers around this effort and the creation of a market of AWS clones.
  • co-operation with other open source projects (Eucalyptus, CloudStack) to build an AWS compatibility suite and hence extend the AWS clone market to other technology platforms.
  • introduction of assurance services which monitor the market for compatibility.
However, it's worth also noting what is not needed and what will almost certainly drive OpenStack into a niche from which it won't recover. These are :-
  • continued focus on differentiating the API, as though the interface for a commodity is somehow a game changer.
  • continued misunderstanding of the power of ecosystems and why they have to be neutralised. 
  • focus on a temporary transitional market such as private cloud. Yes, you can win some space from VMware but that's buying into a market which is heading for niche under its own steam.
  • the collective prisoner's dilemma of OpenStack, with everyone differentiating for their own position and existing product sets.
OpenStack's chance of success has IMHO been severely harmed over the last few years by what I can only describe as "flat earther" opinions e.g. a product mentality of feature differentiation and company competition applied to a utility world ruled by service quality differentiation and a battle of ecosystems. In terms of strategic play, pretty much the only thing OpenStack has got right is being open.

They need to stop using product examples to justify differentiation plays and realise they are ultimately not in a product game. If they lose the public arena then it's niche time. They need to change soon. They need to learn how to play the game.

And this is the problem. OpenStack appears to have become a battle of egosystem over ecosystem. Without a forceful player changing this, the prognosis doesn't look good. This lack of strategic play is why I hold the view that OpenStack is a dead duck. Its future is with niches. I can't see this changing unless a miracle happens e.g. some level-headed knight rides to the rescue with a few billion to spare.

Saturday, August 03, 2013

Does commoditisation lead to centralisation?

tl;dr Maybe. 

This question came up again recently, so it's probably worth covering this very old ground. 

All activities commoditise to good enough components providing interfaces that allow higher order systems to be rapidly created - nuts and bolts, electricity, you name it. The process of how things evolve to become commodity is driven by competition (supply and demand) between all actors in the market. Evolution towards a commodity is inevitable if competition exists.

Do we end up with one interface or many? Generally, we tend towards very few for a single act though there can be constraints (e.g. geographical, political) along with specific forms of provision for specific (more niche) markets. 

Does this inhibit innovation? It's always worth separating out the interface (which represents the commodity or utility) from innovation above the interface (i.e. building electronic goods which consume a standard electricity supply) and innovation below the interface (i.e. operational improvements in power generation). It's worth noting that whilst the interface becomes standard, innovation occurs above and below the interface and the rate of innovation above the interface accelerates as the interface becomes more standard (componentisation effects).

Does that mean we end up with centralisation? Not at all. The question of centralised vs decentralised depends upon an entirely different set of economic characteristics and we can often yo-yo between both. Take, for example, electricity provision, where behind the "interface" a wealth of change is allowing decentralisation of some of the supply. The reasons for centralisation vs decentralisation often have a lot to do with strategic gameplay i.e. when a market evolves from product to utility, if a few players perform well and the rest of the competitors run around like headless chickens then it tends to centralise around those few players. It doesn't have to be this way.

So, take cloud for example and in particular IaaS. Amazon has played a good game so far and built a powerful ecosystem which it exploits. The competitors (with the exception of Google and to a lesser extent Microsoft) appear to have been fairly clueless so far. Hence, we're likely to see centralisation around Amazon, Google and MSFT. It didn't have to be this way.

How could it not be this way? Well, the shift from product to utility was highly predictable and not an unexpected change. By 2004 it was crystal clear the change was about to happen but we had plenty of warnings as far back as 1966. So, this isn't the usual case of "disruptive innovation" where a combination of inertia and unexpected market change (i.e. product substitution due to some change in characteristic, such as cable vs hydraulic excavators) disrupts a firm. In this case, it's a highly predictable and expected market change which disrupts, and that only occurs because of executive failure to prepare.

When Amazon launched EC2, the competitors had multiple options. For example, in '07 / '08 :-

1). They could have flooded the market with AWS clones and created a price war.  This would have increased demand (IT demand is elastic) beyond the ability of one competitor to supply (Data Centres are capital and time intensive) causing a fragmentation of the market. You would have created a federated market of clones and as the "clone" market grew to exceed the AWS ecosystem then it would in effect capture, dominate and own the AWS APIs.
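Option 1 can be sketched as a toy simulation. All the numbers here (demand doubling per cycle under a price war, the incumbent growing capacity 1.5x per cycle) are assumptions chosen only to show the fragmentation mechanic, not real market data:

```python
# Toy simulation (assumed numbers) of the fragmentation play: a price war
# doubles demand each cycle, but the incumbent can only grow data-centre
# capacity 1.5x per cycle; the unmet demand flows to a federated market
# of clones, eroding the incumbent's share.

def simulate(cycles, demand_growth=2.0, capacity_growth=1.5):
    demand, incumbent = 100.0, 100.0   # start: the incumbent serves it all
    shares = []
    for _ in range(cycles):
        demand *= demand_growth
        incumbent = min(incumbent * capacity_growth, demand)
        shares.append(incumbent / demand)  # incumbent's share this cycle
    return shares

print([round(s, 2) for s in simulate(4)])
```

Under these assumptions the incumbent's share falls from 100% towards a third within a few cycles; the remainder is exactly what a federated clone market would pick up.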

2). They could have flooded the market with their own individual systems, creating a price war. Again a fragmentation play but also a more aggressive (and dangerous) ecosystem play. Ecosystems, if properly used, can enable a company to simultaneously increase innovation, efficiency, customer focus, stable revenue and wealth generation opportunities at a rate in proportion to the ecosystem. By flooding a market with your own systems, you're basically going for sink or swim - either you'll build the bigger ecosystem and run away with a chunk of the market or you won't, and then it's often niche land for you. Chances are we would have ended up with a few big providers and Amazon might not have been one of them.

3). They could have flooded the market with their own systems based upon a common system, creating a price war to fragment the market. In this case you're likely to end up with a federated market around the common system and, assuming they quickly built a larger ecosystem (certainly possible in 2007) and didn't go off on some collective prisoner's dilemma, it would have easily won, creating a federated market without Amazon.

However, this didn't happen. The executives of the competitors did nothing and sat and watched AMZN steal the show. Today, AMZN's ecosystem is so large that your moves are limited by its effect but it's still possible to build a federated market around AMZN or to co-opt GCE. Anything else is likely to end up niche pretty quickly. Though niche isn't all bad, as you can eke out some form of existence, just not a very big one.

The point is don't confuse how things evolve to commodity and utility with the question of centralisation or de-centralisation. These are entirely different.

The process of evolution simply requires competition by all actors. The question of centralisation or decentralisation when a shift from product to commodity or utility occurs is usually a question of whether the competition is on the ball or clueless. 

Oh, and don't blame the engineers, culture or trot out the old "innovation dilemma" excuse - this is purely an executive responsibility. If centralisation occurs (which is likely) and you're not one of these big players then, as a former giant, you've simply been outplayed. Strategic gameplay is critically important when it comes to change. 

It's like Canonical (Ubuntu) vs RedHat (RHEL). RedHat dominated the field and let Canonical steal the future. I'm sure, as we shall see over this decade, RedHat has hardly been the smartest player. Even OpenShift versus CloudFoundry doesn't appear to be going that well for them. They should have bought Novell, at least it would have forced IBM to acquire them.  Oh well, it does look like one of those ignominious endings awaits. Buy-out by Oracle or CA? I suspect it'll be something pretty miserable.

-- Update 21st August 2013

According to this recent article by @mjasay, based upon Gartner's new research, "AWS Now Five Times The Size Of Other Cloud Vendors Combined". If I was a shareholder in any of those dominant hardware players of recent years, I'd be spitting fury by now. It's not like there wasn't oodles and oodles of warning.