Tuesday, July 30, 2013

The 'Innovation' Battle

For those of us who have accepted that IaaS (infrastructure as a service) is about the utility provision of a commodity, innovation is not about the interface but should be above and below the interface. Above the interface is all about the creation of novel higher order systems, in much the same way that utility electricity enabled television, radio, consumer electronics and computing. Below the interface is all about operational improvements (efficiency in production, distribution of production etc.) which normally manifest themselves in price vs quality of service. This is the tune which I and many others have been singing for the last six years, and +Randy Bias' 'open letter' on OpenStack and AWS compatibility is just a reinforcement of that point.

However, others sing a different tune. They believe that innovation of the interface is paramount, that the interface is what matters, and hence they strongly oppose adoption of the AWS APIs. It's akin to constantly arguing over what voltage and frequency electricity should be provided at rather than concentrating on the development of higher order systems or on operational efficiency. +Robert Scoble's 'open letter' on Why OpenStack shouldn't focus on AWS compatibility is very much of this ilk.

I cannot state strongly enough that what doesn't matter is innovation of the interface, and so it's better to simply adopt the dominant interface and instead focus above and below it, i.e. concentrate on operational efficiency in the provision of good enough components or on the creation of higher order systems (see figure 1). The same was true of electricity, nuts and bolts and a vast swathe of other activities that evolved to more industrial components over the last three hundred years.

Figure 1 - Competition and Componentisation effects


From the figure above, competition (demand and supply side) drives any activity towards a more evolved state, e.g. commodity and utility. Provision of an activity as a good enough commodity component is not just about efficiency; through the provision of a standard interface it also enables the rapid development of higher order systems where new sources of value (i.e. worth) are created.

This effect is known as componentisation and occurs throughout economic, technological and biological systems. As an activity evolves to more of a commodity / utility, the focus should ideally be on operational efficiency behind the interface and rapid development above the interface. A constant focus on 'innovating' the interface diminishes both operational efficiency and agility in building higher order systems by introducing additional costs of change, and should only be undertaken when there exists a clear reason why the interface is not good enough.

Given widespread adoption of the AWS APIs, there doesn't appear to be a clear case that the AWS APIs are not good enough. Differentiating on APIs against a dominant and growing ecosystem, rather than adopting those APIs and focusing on operational efficiency, seems like a losing strategy. Maybe I'm missing something?

Saturday, July 20, 2013

On Evolution, Strategy, Cycles and Bias

On Evolution

Many years ago I developed a model of how activities (i.e. things we do) evolve, due to both demand side (user) and supply side competition, from the genesis of the act to commodity and utility provision. See figure 1.

Figure 1 - Evolution


Whilst this process appears self-evident, i.e. simply looking around seems to show that the once novel and new (genesis) becomes commodity, I took the time (about six months) in 2007 to collect 4,088 pieces of data to plot this out and have since extended the data set. Whilst this represents a weak hypothesis, I have yet to find a better model to explain what is happening, though I continue to look.

The process is unavoidable because it depends upon the actions of actors in the market (i.e. competition). So, you can't stop it unless you prevent competition. This is also a good thing because it drives the development of higher order systems through the provision of defined interfaces (an effect known as componentisation), and without it our society would not have become so technologically advanced (see figure 2).

Figure 2 - Componentisation


We don't live in a society where everyone has their own home grown custom built nuts and bolts or home grown custom built electricity supply ... if we did, we almost certainly wouldn't have got much further than home grown custom built light bulbs.

On Strategy

Now, evolution forms one axis of a landscape map, a technique I (and others) use in determining strategic play. However, mapping is not what I actually want to talk about here; rather, it's the importance of strategic play.

Some time ago, I did a piece of work which looked at the level of strategic play in companies and their use of open techniques to manipulate the market (see figure 3). It turned out that some companies were highly strategic players who manipulated environments with open techniques whilst others were not. The size of each bubble in the figure represents the number of companies at that particular point.

Figure 3 - Strategic vs Open


What was interesting is that when it came to commercial success, the level of strategic play was a much greater indicator of success than the meme (whether open, cloud etc.). This piece of work added weight to a suspicion I had previously been unable to test: many companies survive not because of excellence but because their competitors are equally incompetent.

On Cycles

Now, it turns out that as a result of competition and the evolution of activities, we create inertia to change. That inertia is actually incredibly important as it delineates multiple states of competition which occur at macro and micro levels. These states are known as Wonder, Peace and War.

In the state of war, a pre-existing activity is commoditising, usually initiated by new entrants, whilst past suppliers are stuck behind inertia barriers. This causes a punctuated equilibrium (a period of rapid change) where new organisational forms appear and the past is flushed away (see figure 4).

Figure 4 - Wonder, Peace and War


Now, this model was developed some time ago but unfortunately there was no way of testing it until a cycle occurred. Fortunately, cloud computing gave the perfect opportunity to test it, and in 2011, using a set of techniques from population genetics (I'm a former geneticist by training), we detected one of the consequences occurring - a new form of organisation budding, a Next Generation. I've written on the characteristics of the Next Generation many times before.

On Bias

The problem for many companies is that in this state of war in IT, caused by the inevitable commoditisation of pre-existing activities through competition, we have not only inertia but a change in economic state, a punctuated equilibrium and new forms of organisation. The level of strategic play is critical here because you can no longer rely upon the incompetence of competitors. There is a new breed of organisations out there: they use ecosystems, they use technology and open source as weapons, they understand the landscape and they have great situational awareness.

Strategic play is critical in this world.

So, why do I think AWS is going to be the dominant force in infrastructure services? Well, because of their level of game play (use of ecosystems etc.). I think Google Compute Engine is a serious threat and MSFT is dangerous, but MSFT has its own problems (i.e. the biggest threat to MSFT is MSFT).

Why do I think OpenStack is likely to be a dead duck? For exactly the same reasons. The level of game play sucks (e.g. a collective prisoner's dilemma, a focus on the transitional market of private cloud, differential play rather than co-opting). Unless a real player enters the space around OpenStack, I don't rate its chances.

Unfortunately this stuff is uncomfortable reading for some, especially when their commercial self interest is at stake. So, I've been called everything under the sun from buffoon, to gaga, to pundit, to fake academic, to zealot ... it's a long list, with the usual questioning of my integrity by self proclaimed men of reason. What they don't do is attack the model by providing a better model to explain the changes we see; they just proclaim it is wrong and ask that we believe them. Well, I don't.

I'm a skeptic. I don't even believe the model, because that's all it is - a model. I simply understand that it's the best one I have found, until I find better. If you find it useful, good; but if you want to take it apart then just provide one activity which won't commoditise through competition, or provide me with a better model. I've yet to find either, and I'll be very happy if someone can point me to one.

I'm also happy to report that the process of competition driving commoditisation of activities still seems to be occurring. It's not going away. Nothing has changed.

Get used to it.

Wednesday, July 17, 2013

Where would we be if Linus wrote an MS-DOS clone 20 years ago?

In response to my discussion on the Trouble with OpenStack, I get a lot of questions of the kind "Where would we be if Linus had written an MS-DOS clone 20 yrs ago?" or "Yeah, Linux should copy Windows APIs" etc.

Ok, I'm a bit tired of repeating the same old stories for what has been 6 years now but ... one last time.

You're mixing the world of products with the world of utility.

They are not the same.

The game is not the same.

The power of ecosystems is different.

With products, you can't use consumption to determine the novel and diffusing acts that have been built on your product; you have to use market research, which is costly and slow. With utilities, you can use consumption, and this is a huge sodding deal.

With consumption information, you can run an ILC model (innovate-leverage-commoditise): be a first mover to industrialise an act, get everyone else to innovate on top of it, leverage consumption of the service to spot diffusion, be a fast follower to any successful change you've spotted and then commoditise that to new components which grow the ecosystem. A rough sketch of the 'leverage' step follows below.
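
To make the 'leverage' step concrete, here is a minimal sketch in Python. The services and figures are invented for illustration: given metered consumption of services built on top of your utility, flag those whose usage is accelerating - the candidates for fast following.

    # Toy illustration of the 'leverage' step of an ILC play: spot diffusing
    # higher order services from utility consumption data. All data invented.

    def growth_rates(usage):
        """Period-on-period growth rates for a series of usage figures."""
        return [(b - a) / a for a, b in zip(usage, usage[1:])]

    def diffusing(metered, threshold=0.5):
        """Names of services whose average period growth exceeds the threshold."""
        scores = {}
        for service, usage in metered.items():
            rates = growth_rates(usage)
            scores[service] = sum(rates) / len(rates)
        return [s for s, r in sorted(scores.items(), key=lambda kv: -kv[1])
                if r > threshold]

    # Hypothetical monthly consumption of services built on the utility.
    metered = {
        "image-resizer": [100, 110, 115, 120],   # steady, mature
        "mobile-backend": [10, 40, 160, 640],    # diffusing rapidly
        "report-engine": [50, 52, 51, 53],       # flat
    }

    print(diffusing(metered))  # ['mobile-backend'] - the fast follow candidate

A real utility would be mining billions of metered API calls rather than a toy dictionary, but the principle - consumption data replacing market research - is the same.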

By running this model, you can gain:
  • highly stable, high volume, increasingly low margin revenue
  • maximum wealth generation by being fast follower to novel
  • high rates of innovation (as everyone else is doing it for you)
  • high rates of customer focus (by leveraging data in the ecosystem)
  • high rates of efficiency (by focusing on this and economies of scale)
You have to manage it carefully, because people will grumble about you eating your ecosystem, but each time you do so you provide new components which grow it. Not only do you get all five effects, but all five increase as your ecosystem grows, and the ecosystem grows super linearly with the physical size of your company, i.e. under a well run model, as you get bigger, you get more innovative, more efficient, more customer focused, more stable and generate more wealth, all at the same time.

Have people really not twigged why Amazon isn't just winning the space but accelerating further away from its competitors? Do you think it's random that they sometimes gobble up their ecosystem?

Yes, you can grumble about Linux or MS-DOS, but these were product plays. You couldn't play an ILC game effectively in a product world, only in a utility one, because the consumption of APIs is what gives you the mechanism to do this. Oh, and there is absolutely no point in trying old product based feature differentiation strategies against it.

You might as well pick up a copy of Porter's Competitive Advantage and go "how shall we beat Amazon? Do we do innovation, customer focus or efficiency?" because guess what, they're doing all three of these plus stable revenue plus maximising wealth generation simultaneously against you and growing all of this with the size of the ecosystem. 

So, sure, if you want to go and try a differentiate-on-APIs route then you had better magic up a larger ecosystem rapidly, which means you need to drop billions upon billions upon billions in the next six months, and even then it's doubtful. A much more sensible route is co-opting that ecosystem, as old tales about product battles ain't going to help you one jot.

Finally, it's not like this game hasn't been known about for the best part of a decade. Wake up, smell the roses and realise you're not fighting in a product world. I'm sure you can find some analyst or flat earther to tell you that "it's just like Proprietary product vs Open product" or that "Ecosystems aren't important" but realise the game is different by simply opening your eyes and looking.

Blindly following a product mentality of feature differentiation in a utility world against a well managed ecosystem is not going to work out well for you.

--- Update 17th July

Just to comment, Porter's book was fantastic for its day and I still recommend it; it's still a good read, just remember that the game has changed since then.

Tuesday, July 16, 2013

Could CloudStack, Eucalyptus, Open Nebula and OpenStack all win in the cloud?

I was asked a question: could we follow a different road and have a market of AWS clones with multiple open source systems providing it, and hence all the open source systems win?

Well, assuming we're not talking about the parts of OpenStack fixated on differentiating from Amazon (on which I have strong opinions) but instead those focused on co-opting Amazon under an embrace (for now) and extend (when your ecosystem is bigger) play, then ... technically ... yes. But it's a difficult road.

It has always been possible to create a market of AWS clones with multiple different open source systems providing it. The problem is that in a market of AWS clones you need an open source reference model as the 'standard' for reasons of semantic interoperability. It's easier if that 'reference model' is in fact a single system (e.g. CloudStack or Eucalyptus) that all the providers implement. This is the more likely route, but there is another way.

You could have multiple different open source systems that providers implemented and still maintain semantic interoperability if all those open source systems complied with another 'reference model'. This would actually be beneficial for reasons of competition on operational efficiency and reduction of systemic failures.


So, surely that means we could have CloudStack, Eucalyptus, Open Nebula and some of the OpenStack party create a rich set of AWS compatible environments? Well, of course, but there is a problem: how do you define one thing amongst this group as the 'reference' model?

There is only one way around this that I know and it was a core part of the Zimki plan back in 2005.

For a quick history lesson: Zimki was a JavaScript based platform as a service, all provided through APIs, with object storage, billing, management tools, easy switching between Zimki installations etc. Think of Cloud Foundry, but way before Cloud Foundry and limited to one language. Zimki was supposed to be open sourced in 2007 in order to establish a competitive market of platform providers, but this didn't happen because the parent company was advised that the future was outsourcing. Long story.

A key part of Zimki was the massive test scripts - everything was accessible, and hence testable, through APIs. Each different installation could therefore be tested and confirmed as compatible (within the bounds of the test script). Think Cloud Foundry Core.

As part of the open sourcing, a remote compatibility testing service was planned along with a trademarked image. Any provider meeting the compatibility tests could use the "Zimki Provider" logo rather than the "Powered by Zimki" logo. We had assumed that providers would want to create operational efficiencies, but for a market we wanted to ensure compatibility of services and switching; hence the compatibility service (we called it an assurance service) was essential. Zimki also had goals of establishing an exchange, and for this the assurance mechanisms were critical. This was all written and talked to death about between 2005 and 2008; it's old ground.

Now, it is possible to play the same trick with AWS compatibility today. Rather than use an open source system as the 'reference model', you could instead create a massive set of tests and a compatibility service. This way you could have multiple open source systems complying with the 'reference model'. I've told enough people enough times in the past that this is a possible route, but there are no takers so far. For point of interest, there's potentially huge value in building an AWS compatibility service (assuming we get a market of AWS clones) due to exchanges and other instruments. And yes, you could have multiple compatibility services; in the financial world they would be the equivalent of rating agencies.

Hence these open source groups could come together by creating a massive set of test scripts and providing some form of joint AWS compatibility service. Given a large enough set of test scripts, they could ensure reasonable fidelity of behaviour between the different open source systems and with AWS. If they did this, the compatibility service would be the 'reference' model, each provider could show compatibility to it (reinforced with trademark images), and hence you could create a market of AWS clones with multiple open source systems.
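
As a flavour of what such test scripts might look like, here is a minimal sketch in Python using boto3. The clone endpoint and bucket names are hypothetical, and a real suite would need hundreds of thousands of such cases covering error codes, consistency behaviour and edge cases; this is just the shape of the idea - run identical operations against AWS and against a candidate clone, and assert that the observable behaviour matches.

    # Sketch of a single behavioural compatibility test: the same S3 calls
    # are run against AWS and against a candidate clone, and the observable
    # results are compared. The clone endpoint is hypothetical.
    import uuid
    import boto3
    from botocore.exceptions import ClientError

    ENDPOINTS = {
        "aws": None,                              # default AWS endpoint
        "clone": "https://s3.clone.example.org",  # hypothetical AWS clone
    }

    def observe(endpoint_url):
        """Create a bucket, write and read an object, and record the error
        code returned for a missing key."""
        s3 = boto3.client("s3", endpoint_url=endpoint_url)
        bucket = "compat-test-" + str(uuid.uuid4())
        s3.create_bucket(Bucket=bucket)
        s3.put_object(Bucket=bucket, Key="probe", Body=b"hello")
        body = s3.get_object(Bucket=bucket, Key="probe")["Body"].read()
        try:
            s3.get_object(Bucket=bucket, Key="missing")
            error = None
        except ClientError as e:
            error = e.response["Error"]["Code"]
        return {"body": body, "missing_key_error": error}

    def test_put_get_compatible():
        results = {name: observe(url) for name, url in ENDPOINTS.items()}
        # Semantic interoperability means identical observable behaviour,
        # right down to the error code for a missing key.
        assert results["aws"] == results["clone"]

The crucial point is that the test suite, not any single codebase, becomes the 'reference' model which providers demonstrate compatibility against.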

So how difficult would this be? Well, given that many of the groups must already have some form of AWS test scripts, it shouldn't be technically difficult to create a large enough set (think many hundreds of thousands of tests) and establish this. However, you need to bring those groups together to make it happen, and that is the tricky part (or it certainly has been in the past).

It would probably require all the interested parties getting together in some form of AWS Compatibility Summit and realising that this is in their common interest.

But, yes ... it is possible. First of all though, it requires someone to try and make it happen.

The trouble with OpenStack

Depending upon which analyst or flat earther you listen to, OpenStack rules. I don't buy this for a second and instead take the view that it's most likely to be a dead duck. Why?

Well, from its very beginning, when I put OpenStack on stage at the OSCON Cloud Summit in 2010, I warned several of the executives behind the project of the dangers of Amazon's ecosystem. Being an Amazon clone and co-opting that ecosystem was priority number one in my view. Alas, they wanted to talk about differentiation and not being beholden to Amazon. This was their 'strategy' in a commodity market, and they ignored the fact that if you grow your ecosystem to be the larger one then you actually control the API. Since that time, I've heard this story of differentiation repeated over and over again. It felt as though 'anything but Amazon' ruled their thinking rather than sound strategic sense.

Now, I need to be clear here. This wasn't everyone. There was one particular individual behind OpenStack (who I previously worked with) who really understood the importance of co-opting and how to play a good game - Rick Clark. Unfortunately, that battle didn't seem to be won. Personally, I take the view that if Rick had been given entirely free rein in the project then things would have been very different.

Anyway, that isn't what happened, and so today we find that OpenStack has a significant developer and vendor ecosystem but is characterised by a collective prisoner's dilemma. Forget fidelity with Amazon; there isn't even fidelity between OpenStack distributions, nor any well established mechanism for achieving it (though first steps have been made). The solution to this problem, the use of trademarks, has been well known for many, many years. To compound matters, the focus of OpenStack also seems to have shifted to private rather than public cloud, as though avoiding the battle with Amazon has become a goal.

There are some voices of reason in the mix. People like Randy Bias have a relentless focus on building AWS compatible OpenStack clouds (something I strongly agree with, being an advisor to his company), but overall, in my opinion, the potential of OpenStack has been squandered by a heady mix of marketing and poor strategic play. If only the wisdom of Mark Shuttleworth had been listened to, or that of potential customers like Netflix's Adrian Cockcroft - "please try to build AWS clones that scale".

Don't get me wrong, I'm all in favour of an open source reference model as the basis of a cloud market - I keynoted on the necessity of this at all layers of the stack at OSCON in 2007. Nothing has changed my mind since then, but OpenStack IMHO failed to follow the path of pragmatism and give users what they've been screaming for - a market of public AWS clones. Even the recent survey of OpenNebula users reinforces the point.

Does this mean OpenStack is out for the count? Well, not quite - as I mentioned last year, if the players behind OpenStack flood billions into the system and create a massive competitive market around OpenStack by the end of 2013, they can still compete with this whole differentiation play. I see the hope of that happening as somewhere between Bob Hope and no hope, and Bob has left town. Time is running out; hope is fading fast.

So, is that it for OpenStack? In my view, yes. Short of a miracle, such as OpenStack announcing it'll become an AWS clone or some new entrant coming into the market, grabbing OpenStack and building the world's largest public AWS clone by flooding billions into its creation, I can't see it ever reaching its potential. In fact, I see things disintegrating as VCs start to ask "where's our future money?"

Do miracles happen? Unlikely. You'd need a company with huge amounts of capital (many, many billions) and vision, a great strategist and practitioner who understood the importance of ecosystems, and you'd have to build a rock solid engineering team and large scale operations in lightning fast time with a brutal commodity focus. That seems like wishful thinking; the song "Livin' on a Prayer" comes to mind.

So, barring miracles or a massive market somehow forming in the next six months, I hold to the view that OpenStack is a dead duck. I don't see that changing. It shouldn't have been this way; it should have been an AWS clone from the very start.

-- Update 16th July 2013

There are a couple of other scenarios where OpenStack might succeed beyond forming a massive market by the end of 2013 (unlikely), a u-turn on the whole differentiation play (unlikely) or some other miracle (unlikely). These include co-opting GCE (Google Compute Engine) or magic lobbyists somehow persuading governments to ban Amazon or to adopt some other API set as the standard. The former is possible and raises its own set of questions; the latter assumes some rather strange behaviour by governments and seems pretty unlikely.


Tuesday, July 09, 2013

The console.log() to printf() conundrum

"What's the difference between console.log() and printf()" is an interesting question. One which can dive into the semantics of language, the concepts of streams, functions and object methods, past design choices and object orientated design, stack vs register, the history of language, the hierarchy of languages and this is for starters ... there's a lot to choose from.

Or it could simply result in "not a lot, they generally both print to screen". Where you choose to take this discussion depends upon the audience, their willingness to be involved and how willing you are to make the distinctions by fighting through some of the inevitable confusion.

I'm often faced with this choice; we all are, and hence we have to be mindful of the context we find ourselves in. Most recently the same issue came up with "What's the difference between Open Source and Free Software?" - well, there's a lot. For me the gulf is as large as console.log() to printf(), but that's because I'm interested in the history, the culture, how each movement formed, the ideals, the overlap, the separation and the development of both.

To most, they are identical. The distinctions are lost in the magnolia of modern reporting. Does it matter? Well, that's another question - I tend to think it does, but I'm well aware of how we tend to paper over the fine cracks of history to produce digestible prose.

Monday, July 08, 2013

A question of standards ... and patents

Variation is essential for competition and competition is essential for evolution.

At first glance, you could argue that we never need any form of standards. However, as things evolve towards more uniform, standard, commodity like components, they enable the evolution of higher order systems which in turn evolve.

Evolution begets genesis begets evolution.

Hence we have a natural conflict within technological and economic systems, namely between variation, which enables evolution of a system, and standardisation of that system, which enables the evolution of the higher order systems that consume it. In other words:

Variation is essential for evolution of a system but lack of variation is essential for evolution of higher order systems.

Standard nuts and bolts do limit variation of the nut and bolt, but they enable rapid development of higher order machines. Standard electrical interfaces do limit variation of electricity supply, but they enable rapid development of higher order systems that consume electricity.

The history of technological progress involves a constant resolution of this conflict. 

But the question is: how do you choose a standard, and when do you know it's right to choose one? The answer is generally that you don't. Fortunately, we have a mechanism which can help us determine the answer - it's called us, or more aptly, the market.

Our ability to make a reasonable choice depends upon the size of the group involved (i.e. a large market trumps a committee) and experience (i.e. examination of actual use trumps a survey of people who aren't involved). Markets, because of the number of actors (both consumers and suppliers) involved in active use, including those creating higher order systems, are a pretty good means of identifying standards. What the market chooses is commonly called the de facto standard.

The only time any form of interference is needed is when it becomes clear that the market is failing, i.e. some form of level playing field is not being created for a maturing activity. At this point, governments often have to step in, and ideally this should be through open standards in order to create a better functioning market.

Hence, in the case of cloud, we're almost certainly too early for such interference. The market seems to be functioning well, de factos are emerging and there seems to be a high level of competition. All looks good and healthy. In the case of document formats, a level playing field for a mature activity didn't seem to be forming, and hence the introduction of an open standard like ODF seemed a necessity to correct this.

Making a choice on standards is not easy and must be carefully balanced with user needs, the maturity of the market, how well it is functioning and the state of evolution of an activity. Interference should be minimised, especially when it is clear that there exist high levels of competition and a probable de facto (e.g. with cloud IaaS, the AWS APIs have multiple open source implementations from Eucalyptus to CloudStack to Open Nebula to various distributions of OpenStack). Choice should be kept to a preference unless the situation is clear and the Government can make an informed decision.

Alas, standards can be gamed to the advantage of a firm. There are many historical precedents of companies using standards processes to undermine others. I tend to view the use of committees to create standards as a last resort, best avoided. Do we need standards in the cloud? Yes, but that's already starting to happen in the market, so let it happen. There are, however, places governments can help, such as simplifying and unifying legislation like data protection rules. Which brings me to my next topic ...

On Patents

The reason I mention this whole topic is a conversation on patents. There is variation in the patent systems of different countries, and that's actually good for competition. We are far from knowing what a good enough patent system looks like, and attempts to force a uniform patent system are just as likely to create problems as resolve them. The patent system itself suffers from what I consider to be a one size fits all mentality, in that the length of terms is generally fixed. You can never find a single length of time which fits all markets, i.e. what is too long for the software world is too short for the pharmaceutical world.

A better option, which I've argued for over the last decade, is to allow the length of term to be variable up to an upper limit (say 25 years). For every patent, the length of term should be set to "slightly more than the likely time for independent discovery of this invention". Hence, if you had patents in the software world, they should probably be of the order of a few weeks, whereas the discovery of a teleportation system certainly merits consideration for the whole hog.

But how do you determine the right mechanism for this? How do you determine the right length of time? You don't. You let the market fight it out and only interfere when what is created is not suitable. But in order to do this, you need to set the conditions carefully and create a conflict which resolves the issue.

In the same way that the conflict between evolution today (variation in an underlying component) and evolution tomorrow (variation in higher order systems, based upon a lack of variation in the underlying component) helps create de facto standards for the market through competition, you need to set up similar conditions for patents.

Hence your patent process, a trade between society and innovators, could start with the award of a patent with a term of "notional length of time equal to [time awarded], initially determined to be a maximum time for independent discovery, with any actual time to be determined by market conditions". Hence I get my patent for, say, five years, but I know that the time actually awarded might turn out to be less.

Now, we need to set up the conflict. The company (and its lawyers) will have an interest in gaining the maximum time, so we need a mechanism to encourage others to minimise it. The best way to do this is to put the whole value generated (which increases over time) up for grabs.

An example would be: "Should the length of patent time exceed the actual time for independent discovery, then any party can claim against the patent holder all direct and indirect revenue along with any costs incurred associated with the invention. An equal sum to that awarded to the party must be paid to the state."

This creates a hard choice for patent holders: the longer you hold onto the patent, the larger the risk becomes that someone will sue you for exceeding the time of independent discovery, and the more that is at stake. You can't set up a subsidiary to sue yourself, as any successful win means an equal payment to the state.
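
To make the incentive visible, here is a toy model in Python of the mechanism quoted above (all figures invented): the holder's revenue accumulates while the patent is held, but once the term held exceeds the time to independent discovery, a claimant can recover the lot and an equal sum goes to the state.

    # Toy model of the proposed patent mechanism. Hold past the time of
    # independent discovery and a claimant can recover all accumulated
    # revenue (plus costs), with an equal sum paid to the state.
    # All figures are invented for illustration.

    def holder_net(years_held, discovery_years, annual_revenue, claim_costs=0):
        """Net position of the patent holder under the proposed rules."""
        revenue = years_held * annual_revenue
        if years_held <= discovery_years:
            return revenue                 # retired in time: keep it all
        award = revenue + claim_costs      # recovered by the claimant ...
        return revenue - award - award     # ... plus an equal sum to the state

    # Independent discovery would take 5 years; revenue is 1m a year.
    print(holder_net(3, 5, 1_000_000))   # 3000000  - retired early, safe
    print(holder_net(8, 5, 1_000_000))   # -8000000 - held too long, ruinous

The longer a patent is held past the likely discovery time, the worse the downside becomes, which is precisely the pressure to retire early that the mechanism is designed to create.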

For a patent troll, along with the normal battle of trying to apply your patents, as soon as you've won you potentially open yourself to counter suits, depending upon the strength of your patent. The cost could be the original claim, which is then paid to a third party, plus the same again to the state. Hence, winning with trivial patents could become very costly. And yes, you would enable an entire market of lawyers acting as anti-patent trolls, suing any and everyone for holding patents beyond what can be demonstrated as a likely time of independent discovery, but then that's the point: to create a natural conflict.

The conditions set out above would encourage patent holders to use patents (as is the current situation) but also to retire them early and to apply them only for a reasonable length of time. Now, I'm not proposing the above be used; I'm setting it out as an example of how you can create conditions which enable the market to determine the answer.

What you want to avoid is presupposing that you know what is right (i.e. 20 years as a length of term); instead, create a natural conflict which enables the market to decide. It's the same with standards: there's a natural conflict which the de facto resolves, and the only time you should directly interfere is when the market is failing.

Friday, July 05, 2013

On Prism, European Clouds and European Cloud Standards

First, as I said several weeks ago, I love PRISM.

This is not because I think PRISM is a good idea but because it is a great opportunity to be exploited for the benefit of the European Union. The 'breach of trust' can be used to renegotiate trade, create a more favourable economic environment for Europe and even resolve some tax issues with certain companies.

I outlined how to do this in the above linked post; it begins by declaring outrage and threatening to ban US web services in Europe. Alas, negotiators will always call your bluff, so you have to be prepared to follow through and encourage the development of our own equivalent systems. This will require a huge investment fund. I suggested about 100 billion Euros, which is nothing more than a modern technology equivalent of 'Freedom's Forge'.

I'm all in favour of re-balancing trade and the market in this way. Yes, there will be years of pain, but the benefits can be explained if the investment fund is large enough. You want to create a massive start-up boost to fill the vacuum that banning those services will create, let the market form and then let the VCs pile in.

Would I do this? Without any hesitation I'd use this as my negotiating position.  I love PRISM and thank you America.

What about the idea of building a European Cloud? 

Hmmm, hold your horses here. What we want to do is encourage the market to fill the vacuum, not for the EC to attempt to build it themselves or get one of the many IT dinosaurs to roll in with an 'outsourced solution' for Europe. Markets and competition are pretty good at this stuff; that's why we should take advantage of them.

What we want is a strong European market which meets the needs of users. We'd simply be exploiting the 'breach of trust' around PRISM as the excuse to boost that European market without anyone being able to shout 'protectionism' and start a trade war.

So, if by European Cloud you mean a 100 billion Euro investment fund to enable a massive start-up market, then I'm all in favour. If you mean the European Commission should build a cloud for Europe, then no. Please don't. You'll outsource it all, the costs will spiral, it'll become a cluster frack and cause untold damage to the economy. Let the market do what markets are good at.

Well, at least we're going to need European Cloud Standards?

Hmmm, hold your horses here. As I said before, markets are pretty good at sorting this out and creating de facto standards which meet users' needs. You only need government interference when the market is clearly not functioning correctly, when there is little evidence of a level playing field developing, or when the activity is mature enough that there is a clear benefit.

There's been a glorious history of standards bodies interfering in no-one's interest but their own, e.g. OSI vs TCP/IP. Standards can often end up being more about politics and favouring one vendor over another than about any notion of user benefit, though they are always described as such.

In the cloud, we're too early for standards committees to come rubber stamping standards in the space; the de factos are forming, so we should let them form. Of course, we could do all sorts of things with legislation to make cloud adoption easier.

So, overall ...

1) Do I like PRISM ... yes, and god bless America and the NSA for handing this golden opportunity to us.

2) Do I think we should brutalise Trade Agreements by exploiting PRISM and the breach of trust ... oh yes, absolutely.

3) Do I think we should be prepared to go the whole hog, ban US services and create a 100 billion Euro investment fund for small tech start-ups in Europe to boost the market ... oh yes, without hesitation.

4) Do I think the EC should build a cloud ... oh frack no. Let the market do that. We don't need some massive cost overrun of an IT blunder outsourced to the same sort of companies that have failed to exploit the shift to cloud and been living off overcharging governments.

5) Do I think the EC should create standards for the cloud ... oh frack no and double frack no. Let the market do that. We don't want a set of imposed standards that users don't want because some standards body committee wants to earn its keep by 'helping'.

---

Update 1st Sept 2013

I was recently asked what impact I think PRISM will have on the cloud industry. Well, ignoring any gaming (e.g. renegotiation of trade, tax positions and the potential long shot of a 'Freedom's Forge' play), any impact is likely to be confined to those who have inertia to the change anyway.

Many of these laggards were always likely to hold out against using cloud for as long as possible, and PRISM is just another excuse in a long list. Beyond this, I can't see PRISM having any material effect whatsoever, or if it does, it'll be minimal and short lived.

This whole situation reminds me of the late 1990s / early 2000s and the resistance against e-commerce because it "wasn't secure", "most people are not going to use a credit card online", "it dis-intermediates our existing channels", "people want to go into shops and have the personal touch of meeting a sales rep", "we're a relationship business" and any other excuse that could be found.  Many of those companies who finally did take the plunge often built their own online payment systems because somehow "in-house security was better".

The idea of using a third party payment system from a company like PayPal was generally frowned upon by certain groups and considered fairly risky. Did this stop the change ... nope.

Is PRISM going to stop cloud ... nope.

Thursday, July 04, 2013

An interest in interest

Today, I noticed two adverts: one for some form of card with an APR of 627%, and one for a current account with around 2.5% interest. The problem with these adverts is that many people realise neither how percentages work nor the impact of compound interest. Hence, I prefer a simpler way of expressing this in easy to understand language.

What I'd like to see is adverts shouting out the impact of either saving or borrowing £100 under these terms for five years. However, the problem with this is the impact of tax, any charges and the issue of inflation, i.e. tomorrow's pound may buy more than today's, but it is more likely to buy less, and hence you have to adjust for this. Even if you decide to deal with inflation, which inflation figures do you use? We have a tendency to change the basket of goods we measure on. However, let us take CPI as the inflation measure and, for the simplicity of this example, assume the figure is 5%.

So in the case of saving, my £100 would become:
£100 * (1 + 0.025)^5 / (1.05)^5 = 100 * 1.1314 / 1.2763 ≈ £89.

Of course, the actual amount you get back will be higher, but its spending power due to inflation is lower, hence we reflect this in real terms of today's money. So my current account advert, rather than saying 2.5% interest, could simply state "Save £100 with us and we will give you back roughly £89 in five years time (minus any tax)".

For my loan of £100, the amount to be repaid would be:
£100 * (1 + 6.27)^5 / (1.05)^5 = 100 * 20,308 / 1.2763 ≈ £1,591,000.

Hence my credit card advert, rather than saying 627% APR, could simply state
"Borrow £100 from us and pay back roughly £1,591,000 in five years time (plus any charges)".

Now these are figures people can understand. Obviously the loan industry will be trying to get reported CPI figures shown as higher (so that the real cost appears lower) and the savings industry will be trying to get CPI figures shown as lower (so that the real return appears higher). However, that sort of tension is always good if you want to get something closer to the truth.

So, I thought I'd create my own updated adverts. OK, they're not as jazzy as the real thing, but I did add kittens for a bit of cuteness and some action statements - spend and save. I also think the message is a bit clearer.


Wednesday, July 03, 2013

There can be only one ...

Many years ago, at OSCON 2007, I gave a talk on the importance of open sourced standards for creating competitive markets in the developing utility computing industry, at all levels of the stack. There are many things we require for a functioning market, but the most basic is switching, which at the very least requires semantic interoperability. To achieve this and to create a free market, as opposed to a captured market, requires running open source code to be defined as the 'standard'. Any specification will result in interpretation, and whilst such translation errors are acceptable in a product world, they will not suffice in a utility world.

I've repeated this message many times over the last six years, including how "open source is absolutely essential for creating a competitive free market", but this has to be tempered with a focus on user needs and the reality of the market. Hence, I've long argued for open source AWS clones and the acceptance of EC2 / S3 / EBS as de facto standards. Whether we like it or not, Amazon has a huge ecosystem advantage which cannot be ignored. If you want to create a successful competitive market, then embrace rather than differentiate is the order of the day.

Many people tell me that API compatibility is not enough, to which I normally smile sweetly and nod whilst muttering unmentionables and banging my head against a wall. Of course compatibility with the APIs is not enough, because of interpretation errors. For semantic interoperability, systems have to behave in the same way, which is also why a reference model (running code) must be the standard and not a piece of paper.
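
To see why, consider a hypothetical paper specification that simply says "get(key) fetches an item". Two implementations can expose identical APIs and both claim compliance, yet interpret the missing-key case differently. A contrived sketch in Python:

    # Two hypothetical implementations of the same paper spec: "get(key)
    # fetches an item". Identical API signatures, different interpretations
    # of what happens when the key is missing.

    class StoreA:
        """Interprets a missing key as 'return None'."""
        def __init__(self):
            self._items = {}
        def put(self, key, value):
            self._items[key] = value
        def get(self, key):
            return self._items.get(key)

    class StoreB:
        """Interprets a missing key as 'raise an error'."""
        def __init__(self):
            self._items = {}
        def put(self, key, value):
            self._items[key] = value
        def get(self, key):
            return self._items[key]   # raises KeyError if missing

    def exists(store, key):
        """Client code written and tested against StoreA."""
        return store.get(key) is not None

    print(exists(StoreA(), "x"))   # False - behaves as the client expects
    try:
        print(exists(StoreB(), "x"))
    except KeyError:
        print("KeyError!")         # same API, different behaviour

Both implementations pass an API-level check; only one behaves the way code written against the other expects. Running code as the reference model settles such interpretations, whereas a document leaves them open.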

Many people tell me that we will have multiple open source systems in the cloud, to which I normally smile sweetly whilst muttering unmentionables and banging my head against a wall. Of course you'll get different islands in the cloud, but for a market you're going to need semantic interoperability, and because of the complexity of these systems that requires one open source reference model as the standard for that market. You're unlikely (though it's not impossible) to have multiple reference models for the same 'standard' in a market, because each will behave differently. Whilst we will end up with some islands, there will be one dominant market based upon the dominant open source reference model providing the de facto standard, as we are fundamentally talking about a commodity market with little differential value. That is likely to translate into one reference model for a large competitive market of AWS clones, one for GCE clones, etc.

Many people tell me that OpenStack will win, to which I normally just start muttering unmentionables and banging my head against a wall. There are many potential candidates, for example CloudStack, Eucalyptus and even Open Nebula. Key will be the creation of a competitive market, and OpenStack already has its own collective prisoner's dilemma and questions of interoperability between different distributions of OpenStack. Don't get me wrong, open source is important and a powerful competitive weapon, but even more important as an indicator of long term success is the level of strategic play. Mixing both is extremely powerful, and there are great players out there such as Netflix. Poor strategic play was why I called Apple's future into question at the height of Apple fever in 2010, when Jobs was still in charge, resulting in me taking one of my usual 'cup of tea' bets in 2011. It's for the same reason that I call into question the future of OpenStack.

The reference model I favour is CloudStack. First, CloudStack has been clear that it wants to be an AWS compatible cloud (the same as Eucalyptus). Second, CloudStack is part of the Apache Software Foundation and backed by a strong player. Third, CloudStack has numerous large scale implementations and, most importantly, shows signs of good game play.

Oh, and before you ask: don't I advise CloudScaling (an OpenStack shop), and shouldn't I therefore be saying OpenStack over CloudStack? Well, yes, I do advise CloudScaling. I also think +Randy Bias is spot on with his focus on making an OpenStack version compatible with AWS, because that's what users need. However, when I look at the game being played between CloudStack, OpenStack, Eucalyptus and Open Nebula, I hold the view that CloudStack is playing the better game overall (though I have to admit that it's splitting hairs between CloudStack and Eucalyptus in terms of game play these days).

Many people tell me that by promoting AWS compatibility, Amazon will win, to which I normally just start banging my head against a wall. If you create a competitive market of multiple providers around a single open source reference model as the standard, one which is as close to semantically interoperable with AWS as possible, then it will grow and you can take advantage of the ecosystem AWS has built. The wider market is looking for second sourcing options.

Once the ecosystem around this open source reference model (i.e. all the providers) becomes bigger than AWS, the open source reference model will start to define the future market. Yes, you will need some form of assurance system (e.g. a testing service and trademarked image to assure end users that a provider is compatible) to prevent a collective prisoner's dilemma, as companies do like to differentiate on features even though it's clearly not in their interests. Never underestimate the stupidity of companies in undermining their own long term future.

You should never confuse AWS (in terms of the APIs and the services they represent) having won with Amazon having won. In the short to mid-term, yes, it's likely that Amazon will take a huge chunk of the market, but that doesn't mean this situation can't be supplanted by a marketplace of AWS clones. What matters is the game play of the individual actors, and even Amazon might play an open source card yet. If Amazon does end up dominating the entire market for the long term, well, you can put that down to shoddy game play by other executives.

Many people tell me that this is wrong because the AWS APIs are proprietary and open APIs will win, to which I normally just start thumping with a rolled up newspaper. All APIs are principles, not expressions, and therefore not subject to copyright law in the EU or US. You can't "own" an API. You can certainly be the body that provides the system behind it (as with Google providing the open source Android), but APIs are inherently open. To wrestle control of an API you need to become the dominant ecosystem. Ignoring the existing dominant ecosystem and user needs in pursuit of an open ideal, one which doesn't really exist since all APIs are open, is ... words fail me.

But what about patents? Have you read any standards documents recently? All standards can be challenged by third party patents. In the world of software, this is a thorn that governments will finally have to grasp, rebalancing the trade to society's advantage. In the UK, judges do a fairly good job of reminding people that you can't patent software, or software dressed up as something else, though lawyers are forever trying to get examples past them.

Anyway, I'm tired and bored to death of this discussion. Fortunately, in the platform space there are good players who understand the game, like Cloud Foundry. They get the importance of markets, ecosystems and assurance systems, though I do wish they would take Cloud Foundry into the Apache Software Foundation. In the application space, I'm sure we will see others. And yes, there are lots and lots of companies and analysts who use the words but don't seem to understand the game.

Whether we want it to happen or not, the cloud market is forming and the 'standards' are already starting to develop in the marketplace. Despite the concerns of many, this is generally fairly healthy, and open source has a critical role to play. Of course, there are many twists and turns yet to come. One huge concern of mine is that governments might waltz in and attempt to interfere with this process.

The UK government, in its open standards policy, sensibly defined a set of criteria, including that standards should not impose undue costs and that effective selection requires pragmatic and informed decision making. This is all good, because what you don't want to do is select a 'standard' in a market which is still forming. Standards themselves can be, and have been, used as ways of manipulating markets, not necessarily in the interests of end users, though it's often window dressed as such.

For example, let us suppose I was a hardware manufacturer and OpenStack provider concerned about the growing strength of AWS and worried that I couldn't differentiate in a commodity market. I would have two choices: I could either compete directly by providing an AWS clone with better price vs QoS, or I could find some way of hampering AWS to my own advantage. If I was inclined to take the latter route, I'd probably argue for the adoption of open standards (such as OGF's OCCI) as some form of Government standard under the argument of interoperability. If successful, I'd use this to promote the selling of my OpenStack distribution as compliant with Government standards.

That OpenStack has its own interoperability issues, that OCCI is just a specification and not a reference implementation, that users are selecting AWS, that even in open source projects such as Open Nebula users are demanding more AWS compatibility, that multiple open source projects (Eucalyptus, CloudStack) are focused on AWS compatibility, that the market is too early for defining standards and that de facto standards are already emerging through competition would all be neatly sidestepped. This wouldn't be for the benefit of end users though.

It's always better to let a marketplace mature and form, giving it the chance to create its own standards (the de facto), before rubber stamping standards upon it. This is also why I have concerns over the whole EC examination into cloud standards. I hope it comes back with "the marketplace is still deciding" rather than a list of standards and unnecessary interference.

--- update 16th July

I was asked a question: could we follow a different road and have a market of AWS clones with multiple open source systems providing this?

Well, in a market of AWS clones you need an open source reference model as the 'standard' for reasons of semantic interoperability. Now, you could have multiple open source implementations of that reference model, and this would be beneficial for reasons of competition on operational efficiency and reduction of systemic failures.

So, surely that means we could have CloudStack, Eucalyptus, Open Nebula and some of the OpenStack party create a rich set of AWS compatible environments?

Well, of course, but the problem becomes: how do you define one thing amongst these as the 'reference' model, and which do you pick?

The only way around this that I know of is for those groups to come together, create a massive set of test scripts and provide some form of AWS compatibility service. It's tricky, but not impossible; by defining that compatibility service as the 'reference' model, each could show compatibility to it (reinforced with trademark images etc.).

It's possible; I've hinted enough times in the past that people could try that route, but there are no takers so far.

Monday, July 01, 2013

Where, Why, How, What and When

The point of mapping a system or line of business is more than just promoting a focus on needs, providing a mechanism for communication and collaboration, applying the right methods, identifying areas of differentiation and efficiency, and finding common components.

Its most important function is as a tool of strategy. Once you have mapped the environment, you can start to ask yourself questions about where you should change the landscape. Do I want to industrialise this component, or protect this space? Mapping simply provides you with a view of the chessboard; you now have to do the hard work, starting by identifying where you can move (see figure 1).

Figure 1 - Where with Maps


Once you have determined the possible wheres, you can determine the why of strategy, as why is simply a relative statement of why here over there. That choice of why should be determined by numerous factors: suitability, the ability of competitors to play the game, the opportunity to create ecosystems and take advantage of componentisation and volume effects, inertia, resource constraints, your own desires (see Nilan P's point below in the comments) etc.

Once you have your why, you can determine the how - the use of an open approach to manipulate the environment by encouraging commoditisation, the introduction of a tower and moat play, the creation of legislative barriers etc. There's a long list of games and dark arts that can be played. Once you've determined this, you can determine the what and when of action.

Mapping is far from a perfect representation of the environment; in fact, perfect representations are never possible. It is, however, generally useful, in the same way that looking at a chess board is generally useful when playing the game. Mapping won't make you a great chess player, but it should give you an edge over those who don't look at the board.

As for how many don't look at the board in the game of chess they're playing ... it's surprisingly common. To find out if you're in this camp, go grab your IT or business strategy, whichever is closer. Rip out of the document any references to purchasing choices, implementation details, operational details and tactical plays, and leave only that which refers to the "why" of whatever it is you're doing.

If that "why" is vague, often hand waving with generic platitudes such as "innovation" and "opening new markets" and you have a sneaky feeling that the real why often appears to be because everyone else is doing it (e.g." 67% of successful companies are doing cloud and so should we") then you can safely bet that no-one is looking at the chess board. 

However, don't fret if your company is running blind. The chances are your competitors are as well, and really that's all that matters: relative positioning ... until, of course, someone else enters your market.