Monday, August 31, 2015

A lament to the Enterprise of yesteryear



An ode to a small lump of enterprise I found in my portfolio one midsummer morning, a lament to the Enterprise of yesteryear. Best repeated with the true poetic wit of a Vogon constructor fleet.

We're being hit by disruptive innovation! 
Our industry is being commoditised! 
Our business is complex!
We're behind the curve!
We're going to innovate!

We've created a strategy!
Hired the best!
They said they were experts!
Marketing is key!
And the future is private! 
Or Enterprise!
Or Hybrid!

But we need to re-organise!
We have the solution!
Our strategy says so!
If we just fix our culture!
Then this time, it'll be different!
Or I will rend thee in the gobberwarts with my blurglecruncheon, see if I don't!

Anyone looking for a short cut out of this, I'm afraid I can't help you. However, I would suggest mapping your landscape, going on a get fit regime to clean up the enterprise, and then applying some thought. Adapting to the changing technological-economic environment is not a choice.

Thursday, August 27, 2015

Amazon and the last man standing

I often talk about the 61 repeatable forms of gameplay in the market and I know I'm a bit behind on doing those posts. I don't normally stray off the path but I thought I'd cover a well known game called last man standing. The reason I want to talk about this is that there seems to be continued misunderstanding about Amazon and what's likely to happen. Now there are two possible reasons - either I'm wrong or lots of other people are.

Hence, I'll set out my stall.

Amazon is likely to be supply constrained when it comes to AWS and EC2. What I mean by this is that it takes time, money and resources to build data centres. You can't magic them out of the air. With AWS already doubling in physical size (or close to) each year, this creates considerable pressure, and if AWS were to drop the price too quickly then demand would grow to outstrip supply (i.e. it just wouldn't be able to build data centres fast enough). Hence Amazon would have to control pricing in order to control demand.
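To make the supply constraint concrete, here's a minimal sketch of the trade-off, assuming capacity can at most double each year and demand follows a constant price elasticity. The elasticity values are invented for illustration; this is a toy model, not Amazon's economics.

```python
def max_price_cut(capacity_growth=2.0, elasticity=1.5):
    # With constant-elasticity demand D ∝ p^-e, cutting price by `cut`
    # multiplies demand by (1 - cut)^-e. Solving (1 - cut)^-e = capacity_growth
    # gives the largest annual cut that supply can absorb.
    return 1 - capacity_growth ** (-1 / elasticity)

for e in (1.0, 1.5, 2.0):
    print(f"elasticity {e}: max annual price cut ≈ {max_price_cut(2.0, e):.0%}")
```

The more price sensitive the demand, the smaller the cut that can be made whilst still building data centres fast enough.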

I know that people talk about AWS being a low margin business but I'll stick with older figures and say that Amazon is probably making a gross (not net) margin of 80%+. Let us look at revenue and, for this, I'll turn to an old model from my Canonical days (see figure 1), after which we will cover a couple of key points in time that are coming up in that model.

Figure 1 - Estimate of Forward Revenue Run Rate.


By my past reckoning, AWS would have a forward run rate of around $8Bn by the end of 2014, which means that in 2015 it would make around $8Bn or more in revenue. Currently people are estimating around $5-6Bn, so I count that as pretty damn good for getting into the right order of magnitude. However, this is not about how accurate or inaccurate I might have been. This is about the steps and what roughly will happen (I've sketched the basic arithmetic in code after the list below).

1) In 2015, I expected AWS to clock a revenue of $8Bn+, a gross margin of 80%+, for Amazon still to be supply constrained and for a few examples of some large companies reliant on cloud (i.e. what we now call data centre zero companies).

2) In 2016, I expected AWS to clock a revenue of $16Bn+, a gross margin near to 80%, for Amazon still to be supply constrained, a very visible movement of companies towards using AWS and the market around AWS skills to heat up. I expected that by the end of the year the wheels would start coming off the whole private cloud market (which is why I've warned about this being the crunch time).

3) In 2017, I expected AWS to clock a revenue of $30Bn+, a gross margin near to 80% and Amazon still to have to control pricing. However, by the end of the year I expected this supply tension to reduce as the growth rate showed signs of levelling. This would provide more opportunity to reduce pricing whilst keeping physical growth to doubling. I expected AWS skills to be reaching fever pitch and the wheels to be flying off the private cloud market.

4) In 2018, I expected AWS to clock a revenue of $50Bn+. I expected gross margin (and prices) to start coming down fairly rapidly as Amazon gains significantly more price freedom (i.e. becomes far less supply constrained than is currently the case). Data centre zero companies will become prevalent and there will still be a fever pitch around AWS skills.

5) In 2019, I expected AWS prices to be rapidly dropping, the growth rates to continue levelling, the fall-out to start biting into hardware competitors, the private cloud industry to have practically vanished and the remaining laggards to be making a desperate dash into cloud.

6) By 2020, the game is not only all over (last chance saloon was back in 2012) but we start chalking up the casualties. 
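As promised, a toy reconstruction of the doubling arithmetic behind the model. The decay schedule for the growth rate is my own invention, fitted by eye to the figures in the list above; it is not the original model.

```python
run_rate, growth = 8.0, 2.0  # $Bn forward run rate at end of 2014; ~doubling
for year in range(2015, 2020):
    revenue = run_rate            # revenue in `year` ≈ last year's run rate
    run_rate *= growth            # next year's forward run rate
    growth -= 0.15                # growth levels off as supply pressure eases
    print(f"{year}: revenue ≈ ${revenue:.0f}Bn+")
# prints: 2015 $8Bn+, 2016 $16Bn+, 2017 $30Bn+, 2018 $50Bn+, 2019 $78Bn+
```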

This doesn't mean there won't be niches - there will be, and it's in these spaces that some open source efforts will hopefully hide out for future battles. This doesn't mean that some geographic regions won't try and hold out for spurious reasons - they will, and at the same time harm their own competitive industries. This doesn't even mean I think my own figures or timing will be right; remember, this model is ages old. I'm no fortune teller and at best I view it as being in the right direction. However, until someone gives me a better direction, this is the one I've stuck with and so far it seems to be fairly close.

Oh, and the last man standing? Well, in the last few years of the model when the price is dropping, it is all about last man standing. Many competitors won't be in a position to cope with how low the prices will go. The economies of scale will really start to tell here. Many will fall and it won't be gentle and graceful. It'll be more brick-like, as in a brick fired from a howitzer pointing downwards from the top of a building.

P.S. Before someone tells me the big hardware vendors are going to make a difference in infrastructure ... please don't. It's over. It has been over for some time. Even if I had $50Bn, I'd need to build the system, build the team and build the data centres before I launched, and at any reasonable scale (even using acquisition as a short cut) I'd be talking two years+ at lightning fast speed. I'd be walking into this market as a well funded startup against a massive behemoth who owned the ecosystem. Even those ex-hardware vendors with existing cloud efforts have too little, too late. No amount of money is going to save them here. These companies are just going through the motions of hanging on for as long as they can. There's a platform play but that's a different post.

P.P.S. There will be some cloud players left - AWS will dominate, followed by MSFT, then Google and a player like Alibaba. There'll be some jostling for position and geographic advantages.

Wednesday, August 26, 2015

The Open Source Cloud, start playing the long game.

Back in 2007, I gave a keynote at OSCON on the future importance of open source and open standards to create competitive utility computing markets (i.e. the cloud). We had a chance for an early land grab to make that happen in what is called Infrastructure as a Service but we lost that battle to AWS (and to a lesser extent MSFT and Google). There are numerous reasons why, far too many to go through in this post.

Just because the battle was lost, doesn't mean the war was. Yes, because of the punctuated equilibrium, we're likely to see a crunch in the 'private' cloud space and near dominance of the entire space by AWS, with MSFT following. Yes, Amazon plays a very good ecosystem game and they are a tough competitor. However, in about 10-15 years, in what will feel like the wilderness, we will get another opportunity, in much the same way that Linux clawed its way back against the near total domination of Microsoft. There are numerous reasons for this, again too many to go through in this post, and of course there could be many twists and turns (e.g. the somewhat unlikely open sourcing of AWS technology).

For the time being, the open source cloud world (and yes, by that I mean systems like OpenStack) needs to hunker down, to firmly entrench itself in niches (e.g. network equipment), to build up, mature and prepare for the long fight, and I do mean a LONG fight. A couple of encouraging signs were @jbryce's comment at OpenStack SV 2015 on "having a reason" to build a cloud and not just because it's cool technology, along with the discussion on maturity vs adoption of technology. This was good. But afterwards some of the conversations seemed to slip into "the path to Cloud Native", "democratising IaaS", "a platform for containers" (an attempt to re-invent again but around Kubernetes), "the problem is you" (as in IT depts not adopting it), "open source is a competitive advantage" (depends upon the context) and on and on.

You need to remember that for companies who might use these services, the focus should (and increasingly will) be on meeting some need with speed (i.e. quickness of delivery), agility (applying more or less resources to the problem as needed) and efficiency (being cost competitive with others). Yes, things like mobility matter from the point of buyer / supplier relationships and in some niches there are location constraints. However, no business under competition is going to last if it sacrifices speed, agility and efficiency in order to gain mobility. To survive, any open approach needs to solve these problems and deal with any issue created by Amazon's huge ecosystem advantage. There is lots of good stuff out there, such as Docker and in particular Kubernetes, but the strongest plays today in the open source world are around the platform with Cloud Foundry and in the operating system, where Ubuntu dominates with some competition from the challenger CoreOS.

The battle for IaaS may be lost but the war is far from over and yes, I hear that this or that paradigm shift will change the game again - oh, please don't bother. The open source world will get another chance at the infrastructure game as long as it focuses on the long term. Probably the best route of attack in the long term starts with Kubernetes but that's another post.

P.S. People ask why I think CloudStack has a shot. Quite simply, the Apache Software Foundation (ASF) can play the long term game. I'm not convinced that, after the crunch, OpenStack will be in such a position. We shall see.

P.P.S. People ask why I'm so against OpenStack. This might surprise you but I'm not. However, OpenStack needs to hunker down against the storm and play the long term game. I'm not convinced by its earlier examples of gameplay that it either understands this or is willing to do anything about it.

On Diffusion and Evolution

I recently saw this tweet and unfortunately, despite good intentions, there's a lot wrong with it. I've taken the main image (copyright unknown) as figure 1 and I'll go through what is wrong.

Figure 1 - Evolution mixed with diffusion



The fundamental problem with this image is that it conflates diffusion with evolution. Whenever we examine an activity, practice or form of data then yes, it tends to diffuse. But it also evolves, through the diffusion of ever more mature, more complete and more certain forms of the act. Hence in the evolution of an act there may be hundreds if not thousands of diffusion curves involved. The problem with trying to tie diffusion curves to evolution is in trying to determine which diffusion curve you're on. So let's go through this.

Diffusion curves were described by Everett Rogers (and made famous by Geoffrey Moore in Crossing the Chasm); they show adoption over time. They're normally presented in cumulative form (an S-curve), as shown in figure 2, rather than as in figure 1 above.

Figure 2 - Diffusion Curve


Now, as Rogers noted, there is no single diffusion curve - they operate over different timescales and adoption is relative to the applicable market. Examples of diffusion curves for a wide range of technology related activities can be seen in figure 3.

Figure 3 - Diffusion of technology


From the above you can clearly see how these S-curves have different time spans, and the problem is often determining the origin of the technology (not an easy task in itself).
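To illustrate the timescale point, here's a minimal sketch with invented parameters: each curve is commonly modelled as a logistic function of time, and two technologies can differ wildly in how long they take whilst both heading towards 100% of their applicable markets.

```python
import math

def adoption(t, t_mid, speed):
    """Cumulative logistic adoption: an S-curve rising towards 100% of the
    applicable market, with its own midpoint and timescale."""
    return 1 / (1 + math.exp(-(t - t_mid) / speed))

# Two hypothetical technologies: one diffusing over ~a decade, one over ~50 years.
for t in range(0, 60, 10):
    print(f"year {t:2d}: fast {adoption(t, 5, 2):6.1%}   slow {adoption(t, 25, 8):6.1%}")
```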

Now technology doesn't just appear and diffuse, it also evolves. Electricity sources, for example, started with the Parthian Battery (as best as we can tell, around 400AD) but had evolved to utility provision of electricity by 1886 (e.g. Tesla, Westinghouse and Edison). But is this a result of diffusion?

A simple test for this is to first ask yourself - "Are smartphones a commodity?" - now whilst some might say yes, others would argue not, which means there is variability. However, in a market like the US there are more smartphones than people, so you can say they are widely diffused. Now ask yourself - "Are gold bars a commodity?" - to which almost everyone would reply yes. However, it is certainly not the case that everyone in the US owns gold bars; they are far less diffused. You cannot therefore argue that some level of diffusion (i.e. adoption in a market) relates to some level of evolution. In some cases, a thing may become a commodity when 5% of the population have it, whilst in other cases a thing is still evolving when 95% of the population have it.

Evolution instead often involves multiple improving examples of the act, all of which diffuse through some form of applicable market. But the relation to overall diffusion is anything but simple. For example, let us take an activity A[x]. Let us suppose it evolves through multiple diffusing instances of the act (e.g. if A[x] was telephony then A[1], A[2], A[3] and so forth would represent ever better phones). I've added these diffusion curves into figure 4 below.

Figure 4 - Diffusion of an activity A[x]


Now each of these diffusion curves can cover different time periods and different applicable markets, but each one goes to 100% adoption of its applicable market. Each will have a chasm, i.e. in the evolution of A[x] there will be many chasms to cross and not just one. So when examining the question of early adopters to laggards, we have to ask: which diffusion curve are we on? The laggards of A[1] are not the same as the laggards of A[5].

The natural tendency is to respond with "well, we will measure the overall one, i.e. when it becomes ubiquitous", but this leads to the issue highlighted above: gold is a commodity (i.e. well defined, understood, standardised) yet diffusion amongst the population is low. The problem is that ubiquity is relative to a market, so you can't just say "measure its ubiquity" because you need to understand the applicable market first. Hence in some cases a ubiquitous market is 5% of the population owning an example of this thing (i.e. that's all it will ever get to) but in other cases a ubiquitous market is everyone in the market owning fifty examples of this thing.

So how do you determine the appropriate market and how ubiquitous something is? Actually, this was the trick I discovered and refined between 2005 and 2007. As things evolve, they become more defined and certain, and the type of publications associated with the act changes. It's through understanding the uncertainty related to an act that you can determine the ubiquitous market and how evolved something is. But how?

To begin with, there are four basic types of publications, shown in figure 5.

Figure 5 - Publication Types.


So when something appears, e.g. radio, then we first write about the wonder of radio, then how to build and construct a radio crystal set, then we move on to differences between radios, until finally the literature is dominated by guides for use. I used just over 9,000 articles to determine these four types and used this to develop a certainty axis (shown in the figure above and developed from type II and type III publications); a bit more detail on this is provided here and here.

Now, the transition from Type III to Type IV in the graph above is critical because it defines the point of stability (i.e. something stops changing in characteristics) and this can then be used to identify the point of ubiquity in an overall market (see figures 6 & 7 below). In other words, you find when something is stable and then determine the size of the actual market. You then use this to trace back through history.
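As a toy illustration of that stability test: find where type IV publications ("guides for use") overtake type III ("differences between systems"). The counts below are invented; the original work used just over 9,000 real articles.

```python
years    = list(range(1, 11))
type_iii = [5, 12, 20, 25, 24, 18, 12, 8, 5, 3]    # comparison/operation pieces
type_iv  = [0,  1,  3,  6, 10, 19, 22, 26, 28, 30]  # usage/best-practice guides

# Point of stability: the first year type IV output exceeds type III.
point_of_stability = next(
    year for year, iii, iv in zip(years, type_iii, type_iv) if iv > iii
)
print(f"point of stability ≈ year {point_of_stability}")  # year 6 in this toy data
```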

Figure 6 - Point of Stability


Figure 7 - Point of Ubiquity


If I take this as the point of ubiquity and plot back through history over both ubiquity and certainty for a wide diversity of activities, then you get the following (figure 8).

Figure 8 - Ubiquity vs Certainty



The figure above represents a large range of different activities from telephones to TV to fridges etc. It's important to note that there are multiple activities in the above but one pattern emerges. This is because it's not a diffusion curve (which is dependent upon an applicable market and varies in timescale) but instead a pattern for evolution. It's unfortunate that the pattern happens to be s-curve shaped because I'm sure that adds to the confusion. However, the above is not adoption vs time but ubiquity vs certainty.

Now, if you overlay the different publication types (i.e. type I, II, III and IV) as shown in figure 9, and extend the curve in both directions (into the future, continuing after something becomes ubiquitous & stable, and into the past, since the actual invention occurs before someone writes about it), then you create the evolution curve (see figure 10). This is what I published almost a decade ago.

Figure 9 - Adding publication types



Figure 10 - Evolution curve.


What drives this evolution is competition (supply and demand) and that's marked on as well. The point of the evolution curve is that it shows you the path of how things evolve - hence its name. Ubiquity is relative to the applicable market and certainty is a measure of how well understood, defined and stable something is. The evolution curve itself is the x-axis for Wardley mapping (a mechanism for visualising a competitive environment). Again, this has been used for a decade across many companies and even Governments but that's another post.

We can now go back to our diffusion curves in figure 4 and plot them on the evolution curve. I've illustrated this in figure 11 (nb. this particular graph is just an illustration, not based upon data). What you'll find is that a thing can become diffused in its market (100% adoption) but still have a long way to go before it's evolved, e.g. 100% adoption of A[4] is on the border of product / commodity.

Figure 11 - Diffusion on Evolution


So when we look at A[1] from a diffusion point of view, we might have crossed the chasm and the laggards may be joining, but it's very much in the early stages of evolution. We know from the publication types that despite the act reaching close to 100% adoption of its market, the market is nowhere near evolved. But at A[5] the act is very evolved and we already know from the publication types that we've reached the point of ubiquity in the market. It might not be the case that everyone has this item, but this is what the ubiquitous market for this item looks like and it is now a commodity.

Now with evolution I can add all sorts of changing characteristics, i.e. genesis is very different from commodity (see figure 12). So for example, I know those activities or components in the genesis phase are uncertain, rare, risky, a point of differentiation, poorly understood, chaotic, deviating from the past, a source of worth, rapidly changing etc. This is a subject I talked on at various conferences (and wrote articles about, including one in the Butler Group Review, 2008) during 2007-2009 and it's based upon the two extremes identified by Salaman & Storey in 2002 in their Innovation Paradox.

Figure 12 is my "cheat" sheet. I call it that because by looking at the characteristics I can roughly determine how evolved something is. So when someone says to me "Gold Bars" or "Smart Phones" then I can look at the cheat sheet and roughly determine how evolved it is.

Figure 12 - Changing Characteristics


So, let's go back to the original image at the beginning of the post. You can't just mix diffusion and evolution together in that manner. Everyone might own something (i.e. a smartphone) and it can still be constantly changing, or barely anyone might own something and it can be stable and more commodity like. You certainly can't add concepts of time into this, because you have no idea which diffusion curve you're on and diffusion curves have different timescales. I understand what the author of the tweet was trying to convey but alas, as simple and as seductive as it sounds, it's just plain wrong.

For those wanting to use the diagrams above, they all date from 2007 onwards and are creative commons licensed. The original work (i.e. data collection and testing) was done in 2006 & 2007 and the concepts actually date back much earlier in case you're interested (e.g. I was using the "pattern" of evolution back in '04/'05 though at that time it was just a noticed pattern rather than something with substance).

--- 4th Feb 2016

Added graphs 6,7 and 9 to make it clearer and tidied up a few typos. Added a few lines to make it clear what the evolution curve is, a description of the path of how things evolve.

Also, it's worth noting that evolution is only a model and, like all models, it will be wrong and will be superseded by something better. However, that said, it's not an excuse for just going around taking diffusion curves and randomly adding characteristics to them. I'm sure the author means well but alas I keep on coming up against endless "graphics" in management. It would be nice if people actually put some effort in, collected some data and critically questioned. It shouldn't have taken much to realise that some technology can be in the hands of everyone (i.e. widely adopted) and still not be stable, defined, complete and commodity like.

It's really important to understand this distinction otherwise people start making huge leaps. They start plotting diffusion curves in the general population and saying "well, it'll become a commodity at this point in time". You can't do that, as the act of shifting from product to commodity depends upon individual actors' actions (see Hayek and the pretence of knowledge) and a number of factors (concept, suitability, technology and attitude), which means you can't predict the change over time in this manner. However, what you can do is look at publication types and estimate when combined market forces are likely to make the change occur, i.e. it's a probability function.

For example, the probability function may indicate that a commodity version is increasingly likely to appear (all the factors are in place, the publication types indicate this) when 20% of people have one, or it may indicate this when everyone has several of whatever it is (say 150% of the population). Hence knowing that 10% of people have one today doesn't help you, as you don't know if the change will occur at 20% or 150%.
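A minimal sketch of why today's adoption level tells you so little, with all numbers invented: two logistic scenarios that both show 10% ownership today but saturate at 20% vs 150% of the population.

```python
from math import exp, log

def penetration(t, saturation, t_mid, speed=3.0):
    """Logistic ownership per head of population at time t."""
    return saturation / (1 + exp(-(t - t_mid) / speed))

# Pick each midpoint so the scenario shows exactly 10% ownership today (t = 0),
# i.e. solve saturation / (1 + exp(t_mid / speed)) = 0.10 for t_mid.
for sat in (0.20, 1.50):  # saturates at 20% vs 150% (several per person)
    t_mid = 3.0 * log(sat / 0.10 - 1)
    print(f"saturation {sat:.0%}: today {penetration(0, sat, t_mid):.0%}, "
          f"in 10 years {penetration(10, sat, t_mid):.0%}")
```

Both scenarios are indistinguishable today, yet one is already near its ceiling whilst the other has barely started.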

Monday, August 24, 2015

On the common fallacy of hypothesis driven business.

TL;DR Look before you leap.

There's a lot wrong with the world of software engineering but then again, there's always been a lot wrong with it. Much of this can be traced back to the one size fits all mentality that has pervaded our industry - be agile, be lean, be six sigma, outsource.

However, there is one universal one size fits all which actually works. It's called look before you leap, or in other words observe the environment before you decide to take any action. In the case of OODA loops there are even two whole steps - orientate and decide - before you get from observe to the action bit. Alas, in many companies action seems to be the default. Our strategy is delivery! Delivering what exactly? Who cares, deliver!

Your organisation, your line of business and even a discrete system all consist of many components. All of those components are evolving, through supply and demand competition, from the uncharted space of the uncertain, unknown, chaotic, emerging, changing and rare to become industrialised over time. The industrialised have polar opposite characteristics to the uncharted, something we've known about since Salaman & Storey's Innovation Paradox of 2002. If you want to accelerate the speed at which you operate and create new things then you have to break down complex systems into stable components and treat those components appropriately.

So, how do you manage this? Well, since most companies fail to observe the environment, they resort to the only thing possible - backward causality or meme copying: "Everyone else is doing this thing, so let's adopt DevOps, Agile, Lean, Digital, Cloud, APIs, Ecosystems, Open Source, Microservices" and on and on. Each of these approaches has certain benefits if used in the right context but in most cases the context is missing. Furthermore, in today's software world various claims are made of being more scientific, of being driven by hypothesis, but many of these ideas are misguided without context.

To understand why, we need to explore the game of chess. A key part of the game of chess is understanding the board i.e. where the pieces are (position) and where they can move to (movement). You don't actually have to physically see the board if you're good enough. You can create a mental model of the board and play the game in your mind. But the board is there, it's an essential element of the game. Though each game may be different, you can learn from each game and use these lessons in future games. This is because you can understand the context at hand (the position of pieces and where they can move) and can learn consequences from the actions you take. You can apply such lessons to future contexts. This is in fact how we learn how to play chess and why practice is so important.

Now, imagine you have no concept of the board but instead all you see is a range of computer characters on the screen (see figure 1). Yes, you can play the game by pressing the characters but you have no understanding of position or movement. Yes, over time you can grab the sequences of thousands of games and look for secrets of success in all those presses, e.g. press pawn, pawn, queen, queen tends to win. You will by nature tend to copy other successful players (who also have no context) and in a world dominated by such chess play, memes will prevail - expect books on the "power of the rook". Action (i.e. pressing the key) will dominate, there is little to observe other than the sequence of actions (previous presses) and all these players exist in a low level situational awareness environment.

Figure 1 - Chess with low levels of situational awareness.


If you ever come up against a player who can see the context (i.e. the board) then two things will happen. First, you will lose rapidly despite having access to thousands of games containing millions of data points from sequences of action. Secondly, you'll be bewildered. You'll start to grab for the spurious. Naturally, you'll try and copy their moves (you'll lose), you'll look for all sorts of weird and wonderful connections such as the speed at which they pressed the button (you'll lose), whether they had a good lunch or not (you'll lose) and whether they're a happy person or not (you'll lose). It's like the early days of astronomy where, without any understanding, we collected all sorts of data, such as whether it was a windy day. Alas, you will continue to be utterly outplayed because the opponent has much higher levels of situational awareness and hence understands the context better than you. To make matters worse, with every game your opponent will discover new patterns, new ways of defeating you, and they will get better with time. I've tried to show an example of low vs high situational awareness in figure 2.

Figure 2 - low vs high situational awareness.


The player who understands the board will be absorbed by first observing the environment, understanding it (i.e. orientate and decide) and then making a move (i.e. acting). Terms like position and movement will matter in their strategy. Their strategy (the why of action) will be based upon why here over there i.e. why this move over that move. 

Most businesses exist in the low level situational awareness environment described by figure 1. They have no context, they are rife with meme copying and magic sequences, and they are dominated by action. We already know that this has an impact, not only from individual examples but from examination of a range of companies. It turns out that high levels of situational awareness appear to be correlated with positive market cap changes over a seven year period (see figure 3).

Figure 3 - Situational Awareness and Market Cap changes.


So what has this got to do with hypothesis driven business? Hypothesis without context is often akin to saying "If we press the pawn button will it give us success?"

The answer to that question is that it might, in that context (which you're unaware of), but as the game changes with every move there is nothing of real value to learn. Without understanding context you cannot learn patterns of play to use from one game to another. To give an example of this, let us examine The Scenario as described in an earlier post. This scenario has all the information you require to create a map and to start learning from previous conflicts and repeating patterns. However, most companies have no idea how to map and hence have no mechanism of past learning through context.

It is certainly possible without context to create multiple hypotheses for the scenario, e.g. expand into Brazil or maybe attempt to differentiate with a new feature. These can be created and tested. Some may well show a short term benefit. However, if you take the same scenario and map it - as done in the Analysis post - then a very different picture appears. Past and repeatable patterns such as co-evolution, ILC & punctuated equilibria can be applied and it shows the company is in a pretty miserable state. Whilst a hypothesis around differentiating with a new feature might show some short term benefit and be claimed as successful, we already know it's doomed to fail. The hypothesis therefore appears to be right (short term) but before acting, from the context, we already know it's wrong and potentially fatal (long term). It's the equivalent of knowing that if you move the Queen you might capture a pawn (i.e. success from a hypothesis of pressing the queen button) but at the same time you expose the King to checkmate (from looking at the context, the board).

The mapping technique described is about the equivalent of a Babylonian clay tablet but it's still better than having no map, as it provides some idea of context covering position (relative to a user need) and movement (i.e. evolution). There will be better mapping techniques created over time but at this moment, this is the best we have. Many of us over the last decade have developed a sophisticated enough mental model of the environment, principles and repeatable patterns that we can just apply them to a scenario without mapping it first, in much the same way that if you get really good at playing chess, you don't even have to look at the board. However, most have no understanding of the board, of position, of movement, of context or of the numerous repeatable patterns (a subset of these, 61 patterns, is provided below in figure 4).

Figure 4 - An example list of repeatable patterns / gameplays


Without understanding context, most have no mechanism of anticipation or learning and cannot even use weak signals to refine this. In such cases, you can make an argument that hypothesis driven business is better than nothing at all but it's a very poor substitute for understanding the context. Even if your hypothesis appears to be right, it can be completely the wrong thing to do.

This is the fallacy of hypothesis driven business. Without a mechanism of understanding context then any hypothesis is unlikely to be repeatable as the context will likely change. Yes, you can try and claim it is more scientific (hey, we've pinched a word like hypothesis and we're using data!) but it's the equivalent of saying "If I have a good lunch every day for a month then the heavenly bodies will move!" ... I had a hypothesis, I've eaten well for a month, look those stars moved ... success! Oh dear, oh dear, oh dear. Fortunately, astronomers also built maps.

This doesn't mean there is no role for hypothesis, of course there is! For example it is extremely useful for exploring the uncharted spaces where you have to experiment or for the testing of repeatable patterns or even for refinements such as identifying user needs. But understand the context first, understand where you can attack and then develop your hypothesis. The context is your route to continued learning.

Observe (i.e. look at the context) then Orientate & Decide (i.e. apply thought to that context) then Act (i.e. do stuff in that context). 

Saturday, August 15, 2015

The Analysis

Ok, this post provides a quick analysis of the Scenario. As a guide, this sort of analysis should take about 30 minutes. To get the most out of this exercise, read the scenario post, write your plan and then read this analysis. In a final post, we will go through gameplay.

The Analysis

First, let's start by creating a basic map. Our users are data centre operators, they have a need for a mechanism of improving data centre efficiency in electricity consumption, and we have our software product, which is based upon best practice use of an expensive sensor that we purchase and a custom set of data. This is shown in figure 1.

Figure 1 - Initial Map



In this exercise, I'm going to slowly build up the map. Normally, I would just dive into the end state and start the discussion but that would be like one of those "it's therefore obvious that" exercises in maths which often confound others.
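For those who like something more concrete than a diagram, here's a minimal sketch of the initial map as a data structure. Visibility runs from 1 (the user) down the value chain; evolution runs from 0 (genesis) to 1 (commodity). The coordinates are my rough reading of figure 1, not precise values.

```python
components = {
    "data centre operator": {"visibility": 1.00, "evolution": None},  # the user
    "efficiency need":      {"visibility": 0.90, "evolution": 0.55},
    "analytics software":   {"visibility": 0.70, "evolution": 0.55},  # our product
    "expensive sensor":     {"visibility": 0.45, "evolution": 0.60},  # purchased product
    "conversion data":      {"visibility": 0.25, "evolution": 0.40},  # custom, in-house
}
value_chain = [
    ("data centre operator", "efficiency need"),
    ("efficiency need", "analytics software"),
    ("analytics software", "expensive sensor"),
    ("expensive sensor", "conversion data"),
]
for src, dst in value_chain:
    print(f"{src} -> {dst}")
```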

First of all, I'm going to add some bits I know e.g. we anticipate an opportunity to sell into Brazil (I'll mark as a red dotted line) and we have a US software house in our market selling a more commodity version as a utility service (I'll mark as a solid red line as it's something that is definitely happening). From the discussion with the head of sales (who was rather dismissive of the US effort) and the strategy, I already know we're going to have inertia to any change, so I may as well add that in (a black bar).

Figure 2 - Brazil and US.


However, we also know that the US version provides a public API and has a development community building on top of it. The US company is also harvesting this, probably through an ILC like model. The consequence of this is that the US company will start to exhibit higher rates of apparent innovation, customer focus and efficiency in proportion to the size of their ecosystem. Those companies building on top of their API act as a constant source of differential for them. I've added that in the figure below.

Figure 3 - ILC play.


Given the US company's growth last year, and that a shift from product to utility is often associated with a punctuated equilibrium, I can now take the figures and put together a P&L based upon some reasonable assumptions. Of course, we're missing a lot of data here, in particular the development cost of the software etc. However, we'll lump that into SG&A.

Figure 4 - P&L and Market.


Ok, so what I now know is that we seem to be a high gross margin company (i.e. a juicy target) and a good chunk of our revenue is repeating software licenses. If this is a punctuated equilibrium (which seems likely) then I expect to see a crunch time in 2020 between us and the US company, as we will both have around 50% MaSh. Unfortunately, when that happens, they're likely to have higher rates of efficiency, apparent innovation and customer focus due to their ecosystem play. Furthermore, I'm going to have inertia to any change, probably due to existing practices, business and salespeople compensation.
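The crunch-time arithmetic is rough but easy to sketch, assuming a punctuated equilibrium means the competitor's ~3% share keeps doubling (as it did last year) whilst ours holds steady at ~40%; the 60% cap is arbitrary.

```python
competitor = 0.03  # ~3% of the European market in 2015, doubling yearly
for year in range(2015, 2021):
    print(f"{year}: competitor ~{competitor:.0%} vs our steady ~40%")
    competitor = min(competitor * 2, 0.60)
```

On those assumptions the crossover lands around 2019-2020, which is where the crunch comes from.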

If I do make a utility play then I'm going to need to gain the capability to do this, raise the capital needed to build a utility and launch fast. Let us suppose this takes two years. Then I'll be entering a market where my competitor has 8 years of experience, a large & growing ecosystem and 100% MaSh of the utility business (worth £30M to £60M). I'll have no ecosystem, no MaSh and a salesforce probably fighting me and pointing out how our existing business is worth £144M to £173M. In the worst case, if I haven't explained the play properly, they could even be spreading FUD about my own utility service and trying to get customers to stick with the product.

Even my own board could well push against this move and the talk will be of cannibalisation or past success.  Alas, I know our existing business is a dead man walking. Post 2020 things are going to be grim and by that I mean grim for us. Despite the competitor only being 3% of the market, I've already left it late to play this game. I've got some explaining to do to get people on board.

Unfortunately, there is more bad news. Let us look at the other changes in the market, such as the shift in sensors.

Figure 5 - Change of sensors.


Now, we've already seen signs of inertia in our organisation to using these sensors. As the product manager says, they're not as good as the old one. However, we also know that as an act becomes a commodity, practices co-evolve and new methods of working emerge. Hence the future systems probably won't have one sensor in the data centre but dozens of cheap ones scattered around. Unfortunately, our software encodes best practice around the expensive product based sensor and if the practice evolves then our software is basically legacy. I've added this to the diagram below; however, rather than using a solid red line (something we know is happening), in this case I've added a dotted line (something we anticipate or an opportunity).

Figure 6 - Co-evolution


So, our business is being driven to a utility and we don't have much time. Even if we get started now, by the time we launch we'll be up against an established player with a growing ecosystem. Our own people will fight this change but, even worse, our entire system will become legacy as commodity sensors lead to co-evolved practice and new software systems designed around this. So along with my head of sales and marketing fighting me, I'm pretty sure I can add the product manager and a good chunk of an engineering team that has built skills around the old best practice.

Now, if you're used to mapping then you'll have spotted both the punctuated equilibrium and the danger of co-evolution. As a rule of thumb, these forms of co-evolution can take 10-15 years to really bite (unless some company is deliberately accelerating the process). Hence, even if we somehow survive our current fight over the next five years, we're going to be walking smack bang into another one five years later.

Of course, at this point I need to start to consider the other players on the board, e.g. the US competitor. They're already providing a utility play, so we can assume that they have some engineering talent in this space. This sort of capability means they're likely to be pre-disposed to building and using more commodity components. The chances are, they're already thinking about the commodity sensors and building a system to exploit this. That could be a real headache. I could spend a couple of years getting ready to launch a cloud based service based upon the expensive product sensors and suddenly find that I'm not only behind the game but the competitor has pulled the rug from under me by launching a service based upon commodity sensors. I'll be in no man's land.

The other thing I need to look at is that conversion data issue. I know it has evolved to a product but it could easily be pushed to more of a commodity or provided through some API, and someone could play some form of open data ecosystem game on me. I've shown this in the following diagram.

Figure 7 - Data Ecosystem


I've now got a reasonable picture of the landscape and something I can discuss with others. Before I do, let us check the proposed "Growth and sustainability in the data centre business" strategy.

First up was expansion into Brazil. This will require investment and marketing but unfortunately it does nothing about the issues in our existing market. At worst, we could spend a lot of cash laying the groundwork for the US company to chew up Brazil after they've finished chewing us up. Still, we need to consider expanding, but if we do so in our current form then we're likely to lose.

Second was building a digital service, including cloud based provision of our software system that enables aggregated reporting and continues the licensing model. Ok, one of the killer components of the US system is the API and the ecosystem it has built around this. We could easily invest a significant sum and a few years building a cloud based service, enter the market and be outstripped by the incumbent (the US company) because of their ecosystem, and even worse, find our entire model is now legacy (because of co-evolved practice). I know it's got the words "digital" and "cloud" in the strategy but as it currently stands, this seems to be a surefire way to lose.

Thirdly, the strategy called for investment in sales and advertising. Well, we've plenty of cash but promoting a product model which as it stands is heading for the cliff and may become entirely irrelevant seems a great way of losing cash.

Lastly, we're to look into the use of the data conversion product. Ok, this one doesn't seem so bad but maybe we should drive that market to more of a commodity? Provide our own Data API? 

On top of all this, we have lots of inertia to deal with. Now that we understand the landscape a bit better, we can craft a strategy which might actually work. Of course, I'll cover that in another post. However, in the meantime I'd like you to go and look at the scenario, look at your original plan and work out how you might modify it.

Happy Hunting.

A scenario

A scenario for you to run through. Have a think, write down your answers and later on I'll add a post as to things you should have considered.

The Scenario

You're the CEO of a UK based company serving the European market. Your company produces a software system that monitors a data centre's consumption of power in order to determine whether power is being used effectively. The system involves a proprietary software system which runs analytics across data from a sensor that is attached to the data centre. The sensor is a highly expensive piece of kit which monitors the electricity input into the building along with the building's temperature & airflows. The analytics software is based upon best practice for use of this sensor. The sensor itself consumes conversion data that your company creates.

You're profitable, with a revenue in excess of £100M p.a., a net margin of 15% and an annual growth rate of 20%. You have a healthy cash flow and reserves of around £25M. The process of setting up a new client involves installing a sensor, setting up the equipment and a two year license fee for the software. Around 40% of your revenue comes from recurring license fees and 85% of a client's initial 1st year costs relate to the purchase of the sensor.

Whilst you have some competitors in Europe, most of these are custom built solutions; you're the only one with a software product. There's a more developed market in the US and even a software as a service offering which uses the same sensors, but the software is sold on a utility basis rather than a license fee. The US solution also provides cross company reporting, industry analytics and a public API, something which your system does not. However, as your head of marketing points out, the US competitor (a much larger company) has been operating in Europe for almost 7 years and represents less than 3% of the market, though their CEO claims they are growing rapidly and doubled in size last year. There are a number of other companies' products built on your competitor's APIs and a fairly active development community around this. However, your head of sales chimes in that we rarely come across them in competitive tenders and in any case there have been some blog posts about your competitor 'eating up' the business model of some of those products by adding similar capability into their own system. The head of sales points to data showing that in the European market, your company has around 40% MaSh (market share), which is holding steady, and that the current market represents 70% of the total applicable market. Both the head of sales and the head of marketing agree we should focus on increasing our MaSh by focusing on sales and advertising.

Your head of operations points out that there is a range of new, more commodity like sensors that have been launched in China by an extremely large and well respected manufacturer. They're far simpler, vastly cheaper (about 1/100th of the price) and highly standardised. However, they are also basic and lack the sensitivity of the sensor we use. The product manager points out that we have attempted replacing the expensive sensor with one of these cheaper versions but the performance and analysis were severely degraded. The product, operations and sales managers all agree that these cheaper sensors aren't up to the job. In the conversation, however, the product manager points to another opportunity. One of the significant costs in the system is the conversion data, which requires extensive testing and modelling of various bits of kit within the data centre. Whilst this is done in-house, there is now a product available on the market which offers this conversion data. It's not as good as our data at the moment but the product is vastly cheaper than our in-house operations and we could therefore reduce costs here. Your head of marketing supports the idea as there is some recent evidence that, despite the benefit (in terms of energy savings through efficiency) that the system allows, there is some concern over the high cost of the software in the market. The product manager believes we should investigate, though this was met with some resistance from both the head of operations and the head of IT. You do not feel you have a deep enough technical understanding to answer this.

On the revenue side, the head of sales points out that there is a growing market of data centres in Brazil for which no-one currently provides a solution. They consider this to be a highly attractive future market and would like to investigate. Your head of strategy also agrees.

The new strategy, which is focused on a vision of "Growth and sustainability in the data centre business", has highlighted a number of possibilities. First is expansion into overseas markets such as Brazil. Second is provision of a more digital service, including a cloud based service for provision of the software (enabling aggregated reporting) but provided on a license basis in order not to create conflict with the existing model and also to counter any threat from the US system. Thirdly, we should undertake a significant marketing campaign to promote our solution in the existing market. Lastly, the report focuses on efficiencies in operation, including investigating the use of the data conversion product that is available.

What do you do?

Add your 'answers' in the comments below.

Once you're done, then you can move onto The Analysis

Thursday, August 13, 2015

On Platforms and Ecosystems

This stuff is a decade old for me and I can barely drag myself to repeat it. But I've read lots of daft stuff recently on platforms and ecosystems, normally out of the mouths of half-witted strategy consultants, so I will.

The reason why you build a platform is to enable an ecosystem. A platform is simply those components (ideally expressed through APIs) that your ecosystem exploits.

The reason why you build an ecosystem is for componentisation effects and to exploit others through data mining on consumption. 

If you create a platform of commodity components (ideally as utility services) then you not only enable the ecosystem to build quickly (increased agility) but also reduce the cost of failure. By mining what they build (by looking at consumption of your components) you can identify patterns useful for others. Hence, you can leverage your ecosystem to spot useful patterns which you then commoditise into new components in the platform, helping to grow the ecosystem. This is a model known as Innovate - Leverage - Commoditise and it's so old, it's dull. You can call it network effects if you must.

Effective exploitation of an ecosystem depends upon you actively detecting those new patterns, the speed at which you can detect new patterns and the size of the ecosystem. 

If effectively exploited, then your apparent rate of innovation, customer focus and economies of scale all increase with the size of the ecosystem.
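A toy sketch of the "leverage" step, with invented app names and components: mine consumption of platform components across ecosystem apps and surface the combinations that many independent apps keep rebuilding, as these are candidates to commoditise into new platform components.

```python
from collections import Counter
from itertools import combinations

# Which platform components each ecosystem app consumes (invented data).
consumption = {
    "app_a": {"auth", "storage", "image_resize"},
    "app_b": {"auth", "queue", "image_resize"},
    "app_c": {"auth", "storage", "image_resize"},
    "app_d": {"storage", "queue"},
}

pattern_counts = Counter()
for components in consumption.values():
    for pair in combinations(sorted(components), 2):
        pattern_counts[pair] += 1

# The most commonly co-consumed pairs hint at higher-order components to build.
for pattern, n in pattern_counts.most_common(3):
    print(f"{pattern}: consumed together by {n} apps")
```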

A few basic pointers.

1) If you don't focus on user needs and reducing friction (i.e. making it easy to use) then you lose. No-one will turn up or those that do will quickly leave for something else or build their own out of desperation.

2) If you limit your platform to internal (i.e. company only) then your ecosystem will be smaller than a company which exposes their platform to the public. Their rate of apparent innovation, customer focus and efficiency will massively outstrip yours as their ecosystem becomes larger. You lose.

3) If you fail to data mine consumption then you won't be able to leverage the ecosystem to spot new patterns that are useful to the ecosystem. Your ecosystem and platform will stagnate compared to a competitor that does this. You lose.

4) If you do mine your ecosystem and aggressively harvest without giving the ecosystem reasons to stay ... everyone will run away. You lose.

5) If you build a platform based upon product concepts then the cost of data mining consumption becomes high and the speed low compared to a platform providing utility components through an API. If you're trying to build a platform of products against a competitor who is providing a utility then - you can guess. You lose.

6) If you build a platform with components that are not industrialised (i.e. commodity like) then the interfaces will continuously change and your ecosystem will not be able to rely on you. If you're up against someone who industrialises those components then ... you lose.

7) If you have little to no ecosystem and decide to take on a large ecosystem in the same space without co-opting then, assuming they are public, provide industrialised components through an API as a utility, focus on removing friction & meeting user needs and data mine effectively, then ... you lose. You never get a chance to catch up.

8) If you build new components on a platform and fail to implement a mechanism of evolving those components to industrialised services then you build up technical debt. Over time, you build new upon new and this becomes spaghetti junction. Your platform creaks and collapses. The fastest way I know to do this is to have one team building new stuff, one team taking care of the platform and no-one in between. This creates almost an internal war of them vs us exacerbating the problems of technical debt. Against anyone with a faintest clue of what they're doing ... you lose.

9) If I say phrases like ILC, two factor market, supplier ecosystem and you go "eh?" ... you'll probably lose. There are many forms of ecosystems with many different models and mechanisms of exploitation. Try to learn the different types.

10) If you think platforms are all about marketing ... you lose.

11) If you think platforms are all about engineering ... you lose.

12) If you think platforms are easy (ps. I built the first platform as a service in 2005 and ran a large single sign on and imaging platform back between 2001-2006 with many millions of users) then don't even bother. You'll lose.

13) If you think the secret is to build an API specification, call it a standard, even an open standard, and that vendors will all come and build against it, creating your ecosystem in sheer delight at your wonderful gesture ... oh dear, you're in so much trouble. It's cheaper to open your wallet to others and say "help yourself".

There's more but I'd rather gnaw my leg off than talk about platforms and ecosystems again. This is enough to begin with.

Wednesday, August 12, 2015

On the future.

I often talk about the importance of situational awareness. The technique I use for this is known as Wardley mapping and you can read about it in CIO magazine. If you're new to this then the rest of the post won't make sense, so I'd advise you to save some time. TL;DR it's complex.

Once you have a map, it becomes fairly easy to see how a market will evolve. There are numerous common economic patterns, from componentisation to co-evolution to inertia, along with various forms of competitive gameplay that can be used to manipulate this change and create an advantage. With a map (which provides position and movement) of an economic space, you can examine the line of the present and work out points to attack. From here, working out strategic play (i.e. why attack here over there) is fairly easy. I've summarised this in figure 1.

Figure 1 - Determining future from now.


With reasonable situational awareness you can anticipate certain changes and prepare for them through scenario planning. You can avoid getting caught out unnecessarily. This is more than enough (along with operational efficiencies through removing duplication and bias) to compete against most companies. However, there are some more advanced techniques. 

When we look at a map, certain aspects of change are more predictable than others. I've provided a list in figure 2.

Figure 2 - Predictability of Change.


For example, existing trends (i.e. stuff that is happening) are fairly obvious in terms of what (i.e. the trend) and when (i.e. now). There's little advantage in this stuff, despite it filling up endless management journals. At the same time there's the unknowable, e.g. the genesis of a new act or an impending product to product substitution. The best you can do here is scan the environment, notice it's happening (i.e. it's become an existing trend) and react accordingly. You can't anticipate this stuff, i.e. Blackberry couldn't anticipate that the iPhone would appear and disrupt it.

However, the knowable stuff is the most interesting because here you can create an advantage and you can anticipate what is going to happen (but not when) or vice versa. In certain special cases you can do a reasonable job of both through the use of weak signals but I'll get onto that.

For example, when something new appears we can anticipate that if there is competition (supply and demand) then it'll evolve! We can even specify the stages of evolution (for activities, practices, data and knowledge), e.g. an act will evolve from genesis to custom built to product (+rental) to commodity (+utility). We can state how its properties will change (from the uncharted to the industrialised) and how competition will drive this. I even know that on average it'll take 20-30 years to go from genesis to the point of industrialisation. We know an awful lot about the what.

Unfortunately, we can't predict when the state changes will occur with any detailed level of precision as this depends upon individual actors' actions, i.e. I know bio-printing will eventually become a product and then a commodity component but I don't know who will make this happen or precisely when each of these state changes will occur.

However, there are some special classes of change. For example, I know that any act will evolve from product to commodity (+utility). But I also know that as it does so, past product companies (suffering from inertia built up during a time of relative peace between product vendors) will be disrupted by new entrants. It'll take about 10-15 years for the change to become obvious and for the past vendors to be on their way out. There'll be an explosion of new activities built on top of this commodity (a time of wonder) and a change of practice related to the act (co-evolution). There's an awful lot I can say about the what of this product to commodity (+utility) state change, which we describe as a point of 'war' in the economy.

Fortunately, in this special case there's a very specific weak signal technique which I can use to narrow down the target range of when a 'war' is going to occur. I've provided some results from this technique in figure 3.

Figure 3 - Points of War


(P.S. Green is unpredictable. Muddy brown is middling. Red marks the points of war.)

It's through an earlier version of the technique that I knew compute was moving towards a utility before AWS. It's also how, at Canonical in 2008, I knew we had to focus on the co-evolved practices (devops), as well as capturing the cloud market and any new activities building on top, and how past vendors had far less time than they realised.

So, for example, I happen to know the 'war' in 'big data' systems is kicking off. I've actually known for quite some time that this was heading our way. This 'war' means we will see utility providers in this area (they have already launched). The 'big data' product vendors (who have inertia) will dismiss these players and declare that the new entrants are useful for development but when it comes to production you will want to talk to them. They'll probably even spread FUD. However, in about 10-15 years the past vendors will be in serious trouble. I can even tell you that this type of change (known as a punctuated equilibrium) will catch those past vendors out, i.e. in 5-10 years those vendors will be crowing about how the new entrants represent less than 3-5% of the market but by 10-15 years those new entrants will be 30-50%. If you wanted (and I felt inclined), I could already give you a list of the dead.

This change will cause an explosion of new activities (i.e. genesis) based upon these standard components, in a time of wonder around data. I know that a time of wonder will occur and I can say roughly when, but of course I don't know what those new activities will be (no-one does). Genesis is unpredictable, but at least I can tell you to keep an eye out - new stuff will happen! There will also be new practices developing around the use of such utility services; we'll probably even give it a meme (hopefully not DataDev or DataOps or any other awful combo).

Now, if I understand my value chain then I can scenario plan around fairly predictable patterns and use weak signals to identify when they're likely to happen. I can't avoid the unpredictable (e.g. product to product substitution) any more than I can avoid the need to gamble and experiment in the uncharted space if I'm trying to create something new. But I can ruthlessly exploit the knowable against opponents who can't even see the board. If they could, they'd never be disrupted by anticipatable forms of change (e.g. cloud) because even with the inertia, they could overcome it.

For most companies, however, there is little to no situational awareness, which means everything bar the obvious existing trends appears unknowable and comes as a complete shock. These are my favourite companies to compete against and there's an awful lot to choose from out there.

Happy Hunting.