Thursday, October 16, 2014

Of Peace, War and Wonder vs Company Age.

One of the more interesting discussions in recent times has been Prof Jill Lepore's arguments against Clayton Christensen's concept of disruptive innovation. In her now famous New Yorker article, Lepore argued that disruptive innovation doesn't really explain change, that it is instead mostly an artefact of history, a way of looking at the past, and that it is unpredictable. This really is a non-argument because both Christensen and Lepore are correct. The problem stems from there being two forms of disruption - one of which is predictable and one of which isn't.

The two main forms of potential disruption are product to product substitution and product to utility business model substitution. 

With product to product substitution, the predictability of when (which depends upon individual actors' actions) and what (the genesis of some new feature or capability) is low (see figure 1). This means a new entrant can at any time create a disruptive product but a company has no way of ascertaining when that will occur or what it will be. So whilst disruption will occur (as Christensen points out), it is unpredictable (as Lepore points out). Apple's iPhone disrupting the BlackBerry is a good example of this type of disruption.

Figure 1 - Predictability.


With product to utility substitution the "what" and "when" can be anticipated. Hence a new entrant can more effectively target a change to disrupt others. However, it also means an existing player can effectively mount a defence, having prior knowledge of the change and time to prepare. Fortunately for the new entrants, the inertia faced by incumbents - existing business models, developed practices, technological debt, behavioural norms, financial incentives, Wall Street expectations and self interest - is often insurmountable, so the start-ups often win. Hence, whilst the change is entirely defendable against (often with many decades of prior warning), companies fail to do so. This form of disruption is entirely predictable and it is here where Christensen's theory excels.

Now, product to utility substitution is a key part of the commonly repeating cycle of peace, war and wonder that we've discussed extensively. The 'war' element can be anticipated and the cycle occurs at both a macro and micro-economic level.

You can even model the potential impact of this cycle on company age. Create a simulation with 1,000 actors, assume all actors are in competition, that the largest companies start with an age of 45 years, that there exists some unpredictable disruption from product to product substitution and that there is no peace/war/wonder cycle. You can then graph the emergent change in the age of the top 400 companies as new entrants arrive and previously successful companies fail. I've provided the output of such a simulation in figure 2.
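For the curious, here's a minimal sketch of what such a baseline simulation might look like in Python. Every mechanic here is an illustrative assumption rather than the actual model - a flat per-step substitution probability (the 3% p.a. figure mentioned below), ages resetting to zero on disruption, and treating the oldest 400 as the top 400:

```python
import random

def simulate_baseline(n_actors=1000, steps=200, start_age=45,
                      p_substitute=0.03, top_n=400, seed=42):
    """Baseline run: companies age one year per step; each step a
    company may suffer product to product substitution, in which case
    it is replaced by a new entrant (age resets to zero).  There is no
    peace / war / wonder cycle yet."""
    rng = random.Random(seed)
    ages = [start_age] * n_actors
    avg_top_age = []
    for _ in range(steps):
        ages = [0 if rng.random() < p_substitute else age + 1
                for age in ages]
        top = sorted(ages, reverse=True)[:top_n]  # assumed proxy for the top 400
        avg_top_age.append(sum(top) / top_n)
    return avg_top_age

if __name__ == "__main__":
    series = simulate_baseline()
    print(series[:3], series[-3:])  # settles around a fairly stable level
```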

Figure 2 - No Peace / War / Wonder cycle



In the above, the average age remains fairly constant over time (the x-axis shows the sequence of steps in the model, with each step analogous to a year). This is because whilst companies age, substitution by new entrants counteracts the growing age.

This requires a set of specific conditions, including a moderate level of disruption from product to product substitution (3% p.a.), but I'll use this simulation as our baseline. By adding in the peace / war and wonder cycle, starting with a condition of 30 steps (e.g. years) for an act to evolve from genesis to commodity and 10 steps (e.g. years) for a commodity to disrupt an existing industry, the following pattern (figure 3) emerges.
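As a rough sketch of how that cycle might be bolted onto the baseline above - the 15% war-phase disruption rate is my own assumption, since only the 30 and 10 step timings are fixed here:

```python
import random

def simulate_with_cycle(n_actors=1000, steps=200, start_age=45,
                        p_peace=0.03, p_war=0.15, evolve_steps=30,
                        war_steps=10, top_n=400, seed=42):
    """Adds a crude peace / war / wonder cycle: an act takes
    evolve_steps to go from genesis to commodity (the 'peace' phase,
    baseline substitution only), followed by war_steps of elevated
    disruption as the commodity form tears through the industry."""
    rng = random.Random(seed)
    ages = [start_age] * n_actors
    avg_top_age = []
    cycle_length = evolve_steps + war_steps
    for step in range(steps):
        in_war = (step % cycle_length) >= evolve_steps
        p = p_war if in_war else p_peace
        ages = [0 if rng.random() < p else age + 1 for age in ages]
        top = sorted(ages, reverse=True)[:top_n]
        avg_top_age.append(sum(top) / top_n)
    return avg_top_age
```

Run over a few hundred steps, the average age of the top 400 dips with each 'war' and recovers during each 'peace', giving the undulation in figure 3.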

Figure 3 - Company Age with Peace / War and Wonder Cycle



What's happening now is a constant undulation in average company age as the environment moves through these cycles. It constantly attempts to return to a higher average age but the repeated 'wars' and disruption by new entrants (on top of the normal product to product substitution) keep this in check.

Of course, one of the interesting aspects of the peace, war and wonder cycle is that it not only affects all activities, practices and data but some of these components can themselves be communication mechanisms. Such communication mechanisms (e.g. telephone, postage stamp, Internet etc) increase the rate of diffusion of information, which impacts the speed at which evolution occurs. This in turn accelerates the speed of the peace, war and wonder cycle.

Rather than us becoming more 'innovative' as a species, it appears that the speed at which things industrialise (i.e. evolve to more commodity and utility forms) - and hence the rate at which we are forced to adapt and move onto the next wave - has accelerated. If you now add this communication impact into the simulation (i.e. assume some of those peace, war and wonder cycles impact communication, causing a subsequent higher speed of future cycles) then the following pattern emerges (see figure 4).
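One hedged way of expressing that feedback in the sketch above is to shrink the evolution phase each time a cycle completes - the 0.8 shrink factor and the floor of 5 steps are pure assumptions:

```python
import random

def simulate_accelerating(n_actors=1000, steps=200, start_age=45,
                          p_peace=0.03, p_war=0.15, evolve_steps=30,
                          war_steps=10, accel=0.8, min_evolve=5,
                          top_n=400, seed=42):
    """As simulate_with_cycle, but each completed cycle is assumed to
    industrialise some means of communication, multiplying the length
    of the next evolution ('peace') phase by accel, down to a floor."""
    rng = random.Random(seed)
    ages = [start_age] * n_actors
    avg_top_age = []
    evolve, pos = float(evolve_steps), 0
    for _ in range(steps):
        p = p_war if pos >= evolve else p_peace
        ages = [0 if rng.random() < p else age + 1 for age in ages]
        top = sorted(ages, reverse=True)[:top_n]
        avg_top_age.append(sum(top) / top_n)
        pos += 1
        if pos >= evolve + war_steps:          # cycle complete
            evolve = max(min_evolve, evolve * accel)
            pos = 0
    return avg_top_age
```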

Figure 4 - Company Age with Peace / War and Wonder Cycle plus Communication impacts.


What's happening is that the system is constantly trying to maintain an age but the peace / war and wonder cycle is causing oscillations around this (due to new entrants and the failure of past giants). However, the acceleration of the cycle (due to commoditisation of the means of communication) is causing a shift downwards to a lower age (and a new stable plateau around which age will oscillate).

However this pattern is highly influenced by the ability of the agents to adapt (i.e. if we assume high levels of situational awareness and the ability of companies to evolve then this pattern doesn't happen and a completely different pattern of dominance emerges).

I mention this because the same pattern in a simulation - which is derived from supply and demand competition causing evolution and the interplay with inertia causing the peace, war and wonder cycle - can also be seen in the graph of S&P500 company age over time by Foster (see figure 5).

Figure 5 - Variation in average company age with S&P500


You have the same undulation that is caused by peace, war and wonder cycles plus a decline in average age, which would be expected from commoditisation of the means of communication and acceleration of the cycle (e.g. telecommunications, the Internet etc).

So, why mention this? Well, I'd argue that what we're experiencing is all perfectly normal. The system is rebalancing to a new company age around which it will oscillate. There are many ways of countering this effect by exploiting predictable change (which is unknown to many) and through the use of ecosystems but that's another post for another day.

The good news is that most companies have appalling situational awareness and so it's very easy to exploit. I've only been involved in three startups (all sold to large companies) and I've used these techniques in working for Canonical (we stole the cloud from Red Hat) and Governments. It's amazing how much power a little situational awareness and understanding of common economic patterns gives you.

--- Update 17th October

After talking with Florian Otel, a smart cookie and always a good chat, I decided to run the simulation above several times to see if we could get it to mimic real life.

After a bit of trial and error, I set the starting company age at 50 years, with each step of the model representing 270 days, a low product to product substitution rate, a higher rate of disruption from the 'war' phase of the cycle (i.e. extremely low levels of situational awareness and ability to adapt in the agents), a base time to industrialise of 30 years, a time to disrupt once industrialisation starts of 15 years, a set of peace / war and wonder waves each impacting communication, a 7 year rolling average of the top 400 companies, 1,000 competing companies (actors) and an initiation time of April 1937.

I ran the simulation 10 times because each step in the simulation is probability-based: each actor has a chance to age, be disrupted or disrupt others in each step. Hence after each simulation has completed, different actors have died or taken over etc. No two simulation runs are identical.
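A sketch of that Monte Carlo step, reusing the hypothetical simulate_accelerating function from earlier (the parameters above are applied where stated; everything else remains an assumption):

```python
def rolling_average(series, window):
    """Simple trailing rolling average of a list of values."""
    return [sum(series[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(series))]

# each step represents 270 days, so a 7 year rolling average spans
# roughly round(7 * 365 / 270) = 9 steps
window = round(7 * 365 / 270)

# ten runs differing only in seed; each run's randomness means
# different actors die or disrupt in each simulation
runs = [simulate_accelerating(start_age=50, seed=s) for s in range(10)]
smoothed = [rolling_average(run, window) for run in runs]
```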

Figure 6 provides the overlapping results of the 7 year rolling average of company age for all ten simulations on a single graph. A very strong emergent pattern can be seen which doesn't do a bad job of mimicking real life.

Figure 6 - Approximation of Real Life through Simulation


It's not a bad approximation to Foster's graph but it's by no means perfect. There's some variation in the simulation when re-run (as can be seen from the different lines in the graph). The times are not perfect (often being out from real life by many years) nor is the shape identical. However, it should be remembered that in the simulation the agents' actions are random, whereas in real life we have the ability to anticipate change - I'll come back to that point.

I've overlaid both Foster's graph and the simulation on the same company age / time axis scale in Figure 7.

Figure 7 - Comparison of Foster to Actor (Agent) based Simulation


Now, not a lot can be inferred from this. It's an agent (actor) based simulation which creates an emergent pattern that approximates real life. It reinforces the Internet (1991), cloud computing (2008) and the information technology revolution (1967) as key moments in the industrialisation of communication. It's not bad though and gives food for thought.

However, one key thing was very noticeable and that's the point about anticipation. The model only ever comes close to mimicking real life when the agents themselves act as though they have little to no situational awareness and little ability to anticipate change, hence creating high rates of disruption in the 'war' part of the cycle. If I'm going to infer anything from the model, it would be the implication that most companies are running blind.

Sunday, October 12, 2014

On maps, component class, pipelines, markets, inertia and economic states.

When drawing maps, I often use different symbols to represent different aspects of competition. For example, since activities, practices, data and even knowledge evolve then I'll often mark these aspects on the map (see figure 1).

Figure 1 - Activities, practices, data and knowledge.


With most maps, I tend not to mark up the different component classes unless it is useful. Even when it is useful, whilst in practice I might add a legend to show the different classes of components (activities to knowledge), I tend not to fully write out the process of evolution for each class on the evolution axis - it becomes unwieldy. I simply use the evolution of activities to represent the equivalent stages (publication types I to IV) of evolution for every class.

It's worth remembering that maps (even geographical maps) are simply a representation of the space. 

In some cases, within a map there will be a pipeline of constant change e.g. content for publishing. I'll normally mark this on (as with the case of the TV industry in figure 2).

Figure 2 - A content pipeline


When scenario planning, I'll tend to add on different markets to show comparison to the market in focus. I'll also add on further contextual information such as price elasticity, known forces (buyer vs supplier), known constraints and known difference between the company and the market (usually a dotted red line identifying a delta or a solid red line indicating a difference). An example of this is shown in figure 3.

Figure 3 - Comparison to market.


I'll also add on potential force multipliers (e.g. ecosystems), potential sources of inertia, likely points of change (a grey dotted line) and general comments or areas of interest.

Figure 4 - Ecosystems, inertia, points of change and areas of interest.


Lastly, I'll add different competitive states (peace, war and wonder), current and future states, competitive forces and potential for impacts. See figure 5.

Figure 5 - Competitive states and competitive forces.


The final maps I produce tend to contain elements of all the above. They are complex but then, so is competition.

When scenario planning, all of these components - from activities, practices and data, to inertia, competitive forces, constraints, economic state, points of change, ecosystems, comparison to other markets, buyer vs supplier relationships, pipelines, elasticity and other compound effects (co-evolution, Jevons' paradox etc) - need to be considered. Trying to do this in your head without a map (i.e. a way of visualising and discussing a landscape) is almost impossible for any complex business.

Saturday, October 04, 2014

Something that will change the world of competition ...

One of the most powerful force multipliers in competition is the use of an ecosystem model known as ILC built around a utility service. This model has been in operation for about a decade and can be shown to create network effects in terms of innovation, efficiency, customer service and stability of revenue. There's nothing quite like it but since it's old hat, I won't go through it again.

However, the ILC model doesn't work quite so well in the product space (because the capture of consumption data requires expensive market research) nor in the physical commodity space (again there is no way of capturing consumption data).

This is all about to change. 

Sensors are getting to the point of being industrialised to commodity components that will capture and centrally store data through a "Sensor as a Service". Future products, even physical commodities will contain multiple "Sensor as a Service" components. This provides the capability for ecosystem games like ILC to be played out in the physical world. 

Supplier companies will start providing low cost commodity sensors with an attached Sensor as a Service capability as a highly industrialised platform. Other companies will deploy these components into their products and new inventions, and hence an ecosystem will build around these Sensor as a Service components. The benefit for the deploying companies is that the sensors will be low cost and the Sensor as a Service will provide data aggregation, market comparisons (performance compared to other sensors) or a range of other useful capabilities. Whilst useful for lowering the cost of experimentation and product implementation for the deployer, the real beneficiary is the supplier.

The supplier can play the same trick that happens in the digital world of not interrogating what the sensor is doing (that'll be private to the company deploying it) but simply monitoring consumption through the Sensor as a Service to identify the spread of new successful innovations (whether genesis of a new act or a product differential). It's no different to the ILC model but now played out in the physical world and it will have the same impacts. 

From figure 1 below - you industrialise a component activity representing a sensor (A2) to a more commodity form (A3) which is provided with a Sensor as a Service capability (e.g. data and / or code with some form of connection to a remote API). Other companies then build new inventions / feature differentiation (B1, C1, D1) on top of the Sensor (A3) because it is provided at very low cost and hence reduces their risk of experimentation. You then simply monitor consumption of the API to identify what changes have been successful and when identified you aim to commoditise any component (D2) to a future Sensor service and repeat the game - you get everyone else to Innovate, you Leverage the ecosystem to spot success, you Commoditise - ILC.

Figure 1 - ILC Model


For example, let us suppose we were Amazon (they are very good at ecosystem games). With big data rapidly becoming an industrialised component (services like BigTable and EMR already exist) and CCDs being fairly commodity components, let us hypothesise that we introduced a CCD Sensor as a Service. Makers of devices which include CCDs would get low cost CCDs and a service telling them about the performance of their CCDs in the wild, maybe some other data aggregation capability (even to the point of customisation to location / time given environmental conditions). Of course, as the supplier, we would get to know which products (in which our sensors are deployed) are rapidly growing and being used, regardless of who is making or selling them or the data being transferred. This is achieved by simply looking at consumption of the service, the actual sensor data being private to the deploying company. This is incredibly useful for the supplier and why ecosystems are powerful future sensing engines.
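As a purely hypothetical sketch of that 'Leverage' step in code - the event format, names and growth threshold are all invented for illustration, not any real Sensor as a Service API:

```python
from collections import defaultdict

def leverage(events, growth_threshold=2.0):
    """Given (period, product_id) consumption events from a Sensor as a
    Service API, flag products whose usage grew by at least
    growth_threshold between the last two periods - candidates for the
    'Commoditise' step of ILC."""
    counts = defaultdict(lambda: defaultdict(int))
    for period, product in events:
        counts[product][period] += 1
    candidates = []
    for product, by_period in counts.items():
        periods = sorted(by_period)
        if len(periods) >= 2:
            prev, last = by_period[periods[-2]], by_period[periods[-1]]
            if prev and last / prev >= growth_threshold:
                candidates.append(product)
    return candidates

# e.g. product B1's consumption triples between periods - flag it
events = [(1, "B1"), (1, "C1"),
          (2, "B1"), (2, "B1"), (2, "B1"), (2, "C1")]
print(leverage(events))  # ['B1']
```

Note the supplier never inspects the sensor data itself, only the volume of consumption - which is exactly the trick described above.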

The net effect will be the same as in the digital world. The supplier will start to simultaneously exhibit :-
  • rapidly growing rates of innovation - it's in fact the ecosystem that is doing the innovation for it. 
  • rapidly growing rates of customer satisfaction - by using consumption data to pick successful changes in the ecosystem and then providing this as new components to everyone else
  • high rates of efficiency - simply economies of scale
  • high stability of revenue - through provision of industrialised components and reduced risk of experimentation (everyone else is taking the risk)
  • eventual grumbling - as other companies start to complain "they've eaten our business model again"
There's a whole new world approaching where ecosystem games (from game theory to open source as a weapon) can be re-applied in the physical world. Competition from physical engineering to healthcare is going to get seriously interesting. We've seen early starts in this space over the last couple of years but it is building. Key to success, of course, will be to position yourself as the supplier of commodity sensors with the Sensor as a Service attached i.e. you need to identify those sensors suitable for such a game, industrialise them to components and start building the ecosystem of other companies on top (see figure 2 for a rough simplification of the game).

Figure 2 - high level map of the game


That's the really interesting thing about the Internet of Things. The real battle will be over the underlying components and the ecosystems that are built around them. Sensors are sexy - well, if you're a competition nut like myself. You don't want to be the device manufacturer, you want to be the component sensor as a service in every other manufacturer's device.

Oh, and the best news is ... most of the competitors in this space probably won't see this coming (poor situational awareness); they'll focus on the device and communication between devices whilst you can start to build up in underserved markets. When it finally hits then, combined with inertia, this will be one of those predictable forms of disruption that any start-up can have a field day in. There are a few billion to hundred billion dollar companies to be made in this space.

How do I know this? Well, I don't - well not for definite. With another 7,200 days of data collection I could be more conclusive (or not) but alas that's another story. At the moment, I'd advise taking the above with a pinch of salt as with any other prognostication on the future unless you're a start-up in which case I'd say 'you become what you disrupt'.  It'll take 10-15 years before this space really kicks off, so it's time to start building now.

There's a lot of future value in sensors and sensor ecosystems.

On the future ...

The Chinese philosopher Lao Tzu once stated, “Those who have knowledge don’t predict. Those who predict don’t have knowledge”. However, let us assume that there is future change that is known to everyone and change that is unknown to all (i.e. we’re forced to speculate and predict). Advantage can be created by a business through change that is known only to a few and “unevenly distributed”. 

This raises a question: can we determine a means of identifying future change that is knowable but not known to all? When examining literature, we can often cite examples of past science fiction novels that appear prophetic. However, the sheer volume of publications (in excess of 70K novels and short stories per annum) means that this can be attributable to pure coincidence, and often such predictions suffer from interpretation effects (i.e. we read into them prophecy where there is none - the prognosticators' equivalent of the P.T. Barnum effect).

The challenge is whether we could develop a means of more accurately predicting change beyond random coincidence. Could we predict the predictable because the knowledge was already there even if we were only vaguely aware of it? Could we create a more ‘prophetic’ story? To exploit the future, we need to somehow create a framework that allows us to uncover knowable change. Such a framework must be inherently holistic, interdisciplinary, relative, repeatable and useful: - 
  • Holistic because of the potential for combinations to be greater than the individual change (e.g. standard electricity plus material science leading to computing). 
  • Interdisciplinary because it impacts not just technological, economic and physical systems but also social systems. 
  • Relative because the changes may be different depending upon the observers' viewpoint i.e. the impacts in one industry may not be the same in all. 
  • Repeatable because the validity of single, one-off predictions provides no method of testing beyond the scope of the single prediction. 
  • Useful because vague generalisations and known effects provide no means of exploitation. 
Is this possible? The answer turns out to be ... sort of, maybe ... but I'll leave that to another day ... well, to be precise ... another 7,200 days approximately in order to gather the data to validate it.

On disruption and executive failure

When examining the issue of predictability, it becomes fairly obvious that there are two important axes to consider - the predictability of what and the predictability of when. By examining change on these axes (see figure 1) then one pattern becomes clear - there is more than one form of disruption.

Figure 1 - Predictability of What vs When


The two main forms are disruption by Product to Product substitution (which is unknowable i.e. unpredictable) and disruption by Product to Utility substitution (which is knowable) :-

The Unknowable (e.g. product to product substitution). In this form, a change in the value chain (caused by some new component added to the product e.g. size) is unpredictable. The entrant has no idea whether it will be successful, nor does the incumbent. However, the incumbent has inertia caused by past business success and, if the change is successful, the speed of adoption tends to be non-linear. This means the incumbent has little time to react, lots of inertia to overcome and is unsure whether the change will dominate. Once they realise the change is inevitable, it's usually too late to do anything about it. This type of disruption can only be accurately seen post event and in this case, Jill Lepore is correct that disruption is a post event classification and unpredictable.

The Knowable (e.g. product to utility substitution). In this form, the change is highly predictable and can be prepared for many years (or decades) in advance. Cloud computing fits into this category. This predictable disruption can be determined pre-event and, for the same reason, it can be defended against and shouldn't disrupt. The incumbent will have inertia but they also have plenty of time to prepare - from years to decades. The reason why incumbents become disrupted is due to one of two variations.

The first variation is that the change goes unseen due to poor situational awareness & scenario planning on the part of the executive. Less than 30% of companies have any form of scenario planning and less than 4% have any means of visibly seeing the landscape, hence such blindness is rife. In this scenario the incumbent fails to prepare for a predictable change and suffers the same consequences as though the change were unknowable. For an attacker, this is highly attractive as the change is highly predictable and therefore a new entrant can target a space with a good degree of certainty of success. If you can find a product to utility substitution for which all the weak signals indicate it is ready to happen and for which incumbent executives show low levels of situational awareness ... well, it's like stealing candy from a child. For extra icing on the cake, you can often use the incumbents' inertia against them. This is how Canonical managed to walk in and help itself to the entire future cloud market against a vastly better resourced and funded competitor in Red Hat.

The second variation is that the change is seen but the incumbent still fails to act. In some cases (quite rare) this failure to act is due to extremely strong and entrenched inertia to change. This is what happened with Blockbuster vs Netflix - Blockbuster saw the change, was first with video online and on demand but completely failed to deal with the inertia caused by its shops. The same happened with Kodak - first in with DSCs and online photos but it failed to deal with the inertia caused by its analog fulfilment systems. Not seeing a predictable change (due to poor situational awareness) is a failure on the part of the executive; seeing the change and failing to act is shockingly poor.

The points to take home are 1) disruption occurs (as described by Christensen) and 2) there are two forms of disruption - unknowable and knowable. The unknowable type is difficult to defend against due to its unpredictability. So, in the case of Apple vs RIM, you can't really point the finger at the executives. It's just down to chance, a gamble, and it's not surprising that companies get hit by this. Lepore is spot on regarding the lack of predictability and post event classification with this form.

However, the knowable form is highly defendable against - it is easy to adapt to. That companies get disrupted by this is either a failure of executives in terms of situational awareness or, even worse, a failure to act upon what is known. There isn't much of an excuse for this.

I mention this because 'disruptive innovation' appears to have become a common excuse for company failure. In many cases, companies were disrupted because of executive failure to either see or act upon a visible change i.e. the storm was visible and they either failed to look or failed to move the company out of the way. This shouldn't happen but unfortunately it does.

On Prognostication

The Internet did not catastrophically collapse in 1996 as Robert Metcalfe (co-inventor of the Ethernet) predicted. Apple was not ‘dead’ by 1997 as Fortune, Business Week and the New York Times told us. Neither wireless nor airplanes made war impossible – Marconi and the Wright brothers got this wrong.

I don't use a nuclear powered vacuum cleaner nor do I live in Arthur C. Clarke's vision of the autonomous home that flies into the air to head south for the winter. Apparently I was supposed to in 2001, when not travelling around in my Fred Freeman 1999 Rocket Belt or taking saucer shaped flying ships to NASA's permanent moon base, which still hasn't appeared after 50 years of prediction. Edison's prognostication that gold would become worthless never came true. The automated vehicle of the 1980s that David Rorvik assured us we would get never turned up. Without a getaway vehicle, it's fortunate that Donald Michael's warning that by the 1980s we would be replaced by intelligent machines never actually came to pass. Of course, I could have escaped to Isaac Asimov's 2014 underwater city ... except it's not there. Nor am I taught by electrocution, firemen do not have flying wings and I'm still waiting for the personal helicopter in my garage (I was supposed to get that in 1968).

Prognostication is a very sorry business.

Friday, October 03, 2014

When to use a curve, when to use a map

The mapping technique is based upon two axes - a chain of needs (the value chain), from the user needs (visible to the user) down to the underlying components (whether activities, practices or data) that meet those needs (invisible to the user), versus evolution. A description of how the evolution axis was developed, including its relationship to publication types, can be found in this post.

Now generally, when examining change in any complex system then you'd use a map because the system contains many evolving components and the interaction between them is also important (see figure 1).

Figure 1 - A map


The history of each component is simply a movement from left to right on the map and this can also be viewed on the evolution curve - for example, you can examine the change of computing infrastructure from its genesis with the Z3 to commodity forms today (see figure 2).

Figure 2 - Evolution of a single component (computing infrastructure)

However, sometimes it is useful to simply focus on the current state of change and view the position of all the components of a current map on the evolution curve - see figure 3. This is part of a process known as profiling.

Figure 3 - The components on the Figure 1 map on the evolution curve


Now, the components shown on figure 3 are all independent and they are all evolving along the same pathway. Creating profiles is an extremely useful technique for competition and one I've mentioned before; however, for the time being I simply want to flag it.
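To make the distinction between a map and a profile concrete, here's a toy sketch - the component names and positions are invented, not taken from the figures:

```python
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    visibility: float  # value chain axis: 0 = invisible to user, 1 = user facing
    evolution: float   # evolution axis: 0 = genesis, 1 = commodity

# hypothetical components of a map
mapped = [
    Component("user need", 1.0, 0.7),
    Component("platform", 0.5, 0.6),
    Component("compute", 0.2, 0.9),
    Component("novel analytics", 0.4, 0.1),
]

# a map uses both axes; a profile discards the value chain and keeps
# only each component's position along evolution
profile = sorted((c.evolution, c.name) for c in mapped)
for evolution, name in profile:
    print(f"{name:16s} {evolution:.2f}")
```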