Wednesday, April 16, 2014

'The best summary post that you've done' ... on mapping.

Things appear and diffuse through society via diffusion curves of ever maturing versions of the act. Product A1 leads to a better Product A2 and so on, until the act finally becomes more of a commodity (or a utility where suitable). These diffusion curves follow an s-curve shape and have an exponential component.

However, many of the curves (A1 to A5) tend to relate to products, leading to a misperception that because the time for products is long, the time to convert to utility will be equally long. This creates a period of overlapping conflict between the product view of the world and the commodity / utility view of the world.
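As a purely hypothetical sketch (in keeping with the figures below, none of this is real data), the s-curve shape and its early exponential component can be generated with a simple logistic function - here in Python, with made-up parameters:

import math

def s_curve(t, market=100.0, midpoint=10.0, steepness=0.6):
    # Logistic diffusion: roughly exponential early on, then saturating
    # at the applicable market - the s-curve shape described above.
    return market / (1.0 + math.exp(-steepness * (t - midpoint)))

for t in range(0, 21, 2):
    print(f"t={t:2d}  adoption={s_curve(t):6.1f}%")

Running this shows adoption growing by a near-constant multiple in the early periods before flattening out as the applicable market is reached.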

Figure 1 - Diffusion curves of A1 to A6
*example A1 to A6 above is not real data but a hypothetical.

Whilst diffusion curves are not identical in applicable market or time frame, you can graph evolution by measuring ubiquity against certainty. What drives the process is competition, both demand side (which drives ubiquity) and supply side (which drives certainty).

Figure 2 - Evolution of A1 to A6
*example A1 to A6 above is not real data but a hypothetical.

You can then draw a map of the environment and overlay common economic factors e.g. the evolution from the uncharted space (the rare, uncertain and changing) to the industrialised (the common, the defined, the static) and the change of properties this involves. Included in this is how evolution not only drives efficiency but also through the provision of ever more standardised interfaces enables the development of higher order systems which enables future but uncertain sources of wealth (e.g. electricity enabling TV, radio, computing etc).

 Figure 3 - Map of A1 to A6


*example A1 to A6 above is not real data but a hypothetical.

Of course, competition doesn't happen in isolation and the improvements to efficiency (+e), agility in building higher order systems (+a) and hence ability to create new sources of wealth (+w) create competitive pressures for others to adapt. This pressure only increases as more adapt. This mounting pressure creates a punctuated equilibrium between the past and future. Hence the early exponential aspect of diffusion curves.

 Figure 4 - Pressure to adapt mounts as others adapt.


Alas we have inertia to change due to past success. There are over 16 different forms of inertia to be considered.

 Figure 5 - Inertia to Change


We can now take into consideration inertia, the competitive consequences of evolution (+e, +a, +w), the increasing pressure to adapt, the exponential nature of change, the properties of each stage of evolution (uncharted vs industrialised) and the impact of diffusion curves (time for product vs time to utility) to determine an overall pattern of economic change. This has three components - peace, war and wonder - and it has a corollary in Holling's adaptive renewal cycle, which is unsurprising given that the work on evolution is driven by competition effects (as with nature).

 Figure 6 - Peace, War and Wonder from A1 to A6



*example A1 to A6 above is not real data but a hypothetical.

This pattern of peace, war and wonder occurs both at a micro and macro-economic scale. The larger waves (known as Kondratiev waves) are determined by whether the component that is evolving impacts many value chains (i.e. common components such as mechanical components, power, money and computing create large macro waves). The pattern of peace, war and wonder and its characteristics are reflected in the work of Carlota Perez, with Wonder covering eruption & frenzy, Peace covering synergy & maturity and War being the phase between one wave and the next.

 Figure 7 - Peace, War and Wonder at a Macro-economic scale.



Understanding the basic patterns, mapping environments and exploiting how to alter the position of pieces on the map is an essential part of strategy. The effects are dramatic, as demonstrated by an examination of 100+ companies that I undertook back in 2011/2012.

 Figure 8 - Strategic Play and effect on Market Cap over a 7 yr period.


* Examination of high tech leading companies, bubble size refers to number of companies in this group.

Unfortunately most companies have poor situational awareness (understanding of where and why) compared to action (the how, what and when), as exhibited by their strategy documents. The main culprit for many of the problems faced is the box and wire diagrams we tend to use to try and understand the jumbled mess of activities, practices and data that represents an organisation.

Figure 9 - The box and wire


All of these problems can be improved through the use of maps. However, mapping is only the start of the journey. Then you need to learn how to play chess between companies (and how to organise around evolution) and alas this is not something I can cover in a single blog post.

However, I'll be talking on this subject at OSCON.

PS. The title change was courtesy of @cpswan

For more reading on 
1) How to create a value chain - see here
2) Evolution - see here
3) Some of the basics - see here
4) The overall process - a long and somewhat rambling set of posts which was never completed and whose terms I've since tidied up - see here
5) The finer details - hmmm, sorry ... somewhat scattered throughout this blog over the last seven years.

Tuesday, April 15, 2014

Why Map?

In comparison to box and wire diagrams (IT system diagrams, business process maps etc) then ...


Friday, April 04, 2014

On Mapping and Licenses

I was recently asked a question on licensing of the mapping technique. 

I developed the mapping technique (and the later work on evolution) initially for my own use in 2005 and later refined it in 2007. I have always published the technique under a Creative Commons license - the license I use is known as CC BY-SA 3.0.

What this means is that you are free to copy the mapping technique and make derivatives of it under the same license. Does this mean I expect you to open up any maps you create? No. Only the technique, derivatives of the technique and anything which is built upon the technique whether guides or otherwise.

If you don't like that, well ... you can always create your own technique. It only took me several years of thinking about the problem and collecting data to demonstrate the effect. I gave it to the community as a gift as the community has given me so much. Do likewise.

As for writing a book - well, I've started again in my spare time. You can read part of my earlier effort here through a series of posts. As for workshops, I currently do these for LEF members and speak on the subject at Open Source Conferences such as OSCON. I will look at running more public workshops on this subject at some point in the future.

Thursday, April 03, 2014

Creating a value chain

The first part of mapping is to create a value chain. This is a chain of NEEDs. You start with the User need and determine the components needed to meet these needs and then any subcomponents that those components need ... and so on.

I often use the example of a cup of tea in tutorials / workshops. I've provided a simple and incomplete example of this in figure 1.

Figure 1 - a chain of needs.



It should be noted that :-

a) The value chain represents a chain of needs with the top being the user (i.e. the people we provide for)

b) Things are valuable to others that NEED them.

c) The chain can have cascade effects e.g. if POWER is down then the USER won't get their cup of tea. However, what the user cares about (and hopefully pays for) is the cup of tea. The user doesn't care about your power supplier. As the supplier of the cup of tea, power is your problem. Your power supplier is in effect 'invisible' to the user and the only thing the user cares about is whether you deliver on their cup of tea. If the power fails then the user's problem is that you failed to give them a cup of tea - this is what is 'visible' to them.

When mapping, it's important to start with USER needs and by that I mean what they need and not what you need i.e. starting off with a top level need of profitability or branding will lead you down the wrong path, as I can guarantee you that most USERS don't have a top level need of making you profitable.
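As a minimal sketch (in Python, with the usual tutorial components assumed rather than taken from the figure), the chain of needs can be expressed as a simple structure and walked from the user downwards:

# Each component maps to the components it needs - a chain of needs.
needs = {
    "user":       ["cup of tea"],
    "cup of tea": ["cup", "tea", "hot water"],
    "hot water":  ["water", "kettle"],
    "kettle":     ["power"],
}

def walk(component, depth=0):
    # Print the chain of needs, top (user) first.
    print("  " * depth + component)
    for sub in needs.get(component, []):
        walk(sub, depth + 1)

walk("user")

Components such as power sit several levels down and are 'invisible' to the user, exactly as in point c) above.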

Tuesday, March 18, 2014

Daisy Chains and HR

One of the most interesting aspects of the recent Satoshi Nakamoto saga is how an experienced computer engineer near Silicon Valley has been unable to find gainful employment in the industry for a decade despite a massive shortfall in skills.

It's interesting to me because a friend of mine was recently made unemployed. They had twenty years of coding experience and proficiency in many modern languages but had great difficulty finding a job. Fortunately that's now been resolved through personal contacts.

My friend's problems in finding employment were threefold.

1) Age
First, they were over 40 years of age and unfortunately our industry suffers from a 'culture of youth'. I can sort of understand that in non mathematically based industries where a pre-occupation with data doesn't exist, but given that the idea that the youth are more innovative is based upon extremely flawed concepts, I'm surprised it exists in computer engineering.

Alas, in my experience the HR departments of companies (with obvious exceptions such as the amazing work being done by Google's People Analytics) are not often populated with mathematically questioning folk but instead rely on 'softer' skills. If they were, then we probably would not have dubious schemes like Myers Briggs (MBTI) being used despite being shown to be practically worthless in the 1970s through Barnum responses (aka the Forer effect) and being rejected by the US Army. However, this is not the case; these myths persist, and age is an issue but not the biggest one.

2) Loyalty
The second issue is tending to stick in a job for a long time. I'll explain why in the next section.

3) Not having a job
Not having a job is usually the biggest problem in getting a job, particularly in industries where recruitment consultants are rife. To explain why, let's take two candidates for a job :-

1) Candidate Sarah has 20 years of directly relevant experience, a tendency to stick with a company for a long period of time and has been unemployed for six months. They are happy to accept $70K per year.

2) Candidate Sue has 10 years of mostly relevant experience, a tendency to leave a job after a few years and is currently employed. They are after $120K per year.

Which candidate would you put forward as a recruitment consultant? Well, I'm sorry to say that Sue is the most financially attractive for multiple reasons, of which salary is the least important. The real reason why Sue is the most attractive is that if they take the job then they leave a new vacancy to be filled, i.e. their current employment.

Many recruitment consultants I've known make their living from creating daisy chains i.e. they tout Sue to the employer whilst at the same time quietly preparing another candidate (e.g. Bob, preferably employed) for Sue's job and finding another candidate (e.g. Alice, preferably employed) for Bob's job. Think of it like a house chain but with people. 

Alice -> Bob -> Sue -> New Role.

Daisy chains create huge benefits. Firstly, there's the financial benefit if you take 10% on annual salary for everyone jumping role in the chain.

Second, it can help strengthen your relationship with the employer / client. Take for example Sue. The employer might not know Sue is looking but you do. You can prepare the 'perfect' candidate Bob and turn up at the right time when the employer discovers that Sue is leaving and before any job has been advertised. A quick, 'I know someone who is perfect for this' and you can probably get an interview done in a few days.

You then repeat the trick with the employer of Bob and all down the chain. Each time, with each employer (aka client) it helps strengthen your position as the goto person and you can help reinforce this - 'remember the time when Sue left, I found you Bob in less than a day' etc.

From talking to people in the industry, my understanding is that daisy chains are common practice. In some cases, these chains can be ten people long and ideal candidates are those who tend not to be loyal for long periods of time i.e. repeating chains is also a fairly common practice.

If you have a chain such as 

Dave (junior) -> Alice (senior) -> Bob (team lead) -> Sue (manager) -> New Role (director)

and you shift everyone, pocketing 10% of annual salaries, you end up with 

Dave (senior) - Alice (team lead) - Bob (manager) - Sue (director)

All you have to do is wait a couple of years, assuming you've managed to populate your chain with people who don't tend to stick around and then you can repeat it by finding Sue an even more senior job.

Dave (senior) -> Alice (team lead) -> Bob (manager) -> Sue (director) -> New Role (VP)

Leaving

Dave (team lead) - Alice (manager) - Bob (director) - Sue (VP)
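To put some purely illustrative numbers on it (the salaries below are invented; the 10% fee is the figure assumed above), one pass of such a chain is worth:

# Hypothetical annual salaries for each role being filled in the chain.
chain = {"Dave": 50_000, "Alice": 70_000, "Bob": 90_000, "Sue": 120_000}
fee_rate = 0.10  # assumed 10% of annual salary per placement

fees = {name: salary * fee_rate for name, salary in chain.items()}
print(fees)                # fee per placement
print(sum(fees.values()))  # total for one pass of the chain: 33,000.0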

Loadsa money ... daisy chaining and managing such chains is where the action is at. 

Until this is somehow resolved, the golden rules of getting a job in IT tend to be a) don't grow old b) don't stick with an employer for a long time and most importantly c) have a job in IT. In terms of progression, make sure you get yourself on a good daisy chain and so be friendly to your recruitment consultant, however unpalatable that seems.

Saturday, March 15, 2014

On mapping and the evolution axis

For a long time, I've been using maps of industries, businesses and systems to determine gameplay, management, learning of economic forces and how to manipulate markets. The map has two axes - one of value chain (which represents a recursive set of needs from user needs to supplier needs) and the other of evolution. See figure 1.

Figure 1 - a map of HS2


The first maps I produced were in 2005 and at that time, whilst I suspected and had examples of a pattern for evolution (from the genesis of an act to commodity provision), I actually had no way of describing why it occurred. At that time, I was familiar with concepts like diffusion (adoption over time) but this provided no consistent pattern for change - the diffusion curve of one instance of an activity is not the same as the diffusion curve of another if measured on identical axes. See figure 2.

Figure 2 - Different diffusion curves for maturing instances of the same activity


The solution to the problem occurred during a chance set of conversations in which I noticed that whilst people could agree on whether something was a commodity, when it came to products and something novel (i.e. the genesis of an act) disagreements abounded. This led me back to the Stacey Matrix (see figure 3, this version is a simplified diagram created by Brenda Zimmerman).

Figure 3 - The Stacey Matrix

The Stacey Matrix is useful in discussions of how groups agree, and what piqued my interest was the use of a certainty axis. This coincided with something Everett Rogers had said - that activities evolve through multiple waves of ever improving and more mature examples. A key part of evolution seemed to have something to do with certainty.

Hence in 2006 and 2007, I spent a great deal of time trying to determine a measure for certainty for an act. It was by looking in detail at publications of journals and papers on activities that I noted how they changed with time.  In examining a core set of activities and 9,221 related articles, I was able to categorise the articles into four main types - see figure 4.

Figure 4 - type of articles


I then developed a measure of certainty that used the volume of articles of type II and type III produced relative to the total volume of articles when that activity is commonly described as a commodity.  I used that reference point (when the activity was commonly described as a commodity) to examine how ubiquitous the act was and then compared past market adoption and publications against this measure. The result was the common pattern shown in figure 5.
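A rough sketch of the kind of calculation this describes (the article counts below are invented and the exact construction is an assumption on my part, not the published method):

# Hypothetical article counts per time period for one activity, split by
# the four types above. The final period is the reference point, i.e. when
# the activity is commonly described as a commodity.
articles = [
    {"I": 40, "II": 5,  "III": 0,  "IV": 0},
    {"I": 25, "II": 30, "III": 10, "IV": 2},
    {"I": 10, "II": 45, "III": 40, "IV": 10},
    {"I": 5,  "II": 50, "III": 70, "IV": 40},
]

total_at_commodity = sum(articles[-1].values())  # reference volume

certainty = [(period["II"] + period["III"]) / total_at_commodity
             for period in articles]
print([round(c, 2) for c in certainty])  # rises as the act matures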

Figure 5 - Ubiquity vs Certainty 


Now, when examining each of the publication types used to construct the graph, it became clear that each type of publication was related to a stage of evolution. See figure 6.

Figure 6 - Types of publication and relationship to evolutionary stage


By overlaying the types onto the ubiquity and certainty curve and extrapolating the ends (i.e. when something is novel we have very little information on it), I was able to finally produce in 2007 the evolution curve (see figure 7 below) which demonstrated the pattern I had used in mapping (see figure 1 at the top). It was shortly after this that I was able to demonstrate that the reason the pattern occurred was simply the interplay between supply and demand competition.

Figure 7 - the evolution curve.

I then gave a series of talks on this in late 2007 and early 2008 using the curve to explain the impacts of cloud, 3D printing and highlighting some other common economic patterns e.g. how organisations evolve, how you can exploit ecosystems to manage the future etc. However, as backward as this process sounds it's actually quite normal for a pattern to be 'noticed' and found useful well before any evidence or model demonstrates whether that pattern might exist or is entirely false. In my case mapping was based upon a pattern I noticed (pre 2004), that had proved itself useful (post 2005) and this was well before I had any solid model behind it (2007).

It doesn't mean the pattern is right, just that I've yet to find a better one. I'm sure I'll find some examples which break the model at some point in time - though after 7 years, I'm still looking.

How to fix bitcoin

There are many things to be admired about bitcoin - its simplicity, the convenience, the ability to create platforms for new forms of transactions, its flexibility and the public nature of the block chain.

However, I'm not a fan of it in its current form and I've written about the negative impacts of bitcoin if left unchecked (i.e. no strong Government intervention) because there is one unfortunate side effect - its impact on taxation. The problem is that bitcoin addresses can be used to obfuscate ownership. After a $100 million heist last year and despite the efforts of the community, the thief has yet to be found. Regardless of the public nature of the block chain, identity can be obfuscated effectively.

This creates a problem because bitcoin represents a cash based society where who owns the cash is difficult to determine and the cash can disappear across legislative borders almost instantaneously. It's like a 'cash in hand' environment but without the risk of being caught with a pile of cash in your safe at home or caught handing over a wad of notes in the middle of a transaction. In practice, at scale, it is relatively easy to transact in bitcoins in a way that obfuscates your involvement such that it becomes impracticable for anyone to trace, even though the block chain is public. Taxation on goods and income in such an environment becomes 'voluntary' and competition creates pressure for the avoidance of taxation, especially because of the ease with which this can be done. The net result of bitcoin's growth will be a reduction in taxation on income and goods with an increased reliance on land and citizenship tax. But not everyone makes a good living, and how can you have a welfare state when everyone can claim income poverty regardless of whether they have millions in value from bitcoins?

Bitcoin in its current form will lead to the future dismantling of all such state apparatus.

Now, whilst some might welcome the reduction in the state caused by a loss of taxation, the state is THE key economic driver of innovation, prosperity and social mobility. The laissez faire economic system is an extreme mindset of some of the more ardent supporters of Friedmanism and the Chicago School. There is no basis for assuming a beneficial society can be created without the state and in all likelihood it'll lead to a future consolidation of wealth, extremely low levels of social mobility and weak economic performance compared to countries that use a more mixed method. From a competition viewpoint this is not a good position.

So how do we fix bitcoin? Well, the state could introduce its own state backed crypto currency but an alternative solution is to simply ensure that ownership of the addresses is public and state verified. This can be done with bitcoin as it stands through draconian legislation requiring all addresses owned by its citizens to be a matter of public record. In China, this is already happening and traders are required to register their addresses.

Yes, this means high levels of transparency on what you spend and buy. You might accuse the state of surveillance but by making the addresses a matter of public record then everyone can use the public block chain to see what others are buying. The advantage of such transparency is that taxation (in terms of income, goods bought and wealth) becomes more feasible. We can have the benefits of transparency with a flexible, convenient and platform based currency along with a strong state, taxation and welfare.

The downside - a loss of privacy. 

However, given a choice between :-

Option 1) privacy, flexibility, convenience and a platform for further innovation VERSUS a lack of transparency on ownership, lack of taxation, a weak state, poor social mobility and poor economic performance.

Option 2) flexibility, convenience, platform for further innovation, transparency, taxation, strong state, better social mobility, better economic performance VERSUS a lack of financial privacy.

Then I go with Option 2) every time. I take the view that a loss of financial privacy through greater transparency is a small sacrifice compared to the potential benefits gained. 

Hence, I'm not a fan of bitcoin in its current form, left unchecked I consider it a form of economic weapon with likely severe consequences for the state. But even such a system can be manipulated to public benefit and I am in favour of crypto currencies in general.  The more disastrous impacts of bitcoin can be mitigated by creating a public register of addresses with state verified ownership and requiring by law all transactions to use such addresses.

It's time that this Pandora's box was fixed to benefit the state. I'd strongly recommend all Western Governments to follow the example set by China and start to bring bitcoin and other crypto currencies under a measure of control through the use of transparency. In this case, by public registers of addresses.

It is better to do this now than to wait until bitcoin and other crypto currencies spread further and the introduction of such measures becomes politically unfeasible. At this moment in time - though there will be lobbying against such transparency - the introduction of a register is an option. With such measures then at least we have the opportunity in the future to apply transaction taxes to a citizen's use of the currency. There are other issues with bitcoin but the worst effects can be mitigated if Governments act now.

Of course, such a choice would lead us along a path of radical transparency should crypto currencies succeed and dominate. All financial transactions would be public record. Hiding wealth and avoiding taxation would become difficult at best. With a few clicks I would be able to discover the inner workings of most companies, who they paid, who they were paid by - from lobbyists to the normal course of business.  There's probably a lot that people don't want to share but the choices are bleak - either a laissez faire system and all that it creates or a level of radical transparency that we've never experienced before.  I'm in favour of the latter.

Friday, March 14, 2014

On Government IT

There is a tendency for people to grasp onto one size fits all management methods for any problem, whether it's agile, six sigma, lean or ITIL. The problem of course is that any large scale system (whether an IT system or a line of business or even an industry) contains many different components (activities, practices and data) which are at different stages of evolution. The method you need to use depends upon the stage of evolution because the characteristics of the component change with evolution.

Hence, for example, the genesis of something is highly uncertain, constantly changing and needs an agile approach due to its uncharted nature (it's a voyage of discovery). However, the provision of a commodity is all about volume operations, efficiency, standardisation and removing deviation for what is essentially industrialised. There is no one size fits all management method for a complex system; you need to use a mix of methods.

But how do you decide what to use? Well, to do this you need to map out the environment. In order to map out the environment you need to first create a value chain starting with visible user needs (i.e. not what you want to create but what users actually need). In fact, the value chain is simply a recursive expression of needs. At the top are users and their needs with high level components that meet this. Underneath this are the subcomponents which meet the needs of higher level components and so on.

With a map you can then determine how something should be treated i.e. which methods to use. Maps also turn out to be essential for strategic play, risk management, cost mitigation, organisational learning and a whole bunch of other stuff but then that's hardly surprising. If you're playing a game of chess, your play will only be improved if you actually look at the board.

In figure 1 is a map of a large complex project with user needs clearly marked at the top. Unfortunately due to the nature of the project, I can't give you more details than what is provided (i.e. what the components are).

In figure 2, the same map is broken into how you treat components - the use of agile on one side, the use of highly structured methods on the other. Remember that the map is constantly evolving from left to right due to supply and demand competition and hence how you treat something will change over time.
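A minimal sketch of that split (the component names, evolution positions and stage boundaries below are all hypothetical placeholders, not taken from the map):

def method_for(evolution):
    # Evolution runs from 0.0 (genesis) to 1.0 (commodity/utility);
    # the boundaries here are placeholders, not fixed rules.
    if evolution < 0.25:
        return "agile (novel, expect constant change)"
    if evolution < 0.70:
        return "lean / off-the-shelf product"
    return "six sigma / outsource to utility"

# Hypothetical components from a map, with rough evolution positions.
components = {"novel user feature": 0.15, "case management": 0.45,
              "web platform": 0.75, "compute": 0.95}

for name, evo in components.items():
    print(f"{name}: {method_for(evo)}")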

In figure 3 the same map is broken into contracts by logically grouping components together. This is actually a useful technique for purchasing, mitigating cost overruns, ensuring the right methods are used and organisation (e.g. two pizza models, FIST etc).

Figure 1 - Map with User Needs.



Figure 2 - Map with Methods



Figure 3 - Map with Grouping



So why do I mention this? 

I occasionally hear people spout that UK Gov IT is somehow political, too focused on cutting costs and too focused on agile. Well, the above is a map from a very large Government project. I have several of these, some of which are being very actively used from management methods to strategic play. To say, UK Gov IT is all about agile and cost cutting is highly naive. It's normally the sort of thing I hear from large vendors who are annoyed that they can no longer just chow down on UK Gov as a soft target. 

What I see in UK Gov IT is the emergence of highly advanced techniques using multiple management methods, high levels of strategic play and a focus on user needs. Of course, it's not uniform. Some are well down this journey whilst others haven't started. This is all normal for an organisation undergoing significant changes. However, have no doubt that this change is occurring. 

Hence when I read ...

“The Government has no vision for digital Britain – the report that Labour delivered in the last year of our Government, Digital Britain, has yet to be superseded.  Four years on the opportunities are different and we are not even beginning to reap the positive benefits of the way in which technology can change our public services."

“Rather than addressing these challenges ad hoc and reactively, we need a framework for the relationship between the people and their data, government and digital."

“Which is why I am pleased to announce today that Labour will be acting where this Government has so comprehensively failed, delivering a new version of our Digital Britain report to be published before the next election.” 

My only comment is ... what a complete load of tosh. It's almost as daft as the statement I heard recently that to solve IT failures you need more specifications. It smacks of extremely poor situational awareness and understanding of the problems at hand.

On politics, well my involvement in writing the 'Better for Less' paper is well known. What is less well known are my strident political views. I'm 'old' Labour and I say that with absolute pride. My heroes were Michael Foot, Tony Benn and Arthur Scargill. I view the market system as a tool, not an end. But politics has never come into any discussion that I've been involved with. It has always been about 'user needs' and 'better for less'. It has been apolitical.

Tony Benn once said 'the Labour Party isn't believed any more because people believe it will say anything to get votes'. I'm one of those people and stopped voting New Labour a long time ago because it no longer represented my party. 

I don't believe that this Government has 'no vision for digital Britain' and its changes in Gov IT have 'comprehensively failed' because New Labour tells me so. I see quite the opposite and I'm glad.

Monday, March 10, 2014

Epic Fails of Sensible Executives.

I often see the same mistakes being repeated by companies when a core component of their value chain is under attack. I've come to understand that the cause of this is more than simple inertia (given the timespans involved, the ease of defence and the predictability of change); instead it is blindness to the game that is occurring.

I thought it would be worth writing down a couple of typical epic fails (there's a long list) that companies undertake. I'll ignore the 'do nothing' scenario as I'm more interested in the poor moves that are made rather than inability to make a move.  I'll start by outlining a scenario.

Scenario

I'd like you to put yourself in the position of a CEO of a company that produces a product. Let us take it as a given that you have inertia to change (due to existing business models and practices) and that you are unaware of how things evolve and co-evolve - i.e. no cheating, you're not even aware that you can map an environment. 

Your product represents an activity which is widespread and well defined in the market and a new entrant (not encumbered by existing models and practices) has started to provide a utility form. Apparently the competitor started work on this idea about 8 years ago, they launched 5 years ago and they represent less than 1.5% of the market today but are rapidly growing. According to financial reports they have doubled in size each year for the last two years. 

The competitor is also building an ecosystem of companies that consume their new utility services and they're extremely aggressively priced. You're noticing some price pressure as companies are adopting these new competitor services.

Your company is currently competing with the competitor in an advanced economy, there are however many emerging economies with technology that is far less mature than your product. You've called together some of your executives to discuss the issue.

1) Your VP of Biz Dev explains that the emerging markets represent future growth opportunities and recommends we extend to include these markets. 

2) Your VP of Product Development explains that some customers are dissatisfied with the cost of our product especially when compared to the new utility services. They recommend we should invest in innovation and creating a functional differentiation between our product and the utility services.

3) Your CFO notes that the competitor is far more aggressively priced and recommends we undertake a cost cutting exercise reducing head count, operational and administration costs. They've calculated that with an aggressive programme we can reduce our selling price to match our competitor whilst maintaining a strong margin.

4) Your VP of Marketing recommends we should differentiate on customer service and customisation as the competitor is operating a volume operations like business but many of our customers have highlighted that it doesn't meet their specific needs.

So which of the above do you pick to examine further?


Analysis

There are a number of critical pieces of information missing from the above but given general economic forces we can pretty much make a stab at some of them. The first, and most important, piece of data we need is the doubling rate of the competitor because this will give you an idea of how long you have to react.

Whilst 1.5% of the market doesn't sound like much, the changes from product to utility often represent a punctuated equilibrium (a period of rapid change) and hence doubling rates of one year are not uncommon. This means that whilst they're only 1.5% of the market today, they'll be around 40-50% of the market in five years' time. This, combined with the evolution to utility, has all sorts of catastrophic effects because our time to react is short.
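The arithmetic behind that claim is simple compounding (assuming annual doubling holds and the overall market stays roughly flat, both simplifications):

share = 1.5  # competitor's market share today, in percent
for year in range(1, 6):
    share *= 2  # assumed annual doubling
    print(f"year {year}: ~{share:g}%")
# year 5 comes out at 48%, i.e. the 40-50% of the market mentioned above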

If we take a year to make the decision to compete, followed by a year to build the team and two years to build an equivalent service (the same three year span the competitor spent building theirs), then when we hit the market we're going to be a start-up in a market dominated by one experienced provider (with a decade+ of experience) who represents 50% of the market. We're going to have collapsing revenue from our product business, which will be heading towards a niche, and inertia to the change due to our past success. On top of this, as these changes from product to utility usually involve co-evolution of practice, we (along with everyone else in the industry) will be scrambling to find new skill sets which will be in high demand. If the doubling rate is annual (which the above scenario suggests) then it's probably too late for us already and we need to be looking at either acquiring the competitor or exiting the industry.

Now, with the above recommendations there are a number of issues which I'll deal with in turn.

Recommendation 1) : The problem with extending into emerging markets is we're not dealing with the fundamental shift from product to utility. Hence, all we're likely to be doing is laying the groundwork for the competitor to chew up the emerging market once they've finished with the more advanced economy. This is not a good move.

Recommendation 2) : The problem with trying to innovate your way out of such a battle is that the creation of the novel and new is highly uncertain by nature and it's far too easy for the competitor to play a tower and moat game i.e. for them to copy any successful differential we create. The effect of the tower and moat play is that whilst they continue to build up and strengthen their 'future' position then our efforts are just enhancing this. When we finally make the plunge into the 'future' market then we're likely to have been delayed because of our efforts to differentiate (not good if the doubling rate is fast) and the competitor will have built a tower of revenue surrounded by a moat devoid of differential opportunity. This is pretty much a disaster.

Recommendation 3) : Whilst cost cutting can be useful, when you use it to attempt to recreate the past then it's likely to cause a death spiral. The past is going; you need to accept this. Unfortunately, unless you understand the competitor's value chain then you're unlikely to know if they have constraints which limit their price reductions (e.g. Amazon EC2 and building data centres) and hence they may have the potential for significant future price reductions. Cost cutting in an attempt to re-establish past models is probably the most epic fail move I see companies undertake.

Recommendation 4) : The danger here is twofold. First, there is existing consumers' inertia to change, which is often represented by a desire to maintain the existing model rather than adapt. The problem is that as their competitors adapt, the pressure on them mounts to adapt and though they tell you they want the past, they often end up buying the future. The second problem is our competitor's use of an ecosystem. If they're using an ILC like model then their rate of apparent innovation, efficiency and customer focus will all increase with the size of their ecosystem. Competing against this with traditional approaches is pretty much doomed to failure.

Summary

There are ways you can outmanoeuvre this new entrant and all sorts of counter plays that can be deployed but 'how to do this' is not the purpose of this post. What I simply want to demonstrate is that strategic play is complex and apparently sensible recommendations (focus on emerging markets, innovation, cost reduction, customers) often turn out to be epic fails, especially when a company has poor situational awareness.

Competition between companies is like playing a game of chess and the first rule of chess is - look at the board.

---- additional notes (for the mapping crowd)

For those of you who like to cheat (like me), here's a mapping view of the environment - see figure 1. In the diagram :-
  • A[1] to A[2] represents the change of the activity from product to utility. Our business in the scenario has established itself around selling A[1] whilst the new entrant has introduced the more industrialised form A[2]. As per normal there is inertia to the change caused by changing practices, business models and capital (knowledge, social etc).
  • The competitor is running an ILC like model around A[2] and is building an ecosystem. Along with efficiency benefits, this will enable them to accurately identify (through consumption data) future successful changes such as C[1] and then industrialise these to additional components (e.g. C[2]).
  • There is an emerging market which is less advanced in terms of provision of the core act, hence we could sell A[1] to the emerging market. However, this won't deal with the issue that A[1] is going to be replaced with the more evolved form A[2] and instead will simply lay the groundwork for the competitor.
  • We could simply try to recreate past profitability around A[1] but again this doesn't deal with the issue that A[1] is going to be replaced with the more evolved form of A[2].
  • We could attempt to 'innovate' by trying to create a high risk and uncertain differential B[1]. However the competitor can simply copy this and aim to provide it in a more industrialised form B[2].
  • The more cunning competitor will be trying to run a tower and moat play i.e. they build a tower of revenue around A[2] and provide all potential differentials (e.g. B[2] and C[2]) for effectively free. The danger of this play is that as their ecosystem grows they exploit both it and our own efforts to bolster their moat. Once we eventually realise that the future is not A[1] or trying to sell A[1] to emerging markets but instead it's about competing around A[2] then our problem becomes that the competitor has a large ecosystem around its core revenue and there is little room left to differentiate.

Figure 1 - Map




Monday, March 03, 2014

What is right and wrong with Christensen's Disruptive Innovation?

Two of my pet dislikes are the phrase 'Cloud Computing is an example of disruptive innovation' and how 'Christensen was wrong on the iPhone'. At the heart of this is everything I like and dislike about disruptive innovation theory.

To explain this, I need to go back to basics. First, as covered many times in this blog - components of systems tend to evolve through supply and demand competition. This can be measured over two axes - one of ubiquity and the other of certainty. The certainty axis was actually derived from a combination of the Stacey matrix and modelling in i-Space (another post, for another day). The upshot of this is that when you look at any activity (or practice or data) the type of publications around it evolves through four basic types (see figure 1).

Figure 1 - Certainty and Publication type


By measuring this change in certainty and by examining the ubiquity of a component, it was possible to derive an evolution curve through distinct phases (see figure 2).

Figure 2 - Evolution
Now, when I normally examine any system I do so over two axes - the value chain (from user need to invisible sub components) versus the state of evolution. Any and all components (activities, practices and data) within a system evolve from one extreme, the uncharted space (the highly uncertain and rare), to the more industrialised.

This process of evolution takes place through the constant appearance and diffusion of maturing instances of the act, practice or data. However, diffusion is measured as adoption over time, and the total applicable market and the timespan may vary with each instance of the activity.

For example, take an activity A with different evolved states - A[1] to A[5] - e.g. maturing versions of telephones. If you examine the diffusion curves of each instance of the act, the total applicable market and the time each instance takes to diffuse from early adopters to laggards can differ between the curves. This is also one of the reasons why evolution cannot be measured over time.
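To illustrate the point with a purely hypothetical sketch, here are two instances of the same activity with different applicable markets and timespans - their diffusion curves don't line up on a shared time axis even though each runs from early adopters to laggards:

import math

def adoption(t, market, start, duration):
    # Simple logistic diffusion: 'market' is the applicable market for this
    # instance; 'start' and 'duration' shift and stretch it in time.
    midpoint = start + duration / 2.0
    return market / (1.0 + math.exp(-(t - midpoint) * 10.0 / duration))

# A[1]: small applicable market, slow diffusion. A[4]: large market, fast.
for t in range(0, 31, 5):
    a1 = adoption(t, market=100, start=0, duration=30)
    a4 = adoption(t, market=10_000, start=10, duration=8)
    print(f"t={t:2d}  A[1]={a1:7.1f}  A[4]={a4:8.1f}")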

To emphasise this, for the same activity (A) through its different evolved states - A[1] to A[5] - then figure 3 provides an example diffusion curves for each instance, figure 4 provides the evolution curve and figure 5 provides a map. NB figures 3 to 7 are purely illustrative and based upon a fictitious example.

Figure 3 - Example diffusion curves A[1] to A[5] (illustrative)


Figure 4 - Evolution curve A[1] to A[5] (illustrative)


Figure 5 - Map A[1] to A[5] (illustrative)


Now, this is where we get to the fun part (sort of). As components evolve their characteristics change from the uncharted (rare, constantly deviating and highly uncertain) to the industrialised (common, predictable, standard). Furthermore, the component itself can represent a range of underlying subsystems bundled together. Hence activity A might represent a range of component sub systems designed to meet a specific user need. For example, a telephone contains many subsystems, from the physical shape of the receiver to the electronics within it. Hence each instance can represent an entire value chain of components (see figure 6).

Figure 6 - Each instance can represent a value chain of components (illustrative)


When we talk of evolution of a product, such as the substitution of A[2] with A[3], then such substitution tends to be based upon sustaining change i.e. an improvement. However, you can also get a change in the associated value network. So for example, let us assume that the shift from A[3] to A[4] is a consequence of a change in the underlying components in order to meet some new need through some new property. An example would be physical size becoming important with disk storage. This type of change is extremely difficult to predict because the change is new i.e. uncertain (see figure 7).

Figure 7 - Change in the Value Network (illustrative)


Combined with inertia caused by pre-existing business models (e.g. around A[3]), such changes can be highly disruptive and difficult to protect against, especially if they first appear in novel markets (where the new property is important) and then develop in that space until the performance characteristics are such that they substitute the traditional market. This is the classic example of Christensen's Disruptive Innovation, whether you're talking about hard drive formats (physical size) or hydraulic vs cable excavators. The reason why it's so difficult to protect against is that the change in value network is unpredictable and hence there often isn't time to deal with inertia and manage a smooth transition for an existing vendor.

The same problems can occur when the activity itself becomes a component of something else i.e. telephony becoming a sub component of smart phones. It's extremely difficult to predict such changes, which is why the outcome of RIM vs Apple was highly unpredictable in the early stages and why the 'Christensen was wrong on the iPhone' claim is somewhat farcical ... it's almost impossible to predict, it could have gone either way.

However, the shift from product to utility (i.e. from A[4] to A[5] in the above diagram) is highly predictable. Even the consequences of this from co-evolution of practice to potential reduction of barriers to entry into other value chains can be determined. Naturally we suffer from inertia but we normally have a considerable amount of time to prepare (in the case of cloud computing we've had since 1966 and Parkhill's challenge of the computer utility) and there are numerous weak signals we can use to identify that the change is upon us.

So, what's right and wrong with Christensen's Disruptive Innovation? 

Well, there's nothing wrong with it at all; it's an excellent piece of work. However, the problem is that we describe the genesis of something (e.g. A[1]), product changes (e.g. A[3] to A[4]) and shifts from product to utility (e.g. A[4] to A[5]) all as 'innovations' when they're not the same.

Product substitution due to an uncertain change in the related value network is highly disruptive because it is incredibly difficult to predict. It doesn't matter whether this is a change in the underlying components or the act becoming a component of something else. Christensen could no more predict the iPhone's success than RIM could and the success of the iPhone was not guaranteed. The only defence against this is a highly adaptable culture.

However, product to utility substitution is a highly predictable consequence of competition and there is no reason for a company to be caught unawares by such a change and disrupted by it. In this case disruption occurs because of poor situational awareness and a poor understanding of the basics of economic change.

So is Christensen's work right? Well, it's certainly a strong hypothesis and well supported by examples. 

But surely Christensen should have been able to predict the iPhone's success? Absolutely not. It's a highly uncertain change in the value network which disrupted many due to inertia. Those companies were disrupted by a rapid and uncertain change combined with an inability to adapt quickly enough - a cultural impact. There's no way that Christensen can predict that the highly uncertain will succeed bar the magical existence of a crystal ball, and the failure to do so does not detract in any way from the core hypothesis of disruptive innovation.

Is cloud computing an example of disruptive innovation? Absolutely not in the classic sense. It's a highly predictable change which has disrupted many who had inertia, due to poor situational awareness. There was no reason for these companies to be disrupted. This has only occurred due to blindness to the predictable (though they had forty years to prepare) and exceptionally poor gameplay. The use of the phrase 'cloud computing is a disruptive innovation' is more synonymous with 'we've been utterly outplayed and we need something else to blame' than with classic examples of disruptive innovation such as the change in hard drive formats (unpredictable), cable vs hydraulic excavators (unpredictable) or RIM vs AAPL (unpredictable). Product to commodity (and utility) is an inevitable consequence of competition in the absence of constraints.

Oh, but what about AAPL vs Android?  Well, in this case we're talking about industrialisation of the OS and building of an ecosystem ... these effects were also fairly predictable even with counter plays (use of supply chains, patents etc). But that's another post for another day.

Understanding Ecosystems - Part I of II

Once you start mapping out environments, you can quickly start to discover common economic patterns, basic rules of competition and repeating forms of gameplay.

A typical basic pattern is how supply and demand competition drives the evolution of one component (whether practice, data or activity) to a more industrialised form which not only improves its efficiency but through the provision of stable interfaces can enable rapid development of novel higher order systems. Those novel higher order systems may also turn out to be new sources of wealth but they are highly uncertain and unpredictable (i.e. uncharted). However those that succeed will evolve and the cycle will repeat.  See figure 1.

Figure 1 - Competition enables new higher order systems


From the above, a component (either an activity, practice or data) evolves from A[1] to A[2] to A[3], for example the evolution of electricity from the Parthian battery (A[1]) to Siemens generators (A[2]) to utility provision by Westinghouse (A[3]). 

As it evolves from the uncharted space (e.g. A[1] where it is rare, uncertain and constantly changing) to a more industrialised form (A[3]) then it becomes more efficient, defined, stable and standardised. This process enables the creation of higher order systems (e.g. B[1], C[1], D[1]) built upon standard interfaces e.g. standard electricity (A[3]) enabled lighting (B[1]), radio (C[1]) and television (D[1]).

Those newly created novel and highly uncertain components (e.g. the uncharted B[1], C[1], D[1]) then start to evolve if they are successfully adopted via the same forces of supply and demand competition (e.g. D[1] to D[2]). The cycle then repeats.

You can trace this effect throughout history (see figure 2) and it is the combination of this pattern with inertia and co-evolution which creates economic cycles both at a macro (k-waves) and micro economic scale. But, we've covered this many times before and that's not the purpose of this post.

Figure 2 - Cycle throughout history.


Now all of this is relatively dull stuff and there are dozens of common patterns you need to understand in order to play even the most basic strategic games.  However, it's worth noting a couple of things even with this simple pattern. 

For example, in figure 3, I've added some more detail on the characteristics of components at each stage of evolution.

Figure 3 - Treating Components


First, with A[2] to A[3] we're talking about the industrialisation of an existing act, which is all about volume operations, efficiency, operational improvements and measurement. Whilst people often talk about the advantage of being a fast follower, in this case there's an additional source of value in being the first to industrialise. The value is derived from others building on top of the utility services you build, and I'll explain why a bit later.

Let us assume you have industrialised some act (e.g. A[2] to A[3]), for example the shift of computing infrastructure from computing as a product (A[2]) to computing as a public utility (A[3]).  This can enable others to build novel but uncertain higher order systems on top of your utility services (e.g. B[1], C[1], D[1]).

Those novel higher order systems might be uncertain but they are potential sources of future worth. Since they're constantly changing (i.e. we're exploring the potential), successful creation is both costly in terms of research and development and unknown in terms of success. In this case you ideally want to be a fast follower and let others incur the cost of R&D. But how do you know what to follow? How can you detect success?

Fortunately, due to competition, successful acts will start to mature through multiple diffusing waves of ever improving products. This pattern is detectable through consumption i.e. if the novel systems are built on top of your utility services then diffusion of ever maturing and hence successful systems (e.g. D[1] to D[2]) can be detected through simple consumption of your underlying subsystem (i.e. consumption of your utility service A[3]).

This provides you with an opportunity.

If you commoditise an act (A[2] to A[3]) to a more industrialised form which enables others to innovate (B[1], C[1], D[1]), then you can leverage the consumption of your underlying component (A[3]) by others to detect successful changes (e.g. D[1] to D[2]). You can then commoditise any identified successful component (e.g. D[2]) to a more industrialised form in order to repeat the process. Hence, by being a first mover to commoditise (A[2] to A[3]) and by exploiting consumption information, you are constantly in a position to be a fast follower (D[1] to D[2]) to any successful change without incurring the heavy R&D risk, because everyone else is innovating for you.
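A toy sketch of that leverage step (the consuming systems, figures and growth threshold below are all invented for illustration): watch consumption of your utility A[3] broken down by what is being built on top of it, and flag sustained growth as a candidate for commoditisation.

# Hypothetical monthly consumption of utility A[3], broken down by the
# higher order systems consuming it (B[1], C[1], D[1] from the text).
consumption = {
    "B[1]": [100, 110, 105, 115, 120],
    "C[1]": [80, 70, 75, 65, 60],
    "D[1]": [50, 90, 160, 290, 520],
}

def growth_rate(series):
    # Average month-on-month growth; crude, but enough to spot D[1].
    rates = [b / a - 1.0 for a, b in zip(series, series[1:])]
    return sum(rates) / len(rates)

candidates = {name: round(growth_rate(series), 2)
              for name, series in consumption.items()}
print(candidates)
to_commoditise = [n for n, g in candidates.items() if g > 0.5]
print("candidates to industrialise next:", to_commoditise)  # ['D[1]']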

This is a model known as Innovate - Leverage - Commoditise (ILC) and it's fairly old hat having first been applied pre-2005. There are many critical factors to the model including :-

1) Speed of information. Whilst the model can be applied in the product space (e.g. A[2]), the problem is the speed at which you can gain consumption information is limited by market surveys. Utility models (e.g. A[3]) are more apt because you gain direct consumption information.

2) Size of ecosystem. Your ability to innovate, deliver what customers want and efficiency all depend upon the size of the consuming ecosystem for your underlying components (e.g. consumption of A[3]). 

This ecosystem consists not only of your own employees but also any consuming company. The efficiency of provision of A[3] depends upon economies of scale i.e. how big your consuming ecosystem is.  Your apparent rate of innovation (since you're not doing the innovation just fast following others) depends upon the number of companies innovating on top of your component (e.g. B[1], C[1], D[1]).  Your ability to deliver what customers want (i.e. spot successful new things) depends upon your ability to leverage the ecosystem to spot success (e.g. D[1] to D[2]). In a well run model then your apparent rate of innovation, customer focus and efficiency should all increase with the size of the consuming ecosystem. 

I've provided an example of the above (figure 3) in figure 4, using a circle model where the centre is your core component services (your platform) surrounded by an ecosystem of consuming companies. Such circle models are woefully inadequate for strategic play but they act as a useful visual reminder that effective play involves exploiting others.

Figure 4 - Ecosystem Size


3) Relevance of component. When commoditising a component, the potential size of the consuming ecosystem depends upon how relevant that component is in other value chains. Hence it's advisable to focus on components that are widely used e.g. computing infrastructure rather than highly specialised to an industry.

4) Speed of action. There's little point in using an ILC model if you don't exploit it to create new components and grow the ecosystem. Obviously, each time you do (whether through copying or acquisition) then you'll get accused of eating the ecosystem but the counter to this is you provide an increasing number of component services (i.e. a platform) which makes the environment more attractive to others. This harvesting of the ecosystem does need careful management.

5) Efficiency in provision. When you commoditise a component to a more industrialised form (e.g. A[2] to A[3]) then your ability to encourage others to build on top of it depends upon how much you reduce their risk of failure and increase their speed of development. Hence efficiency and standardisation of interface is very important in this process.

Now, when correctly played, you can build a constantly expanding platform of highly industrialised component services in which your rate of innovation, customer focus and efficiency is proportional to your ecosystem size and not your physical company size. Furthermore, as you repeat the model the attractiveness of your platform increases to others. Also, by being a first mover to commoditise an act to a more industrialised form you actually gain highly stable, highly predictable volume based revenue. Finally, exploiting consumption information (e.g. use of A[3]) to always be the fast follower to the novel but uncertain sources of future worth (e.g. B[1], C[1], D[1]) enables you to maximise your future opportunity by only selecting success (e.g. D[1] to D[2]).

Simultaneously increasing your rate of apparent innovation, attractiveness to others, customer focus, efficiency, stability of revenue and maximising future opportunity are a powerful set of forces. Using an ILC type model is a no brainer ... except ... unless you map out your environment (i.e. have good situational awareness) and understand the rules of the game then you just won't know where to start other than sticking your finger in the air and saying 'this looks like a good one' or doing what most people do and copying others (i.e. '67% of companies do cloud, big data and social media' and hence 'so must we!').

You're just as likely to undermine a barrier to entry into your own business and encourage attack by others as you are to successfully build an ILC model. The first rule of playing chess is always - 'Look at the board' - which is why building a map (a snapshot at a moment in time of the situation you find yourself in) is not only about effective management (see figure 5) and scenario planning but should always be a first step before you embark on any form of strategic play.

Figure 5 - An example Map



In the second post, I'm going to use the above models to explore 'outside-in' approaches and the fundamental problems with circle models for representing change (such as that provided in figure 4).

Before anyone shouts what about 'two factor markets', 'supplier ecosystems' etc - this post is about one aspect of ecosystems and not the entire field. Before anyone else shouts 'this is complex' - well if strategic gameplay was easy then it wouldn't be fun.