Tuesday, March 18, 2014

Daisy Chains and HR

One of the most interesting aspects of the recent Satoshi Nakamoto saga is the question of how an experienced computer engineer near Silicon Valley could fail to find gainful employment in the industry for a decade, despite a massive shortfall in skills.

It's interesting to me because a friend of mine was recently made unemployed. They had twenty years of coding experience and proficiency in many modern languages, but great difficulty in finding a job. Fortunately that has now been resolved through personal contacts.

My friend's problems in finding employment were threefold.

1) Age
First, they were over 40 years of age and unfortunately our industry suffers from a 'culture of youth'. I can sort of understand this in industries that are not mathematically grounded, where a pre-occupation with data doesn't exist, but in computer engineering, where the notion that the young are more innovative is based upon extremely flawed concepts, I'm surprised it persists.

Alas, in my experience the HR departments of companies (with obvious exceptions such as the amazing work being done by Google's People Analytics) are not often populated with mathematically questioning folk but instead rely on 'softer' skills. If they were, then we probably would not have dubious schemes like Myers-Briggs (MBTI) still in use despite being shown to be practically worthless in the 1970s due to Barnum responses (aka the Forer effect) and being rejected by the US Army. However, this is not the case; these myths persist and age is an issue, but not the biggest one.

2) Loyalty
The second issue is a tendency to stick with a job for a long time. I'll explain why in the next section.

3) Not having a job
Not having a job is usually the biggest problem in getting a job, particularly in industries where recruitment consultants are rife. To explain why, let's take two candidates for a job :-

1) Candidate Sarah has 20 years of directly relevant experience, a tendency to stick with a company for a long period of time and has been unemployed for six months. They are happy to accept $70K per year.

2) Candidate Sue has 10 years of mostly relevant experience, a tendency to leave a job after a few years and is currently employed. They are after $120K per year.

Which candidate would you put forward as a recruitment consultant? Well, I'm sorry to say that Sue is the more financially attractive for multiple reasons, of which salary is the least important. The real reason why Sue is more attractive is that if they take the job then they leave behind a new vacancy to be filled, i.e. their current role.

Many recruitment consultants I've known make their living from creating daisy chains i.e. they tout Sue to the employer whilst at the same time quietly preparing another candidate (e.g. Bob, preferably employed) for Sue's job and finding another candidate (e.g. Alice, preferably employed) for Bob's job. Think of it like a house chain but with people. 

Alice -> Bob -> Sue -> New Role.

Daisy chains create huge benefits. Firstly, there's the financial benefit: you take 10% of annual salary for everyone jumping role in the chain.
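
As a rough illustration (with invented salaries and a typical 10% fee), the commission compounds along the chain:

    # Rough illustration with invented salaries: a recruiter taking 10%
    # of annual salary for each placement along a three person chain.
    chain = [
        ("Alice", 60_000),   # moves into Bob's old job
        ("Bob", 80_000),     # moves into Sue's old job
        ("Sue", 120_000),    # moves into the new role
    ]
    fee_rate = 0.10

    total_fee = sum(salary * fee_rate for _, salary in chain)
    print(f"total commission: ${total_fee:,.0f}")  # $26,000 from one vacancy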

Second, it can help strengthen your relationship with the employer / client. Take for example Sue. The employer might not know Sue is looking but you do. You can prepare the 'perfect' candidate Bob and turn up at the right time when the employer discovers that Sue is leaving and before any job has been advertised. A quick, 'I know someone who is perfect for this' and you can probably get an interview done in a few days.

You then repeat the trick with the employer of Bob and all down the chain. Each time, with each employer (aka client), it helps strengthen your position as the go-to person and you can help reinforce this - 'remember the time when Sue left, I found you Bob in less than a day' etc.

From talking to people in the industry, my understanding is that daisy chains are common practice. In some cases, these chains can be ten people long and the ideal candidates are those who tend not to be loyal for long periods of time, i.e. repeating chains is also a fairly common practice.

If you have a chain such as 

Dave (junior) -> Alice (senior) -> Bob (team lead) -> Sue (manager) -> New Role (director)

and you shift everyone, pocketing 10% of annual salaries, you end up with 

Dave (senior) - Alice (team lead) - Bob (manager) - Sue (director)

All you have to do is wait a couple of years, assuming you've managed to populate your chain with people who don't tend to stick around, and then you can repeat it by finding Sue an even more senior job.

Dave (senior) -> Alice (team lead) -> Bob (manager) -> Sue (director) -> New Role (VP)

Leaving

Dave (team lead) - Alice (manager) - Bob (director) - Sue (VP)

Loadsa money ... daisy chaining and managing such chains is where the action is at. 

Until this is somehow resolved, the golden rules of getting a job in IT tend to be a) don't grow old b) don't stick with an employer for a long time and most importantly c) already have a job in IT. In terms of progression, make sure you get yourself on a good daisy chain and so be friendly to your recruitment consultant, however unpalatable that seems.

Saturday, March 15, 2014

On mapping and the evolution axis

For a long time, I've been using maps of industries, businesses and systems to determine gameplay, management, learning of economic forces and how to manipulate markets. The map has two axes - one of value chain (which represents a recursive set of needs from user needs to supplier needs) and the other of evolution. See figure 1.

Figure 1 - a map of HS2


The first maps I produced were in 2005 and at that time, whilst I suspected and had examples of a pattern for evolution (from the genesis of an act to commodity provision), I actually had no way of describing why it occurred. At that time, I was familiar with concepts such as Everett Rogers' diffusion curves (adoption over time) but they provided no consistent pattern for change - the diffusion curve of one instance of an activity is not the same as the diffusion curve of another if measured on identical axes. Furthermore, an act didn't evolve in a continuous path but instead involved many diffusing instances of more evolved forms e.g. take an activity A, then we would see the diffusion of A[1] lead to the diffusion of A[2] lead to the diffusion of A[3], with each instance being more evolved than the prior. See figure 2.

Figure 2 - Different diffusion curves for maturing instances of the same activity


The solution to the problem occurred during a chance set of conversations in which I noticed that whilst people could agree on whether something was a commodity, when it came to products and to something novel (i.e. the genesis of an act) then disagreements abounded. This led me back to the Stacey Matrix (see figure 3, this version is a simplified diagram created by Brenda Zimmerman).

Figure 3 - The Stacey Matrix

The Stacey Matrix is useful in discussing how groups agree and what piqued my interest was the use of a certainty axis. This coincided with something Everett Rogers had said: that activities evolve through multiple waves of ever improving and more mature examples. A key part of evolution seemed to have something to do with certainty.

Hence in 2006 and 2007, I spent a great deal of time trying to determine a measure of certainty for an act. It was by looking in detail at journal publications and papers on activities that I noted how they changed with time. In examining a core set of activities and 9,221 related articles, I was able to categorise the articles into four main types - see figure 4.

Figure 4 - type of articles


I then developed a measure of certainty that used the volume of articles of type II and type III produced relative to the total volume of articles when that activity is commonly described as a commodity.  I used that reference point (when the activity was commonly described as a commodity) to examine how ubiquitous the act was and then compared past market adoption and publications against this measure. The result was the common pattern shown in figure 5.
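
As a minimal sketch of that measure (the categorisation technique itself is unpublished, so the article counts below are entirely invented):

    # Certainty measured as the volume of type II and type III articles
    # relative to the total volume of articles at the reference point
    # where the activity is commonly described as a commodity.
    def certainty(type2, type3, commodity_total):
        return (type2 + type3) / commodity_total

    commodity_total = 400  # hypothetical total articles at the reference point
    snapshots = [          # hypothetical counts for one activity over time
        (1995, 3, 0),
        (2000, 40, 12),
        (2005, 180, 95),
    ]

    for year, type2, type3 in snapshots:
        print(year, round(certainty(type2, type3, commodity_total), 2))
    # 1995 0.01 -> 2000 0.13 -> 2005 0.69 : certainty rising towards commodity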

Figure 5 - Ubiquity vs Certainty 


Now, when examining each type of publication used to construct the graph, it became clear that each type of publication was related to a stage of evolution. See figure 6.

Figure 6 - Types of publication and relationship to evolutionary stage


By overlaying the types onto the ubiquity and certainty curve and extrapolating the ends (i.e. when something is novel we have very little information on it), I was finally able to produce in 2007 the evolution curve (see figure 7 below) which demonstrated the pattern I had used in mapping (see figure 1 at the top). It was shortly after this that I was able to demonstrate that the reason the pattern occurred was simply the interplay of supply and demand competition.

Figure 7 - the evolution curve.

I then gave a series of talks on this in late 2007 and early 2008, using the curve to explain the impacts of cloud and 3D printing and to highlight some other common economic patterns e.g. how organisations evolve, how you can exploit ecosystems to manage the future etc. However, as backward as this process sounds, it's actually quite normal for a pattern to be 'noticed' and found useful well before any evidence or model demonstrates whether that pattern might exist or is entirely false. In my case mapping was based upon a pattern I noticed (pre 2004), that had proved itself useful (post 2005) and this was well before I had any solid model behind it (2007).

It doesn't mean the pattern is right, just that I've yet to find a better one. I'm sure I'll find some examples which break the model at some point in time - though after 7 years, I'm still looking.

Added 27th July 2014

I was asked recently what is unique about the evolution axis. Well, since I discovered and demonstrated the pattern, I'm likely to be biased. However, that said :-

1) There is data behind it (thousands of data points) and examples of the pattern are commonplace. However, I haven't published the details of the technique for categorising publications because I'm still using it (for weak signals). Which means, YOU should not trust it. As far as you should be concerned, this evolution curve is only opinion. If you find the pattern useful then use it until something better comes along. The curve has been published in both peer reviewed academic papers and books as a "useful" model.

2) There is consistency in both axes i.e. we're not talking about adoption curves where the time axis can vary according to the activity in question.

3) There is causation and not just correlation i.e. what drives the patterns is competition (both supply and demand)

4) It is weakly predictable and I have run predictability tests on this (in particular with the formation of new organisations). Predictability is the single biggest issue with the evolution curve. In order to create it I had to remove time and of course, by removing time, the curve is no longer predictable over it. This is the one bit which makes me feel very uncomfortable with the curve and hence reluctant to publish a book on the subject. Though I find it useful, this is at best a weak hypothesis and I'm still looking for a better way to test / falsify it.

5) It is useful or at least it appears to be highly useful as part of the overall map (see figure 1).

Added 2nd Sept 2014

Though I've covered this several times over many years, it's worth reiterating that evolution doesn't just apply to activities but also to practices, data and knowledge. Each of these classes of things evolves through the same mechanism (driven by supply and demand competition) with changing properties as noted in figure 6 above. To make it extra clear, I've spelt out those classes in figure 8 and the relationship with the evolution curve (see figure 9).

Figure 8 - Different Classes (activities, practice, data and knowledge) evolve through the same property changes.


Figure 9 - each of those classes, evolve through the same mechanism.


This is why, when we map an environment, we can also map practices, data and types of scientific knowledge along with activities. However, as a guide, on the bottom axis of a map I tend to describe activities (genesis, custom built, product and commodity) simply because I don't find Type I to IV as meaningful.

How to fix bitcoin

There are many things to be admired about bitcoin - its simplicity, the convenience, the ability to create platforms for new forms of transactions, its flexibility and the public nature of the block chain.

However, I'm not a fan of it in its current form and I've written about the negative impacts of bitcoin if left unchecked (i.e. no strong Government intervention) because there is one unfortunate side effect - its impact on taxation. The problem is that bitcoin addresses can be used to obfuscate ownership. After a $100 million heist last year, and despite the efforts of the community, the thief has yet to be found. Regardless of the public nature of the block chain, identity can be obfuscated effectively.

This creates a problem because bitcoin represents a cash based society where who owns the cash is difficult to determine and the cash can disappear across legislative borders almost instantaneously. It's like a 'cash in hand' environment but without the risk of being caught with a pile of cash in your safe at home or of being caught handing over a wad of notes in the middle of a transaction. In practice at scale, it is relatively easy to transact in bitcoins in a way that obfuscates your involvement such that it becomes impracticable for anyone to trace, even though the block chain is public. Taxation on goods and income in such an environment becomes 'voluntary' and competition creates pressure for the avoidance of taxation, especially given the ease with which this can be done. The net result of bitcoin's growth will be a reduction in taxation on income and goods with an increased reliance on land and citizenship taxes. But not everyone makes a good living and how can you have a welfare state when everyone can claim income poverty regardless of whether they have millions in value in bitcoin?

Bitcoin in its current form will lead to the future dismantling of all such state apparatus.

Now, whilst some might welcome the reduction in the state caused by a loss of taxation, the state is THE key economic driver of innovation, prosperity and social mobility. The laissez faire economic system is an extreme mindset of some of the more ardent supporters of Friedmanism and the Chicago School. There is no basis for assuming a beneficial society can be created without the state and in all likelihood it'll lead to a future consolidation of wealth, extremely low levels of social mobility and weak economic performance compared to countries that use a more mixed method. From a competition viewpoint this is not a good position.

So how do we fix bitcoin? Well, the state could introduce its own state backed crypto currency but an alternative solution is to simply ensure that ownership of addresses is public and state verified. This can be done with bitcoin as it stands through draconian legislation requiring all addresses owned by its citizens to be a matter of public record. In China, this is already happening and traders are required to register their addresses.

Yes, this means high levels of transparency on what you spend and buy. You might accuse the state of surveillance but by making the addresses a matter of public record then everyone can use the public block chain to see what others are buying. The advantage of such transparency is that taxation (in terms of income, goods bought and wealth) becomes more feasible. We can have the benefits of transparency with a flexible, convenient and platform based currency along with a strong state, taxation and welfare.
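
As a minimal sketch of the idea (the register, addresses and amounts below are all invented for illustration):

    # A hypothetical public register mapping addresses to state verified
    # owners. Given the public block chain, income per citizen can then
    # be derived from incoming transactions; unregistered addresses are
    # simply flagged.
    register = {
        "1AliceAddr": "Alice",
        "1BobAddr": "Bob",
    }

    # (receiving address, amount) pairs taken from the public block chain
    transactions = [
        ("1AliceAddr", 5.0),
        ("1BobAddr", 2.5),
        ("1UnknownAddr", 1.0),
    ]

    income, unregistered = {}, []
    for addr, amount in transactions:
        owner = register.get(addr)
        if owner is None:
            unregistered.append((addr, amount))  # illegal under such a law
        else:
            income[owner] = income.get(owner, 0.0) + amount

    print(income)        # {'Alice': 5.0, 'Bob': 2.5} - taxable flows
    print(unregistered)  # [('1UnknownAddr', 1.0)]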

The downside - a loss of privacy. 

However, given a choice between :-

Option 1) privacy, flexibility, convenience and a platform for further innovation VERSUS a lack of transparency on ownership, lack of taxation, a weak state, poor social mobility and poor economic performance.

Option 2) flexibility, convenience, platform for further innovation, transparency, taxation, strong state, better social mobility, better economic performance VERSUS a lack of financial privacy.

Then I go with Option 2) every time. I take the view that a loss of financial privacy through greater transparency is a small sacrifice compared to the potential benefits gained. 

Hence, I'm not a fan of bitcoin in its current form, left unchecked I consider it a form of economic weapon with likely severe consequences for the state. But even such a system can be manipulated to public benefit and I am in favour of crypto currencies in general.  The more disastrous impacts of bitcoin can be mitigated by creating a public register of addresses with state verified ownership and requiring by law all transactions to use such addresses.

It's time that this Pandora's box was fixed to benefit the state. I'd strongly recommend that all Western Governments follow the example set by China and start to bring bitcoin and other crypto currencies under a measure of control through the use of transparency - in this case, through public registers of addresses.

It is better to do this now than to wait until bitcoin and other crypto currencies spread further and the introduction of such measures becomes politically unfeasible. At this moment in time - though there will be lobbying against such transparency - the introduction of a register is an option. With such measures in place, we at least have the opportunity in the future to apply transaction taxes to a citizen's use of the currency. There are other issues with bitcoin but the worst effects can be mitigated if Governments act now.

Of course, such a choice would lead us along a path of radical transparency should crypto currencies succeed and dominate. All financial transactions would be public record. Hiding wealth and avoiding taxation would become difficult at best. With a few clicks I would be able to discover the inner workings of most companies, who they paid, who they were paid by - from lobbyists to the normal course of business.  There's probably a lot that people don't want to share but the choices are bleak - either a laissez faire system and all that it creates or a level of radical transparency that we've never experienced before.  I'm in favour of the latter.

Friday, March 14, 2014

On Government IT

There is a tendency for people to grasp at one-size-fits-all management methods for any problem, whether it's agile, six sigma, lean or ITIL. The problem of course is that any large scale system (whether an IT system or a line of business or even an industry) contains many different components (activities, practices and data) which are at different stages of evolution. The method you need to use depends upon the stage of evolution because the characteristics of a component change as it evolves.

Hence, for example, the genesis of something is highly uncertain and constantly changing, and needs an agile approach due to its uncharted nature (it's a voyage of discovery). However, the provision of a commodity is all about volume operations, efficiency, standardisation and removing deviation in what is essentially industrialised. There is no one-size-fits-all management method for a complex system; you need to use a mix of methods, as sketched below.
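
As a minimal sketch of the idea, with hypothetical components (the stage-to-method mapping is a simplification of the argument, not a prescription):

    # Choosing a management method by the evolutionary stage of each
    # component rather than imposing one method on the whole system.
    METHOD_BY_STAGE = {
        "genesis": "agile",        # uncharted: uncertain, constantly changing
        "custom built": "agile / lean",
        "product": "lean",
        "commodity": "six sigma",  # industrialised: volume ops, remove deviation
    }

    components = [                 # hypothetical components of one system
        ("recommendation engine", "genesis"),
        ("payment processing", "product"),
        ("user registration", "commodity"),
    ]

    for name, stage in components:
        print(f"{name}: {METHOD_BY_STAGE[stage]}")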

But how do you decide what to use? Well, to do this you need to map out the environment. In order to map out the environment you need to first create a value chain starting with visible user needs (i.e. not what you want to create but what users actually need). In fact, the value chain is simply a recursive expression of needs. At the top are users and their needs with high level components that meet this. Underneath this are the subcomponents which meet the needs of higher level components and so on.

With a map you can then determine how something should be treated i.e. which methods to use. Maps also turn out to be essential for strategic play, risk management, cost mitigation, organisational learning and a whole bunch of other stuff but then that's hardly surprising. If you're playing a game of chess, your play will only be improved if you actually look at the board.

In figure 1 is a map of a large complex project with user needs clearly marked at the top. Unfortunately, due to the nature of the project, I can't give you more details than are provided (i.e. what the components are).

In figure 2, the same map is broken into how you treat components - the use of agile on one side, the use of highly structured methods on the other. Remember that the map is constantly evolving from left to right due to supply and demand competition and hence how you treat something will change over time.

In figure 3, the same map is broken into contracts by logically grouping components together. This is actually a useful technique for purchasing, mitigating cost overruns, ensuring the right methods are used and for organisation (e.g. two pizza models, FIST etc).

Figure 1 - Map with User Needs.



Figure 2 - Map with Methods



Figure 3 - Map with Grouping



So why do I mention this? 

I occasionally hear people spout that UK Gov IT is somehow political, too focused on cutting costs and too focused on agile. Well, the above is a map from a very large Government project. I have several of these, some of which are being very actively used, from management methods to strategic play. To say UK Gov IT is all about agile and cost cutting is highly naive. It's normally the sort of thing I hear from large vendors who are annoyed that they can no longer just chow down on UK Gov as a soft target.

What I see in UK Gov IT is the emergence of highly advanced techniques using multiple management methods, high levels of strategic play and a focus on user needs. Of course, it's not uniform. Some are well down this journey whilst others haven't started. This is all normal for an organisation undergoing significant changes. However, have no doubt that this change is occurring. 

Hence when I read ...

“The Government has no vision for digital Britain – the report that Labour delivered in the last year of our Government, Digital Britain, has yet to be superseded.  Four years on the opportunities are different and we are not even beginning to reap the positive benefits of the way in which technology can change our public services."

“Rather than addressing these challenges ad hoc and reactively, we need a framework for the relationship between the people and their data, government and digital."

“Which is why I am pleased to announce today that Labour will be acting where this Government has so comprehensively failed, delivering a new version of our Digital Britain report to be published before the next election.” 

My only comment is ... what a complete load of tosh. It's almost as daft as the statement I heard recently that to solve IT failures you need more specifications. It smacks of extremely poor situational awareness and understanding of the problems at hand.

On politics, well, my involvement in writing the 'Better for Less' paper is well known. What is less well known is my strident political views. I'm 'old' Labour and I say that with absolute pride. My heroes were Michael Foot, Tony Benn and Arthur Scargill. I view the market system as a tool, not an end. But politics never comes into any discussion that I've been involved with. It has always been about 'user needs' and 'better for less'. It has been apolitical.

Tony Benn once said 'the Labour Party isn't believed any more because people believe it will say anything to get votes'. I'm one of those people and stopped voting New Labour a long time ago because it no longer represented my party. 

I don't believe that this Government has 'no vision for digital Britain' or that its changes in Gov IT have 'comprehensively failed' just because New Labour tells me so. I see quite the opposite and I'm glad.

Monday, March 10, 2014

Epic Fails of Sensible Executives.

I often see the same mistakes being repeated by companies when a core component of their value chain is under attack. I've come to understand that the cause of this is not simple inertia (given the timespans involved, the ease of defence and the predictability of change) but instead blindness to the game that is occurring.

I thought it would be worth writing down a couple of typical epic fails (there's a long list) that companies undertake. I'll ignore the 'do nothing' scenario as I'm more interested in the poor moves that are made rather than inability to make a move.  I'll start by outlining a scenario.

Scenario

I'd like you to put yourself in the position of a CEO of a company that produces a product. Let us take it as a given that you have inertia to change (due to existing business models and practices) and that you are unaware of how things evolve and co-evolve - i.e. no cheating, you're not even aware that you can map an environment. 

Your product represents an activity which is widespread and well defined in the market and a new entrant (not encumbered by existing models and practices) has started to provide a utility form. Apparently the competitor started work on this idea about 8 years ago, they launched 5 years ago and they represent less than 1.5% of the market today but are rapidly growing. According to financial reports they have doubled in size each year for the last two years. 

The competitor is also building an ecosystem of companies that consume their new utility services and they're extremely aggressively priced. You're noticing some price pressure as companies are adopting these new competitor services.

Your company is currently competing with the competitor in an advanced economy; there are, however, many emerging economies with technology that is far less mature than your product. You've called together some of your executives to discuss the issue.

Recommendation 1) Your VP of Biz Dev explains that the emerging markets represent future growth opportunities and recommends we extend to include these markets. 

Recommendation 2) Your VP of Product Development explains that some customers are dissatisfied with the cost of our product, especially when compared to the new utility services. They recommend we invest in innovation, creating a functional differentiation between our product and the utility services.

Recommendation 3) Your CFO notes that the competitor is far more aggressively priced and recommends we undertake a cost cutting exercise reducing head count, operational and administration costs. They've calculated that with an aggressive programme we can reduce our selling price to match our competitor whilst maintaining a strong margin.

Recommendation 4) Your VP of marketing recommends we should differentiate on customer service and customisation as the competitor is operating a volume operations like business but many of our customers have highlighted that it doesn't meet their specific needs.

So which of the above do you pick to examine further?


Analysis

There are a number of critical pieces of information missing from the above but given general economic forces we can pretty much make a stab at some of them. The first, and most important, piece of data we need is the doubling rate of the competitor because this will give you an idea of how long you have to react.

Whilst 1.5% of the market doesn't sound like much, the change from product to utility often represents a punctuated equilibrium (a period of rapid change) and hence doubling rates of one year are not uncommon. This means that whilst they're only 1.5% of the market today, they'll be around 40-50% of the market in five years' time. This, combined with the evolution to utility, has all sorts of catastrophic effects because our time to react is short.
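
The arithmetic behind that claim is worth making explicit:

    # 1.5% market share doubling annually for five years.
    share = 0.015
    for year in range(1, 6):
        share *= 2
        print(f"year {year}: {share:.1%}")
    # year 5: 48.0% - roughly the 40-50% stated above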

If we take a year to make a decision to compete followed by a year to build the team and two years to build an equivalent service (the same three year time span the competitor spent building theirs) then when we hit the market, we're going to be a start-up in a market dominated by one experienced provider (a decade+ of experience) who represents 50% of the market. We're going to have collapsing revenue from our product business, which will be heading towards a niche, and inertia to the change due to our past success. On top of this, as these changes from product to utility usually involve co-evolution of practice, we (along with everyone else in the industry) will be scrambling to find new skill sets which will be in high demand. If the doubling rate is annual (which the above scenario suggests) then it's probably too late for us already and we need to be looking at either acquiring the competitor or exiting the industry.

Now, with the above recommendations there are a number of issues which I'll deal with in turn.

Recommendation 1) Epic Fail : The problem with extending into emerging markets is that we're not dealing with the fundamental shift from product to utility. Hence, all we're likely to be doing is laying the groundwork for the competitor to chew up the emerging market once they've finished with the more advanced economy. This is not a good move.

Recommendation 2) Epic Fail : The problem with trying to innovate your way out of such a battle is that the creation of the novel and new is highly uncertain by nature and it's far too easy for the competitor to play a tower and moat game i.e. to copy any successful differential we create. The effect of the tower and moat play is that whilst they continue to build up and strengthen their 'future' position, our efforts just enhance it. When we finally make the plunge into the 'future' market we're likely to have been delayed by our efforts to differentiate (not good if the doubling rate is fast) and the competitor will have built a tower of revenue surrounded by a moat devoid of differential opportunity. This is pretty much a disaster.

Recommendation 3) Epic Fail : Whilst cost cutting can be useful, when you use it to attempt to recreate the past it's likely to cause a death spiral. The past is going; you need to accept this. Unfortunately, unless you understand the competitor's value chain you're unlikely to know if they have constraints which limit their price reductions (e.g. Amazon EC2 and building data centres) and hence they may have the potential for significant future price reductions. Cost cutting for the reason of attempting to re-establish past models is probably the most epic fail move I see companies undertake.

Recommendation 4) Epic Fail : The danger here is twofold. First there is the existing consumers' inertia to change, which is often represented by a desire to maintain the existing model rather than to adapt. The problem is that as their competitors adapt, the pressure on them to adapt mounts and though they tell you they want the past, they often end up buying the future. The second problem is our competitor's use of an ecosystem. If they're using an ILC like model then their rate of apparent innovation, efficiency and customer focus will all increase with the size of their ecosystem. Competing against this with traditional approaches is pretty much doomed to failure.

Summary

There are ways you can outmanoeuvre this new entrant and all sorts of counter plays that can be deployed but 'how to do this' is not the purpose of this post. What I simply want to demonstrate is that strategic play is complex and apparently sensible recommendations (focus on emerging markets, innovation, cost reduction, customers) often turn out to be epic fails, especially when a company has poor situational awareness.

Competition between companies is like playing a game of chess and the first rule of chess is - look at the board.

---- additional notes (for the mapping crowd)

For those of you who like to cheat (like me), here's a mapping view of the environment - see figure 1. In the diagram :-
  • A[1] to A[2] represents the change of the activity from product to utility. Our business in the scenario has established itself around selling A[1] whilst the new entrant has introduced the more industrialised form A[2]. As per normal, there is inertia to the change caused by changing practices, business models and capital (knowledge, social etc).
  • The competitor is running an ILC like model around A[2] and is building an ecosystem. Along with efficiency benefits, this will enable them to accurately identify (through consumption data) future successful changes such as C[1] and then industrialise these to additional components (e.g. C[2]).
  • There is an emerging market which is less advanced in terms of the provision of the core act, hence we could sell A[1] to the emerging market. However, this won't deal with the issue that A[1] is going to be replaced by the more evolved form A[2] and instead will simply lay the groundwork for the competitor.
  • We could simply try to recreate past profitability around A[1] but again this doesn't deal with the issue that A[1] is going to be replaced by the more evolved form A[2].
  • We could attempt to 'innovate' by trying to create a high risk and uncertain differential B[1] or by acquiring a company that provides this. However, the competitor can simply copy us and aim to provide it in a more industrialised form B[2]. This is particularly dangerous if part of a tower and moat play.
  • A cunning competitor will be trying to run a tower and moat play i.e. they will build a tower of revenue around A[2] and build a moat devoid of any and all potential differentials (e.g. B[2] and C[2]) around it. Every time we try to 'innovate' (e.g. B[1]), whether through acquisition or our own efforts, the competitor will industrialise the act and provide it for free. The danger to us of this play is that as their ecosystem grows they exploit both it and our own efforts to bolster their moat. Once we eventually realise that the future is not A[1] or trying to sell A[1] to emerging markets but instead is about competing around A[2], our problem becomes that the competitor has a large ecosystem around its core revenue and there is little to no room left to differentiate. It's basically game over.

Figure 1 - Map




Monday, March 03, 2014

What is right and wrong with Christensen's Disruptive Innovation?

tl;dr: There's nothing wrong with the principle but there are two forms of 'disruptive innovation'. One is highly unpredictable and difficult to defend against. The other is highly predictable, can be defended against and therefore shouldn't disrupt.

Two of my pet dislikes are the phrase 'Cloud Computing is an example of disruptive innovation' and how 'Christensen was wrong on the iPhone'. At the heart of this is everything I like and dislike about disruptive innovation theory.

To explain this, I need to go back to basics. First, as covered many times in this blog, components of systems tend to evolve through supply and demand competition. This can be measured over two axes - one of ubiquity and the other of certainty. The certainty axis was actually derived from a combination of the Stacey Matrix and modelling in i-Space (another post, for another day). The upshot of this is that when you look at any activity (or practice or data) then the type of publications around it evolves through four basic types (see figure 1).

Figure 1 - Certainty and Publication type


By measuring this change in certainty and by examining the ubiquity of a component, it was possible to derive an evolution curve through distinct phases (see figure 2).

Figure 2 - Evolution

Now, when I normally examine any system, I do so over two axes - the value chain (from user need to invisible subcomponents) versus the state of evolution. Any and all components (activities, practices and data) within a system evolve from one extreme, the uncharted space (the highly uncertain and rare), to the more industrialised.

Q. Can't we use diffusion here - from early adopters to laggards? 

A. The process of evolution takes place through the constant appearance and diffusion of maturing instances of the act, practice or data due to competition.  However, diffusion is measured over adoption vs time and both the total applicable market and the timespan may vary with each instance of the activity.

For example, take an activity A with different evolved states - A[1] to A[5] - e.g. maturing versions of telephones. If you examine the diffusion curves of each instance of the act, then the total applicable market and the time each instance takes to diffuse from early adopters to laggards can differ between the diffusion curves.

Because of this variability you can't effectively map over diffusion and, in any case, why would you want to? Diffusion tells you how things spread; evolution tells you how things change, which is what we really want.
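
To make the point concrete, here's a minimal sketch (with invented market sizes and timescales) of why adoption-over-time curves can't serve as a consistent axis:

    import math

    # Logistic adoption curves for maturing instances of one activity.
    # Both the total applicable market and the timescale differ per
    # instance, so the curves can't be laid on identical axes.
    def adoption(t, market, midpoint, speed):
        return market / (1 + math.exp(-speed * (t - midpoint)))

    instances = {  # invented parameters for A[1] to A[3]
        "A[1]": dict(market=1e4, midpoint=10, speed=0.4),
        "A[2]": dict(market=1e6, midpoint=6, speed=0.8),
        "A[3]": dict(market=1e8, midpoint=3, speed=1.5),
    }

    for name, params in instances.items():
        curve = [adoption(t, **params) for t in (0, 5, 10, 15, 20)]
        print(name, [f"{a:,.0f}" for a in curve])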

To emphasise this, for the same activity (A) through its different evolved states - A[1] to A[5] - figure 3 provides example diffusion curves for each instance, figure 4 provides the evolution curve and figure 5 provides a map. NB figures 3 to 7 are purely illustrative and based upon a fictitious example.

Figure 3 - Example diffusion curves A[1] to A[5] (illustrative)


Figure 4 - Evolution curve A[1] to A[5] (illustrative)


Figure 5 - Map A[1] to A[5] (illustrative)


Q. What has this got to do with disruption?

A. Now, this is where we get to the fun part (sort of). As components evolve, their characteristics change from the uncharted (rare, constantly deviating and highly uncertain) to the industrialised (common, predictable, standard). Furthermore, the component itself can represent a range of underlying subsystems bundled together. Hence activity A might represent a range of component subsystems designed to meet a specific user need. For example, a telephone contains many subsystems from the physical shape of the receiver to the electronics within it. Hence each instance can represent an entire value chain of components (see figure 6).

Figure 6 - Each instance can represent a value chain of components (illustrative)


When we talk of the evolution of a product, such as the substitution of A[2] with A[3], then such substitution tends to be based upon sustaining change i.e. an improvement. However, you can also get a change in the associated value network. So, for example, let us assume that the shift from A[3] to A[4] is a consequence of a change in the underlying components in order to meet some new need through some new property. An example would be physical size becoming important with disk storage. This type of change is extremely difficult to predict because the change is new i.e. uncertain (see figure 7).

Figure 7 - Change in the Value Network (illustrative)


Combined with inertia caused by pre-existing business models (e.g. the success of A[3]), such changes can be highly disruptive and difficult to protect against, especially if they first appear in novel markets (where the new property is important) and then develop in that space until the performance characteristics are such that they substitute the traditional market. This is the classic example of Christensen's Disruptive Innovation, whether you're talking about hard drive formats (physical size) or hydraulic vs cable excavators. The reason why it's so difficult to protect against is that the change in value network is unpredictable and hence we can't see it coming; there often isn't time for an existing vendor to deal with inertia and manage a smooth transition.

The outcome of RIM vs Apple was an example of such a highly unpredictable change and why the 'Christensen was wrong on the iPhone' argument is somewhat farcical ... it's almost impossible to predict, it could have gone either way. To be clear, this form of 'disruptive innovation' is extremely hard to see coming; there's little warning that a storm is on its way.

However, the shift from product to utility (i.e. from A[4] to A[5] in the above diagram) is highly predictable. Even the consequences of this, from co-evolution of practice to the potential reduction of barriers to entry into other value chains, can be determined. Naturally we suffer from inertia but we normally have a considerable amount of time to prepare (in the case of cloud computing we've had since 1966 and Parkhill's challenge of the computer utility) and there are numerous weak signals we can use to identify that the change is upon us. If that's the case, why do companies get disrupted by it? Well, despite the fact that this storm can be seen from far away, companies still fail to see it because few look at the competitive environment.

Q. So, what's right and wrong with Christensen's Disruptive Innovation? 

Well, there's nothing wrong with it at all; it's an excellent piece of work.

However, the problem is in use. We tend to describe the genesis of something (e.g. A[1]), product changes (e.g. A[3] to A[4]) and shifts from product to utility (e.g. A[4] to A[5]) all as 'innovations' when they're not the same.

Product substitution due to an uncertain change in the related value network is highly disruptive because it is incredibly difficult to predict. It doesn't matter whether this is a change in the underlying components or the act becoming a component of something else. Christensen could no more predict the iPhone's success than RIM could and the success of the iPhone was not guaranteed. The only defence against this is a highly adaptable culture.

However, product to utility substitution is a highly predictable consequence of competition and there is no reason for a company to be caught unawares by such a change and disrupted by it. In this case disruption occurs because of poor situational awareness and a poor understanding of the basics of economic change.

So is Christensen's work right? Well, it's certainly a plausible hypothesis and well supported by examples. 


Q. But surely Christensen should have been able to predict iPhone's success? 

A. Absolutely not. It's a highly unpredictable change in the value network which disrupted many due to the inertia those companies had developed through past successful business models.


Q. Is cloud computing an example of disruptive innovation? 

A. Absolutely not. It's a highly predictable change which has disrupted many due to the inertia those companies had developed through past successful business models. There was no reason for these companies to be disrupted other than incredibly poor situational awareness verging on blindness. The phrase 'cloud computing is a disruptive innovation' has more to do with 'we've been utterly outplayed by something obvious and we need someone else to blame' than it has to do with classic examples of disruptive innovation.


--- 24th June 2014


Overall, I happen to support Christensen's hypothesis of disruptive innovation with the exception that the word 'innovation' has become a catch all and is fairly meaningless, and ditto 'disruption'. Not all things classified as 'disruptive innovation' are equal e.g. RIM vs Apple is not the same as Cloud. With the former, disruption is reasonable; with the latter, disruption should not occur. Some refinement is therefore needed in my opinion but alas, as with many things, 'disruptive innovation' is in danger of becoming a catch all phrase and losing its meaning.

--- 4th Sept 2014

Tidied up some of the language in the post above. 

Understanding Ecosystems

Once you start mapping out environments, you can quickly start to discover common economic patterns, basic rules of competition and repeating forms of gameplay.

A typical basic pattern is how supply and demand competition drives the evolution of a component (whether practice, data or activity) to a more industrialised form, which not only improves its efficiency but, through the provision of stable interfaces, enables the rapid development of novel higher order systems. Those novel higher order systems may also turn out to be new sources of wealth but they are highly uncertain and unpredictable (i.e. uncharted). However, those that succeed will evolve and the cycle will repeat. See figure 1.

Figure 1 - Competition enables new higher order systems


From the above, a component (either an activity, practice or data) evolves from A[1] to A[2] to A[3], for example the evolution of electricity from the Parthian battery (A[1]) to Siemens generators (A[2]) to utility provision by Westinghouse (A[3]). 

As it evolves from the uncharted space (e.g. A[1], where it is rare, uncertain and constantly changing) to a more industrialised form (A[3]), it becomes more efficient, defined, stable and standardised. This process enables the creation of higher order systems (e.g. B[1], C[1], D[1]) built upon standard interfaces e.g. standard electricity (A[3]) enabled lighting (B[1]), radio (C[1]) and television (D[1]).

Those newly created novel and highly uncertain components (e.g. the uncharted B[1], C[1], D[1]) then start to evolve if they are successfully adopted via the same forces of supply and demand competition (e.g. D[1] to D[2]). The cycle then repeats.

You can trace this effect throughout history (see figure 2) and it is the combination of this pattern with inertia and co-evolution which creates economic cycles both at a macro (k-waves) and micro economic scale. But, we've covered this many times before and that's not the purpose of this post.

Figure 2 - Cycle throughout history.


Now all of this is relatively dull stuff and there are dozens of common patterns you need to understand in order to play even the most basic strategic games.  However, it's worth noting a couple of things even with this simple pattern. 

For example, in figure 3, I've added some more detail on the characteristics of components at each stage of evolution.

Figure 3 - Treating Components


First, with A[2] to A[3] we're talking about the industrialisation of an existing act, which is all about volume operations, efficiency, operational improvements and measurement. Whilst people often talk about the advantage of being a fast follower, in this case there's an additional source of value in being the first to industrialise. The value is derived from others building on top of the utility services you provide, and I'll explain why a bit later.

Let us assume you have industrialised some act (e.g. A[2] to A[3]), for example the shift of computing infrastructure from computing as a product (A[2]) to computing as a public utility (A[3]).  This can enable others to build novel but uncertain higher order systems on top of your utility services (e.g. B[1], C[1], D[1]).

Those novel higher order systems might be uncertain but they are potential sources of future worth. Since they're constantly changing (i.e. we're exploring the potential), successful creation is both costly in terms of research and development and unknown in terms of outcome. In this case you ideally want to be a fast follower and let others incur the cost of R&D. But how do you know what to follow? How can you detect success?

Fortunately, due to competition, successful acts will start to mature through multiple diffusing waves of ever improving products. This pattern is detectable through consumption i.e. if the novel systems are built on top of your utility services then the diffusion of ever maturing and hence successful systems (e.g. D[1] to D[2]) can be detected through simple consumption of your underlying subsystem (i.e. consumption of your utility service A[3]).

This provides you with an opportunity.

If you commoditise an act (A[2] to A[3]) to a more industrialised form which enables others to innovate (B[1], C[1], D[1]), then you can leverage the consumption of your underlying component (A[3]) by others to detect successful changes (e.g. D[1] to D[2]). You can then commoditise any identified successful component (e.g. D[2]) to a more industrialised form in order to repeat the process. Hence, by being a first mover to commoditise (A[2] to A[3]) and by exploiting consumption information, you are constantly in a position to be a fast follower (D[1] to D[2]) to any successful change without incurring the heavy R&D risk because everyone else is innovating for you.
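
A minimal sketch of the 'leverage' step, assuming hypothetical metering data for consumption of your utility service broken down by the higher order systems consuming it:

    # Sustained rapid growth in a consumer's usage of A[3] is a weak
    # signal that it is succeeding (e.g. D[1] evolving towards D[2]) and
    # is therefore a candidate for commoditisation.
    monthly_consumption = {        # invented metering data
        "B[1]": [100, 105, 103, 110, 108, 112],  # flat - ignore
        "C[1]": [50, 48, 55, 53, 51, 54],        # flat - ignore
        "D[1]": [10, 22, 41, 85, 160, 330],      # doubling - leverage this
    }

    def growth(series):
        rates = [b / a for a, b in zip(series, series[1:])]
        return sum(rates) / len(rates)  # average month-on-month growth

    candidates = [name for name, series in monthly_consumption.items()
                  if growth(series) > 1.5]  # arbitrary 'success' threshold
    print(candidates)  # ['D[1]'] - the next component to commoditise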

This is a model known as Innovate - Leverage - Commoditise (ILC) and it's fairly old hat, having first been applied around 2005 (the earliest model was called innovate - transition - commoditise; it was later renamed ILC on the suggestion of Mark Thompson. I'm not very good at naming things and transition was a bit bleh). There are many critical factors to the model including :-

1) Speed of information. Whilst the model can be applied in the product space (e.g. A[2]), the problem is the speed at which you can gain consumption information is limited by market surveys. Utility models (e.g. A[3]) especially where services are provided through an API are more apt because you gain direct consumption information.

2) Size of ecosystem. Your ability to innovate, deliver what customers want and efficiency all depend upon the size of the consuming ecosystem for your underlying components (e.g. consumption of A[3]). 

This ecosystem consists not only of your own employees but also of any consuming company. The efficiency of provision of A[3] depends upon economies of scale i.e. how big your consuming ecosystem is. Your apparent rate of innovation (since you're not doing the innovation, just fast following others) depends upon the number of companies innovating on top of your component (e.g. B[1], C[1], D[1]). Your ability to deliver what customers want (i.e. spot successful new things) depends upon your ability to leverage the ecosystem to spot success (e.g. D[1] to D[2]). In a well run model, your apparent rate of innovation, customer focus and efficiency should all increase with the size of the consuming ecosystem.

I've provided an example of this in figure 4 using a circle model where the centre is your core component services (your platform) surrounded by an ecosystem of consuming companies. Such circle models are woefully inadequate for strategic play but they act as a useful visual reminder that effective play involves exploiting others.

Figure 4 - Ecosystem Size


3) Relevance of component. When commoditising a component, the potential size of the consuming ecosystem depends upon how relevant that component is in other value chains. Hence it's advisable to focus on components that are widely used e.g. computing infrastructure rather than highly specialised to an industry.

4) Speed of action. There's little point in using an ILC model if you don't exploit it to create new components and grow the ecosystem. Obviously, each time you do (whether through copying or acquisition) then you'll get accused of eating the ecosystem but the counter to this is you provide an increasing number of component services (i.e. a platform) which makes the environment more attractive to others. This harvesting of the ecosystem does need careful management.

5) Efficiency in provision. When you commoditise a component to a more industrialised form (e.g. A[2] to A[3]) then your ability to encourage others to build on top of it depends upon how much you reduce their risk of failure and increase their speed of development. Hence efficiency and standardisation of interface is very important in this process.

Now, when correctly played, you can build a constantly expanding platform of highly industrialised component services in which your rate of innovation, customer focus and efficiency is proportional to your ecosystem size and not your physical company size. Your 'platform' is simply your set of component services and it serves the purpose of interacting with and exploiting an ecosystem. The value is in the ecosystem. As you repeat the model, building more component services, the attractiveness of your platform to others increases. Also, by being a first mover to commoditise an act to a more industrialised form, you actually gain highly stable, highly predictable volume based revenue. Finally, exploiting consumption information (e.g. use of A[3]) to always be the fast follower to the novel but uncertain sources of future worth (e.g. B[1], C[1], D[1]) will enable you to maximise your future opportunity by only selecting success (e.g. D[1] to D[2]).

Simultaneously increasing your rate of apparent innovation, attractiveness to others, customer focus, efficiency, stability of revenue and maximising future opportunity are a powerful set of forces. Using an ILC type model is a no brainer ... except ... unless you map out your environment (i.e. have good situational awareness) and understand the rules of the game, you just won't know where to start, other than sticking your finger in the air and saying 'this looks like a good one' or doing what most people do and copying others (i.e. '67% of companies do cloud, big data and social media' and hence 'so must we!').

You're just as likely to undermine a barrier to entry into your own business and encourage attack by others as you are to successfully build an ILC model. The first rule of playing chess is always 'look at the board' - which is why building a map (a snapshot at a moment in time of the situation you find yourself in) is not only about effective management (see figure 5) and scenario planning but should always be the first step before you embark on any form of strategic play.

Figure 5 - An example Map




Before anyone shouts what about 'two factor markets', 'supplier ecosystems' etc - this post is about one aspect of ecosystems and not the entire field. Before anyone else shouts 'this is complex' - well if strategic gameplay was easy then it wouldn't be fun.

--- 29th August 2014

Just to re-emphasise this. The purpose of a platform (and hence an API) is to create an ecosystem. The value is in the ecosystem. The ecosystem is a future sensing engine. Correctly used (under an ILC model) you can create network effects whereby ...
  1. Your apparent rate of innovation
  2. Your customer focus
  3. Your efficiency
  4. Your stability of revenue
  5. Your ability to maximise opportunity
... all increase, SIMULTANEOUSLY, with the size of your ecosystem and NOT the physical size of your company.

For example, let us hypothesise that Amazon plays an ILC game. The value to Amazon is that the ecosystem creates a constant future sensing engine of all beneficial change. AMZN's ability to sense successful change, to respond to customer needs and its efficiency would all be related to ecosystem size. This in turn would create a feedback loop, with Amazon exploiting the ecosystem to identify useful patterns (everyone screams "they are eating the ecosystem again") and then providing each pattern as a service, which helps grow the ecosystem, which Amazon would exploit to … and so the cycle continues and accelerates. The bigger Amazon's ecosystem gets, the more Amazon would be able to exploit it to find successful patterns and hence the more beneficial and attractive the platform becomes for everyone. In order to test this, you would expect to see Amazon accelerating in innovation, customer focus and efficiency as the ecosystem grew (beyond the ability of companies of equivalent physical size) along with complaints that they've eaten the ecosystem. Do remember, no-one is going to tell you they are playing this game - you'd have to detect it.

If you're involved in strategy and are sitting there going - "wow this is new" - then you really need to think about what you're doing if you're this far behind. This stuff is not new. This has been basic gameplay with ecosystems as force multipliers for many of us for about a decade.

Saturday, March 01, 2014

Composability

In figures 1 and 2, I provide an example map from an extremely large and somewhat sensitive Government project. I've removed the terms but I provide the basics of where the user needs are represented (always at the top of the map) and how the different components should be treated.

Figure 1- Map and user needs.


Figure 2- How components should be treated.


Obviously a map is a snapshot at a point in time as all components are evolving due to competition from the uncharted space (genesis of the novel and new) to more industrialised (commodity and utility).

Now, whenever you deal with a large system, it is likely to contain many different components at different stages of evolution and the entire system can be described as being composed of components i.e. it is composable (see figure 3).

Figure 3 - A composable system


In some cases, the individual components of a system are themselves composable systems (i.e. they contain many sub components). For example, in the case of the HS2 (high speed rail) map, the component 'web site' is in fact an entire system of many components and is therefore itself composable. See figure 4.

Figure 4 - A composable system within a composable system.


Hence, whenever you look at a complex map, you can often aggregate components for convenience due to fitness (i.e. related components are part of a subsystem), or you might wish to treat components individually, or you might need to sub-divide components into lower order systems. Whether you need to do this depends upon the granularity of what you need to examine, as the sketch below illustrates.
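
A minimal sketch of this, with hypothetical component names, treating a composable system as a tree that can be viewed coarsely or at a finer grain:

    from dataclasses import dataclass, field

    @dataclass
    class Component:
        name: str
        stage: str       # genesis, custom built, product or commodity
        parts: list = field(default_factory=list)  # its own value chain

        def flatten(self):
            """Finer grain view: this component and every subcomponent."""
            yield self
            for part in self.parts:
                yield from part.flatten()

    website = Component("web site", "product", [
        Component("content management", "commodity"),
        Component("journey planner", "custom built"),
    ])
    programme = Component("overall programme", "custom built", [website])

    print([c.name for c in programme.parts])      # coarse view
    print([c.name for c in website.flatten()])    # finer grain view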

In figures 5 and 6, the aforementioned project shown in figure 1 was summarised into a simpler view. First, components were grouped into general systems based upon fitness and then each system was treated as a component. Each component is an independent and logical value chain.

Figure 5 - Grouping of components into general systems.


Figure 6 - High level view of the overall system



Policy type work and overall strategic play usually requires a fairly coarse view of the landscape (such as those shown in figure 6). Purchasing choices and more tactical play often require a finer grain view (such as those shown in figure 1).

Of course, you shouldn't just view an organisation alone but compare with competitors and other markets (see figure 7). The more information you have, the better the play.

Figure 7 - Comparison with competitors


So when mapping, the level of detail (and situational awareness) varies depending upon what you're doing. A strategic game plan between massive companies often requires a coarse overview of the entire landscape whereas the tactical moves for a particular subsystem require a finer understanding of a particular section of the landscape.

However, it's not just the components but the interfaces that matter. Ideally you want to have semantic and syntactic interoperability between components and hence standards become increasingly important as the system becomes more industrialised. At the same time, for reasons of second sourcing and buyer / supplier relationship you want to have substitution for components and hence semantic and syntactic interoperability between different instances of the same component becomes important.

For those with a manufacturing / heavy engineering / biological systems background this is all old hat. However, it doesn't mean it isn't relevant today. Jonathan Murray last year talked about the 'Composable Enterprise Model' and it has certainly struck a chord with the digital crowd trying to grapple with the change around them. It's well worth a read.

However as good as it is, just be mindful that these methods have been around a long time and models like Two Pizza and FIST (fast, inexpensive, simple and tiny) exploit componentisation effects. As James Governor said 'ibm has been talking about this shit for YEARS'.  Well, the wider industry has been talking about it and doing it for a considerable time. 

So whilst it's a good idea to think and act in these terms, don't just assume it'll create you an advantage. You should instead think of it as way of competing on a more level playing field against those who have been doing this for a long time.

Oh, and before I go. Mapping is relatively new - starting in 2005. I designed the system because the alternative block and wire diagrams (see figure 8), along with other forms, had no concept of change. Understanding change - which you cannot view over time but instead have to view over ubiquity versus certainty, hence evolution - is essential for any form of strategic gameplay.

Figure 8 - A typical block and wire diagram


Maps are more than just a way of learning how to manage something; they enable you to start to explore common economic patterns, strategic gameplay and the rules of competition. They're a critical part of situational awareness and there's little point in consolidating down to shared components within an organisation if, by the time you have achieved this, the components have evolved to a more industrialised form.