Tuesday, December 20, 2016

I wasn't expecting that

Chapter 10

I was in a quandary. Having described the three states of war, wonder and peace, I found myself in the unusual position of finding them everywhere. All activities seemed to show these three competitive states. However, I had no real way of testing the existence of these stages, and my ability to perceive them might be caused by some sort of bias. It's a bit like owning a Mini Cooper: once you have one, you suddenly notice how many other cars are Mini Coopers. I started to scout around for some means of testing these concepts. Did the states really exist? How could I test them? Do they just affect individual activities in industries or could they have a wider effect?

At the very least I had a set of predictions (from weak signals) for when a range of activities would start to industrialise, and so I could just wait. Of course, the weak signals could be wrong or I could just be lucky. There was also something strangely familiar about those three stages. I'm a geneticist by training, I hold a second masters in environmental management and I also have a background in economics, courtesy of a mother who, as an economist, ignited my interest in the subject. I knew I'd seen these three states elsewhere. It didn't take me long to re-discover that first example - C.S. Holling's Adaptive Renewal cycle.

The adaptive cycle describes the dynamics of a complex ecosystem in response to change. We start with the creation of some form of disturbance - the genesis of a new act, some form of wonder. This is followed by a rapid stage of exploitation and accumulation, then a stage of conservation where the change has become more stabilised in the ecosystem - the equivalent to a time of products, a peaceful state of competition. Eventually, the change has been normalised, which releases energy enabling re-organisation and the genesis of new acts and new disruptions - the time of war. Holling's cycle is measured against the potential of the system for change and the connectedness of the system. Whilst not an exact corollary, I've overlaid an approximation of the peace, war and wonder cycle onto Holling's cycle in figure 112.

Figure 112 - Adaptive renewal cycle


The importance for me of this was that it gave rise to a number of concepts. First, when considering economic systems we would have to look at them as we do with biological systems and consider how an ecosystem reacts to a change and how competition will drive that change throughout the system. Secondly, the size of the ecosystem impacted should reflect the connectedness of the system that is changing i.e. industrialisation of legal will writing would only impact the legal industry whereas industrialisation of computing should have a much broader macro-economic effect. Lastly, there may well be an element of re-organisation involved. I was already aware of co-evolution but maybe this enabled broader organisational change?

With this in mind, I started to explore macro-economic scale effects on the assumption that a suitably connected technology should have not only micro-economic impacts on its industry but wider impacts. I was aware that the economy exhibited cycles known as Kondratiev waves (thanks to my interest in economics) and the largest waves were described as Ages. The first thing I noted was that these ages were not initiated by the genesis of some new activity but always by the industrialisation of a pre-existing activity that enabled higher order systems to develop. For example, the Age of Electricity was not caused by the introduction of electrical power, which occurred with the Parthian Battery (sometime before 400 AD), but instead by utility provision of A/C electricity with Westinghouse and Tesla, almost 1,500 years later. Equally, the Mechanical Age was not caused by the introduction of the screw by Archimedes but by the industrialisation of standard mechanical components through systems such as Maudslay's screw cutting lathe. The Age of the Internet did not involve the introduction of the first means of mass communication, such as the Town Crier, but instead the industrialisation of the means of mass communication.

Whilst born out of industrialisation, each of these Ages was associated with a major cluster of "innovations" (i.e. genesis of new activities) built upon the industrialised components. Each age therefore had a "time of wonder". The ages were also associated with a change in organisations. I started collecting approximate dates for these different ages, trying to identify the point of technology that may have initiated each one and also the type of organisational structure that was dominant. A later version of this is provided in figure 113.

Figure 113 - Waves of organisational change


I still had no narrative linking it all together; instead it was a loose collection of almost connected concepts. Then in mid 2008, I came across Carlota Perez's marvellous book - Technological Revolutions and Financial Capital. Perez has characterised these K-waves around technological and economic paradigm shifts. For example, the Industrial Revolution included factory production, mechanisation, transportation and development of local networks whereas the Age of Oil and Mass Production included standardisation of products, economies of scale, synthetic materials, centralisation and national power systems. Carlota had talked about the eruption of change, the frenzy of exploitation and later stages involving synergy and maturity (a more peaceful time of competition, of exploitation and conservation). It reminded me of Holling's Adaptive Renewal cycle. It reminded me of peace, war and wonder.
  • The wonder of eruption and frenzy of new ideas, the explosion of the new and the re-organisation of systems around it. A time of exploration and pioneers.
  • The exploitation and growth of these concepts, the synergy and the maturity of products in a more peaceful state of change. A time of settlers.
  • The eventual release of capital and tumultuous shift from one cycle to another, the loss of the old, the birth of the new, the time of war and creative destruction of the past. A time of industrialisation and town planners.
I've taken Carlota's description of K-waves and added onto it the overlapping stages of peace, war and wonder in figure 114.


 Figure 114 - Carlota Perez and Kondratiev waves


In my pursuit of a way to test the peace, war and wonder cycle, I had accidentally stumbled upon a narrative for describing system-wide organisational change. How widespread a change would be depended upon how well connected the components that were industrialising were. They could be specific to an ecosystem (e.g. legal will writing) and a small set of value chains or they could impact many industries (e.g. computing) and many value chains.

The narrative would start with the birth of a new concept A[1] which would undergo a process of evolution through competition, from its first wonder and exploration to convergence around a set of products (point 1 in figure 115 below). These products, after iterations crossing many chasms and following many diffusion curves, would become more stabilised with well defined best practice for their use (point 2 in figure 115). Large vendors would have become established, each with inertia to future change due to past success, but the concept and the activity it represents would continue to evolve.

Eventually the component would be suitable for industrialisation and new entrants (not suffering from inertia) would make the transition across that inertia barrier, introducing a more commodity form of A[x+1]. This would trigger a state of war, a shift to industrialised forms, a release of capability and capital (point 3) enabling an explosion of new activities due to componentisation effects and new practices (point 4) through co-evolution. The underlying activity would continue its evolution to ever more industrialised forms until some form of stability was achieved with A[1+n], a long and arduous journey of n iterations from the wonder of its first introduction. The past ways, the past forms of the activity, the past practices would have died off (point 5) and they would have done so quickly.

Figure 115 - Understanding why


In 2008, this was exactly what was starting to happen around me in cloud computing. But the vast majority of people seemed to be assuring me that the change would take many decades, that it would be very slow. Why would this be a slow progression? Why wouldn't the change happen quickly? To understand this, we need to introduce a climatic pattern known as punctuated equilibrium.

Climatic pattern : punctuated equilibrium

Throughout history there have been periods of rapid change. The question should be: when is change a slow progression and when is it rapid? Part of this is caused by an illusion, an application of our bias to the concept of change. Let us consider an evolving act - A. From figure 74 (Chapter 7), we know that the evolution of an act consists of the diffusion of many improving instances of that act. Let us assume that the activity quickly progresses to a product - A[2] - and evolves through a set of feature improvements - A[2] to A[x] - as shown in figure 116 below. This will be the time of products, a constant jostle for improving features, and though individual iterations will rapidly diffuse (e.g. the 586 processor replaced the 486 which replaced the 386 and the 286 in the x86 family), the characteristics of the products (the x86) are broadly the same and the overall time of products appears to be long. This had happened with servers, a constant improvement and a long product run of 30 to 40 years.

Figure 116 - The illusion of speed


With the advent of more utility forms, you gain all the benefits of efficiency and agility, and you're under pressure to adopt due to the Red Queen, but invariably people suffer from a bias towards slow change because this is what they've experienced with products. They forget that we've had successive iterations (286 to 386 to 486 etc.) and label this all as one thing. They expect the progression to more utility forms to take equally long, but the transition is not multiple overlapping diffusion curves giving the appearance of slow but steady progress - it is a single rapid shift (see figure above). Rather than 30 to 40 years, the change can happen in 10 to 15 years. We are caught out by exponential growth and the speed at which it moves. This form of transition is known as a punctuated equilibrium and invariably shifts from product to utility forms exhibit it.

It's the exponential nature that really fools us. To explain this, I’ll use an analogy from a good friend of mine, Tony Fish.  Consider a big hall that can contain a million marbles. If we start with one marble and double the number of marbles each second, then the entire hall will be filled in 20 seconds. At 19 seconds, the hall will be half full. At 15 seconds only 3% of the hall, a small corner will be full. Despite 15 seconds having passed, only a small corner of the hall is full and we could be forgiven for thinking we have plenty more time to go, certainly vastly more than the fifteen seconds it has taken to fill the small corner. We haven’t. We’ve got five seconds.
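The arithmetic behind the marble analogy is easy to verify. A minimal sketch, where a hall capacity of 2^20 marbles stands in for "a million":

```python
# Tony Fish's hall of marbles: capacity ~1 million, doubling each second.
# At 20 seconds the hall is full; the point is how empty it looks at 15.
HALL = 2 ** 20  # ~1.05 million marbles

for t in (15, 19, 20):
    marbles = 2 ** t
    print(f"t = {t:2d}s: {100 * marbles / HALL:.2f}% full")
```

At 15 seconds just over 3% of the hall is full; one second before the end, half the hall is still empty.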

Alas, these punctuated equilibriums are often difficult to see because we not only have the illusion of slow progress but confusion over what speed actually is. Let us assume that today it takes on average 20 to 30 years for an act to develop from genesis to the point of industrialisation, the start of the "war" which changes so much in industry. Organisations consist of many components in their value chains all of which are evolving. We can often confuse the speed at which something evolves with the simultaneous entrance of many components into the "war" state. For example, in figure 117, I've provided the weak signal analysis (from chapter 9) of many points of change. We can see that each component takes roughly 20 to 30 years to evolve (point 1). However, if you examine point 2 then we have many components from robotics to immersive technology to IoT that are embroiled in such a war. This can give us the impression that change is happening vastly more rapidly as everything around us seems to be changing. It's important to separate out the underlying pace of change from the overlapping coincidence of multiple points of change.


Figure 117 - The confusion of speed


Given this, it should be possible to test the punctuated equilibrium. By selecting a discrete activity we should be able to observe its rapid change along with the denial in the wider industry that such a change would be rapid. Cloud computing gave me a perfect example to test this. In 2010 (when I was at Canonical), I produced a forward revenue chart for Amazon. This estimated the forward revenue at the end of each year for AWS (Amazon web services) and was based upon what little data I could extract given that Amazon wasn't breaking out the figures. I've provided this estimate in figure 118.

By the end of 2014, I had anticipated AWS would have a forward revenue rate of $7.5 billion p.a. which means every year after 2014 it would exceed this figure. In fact, AWS clocked over $7.8 billion in 2015. Now, what's not important is the accuracy of the figures - that's more luck given the assumptions that I needed to make. Instead what matters is the growth, its non-linear nature and the general disbelief that it could happen. Back in 2010, telling people that AWS would clock over $7.5 billion in revenue some five years later was almost uniformly met by disbelief.
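The disbelief is easy to recreate with toy numbers: observers anchor on the early, near-linear portion of a compound curve. The starting figure and growth rate below are purely illustrative, not Amazon's actual numbers:

```python
start = 0.5  # illustrative starting revenue, $bn per year
rate = 0.7   # illustrative 70% compound annual growth

for year in range(6):
    compound = start * (1 + rate) ** year  # exponential reality
    linear = start * (1 + rate * year)     # naive straight-line extrapolation
    print(f"year {year}: compound {compound:5.2f}bn vs linear {linear:5.2f}bn")
```

Both views agree in year one, yet by year five the compound figure is more than three times the straight-line estimate - which is roughly the shape of the gap between the forecast and the general expectation.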

Figure 118 - The punctuated equilibrium


Finding the future organisation

In 2008, I had the narrative of how organisations change and though I still had to demonstrate aspects of this (by anticipating a punctuated equilibrium before it happened), it did provide me with a path to test the concepts. I knew that if the concept was right then over the next decade we would see a rapid change to more industrialised computing, co-evolution of practice and a new form of organisation appearing. In the case of the rise of DevOps, this process had already started. Beyond simply observing the growth of new practices and new activities along with the death of the past (see figure 119), I wanted a more formal method to evaluate this change. What I wanted to know was whether we could catch this next wave. Would the shift of numerous IT based activities to more utility services create a new organisational form? Timing would be critical and unlike my earlier work in genetics where populations of new bacteria are grown rapidly, I had to wait. So wait, I did.

Figure 119 - the past and the future


By 2010, the signals were suggesting that this was happening and in early 2011, I had exactly the opportunity I needed. Being a geneticist, I was quite well versed in population characteristics and so as part of a Leading Edge Forum project (published in the same year) we decided to use such techniques to examine populations of companies, specifically a hundred companies in Silicon Valley. We were looking for whether a statistically different population of companies had emerged and their characteristics (phenotypes) were starting to diffuse. It was a hit or miss project, we’d either find a budding population or it was back to the drawing board.

We already knew two main categories of company existed in the wild - those that described themselves as traditional enterprise and those using the term "web 2.0". The practices from web 2.0 were already diffusing throughout the entire environment. Most companies used social media, thought about network effects, and used highly dynamic and interactive web based technology and associated technology practices. The two populations were hence blurring through adoption of practices (i.e. the traditional were becoming more web 2.0 like) but also partially because past companies had died. But was there now a next generation budding, a new Fordism?

I interviewed a dozen companies that I thought would be reasonable examples of traditional and web 2.0 and where I hoped a couple of highly tentative next generation companies might be hiding. I developed a survey from those companies, removed them from the sample population to be examined and then interviewed over 100 companies divided roughly equally among those that described themselves as web 2.0 and those who called themselves more traditional. The populations all contained a mix of medium and huge companies. I examined over 90 characteristics, giving a reasonable volume of data. From the cycle of change and our earlier interviews, we had guessed that our next generation was likely to be found in the self-describing "web 2.0" group and that in terms of strategic play they would tend to be focused on disruption (the war phase) rather than profitability (the peace phase). From our earlier interviews I had developed a tentative method of separating the sample out into candidate populations. So, I divided the population sample into these categories and looked at population characteristics - means and standard deviations. Were there any significant differences? Were the differences so significant that we could describe them as a different population? For example, in a sample of mice and elephants there exist significant characteristics that can be used to separate out the two populations.
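The population test itself is standard statistics. As a sketch of the idea, here is a comparison of two samples on a single characteristic using Welch's t statistic; the survey scores below are hypothetical, not the study's actual data:

```python
import statistics

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)  # sample variance
    return (ma - mb) / ((va / len(a) + vb / len(b)) ** 0.5)

# Hypothetical survey scores (1-5 agreement scale) for one characteristic,
# e.g. "open source is a tactical weapon" - illustrative data only.
traditional = [2, 1, 2, 3, 2, 2, 1, 3, 2, 2]
next_gen    = [5, 4, 5, 4, 5, 5, 4, 4, 5, 5]

print(f"Welch's t = {welch_t(next_gen, traditional):.2f}")
```

A |t| far above ~2, repeated across many independent characteristics, is what separates "a statistically different population" from noise in a random sample.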

I ran our analysis and waited. It was an edgy moment. Had we found something or, as with many attempts before, had we found nothing? I tend to assume nothing and when there is something, I tend to doubt it. Within our data set we found statistically significant population differences across a wide number of the characteristics but also significant similarities. I re-examined, looked through my work, tested, sought the advice of others and tested again - but the differences and similarities remained. For example, I examined each company's view on open source and whether it was primarily something that meant relatively little to them, a mechanism for cost reduction, something they relied upon, something they were engaged in or a tactical weapon to be used against competitors. The result is provided in figure 120 with the subdivision by population type.

Figure 120 - Views on open source


Whilst the traditional companies mainly viewed open source as a means of cost reduction and something they relied upon, this next generation viewed it as a competitive weapon and something they were heavily engaged in. The web 2.0 group had a broader view from cost to weapon. This difference in population was repeated throughout many characteristics spanning strategy, tactics, practice, activities and form. The odds of achieving the same results due to random selection of a single population were exceptionally low. We had found our candidate next generation.

To describe this next generation, it is best to examine them against the more traditional. Some of the characteristics show overlap, as would be expected. For example, in examining the highest priority focus for provision of technology by a company - whether it's profitability, enhancement of existing products and services, innovation of new products and services, enabling other companies to innovate on top of their products and services or creating an engaged ecosystem of consumers - overlaps exist. In other areas, the differences were starker. For example, in an examination of computing infrastructure, the traditional favoured enterprise class servers whereas the next generation favoured more commodity forms. A good example of this similarity and yet difference was the attitude towards open source. When asked whether a company would open source a source of differential advantage, on a scale of strongly disagree to strongly agree, both traditional and next generation gave almost identical responses (see figure 121).

Figure 121 - Finding similarity


However, when asked whether they would open source a technology to deliberately outmanoeuvre a competitor, the answers were almost polar opposites (see figure 122).

Figure 122 - Finding difference



Using these populations, I then characterised the main differences between traditional and next generation. These are provided in figure 123 but we will go through each in turn. I've also added some broad categories for the areas of doctrine the changes impact.

Figure 123 - the phenotypic differences.



Development
Traditional companies tend to focus on singular management techniques for development (e.g. Agile or Six Sigma) and often operate on a change control or regular process of updates. The next generation tends towards mixed methods depending upon what is being done and combines this with a continuous process of release.

Operations
Traditional organisations tend to use architectural practices such as scale-up (bigger machines) for capacity planning, N+1 (more reliable machines) for resilience and single, time critical disaster recovery tests for testing of failure modes. These architectural practices tend to determine a choice for enterprise class machinery. The next generation has entirely different architectural practices, from scale-out (or distributed systems) for capacity planning, to design for failure for resilience, to the use of chaos engines (i.e. the deliberate and continuous introduction of failure to test failure modes) rather than single, time critical disaster recovery tests. These mechanisms enable highly capable systems to be built using low cost commodity components.
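The chaos-engine idea can be sketched in a few lines: deliberately inject failures into service calls so that the design-for-failure path is exercised continuously, rather than in a one-off disaster recovery test. This is an illustrative toy, not any particular chaos tool:

```python
import random

def chaos(failure_rate=0.2):
    """Decorator that randomly injects failures into a call, mimicking the
    chaos-engine practice of continuously testing failure modes."""
    def wrap(fn):
        def inner(*args, **kwargs):
            if random.random() < failure_rate:
                raise ConnectionError("injected failure")
            return fn(*args, **kwargs)
        return inner
    return wrap

def with_retry(fn, attempts=5):
    """Design-for-failure consumer: retry rather than trust one instance."""
    for _ in range(attempts):
        try:
            return fn()
        except ConnectionError:
            continue  # treat the instance as disposable and try again
    raise RuntimeError("all attempts failed")

@chaos(failure_rate=0.3)
def fetch():
    return "ok"

print(with_retry(fetch))
```

The consumer side (retrying against disposable, commodity instances) is what makes the injected failure safe; on a scale-up architecture the same injection would simply be an outage.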

Structure
Traditional organisations used a departmental structure, often by type of activity (IT, Finance, Marketing) or by region, often with a silo mentality and a culture that was considered to be inflexible. The next generation used smaller cell based structures (with teams typically of less than twelve), often with each cell providing services to other cells within the organisation. Each cell operated fairly autonomously, covering a specific activity or set of activities. Interfaces were well defined between cells and the culture was viewed as more fluid, adaptable and dynamic.

Learning
Traditional organisations tend to use analysts to learn about their environment and the changes that are occurring. They also tend to use big data systems which are focused primarily on providing and managing large sets of data. The next generation use ecosystems to more effectively manage, identify and exploit change. They also tend not only to use "big data" but to be run by it, with extensive use of modelling and algorithms. The focus is not on the data per se but on the models.

Leading
In traditional organisations, the use of open systems (whether source, data, APIs or other) is viewed primarily as a means of cost reduction. A major focus of the company tends to be towards profitability. In some cases technology or data is provided openly with an assumption that this will allow others to provide "free" resources and hence reduce costs. In the next generation, open is viewed as a competitive weapon, a way of manipulating or changing the landscape through numerous tactical plays, from reducing barriers to entry, standardisation and eliminating the opportunity to differentiate, to building an ecosystem and even protecting an existing value chain. The next generation are primarily focused on disruption of pre-existing activities (a war phase mentality) and exhibit higher levels of strategic play.

The LEF published the work in Dec 2011 and since then we have observed the diffusion of many of these changes as the traditional become more next generation. In the parlance of "Boiling Frogs" (an outstanding open sourced document on management provided by GCHQ), we're seeing "less of" the traditional and "more of" the next generation over time. However, I very much don't want you to read the above list and get the impression that "this is how we create an advantage!" - instead, be realistic. The above characteristics are already diffusing and evolving; tens if not hundreds of thousands of people and their companies are well aware of them today. You'll need to adapt simply to survive. Any real advantage has already been taken and any remaining advantage will be over those who are slower to adapt.

I do however want to expand the above figure 123 and include some specific examples of doctrine (see figure 124). For example, the shift from single to multiple methods is just a refinement of the principle "use appropriate methods". There was a time when we thought that a single method was appropriate but as we've become more used to the concepts of evolution and change, we've learned that multiple techniques are needed. This doesn't stop various attempts to create a tyranny of the one, whether agile or six sigma or some purchasing method, but for many of us the way we implement that principle has changed. In other words, the principle of doctrine has remained consistent but our implementation has been refined and become more nuanced. Equally our principle of "manage failure" has simply been refined from one time disaster recovery tests to the constant introduction of failure through chaos engines. Now, certainly the implementation has to be mindful of the landscape and purpose; for example, constant failure through chaos engines is not appropriate for the generation components of a nuclear power plant.

Figure 124 - The change from traditional to next generation


In other cases, the principle "think small teams" is relatively young in management terms (i.e. less than forty years old). The theory of management tends to move extremely slowly and its practices can take a considerable amount of time to evolve. The point that I want to emphasise is that when we talk about the evolution of organisations, this is normally reflected in a change in doctrine and either the evolution or addition of principles. However, not everything changes. There are many practices and concepts that are simply copied to the next generation. It should never be expected that there are no common characteristics or overlap; instead what you hope to find is significant difference in specific characteristics (i.e. mice have two eyes, same as elephants, and hence there are some similarities along with huge differences). I've provided a small subset of the similarities in figure 125 but it should be remembered that of the 90-odd characteristics I examined, only twelve showed significant change.

Figure 125 - Not everything changes


In 2008, I understood the cycle of change (peace, war and wonder) which had evolved from the concept of evolution and I had a hypothesis for the process of how organisations evolve. By 2011, we had not only anticipated this change but observed a budding next generation. I say "budding" because we had no real idea of whether they would be successful or not. It turns out that they are but that's a story for a later chapter. For now, there are a couple of refinements that I'd like to make to these models.

Notes on Peace, War and Wonder

There are a number of patterns which are worth highlighting.

Climatic Pattern : Evolution of a communication mechanism can increase the speed of evolution overall.

In figure 117 above, I discussed the confusion of speed and how we often mix concepts about the underlying rate of change with the circumstantial overlapping of multiple points of industrialisation. Does this however mean the underlying rate of change is constant? The answer is no. There is another pattern here which deals with communication mechanisms.

On the 1st May, 1840 a revolution in industry was started by the introduction of the Penny Black. This simple postage stamp caused a dramatic explosion in communication, from 76 million letters sent in 1839 to 350 million by 1850. It wasn't that postal services didn't exist before, but the Penny Black turned the act of posting a letter into a more standard, well defined and ultimately ubiquitous activity. The introduction caused a spate of copycat services throughout the world, with the US introducing its first stamps in 1847. The 125 million pieces of post sent through its system in that year mushroomed to 4 billion by 1890. There followed street letter boxes (1858), the pony express, railway deliveries (1862), money orders and even international money orders by 1869.

The humble stamp changed communication forever. But it wasn't alone. Telegraph lines, which later enabled the telephone, which later enabled the internet, have all led to corresponding explosions of communication. In all cases it wasn't the invention of the system (the first stamp, for example, being created by William Dockwra in 1680) but instead the system becoming more standard, well defined and more of a commodity which created the explosion. Each time we've experienced one of these communication changes, we've also experienced significant industrial change. The growth in postal services and telegraph lines coincides with the Railway and Steam Engine era, where diffusion of new machine concepts became rampant.

Of course, the origin of industrial steam engines started in the earlier first industrial revolution which itself arguably started with Maudslay's screw cutting lathe and the introduction of interchangeable mechanical components. By providing mechanical components as more of a commodity, we saw a growth in new machine industries and new manufacturing processes. From the Plymouth system for manufacturing which later became the Armory system in the US, an entirely new method of manufacturing was started by the humble nut and bolt.

Whilst this might appear to be nothing more than the peace, war and wonder cycle in action, there is something quite unique here. When we examine how things have evolved over time, nuts and bolts took over 2,000 years to industrialise, electricity took 1,400 years, the telephone merely 60 to 80 years and computing some 60 to 70 years. What has changed during that time is the industrialisation of communication mechanisms. As we move up the value chain (see figure 126), the speed at which things evolve across the landscape is impacted by the industrialisation of communication mechanisms. The printing press, the postage stamp, the telephone and the internet did more than just industrialise discrete components in a value chain; they accelerated the evolution of all components.

Figure 126 - The speed of change


Do not however confuse this with how innovative we are as a species. Rather realise that the speed at which something evolves has accelerated. My best guess is the speed of change today now corresponds to about 20 to 30 years though the jury is out at the moment (i.e. I'm collecting more data) as to whether it really is that quick.

Climatic Pattern : Inertia increases with past success.
One of the subjects I've mentioned is inertia and our resistance to change. With any established value chain, there are existing interfaces to components along with accompanying practices. There is a significant cost associated with changing these interfaces and practices due to the upheaval caused to all the higher order systems that are built upon them, e.g. changing standards in electrical supply impacts all the devices which use it. This cost creates resistance to the change. You also find similar effects with data or, more specifically, our models for understanding data. As Bernard Barber once noted, even scientists exhibit varying degrees of resistance to scientific discovery. For example, the cost associated with changing the latest hypothesis on some high level scientific concept is relatively small and often within the community we see vibrant debate on such hypotheses. However, changing a fundamental scientific law that is commonplace, well understood and used as a basis for higher level concepts will impact all those things built upon it and hence the level of resistance is accordingly greater. Such monumental changes in science often require new forms of data, creating a crisis point in the community through unresolved paradoxes including things that just don't fit our current models of understanding. In some cases, the change is so profound and the higher order impact is so significant that we even coin the phrase "a scientific revolution" to describe it.

The costs of change are always resisted and past paradigms are rarely surrendered easily – regardless of whether it is a model of understanding, a profitable activity provided as a product or a best practice of business. As Wilfred Trotter said, “the mind delights in a static environment”. Alas, this is not the world we live in. Life's motto is "situation normal, everything must change" and the only time things stop changing is when they're dead.

The degree of resistance to change will increase depending upon how well established and connected the past model is. In figure 127, I've shown this as inertia barriers which increase in size the more evolved the component becomes.

Figure 127 - inertia increases with success



There are also many forms of inertia. In the example of co-evolution (provided in chapter 9) there were two forms. The first is due to the success of past architectural practice. The second is caused by the co-evolving practice being relatively novel and hence there being high degrees of uncertainty over it. Both sources will create resistance to adopting the change, which in this case is the shift from product to utility in computing (see figure 128).


Figure 128 - Practices and inertia



So what makes up inertia and why does this resistance to change exist in business? That depends upon the perspective of the individual and whether they are a consumer or supplier.

The Consumer
For a consumer of an evolving activity, a practice or a model of understanding, inertia tends to manifest itself in three basic forms - disruption of past norms, transition to the new and the agency of the new. I'll explain each using the example of cloud computing.

The typical concerns regarding the disruption to past norms include: -
  • Changing business relationships from old suppliers to potentially new suppliers.
  • A loss of financial or physical capital through prior purchasing of a product e.g. the previous investment needs to be written off.
  • A loss in political capital through making a prior decision to purchase a product e.g. “what do you mean I can now rent the billion dollar ERP system I advised the board to buy on a credit card?”
  • A loss in human capital as existing skill-sets and practices change e.g. server huggers.
  • A threat that barriers to entry will be reduced resulting in increased competition in an industry e.g. even a small business can afford a farm of super computers.
The typical concerns regarding the transition to the new include: -
  • Confusion over the new methods of providing the activity e.g. isn’t this just hosting?
  • Concerns over the new suppliers as relationships are reformed including transparency, trust and security of supply.
  • Cost of acquiring new skill-sets as practices co-evolve e.g. designing for failure and distributed architecture.
  • Cost of re-architecting existing estates which consume the activity. For example, legacy application estates built on past best practices (such as N+1, Scale-Up), which assume past methods of provision (i.e. better hardware), will now require re-architecting.
  • Concerns over changes to governance and management.
The typical concerns regarding the agency of the new include: -
  • Suitability of the activity for provision in this new form i.e. is the act really suitable for utility provision and volume operations?
  • The lack of second sourcing options. For example, do we have choice and options? Are there multiple providers?
  • The existence of pricing competition and switching between alternative suppliers. For example, are we shifting from a competitive market of products to an environment where we are financially bound to a single supplier?
  • The loss of strategic control through increased dependency on a supplier.
These risks or concerns were typical of the inertia to change I saw with cloud computing in 2008; however, it wasn't just consumers that had inertia but also suppliers of past norms.

Suppliers of past norms
The inertia to change of suppliers inevitably derives from past financial success. For example, the shift from product to utility services is a shift from a high value model to one of volume operations and, over time, declining unit value. There is a transitional effect here which causes a high volume, high margin business for a period of time but we will cover that later. In general, existing suppliers need to adapt their existing successful business models to this new world. Such a change is problematic for several reasons: -
  • All the data the company has demonstrates the past success of current business models and concerns would be raised over cannibalisation of the existing business.
  • The rewards and culture of the company are likely to be built on the current business model hence reinforcing internal resistance to change.
  • External expectations of the financial markets are likely to reinforce continual improvement of the existing model i.e. it’s difficult to persuade shareholders and financial investors to replace a high margin and successful business with a more utility approach when that market has not yet been established. 
For the reasons above, the existing business model tends to resist the change and the more successful and established it is, the greater the resistance. This is why the change is usually initiated by those not encumbered by past success. The existing suppliers not only have to contend with their own inertia to change but also the inertia their customers will have. Unfortunately, the previous peaceful model of competition (e.g. one product vs another) will lull these suppliers into a false sense of gradual change, in much the same way that our experience of gradual climate change lulls us into a belief that climate change is always gradual. This is despite ample evidence that abrupt climate change has occurred repeatedly in the past; for example, at the end of the Younger Dryas period the climate of Greenland exhibited a sudden warming of +10°C within a few years. We are as much a prisoner of past expectations of change as of past norms of operating.

Hence suppliers with pre-existing business models will tend to view change as gradual and have resistance to the change, which in turn is reinforced by existing customers. This resistance of existing suppliers will continue until it is abundantly clear that the past model is going to decline. However, this is compounded by the punctuated equilibrium which combines exponential change with denial. Hence by the time it has become abundantly clear and a decision is made, it is often too late for those past incumbents to survive. For a hardware manufacturer who has sold computer products and experienced gradual change for thirty years, it is understandable how they might consider this change to utility services would also happen slowly. They will have huge inertia to the change because of past success, they may view it as just an economic blip due to a recession and their customers will often try to reinforce the past by asking for more “enterprise” like services. Worst of all, they will believe they have time to transition, to help customers gradually change, to spend years building and planning new services and to migrate the organisation over to the new models. The cold hard reality was that many existing suppliers didn't comprehend that the battle would be over in three to four years and that for many the time to act was already passing. In 2008, they were in the last chance saloon and the clock was ticking towards last orders, though they claimed this event was far in the future and they had plenty of time. Like the rapid change in climate temperature in Greenland, our past experience of change does not necessarily represent the future.

In figure 129, I've classified various forms of inertia, including tactics to counter them and various forms of messaging you might wish to consider in your struggle against it. When looking at a map, it's extremely helpful to identify the forms of inertia you will face and how to counter them before charging straight into battle. There's little worse than leading the charge only to discover the rest of the organisation is still getting dressed for a party and is convinced the war is sometime next decade.

Figure 129 - Classifying inertia


One of the more dangerous forms of inertia is the financial markets. Despite the illusion of a future-thinking world of finance, in most cases stability is prized. There is an expectation set by the market on past results and often significant discounting of the future. If anything, 2008 was a very visible reminder of this as the economy crumbled around us. The problem for a CEO of a hardware company at that time was that the market comes to expect a certain level of profit, revenue, growth and return. There is only so much you can do to blame a change on general economic factors (e.g. a downturn) as the market expects you to return to the norm, and most executives are rewarded on short term measures based upon this. The result is one of the most peculiar aspects of the "war" stage of competition - the death spiral.

Climatic Pattern : Inertia kills
I mentioned previously in chapter 5 how Kodak had inertia which it finally overcame in order to invest in exactly the wrong part of the industry. We often think that companies die due to a lack of innovation but this appears to be rarely the case. Kodak out-innovated most (with digital still cameras, with online photo services and with photo printers) but it was inertia caused by past success in fulfilment, along with blindness to the environment, that caused it to collapse. Equally, Blockbuster out-innovated most competitors with its early entrance into the web space, being first with ordering videos online and the first experiments with video streaming. Alas, it was wedded to a business model based upon late fees. There are many examples of how inertia, usually amplified by blindness to a change, can cause a company to crumble but none is as common as the death spiral, and the cause of it is something which at another time is perfectly sensible - cutting costs.

If your industry (i.e. the parts of the value chain which you sell) is in a peace era, then cutting costs through efficiency to increase profitability can be a good play, assuming you don't reduce barriers to entry into the space. There are many reasons why you would do this and often you can clear out a lot of waste in the organisation. However, if your industry has moved into war, then cutting costs through staff reductions to restore profitability in the face of declining revenue is often a terrible move. The problem is that your revenue is eroding due to a change in the value chain and the industrialisation of the activity to more commodity forms. You need to respond by adapting and possibly moving up the value chain. However, through layoffs you're likely to get rid of those people who were seen to be less successful in the previous era. That doesn't sound too bad but the result is that you end up with a higher density of people who were successful in the past models (which are now in decline due to evolution) and hence you'll tend to increase your cultural inertia to change. In all likelihood, you've just removed the very people who might have saved you.

Revenue will continue to drop and you'll start a death spiral. You'll start scrambling around looking for "emerging markets" i.e. less developed economies for you to sell your currently industrialising product into. The only result of this, however, is that you're laying the groundwork for those economies to be later industrialised once your competitors have finished chewing up your existing market. What you should of course be doing is adapting and realising that the tactics you play in one era are not the same as in another (peace vs war etc). Now, any large organisation has multiple value chains in different evolutionary phases and you have to see this and know how to switch context between them in order to choose the right tactics. Naturally, most people don't manage to achieve this, nor do they effectively anticipate change or cope with industrialisation in the right way. This is why big companies often die but at least that keeps things interesting.

Of course, if you do embark on the death spiral then whilst it's appalling for those employed by the company, the executives are often rewarded. Why? Well, it comes back to the financial markets. If a market knows this transition is occurring then one tactic is to invest in the future industry (e.g. Amazon) whilst extracting as much short term value as possible from the past (e.g. existing hardware players). This requires a high expectation of share buy-backs, dividends and mergers in those past giants. It's not that you're expecting a long term gain from such investments but instead highly profitable short term wins which are balanced with your long term investment in the future. From a financial point of view then the death spiral is exactly what you want to see as you don't care about the long term survivability of the company (your investment will be gone by then) but you do want maximum extraction of value. If you're a canny executive then running a death spiral can bring big personal financial rewards as long as you're comfortable with the destruction you'll cause to people and companies alike. However, not all executives are canny. Often people find themselves in this position by accident. Which leads me to my next topic on the different forms of disruption.

The different forms of disruption

One of the more interesting discussions in recent times has been Professor Jill Lepore’s arguments against Clayton Christensen’s concept of disruptive innovation. In her now famous 2014 New Yorker article on "the disruption machine", Lepore argued that disruptive innovation doesn’t really explain change but is instead mostly an artefact of history, a way of looking at the past, and is unpredictable. Christensen naturally countered. For me, this really was a non-argument. What I had determined back in 2008 was that there are many forms of disruption - some of which are predictable and some of which aren't. When the argument started, from my perspective both Christensen and Lepore were right and wrong. The problem stems from the issue that they're not arguing over the same thing.

The three main forms of potential disruption that we will discuss are genesis, product to product substitution and product to utility business model substitution. The genesis of new acts are inherently unpredictable. If some novel activity appears that genuinely alters pre-existing value chains then there's little you can do to predict this, you have to simply adapt.

When product to product substitution occurs due to some new capability or feature, the predictability of when and what is low. The when depends upon individual actors' actions and this is unknown. Equally, the addition of some new capability is also inherently unpredictable. Note, we know that things will evolve and we know the pathway of evolution (from genesis to commodity) but we don't know, nor can we predict, the individual steps such as whether this product will beat that product. This means a new entrant can at any time create a disruptive product that will substitute an existing market but a company has no way of ascertaining when that will occur or what it will be. Though this does happen, in the time of peace, the time of product giants, such changes are less frequent than the rampage of sustaining changes. There are exceptions and Apple’s iPhone disrupting the Blackberry is a good example of this type of disruption. I'll note that Christensen quite famously dismissed the iPhone and has subsequently gone on to claim it's not an example of disruptive innovation; in any case this sort of substitution is unpredictable. Equally, hydraulic vs cable excavators would fall into this category. They are easy to analyse post event but next to impossible to determine pre-event. In these instances, Lepore seems to be on firm footing.

With product to utility substitution, the what and when can be anticipated. We know we're going to enter a state of war, an explosion of higher order systems, co-evolution of practice, disruption of past vendors stuck behind inertia barriers and so forth. Weak signals and the four conditions (suitability, technology, concept and attitude) can give us an idea of when it will happen. In any case, even without the weak signals, the transition to more industrialised forms is inevitable if competition exists. So, we can be prepared. A new entrant can more effectively target this change to disrupt others. However, it also means an existing player can effectively mount a defence, having prior knowledge of the change and time to prepare. Fortunately for the new entrants, the incredibly low levels of situational awareness that exist in most industries, combined with the inertia faced by incumbents in terms of existing business models, developed practices, technological debt, behavioural norms, financial incentives, Wall Street expectations and self interest, are often insurmountable and hence start-ups often win when they shouldn't. Whilst the change is entirely defendable against (with often many decades of prior warning), companies fail to do so. This form of disruption is entirely predictable and it is here where Christensen's theory excels. The more industrialised forms are considered lower quality, not meeting the performance requirements, and are usually dismissed by the incumbents.

Hence let us follow the evolution of an act. We start (in figure 130) with the appearance of some new activity A[1]. It is found useful and starts to diffuse with custom built examples. As it evolves, early products start to appear and we jump across one inertia barrier from custom built to products (point 1). Obviously those companies that have invested in their own custom solution argue that their solution is better but eventually pressure mounts and they adopt a product. The act continues to evolve with a constant stream of more "feature" complete products as we understand the space. Sometimes the progression is sustaining but sometimes a product appears that substitutes the previous examples. There's inertia to the change (point 2) from customers and vendors invested in the existing product line. The thing is, we don't really know if this new product line is going to be successful, any more than Apple knew it could beat Blackberry or others. This form of disruption is unpredictable. Someone wins. The product giants continue to grow until eventually the act becomes suitable for industrialised provision. New entrants jump the barrier first (point 3) and this barrier is significant. The act has become established in many value chains and it is highly connected with its own practices. There's a lot of dismissal of the industrialised version and claims that it will take a long time, but the punctuated equilibrium bites, the past vendors are struggling, practices have co-evolved and the old way is now legacy. Many past vendors start the death spiral in the hope of recapturing their glory days and their demise accelerates. This form of disruption was predictable but for most it wasn't anticipated. Of course, the world has moved on to inventing novel and new things built upon these industrialised components (point 4), and new forms of organisation appear based upon those co-evolved practices. A next generation of future giants has arisen.
Whether we notice them depends upon whether the cycle is localised at a micro economic scale to a specific industry or whether the component is so vastly connected that the cycle appears at a macro economic scale. In any case, the cycle continues. Ba da boom. Ba da bing.

Figure 130 - Different types of disruption


Dealing with disruption

The problem is there isn't one form of disruption and hence there isn't one way to deal with it. The techniques and methods you need to use vary. Unfortunately, if you don't have a map of your landscape and you don't understand the basic climatic patterns then you don't have a great deal of chance of separating them. For most people, it's all the same thing and they end up facing off against highly predictable disruption without any preparation or planning. In 2008, this was common in the computing industry. I'd end up in many arguments in boardrooms pointing out that cloud computing (the shift from product to utility) was inevitable and not a question of "if" but "when", and that the "when" was starting now. I'd explain the impacts and how they were going to be disrupted and people would retort with product examples; they'd start discussing the current situation of Apple vs Blackberry and how Blackberry could counter. These weren't even remotely the same thing. Don't mix the unpredictable world of product vs product substitution with the predictable world of product to utility substitution. In figure 131, I've provided the three main types of disruption and the characteristics associated with each.

Figure 131 - Dealing with disruption


From the above :-

The genesis of powered flight was with Félix du Temple de la Croix in 1857. What! No Wright Brothers? Well, they came later but since my American cousins get very sensitive on this topic, we will skip ahead to much later. I'll just note Eliot Sivowitch's Law of Firsts - "whenever you discover who was first, the harder you look you'll find someone who was more first" - and hence the first electric lightbulb was Joseph Wilson Swan's, the person who actually first drew electricity from a thundercloud was Thomas-François Dalibard and, when it comes to the telephone, bar shenanigans with patent clerks, we owe a debt to Elisha Gray. Cue endless arguments and gnashing of teeth.

Let us however stick with the Wright Brothers, who invented powered flight to end all wars. The first planes sold to the US Army in 1909 were observation planes and the common idea of the time was that “with the perfect development of the airplane, wars will be only an incident of past ages.” There was no existing practice associated with aircraft, there was some inertia to their adoption (similar to British concerns over the machine gun prior to World War I) and it was notoriously difficult to predict what would happen. Rather than airplanes ending all wars because no army could gain an advantage over another (all movements could be observed from the air), a rather different path of development occurred and bombs and machine guns were soon attached. With the genesis of an act like powered flight, it's difficult to anticipate what might change and your only defence is to adapt quickly. In such circumstances a cultural bias towards action, i.e. quickly responding to the change, is essential. With the example of Apple vs RIM (i.e. Blackberry), similar characteristics exist. There are existing practices but a different type of smartphone product does not significantly change this. Again, the main way to react is to spot the change quickly (through horizon scanning) and to have developed a culture with a bias towards action. These sorts of change are notoriously difficult to defend against. In the case of cloud computing, there were high levels of inertia and co-evolution of practice to tackle. However, the change was highly predictable and trivial to defend against. Despite this, most failed to react.

Some final thoughts

We've covered a lot in this section from refinement of the peace, war and wonder cycle to the introduction of different climatic patterns, to the manner in which organisations evolve and the different forms of disruption. There are a couple of things I want to call out in particular.

Do the states of peace, war and wonder really exist? 
No, it's just a model and all models are wrong. The model appears to predict secondary effects such as organisational change, it is developed from first principles of competition and there seems to be historical precedent. However, it’s no more than appearance at the moment until such time as I can confirm future points of industrialisation and even then I still have the question of whether that was just luck.

Do the states just affect individual activities in industries or could they have a wider effect?
The cycle’s effect depends upon how connected the components are. If they’re involved in many value chains then this can have a pronounced macro economic effect. When considering economic systems we have to look at them as we do with biological systems and consider how an ecosystem reacts to a change and how competition will drive that change throughout the system. 

Can we anticipate organisational change?
We know roughly when such changes should occur (from weak signals) though we cannot detail what the impact will be, as in whether new doctrine will appear or which existing doctrine will be refined. Population dynamics on companies is a non-trivial exercise due to inherent bias in questions and responses. However, we can at least say something reasonable about the process of change and its likelihood.

Is life getting faster?
Certainly the evolution of acts appears to have accelerated but don’t confuse that with a higher rate of innovation. It’s highly questionable whether we have become more innovative as a species though we certainly can’t rely on novel things to create a differential for long. Furthermore be careful to avoid confusing multiple and coincidental points of industrialisation with a general change of speed.

There are also a number of climatic patterns which I’ve mentioned that are worth noting.
  • Evolution of a communication mechanism can increase the speed of evolution overall.
  • Inertia increases with past success.
  • Inertia kills
  • Change is not always linear
  • Shifts from product to utility tend to exhibit a punctuated equilibrium
  • There are many different forms of disruption (two broad classes are predictable vs non predictable)
  • A point of “war” is associated with organisational change.

I’ve marked off all the patterns we’ve covered so far in orange in figure 132.

Figure 132 - Climatic patterns


An exercise for the reader

The first thing I'd like you to do is to look at figure 124 - the change from traditional to next generation - and determine which type your organisation is. Are you adopting those principles or is there some context specific reason why you cannot? Have you challenged this?

The second thing I'd like you to do is looking at your maps, start to consider what sort of inertia you might face in changing the landscape. Use figure 129 - classifying inertia - as a guide.

Lastly, I want you to try and discover components in your value chains that are on the cusp of becoming industrialised and shifting from product forms. Ask yourself are you prepared for such a rapid change? What planning have you in place? How will you deal with the inertia?

---

Next Chapter in Series [to be published soon]
GitBook link [to be published soon]
More on mapping from the Leading Edge Forum.
First in series.

This post is provided as Creative commons Attribution-ShareAlike 4.0 International by the original author, Simon Wardley, a researcher for the Leading Edge Forum.




Tuesday, December 13, 2016

Anticipation

Chapter 9

[The more up to date version is kept on Medium]

My map of mapping that I produced in chapter 8 has multiple flows within it. My focus was on teaching people how to map and the flow from my purpose to my scope to the user and their desire to learn mapping and my desire to survive financially (see figure 94).

figure 94 - flows within maps.


The flaw in the above is that it assumes there is a market of users with an inherent desire to learn mapping. Not only did I find this to be quite unlikely, it just wasn't the case. What people were aiming for was some way to create an advantage over others. Mapping was just a tool to achieve this.

If you look at the component for "advantage over competitors" then I've identified three areas of interest - the learning of context specific play (i.e. outsmarting others), the application of doctrine (i.e. being more effectively organised than others) and the anticipation of change (i.e. seeing change before others). Maybe you can think of more; if you do then by all means update the map and share. I've highlighted the flow through these components in figure 95.

figure 95 - three aspects of advantage


Naturally, the entire map is evolving and so the benefit of doctrine will decline as more companies adopt them. Fortunately we have that pipeline of context specific gameplay and lots more discovery. In this chapter however, we're going to turn our attention to anticipation. Back in early 2008, I had become quite a dab hand at using maps and common economic patterns to anticipate change. I was regularly invited to speak at huge events and published articles where I would declare with sleight of hand that over the next decade we would see :-
  • Rapid increases in the rate of innovation on the web.
  • New entrants dominating IT
  • High rates of disruption in the IT markets
  • Radical changes in IT practices.
  • Higher levels of efficiency within IT.
  • Widespread adoption of cloud services.
  • Increasing organisational strain especially focused on IT creating a necessity for organisational change.
In 2016 we can see that this is happening but back in 2008 I was often greeted with a few gasps of wonder and a cacophony of derision and dismissal that things would change. I think I've been tagged with every label from "idiot" to "rubbish" to "gibberish" to "unrealistic". The most vociferous came from the worlds of established vendors, enterprises, analysts and strategy consultants who had oodles of inertia to such changes. Fortunately, the gasps of wonder were enough to pick up some advisory work and keep me booking a few gigs.

I need to be clear. I don't claim to have mystical powers of anticipation, a time machine, some great intellect or a crystal ball. In fact, I'm a lousy prognosticator and a very normal sort of person. What I'm good at is taking pre-existing patterns that are in the wild and repeating them back to everyone. It's more of the "I predict that the ball you've thrown in the air will fall to the ground" or the "I predict the army currently walking off the cliff will lose the battle" kind. A basic understanding of the landscape can be used to remarkable effect with an audience of executives that lacks this. To begin our journey into anticipation we're going to have to start with areas of predictability.

Not all parts of the map are equally predictable.

Every component that inhabits the uncharted space is uncertain by definition. As it evolves, our understanding and certainty about it grow until it becomes familiar. At the same time it becomes more widespread and ubiquitous in its market, and therefore any differential value it creates declines. When we talk about the uncharted space, we're discussing things about which we don't really know what we need. They are inherently uncertain and risky but at the same time they are the sources of future value and difference. As this component evolves over an unspecified amount of time (evolution can't be measured over time directly), it becomes more defined, more certain and less risky. We increasingly know what we need.

When it comes to predictability, there are three aspects we need to consider - the what, the when and the who. From the above, the predictability of what is not uniform; it varies from genesis (a low predictability of what) to commodity (which has a high predictability). In figure 96, I've taken a single activity A from its early appearance A[1] to some future version A[1+x] that has evolved through x iterations and a number of diffusion curves. The same activity but with different characteristics. You could pick electricity or computing; they all followed this path.

Figure 96 - predictability of what


So, we know the predictability of what is not constant across our map. How about who and when? Unfortunately, when it comes to actors' actions, the predictability of who is going to take a specific action is notoriously low. There are ways to cheat the system but these rely on weak signals.

Cheating the system

Back around 2008, I was asked whether the growing field of social media could be used to identify which companies were interested in acquiring others. The idea was very simple: if there were lots of increasing connections between companies on a growing service such as LinkedIn, does that mean the companies are talking to each other? The problem is that such connections could be a signal of people wanting to jump ship or some conference that the companies' employees met up at. What we really wanted to know was when executives were talking to each other and unfortunately in those days, few executives were using social media tools like LinkedIn. They certainly weren't linking up with competitor CEOs prior to an acquisition.

Fortunately, executives also like private jets. The tail numbers of private jets and company ownership were easily accessible and so were the flight plans. By monitoring the movement of private jets and looking for disturbances in the data, i.e. the repeated landing of the jet of one company in the same area in close proximity in time to the jet of another company, ideally in a location where neither had headquarters (the attempt to meet "off site"), then it would indicate that executives were meeting. This is an example of a weak signal which turned out to be surprisingly effective. Companies tend to spend an awful lot of time and money trying to secure corporate M&A information and then leak the same information like a sieve through some form of weak signal.
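To make the idea concrete, here's a minimal sketch of how such a co-location signal might be extracted. The companies, airports and timestamps are entirely invented, and the real work of mapping tail numbers to owners and parsing flight plans is assumed away:

```python
from datetime import datetime, timedelta

# Hypothetical flight-plan records: (company, airport, landing time).
# In practice these would come from tail-number registries and filed
# flight plans; everything here is invented for illustration.
landings = [
    ("AcmeCorp", "TEB", datetime(2008, 3, 1, 9, 30)),
    ("BetaInc",  "TEB", datetime(2008, 3, 1, 11, 0)),
    ("AcmeCorp", "TEB", datetime(2008, 3, 8, 10, 0)),
    ("BetaInc",  "TEB", datetime(2008, 3, 8, 10, 45)),
    ("AcmeCorp", "SJC", datetime(2008, 3, 2, 14, 0)),
]

def coincident_landings(records, window=timedelta(hours=3)):
    """Count how often two different companies land at the same
    airport within `window` of each other."""
    pairs = {}
    for i, (c1, a1, t1) in enumerate(records):
        for c2, a2, t2 in records[i + 1:]:
            if c1 != c2 and a1 == a2 and abs(t1 - t2) <= window:
                key = (*sorted((c1, c2)), a1)
                pairs[key] = pairs.get(key, 0) + 1
    return pairs

# Repeated co-location, ideally away from either headquarters, is the
# weak signal: here Acme and Beta coincide twice at TEB.
signals = coincident_landings(landings)
```

Repetition is what matters: a single coincidence is noise, whereas a recurring pattern at an "off site" location is the sort of disturbance worth investigating.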

Weak signals can be used to anticipate an actor's actions (e.g. before tumble dryers, Russian sailors hanging clothes out to dry used to be a signal that the Russian fleet was about to set sail) but it's often time consuming and demanding work. You usually need to examine a single actor or a small sample of actors rather than an entire market. In general, you have to accept that the predictability of who is going to take a specific action is low. However, though you cannot easily predict individual actors' actions, we do know that there are aggregated effects caused by all actors. Evolution itself is a consequence of demand and supply competition and the Red Queen forcing us to adapt. We do know that if there is competition then components will evolve. We might not be able to say who will produce the more evolved form but we can say what will happen - it will evolve! This leads to the final aspect - when? 

Unfortunately, evolution cannot be anticipated over time or adoption. Hence at first glance, the predictability of when things will happen would seem to be low. Fortunately there are conditions, weak signals and patterns that can help us cheat this a bit. 

Conditions, weak signals and patterns

Let us consider the evolution of an act from a product to a commodity. In order to achieve this, a number of conditions need to be met. The concept of providing the act as a commodity must exist. The technology to achieve this must be available. The act must be suitably well defined and widespread. Finally, you need a willingness or attitude amongst consumers to adopt a new model. This latter part is normally represented by dissatisfaction with the existing arrangement, e.g. "this product is costly". The four conditions - concept, suitability, technology and attitude - are essential for any change of state whether custom built to product or product to commodity. In 2008, the idea of utility compute had been around since the 1960s. The technology to achieve utility compute was clearly available; I had been running my own private version years earlier. Compute itself was suitable for such a change, being widespread and well defined. Finally, there was the right sort of attitude with clear concerns and dissatisfaction with the expense of existing systems. The four conditions clearly indicated a change was possible.

There are also weak signals. In chapter 7, I talked about the use of publication types to help elucidate the evolution curve. Those publication types form the basis of a weak signal. By examining the wording change in publications then you can estimate whether we're likely to be approaching a state change or not. For example a rapid increase in publications focused on use (point 1 in figure 97 below) and a decline in publications on operation, maintenance and feature differentiation (point 2) implies that we're approaching the point of stability and a cross over into the more commodity world.

Figure 97 - weak signals and evolution
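The publication-type signal itself can be approximated crudely by classifying titles on their wording and comparing counts between periods. The phrase lists and titles below are invented purely to illustrate the mechanics:

```python
# A toy illustration of the publication-type weak signal: classify titles
# by their wording and compare counts between two periods. The phrase
# lists and titles are invented for illustration only.
CATEGORIES = {
    "use": ["how to use", "guide to", "getting started"],
    "operation": ["maintenance", "operating", "feature comparison"],
}

def classify(title):
    t = title.lower()
    for category, phrases in CATEGORIES.items():
        if any(p in t for p in phrases):
            return category
    return "other"

def category_counts(titles):
    counts = {}
    for title in titles:
        c = classify(title)
        counts[c] = counts.get(c, 0) + 1
    return counts

earlier = ["Operating your server", "Feature comparison of hosting products",
           "Server maintenance tips"]
later = ["How to use cloud compute", "Getting started with utility compute",
         "Guide to cloud migration", "Server maintenance tips"]

# Rising "use" publications alongside falling "operation" publications is
# the weak signal of an approaching state change (points 1 and 2 above).
rising_use = category_counts(later).get("use", 0) > category_counts(earlier).get("use", 0)
falling_ops = category_counts(later).get("operation", 0) < category_counts(earlier).get("operation", 0)
```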


Lastly, there are known patterns which can help us to predict when things will change. For example, in chapter 3 we discussed how efficiency enables innovation through componentisation effects. When a component evolves to more of a commodity (or a utility service) we can anticipate that this will cause a rapid rise in novel things built upon it i.e. the genesis of new acts. We won't be able to say what those novel things are but we can say (in conjunction with the weak signal above) when we're likely to see a rapid increase in this genesis.  So, let us put these lessons on anticipation onto a map containing a single activity that is evolving. Starting with figure 98 then:-

Figure 98 - Anticipation on a map


Point 1 - activities in the uncharted space are highly uncertain in terms of what is needed. They have a low predictability of what - a low p(what). Despite the risk due to a low p(what), they also have the highest future potential value. It's a space in which you have to gamble and experiment but it represents future opportunity.

Point 2 - activities will evolve. The path of evolution can be described, hence p(what) is high. We know that custom built systems under competition will lead to products. However, when this will happen is unclear - the predictability of when is low, a low p(when). It depends upon individual actors' actions.

Point 3 - there are weak signals we can use to cheat p(when) such as publication types. Whilst the signals won't give us a definitive answer (the two execs travelling to the same location in their corporate jets might just be friends going on holiday) it can give us an indication.

Point 4 - there are conditions that need to be met before something can evolve - concept, suitability, technology and attitude.

Point 5 - activities in the industrialised state are well defined (in terms of our interface to them such as the plug and the socket for electricity). They give the appearance of being well known - a high p(what) - low risk and have little differential value.

Point 6 - the introduction of industrialised forms will encourage new activities to be built upon them - genesis begets evolution begets genesis. The predictability of what will happen - the appearance of new things -  is high. However,  as noted in point 1, the predictability of what those new things will be is low.  We can refine our estimate of when this will happen through weak signals.

The point of the above is to show that not everything that occurs is quite as random as some would make out. There are things we can anticipate. I use the terms p(what) and p(when) when discussing our ability to predict something. A high p(what) means we can accurately anticipate what a change will be. A low p(what) means we can't but we still might get lucky. We're now going to build on this by introducing two more economic patterns - co-evolution and the cycle of peace, war and wonder.

Climatic Pattern : Co-evolution

In 2016, the current rage is all about "serverless" computing. I'm going to exploit this fortuitous circumstance to explain the concept of co-evolution but to begin with we need to take a hike back through time to the 80s/90s. Back in those days, computers were very much a product and the applications we built used architectural practices that were based upon the characteristics of a product, in particular mean time to recovery (MTTR).

When a computer failed, we had to replace or fix it and this would take time. The MTTR was high and architectural practices had emerged to cope with this. We built machines using N+1 (i.e. redundant components such as multiple power supplies). We ran disaster recovery tests to try and ensure our resilience worked. We cared a lot about capacity planning and scaling of single machines (scale up). We cared an awful lot about things that could introduce errors and we had change control procedures designed to prevent this. We usually built test environments to try things out before we were tempted to alter the all important production environment.

But these practices didn’t just magically appear overnight, they evolved through trial and error. They started as novel practices, then more dominant but divergent forms emerged until we finally started to get some form of consensus. The techniques converged and good practice was born. Ultimately these were refined and best architectural practice developed. In such confident days, you’d be mocked for not having done proper capacity planning as this was an expected norm.

Our applications needed architectural practices that were based upon (needed) compute which was provided as a product. The architectural norms that became “best practice” were N+1, scale up, disaster recovery, change control and testing environments and these were ultimately derived from the high MTTR of a product. I’ve shown this evolution of practice in the map below. 

Figure 99 — Evolution of Architectural Practice



Normally with maps I just use the description of evolution for activities; it's exactly the same with practice but with slightly different terms, e.g. novel, emerging, good and best rather than genesis, custom, product and commodity. For background on this, see figure 10 (Chapter 2).

The thing is, compute evolved. As an activity then compute had started back in the 1940s in that uncharted space (the genesis of the act) where everything is uncertain. We then had custom built examples (divergent forms) and then products (convergence around certain characteristics with some differentiation between them). However, compute by the early 2000s had started to transform and become more commodity like with differentiation becoming far more constrained, the activity itself becoming far more defined. In this world a server was really about processor speed, memory, hard disk size, power consumption and how many you could cram in a rack. In this world we built banks of compute and created virtual machines as we needed them. Then we got public utility forms with the arrival of AWS EC2 in 2006.

The more industrialised forms of any activity have different characteristics to early evolving versions. With computing infrastructure then utility forms had similar processing, memory and storage capabilities but they had very low MTTR. When a virtual server went bang, we didn’t bother to try and fix it, we didn’t order another, we just called an API and within minutes or seconds we had a new one. Long gone were the days that we lovingly named our servers, these were cattle not pets.

This change of characteristics enabled the emergence of a new set of architectural principles based upon a low MTTR. We no longer cared about N+1 and resilience of single machines, as we could recreate them quickly if failure was discovered. We instead designed for failure. We solved scaling by distributing the workload, calling up more machines as we needed them — we had moved from scale up to scale out. We even reserved that knowing chortle for those who did “capacity planning” in this world of abundance.

Figure 100 — Emergence of a new practice



We started testing failure by the constant introduction of error — we created various forms of chaos monkeys or masters of disasters that introduced random failure into our environments. One off disaster recovery tests were for the weak, we constantly adapted to failure. With a much more flexible environment, we learned to roll back changes more quickly, we became more confident in our approaches and started to use continuous deployment. We frowned at those that held on to the sacred production and less hallowed testing environments. We started to mock them.
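The recreate-rather-than-repair mindset can be sketched in a few lines: a pool of disposable instances, a chaos process that randomly kills them, and a reconciler that simply launches replacements. All names and numbers are illustrative, not any real tool's API:

```python
import random

# A minimal sketch of "design for failure": a pool of disposable instances,
# a chaos process that randomly kills them, and a reconciler that simply
# launches replacements instead of repairing anything.
class Pool:
    def __init__(self, size):
        self.next_id = 0
        self.instances = set()
        self.reconcile(size)

    def launch(self):
        self.instances.add(f"i-{self.next_id}")
        self.next_id += 1

    def kill_random(self, rng):
        # The chaos monkey: terminate an arbitrary instance.
        self.instances.discard(rng.choice(sorted(self.instances)))

    def reconcile(self, desired):
        # Recreate, don't repair: launch whatever is missing.
        while len(self.instances) < desired:
            self.launch()

rng = random.Random(42)
pool = Pool(size=5)
for _ in range(10):
    pool.kill_random(rng)    # constant, deliberate failure
    pool.reconcile(desired=5)
```

The point of the constant failure loop is that recovery stops being an event and becomes routine; these are cattle, not pets.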

These novel practices — scale out, design for failure, chaos engines and continuous deployment amongst others — were derived from an increasingly low MTTR environment and such practices were simply accelerated by utility compute environments. Our applications were built with this in mind. The novel practices spread becoming emergent (different forms of the same principles) and have slowly started to converge with a consensus around good practice. We even gave it a name, DevOps. It is still evolving and it will in turn become best architectural practice.

What happened is known as co-evolution i.e. a practice co-evolves with the activity itself. This is perfectly normal and happens throughout history. Though steel making itself industrialised, we can still produce swords (if we wish) but we have in most part lost the early practice of forging swords. One set of practices has been replaced with another. I’ve shown the current state of co-evolution in compute in the map below. The former best architectural practice we now call “legacy” whilst the good (and still evolving) architectural practice is called “devops”.

Figure 101 - Co-evolution of DevOps



This transformation of practice is also associated with inertia, i.e. we become used to the “old” and trusted best practice (which is based upon one set of characteristics) and the “new” practice (based upon a more evolved underlying activity) is less certain and requires learning and investment. Hence we often have inertia to the underlying change due to governance. This was one of the principal causes of inertia to cloud computing.

Furthermore, any applications we had which were based upon the “old” best practice lacked the benefits of this new more evolved world. These benefits of industrialisation always include efficiency, speed of agility and speed of development in building new things. Our existing applications became our legacy to our past way of doing things. They needed re-architecting but that involves cost and so we try to magic up ways of having the new world but just like the past. We want all the benefits of volume operations and commodity components but using customised hardware designed just for us! It doesn’t work; the Red Queen eventually forces us to adapt. We often fight it for too long though.

This sort of co-evolution and the inevitable dominance of a more evolved practice is highly predictable. We can use it to anticipate new forms of organisations that emerge as well as anticipate the changes in practice before they hit us. It’s how at Canonical in 2008 we knew we had to focus on the emerging DevOps world and to make sure everyone (or as many as possible) who was building in that space was working on Ubuntu - but that's a later chapter. It's enough to know that we exploited this change for our own benefit. As one CIO recently told me, one day everyone was talking about RedHat and the next it was all Cloud plus Ubuntu. That didn’t happen by accident.

Complicating the picture a bit more - the rise of Serverless

Of course, the map itself doesn’t show you the whole picture because I’ve deliberately simplified it to explain co-evolution. Between the application and the architectural practice we used for computing infrastructure layer is another layer — the platform. Now platform itself is evolving. At some point in the past there was the genesis of the first platforms. These then evolved to various divergent but still uncommon custom built forms. Then we had convergence to more product forms. We had things like the LAMP stack (Linux, Apache, MySql and Perl or Python or PHP — pick your poison).

Along with the architectural practice around computing infrastructure, there were also architectural practices around the platform. These were based upon the characteristics of the platform itself, from coding standards (i.e. nomenclature) to testing suites to performance testing to object orientated design within monolithic program structures. The key characteristic of the platform was how it provided a common environment to code in and abstracted away many of the underpinnings. But it did so at a cost: that same shared platform.

As I've mentioned before, a program is nothing more than a high level function which often calls many other functions. However, in general we encoded these functions altogether as some monolithic structure. We might separate out a few layers in some form of n-layer design — a web layer, a back end, a storage system — but each of these layers tended to have relatively large programs. To cope with load, we often replicated the monoliths across several physical machines. Within these large programs we would break them into smaller functions for manageability but we would less frequently separate these functions onto different platform stacks because of the overhead of running all those stacks. You wouldn’t want a machine sitting there with an entire platform stack to run one function which was rarely called. It was a waste! In the map below I’ve added the platform and the best practice above the platform layer.

Figure 102 — Evolution of Architectural Practice (platform)



In 2005, the company I ran was already using utility like infrastructure. We had evolved early DevOps practices — distributed systems, continuous deployment, design for failure — and this was just the norm for us. However, we had also produced the utility coding platform known as Zimki, which happened to allow developers to write entire applications, front and back end in a single language — JavaScript. As a developer you just wrote code, you were abstracted away from the platform itself, you certainly had no concept of servers. That every function you wrote within your program could be running in a different platform stack was something you didn’t need to know. From a developer point of view you just wrote and ran your program and it called other functions. However, this environment enabled some remarkable new capabilities from distribution of functions to billing by function. The change of platform from product to utility created new characteristics that enabled new architectural practices to emerge at this level. This is co-evolution. This is normal. These new practices, I’ve nicknamed FinDev for the time. The “old” best architectural practices, well, that’s legacy. I’ve drawn a map to show this change.

Figure 103 — Co-Evolution of Architectural Practice (platform)



The more mundane of these architectural changes is that it encourages componentisation, the breaking down of complex systems into reusable discrete components provided as services to others. In Zimki, every function could be exposed as a web service through a simple “publish” parameter added to the function. Today, we use the term micro services to describe this separation of functions and provision as web services. We’re moving away from the monolith program containing all the functions to a world of separated and discrete functions. A utility platform just enables this and abstracts the whole underlying process from the developer.
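As a rough illustration of the idea (and emphatically not Zimki's actual API), publishing a function as a callable service can be as simple as registering it by name and routing requests to it:

```python
# A hypothetical sketch, not Zimki's real API: expose individual functions
# as callable services by registering them, in the spirit of the "publish"
# parameter described above. All names are invented.
REGISTRY = {}

def publish(fn):
    """Mark a function as an externally callable web service."""
    REGISTRY[fn.__name__] = fn
    return fn

@publish
def price(quantity, unit_cost):
    return quantity * unit_cost

def dispatch(name, **kwargs):
    # Stands in for the HTTP routing a utility platform would provide.
    if name not in REGISTRY:
        raise KeyError(f"no such service: {name}")
    return REGISTRY[name](**kwargs)
```

Each published function can then live on whatever stack the platform chooses; the developer only sees names and calls.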

The next mundane point is that it encourages far greater levels of re-use. One of the problems with the old object orientated world was that there was no effective communication mechanism to expose what had been built. You’d often find duplication of objects and functions within a single company, let alone between companies. Again, exposing functions as web services encourages this to change. That assumes someone has the sense to build a discovery mechanism such as a service register.

Another, again rather trivial, point is that it abstracts the developer further away from the issues of underlying infrastructure. It’s not really “serverless” but more “I don’t care what a server is”. As with any process of industrialisation (a shift from product to commodity and utility forms), the benefits are not only efficiency in the underlying components but acceleration in the speed at which I can develop new things. As with any other industrialisation there will be endless rounds of inertia caused by past practice. Expect lots of gnashing of teeth over the benefits of customising your infrastructure to your platform and … just roll the clock back to infrastructure as a service in 2007 and you’ll hear the same arguments in a slightly different context.

Anyway, back to Old Street (where the company was) and the days of 2005. Using Zimki, I built a small trading platform in a day or so because I was able to re-use so many functions created by others. I didn’t have to worry about building a platform and the concept of a server, capacity planning and all that “yak shaving” was far from my mind. The efficiency, speed of agility and speed of development are just a given. However, these changes are not really the exciting parts. The killer, the gotcha is the billing by the function. This fundamentally changes how you do monitoring and enables concepts such as worth based development (see chapter 8). Monitoring by cost of function changes the way we work — well, it changed me and I’m pretty sure this will impact all of you. Serverless will fundamentally change how we build business around technology and how you code. Your future looks more like figure 104 (simply take the Co-Evolution of Architectural Practice map from above and remove the legacy lines).
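Billing by function is easy to sketch, if nothing else to show why it changes monitoring: meter every call against the function's name and cost becomes a first-class, per-function figure. The rate and the function below are invented for illustration:

```python
from collections import defaultdict

# A toy sketch of billing by function: every call accrues cost against
# the function's name, so cost becomes a per-function metric you can
# monitor. The rate and the function are invented for illustration.
USAGE = defaultdict(float)

def metered(rate_per_call):
    def wrap(fn):
        def inner(*args, **kwargs):
            USAGE[fn.__name__] += rate_per_call
            return fn(*args, **kwargs)
        return inner
    return wrap

@metered(rate_per_call=0.0001)
def lookup_price(symbol):
    return {"ACME": 10.0}.get(symbol)

for _ in range(100):
    lookup_price("ACME")
# USAGE now attributes cost directly to "lookup_price".
```

Once cost is attributed per function, you can compare what a function costs to run against the worth it generates, which is the doorway to worth based development.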

Figure 104 - the future of platform


So given our knowledge of this climatic pattern, let us add co-evolution onto our map of anticipation - see figure 105 - adding in point 7 for co-evolution. I've generalised the map for any activity A, starting from an early version A[1] to some later more evolved act A[1+x] after x iterations each with their own diffusion curve. This leads to both co-evolved practice B and new forms of activities C.

Figure 105 - expanding anticipation with co-evolution


The above is remarkably powerful and allows us to introduce our first economic cycle known as peace, war and wonder.

Climatic Pattern : Peace, War and Wonder

Let us consider the path by which something evolves. We first start with the appearance of this novel thing, its genesis. The component is highly uncertain, of potential future value and risky. We don't know who will introduce it, whether it will go anywhere or what it will transform into. But, it's a potential source of Wonder. It may well disappear into the bin of history along with refrigeration blankets or become a soaring success. We just don't know. If it does find a use then supply and demand competition will start to cause its evolution. We will see custom built examples in other companies and eventually products introduced when the act becomes ubiquitous and well defined enough to support this. 

The nature of competition will now shift to suppliers of products with constant feature improvement. It's no longer about exploration of the uncharted space but about defining, refining and learning about the act. This evolution will continue with constant release of ever more improved versions of the act - a better phone, a better television.  It is a time of high margin, increasing understanding of customer needs, the introduction of rental services and relative competition i.e. a jostle for position between giant competitors. Disruptive change caused by new entrants will occur  but such product vs product substitution is in the minority as most change is gradual and sustaining of those competing companies.  Because of their success, inertia to change builds up within those giants whilst the activity itself continues to evolve becoming more widespread, better understood and declining in differential value. In the latter stages customers can even start to question whether they are getting a fair benefit for what they are paying but overall, this is a time of Peace in that industrial ecosystem. Whilst we cannot say who will win or when things will evolve from one version to another, we can say that evolution will continue if there is competition. We have a high predictability of "what" will happen with evolution ... it will evolve from product to commodity!

The successful activity has now become commonplace and "well understood". It is now suitable for more commodity or utility provision. Assuming that the concept and technology exist to achieve this, the likelihood of more industrialised forms increases. However, the existing giants have inertia to this change and so it is new entrants, not encumbered by pre-existing business models, that introduce the more commodity form. These new entrants may include former consumers who have gained enough experience to know that this activity should be provided in a different way, along with the skills to do it. In the case of computing infrastructure, it was an online bookseller which heavily used computing.

This more commodity form (especially utility services) is often dismissed by most existing customers and suppliers of products who have their own inertia to change. Customers see it as lacking what they need and not fitting in with their norms of operating i.e. their existing practice. However, new customers appear and take advantage of the new benefits of high rates of agility, speed of genesis of new higher order activities and efficiency. Novel practices and norms of operating also co-evolve and start to spread. 

Customers who were once dismissive start to trial the services and pressure mounts for adoption due to the Red Queen. A trickle rapidly becomes a flood. Past giants who had been lulled into a sense of gradual change by the previous peaceful stage of competition see an exodus. Those same customers who were only recently telling these past giants that they wouldn’t adopt these services, that it didn’t fit their needs and that they needed more tailored offerings like the old products have adapted to the new world. The old world of products and associated practices is literally crumbling away. The new entrants are rapidly becoming the new titans. The former giants have old models that are dying and little stake in this future world. There is little time left to act. The cost to build equivalent services at scale to compete against the new titans is rapidly becoming prohibitive. Many past giants now face disruption and failure. Unable to invest, they often seek to reduce costs in order to return profitability to the former levels they experienced in the peace stage of competition. Their decline accelerates. This stage of competition is where disruptive change exceeds sustaining change; it has become a fight for survival and it is a time of War with many corporate casualties. This period of rapid change is known as a punctuated equilibrium.

The activity that is now provided by commodity components has enabled new higher order activities. Things that were once economically unfeasible now spread rapidly. Nuts and bolts beget machines. Electricity begets television. These new activities are by definition novel and uncertain. Whilst they are a gamble and we can’t predict what will happen, they are also potential sources of future wealth. Capital rapidly flows into these new activities. An explosion of growth in new activities and new sources of data occurs. The rate of genesis appears breathtaking. For an average gas lamp lighter there are suddenly electric lights, radio, television, tele-typing, telephones, fridges and all manner of wondrous devices in a short time span. We are back in the stage of Wonder.

There’s also disruption as past ways of operating are substituted – gas lamps to electric lights. These changes are often indirect and difficult to predict, for example those that are caused by reduced barriers to entry. The fear that the changes in the previous stage of war (where past giants fail) will cause mass unemployment often lessens because the new industries (built upon the new activities we could not have predicted) will form. Despite the maelstrom it is a time of marvel and of amazement at new technological progress. Within this smorgasbord of technological delights, the new future giants are being established.  They will take these new activities and start to productise them. We're entering into the peace phase of competition and many are oblivious to the future war. The pattern of peace, war and wonder continues relentlessly. I've marked this onto figure 106. At this point you might go "but that's like the pioneer, settler and town planner diagram" - yes it is. There's a reason I use those terms and call the Town Planners the "war makers".

Figure 106 - Peace, War and Wonder



Now, in this cycle, the War part is the most interesting because we can say an awful lot about it; it has a very high p(what). We know we're likely to see :-
  • Rapid explosion of higher order systems and the genesis of new acts
     e.g. an increase at the rate at which innovative services and products are released to the web.
  • New entrants building these commodity services as past giants are stuck behind inertia barriers caused by past success
    e.g. New entrants dominating IT
  • Disruption of past giants
    e.g. High rates of disruption in the IT markets
  • Co-evolution of practice
    e.g. Radical changes in IT practices.
  • Higher levels of efficiency in provision of underlying components
    e.g. Higher levels of efficiency within IT.
  • Widespread shifts to the new model driven by the Red Queen effect
    e.g. Widespread adoption of cloud services.

Wait, aren't those my predictions! Yes, I told you I was cheating and giving cowardly custard predictions of the kind "the ball that was thrown will fall to the ground". However, not only do we have a high predictability of what, we can also use weak signals from publication types and conditions to give us a pretty decent probability of when. This is what makes the "War" state of change so remarkable. We can anticipate what's going to happen and have a reasonable stab at when, well in advance.

Figure 107 - The War state of economic competition


I've been using this peace, war and wonder cycle in anger for about eight years. There are many things it helps explain, from how organisations evolve to the different types of disruption. However, we will cover that in the next chapter. For now, I just want to share the last time I ran the cycle, more recently in a piece of work for the Leading Edge Forum in 2014. The points of war are the points at which the signals indicate that these particular activities will become more industrialised. Of course, there's a world of product competition beforehand but at least we have an idea of when the changes will hit.

Figure 108 - future points of war


From the above, we can take an example such as intelligent software agents and see the weak signals indicate a world of developing products but quite a long period until the formation of industrialised forms, sometime around 2025 - 2030. However, there will be a future when intelligent software agents will become industrialised and the intelligent agent driving your car will become the same one that powers your future mobile device or your home entertainment system. This will cause all forms of disruption to past giants along with changing practices. Closer to home, we can see that Big Data systems have already entered the war phase and sure enough we have growing utility services in this space. That means product vendors that have dominated that space are in real trouble but probably don't realise it. They will have plenty of inertia to deny that the change will happen.

Predictability and Climatic Patterns

It's worth at this point using the above example (figure 107) to show how many common climatic patterns can be involved. Some of these patterns you will already be familiar with; others we will dive into in more detail as we go through the book. Whilst there are many areas of uncertainty in a map, there's an awful lot we can say about change. From figure 109, then :-

Figure 109 - Climatic patterns and predictability.



Point 1 - everything evolves. Any novel and therefore uncertain act will evolve due to supply and demand competition if it creates some form of success.

Point 2 - success breeds inertia. It doesn't matter what stage of evolution we're at, along with past success comes inertia to change.

Point 3 - inertia increases the more successful the past model is. As things evolve then our inertia to changing them also increases. 

Point 4 - no choice over evolution. The Red Queen effect will ultimately force a company to adapt unless it can somehow remove competition or create an artificial barrier to change.

Point 5 - inertia kills. Despite popular claims, it's rarely lack of innovation that causes companies problems but rather inertia caused by pre-existing business models. Blockbuster out-innovated most of its competitors through the provision of a website, online video ordering and video streaming. Its problem was not lack of innovation but past success caused by a 'late fees' model.

Point 6 - shifts from product to utility tend to demonstrate a punctuated equilibrium. The change across different stages tends to be a rapid, exponential change.

Point 7 - efficiency enables innovation. A standard componentisation effect.

Point 8 - capital flows to new areas of value. A shift from product to more industrialised forms will see a shift of capital from past product companies to utility forms along with investment in those building on top of these services.

Point 9 - coevolution. The shift from product to more industrialised forms is accompanied with a change of practice.

Point 10 - higher order systems create new sources of worth. The higher order systems created, though uncertain, are also the largest sources of future differential value.
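
Point 6 above deserves a small illustration. A punctuated equilibrium behaves like a logistic S-curve: the shift from product to utility looks negligible for years and then completes in a rapid, exponential burst. The sketch below is purely illustrative; the `midpoint` and `steepness` parameters are my own assumptions, not figures from the cycle.

```python
import math

def utility_share(years_in, midpoint=5.0, steepness=1.5):
    """Fraction of a market served by the utility form, `years_in`
    years after the shift begins. A logistic S-curve; the midpoint
    (5 years) and steepness are illustrative assumptions."""
    return 1.0 / (1.0 + math.exp(-steepness * (years_in - midpoint)))

# Early on the change looks negligible, then it completes rapidly,
# which is why incumbents with inertia are so often caught out:
for year in (0, 3, 5, 7, 10):
    print(f"year {year:2d}: {utility_share(year):6.1%}")
```

The point of the shape is that for the first half of the transition almost nothing seems to be happening, which feeds the denial described in Point 5.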

As you develop skill in understanding the landscape and climatic patterns involved, you will find yourself being able to increasingly anticipate common forms of change.

Categorising predictability

Now I've introduced the concepts of anticipation, I'd like to refine the terms p(What), p(When) and p(Who). When I'm talking about predictability, I am talking about how accurately we can predict a change. If we assign a 10% probability to something then a high level of predictability means our 10% assignment is roughly right. A low level of predictability means we just don't have a clue: it could be 10%, 0.1% or 99%. We literally have no idea. A low predictability of what - a low p(What) - means we have no clue what's going to happen. You can still assign a precise probability to the change but it's going to be wildly inaccurate. You're in the land of crystal balls and tarot cards.

When it comes to anticipating change at a market level, it's extremely difficult to identify who is going to make a change. This requires exceptionally focused effort and in general p(Who) is always low. That doesn't mean you can't prepare for changes, especially points of war, i.e. the industrialisation of components. Cloud computing was highly anticipatable and could be prepared for well in advance, despite us not knowing who was going to lead the charge. There is a broad spectrum of changes, from the known to the unknown to the unknowable. I've characterised these in figure 110, using p(What) and p(When) as the axes.

Figure 110 - characterisation of change
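
As a rough sketch of that characterisation, you could score a change on each axis and bucket it. The 0.5 threshold and the labels below are my own illustrative assumptions rather than a formal method from the text:

```python
def characterise_change(p_what, p_when):
    """Bucket a change by how predictable its 'what' and 'when' are,
    each scored from 0.0 (no clue) to 1.0 (highly predictable).
    The 0.5 threshold and labels are illustrative assumptions."""
    if p_what >= 0.5 and p_when >= 0.5:
        return "known"        # e.g. a point of war flagged by weak signals
    if p_what >= 0.5:
        return "unknown when"  # we know it will industrialise, not when
    return "unknowable"       # genesis: crystal balls and tarot cards

# Cloud computing circa the mid-2000s: clear what, weak signals on when
print(characterise_change(0.9, 0.7))
# A genuinely novel act: low predictability on both axes
print(characterise_change(0.1, 0.2))
```

Note that p(Who) is deliberately absent: as discussed above, it stays low regardless of which bucket a change falls into.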


An exercise for the reader

By now I've hopefully given you a basic introduction to anticipation. This is a topic worthy of its own book and there are many methods and techniques to be used here. However, as with the whole cycle of strategy, this is an area that you will refine with practice, learning of common patterns and understanding of the landscape. A map's main purpose is as a learning and communication tool and by applying common patterns to it, you can discuss your anticipation of change with others and allow for that all important challenge. There are still lots of areas of uncertainty on a map and in fact the more you use it, the more you'll find yourself embracing that uncertainty. There are many mechanisms to exploit it.

I've covered quite a bit in this chapter and we've got a bit further to go on this subject. For the time being, I'd like you to take some of your maps and try to anticipate change. Look for shifts from product to commodity. Think about the inertia you might face, the co-evolution of practice that may occur and how it will expose new worlds of wonder.

----

Next Chapter in Series [to be published soon]
GitBook link [to be published soon]