Tuesday, December 13, 2016

Anticipation

Chapter 9

[The more up to date version is kept on Medium]

My map of mapping, which I produced in chapter 8, has multiple flows within it. My focus was on teaching people how to map: the flow from my purpose to my scope to the user, covering both their desire to learn mapping and my desire to survive financially (see figure 94).

Figure 94 - flows within maps.


The flaw in the above is that it assumes there is a market of users with an inherent desire to learn mapping. Not only did I find this quite unlikely, it simply wasn't the case. What people were aiming for was some way to create an advantage over others. Mapping was just a tool to achieve this.

If you look at the component for "advantage over competitors" then I've identified three areas of interest - the learning of context specific play (i.e. outsmarting others), the application of doctrine (i.e. being more effectively organised than others) and the anticipation of change (i.e. seeing change before others). Maybe you can think of more; if you do, then by all means update the map and share. I've highlighted the flow through these components in figure 95.

Figure 95 - three aspects of advantage


Naturally, the entire map is evolving and so the benefit of doctrine will decline as more companies adopt it. Fortunately, we have that pipeline of context specific gameplay and lots more discovery. In this chapter, however, we're going to turn our attention to anticipation. Back in early 2008, I had become quite a dab hand at using maps and common economic patterns to anticipate change. I was regularly invited to speak at huge events and published articles in which I would declare, with a sleight of hand, that over the next decade we would see :-
  • Rapid increases in the rate of innovation on the web.
  • New entrants dominating IT.
  • High rates of disruption in the IT markets.
  • Radical changes in IT practices.
  • Higher levels of efficiency within IT.
  • Widespread adoption of cloud services.
  • Increasing organisational strain, especially focused on IT, creating a necessity for organisational change.
In 2016 we can see that this is happening but back in 2008 I was often greeted with a few gasps of wonder and a cacophony of derision and dismissal that things would change. I think I've been tagged with every label from "idiot" to "rubbish" to "gibberish" to "unrealistic". The most vociferous came from the worlds of established vendors, enterprises, analysts and strategy consultants who had oodles of inertia to such changes. Fortunately, the gasps of wonder were enough to pick up some advisory work and keep booking a few gigs.

I need to be clear. I don't claim to have mystical powers of anticipation, a time machine, some great intellect or a crystal ball. In fact, I'm a lousy prognosticator and a very normal sort of person. What I'm good at is taking pre-existing patterns that are in the wild and repeating them back to everyone. It's more of the "I predict that the ball you've thrown in the air will fall to the ground" or the "I predict the army currently walking off the cliff will lose the battle" kind. A basic understanding of the landscape can be used to remarkable effect with an audience of executives that lacks this. To begin our journey into anticipation we're going to have to start with areas of predictability.

Not all parts of the map are equally predictable.

Every component that inhabits the uncharted space is uncertain by definition. As it evolves, our understanding and certainty about it grows until it becomes familiar. At the same time it becomes more widespread and ubiquitous in its market, and therefore any differential value it creates declines. When we talk about the uncharted space, we're discussing things about which we don't really know what we need. They are inherently uncertain and risky but at the same time they are the sources of future value and difference. As this component evolves over an unspecified amount of time (evolution can't be measured directly over time), it becomes more defined, more certain and less risky. We increasingly know what we need.

When it comes to predictability, there are three aspects we need to consider - the what, the when and the who. From the above, the predictability of what is not uniform; it varies from genesis (a low predictability of what) to commodity (a high predictability of what). In figure 96, I've taken a single activity A from its early appearance A[1] to some future version A[1+x] that has evolved through x iterations and a number of diffusion curves. It is the same activity but with different characteristics. You could pick electricity or computing; they all followed this path.

Figure 96 - predictability of what


So, we know the predictability of what is not constant across our map. How about who and when? Unfortunately, when it comes to actors' actions, the predictability of who is going to take a specific action is notoriously low. There are ways to cheat the system but these rely on weak signals.

Cheating the system

Back around 2008, I was asked whether the growing field of social media could be used to identify which companies were interested in acquiring others. The idea was very simple: if there were lots of increasing connections between companies on a growing service such as LinkedIn, does that mean the companies are talking to each other? The problem is that such connections could be a signal of people wanting to jump ship or of some conference that the companies' employees met up at. What we really wanted to know was when executives were talking to each other and, unfortunately, in those days few executives were using social media and tools like LinkedIn. They certainly weren't linking up with competitor CEOs prior to an acquisition.

Fortunately, executives also like private jets. The tailplane numbers of private jets and company ownership were easily accessible, and so were the flight plans. By monitoring the movement of private jets and looking for disturbances in the data - i.e. the repeated landing of the jet of one company in close proximity, in both location and time, to the jet of another company, ideally somewhere neither had headquarters (the attempt to meet "off site") - we could infer that executives were meeting. This is an example of a weak signal which turned out to be surprisingly effective. Companies tend to spend an awful lot of time and money trying to secure corporate M&A information and then leak the same information like a sieve through some form of weak signal.
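By way of illustration, here's a minimal sketch of that jet-watching scan. It assumes hypothetical flight records (company resolved from the tailplane number, landing airport and time) and a lookup of headquarters locations; none of the names come from a real system.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from itertools import combinations

@dataclass
class Landing:
    company: str        # jet owner, resolved from the tailplane number
    airport: str        # where the jet landed
    time: datetime      # when it landed

def suspicious_meetings(landings, hq, window=timedelta(hours=6)):
    """Flag jets of two different companies landing at the same airport
    within a short window, away from either company's headquarters."""
    flags = []
    for a, b in combinations(landings, 2):
        if (a.company != b.company
                and a.airport == b.airport
                and abs(a.time - b.time) <= window
                and a.airport not in (hq.get(a.company), hq.get(b.company))):
            flags.append((a.company, b.company, a.airport, min(a.time, b.time)))
    return flags
```

A single co-location means little; it's the repeated pattern for the same pair of companies that forms the disturbance in the data we're looking for.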

Weak signals can be used to anticipate an actor's action (e.g. before tumble dryers, Russian sailors hanging out clothes to dry used to be a signal that the Russian fleet was about to set sail) but it's often time consuming and demanding work. You usually need to examine a single actor or a small sample of actors rather than an entire market. In general, you have to accept that the predictability of who is going to take a specific action is low. However, though you cannot easily predict individual actors' actions, we do know that there are aggregated effects caused by all actors. Evolution itself is a consequence of supply and demand competition and the Red Queen forcing us to adapt. We do know that if there is competition then components will evolve. We might not be able to say who will produce the more evolved form but we can say what will happen - it will evolve! This leads to the final aspect - when?

Unfortunately, evolution cannot be anticipated over time or adoption. Hence at first glance, the predictability of when things will happen would seem to be low. Fortunately there are conditions, weak signals and patterns that can help us cheat this a bit. 

Conditions, weak signals and patterns

Let us consider the evolution of an act from product to commodity. In order to achieve this, a number of conditions need to be met. The concept of providing the act as a commodity must exist. The technology to achieve this must be available. The act must be suitably well defined and widespread. Finally, you need a willingness or attitude amongst consumers to adopt the new model. This latter part is normally represented by dissatisfaction with the existing arrangement, e.g. "this product is costly". The four conditions - concept, suitability, technology and attitude - are essential for any change of state, whether custom built to product or product to commodity. In 2008, the idea of utility compute had been around since the 1960s. The technology to achieve utility compute was clearly available; I had been running my own private version years earlier. Compute itself was suitable for such a change, being widespread and well defined. Finally, there was the right sort of attitude, with clear dissatisfaction over the expense of existing systems. The four conditions clearly indicated a change was possible.
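To make that checklist concrete, here's a trivial sketch that encodes the four conditions as judgements you supply; the code can't measure a market for you, it just makes the gate explicit.

```python
def unmet_conditions(concept, suitability, technology, attitude):
    """Return the conditions still blocking a change of state, e.g. from
    product to commodity. An empty list means the change is possible,
    though it says nothing about when it will actually happen."""
    judged = {"concept": concept, "suitability": suitability,
              "technology": technology, "attitude": attitude}
    return [name for name, met in judged.items() if not met]

# Utility compute in 2008: concept (since the 1960s), technology (proven),
# suitability (widespread, well defined), attitude (dissatisfaction with cost).
print(unmet_conditions(True, True, True, True))   # [] - change is possible
```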

There are also weak signals. In chapter 7, I talked about the use of publication types to help elucidate the evolution curve. Those publication types form the basis of a weak signal. By examining the change in the wording of publications, you can estimate whether we're likely to be approaching a state change or not. For example, a rapid increase in publications focused on use (point 1 in figure 97 below) and a decline in publications on operation, maintenance and feature differentiation (point 2) implies that we're approaching the point of stability and a crossover into the more commodity world. A sketch of how this signal could be mechanised follows the figure.

Figure 97 - weak signals and evolution
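As a sketch of that mechanism: given yearly counts of publications per type (the labels below are illustrative, loosely following the chapter 7 publication types), look for years where use-focused publications rise whilst operation and feature-differentiation publications fall.

```python
def crossover_years(counts_by_year):
    """counts_by_year, e.g. {2006: {"use": 12, "operation": 40, "features": 31},
    2007: {"use": 25, "operation": 33, "features": 22}, ...}.
    Returns years where 'use' is rising while the others fall - a weak
    signal that the act is approaching the commodity state."""
    years = sorted(counts_by_year)
    hits = []
    for prev, cur in zip(years, years[1:]):
        p, c = counts_by_year[prev], counts_by_year[cur]
        use_rising = c.get("use", 0) > p.get("use", 0)
        rest_falling = all(c.get(k, 0) < p.get(k, 0)
                           for k in ("operation", "features"))
        if use_rising and rest_falling:
            hits.append(cur)
    return hits
```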


Lastly, there are known patterns which can help us to predict when things will change. For example, in chapter 3 we discussed how efficiency enables innovation through componentisation effects. When a component evolves to more of a commodity (or a utility service), we can anticipate that this will cause a rapid rise in novel things built upon it, i.e. the genesis of new acts. We won't be able to say what those novel things are but we can say (in conjunction with the weak signal above) when we're likely to see a rapid increase in this genesis. So, let us put these lessons on anticipation onto a map containing a single activity that is evolving. Starting with figure 98 then:-

Figure 98 - Anticipation on a map


Point 1 - activities in the uncharted space are highly uncertain in terms of what is needed. They have a low predictability of what - a low p(what). Despite the risk due to a low p(what), they also have the highest future potential value. It's a space for you to gamble and experiment in, but one that represents future opportunity.

Point 2 - activities will evolve. The path of evolution can be described, hence p(what) is high. We know that custom built systems under competition will lead to products. However, when this will happen is another matter - the predictability of when is low, a low p(when). It depends upon individual actors' actions.

Point 3 - there are weak signals we can use to cheat p(when), such as publication types. Whilst the signals won't give us a definitive answer (the two execs travelling to the same location in their corporate jets might just be friends going on holiday), they can give us an indication.

Point 4 - there are conditions that need to be met before something can evolve - concept, suitability, technology and attitude.

Point 5 - activities in the industrialised state are well defined (in terms of our interface to them, such as the plug and socket for electricity). They give the appearance of being well known - a high p(what) - are low risk and have little differential value.

Point 6 - the introduction of industrialised forms will encourage new activities to be built upon them - genesis begets evolution begets genesis. The predictability that new things will appear is high. However, as noted in point 1, the predictability of what those new things will be is low. We can refine our estimate of when this will happen through weak signals.

The point of the above is to show that not everything that occurs is quite as random as some would make out. There are things we can anticipate. I use the terms p(what) and p(when) when discussing our ability to predict something. A high p(what) means we can accurately anticipate what a change will be. A low p(what) means we can't, though we still might get lucky. We're now going to build on this by introducing two more economic patterns - co-evolution and the cycle of peace, war and wonder.

Climatic Pattern : Co-evolution

In 2016, the current rage is all about "serverless" computing. I'm going to exploit this fortuitous circumstance to explain the concept of co-evolution but, to begin with, we need to take a hike back through time to the 80s/90s. Back in those days, computers were very much a product and the applications we built used architectural practices that were based upon the characteristics of a product, in particular its mean time to recovery (MTTR).

When a computer failed, we had to replace or fix it and this would take time. The MTTR was high and architectural practices had emerged to cope with this. We built machines using N+1 (i.e. redundant components such as multiple power supplies). We ran disaster recovery tests to try and ensure our resilience worked. We cared a lot about capacity planning and scaling of single machines (scale up). We cared an awful lot about things that could introduce errors and we had change control procedures designed to prevent this. We usually built test environments to try things out before we were tempted to alter the all important production environment.

But these practices didn’t just magically appear overnight, they evolved through trial and error. They started as novel practices, then more dominant but divergent forms emerged until we finally started to get some form of consensus. The techniques converged and good practice was born. Ultimately these were refined and best architectural practice developed. In such confident days, you’d be mocked for not having done proper capacity planning as this was an expected norm.

Our applications needed architectural practices that were in turn based upon (i.e. needed) compute, which was provided as a product. The architectural norms that became "best practice" - N+1, scale up, disaster recovery, change control and testing environments - were ultimately derived from the high MTTR of a product. I've shown this evolution of practice in the map below.

Figure 99 — Evolution of Architectural Practice



Normally with maps I just use the description of evolution for activities; it's exactly the same with practice but with slightly different terms, e.g. novel, emerging, good and best rather than genesis, custom built, product and commodity. For background on this, see figure 10 (chapter 2).

The thing is, compute evolved. As an activity, compute had started back in the 1940s in that uncharted space (the genesis of the act) where everything is uncertain. We then had custom built examples (divergent forms) and then products (convergence around certain characteristics with some differentiation between them). However, by the early 2000s compute had started to transform and become more commodity like, with differentiation becoming far more constrained and the activity itself becoming far more defined. In this world a server was really about processor speed, memory, hard disk size, power consumption and how many you could cram in a rack. In this world we built banks of compute and created virtual machines as we needed them. Then we got public utility forms with the arrival of AWS EC2 in 2006.

The more industrialised forms of any activity have different characteristics to early evolving versions. With computing infrastructure then utility forms had similar processing, memory and storage capabilities but they had very low MTTR. When a virtual server went bang, we didn’t bother to try and fix it, we didn’t order another, we just called an API and within minutes or seconds we had a new one. Long gone were the days that we lovingly named our servers, these were cattle not pets.

This change of characteristics enabled the emergence of a new set of architectural principles based upon a low MTTR. We no longer cared about N+1 and resilience of single machines, as we could recreate them quickly if failure was discovered. We instead designed for failure. We solved scaling by distributing the workload, calling up more machines as we needed them — we had moved from scale up to scale out. We even reserved that knowing chortle for those who did “capacity planning” in this world of abundance.

Figure 100 — Emergence of a new practice



We started testing failure by the constant introduction of error — we created various forms of chaos monkeys or masters of disasters that introduced random failure into our environments. One off disaster recovery tests were for the weak, we constantly adapted to failure. With a much more flexible environment, we learned to roll back changes more quickly, we became more confident in our approaches and started to use continuous deployment. We frowned at those that held on to the sacred production and less hallowed testing environments. We started to mock them.
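To give a flavour of what "constant introduction of error" means in practice, here is a minimal chaos-monkey-style sketch. It assumes AWS EC2 (the utility compute example from earlier) via the boto3 library and a hypothetical "chaos-group" tag marking the instances that are fair game; it is an illustration, not Netflix's actual Chaos Monkey.

```python
import random

import boto3                                 # AWS SDK for Python
from botocore.exceptions import ClientError

def release_the_monkey(tag_value="web", dry_run=True):
    """Terminate one randomly chosen running instance from a tagged group,
    so resilience is tested continuously rather than in one-off DR tests."""
    ec2 = boto3.client("ec2")
    reservations = ec2.describe_instances(
        Filters=[{"Name": "tag:chaos-group", "Values": [tag_value]},
                 {"Name": "instance-state-name", "Values": ["running"]}]
    )["Reservations"]
    instances = [i["InstanceId"] for r in reservations for i in r["Instances"]]
    if not instances:
        return None
    victim = random.choice(instances)
    try:
        ec2.terminate_instances(InstanceIds=[victim], DryRun=dry_run)
    except ClientError as err:
        # With DryRun=True, AWS reports "would have succeeded" as an error.
        if err.response["Error"]["Code"] != "DryRunOperation":
            raise
    return victim
```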

These novel practices — scale out, design for failure, chaos engines and continuous deployment amongst others — were derived from an increasingly low MTTR environment and such practices were simply accelerated by utility compute environments. Our applications were built with this in mind. The novel practices spread becoming emergent (different forms of the same principles) and have slowly started to converge with a consensus around good practice. We even gave it a name, DevOps. It is still evolving and it will in turn become best architectural practice.

What happened is known as co-evolution, i.e. a practice co-evolves with the activity itself. This is perfectly normal and happens throughout history. Though steel making itself industrialised, we can still produce swords (if we wish) but we have for the most part lost the early practice of forging swords. One set of practices has been replaced with another. I've shown the current state of co-evolution in compute in the map below. The former best architectural practice we now call "legacy" whilst the good (and still evolving) architectural practice is called "devops".

Figure 101 - Co-evolution of DevOps



This transformation of practice is also associated with inertia, i.e. we become used to the "old" and trusted best practice (which is based upon one set of characteristics) whilst the "new" practice (based upon a more evolved underlying activity) is less certain and requires learning and investment. Hence we often have inertia to the underlying change due to governance. This was one of the principal causes of inertia to cloud computing.

Furthermore, any applications we had which were based upon the "old" best practice lacked the benefits of this new, more evolved world. The benefits of industrialisation always include efficiency, speed of agility and speed of development in building new things. Our existing applications became our legacy to our past way of doing things. They needed re-architecting but that involves cost, and so we try to magic up ways of having the new world but just like the past. We want all the benefits of volume operations and commodity components but using customised hardware designed just for us! It doesn't work; the Red Queen eventually forces us to adapt. We often fight it for too long though.

This sort of co-evolution, and the inevitable dominance of a more evolved practice, is highly predictable. We can use it to anticipate new forms of organisations that emerge as well as changes in practice before they hit us. It's how, at Canonical in 2008, we knew we had to focus on the emerging DevOps world and to make sure that everyone (or as many as possible) who was building in that space was working on Ubuntu - but that's a later chapter. It's enough to know that we exploited this change for our own benefit. As one CIO recently told me, one day everyone was talking about RedHat and the next it was all Cloud plus Ubuntu. That didn't happen by accident.

Complicating the picture a bit more - the rise of Serverless

Of course, the map itself doesn't show you the whole picture because I've deliberately simplified it to explain co-evolution. Between the application and the architectural practice we used for the computing infrastructure layer is another layer - the platform. Now, the platform itself is evolving. At some point in the past there was the genesis of the first platforms. These then evolved through various divergent but still uncommon custom built forms. Then we had convergence to more product forms. We had things like the LAMP stack (Linux, Apache, MySql and Perl or Python or PHP - pick your poison).

Along with the architectural practice around computing infrastructure, there were also architectural practices around the platform. These were based upon the characteristics of the platform itself, from coding standards (i.e. nomenclature) to testing suites to performance testing to object orientated design within monolithic program structures. The key characteristic of the platform was how it provided a common environment to code in and abstracted away many of the underpinnings. But it did so at a cost: that same shared platform.

As I've mentioned before, a program is nothing more than a high level function which often calls many other functions. However, in general we encoded these functions altogether in some monolithic structure. We might separate out a few layers in some form of n-layer design - a web layer, a back end, a storage system - but each of these layers tended to contain relatively large programs. To cope with load, we often replicated the monoliths across several physical machines. Within these large programs we would break things into smaller functions for manageability, but we would less frequently separate those functions onto different platform stacks because of the overhead of all those stacks. You wouldn't want a machine sitting there with an entire platform stack just to run one function which was rarely called. It was a waste! In the map below I've added the platform and the best practice above the platform layer.

Figure 102 — Evolution of Architectural Practice (platform)



In 2005, the company I ran was already using utility like infrastructure. We had evolved early DevOps practices - distributed systems, continuous deployment, design for failure - and this was just the norm for us. However, we had also produced the utility coding platform known as Zimki, which allowed developers to write entire applications, front and back end, in a single language - JavaScript. As a developer you just wrote code; you were abstracted away from the platform itself and you certainly had no concept of servers. That every function you wrote within your program could be running in a different platform stack was something you didn't need to know. From a developer's point of view you just wrote and ran your program and it called other functions. However, this environment enabled some remarkable new capabilities, from distribution of functions to billing by function. The change of platform from product to utility created new characteristics that enabled new architectural practices to emerge at this level. This is co-evolution. This is normal. These new practices I've nicknamed FinDev for the time being. The "old" best architectural practices, well, that's legacy. I've drawn a map to show this change.

Figure 103 — Co-Evolution of Architectural Practice (platform)



The more mundane of these architectural changes is that it encourages componentisation: the breaking down of complex systems into reusable discrete components provided as services to others. In Zimki, every function could be exposed as a web service through a simple "publish" parameter added to the function. Today, we use the term microservices to describe this separation of functions and their provision as web services. We're moving away from the monolithic program containing all the functions to a world of separated and discrete functions. A utility platform simply enables this and abstracts the whole underlying process from the developer.
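Zimki itself is long gone, but as a hypothetical modern analogue, here is what such a "publish" mechanism might look like: a decorator that exposes a plain function as a web service, using Flask purely as an illustration.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def publish(fn):
    """Hypothetical analogue of Zimki's 'publish' parameter: expose a
    plain function as a web service at /<function name>."""
    @app.route(f"/{fn.__name__}", methods=["POST"], endpoint=fn.__name__)
    def handler():
        return jsonify(fn(**(request.get_json(silent=True) or {})))
    return fn          # the function stays callable as ordinary code too

@publish
def add(a, b):
    return a + b

# POST {"a": 2, "b": 3} to /add returns 5. The caller neither knows nor
# cares which machine, or how many, actually runs the function.
```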

The next mundane point is that it encourages far greater levels of re-use. One of the problems with the old object orientated world was that there was no effective communication mechanism to expose what had been built. You'd often find duplication of objects and functions within a single company, let alone between companies. Again, exposing functions as web services encourages this to change. That assumes, of course, that someone has the sense to build a discovery mechanism such as a service register.

Another, again rather trivial, point is that it abstracts the developer further away from the issues of underlying infrastructure. It's not really "serverless" but more "I don't care what a server is". As with any process of industrialisation (a shift from product to commodity and utility forms), the benefits are not only efficiency in the underlying components but acceleration in the speed at which I can develop new things. As with any other industrialisation there will be endless rounds of inertia caused by past practice. Expect lots of gnashing of teeth over the benefits of customising your infrastructure to your platform and … just roll the clock back to infrastructure as a service in 2007 and you'll hear the same arguments in a slightly different context.

Anyway, back to Old Street (where the company was based) and the days of 2005. Using Zimki, I built a small trading platform in a day or so because I was able to re-use so many functions created by others. I didn't have to worry about building a platform, and the concept of a server, capacity planning and all that "yak shaving" were far from my mind. The efficiency, speed of agility and speed of development were just a given. However, these changes are not really the exciting parts. The killer, the gotcha, is the billing by function. This fundamentally changes how you do monitoring and enables concepts such as worth based development (see chapter 8). Monitoring by cost of function changes the way we work - well, it changed me and I'm pretty sure it will impact all of you. Serverless will fundamentally change how we build businesses around technology and how you code. Your future looks more like figure 104 (simply take the co-evolution of architectural practice map from above and remove the legacy lines).
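A minimal sketch of what billing by function makes possible, assuming an illustrative per-second rate (not a real tariff): meter the cost of each function so that spend can be set against the worth it generates.

```python
import time
from collections import defaultdict
from functools import wraps

RATE_PER_SECOND = 0.0001        # assumed illustrative price, not a real tariff
costs = defaultdict(float)      # accumulated spend per function

def metered(fn):
    """Accumulate an assumed execution cost per function - the raw
    material for worth based development (chapter 8)."""
    @wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            costs[fn.__name__] += (time.perf_counter() - start) * RATE_PER_SECOND
    return wrapper

@metered
def price_trade(order):
    ...  # illustrative business logic

# After a day's trading, costs["price_trade"] is what that function cost
# to run - ready to be compared against the revenue it produced.
```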

Figure 104 - the future of platform


So given our knowledge of this climatic pattern, let us add co-evolution onto our map of anticipation - see figure 105 - adding in point 7 for co-evolution. I've generalised the map for any activity A, starting from an early version A[1] to some later more evolved act A[1+x] after x iterations each with their own diffusion curve. This leads to both co-evolved practice B and new forms of activities C.

Figure 105 - expanding anticipation with co-evolution


The above is remarkably powerful and allows us to introduce our first economic cycle known as peace, war and wonder.

Climatic Pattern : Peace, War and Wonder

Let us consider the path by which something evolves. We first start with the appearance of this novel thing, its genesis. The component is highly uncertain, of potential future value and risky. We don't know who will introduce it, whether it will go anywhere or what it will transform into. But, it's a potential source of Wonder. It may well disappear into the bin of history along with refrigeration blankets or become a soaring success. We just don't know. If it does find a use then supply and demand competition will start to cause its evolution. We will see custom built examples in other companies and eventually products introduced when the act becomes ubiquitous and well defined enough to support this. 

The nature of competition will now shift to suppliers of products with constant feature improvement. It's no longer about exploration of the uncharted space but about defining, refining and learning about the act. This evolution will continue with the constant release of ever more improved versions of the act - a better phone, a better television. It is a time of high margin, increasing understanding of customer needs, the introduction of rental services and relative competition, i.e. a jostle for position between giant competitors. Disruptive change caused by new entrants will occur but such product vs product substitution is in the minority, as most change is gradual and sustaining of those competing companies. Because of their success, inertia to change builds up within those giants whilst the activity itself continues to evolve, becoming more widespread, better understood and declining in differential value. In the latter stages customers can even start to question whether they are getting a fair benefit for what they are paying but overall, this is a time of Peace in that industrial ecosystem. Whilst we cannot say who will win or when things will evolve from one version to another, we can say that evolution will continue if there is competition. We have a high predictability of "what" will happen with evolution ... it will evolve from product to commodity!

The successful activity has now become commonplace and "well understood". It is now suitable for more commodity or utility provision. Assuming that the concept and technology exist to achieve this, the likelihood of more industrialised forms increases. However, the existing giants have inertia to this change and so it is new entrants, unencumbered by pre-existing business models, that introduce the more commodity form. These new entrants may include former consumers who have gained enough experience to know that this activity should be provided in a different way, along with the skills to do it. In the case of computing infrastructure, it was an online bookseller which heavily used computing.

This more commodity form (especially utility services) is often dismissed by most existing customers and suppliers of products who have their own inertia to change. Customers see it as lacking what they need and not fitting in with their norms of operating i.e. their existing practice. However, new customers appear and take advantage of the new benefits of high rates of agility, speed of genesis of new higher order activities and efficiency. Novel practices and norms of operating also co-evolve and start to spread. 

Customers who were once dismissive start to trial the services and pressure mounts for adoption due to the Red Queen. A trickle rapidly becomes a flood. Past giants, who have been lulled into a sense of gradual change by the previous peaceful stage of competition, see an exodus. Those same customers who were only recently telling these past giants that they wouldn't adopt these services, that it didn't fit their needs and that they needed more tailored offerings like the old products, have adapted to the new world. The old world of products and associated practices is literally crumbling away. The new entrants are rapidly becoming the new titans. The former giants have old models that are dying and little stake in this future world. There is little time left to act. The cost to build equivalent services at scale to compete against the new titans is rapidly becoming prohibitive. Many past giants now face disruption and failure. Unable to invest, they often seek to reduce costs in order to return profitability to the former levels they experienced in the peace stage of competition. Their decline accelerates. This stage of competition is where disruptive change exceeds sustaining; it has become a fight for survival and it is a time of War with many corporate casualties. This period of rapid change is known as a punctuated equilibrium.

The activity that is now provided by commodity components has enabled new higher order activities. Things that were once economically unfeasible now spread rapidly. Nuts and bolts beget machines. Electricity begets television. These new activities are by definition novel and uncertain. Whilst they are a gamble and we can't predict what will happen, they are also potential sources of future wealth. Capital rapidly flows into these new activities. An explosion of growth in new activities and new sources of data occurs. The rate of genesis appears breathtaking. For an average gas lamp lighter there are suddenly electric lights, radio, television, teletyping, telephones, fridges and all manner of wondrous devices in a short time span. We are back in the stage of Wonder.

There's also disruption as past ways of operating are substituted - gas lamps to electric lights. These changes are often indirect and difficult to predict, for example those caused by reduced barriers to entry. The fear that the changes in the previous stage of war (where past giants fail) will cause mass unemployment often lessens because new industries, built upon the new activities we could not have predicted, will form. Despite the maelstrom it is a time of marvel and of amazement at new technological progress. Within this smorgasbord of technological delights, the new future giants are being established. They will take these new activities and start to productise them. We're entering the peace phase of competition and many are oblivious to the future war. The pattern of peace, war and wonder continues relentlessly. I've marked this onto figure 106. At this point you might say "but that's like the pioneer, settler and town planner diagram" - yes it is. There's a reason I use those terms and call the town planners the "war makers".

Figure 106 - Peace, War and Wonder



Now, in this cycle, the War part is the most interesting because we can say an awful lot about it - it has a very high p(What). We know we're likely to see :-
  • Rapid explosion of higher order systems and the genesis of new acts
     e.g. an increase in the rate at which innovative services and products are released to the web.
  • New entrants building these commodity services as past giants are stuck behind inertia barriers caused by past success
    e.g. New entrants dominating IT
  • Disruption of past giants
    e.g. High rates of disruption in the IT markets
  • Co-evolution of practice
    e.g. Radical changes in IT practices.
  • Higher levels of efficiency in provision of underlying components
    e.g. Higher levels of efficiency within IT.
  • Widespread shifts to the new model driven by the Red Queen effect
    e.g. Widespread adoption of cloud services.

Wait, aren't those my predictions? Yes, I told you I was cheating and giving cowardly custard predictions of the kind "the ball that was thrown will fall to the ground". However, not only do we have a high predictability of what, we can also use weak signals from publication types and conditions to give us a pretty decent probability of when. This is what makes the "War" state of change so remarkable. We can anticipate what's going to happen and have a reasonable stab at when, well in advance.

Figure 107 - The War state of economic competition


I've been using this peace, war and wonder cycle in anger for about eight years. There are many things it helps explain, from how organisations evolve to the different types of disruption. However, we will cover that in the next chapter. For now, I just want to share the last time I ran the cycle, which was for a piece of work for the Leading Edge Forum in 2014. The points of war are the points at which the signals indicate that these particular activities will become more industrialised. Of course, there's a world of product competition beforehand but at least we have an idea of when the changes will hit.

Figure 108 - future points of war


From the above, we can take an example such as intelligent software agents and see that the weak signals indicate a world of developing products but quite a long period until the formation of industrialised forms, sometime around 2025 to 2030. However, there will be a future when intelligent software agents become industrialised, and the intelligent agent driving your car will be the same one that powers your future mobile device or your home entertainment system. This will cause all forms of disruption to past giants, along with changing practices. Closer to home, we can see that big data systems have already entered the war phase and, sure enough, we have growing utility services in this space. That means the product vendors that have dominated that space are in real trouble but probably don't realise it. They will have plenty of inertia to deny that the change will happen.

Predictability and Climatic Patterns

It's worth at this point using the above example (figure 107) to show how many common climatic patterns can be involved. Some of these patterns you will already be familiar with; others we will dive into in more detail as we go through the book. Whilst there are many areas of uncertainty in a map, there's an awful lot we can say about change. From figure 109, then :-

Figure 109 - Climatic patterns and predictability.



Point 1 - everything evolves. Any novel and therefore uncertain act will evolve due to supply and demand competition if it creates some form of success.

Point 2 - success breeds inertia. It doesn't matter what stage of evolution we're at, along with past success comes inertia to change.

Point 3 - inertia increases the more successful the past model is. As things evolve then our inertia to changing them also increases. 

Point 4 - no choice over evolution. The Red Queen effect will ultimately force a company to adapt unless it can somehow remove competition or create an artificial barrier to change.

Point 5 - inertia kills. Despite popular claims, it's rarely lack of innovation that causes companies problems but rather inertia caused by pre-existing business models. Blockbuster out-innovated most of its competitors through the provision of a web site, video ordering online and video streaming. Its problem was not lack of innovation but past success caused by a "late fees" model.

Point 6 - shifts from product to utility tend to demonstrate a punctuated equilibrium. The change across different stages tends to be a rapid, exponential change.

Point 7 - efficiency enables innovation. A standard componentisation effect.

Point 8 - capital flows to new areas of value. A shift from product to more industrialised forms will see a shift of capital from past product companies to utility forms along with investment in those building on top of these services.

Point 9 - co-evolution. The shift from product to more industrialised forms is accompanied by a change of practice.

Point 10 - higher order systems create new sources of worth. The higher order systems created, though uncertain, are also the largest sources of future differential value.

As you develop skill in understanding the landscape and climatic patterns involved, you will find yourself being able to increasingly anticipate common forms of change.

Categorising predictability

Now that I've introduced the concepts of anticipation, I'd like to refine the terms p(What), p(When) and p(Who). When I'm talking about predictability, I am talking about how accurately we can predict a change. If we assign a 10% probability to something then a high level of predictability means our 10% assignment is roughly right. A low level of predictability means we just don't have a clue; it could be 10%, 0.1% or 99%. We literally have no idea. A low predictability of what - a low p(What) - means we have no clue what's going to happen. You can still assign a precise probability to the change but it's going to be wildly inaccurate. You're in the land of crystal balls and tarot cards.

When it comes to anticipating change at a market level, it's extremely difficult to identify who is going to make a change. This requires exceptionally focused effort and, in general, p(Who) is always low. That doesn't mean that you can't prepare for changes, especially points of war or the industrialisation of components. Cloud computing was highly anticipatable and could be prepared for well in advance despite us not knowing who was going to lead the charge. There is a broad spectrum of changes, from the known to the knowable to the unknown. I've characterised these in figure 110 using p(What) and p(When) as the axes.
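As a rough sketch of figure 110's axes, the characterisation might be encoded as below. The three labels are my shorthand reading of the figure, and p(What) and p(When) are reduced to simple high/low judgements rather than measurements.

```python
def characterise(p_what, p_when):
    """Characterise a change by our ability to predict it.
    p_what and p_when are 'high' or 'low' judgements, not measurements."""
    if p_what == "low":
        return "unknown - e.g. genesis of new acts; experiment, don't forecast"
    if p_when == "low":
        return "knowable - e.g. a point of war; refine the when with weak signals"
    return "known - e.g. an established trend; plan for it"

print(characterise("high", "low"))   # a point of war, such as cloud in 2008
```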

Figure 110 - characterisation of change


An exercise for the reader

By now I've hopefully given you a basic introduction to anticipation. This is a topic worthy of its own book and there are many methods and techniques to be used here. However, as with the whole cycle of strategy, this is an area you will refine with practice, the learning of common patterns and an understanding of the landscape. The main purpose of a map is as a learning and communication tool and, by applying common patterns to it, you can discuss your anticipation of change with others and allow for that all important challenge. There are still lots of areas of uncertainty on a map and, in fact, the more you use it the more you'll find yourself embracing that uncertainty. There are many mechanisms to exploit it.

I've covered quite a bit in this chapter and we've got a bit further to go on this subject. However, for the time being, I'd like you to take some of your maps and try to anticipate change. Look for shifts from product to commodity. Think about the inertia you might face, the co-evolution of practice that may occur and how it will expose new worlds of wonder.

----

Next Chapter in Series [to be published soon]
GitBook link [to be published soon]