Sunday, June 11, 2017

To Infinity and Beyond

Chapter 17
(very rough draft)

Meeting a spaceman

I was working on the use of printed electronics within a paper book (think of an interactive book which looks like a normal book) when I got that phone call from a friend about "this guy who really wants to meet you". I was curious, so I went along to meet someone called Mark at Canonical. I didn't know what to expect. The first few minutes were certainly interesting.

Shuttleworth: "I'm Mark. I've been told you're a good UX designer."
Me: "I don't know anything about design."
... silence.

It was an awkward pause. Then Mark, realising the next hour was probably a waste of his time, asked me to tell him what I did know about. I talked about evolution and the changes in the industry, and before long we were into graphs, maps and cloud computing. The time flew by. We kept talking. I was introduced to others and in what seemed like lightning speed, I was working at Canonical. I had one job: to bring Ubuntu into the cloud. I called my friend and asked him what had happened. Steve just responded, "I knew you'd get along". Life is full of pleasant trouble makers like Steve.

The first day I arrived for work, I was all excited and had the usual confused look of a rabbit staring at headlamps. My boss, who also happened to be another Steve, did the usual rounds of introductions. That was an interesting moment. Whilst I delighted in the warmth of the people I met, the first five responses to my role of bringing Ubuntu into the cloud were negative - it's a fad, why are we doing that, and so on. I knew I was going to have to build a cabal pretty quickly and create some momentum. However, my first official task was to look at the virtualisation strategy that had been written. It was one of those "oh, what have I done" moments. Fortunately it didn't take long to find others with common interests - Rick Clark, Soren Hansen, Nick Barcet and many others. Steve George was one of the most supportive people I've worked for and a good friend, and then there was Mark. Without Mark none of this would have happened.

The problem, to begin with, was that Canonical was focused on the server and desktop market. It was up against huge giants such as RedHat and Microsoft. It was making valiant, almost heroic efforts but Canonical was small. Many wanted to focus on the server OS and to generate revenue from support licenses, and to a few the cloud was a distraction. The problem was one of focus and what I needed to do was change the mindset. To explain this issue and why it mattered, I'm going to cover a number of concepts from the Three Horizons to Porter.

The Three Horizons

The three horizons is a model put forward in The Alchemy of Growth (1999). It describes three views that any corporation has to take.

Horizon 1 : the core business which provides the greatest profits and cash flows, and which needs to be extended and defended.

Horizon 2 : the emerging opportunities and businesses that will drive medium term growth. These may include new ventures that you are investing in which are expected to generate substantial future profits.

Horizon 3 : the ventures that should ensure the company's long term future. These can include research projects, pilot programs or even investment in startups.

For Canonical, horizon one was the core support licensing revenue.  Horizon two included new concepts such as online storage, the app store and extending onto more devices. Horizon three ... well, I'm quite convinced a few would have thought that included myself. Whilst this model of three horizons is a reasonable way of examining a company, I personally find it inadequate. I often find that some confuse it with the pioneer - settler - town planner model of organisation by associating town planners with horizon one and pioneers with horizon three.  To explain the weakness with the model, I'm going to use the map of mapping that I introduced earlier. To save you scrambling back through past chapters, I've provided that map here in figure 213.

Figure 213 - The Map of Mapping.


Let us now assume that we decide to use the map of mapping to build a new business. I'm going to take a part of the above map and concentrate on the provision of forecasting (i.e. anticipation of known changes) to others. I could have quite easily built a comfortable life around the weak signals I developed for forecasting change and built myself a small boutique consultancy providing market and technological forecasts. The premise behind such a business is provided in figure 214. My purpose with such a business is simply to survive (i.e. make money); the user wants an advantage over competitors; they measure this by the return on capital invested in a space; I enable this through anticipation services based upon known climatic (economic) patterns that use maps of the industry. It would be a relatively trivial business to create, had I the desire.

Figure 214 - Forecasting Service


Horizon one would be that boutique consultancy business. I'd have been protecting (i.e. not making creative commons) the twenty-odd common economic patterns that I know about which impact the environment. I'd probably use a worth based mechanism (or outcome based, as it is called today) for charging. I could also extend this map to cover in more detail the social capital components of trust, along with the activities needed to perform the analysis and run the company. Remember, you can map all forms of capital whether data, practice, activity, knowledge or social. Let us hypothesise that I had done this and, by hook or by crook, turned it into a small success. What would my horizon two be?

In this case, the diffusion of knowledge and evolution caused by supply and demand competition would drive many of those components to a more industrialised space. At some point, I'd have to prepare for my boutique consultancy entering a world where products did the same thing. I would know in advance that we'd have inertia to that; any shift from one stage of evolution to another (e.g. custom to product) has inertia caused by past success. It's one of those climatic patterns. I've mapped this in figure 215.

Figure 215 - Horizon two


But, with foresight - and I'd hope that I'd be using mapping on myself - it would be relatively trivial to anticipate and overcome the inertia. How about horizon three? In this case, we get a divergence. I could, for example, focus on further industrialisation to a more utility service exposed through some form of API - Anticipation as a Service, or AaaS for short. Of course, such a change, along with mirth over the name, would come with significant inertia created by any existing product based business model. Alternatively, I could expand into something new such as the use of doctrine for competitor analysis, the arms sale of context specific gameplay or even some novel, uncharted, higher order system that I haven't even considered. I've shown these divergent horizon threes in figure 216.

Figure 216 - Horizon three


Now let us add the pioneer - settler - town planner model onto the horizon three map (see figure 217). Remember, each team has different attitudes (which is what pioneers, settlers and town planners represent) and each not only builds but operates its own work. The important thing to note is that horizon three consists of town planners or settlers or pioneers or all of them, depending upon what I choose to do.

Figure 217 - PST added to horizon three.


The first thing to note is that the horizons are context specific. You cannot simply overlay them onto a PST model or even onto the concept of evolution (e.g. by saying that genesis is horizon three) as it depends upon where you are and the landscape surrounding you. The second thing to note is that the horizons can often be broadly anticipatable. This is what I find inadequate with the horizon model: without a map and the learning of common economic (aka climatic) patterns, it becomes all too easy to miss the obvious. It is why I find the three horizons useful as a high level concept but overall weak in practice on its own. It also fails to help me adequately deal with inertia or legacy.

The issue of legacy

In chapter 9, we examined the climatic pattern of co-evolution, i.e. practices can co-evolve with the evolution of an activity. There is usually some form of inertia to a changing activity and this can be compounded by a co-evolution of practice. In figure 218, I've taken the original diagram from that chapter and added some inertia points for the shift from product to utility for both compute and platform.

Figure 218 - Change of Compute and Platform


As previously discussed, there are many forms that inertia can take. However, the question I want us to consider is what represents legacy in this map. The two obvious areas are those trapped behind inertia barriers, e.g. compute as a product and platform as a product (i.e. platform stacks). The next obvious areas include the related practices, i.e. best architectural practice associated with compute as a product. What is not so obvious to begin with is that as components evolve enabling higher order systems to appear, the lower order systems become less visible and, for most of us, legacy. The departments that ran switchboards in most companies were once a highly important and often visible aspect of communication. For many companies, that activity has been consumed into either reception or call centres in much the same way that email has consumed the postal room. We still send letters to each other (more than ever before) but they are digital. In this case, the role of the components underneath the platform layer is going to become less visible. Dealing with and managing infrastructure will become as legacy to most companies as the switchboard is today.

Hence another area of legacy would be the practices and activities below the platform layer, which includes concepts such as DevOps. In 2017, such a statement tends to receive a strong negative reaction. Most react with the same forms of inertia as those who reacted against cloud in 2006. Many will claim DevOps is more than infrastructure as it's about development and culture. Depending upon how far in the future you're reading this from, you'll probably be quite surprised by this and even more likely have never heard of DevOps. As with all such things, DevOps was a child of, and reaction against, the prevailing methods of management. It co-opted concepts from earlier schools of thought (e.g. ITIL) including iterative approaches, use of components, configuration management, a services approach, a focus on users and measurement, whilst simultaneously distancing itself from them. It added its own dogma and sought to create a separate tribe. The same will happen in platform: a new school of thought will emerge that will copy and build upon DevOps but deny it has any relationship to it. DevOps will become "what my mum and dad do" as the rebellious child declares its independence and denies any association with the former. If you think of concepts as genes, then many of the genes of DevOps will be found in the new generation (though they will rarely admit it, painting DevOps as some strawman version of itself), some of the genes will become redundant and others will emerge.

I've marked the main areas of legacy onto our map in figure 219. To do this, I've used the concepts of inertia and the way industrialised components not only enable higher order systems but become less visible themselves. I've also added on a typical PST structure. As we can see, many of the legacy areas exist within the settler and town planning teams.

Figure 219 - adding legacy (a consumer perspective)


Obviously there is a perspective to be considered here. I'm looking from the point of view of someone who consumes compute. If I'm a major provider, whether platform in the future or utility compute today then much of this is definitely not legacy any more than power generation systems are to electricity providers. From the perspective of a major provider then legacy would look more like figure 220 i.e. it will consist of activities (and related practices) that are stuck behind inertia barriers but not the impact of lower order systems becoming less visible. What becomes increasingly invisible to others (i.e. consumers) is still very visible to providers.

Figure 220 - legacy from a provider perspective.


There is an unfortunate tendency for people to associate the town planning groups with legacy. As should be clear from the above, that's not the case. The recent future of computing has been industrialisation by town planners to utility services. The legacy has been past product models, a realm of settlers. If we take the consumer perspective from figure 219, then the future is a mix of settlers building applications, pioneers discovering emerging practices that combine finance and development (whilst denying any inheritance from DevOps) and town planners busily creating the empires of scale around platform utility services. I've shown this future in figure 221 and it's where companies should be investing in 2017.

Figure 221 - the future, from a consumer perspective


It's important to note that legacy can be anywhere. It can be caused by a custom built activity which has failed to evolve or a product based business in a utility world. Legacy is simply a consequence of a failure to evolve and it is not associated with one group such as pioneers, settlers or town planners but instead all.  When it comes to managing legacy then it's really important to understand those points of change and the impact of co-evolution. This should become second nature to you and it's worth practicing. There's another perspective beyond the three horizons, beyond inertia and legacy that we also need to discuss. It's the perspective of Porter's forces.

On Porter

For those unfamiliar with Porter's five forces, these are: rivalry within the industry, the threat of new entrants, the threat of substitution, the bargaining power of suppliers and the bargaining power of consumers. In this section we're going to examine these five forces through the lens of the peace, war and wonder cycle (see chapter 9).

In the time of wonder, it is a battle to become established. The field is not yet developed and there are no "new entrants" as there are no established figures. Everything is new, uncertain and uncharted. The consumers hold the power and it is they who decide whether this industry will succeed or not.

In the time of peace, there is a constant tug of war between supplier and consumer power over the products produced. The developing giants are normally well protected from new entrants in a game of relative competition, except against the occasional threat of substitution. It is this substitution by a different product which is the dominant factor.

In the time of war, new entrants providing a more industrialised form of the act threaten the existing giants that are stuck behind inertia barriers. It becomes a fight for survival for these giants and they are often poorly equipped. It is not a case of a product becoming substituted by another product but instead an entire industry is being changed to more industrialised forms.  It is often assumed that the shift towards utility provision means centralisation but this is not the case. 

Whilst the interaction of all consumers (demand competition) and all suppliers (supply competition) drives the process of evolution, the question of whether a specific activity or data set centralises or decentralises depends upon the actions of individual actors (suppliers and consumers) in that market. For example, it would have been relatively trivial for the hardware manufacturers to create Amazon clones and spark a price war in the IaaS space around 2008-2010 in order to fragment the market by increasing demand beyond the capability of any one vendor to supply, due to the constraint of building data centres. I had these exact conversations with Dell, IBM and HP throughout 2008 and 2009. I even told them their own inertia would fight against this necessary change and that they would deny the existence of the punctuated equilibrium until it was too late. The fact that they didn't act and lost their own industry is entirely the fault of their own executives, and is also one of the major factors behind why we have seen centralisation in the IaaS space. Centralisation depends upon the actions of specific actors (in this case the inaction of hardware suppliers and hosting companies). In the future, this may in fact yo-yo from centralised to decentralised or find a balance between the two (as with electricity provision and self generation). Of course, this is a change in the means of production; the interfaces themselves are unlikely to change, i.e. a shift from central to self-generation does not mean a change in voltage / frequency for domestic power provision.

The point to remember is the balance between these forces tends to change as anything evolves. It also isn’t static within a stage of evolution. For example, when an activity becomes more of a commodity or provided as a utility we will often experience a yo-yo between centralisation and decentralisation (with a corresponding yo-yo between Supplier and Consumer bargaining power). However as a general guide, I provided in figure 222 the most dominant forces you're likely to encounter.

Figure 222 - Porter's forces and evolution


Examining Canonical

With a basic understanding of horizons, Porter's forces and legacy, we can now examine the business of Canonical. Horizon one (the core business) was related to selling support on the server OS (operating system). However, compute was evolving towards more utility provision. Hence, with the exception of the large cloud providers, the server OS support business was likely to become legacy. Instead, we needed to focus on horizon two and the commercial use of the guest OS on top of these large virtualised computing environments. We understood that companies would have inertia to these changes and, being a shift from product to more industrialised forms, it was likely to be a punctuated equilibrium (period of rapid change). We also understood that the biggest threats in this space would be new entrants and, given the state of strategic play in many companies, we were likely to see centralisation. I've drawn these concepts onto the map in figure 223.

Figure 223 - the changing market


We also understood that co-evolved practices would emerge, that Jevons' paradox meant we were unlikely to see significant savings in IT but instead increased development activity, and that a further horizon, the shift of platform from product to utility, was possible. I've marked these horizons onto figure 224.

Figure 224 - the horizons.


In terms of play, we understood that moving fast and land grabbing the guest OS was essential. To help in this, we also needed to support those developing applications or building tooling around those co-evolved spaces. If we found examples of platform plays in this space, we needed to be invested in them. We also understood that many potential customers would have inertia, hence we'd have to provide some form of transitional / private cloud offer. We also knew our competitors had inertia. As soon as I discovered that Red Hat salespeople were rewarded with bonuses based upon Satellite subscriptions (used for security updates), I quickly set about promoting a message that security should be "free" in the cloud. There's nothing like threatening someone's bonus to get them to turn against a change and spread fear, uncertainty and doubt around it. Our focus was clear within my cabal. Mark did an amazing job of making the focus on cloud the focus of the entire company. Rick and others set about putting in the engineering effort to make it happen. Steve gave me all the firepower and cover I needed. For my part, I mainly focused on promoting Ubuntu's cloud message, being involved in the community, highlighting targets to bring on board and trying to stop people rebuilding or getting in the way of things that the community was doing. An outline of the play is provided in figure 225 and the result in figure 226. Within eighteen months, Ubuntu went from a small part of the operating system market to dominating the cloud guest OS. Mine was a minor but instrumental role and I have to applaud the marvellous teams at Canonical and within the community for making it happen. A small company of three hundred took on the might of two giant hordes but unlike the Spartans, this time we won. My proudest moment came from hearing a CIO talk about how "the future was all RedHat and then suddenly it was all Ubuntu". I played a small part in that.

Figure 225 - our focus


Figure 226 - the results

I often hear people say that Canonical was lucky. Well, there's always some element of luck but the moves were deliberate. Obviously, people can just say the timing was lucky but they'd be wrong on that as well. I had a helping hand with timing thanks to Gartner. They probably don't even realise it but I think it's worth explaining.

On the question of timing

I'm not a big fan of Gartner but figure 227 is one of the most useful graphs they've ever produced. It's a hype cycle of emerging technologies created in 2008. It uses the earlier y-axis of visibility, which later on became expectations. How does the axis change whilst the graph remains the same? Ah, that's the beauty of it but first, a bit more background.

Figure 227 - Gartner emerging technologies, 2008


During my time in the wilderness prior to Canonical, I had been looking at various ways of measuring impacts from evolution. One of the issues is that when we look at opportunity, the evolution of any single act creates different waves of opportunity. One of these waves is focused on differential value (i.e. it's something you have but I don't) and the second wave is around operational value (i.e. we both provide this but you do so more efficiently). Both waves appear to have a learning element and then a sharp decline as the change diffuses and evolves further. I've provided examples of these waves in figure 228.

Figure 228 - An example of different waves of value.


Of course, opportunity is only part of the equation. There's also the cost involved, particularly in development of something novel. There's also risk as the uncharted space is by its very nature uncertain. However, I developed a generalised benefit curve which for differential value is shown in figure 229. An almost identical benefit curve exists for operational value but that occurs much later in evolution and is related to co-evolved practices that emerge.

Figure 229 - A benefit curve for differential value


From the benefit curve, the early stages of genesis are all about investment. As it evolves, the cost of production reduces and we start to realise some of the benefit. We're still in the custom build stage, others are starting to copy but in general the cost of production is reducing fast enough to overcome any differential loss due to copying. Alas, at some point the cost of production is low enough and the activity defined enough that someone produces a product. On the upside the cost to implement is plummeting but alas, the differential value is declining faster as more companies do actually implement. The models I developed all had variations of this shape, so think of it more as a mental model.
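To make the shape of this benefit curve concrete, here is a toy sketch in Python. The functional forms and every constant in it are purely my own assumptions, chosen only to reproduce the shape described above rather than fitted to any data: cost of production decays as the act evolves, differential value rises and then collapses as others copy and implement, and net benefit is the difference between the two.

```python
import math

def production_cost(t):
    # t runs from 0 (genesis) to 1 (commodity); the cost of building
    # the act is assumed to decay exponentially as it evolves
    return math.exp(-4 * t)

def differential_value(t):
    # differential value is assumed to rise as the act proves useful,
    # then collapse as competitors copy and implement it themselves
    return 2.5 * t * math.exp(-3 * t)

def net_benefit(t):
    # negative early on (pure investment), peaking mid-evolution, then
    # declining as differential value erodes faster than cost falls
    return differential_value(t) - production_cost(t)

if __name__ == "__main__":
    for t in [0.0, 0.25, 0.5, 0.75, 1.0]:
        print(f"t={t:.2f}  net benefit={net_benefit(t):+.3f}")
```

Running this prints a curve that starts negative, peaks somewhere in the custom-built to early-product stage and then tails off - one instance of the general shape, not the specific models I built at the time.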

What I then became fascinated by - I like to explore - was timing issues. Let us say we've recently read a whitepaper on a marvellous new activity. That activity is described as having some benefit but it also involves cost. By the time I get around to implementing the activity it may well have evolved. It might provide a different benefit to what I was expecting i.e. it costs less because it's a product but there's little differential value as everyone else is doing this. I've superimposed the evolution of an act onto the benefit curve in figure 230 to highlight this point.

Figure 230 - Changing benefit with evolution and implementation


I then modelled this delta between what I was expecting to get and what I got over time. The model I used made lots of horrible assumptions and it's about as solid as a tower of jelly.  At some point in the future, I might go and revisit this but I don't normally mention this little side journey in mapping. However, there was one remarkable thing about the delta expectation curve over time - it somewhat resembles a Gartner hype cycle - see figure 231.

Figure 231 - delta expectation over time (the expectation curve).

We have the same peak of inflated expectation, the same trough of delusion. My first reaction was horror. The evolution curve on which mapping is built uses ubiquity versus certainty. If I could model from Gartner's hype cycle to evolution then I could take the points on a hype cycle and measure precisely where something is on the certainty axis of evolution. For things that are uncertain, this should be impossible. It seemed that Gartner's hype cycle proved evolution was wrong. I was a bit glum at that point, especially since I had found mapping so useful. Fortunately, I met with a friend who pointed to a hole in my argument. I was assuming that Gartner's hype cycle was based upon a measurement of some physical property. If it wasn't - if it was just aggregated opinion (of consultants, analysts or industry) - then there was no measurement of the uncertain, as it's just opinion. This turns out to be the case: the hype cycle is just opinion. For interest, Gartner now uses expectation on that y-axis.
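The outline of this delta expectation curve can be sketched with a similarly crude toy model. Again, the functional forms and constants below are entirely my own assumptions, chosen only to reproduce the familiar peak, trough and plateau: an early burst of inflated expectation added to a slow, logistic realisation of genuine value.

```python
import math

def expectation(t):
    # t runs from 0 (first announcement) to 1 (maturity); both terms
    # are assumed shapes, not measurements of anything real
    inflated = math.exp(-((t - 0.2) ** 2) / 0.005)    # early hype burst
    realised = 0.6 / (1 + math.exp(-12 * (t - 0.6)))  # slow genuine value
    return inflated + realised

if __name__ == "__main__":
    for t in [0.2, 0.4, 1.0]:
        print(f"t={t:.1f}  expectation={expectation(t):.3f}")
```

Sampling at t = 0.2, 0.4 and 1.0 gives the peak, the trough and the plateau in that order, which is all the model is for: it shows that hype-cycle-like curves fall out of very simple assumptions, not that any particular parameters are right.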

Along with being quietly relieved that I hadn't disproved what I was finding useful, this also opened up a new opportunity. I have two benefit curves - one for differential value and one for operational value. They both share a common expectation versus time pattern. For example, if I look at an evolving component, where it appears in the early stages of the expectation curve for differential value can be the same place it appears on the expectation curve for operational value when it's more evolved. See figure 232.

Figure 232 - Evolution of an act on differential and operational expectation curves.


Now, I already have a weak signal using publication types that could identify when things are likely to start to industrialise and enter a war (see chapter 9). I've reprinted the last analysis on this that I undertook, in 2014, in figure 233. What I'd like you to notice is that the shift from product to utility for infrastructure was well into a war in 2014, whereas the war for 3D printing and the use of commoditised 3D printers is some way off.

Figure 233 - When is the war likely


Now, in 2008, I already knew (from my weak signals) that we were entering the war phase for computing infrastructure whereas 3d printing had a long time to go before it started to industrialise. I also suspected that both a relatively novel activity (e.g. 3d printing) and an industrialising activity (cloud) could appear at the same place on two different expectation curves - one for differential value and one for operational value (figure 232 above). So, let us look at that Gartner hype cycle again and highlight two components - cloud computing and 3d printing.

Figure 234 - Cloud computing and 3D printing.


They both appeared at roughly the same place. This told me something which I've subsequently found quite useful. The Gartner hype cycle doesn't distinguish between differential and operational value, as both are on the same curve. So, why does that matter? Well, in the case of cloud computing, which was the industrialisation of computing and all about operational value, you'd want to be going all in during 2008. Being in the early stage of this expectation curve just reinforces the point that people are learning about a change to which you absolutely want to be a first mover. The last thing you'd want to do is wait until it reaches the plateau of productivity, by which time the war would be well and truly over. If you're a vendor, this would be curtains. Gartner even calls out that this is moving fast with its time to mainstream adoption for cloud (light blue circle).

However, in the case of 3D printing you do want to wait or be a fast follower. It has a long, long way to go before it industrialises and there's an entire product stage it has to evolve through. In fact, 3D printing will reach the plateau of productivity and see relatively widespread adoption as a product long before it industrialises. At some future time, as it starts to industrialise, it'll probably reappear in the technology trigger (usually under a slightly different meme). When it comes to 3D printing you could wait a bit and get involved in the product space, or wait much longer until the "war" is upon that industry, at which point you'd need to go all in. Two points - cloud computing and 3D printing - on almost exactly the same point of the hype cycle required radically different approaches to investment and strategy. One was "all in", the other was "wait and see".

Being aggregated opinion, I do find the hype cycle quite useful as long as I separate out what stage of evolution something is in first. I often talk to CIOs who tell me they invest when something is in the slope of enlightenment. That's a guaranteed way of losing every major technological war in business. For me in 2008, this hype cycle helped reinforce the message that we had to go all in; it was a land grab for this territory. I also took comfort that many of my competitors probably read exactly the same hype cycle and thought they had time. Thank you Gartner, you have no idea how much you've helped me take out companies over the years. Better luck next time IBM, HP, Dell, RedHat ... assuming they survive what is to come. Anyway, the gameplay above was 2008 to early 2010. It's also worth looking at another part of my journey at this time, into Government, but that I'll leave for the next chapter.

Thursday, May 25, 2017

Blue pill or red pill?

The strategy cycle is one of those simple mental devices which hides a world of complexity. On the surface, it's all about observing the environment (the landscape and the climatic patterns which impact it), orientating around it (the doctrine or principles we might use), deciding where to attack (leadership) and then acting. It's a combination of OODA (Boyd) and Sun Tzu in an easy to understand cycle.

Figure 1 - The strategy cycle


However, dig beneath the surface and you start to discover layers of complexity. To understand the landscape you need to map it, and there are as many maps as there are industrial landscapes. There's also not just one climatic pattern but many, and these are useful in anticipation. There's not just one principle to follow but many patterns of doctrine that are universally useful for any organisation. There are also many forms of context specific gameplay which are used when scenario planning various plays. Finally, you have the speed at which you loop around the cycle.

However, take one small area - doctrine - and the complexity expands.

Figure 2 - Components


There are at least forty different forms of universally useful doctrine, from focusing on user needs to a bias towards action to using appropriate methodology. Each of these in turn expands. For example, managing inertia covers 16 different types of inertia with different tactical plays for each.

Figure 3 - Doctrine


Figure 4 - Managing inertia


Of course, how you implement doctrine is also context specific. Thinking small as in team structure (e.g. Amazon two pizza or Haier's cell based teams) might be universally useful doctrine but the teams within Amazon will be different from the teams in Haier whether in size, composition or number.

With practice much of this becomes second nature: you learn to map, you learn where inertia will exist in the map and the types of inertia to change that you're likely to face. You learn how to constrain complex spaces by dividing large scale businesses into small contracts and small teams. You learn how to constrain maps themselves, creating an atlas for an industry with each node representing an entire map itself. It all blends together with gameplay and anticipation of change to become part of the craft. You learn how to exploit the inertia of others or to design an organisation to cope with constant evolution. But there are layers of subtlety and many unexplored areas.

For example, I've long used doctrine as a way of examining competitors to determine how adaptable they are and hence what sort of gameplay I might use against them. I've provided two doctrine tables that have been completed by others - one for a US web giant that is tearing up industry after industry and one for a US investment bank. I'll let you decide who is who and where you fit between them and which you would prefer to face off against.

Figure 5 - Doctrine table 1


Figure 6 - Doctrine table 2


The problem of course is that whilst the list of doctrine has been developed from mapping many industries and finding patterns that are universally useful, I cannot actually say which components of doctrine matter more. Is transparency more important than a bias to action? Is using appropriate methods more important than a focus on user needs? Sorting this out will take decades of data collection and, as organisations evolve, we will find that more universally useful principles emerge. However, as a rough guide the doctrine table appears useful enough for the time being.

Of course, without the priority order it becomes difficult to say which you should adopt first. Certainly some of these doctrines appeared significantly important in our Learning from Web 2.0 report published in 2012, but are they the most important? Also, is there an order? Are some dependent upon others?

Using experience, I can make an educated guess about which should be implemented in what order (as a discrete set of phases) but it's only a guess. For example, I know that implementation of a pioneer - settler - town planner structure (a topic of another post) should happen well after an organisation has increased its situational awareness, got used to applying different methodologies, started to appreciate the difference between aptitude and attitude whilst having implemented a cell based structure. You can't just charge in with pioneer - settler - town planner. There is an order to these things.

Figure 7 - Doctrine phases


This of course is just one small aspect of mapping. There are over 27 different forms of economic pattern used in anticipation, with various weak signals. There are over 70 different forms of context specific gameplay that I'm aware of - there are different types of disruption, and even Porter's five forces have a context specific element. Mastering mapping is a daunting task and not one that I expect to achieve in my lifetime.

However, the good news is that you can learn in small steps. Just the ability to map an environment will get you to challenge assumptions (a form of doctrine), focus on user needs (doctrine), provides a systematic mechanism of learning (the map) and helps you appreciate that everything evolves (a climatic pattern). Loop around the cycle one more time and with two maps you'll start learning how to remove bias and duplication (another form of doctrine) by comparing them.

Every action you take, every loop around the cycle, will take you deeper into the subject and you will get faster in return. The speed at which I can map an industry today outstrips my early attempts to map a single line of business in 2005. But once started, be warned, it's hard to go back. As Chris would say "What is seen, cannot be unseen".

So I give you the choice. The blue pill means you go no further, you wake up in your land of SWOTs, stories, gut feel and magic secrets of success learned from the great and good of the management consultancy industry. The red pill ... ah ... start with chapter 1.

Friday, May 12, 2017

Is my diagram a map?

Let us take a systems map


It is visual and context specific (being a system diagram of an online photo service and not a self driving car). It has components and the relationship between components. We also have the flow of information (blue line) between components. But does this make it a map? I've shaded one box (CRM) in grey and moved it.


The components and the relationships remain the same. It's still visual, it's still context specific and nothing alters with flow. The diagram is in essence identical and yet I've moved a box. Let us compare this with a road map of major roads in the UK.

It's visual and it's context specific (being the UK not France). The diagram has components, relationships between them and also flow (e.g. the movement of traffic). But it has more than this. The compass acts as an anchor, the components have position relative to each other (London is North East of Southampton) and we have a concept of movement. If I wanted to walk from London to Exeter (not travelling along the major roads) then I could head South South West (roughly). The diagram is not very accurate and it lacks scale, but is this a map? Let us move the component I've marked in grey (Nottingham).


By moving the component I've fundamentally changed the meaning of the map. Nottingham is no longer North of London but SSW of London which of course, a quick flight across the territory will tell you is wrong. The map (and this diagram is a map) helps you explore the territory. It does so because it has the characteristics of navigation e.g. anchor, position and movement. Without such characteristics you cannot learn about the territory.

Whilst the road map is a map, the systems map is in fact not a map. It lacks those navigational characteristics which enable us to learn about the territory. We might call it a map, in the same way I might call myself a Jedi but it's not the name but the characteristics that define what something is. I still lack the powers of the force no matter how many times I tell myself that I am a Jedi.

A map must be visual and context specific. It must have components but more than this it requires an anchor, position and movement. The quickest way I know to determine if something is a map or not is to move a component (i.e. use movement) and see if this changes the meaning of what is being looked at. If it doesn't then it's not a map and hence it is of little use for learning about a space. 

Almost all the things we call maps in business - systems maps, business process maps, mind maps, digital maps, product road maps, strategy maps - are not in fact maps. They are diagrams. If you want to map a business then I've written the best part of a book (all creative commons) on how to actually do this.

Friday, April 21, 2017

The book so far

Writing is not something I find comfortable. Lately I've been consumed with putting together a book on mapping. It's getting there. Slowly.

Chapter 1 — On being lost
An introduction into the concept of situational awareness and the strategy cycle.

Chapter 2 — Finding a path
How I built my first map of business.

Chapter 3 — Exploring the map
The shock of discovering common economic patterns.

Chapter 4 — Doctrine
The even bigger shock of discovering that some patterns are universal whilst others are context specific.

Chapter 5 — The play and a decision to act
Using maps to make a choice including some basic strategic play such as ecosystem models.

Chapter 6 — Getting started yourself
How to start mapping and some useful books to read.

Chapter 7 — Finding a new purpose
Using mapping on itself and discovering a new purpose. The underlying work on evolution.

Chapter 8 — Keeping the wolves at bay
The dangers of simplicity and the concept of flow.

Chapter 9 — Charting the future
The use of weak signals and economic cycles.

Chapter 10 — I wasn’t expecting that!
The evolution of organisations and different forms of disruption

Chapter 11 — A smorgasbord of the slightly useful
A collection of useful mapping topics that by pure coincidence might be needed for the following scenario.

Chapter 12 — The scenario
Something for you to get your teeth into.

Chapter 13 — Something wicked this way comes
Analysis of that something from chapter 12.

Chapter 14 — To thine own self be true
The path you probably should have taken in chapter 12.

Chapter 15 — On the practice of scenario planning
On scenario planning and the concept of roles. Diving a little more deeply into financial modelling.

Chapter 16 — Super Looper
A walk through of several loops of the strategy cycle.

I've around 4-5 more chapters to go and then this first pass, this introduction into mapping, should finally be finished. Except of course for the rewriting, editing, rejigging, frustration, burying in soft peat for 6 months in triplicate, tearing up, cursing etc etc.

Thursday, April 20, 2017

Round, round, get around, I loop around

Chapter 16

(draft)

The LFP example is based upon a real-world event. I say “based” because I usually take time to disguise the actual event to protect any guilty parties. In this case, the haphazard and stumbling CEO was ... me. The variations from the real-world event include that it was I, not the client, who proposed this concept of worth based development, and that I put in the effort to build those numerous financial models. True to form, however, I also fought plenty of internal battles over inertia to make this project happen. In this case, I’m going to use that LFP scenario to examine mapping in practice. I’m very wary that my long experience with mapping means that I tend to gloss over parts through assumption. In much the same way, I spent six years assuming everyone already knew how to map and it wasn’t until 2011 that I started to realise they didn’t. With that in mind, I’m going to go into excessive detail in the hope that I don’t miss anything useful to you. To keep it relevant and not just a history lesson, I’m going to go through the steps of how you would tackle the LFP scenario as if it was happening today.
To begin, I always start with the strategy cycle. To me, it doesn’t matter whether I’m looking at nation states, industry, corporates, systems or even individuals – the strategy cycle applies. For completeness, I have provided this cycle in figure 198.

Figure 198 – the strategy cycle





Our initial purpose for this LFP system is to help create leads for our client. That is what they need and it is also how we will be measured. We don’t have to agree to the proposal but if we choose to accept it then our focus must start here. Of course, we have our own needs – to survive, to make a profit, to have fun - which we could choose to map. In this case, I’ll elect not to.

We know we also have a “why of movement” question in the scenario – do we build the entire system in-house or do we use elements of a public platform? Do we go here or there? Why? Before we can answer this, we need to understand the landscape a bit more. Fortunately, in the LFP scenario a map has been kindly provided by engineering along with the more common financial models. As tempting as it is to dive straight into the financials, start with the landscape. I do love a good spreadsheet, I’ve spent years of my life immersed in cashflows, GAAP, chart of accounts, options analysis, business models and all manner of delightful things. However, a word to the wise, put these to the back of your mind for the moment. The financials can often be skewed by a bias to the present. 

With the map provided, one immediate thing I’m going to note is that we have inertia against using the public platform space via both security and the systems group. I’m going to mark that onto the map in figure 199.

Figure 199 – adding inertia.


Now let us focus on that platform change, the shift from product to a more industrialised form which in this case means utility. As noted many times before we have a common economic pattern of co-evolution i.e. as an act evolves we often see a corresponding co-evolution of practice. Let us concentrate here, remove all the other bits of the map and add in co-evolution. I’ve done this in figure 200

Figure 200 – co-evolution


By applying that basic pattern to our map, we can anticipate that as the world shifts towards more utility like code execution platforms, some new-fangled practice (a sort of DevOps 2.0) will emerge. We don’t know what those practices will be as they emerge in the uncharted space. We don’t know when precisely this will occur. But we know that we will have inertia to this change. We also know that such changes tend to be rapid (another common economic pattern known as the punctuated equilibrium). We can also go a bit further. 

The nodes on the maps are stocks of capital, with the lines representing flows of capital between them. With evolution from product to a more industrialised form we normally expect to see flows of capital away from the past industry into more industrialised providers and / or new higher order systems and / or new practices. I’ve marked these flows of capital, where to invest and what will become legacy, onto figure 201.

Figure 201 – flows of capital


Capital flows to the more industrialised components along with the new higher order systems that these enable - collectively we can call this the new industry. There will also be new practices (i.e. co-evolved) that will replace those past practices. The new higher order systems will themselves enable new needs (technically, they expand the adjacent possible, the realm of new things we can do) which means new customers. The past ways, stuck behind inertia barriers and increasingly devoid of capital, will die off. If this sounds familiar, then it should. This is what Joseph Schumpeter termed “Creative Destruction”. The question is when will this happen. For that I turn to weak signals and examine those four conditions – does the concept of a utility platform exist, is the technology there, is it suitable and do we have the right attitude? See figure 202.

Figure 202 – do the factors exist?


In this case, someone is providing such a platform hence the concept and technology exist. We have services like AWS Lambda. In the scenario, there’s obviously some sort of dissatisfaction with the current models otherwise the client wouldn’t be looking for a new way of doing things. The attitude seems to be there, maybe this platform space will help? But is it really suitable? I tend to use weak signals to help determine that but you can also use the cheat sheet. When you examine an activity, it often has characteristics from more than one stage of evolution e.g. it might be strongly product and a little bit commodity or vice versa. You can use this to help you refine your understanding of where something is. In this case, I’m looking for product characteristics with the emergence of commodity.

I’ve published a more advanced cheat sheet in figure 203, with each stage (I to IV), the terms used for different types of components (activities, practices, data and knowledge) plus the general characteristics. 

Figure 203 – The Cheat Sheet


So, let us examine the platform space today in 2017. What we’re focused on is a code execution environment, which in the product world is normally described as some form of stack (e.g. LAMP or .NET) and in the utility space is where we have the emergence of systems such as Lambda. It’s important to focus on the “code execution environment” as unfortunately platform is one of those hand wavy terms which gets used to mean any old tripe – see also ecosystem, innovation, disruption and almost anything in management that is popular. Don’t get me started on this one as I’m not a fan of the field I work in. I’m sure, along with strategy consultants talking about “earlobes for leadership” (HBR, Nov, 2011), it wouldn’t take me long to find a bunch of them talking about how a “cup of tea is an innovative platform”, such is the gibberish which has invaded management.

From the cheat sheet, comparing stage III (product) and IV (commodity), then:

Ubiquity? Is the platform space rapidly increasing OR widespread in the applicable market? I think it’s fair to say that this is very widespread. It’s not a case that you normally have to suggest to a developer that they consider using a platform to build something, they often have their favourite stack whether it’s LAMP or something else. We can give a tick for commodity here. 1/1

Certainty? Are we seeing a rapid increase in use (i.e. rapid diffusion in all companies) with platforms that are increasingly fit for purpose OR are they already commonly understood, just an expected norm? I think we can say most developers would be surprised to walk into a company that was excited about its platform roll-out. They’d expect some sort of platform to exist. Strike two for commodity. 2/2

Publication types? Are trade journals dominated by articles covering maintenance, operations, installation and comparison between competing forms of platforms with feature analysis e.g. merits of one model over another? OR are trade journals mainly focused on use, with platforms becoming an accepted, almost invisible thing? Well, if we go back to 2004 then journals were dominated by this platform or that platform – LAMP vs .NET and the best way to install. Today, there is much less of this and most of the discussion is about use. Strike three for commodity. 3/3

Market? When we examine the platform market are we talking about a growing market with consolidation to a few competing but more accepted norms? OR are we talking about a mature, stabilised market with an accepted form? Well, the platform market seems mature and stable with an accepted form – .NET, Java, NodeJS, LAMP etc. Commodity wins. 4/4

Knowledge management? Are we mainly learning about how to operate a platform, starting to develop and verify metrics for performance OR is this field established, well known, understood and defined? In this case, platform probably wobbled on the side of product rather than commodity. Hence, product wins and it’s now 4/5 for commodity.

Market Perception? Do we have increasing expectation of use of some form of platform and the field is considered to be a domain of “professionals” OR are platforms now considered trivial, almost linear in operation and a formula to be applied? Again, with this though we’re getting there, product still wins and hence it’s now 4/6.

User perception? When it comes to platforms, are they increasingly common, such that a developer would be disappointed if one was not used or available and there is a sense of feeling left behind if your company is not using one? OR are they standard, expected, and there would be a feeling of shock if you went to a company that didn’t use some form of standard platform (whether .NET, LAMP or other)? I think I can probably say that commodity wins this one. It would be shocking to find a company that didn’t use some form of platform approach, and it’s that “shock” which tells you it’s in the commodity space. 5/7.

Perception in Industry? Is advantage in platform now mainly seen through implementation and features (i.e. this platform is better than that platform) rather than an actual difference in what is created? OR is platform now considered a “cost of doing business”, accepted and with specific defined models? It would be difficult to imagine a software house today that didn’t view a platform as a “cost of doing business”, so whilst there’s some wobble, I’d argue that commodity edges this. 6/8

Focus of value? Are platforms considered to be areas of high profitability per unit and a valuable model? Do we feel that we understand platforms and vendors are focused on exploiting them? OR are platforms more in the high-volume space, mass produced with reducing margin. Are platforms important but increasingly invisible and an essential component of something more complex? In this case, especially with provision of utility like services then commodity wins again. 7/9

Understanding? In the platform space are we focused on increasing our education of them with a rapidly growing range of books and training combined with constant refinement of needs and measures? OR do we believe platforms and the concepts around them to be well defined, almost stable and with established metrics. This is a tough one, I steer to the side of commodity but can easily see a case for it being still in product. However, I’m going to give this to commodity 8/10.

Comparison? Do we have competing models for platforms with feature difference? Are authors publishing some form of evidence based support for comparison i.e. why this platform is better than that because of this feature and why you should use them over not use them? OR are platforms just considered essential, an accepted norm and any advantage is discussed in terms of operations – this is cheaper or faster than that? This is a tough one but in this case, I’d edge towards product. We’re not quite at the pure operational comparison. Product wins. 8/11

Failure modes? When it comes to a platform is failure not tolerated? By this, I don’t mean there is no failure - a distributed environment based upon principles of design for failure copes with this all the time. But do we have an expectation that the entire platform system won’t fail? Are we focused on constant improvement, we assume that the use of such a platform is the right model and there exists some resistance to changing it? OR have we gone beyond this, are we now genuinely surprised if the platform itself fails? Is our focus on operational efficiency and not stopping the fires? Whilst there will be many companies with the home-grown platform effort and inevitable out of control fires, as an industry we’ve moved into the commodity space. 9/12

Market action? Is the platform space entrenched in market analysis and listening to customers? What kind of blue do you want that fire to be? OR has it become more metric driven and building what is needed? Commodity wins here, just. 10/13

Efficiency? When it comes to platforms, are we focused on reducing the cost of waste and learning what a platform is? OR are we focused on mass production, volume operations and elimination of deviation? Again, especially since utility services such as AWS Lambda now exist, I’d argue commodity edges this. 11 to commodity out of 14 – 11/14.

Decision Drivers? When making a choice over what platform to use, do we undertake a significant analysis and synthesis stage, gathering information from vendors and analysts on its suitability OR do we just pick the platform based upon previous experience? Tough one, but again I view that commodity just edges this in the market overall though some companies love their requests for tender. 12/15
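The tally above is just a vote count across the cheat-sheet characteristics. As a minimal sketch (the dictionary below simply records the verdicts reached in the text; the variable names are my own), it looks like this:

```python
# Verdicts from walking the cheat sheet for the platform space in 2017.
# Each characteristic is scored as either "product" (stage III) or
# "commodity" (stage IV), per the discussion above.
verdicts = {
    "ubiquity": "commodity",
    "certainty": "commodity",
    "publication types": "commodity",
    "market": "commodity",
    "knowledge management": "product",
    "market perception": "product",
    "user perception": "commodity",
    "perception in industry": "commodity",
    "focus of value": "commodity",
    "understanding": "commodity",
    "comparison": "product",
    "failure modes": "commodity",
    "market action": "commodity",
    "efficiency": "commodity",
    "decision drivers": "commodity",
}

# Count the votes for commodity against the total number of characteristics.
commodity_votes = sum(1 for v in verdicts.values() if v == "commodity")
total = len(verdicts)
print(f"{commodity_votes}/{total} for commodity")  # 12/15
```

The point of the running score in the text is exactly this: no single characteristic decides the stage, the weight of evidence does.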

Overall, we can safely say that the platform space (as in code execution) is firmly in stage IV (commodity + utility) in 2017. It’s also fair to say that platform isn’t quite yet the industrialised commodity that electricity is but it’s jumped from one stage (product) to the next. There’s a bit further to go. Hence, what do I know from my map and the basic patterns so far? Platform is moving into commodity (stage IV) with provision of utility services. This will happen rapidly (a punctuated equilibrium) with such a shift (known as the “war”) normally taking 10-15 years. There will be a co-evolution of practice associated with it. Many companies will have inertia. Capital will flow into the more industrialised platform space and those higher order systems built upon it – there is going to be lots of future opportunity here. Capital will also flow out of those spaces stuck behind inertia barriers, not exactly where you want to be. Or is it?

At this point, we need to think about our purpose. My goals as a “retiring” CEO might be very different from the “upstart warrior” CEO. Let us assume I’m more Queen Boudica than Victor Meldrew and I want to fight for a bold future for my “people” rather than exploit and surrender to the past. My cultural heritage is more inclined to investing in the new space rather than just exploiting the legacy. In 2017, I’m not yet in a position where I’m forced to exploit the legacy as the change is only just starting in earnest. I’m a little late but not that late.

But, hang on, aren’t I deciding here? I haven’t gone through doctrine yet and I’m already talking about how to play the game. The strategy cycle is a cycle which you will loop around many times in coming to your decision. Each time you loop around, new information and biases form that will change your purpose, your view of the landscape and ultimately your choice. This is all normal. It’s not a rigid linear path. It’s a guide. At this point, let us peek at those financial models.

Getting messy with numbers
The first thing to note is that numbers are not reality. Just because it’s written in a spreadsheet doesn’t mean it is going to happen, any more than a Gantt chart tells you what the future really holds. In this case, the CFO has had the good sense to provide a range of outcomes for two variants (build in-house, use a public platform) and then complain about the lack of probability provided. I like this CFO.

Let us assume that after some badgering we have managed to tease out some probability figures for the outcomes from marketing and sales. I’ll explain a little more on how to do this later. In figure 204, I’ve added probability onto the financial models for each of the variants – variant 1 (build in-house) and variant 2 (use the public platform play). Let us go through the terms.

Probability: the likelihood of this outcome occurring according to sales and marketing.

Total investment: the total amount of capital we’re putting into this effort.

Total return: the amount of capital being returned (after repayment of investment). This is the annual net cash flow including any disposals.

Opportunity loss: the return I would have expected had I spent the capital on other projects. In the LFP scenario our standard return on investment (ROI) is 40%.

Net Benefit / Loss: How did this investment do compared to my standard expected return? i.e. total return – opportunity loss.

Expected return: the net benefit / loss * the probability of this occurring.

Figure 204 – Options analysis

Given this probability profile the best expected return comes from variant 1, i.e. building in-house. But wait, didn’t we say that building in-house was the future legacy? Well, as I did point out, most financial models have a bias to the present and hence they discount the future. The problem is that by following this path we are building up the legacy practice (and related inertia) and not positioning ourselves to build a future market. Can we somehow financially account for inertia and future position? Yes. The essential question between variant 1 and variant 2 is the following – are we prepared to gamble $435k of expected return to explore and potentially secure a more lucrative but undefined future? To analyse this is very complex. So, what do we do? Well, I will build monstrous complexities for navigation but you can SWOT it.

SWOT? But isn’t SWOT the curse of simplistic management? Yes, but it also has its uses particularly if we understand the landscape. The problem with SWOT isn’t that it is useless but instead we apply it to landscapes we don’t understand.

We have two variants – build in-house (1) and public platform (2). The strength of build in-house is that we’re familiar with this approach within our teams and it provides the greater expected return. Its weakness is that we build up our legacy position, which comes with the threat of increased inertia and future inability to change. On the other hand, using a public platform play (2) has different characteristics. Its strength is that we build up experience in the future space and, though it has a lower expected return, it provides an opportunity to develop skills and explore new opportunity. The weakness is that we’re unfamiliar with this, and the threat is that if it fails we lose face with the customer but also potentially political capital with the board. The path you decide upon really depends on you. The “retiring CEO” will plump for variant 1, the “warrior CEO” will go for variant 2.

At this point questions such as “But what if those probabilities are wrong?” and “What if the options I’m looking at aren’t right?” should be racing through your mind. So, let us tackle that bit.

Getting probability probably nearly right-ish.
As with most things in life, there exists huge amounts of uncertainty over which outcome will occur only exceeded by a willingness of people to tell you that they would have chosen a different outcome if in fact you pick the wrong one. Fortunately, you can exploit this. First up is to use the Marquis De Condorcet’s work and get everyone familiar with the business to assign probabilities and take the average of the lot. A more refined version is to use an information market.

Information markets are simple concepts but fiendishly difficult in practice because of unintended consequences. A basic example of one is as follows. Let us assume we want to know from the company whether project X will fail to deliver or succeed. We create a bond (called project X) which will pay a certain return (e.g. $200) if the project is successful at a specified date but will return $0 if it is not. We give everyone in the company one bond and $200 as a bonus. We then let them trade the bond in our own internal market.

Along with the nice “thank you” for a $200 gift (which has its own secondary benefits), the bond itself may be worth up to $200 or might be worth nothing at all. So, people will tend to trade it with others. If I expect the bond is 90% likely to fail then I’ll be over the moon to sell it to someone else for $40 and a bit gutted if it succeeds. The price on the internal market will reflect the likelihood or not of the bond paying out i.e. the question asked. The use of such information markets is well over a decade old but there can be lots of political complications in practice, particularly if an individual starts to make a small fortune on this. There’s nothing wrong with that, they’re providing you accurate information on the future, but it can cause difficulties.
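Reading a probability off the market price is straightforward: the price of the bond, as a fraction of its payout, approximates the crowd's estimate that the question resolves "yes". A toy sketch under that assumption (the trade price below is invented):

```python
# A bond pays PAYOUT if project X succeeds and $0 if it fails.
# Its market price, as a fraction of the payout, approximates
# the traders' collective probability of success.
PAYOUT = 200.0  # the bond's payout on success, per the example above

def implied_probability(price):
    """Probability of success implied by the bond's market price."""
    return price / PAYOUT

# If the bond last traded at $40, the market is implying roughly
# a 20% chance that project X succeeds.
print(implied_probability(40.0))  # 0.2
```

This is why a seller who believes the bond is 90% likely to fail is happy to take $40 for it: their own expected value of holding it is only $20.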

I mention this more to point out that there are lots of ways of skinning Schrodinger’s cat and finding probability. The question is always how much that information is worth to you. The cheapest way is to guess yourself, the slightly more expensive is to aggregate other people’s guesses and the far more expensive (but also far more accurate) tends to be the use of an information market. But let us assume our probabilities are “right”. This doesn’t mean one outcome will happen, it’s just a probability. We still must roll the dice. However, what we know so far is that we have this opportunity to build an LFP system, there are two variants (one in-house, one using a platform play) and whilst the in-house variant gives a greater expected short term return, the platform play prepares us for the future and the co-evolution of practice that will happen. Let us get back to our strategy loop and start looking at doctrine, especially the topic of “managing inertia”.

Managing inertia
We have the map, we can anticipate certain change and we can already see there is inertia. The question now becomes, what sort of inertia do we have? Back in 2008, I used to categorise inertia into four basic types with numerous subtypes. I’ve tidied this up since then. The basic forms of inertia are provided in figure 205, including tactics to counter them and counter points.

Figure 205 – inertia


All forms of inertia relate to some loss of capital, whether physical, social, financial or political. We know that two groups (security and systems) are exhibiting inertia. However, those are usually not the problem since we’re aware of them and hence they can be managed. The danger is always the group that hasn’t quite made itself clear.

In the case of security, the inertia is probably related to two types. First, we have uncertainty over the use of a platform play and any co-evolved practices that might emerge. This will require “Investment in knowledge capital”. We can overcome this with either training or providing time and resources to develop the skills necessary. We can certainly provide an argument that if we fail to do this then the future cost of acquiring these skills will be higher and we will also miss out on shorter-term motivation for staff. The second type of inertia is “Changes to governance, management and practices”. Co-evolution is always difficult for people to get to grips with as it means that existing and perfectly valid best practice (for a product world) becomes no longer relevant. We can only overcome this by explaining co-evolution usually by pointing to past examples. Both types of inertia are relatively simple to manage.

Slightly trickier is the Systems group. Along with the two types of inertia mentioned above, we’re likely to have two additional types, especially since the group builds and controls the underlying infrastructure behind any home-grown platform efforts. These are “Loss of political capital” and “Change of business relationship (loss of social capital)”.

The “Loss of political capital” includes fear over being relevant in the future, loss of status and loss of past empire. Don’t underestimate or dismiss this as it’s very uncomfortable for those who face it. You counter by giving people a path to the future and relevance in it. Start by acknowledging what has been achieved and move on to modernisation. You need to emphasise the importance of future agility, efficiency and importance to the business, and how we must build for the future. You also must include them in it. At this stage, such action is relatively trivial. The practices haven’t been developed and so there’s plenty of time for training, reskilling and the recreation of essential system concepts in a more utility platform world, from configuration to management to operation to monitoring. It’ll be different from the past but someone should develop that capability, no-one yet has those skills and why shouldn’t it be your Systems team? Unfortunately, what often happens is companies don’t anticipate obvious changes and leave it late. This creates an added complication which I’ll discuss in a moment.

The “Change of business relationship (loss of social capital)” is the second additional type of inertia you must contend with. There’s often a pre-existing relationship with vendors who might be supplying products or services. In normal circumstances, you can deal with this through normal vendor management approaches. You can emphasise that the time is right for a change, that the past has evolved and we need to re-evaluate the vendor’s offering. However, there’s the complication mentioned above. If you’ve left it late then the vendor of a product may well be spreading huge amounts of fear, uncertainty and doubt over the more utility form to your own team. They will probably have tried to convince your own team (e.g. in this case Systems) that they have no future in this “future world”. If they’re canny, they will have encouraged articles in the related trade press spreading the same message. This is all designed to get your own people buying the vendor’s product rather than adapting to the new world. It’ll make it much harder for you to overcome any “loss of political capital” if you’re late to the conversation. You can try and say, “don’t worry, we will invest in retraining” but this is also where any past Machiavellian efforts or brutal corporate action will bite you in the bottom. If there is doubt in your trustworthiness then they won’t follow but will resist. Whatever you do, as annoying as it is to be confronted by this, remember one thing: they are behaving perfectly rationally. You are the wally who left it late to deal with a highly anticipatable change and therefore caused the mess. If you want someone to blame, buy a mirror. Unfortunately, we all make mistakes. This is also why you must always consider not only your action today but the future consequences of such action. Having that trust can bail you out of your own facepalms.

However, we’re not in that position with the LFP scenario yet. We shall assume we have a team we can have an open and honest conversation with. We can anticipate where the future is heading with the map and we’re going to share this. We’re going to have that discussion and invest time and money in bringing our systems and security teams into this new world with new skills and new capabilities. We leave no-one behind and we certainly don’t turn up five years late to the battle.

Alas, we might still have a problem. There’s potentially another source of inertia and it’s a powerful one: the board. We know they have a concern but aren’t going to raise an objection ... yet. Now, that can either be just a general question on the change or it could be hiding something else. We need to explore that. It could be as simple as “Data for past success counteracts”, i.e. they’re used to us operating in one way and we’ve not been down this path. It could be concerns over “Loss of existing financial or physical capital” because we’ve invested in data centres. It could be a question of political capital, or that one board member has looked at the model and wants to focus on short term expected return rather than building a future. Whatever the cause, you need to find it and fix it. That’s one of your many jobs as the CEO. There are also many other forms of inertia and so for completeness, though not necessarily relevant in the LFP scenario, we will quickly run through the other types:

“Threat to barriers to entry”, the fear that a change will enable new competitors. Whilst that fear may be justified, the change is often unavoidable - it is already happening in the market and outside of your control. You cannot ignore it.

“Cost of acquiring new skillsets” is one of the more bizarre sources of inertia because delaying will often increase the cost, especially in a punctuated equilibrium where a shortage of skills is a common consequence. There are many ways to counter this and mitigate the cost - assuming this is done in a timely fashion - from developing skills in-house and the use of conferences to creating centres of gravity to attract talent.

“Suitability”, one reasonably common form of inertia, comes in the form of questions over whether it’s ready, e.g. is it ready for production, is the market ready for this, are customers ready? The best way to counter this is through weak signals and examination of the components (e.g. using the cheat sheet).

“Lack of second sourcing options” is often a valid concern but can be used to disguise other forms of inertia. Back in 2008, it was not uncommon to hear a company say without irony something of the form - “We’re an Oracle shop. We’ve thought about using public cloud but were worried about the potential for getting locked in with Amazon. We want to see more choice”. If you can overcome the irrational side of the debate then this is all about supply chain management, trade-offs and use of standards where appropriate. There are a wide range of techniques to mitigate it.

“Lack of pricing competition” is another reasonable concern which really points to how well the market is functioning. Do we have single or multiple vendors? What are the switching costs?

“Loss of strategic control” is usually wrapped up with fears of letting go and in the cloud space led to the idea of “server huggers”. However, there are some valid aspects to the concern around the buyer vs supplier relationship, assuming you have a market that is industrialising to a commodity. Most of this can be overcome with strategic planning and examination of different scenarios, e.g. what should we do if the supplier rapidly increases prices?

“Declining unit value” is usually a business concern related to a desire to maintain the past. The only way to counter it is through awareness of evolution and how markets aren’t static. You need to look at alternative opportunities (think Charles Handy’s 2nd curve) and try to avoid the spiral of death through endless cost cutting to recreate the past.

"Data for Past Success counteracts", an extremely common form of inertia particularly if the company has been successful. Most companies build up a significant stock of data that informs them how successful the past was. This will often be used to argue that the future will be more of the same. You need to take a leaf out of portfolio management and realise that your portfolio will change over time. Options analysis and risk management approaches can be useful here to avoid having all your eggs in one “past” basket.

“Resistance from rewards and culture”, hugely problematic for most companies and easily exploitable by competitors. Performance bonuses linked to selling an existing product set can be a significant source of inertia and weakness. You can manage this through HR by using higher rewards for adaptation, education, longer term thinking and promoting greater situational awareness.

“External financial markets reinforce existing models”, another common but tricky form of inertia to deal with. As discussed in the previous chapter, it’s important to understand your context and the role being played by others such as fund managers. There are certain techniques that can be deployed here to overcome market inertia including spinning a future story. 

Where are we?
We have a map of the landscape, we’ve applied basic economic patterns to anticipate change, we can see opportunity in co-evolved practice and obstacles in inertia to the change, and we have financial models and understand how we can go for a higher short term expected return or trade some of this for building a future position. Though we have inertia, we have an idea of the types and how to deal with them. Our awareness of the situation is expanding. This is good. This is how it should be.

In the above, I specifically state “anticipate change” because we cannot predict evolution over time (see chapter 7, section “the trouble with maps”). We must use characteristics or weak signals to give us an idea (a probability) of when the change will happen or even if it’s occurring today. Mapping is all about probability rather than time; the uncharted space is uncertain and the industrialised space is more known. To predict over time would mean we could say “in 453 days this [activity or practice or business model] will change from A to B”. As far as I’m concerned that is straying into the realm of charlatans, crystal ball fanatics and soothsayers. 

I often hear people counter with vague notions of time, e.g. “at some point in the future”. That is not predicting over time as time requires a “when”. I cannot, nor have I ever been able to, predict evolution over time. In over a decade of using mapping to explore economic systems, as far as I’m aware you can only anticipate the change and refine the “when” of evolution using characteristics (as above), weak signals and probability (including information markets). Of course, I’m fully aware that I have my own inertia caused by my past success with mapping and that the subject itself will evolve. Someone else may well find a way to map over time. I will no doubt dismiss it and be proved wrong. I do hope I have the wit to use my own tool on myself at that time. “When” will this happen? As I said, I can’t predict over time and the weak signals aren’t even strong enough for me to guess.

In terms of the strategy cycle, we’ve observed the environment and moved onto orientating around it with doctrine such as “manage inertia”. However, let us explore the cycle a bit further.

Getting Primitive
In this section, I’m going to look at how we organise around the LFP scenario and put down a few markers for strategic play that we might consider. Once I have a general outline, I’ll often loop around this several times with others to refine it, to create alternative scenarios, to alter course before finally deciding upon a choice of action. When it comes to organisation, I not only use a self-contained cell based structure (i.e. small teams) with the right aptitudes (finance, engineering, marketing) but for the last decade I’ve also been using attitude (pioneers - settlers - town planners).

I note that recently Kent Beck has been discussing a model called 3X - eXplore, eXpand and eXploit. This is excellent as there’s nothing like independent discovery to give a bit more substance to a topic. Pioneers eXplore, Settlers eXpand our understanding and Town Planners eXploit by industrialising, with each group operating in and maintaining its own space. This all deserves a good hat tip to Robert Cringely and his marvellous book “Accidental Empires”. Anyway, back to the map. We will focus on the platform change as we’ve previously been building our own systems and I’ll assume that we know how to do this. In figure 206, I’ve outlined the two obvious cells that we need to consider.

Figure 206, The structure


One cell refers to town planning around the platform. Obviously, someone else is providing the platform as a utility service to us but we still need to make sure we create highly industrialised processes around monitoring the platform, access control and how much we’re getting billed. This is not something new and chances are the provider will be offering tools to make it easy. However, there is a new set of practices that will develop around the financial cost of a function, re-use of functions and how we monitor the code itself. This is not so much related to the platform itself but to how we use it. In much the same way, the practices that changed industry were not so much about whether we paid the right electricity bill but about how we used electricity to do other things. What those new practices will be is somewhat uncertain. I can guess based upon experience of running a code execution platform (i.e. a serverless environment) with Zimki in 2005. But it’s no more than a guess.
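As a sketch of what a “financial cost of a function” practice might look like, here is a toy per-function cost monitor. The billing model (a metered rate per GB-second plus a per-invocation fee) mirrors common serverless pricing, but the rates, function names and traffic figures are all assumptions, not anything from the scenario.

```python
# Hypothetical sketch of monitoring the financial cost of individual
# functions on a metered code-execution platform. Rates, function names
# and traffic figures are invented for illustration.

PRICE_PER_GB_SECOND = 0.0000166667   # assumed metered compute rate
PRICE_PER_INVOCATION = 0.0000002     # assumed per-call fee

def monthly_cost(invocations: int, avg_duration_s: float, memory_gb: float) -> float:
    """Estimated monthly bill for one function: compute plus request charges."""
    compute = invocations * avg_duration_s * memory_gb * PRICE_PER_GB_SECOND
    requests = invocations * PRICE_PER_INVOCATION
    return compute + requests

# hypothetical per-function usage: (invocations/month, avg seconds, GB memory)
functions = {
    "render_microsite": (40_000_000, 0.120, 0.512),
    "bill_user":        (2_000_000, 0.300, 1.024),
}

for name, (calls, duration, mem) in functions.items():
    print(f"{name}: ${monthly_cost(calls, duration, mem):,.2f}/month")
```

The point is less the arithmetic than the practice: once every function has a monthly price tag, refactoring and re-use stop being purely engineering conversations.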

We can also at this point start adding some primitive gameplays. For example, we could - if we have decided to play a legacy game and not build for the future market – spread fear, uncertainty and doubt over the utility platform. Alternatively, we might play an open play around the co-evolved practices to help them evolve more quickly. We might do this to create a name for ourselves in this space, to build a “centre of gravity” around the skillsets needed in anticipation that this will become a lucrative market for us. I’ve outlined these two very simple plays in figure 207.

Figure 207 – Two basic plays


So, complying with my natural bias, I’m going to focus on creating a future position and market rather than exploiting a legacy position. I can do this because I haven’t yet left it too late to make that choice. I’m going to try and own those future co-evolved practices, build a centre of gravity and use open source to achieve this. I’ll accept the lower expected return in exchange for a stronger future position and for not building up my legacy. I’m now going to add my structure around the platform space onto my LFP map. See figure 208.

Figure 208 – Future orientated LFP map


The first thing to note is that the map is a bit messy and things seem to be in the wrong position, i.e. somehow my emerging architectural practice is above my microsite in terms of user needs, but to be honest the client hasn’t mentioned anything about this changing world. This is fine. All maps are imperfect representations and with a bit of fiddling around and moving pieces I can create something which appears to represent the situation more clearly. See figure 209.

Figure 209 – A clearer map.


This fiddling around with maps is all part of exploring a space. It allows us to challenge assumptions with others, to collaborate across multiple aptitudes (finance, engineering etc.) and even attitudes (pioneers, settlers etc.), to apply past lessons learned and to come up with a common understanding. We can now flesh out the space a bit more and, being mindful of our current capabilities (that’s assuming you know how many pioneers, settlers and town planners you have - most don’t), create the structure we’re going to use - figure 210.

Figure 210 – the structure.


Looping around and common problems
We now understand the landscape, the trade-off between short term expected return and future position, the structure needed, the main sources of inertia and some basics on the gameplay. Our situational awareness is constantly improving. The next thing we do is loop around the strategy cycle again and refine it. But isn’t that time consuming? Yes.

With experience, for a business that has a map then a single loop (what we’re covering in this chapter) could take anywhere up to 30 mins. Add a couple of loops, discussions between people and you could have easily blown an hour or two before you commit to the choice. Add to that the additional hour or so it might take to create that first map and the financial models and yes, you could be looking at half a day. That is of course an incredibly long time to go from concept to decision to act. 

To be honest, I can’t think of many examples where it has taken anywhere near that long. There are a few M&A activities (covering hundreds of millions) where I have taken a day or so but that is the exception and only occurs in fields that I’m not familiar with. Being locked in a room or given people to interview and asked the question “should we buy this company” often involves extracting information from others. Most of the time was spent developing an understanding of the landscape because very little existed. However, we should acknowledge that mapping does take some time and I don’t know how to make it faster. It’s one of the obvious weaknesses of mapping versus gut feel which can just be instant.

Another problem is complexity. First, mapping exposes the complexity of what exists. In the example of the Themistocles SWOT, it’s usually obvious to everyone that you should use a map and not a SWOT to run a battle. We understand this because we’re familiar and comfortable with geographical maps in much the same way that people in business are comfortable with SWOTs. However, there is a downside: a map is inherently more complex than a 2x2 such as a SWOT and this makes management more challenging and requires more thought. But what if you’re not familiar with maps?

Let us consider how Vikings used stories for navigation. Put yourself in the role of a Viking navigator, having spent 20 years learning epic tales and being trusted with steering the boat. Imagine someone says to you that you don’t need a story, you could use a map instead. The first time someone shows you a map, all you will see is a diagram with dots on it. You will have difficulty understanding how such a thing can replace your twenty years of epic tales. You’ll tend to react negatively because of experience, i.e. you know the stories work. You’ll have a natural human bias towards that which is comfortable and previously experienced. The map will be unfamiliar, even alien, and its complexity will overwhelm you. It will take many points of exposure and the realisation that a map would have been better than a story before most will put in the effort and thought necessary to use it.

Go back to the Themistocles SWOT. Imagine if battles had been run with SWOTs and someone came up and said, “I’ve got a map thing which might help”. The reaction will be overwhelmingly negative to begin with because it’s unfamiliar (not a SWOT) and complex. It can also threaten those who have spent 20 years learning how to “battle with SWOTs” or “navigate with stories” because at its heart, it is basically saying that they’ve been meme copying all this time without understanding. Into this mix you can throw the issue that exposing the complexity also exposes the assumptions made and opens decisions to more challenge - another thing people don’t tend to like. You’ve got quite a mountain to climb with mapping, which is probably why those with military experience (and some familiarity with situational awareness) have an easier path to it. The worst cases are normally those who have no military background, 20 years or so of “strategy” experience and an MBA.

However, let us assume you persevere: you create a map, you loop around the strategy cycle and over time (an hour or two, possibly more) through the application of thought a context specific path becomes clear. What now? I tend to double check it as a final step. I find that a business model canvas is brilliant for this as by that stage you should have everything you need to fill it in. Let us assume you decide to play the future game and roll the dice.

Opportunities multiply as they are seized.
You’ve decided to build the LFP system, using it as a springboard to develop a future position around the co-evolved practice that will emerge in the platform space. You’ve overcome your internal inertia through discussion, formed the teams and explained this to the board. You’ll sacrifice some short term expected return for a future position, with an eye to repackaging the solution and selling it to others whilst developing a new practice in the co-evolved space. You roll the dice and it comes up ... outcome 2. Oh, damn.

The LFP system isn’t going quite as well as we might hope. Fortunately for us, we didn’t build the in-house variant otherwise we’d be losing money right now and our discussions with the board might be getting more complex. The problem with our options analysis is we didn’t price in any variability or risk appetite. The in-house variant was riskier because it not only had the highest expected return but also the lowest - there was a wide spread. In this case outcome 2 is a net loss. We can chalk that up as a future learning lesson (or in my case, a past painful lesson). However, let us compare what happens with outcome 2 in both variants. Let us say that despite things not going so well, both marketing and engineering have dived in and come up with proposals. There are two options on the table. So, which, if any, do we choose?


1) Engineering says they could improve code efficiency by 75% for $350K

2) Marketing says they could add 400k extra microsite visitors for $150K each month

Let us go through each variant. In figure 211, I’ve added the financial impact for the proposals on the in-house variant.

Figure 211 – Financial Impact on in-house variant


I’ve started with outcome 2 (what is happening) as the base case and simply added the change. The first thing to notice is that the development proposal doesn’t make the case better, it makes the finances worse. Why? Because the cost is already sunk and spending money on refactoring doesn’t improve the financial case as there is nothing to be recovered through code efficiency. The only possible saving grace would be through releasing some hardware to get a quicker sale of it at a less depreciated value. That’s in the realm of wishful thinking in most cases. As sad as it is to say, it’s often difficult to justify spending more money on a refactoring effort in such circumstances. The marketing proposal gives us some uplift. At least it recovers some of the pain. Our final return is still below our normal expected return but we’re saving a bit of face. The combination of both development and marketing gives us the benefits of marketing combined with the loss of development. It’s far better to just do the marketing.
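The sunk-cost argument can be checked with a toy model. Only the proposal figures ($350K for refactoring, $150K per month for marketing, 400K extra visitors) come from the text; the sunk investment, baseline revenue and revenue per visitor are invented to illustrate the shape of the result.

```python
# Toy model of the in-house variant under outcome 2. Only the proposal
# figures come from the text; the rest are assumptions for illustration.

SUNK_INVESTMENT = 2_000_000         # hypothetical up-front build cost, already spent
BASELINE_REVENUE = 1_500_000        # hypothetical outcome-2 revenue over the period
MONTHS = 12
REVENUE_PER_VISITOR = 0.50          # hypothetical
EXTRA_VISITORS_PER_MONTH = 400_000  # marketing proposal

def net(refactor: bool, marketing: bool) -> float:
    revenue = BASELINE_REVENUE
    cost = SUNK_INVESTMENT
    if refactor:
        # infrastructure is already paid for, so code efficiency recovers nothing
        cost += 350_000
    if marketing:
        cost += 150_000 * MONTHS
        revenue += EXTRA_VISITORS_PER_MONTH * MONTHS * REVENUE_PER_VISITOR
    return revenue - cost

for r, m in [(False, False), (True, False), (False, True), (True, True)]:
    print(f"refactor={r!s:<5} marketing={m!s:<5} net={net(r, m):>12,.0f}")
```

Whatever figures you plug in, refactoring only ever adds $350K of cost in this variant, so marketing alone dominates - which is the point of the paragraph above.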

Ok, so let us repeat this exercise but now look at variant 2 – the public platform play. I’ve created the model in figure 212.

Figure 212 – Financial Impact on public platform variant


The first thing to note is we’re in much better shape because we didn’t have that initial sunk cost of investment. But then something odd happens. If you look at the development option, by spending money on refactoring we make a better return! A huge return! Hang on, how’s that possible? Well, simply put, we’re paying for consumption of our utility code execution environment (such as AWS Lambda) based upon use. Make the code more efficient and you pay less. There is suddenly a financial reason for refactoring code. There are many other benefits with such platforms around consuming services and code re-use, but the changes to the way we write, refactor and monitor code are significant. This is what co-evolution is all about and in this case, it’s the collision between development and finance.

The second thing to note is that marketing is a net loss. How is that possible when in the in-house variant it’s positive? On a consumption basis, the cost (including not only marketing but operation) for each new user marketing acquires significantly exceeds the revenue they create and so it’s a loss at this price. But in the first variant, most of the costs have already been spent in the initial upfront investment. In which case, given we’ve already spent most of the money, we may as well spend a little bit more to get the revenue. Hence the divergence here. The marketing proposal makes sense in the in-house variant because you’ve already blown most of the cost, but it doesn’t in the second because there’s a direct linkage of actual cost to revenue.

But hang on, the third option of both marketing and development looks better than all of them. How can that be? In this case, the reduced cost of each user on the service (because of the refactoring, i.e. the development effort) means that the total cost per user (i.e. marketing plus operational) is now less than the revenue they create. Hence the last option gives us the best choice and that’s where we invest. This shift towards utility platforms and billing at the functional level fundamentally changes your entire approach to investment in projects. Refactoring suddenly becomes a financial consideration. The true costs (not just of acquiring but of operating) of marketing are exposed. Where you invest changes. Hence, we’re already starting to experience some of those co-evolved practices and this looks like a big change. In fact, I know it’s going to be enormous which is why I created that first platform back in 2005 but, as you’ll come to learn, these opportunities jump at you when you embrace the future.
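The same toy model, rebuilt for the consumption-billed platform variant, shows the reversal. Again, only the 75% efficiency gain, the $350K refactoring cost, the $150K per month marketing spend and the 400K extra visitors come from the text; the traffic, revenue and metered cost figures are assumptions chosen to illustrate the divergence.

```python
# Toy model of the public platform variant, where operational cost is
# metered per use (think AWS Lambda style billing). Only the proposal
# figures come from the text; everything else is assumed.

MONTHS = 12
BASE_VISITORS_PER_MONTH = 1_000_000   # hypothetical
EXTRA_VISITORS_PER_MONTH = 400_000    # marketing proposal
REVENUE_PER_VISITOR = 0.50            # hypothetical
METERED_COST_PER_VISITOR = 0.45       # hypothetical consumption cost per visitor

def net(refactor: bool, marketing: bool) -> float:
    visitors = BASE_VISITORS_PER_MONTH * MONTHS
    fixed = 0.0
    cost_per_visitor = METERED_COST_PER_VISITOR
    if refactor:
        fixed += 350_000
        cost_per_visitor *= 0.25   # 75% more efficient code => 75% lower metered cost
    if marketing:
        fixed += 150_000 * MONTHS
        visitors += EXTRA_VISITORS_PER_MONTH * MONTHS
    return visitors * REVENUE_PER_VISITOR - visitors * cost_per_visitor - fixed

for r, m in [(False, False), (True, False), (False, True), (True, True)]:
    print(f"refactor={r!s:<5} marketing={m!s:<5} net={net(r, m):>12,.0f}")
```

With these figures, refactoring alone is a big win, marketing alone is a loss (each acquired visitor costs $0.375 to acquire plus $0.45 to serve against $0.50 of revenue) and the combination beats both, because refactoring drops the serving cost to $0.1125 per visitor.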

But why didn’t I continue and rebuild the platform after the parent company decided it wanted to go elsewhere? Well, I spent a bit of time working on printed electronics and then met an astronaut, but that’s the next chapter. The one thing I want you to remember from this discussion is that spreadsheets are wonderful but they’re not a substitute for situational awareness. Loop through the cycle, understand your landscape, anticipate change, manage inertia, structure around it and then apply tools, choices and biases to help you decide where to act. Maps aren’t a substitute for thought; they’re an enabler of it. By now you should be thinking of how you can use maps to communicate across finance, engineering, operations, purchasing and strategy, from anticipation of change to organisational structure. As you’ll discover soon enough, this is only the beginning.