Wednesday, September 07, 2016

Keeping the wolves at bay

Chapter 8

[Draft version. The completed version is on Medium]

To keep funding my research, I took a few more paid gigs, which basically meant becoming a gun for hire.  Cloud computing was colliding with the technology scene and there was plenty of confusion about it.  This meant a constant stream of conferences - including some that actually paid - along with plenty of opportunity for piecemeal work.  This wild west also resulted in some fairly shady practices and exploitation.  I tried not to cause harm and instead gave time to community events like Cloud Camp London.  Hang on, don’t you either decide to do harm or not?  Where’s the ambiguity?  My problem was simplicity.

Making it simple

One of the issues with mapping is that people found it complex.  This is not really that surprising because you’re exposing people to a world they are unfamiliar with.  It takes time and practice.  However, confusion over cloud computing had also created plenty of opportunity for a new way of thinking.  Alas, piling complexity onto an already confused person doesn’t help them, hence I looked for ways to simplify, to make it more palatable and more familiar.  I started spot painting.

Spot painting companies

To help people understand maps, I produced mini cheat sheets (see figure 73) for each of the stages of evolution and then, rather than produce a map, I would paint over whatever existing diagrams the client had.  I’ll use the following figures from a Butler Group conference in 2008 to explain.  I’ve taken the liberty of updating the older terms of innovation & bespoke to use the modern terms of genesis and custom built.

Figure 73 — A cheat sheet for commodity


The cheat sheet had a number of features.  It had the evolution scale (point 1) to provide an idea of what we were talking about (e.g. commodity) and how it fitted into evolution.  It had a basic list of primary characteristics (point 2) and a benefit curve (point 3).  This benefit curve was a reflection of the changing differential benefit of an activity to an organisation e.g. research was an overall cost, commodity provided little difference and the differential value of products declined as they became more widespread.  I then used these mini cheat sheets to annotate existing business process diagrams – see figure 74.

Figure 74 — Annotated diagram



These diagrams have certain advantages and disadvantages over maps.  First, they are more familiar and unthreatening i.e. you can take an existing process diagram and colour it in.  They certainly (in combination with the cheat sheets) help you question how you’re building something.  But, as they have no anchor, there is no position relative to a user, which means people don’t tend to focus on the user.  Also, movement cannot be clearly seen but has to be implied through the changing of colours.  These diagrams enabled me to introduce the concepts of evolution but, without position and movement, they were unhelpful for learning economic patterns and forms of gameplay.  However, the simplicity made them moderately popular with a few clients.

Taking it too far

Unfortunately, I didn't stop there.  The next diagram I’m a bit loath to show.  I wasn’t trying to cause harm and I hope it hasn’t.  In order to teach mapping, I simplified the evolution curve and focused on the extremes – see figure 75 from the Butler Group ADLM conference 2008.

Figure 75 — Polar opposite IT


The idea conveyed was one of IT consisting of polar opposite extremes, which is perfectly reasonable (Salamon & Storey, Innovation Paradox, 2002).  These extremes of the uncharted (chaotic) and industrialised (linear) domains exist.  But there’s also the transition in the middle which has different characteristics.  The graph certainly helped introduce concepts of evolution, such as why one size doesn’t fit all, and it proved quite popular due to its simplicity.  The danger occurs if you take it too literally and start organising in this manner.  In my defence, I did publish articles during the time that emphasised you needed to deal with the transition, though it’s fair to say I could have been much clearer, often reducing entire concepts to single lines (see Exhibit 1).

Exhibit 1 : Lack of clarity crimes committed by the Author.
Butler Group Review, Dec 2007, The Sum of all Fears

While the use of utility services removes obstacles to new IT innovations, it may create an organisational challenge for IT departments. Innovations are dynamic problems and require dynamic methodologies such as agile development and a more worth-focused VC-like approach to financing. By contrast, the use of utility services requires a focus on cost, standards, and static methodologies. Unless you intend to stop innovating and give up on the long-term source of profit for any organisation, then the IT department must find a way to manage both of these extremes. As commoditisation is ongoing, you’ll also need to continuously deal with the transition between these two extremes. To make matters worse, as “X as a service” grows, barriers to participation in many industries will reduce, causing greater competition and accelerating the rate of new innovations into the marketplace. 

Overall, the pattern suggests reductions in non-strategic costs, more competition for information businesses (including competition from consumers), a faster rate of release in the marketplace, and increasing pressure on IT as it deals with the conflicting demands of two polar opposites in focus.

The problem with simple

If I believed these simple versions to be unhelpful then why did I use them?  It’s a question of balance and a trade-off.  The problem is Ashby’s law of requisite variety.  Basically, the law states that in a stable system the number of states in its control mechanism must be greater than or equal to the number of states in the system being controlled i.e. the controlling mechanism must represent the complexity of what is being controlled.  Organisations are very complex things and, whilst mapping provides you a window onto this, you need a management capability able to cope with that complexity.

There is unfortunately another solution to Ashby’s Law.  Rather than cope with complexity, you pretend that what is being managed is simple.  We tend to like things such as 2x2 diagrams not because they represent reality but because they obscure it and hence are simple to understand.  We trade off our ability to learn and to understand the environment for an illusion of simplicity and easy manageability.  This is why we use one size fits all approaches or apply KPIs (key performance indicators) across an entire organisation even when they are not appropriate.  When it comes to the KISS principle, do remember that keeping it simple can make us act stupidly.

Eventually, I was faced with a choice.  Do I keep it simple, thereby making it more accessible, and just accept the flaws, or do I take a slower path and try to push organisations towards a higher understanding of position and movement?  This opened the door to another compromise.  I could also do the heavy lifting for others!  I could just give them the result!  However, this would make them dependent upon me, the normal consultant path.  My purpose was to free people from the shackles of consultants, not to chain them up even more.  This compromise was out of the question.  I’d like to think that I stood my ground here but, with almost no-one mapping, bills mounting and clients taking an interest in the simplified concepts, it’s fair to say that I was starting to wobble.

Finding my mojo

My salvation was a piece of paid work that I’m particularly fond of.  It concerned the issue of efficiency versus effectiveness and, to have any hope of explaining it, we need to introduce three concepts – worth based development, pricing granularity and flow.

Worth based development

In 2003, the company that I ran built and operated small sized systems for others.  There were no big systems; these were more of the £100k - £2M scale, covering a few million users.  Our clients usually wanted to write a detailed specification of exactly what they needed to ensure we delivered.  That doesn’t sound too bad but, even at this small scale, some of the components in these projects would be in the uncharted space and hence no-one knew exactly what was wanted.  Unfortunately, back then, I didn’t have the language to explain this.  Hence we built and operated the systems and inevitably we had some tension over change control and arguments over what was in or out of a particular contract.

During one of these discussions, I pointed out to the client that we were sitting around a table arguing over what was in or out of a piece of paper but not one of us was talking about what the users of the system needed.  The contract wasn’t really the customer here; the client’s end users were.  We needed to change this discussion and focus on the end user.  I suggested that we should create a metric of value based upon the end user, something we could both work towards.  The idea fell on deaf ears as the client was preoccupied with the contract but at least the seed was planted.  It wasn’t long after this that another project provided an opportunity to test the idea.  The client gave me a specification and asked how much it would cost to build a system to do this.  I replied – “How does free sound?”

They were a bit shocked but then I added “However, we will have to determine a measure of value or worth and I’ll get paid on that”.  There was a bit of um and ah but eventually we agreed to try out this method of worth based development.  In this case, the goal of the system was to provide leads for an expensive range of large format printers (LFPs).  The client wanted more leads.  Their potential end users wanted a way of finding out more about these printers along with a way of testing them.  I could build something which would marry the two different sets of needs.  But rather than the client paying up front and taking all the risk, I would build it for free and take a fee on every new lead created.

We (as in the client and my company) were no longer focused on what was in or out of a contract but on the single task of creating more leads.  We both had an incentive for this.  I also had a new incentive for cost effectiveness because the more efficient I made the system, the more profit I retained.  We agreed and so I built and operated a system which enabled people to upload an image, test it on a large format printer and get delivery of their print plus information on the kit’s performance plus a sales call.  The system soared.

In three months we had generated more leads than the client normally had in a year and this was accelerating.  It was stunning.  The client’s revenue was rocketing but so was my revenue as the system was based upon a metric of leads.  The more success they had, the more success I had.  It was a win-win situation.  Alas, this actually created two problems and one headache.

The problems were caused by the client being unprepared for this level of interest and by internal budgeting systems that weren’t designed to cope with such variable success.  What has budgeting got to do with this?  Well, the client’s success was more leads which translated into more revenue.  This was good from a budgeting point of view.  But the more success the client had, the more my fee increased as it was also based on leads.  This was bad from a budgeting point of view.  The system became so successful that it exceeded an internal budget figure the client had set for costs and this caused an internal conflict with demands to switch off the system until new budget was allocated (a very lengthy process).  Switch off a revenue generating system because it’s doing better than expected and has passed some arbitrary budget figure?  This is what happens when an inflexible one size fits all approach hits reality.

Before you go “this is daft”, actually it’s not.  Over time companies tend to build up a body of work and processes – the corporate corpus – designed to stop past failure.  It’s all done with reasonable intentions: the desire to spend money effectively and the desire to know resources are being well used.  That mass of good intentions is often the cause of many problems when you try to change the system.  The corpus can become a zombie, killing off innovation whenever it is found.  I had attempted to change the system by introducing a worth based approach and I should have known that this would cause tensions with the corpus.  I learned that lesson quickly.

Today, these worth based techniques are normally called “outcome” based or something of that ilk.  I’ve used them many times over the last decade; in fact, I prefer them.  Whilst they tend to solve the issue of an excessive focus on contracts, they have invariably hit other roadblocks such as a client not being able to describe the value or purpose of the system, or even conflict and politics within internal processes.  You need to be aware of this and mitigate it.

Those were the problems - lack of preparation, the corporate corpus - but the headache that worth based approaches caused was always mine.  There was some financial risk associated with these projects and some investment needed.  I had to be concerned with not only the development but also the operations.  This included lots of capital intensive investment along with costs that either weren’t truly variable or could only be guesstimated at.  To minimise the risk we shared data centres and other common components but in a large heterogeneous application environment this just complicates the allocation of costs.  How much a user visiting our application would cost us in terms of compute, power and data centre usage was an incredibly tough question.

In my risk models, we also had no clear way of determining operational costs as the system scaled.  We had to make lots of estimates on stepwise changes and how much compute resource would be used by an application we hadn’t yet built.  The financial model was more akin to art than any form of science.  Some of that uncertainty certainly ended up as “padding” in the metric e.g. the price per lead that I would charge.  Other areas had better cost models.  In the LFP example above, distribution systems and even printing were more variable (i.e. price per print or price per package) because we had experience of running an online photo and printing service.  This brings me to the next topic of pricing granularity.

Pricing granularity

With a worth based approach, I have a strong incentive to: -

  • reduce the operational cost of the project because the cheaper it is, the more profit I make.
  • provide reliability because if the system went down, I wasn’t making any money.
  • ensure the system maximises the value metric, which in the LFP case was "generating leads".

But I also had questions on where to invest.  In the case of LFP, it was doing very well and so I decided to invest an additional $100K.  But where do I best put the money?  Improving the site reliability?  Reducing the operational cost of the application through better code?  Maximising the number of users through marketing?  Improving the conversion of users to leads?  Which choice brings me the better return?  This is particularly tough to answer if you can’t effectively determine the operational cost of an application beyond hand waving or if other data is also guessed at.

One of the huge benefits of Zimki (our platform as a service play) was not only its serverless nature and how you could simply write code through an online IDE but also that its pricing granularity was down to the function.  Any application is nothing more than a high level function that calls other functions.  If I developed a function in Zimki, then whenever that function was called I could see exactly how much it had cost me.  I was charged on the network, storage and compute resources used by that function.  This was quite a revelation.  It changed behaviour significantly because suddenly, in the sea of code that is my application, I could find individual functions that disproportionately cost me more.  I’ll talk more about this change of practice in the next chapter but for now, just being aware of it is enough.

So, for a developer on Zimki, I had price granularity down to the running of a single function.  As far as I know this was unparalleled in the world of IT and we didn’t see the likes of it again until AWS Lambda.  Now, obviously I was also the provider of Zimki and behind the scenes sat a complex array of basket of goods concepts and all manner of financial instruments to provide those cost figures.  But this was abstracted from the developer.  All they saw was a cost every time their function ran, no matter how much it scaled.  There was no capital investment and this turned the operational cost of an application into a manageable variable.
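
To illustrate why this level of granularity changes behaviour, here is a minimal sketch of per-function metering.  This is not Zimki’s actual API; the resource prices, function names and record structure are all invented purely for illustration.

```python
# A minimal sketch of per-function cost metering. This is NOT Zimki's actual
# API - the resource prices and record structure are invented for illustration.

COMPUTE_PRICE = 0.000002   # hypothetical price per ms of execution
STORAGE_PRICE = 0.0000001  # hypothetical price per byte read or written
NETWORK_PRICE = 0.0000003  # hypothetical price per byte transferred

calls = []  # one record per function invocation

def record_call(function_name, compute_ms, storage_bytes, network_bytes):
    """Meter a single function call and record what it cost."""
    cost = (compute_ms * COMPUTE_PRICE
            + storage_bytes * STORAGE_PRICE
            + network_bytes * NETWORK_PRICE)
    calls.append({"function": function_name, "cost": cost})
    return cost

def cost_by_function():
    """Aggregate cost per function, most expensive first."""
    totals = {}
    for call in calls:
        totals[call["function"]] = totals.get(call["function"], 0.0) + call["cost"]
    return sorted(totals.items(), key=lambda item: item[1], reverse=True)

# Example: two hypothetical functions within the same application
record_call("render_homepage", compute_ms=12, storage_bytes=2_000, network_bytes=50_000)
record_call("resize_uploaded_image", compute_ms=900, storage_bytes=5_000_000, network_bytes=8_000_000)
print(cost_by_function())
```

Once every function call carries a price, the handful of functions that dominate the bill stand out immediately in the sea of code, and that visibility is what changed the practice.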

Flow

What we’re now going to do is combine the ideas of worth based (outcome) development and pricing granularity to introduce an idea known as flow.  In order to do this, we’re also going to have to use the concepts of scope and of a map having multiple users, as introduced in chapter 7.  After this, I’ll show you how flow was used to question efficiency vs effectiveness and why those simplified maps (e.g. the spot diagrams) are ok but not ideal.

Revisiting LFP

To begin with, we’re going to revisit the LFP project but with a map and the knowledge of what a utility platform can bring.  In figure 76, I’ve created a map of the worth based LFP project.  Back when we were working on the project, I hadn’t developed the mapping concept fully and so this is post event analysis.  I won’t mark up points on this map; hopefully you’ve enough experience now to start reading them.

Figure 76 — Map of the worth based project


The map begins with our client who has a need for more leads and ultimately for companies buying their product.  The conversion from lead to actually purchasing a printer is beyond the scope of this project as that was within the client’s sales organisation; we’re focused solely on generating leads.  The other type of user in this map is the consumer who hopefully will buy one of these expensive printers.  They have different needs: they want to find out about the right sort of printer for their commercial operations and to test it before buying something they will use.  At that time, this was all done through onsite or showroom visits or glitzy brochures.  We aimed to provide an online mechanism for the consumer to find out about the printer (a microsite) and to test it (the testing application).

The test would be a high resolution image that the potential customer would upload, which would then be printed out on the printer of their choice.  Their poster (this was large format) would be distributed to the potential consumer along with a standard graphical poster (showing the full capabilities) and relevant marketing brochures, and a sales call would be arranged.  Each of the components on the map can be expanded into more detail if we wish e.g. platform needs compute which needs a data centre, but this map is good enough for our purpose.  The platform space was the source of my headaches due to my inability to provide a variable operational cost for an application.  But the platform space was evolving towards more of a utility service – in fact, I was the person causing this.

So, let us look at the map again but move further into the future, to a point where a utility platform has emerged.  I’m going to add some financial indicators onto this map.  See figure 77.

Figure 77 — Finance of the worth based project


From the map, we hope to have visitors to our microsite which will extol the virtues of owning a large format printer and hopefully persuade some of these visitors to go and test one out.  The act of turning a visitor into an actual lead requires the user to test a printer.  So we have multiple conversion rates e.g. from microsite to testing application and from visitor to lead.  At the start these will be unknown.  We can guess.

Normally, operating a microsite incurs all those hard to calculate costs but in a utility platform world, your application is simply a function running on the platform and I’m charged for use.  The operational cost of my microsite is basically the number of visitors x the average cost of the microsite function.  Remember, an application consists of many functions and users can navigate around it, which means some “wandering” users turn out to be more expensive than others.  But we can cope with that by taking an average for our microsite.

The same will apply to my testing application but in this case there will be direct visitors plus converted visitors from the microsite i.e. those we’ve persuaded of the benefits of LFP and hence encouraged to go and test out a printer.  Every use of the testing application (a function) will incur a cost.  The two function costs (microsite and testing application) could be wildly different depending upon what the applications did and how well the code was written but at least we had a granular price for every call.

I could now say (see the worked sketch after this list):
  • We have a number of visitors [V1] to the microsite
  • Each call to the microsite costs on average C1
  • The total cost of the microsite would be V1 x C1
  • Of the visitors V1, a percentage (the conversion rate R1) would visit the testing application
  • Each call to the testing application costs on average C2
  • The total cost of the testing application would be (V1 x R1) x C2
  • Of the (V1 x R1) visitors to the testing application, a percentage would try a printer (the conversion rate R2)
  • Those visitors who tried a printer (V1 x R1 x R2) are leads
  • Each lead incurs a distribution cost (C3) for the brochure and print, which also incurs a printing cost (C4)
  • The total cost of distribution and printing would be (V1 x R1 x R2) x (C3 + C4)
  • Each lead would generate a revenue of P1 (the agreed price)
  • The total revenue generated would be P1 x (V1 x R1 x R2)
  • The total cost of generating that revenue would be
    (V1 x C1)
    + (V1 x R1) x C2
    + (V1 x R1 x R2) x (C3 + C4)
  • Operating Profit =
    P1 x (V1 x R1 x R2) - the total cost of generating that revenue
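
To make that list concrete, here is a rough worked sketch of the model in code.  Every number is invented for illustration; these are not the real LFP figures and the variable names simply mirror the list above.

```python
# A rough worked sketch of the worth based (outcome) model above.
# Every number here is an invented assumption - not the real LFP figures.

V1 = 100_000   # visitors to the microsite
C1 = 0.002     # average cost per microsite function call
R1 = 0.10      # conversion rate: microsite visitor -> testing application
C2 = 0.01      # average cost per testing application call
R2 = 0.20      # conversion rate: testing application visitor -> lead
C3 = 15.00     # distribution cost per lead (brochure and print)
C4 = 5.00      # printing cost per lead
P1 = 60.00     # agreed price (revenue) per lead

leads = V1 * R1 * R2

total_cost = (V1 * C1                   # microsite
              + (V1 * R1) * C2          # testing application
              + leads * (C3 + C4))      # distribution and printing

revenue = leads * P1
operating_profit = revenue - total_cost

print(f"leads: {leads:,.0f}")
print(f"revenue: {revenue:,.2f}")
print(f"total cost: {total_cost:,.2f}")
print(f"operating profit: {operating_profit:,.2f}")
```

With the operational side entirely variable, the model scales up and down with the number of visitors; the only real gamble left is setting P1 high enough to cover whatever the conversion rates turn out to be.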

This is like manna from heaven for someone building a business model.  Certainly I had investment in developing the code but, with the application being a variable operational cost, I could make a money printing machine if I set the price (P1) right.  No big shocks and no capital investment step.  In fact, any investment could be directed to making that equation more profitable – increasing the conversion rates, reducing the cost of an application function call, getting more visitors etc.  Of course, this wasn’t the only path.  The visitor might not come to the microsite but instead go directly to the testing application.  There were a number of potential flows through the map.

When you look at a map, there can be many forms of flow within it, whether financial or otherwise.  It could be flows of revenue to the provider or flows of risk.  For example, if the utility platform dies due to some catastrophic event then it’ll impact my microsite and my testing application, which will impact the consumer needs and stop any lead generation, incurring a financial penalty to me in terms of lost revenue.  Whereas, if I run out of brochures, this impacts distribution and I have a choice on whether to send out the prints now or delay until the brochures are available.  In figure 78, I’ve given an example of a flow within a map from potential consumer through their need to microsite to testing application to distribution.

Figure 78 — Flow of the worth based project


It’s important to note that the interfaces between components in a map represent flows of capital.  Such capital can be physical, financial, information, knowledge, risk, time or social.  It could be anything which we can trade.  Often people talk about the marvellous “free” web services that they’re interacting with which provide storage for photos or online blogs or a “free” encyclopaedia.  These are rarely free.  You’re trading something whether it’s information for the service or social capital (e.g. loyalty to a scheme) or even just your time (e.g. to create new entries, to edit the content).  That activity that someone else provides that meets your needs has a price, even if you don’t visibly notice it.  

By using the concept of flow, it is relatively simple to build a financial model for an entire system. I’ve created the skeleton of such a model for the map above in figure 79.

Figure 79 — Building a financial model
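
As a rough illustration of what such a skeleton lets you do, here is a sketch that reuses the invented numbers from the earlier example (so every figure remains an assumption) and compares how different investments might move operating profit.

```python
# A sketch of exploring the same model: which investment most improves
# operating profit? The baseline and the uplift scenarios are assumptions.

def profit(V1, C1, R1, C2, R2, C3, C4, P1):
    leads = V1 * R1 * R2
    cost = V1 * C1 + (V1 * R1) * C2 + leads * (C3 + C4)
    return leads * P1 - cost

base = dict(V1=100_000, C1=0.002, R1=0.10, C2=0.01, R2=0.20, C3=15.0, C4=5.0, P1=60.0)

scenarios = {
    "baseline": {},
    "marketing: 20% more visitors": {"V1": base["V1"] * 1.2},
    "better code: 30% cheaper function calls": {"C1": base["C1"] * 0.7, "C2": base["C2"] * 0.7},
    "better microsite: R1 from 10% to 12%": {"R1": 0.12},
    "better testing app: R2 from 20% to 24%": {"R2": 0.24},
}

for name, changes in scenarios.items():
    params = {**base, **changes}
    print(f"{name:45s} operating profit: {profit(**params):>10,.2f}")
```

With these made-up numbers, extra visitors and better conversion rates move profit far more than cheaper function calls do.  The specific answer doesn’t matter; the point is that the model makes the comparison explicit rather than hand waved.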


Now back when we built LFP in 2004, there wasn’t a utility platform, I didn’t have maps and I didn’t have the concept of flow.  Instead my CFO and I had a mass of spreadsheets trying to calculate what the above did and to cope with all the stepwise investments and capital costs needed.  What was a nightmare then is now child’s play.

Whenever you build something novel, the game is to favour operational expense over capital as much as possible in order to reduce the risk of the system either not being used or growing rapidly.  With any worth based system, when you’re gambling on an uncertain outcome, you want to tie cost as closely as possible to the path of revenue generation.  However, there will always be some investment e.g. writing the application, marketing the microsite.  This sort of modelling can help you identify which options you should consider for the future.

The rage in the technology world today is all about “DevOps”, a moniker combining development and operations.  This shift towards utility platforms is starting to occur in earnest and over the coming years the worlds of finance and engineering will go through an exciting but disruptive marriage.  I do hope someone comes up with a better phrase than “DevFin” or “DevOps 2.0” or “NoOps” though.


Efficiency vs effectiveness

So there I was in 2008 with an understanding of the importance of maps and of the flow of capital within them.  This helped me explain efficiency versus effectiveness in one of my client’s projects that I was quite proud of.  There is unfortunately a problem.  I can’t explain it to you.

Hopefully, you’re discovering that maps are quite a powerful strategic tool.  The information they contain can be very sensitive.  Even in Government projects, the maps are rarely shared outside of Government itself.  I’m certainly not going to break the trust of a private client by exposing their dirty laundry.  This is why many of the maps that I use in this book are slightly distorted and don’t identify the original owner unless I was the one running the show.  I don’t mind you knowing all the mistakes and failings that I’ve made.  If you’re uncomfortable with that and you need the reassurance of “big company X does this with maps, here’s the map” then I suggest you stop reading or find someone else to help you.  Hopefully, you’ve got enough ideas from what I’ve written to justify your time invested so far.  

The next section covers a hypothetical that blends a story related to a modern company reset into a technology context to help tell a past story.  Yes, maps are part of storytelling or, as J.R.R. Tolkien said of writing The Lord of the Rings, “I wisely started with a map”.

Our story begins, as many do, with a challenge.  The company was expanding and needed to increase its compute resources.  It had created a process flow diagram for this (figure 80) which ran from a request for more compute through to the actions needed to meet that demand.  The process however had a bottleneck.  Once servers were delivered at “goods in” they needed to be modified before being racked.  This was time consuming and sometimes prone to failure.  They were focused on improving the efficiency of the process flow as it was important for their future and for revenue generation.  A proposal was on the table to invest in robotics to automate the modification step.  Whilst the proposal was expensive, the benefits were considerably greater given the future revenue (of a not insignificant scale) that was at risk.

Figure 80 — The process flow




I want you to consider the above for a moment and decide whether a proposal to invest in improving the efficiency of an inefficient process makes sense, particularly when the benefits of the proposal vastly outweigh the costs and your future revenue stream is at risk.

I had met the company in 2008, talked about the concept of evolution and introduced the “spot” diagram.  We agreed to take a look at the proposal.  I’ve taken those same first steps (see figure 81) and “spotted” the process.  Whilst the ordering and goods in processes were quite industrialised, the modify part of the process was very custom.  Have a look at the figure and see if you notice anything interesting or odd before continuing with this story.

Figure 81 — Spot diagram of process.


What was interesting to note was that the racks were considered custom.  On investigation, it turned out the company had custom built racks.  It had always used custom built racks, it had a friendly company that even made them for it and this was just part of its corporate corpus.  This was a body from a long gone past that still haunted the place.  Even in 2008, racks were standardised. 

The modifications were needed because the standard servers that they bought fitted standard racks.  They didn’t fit the custom built racks that had been so lovingly built.  Hence additional plates needed to be added, holes drilled etc.  Let us be clear, on the table was a proposal to invest in robotics in order to customise standard servers to fit into custom built racks which the company was also paying to have built.  Does the proposal still make sense?  Is it a good investment?  Are there alternatives?

Before you shout “use standard racks”, let us map this space out starting from the user need of more compute.  This actually covers two needs: the ordering of a server and the racking of the server once it has been delivered.  Of course racking (i.e. mounting, adding power and cabling) needs the server to be modified.  Both of these chains are connected at the point of goods in – see figure 82.

Figure 82 — Mapping the proposal


Now the question is whether we should just use standard racks.  This obviously moves racks towards commodity (which is where they should be) and the modification part disappears, though we still have mounting, cabling and power.  It seems a lot better (see figure 83).

Figure 83 — Using standard racks


However, you still have a problem, which is the legacy estate.  Are you going to migrate all the racks?  What about our sunk costs?  How are we going to maintain our existing systems?  There will be a long list of reasons to counter the proposed change.  Before you go “this is daft”, remember the budget example and the corporate corpus.  Don’t expect to change a system without some resistance.

In this case, despite resistance, we should go a step further.  Computing was becoming a commodity provided by utility services.  We can simplify this whole flow by just adopting those utility services.  We don’t need to think about robotic investment or even converting to standard racks (itself a cost which might be prohibitive).  This entire chunk of the value chain should go, along with any additional costs it might be hiding (see figure 84).

Figure 84 — Hidden costs and removing parts of the value chain


These hidden costs can be vast.  Today, when someone provides me with a proposal for building a private cloud, then the first question I ask them is what percentage of the cost is power?  The number of times I’ve been told “that’s another budget” is eye opening.  Power is a major factor in the cost of building such a system.  However, that’s another story for later and I’m digressing.

The issue above is that we started with a proposal to invest in robotics based upon improving the efficiency of an existing process.  It sounded reasonable on the surface but if they had taken that route they would have invested more in maintaining a highly ineffective process.  In all likelihood, it would have exacerbated the problem later because the corporate corpus would have expanded to include this.  If some future person had said “we should get rid of these custom racks” then the response would have been “but we’ve always done this and we’ve invested millions in robotics”.

The “efficient” thing to do might be investing in robotics but the “effective” thing to do was to get rid of this entire part of the value chain.  It’s a bit like the utility platform area: I can either invest in making my infrastructure and platform components more efficient through automation or I can just get rid of that entire part of the value chain.  Often the “efficient” thing to do is not the “effective” thing.  You should be very careful of process efficiency and “improvement”.  You should also be aware of the corporate corpus.

The company in question was a manufacturing company, the problem had nothing to do with computing and yes, they were about to spend many millions making a highly ineffective process more efficient.  They didn’t, they are alive and doing well.  I also kept the wolves at bay.  That’s what I call a “win-win” except obviously for the vendors who lost out.

Before we move on

In the last two chapters, we’ve been sneaking around the strategy cycle again covering mainly purpose and then landscape.  You should be familiar enough with the strategy cycle that I can represent it in a slightly different form just to reinforce the different types of Why (purpose and movement) and the connections between the parts in this book – see figure 85.  In the next section we will focus on climate including common economic patterns and anticipation.  We will keep on looping around this, sometimes diving into interconnections as we go.  Anyway, this will be the last time that I’ll mention that.

Figure 85 — The strategy cycle



We should recap on some of the ideas from this chapter.

Landscape

  • Be careful of simplicity.  There’s a balancing act here caused by Ashby’s Law.  Be aware that you’re often trading your ability to learn for easier management.  In some cases, you can simplify so far that it becomes harmful e.g. one size fits all, group wide KPIs.
  • The map contains flows of capital which are represented by the interfaces.  There are usually multiple flows in a single map.  Such capital can be physical, financial, information, risk, time or social.  It could be anything which we trade.
  • Maps are a means of storytelling.  Despite my dour attitude to storytelling (especially the hand waving kind of verbal garbage often found in strategy), maps are a form of visual storytelling.

Doctrine

  • Focus on the outcome, not the contract.  Worth (outcome) based tools can be useful here but be warned, they can also expose flaws in the understanding of value and become stymied by internal procedures e.g. budgeting processes and inability to cope with variable charging.
  • Use appropriate tools.  When using maps, if I’m looking at financial flows then I’ll often dive into financial modelling when considering multiple investment paths e.g. focus on increasing visitors through marketing or the conversion rate from a microsite.  Equally, if I’ve identified multiple “wheres” that I can attack, then I’ll often dive into business model canvas to compare them.  Don’t be afraid to use multiple tools.  Maps are simply a guide and learning tool.
  • Optimise flow.  Often when you examine flows then you’ll find bottlenecks, inefficiencies and profitless flows.  There will be things that you’re doing that you just don’t need to. Be very careful here to consider not only efficiency but effectiveness.  Try to avoid investing in making an ineffective process more efficient when you need to be questioning why you’re doing something and uncovering hidden costs.  Also, don’t assume that an “obvious” change will be welcomed.  Beware the corporate corpus.
  • When it comes to managing flow, granularity is your friend.  Be prepared though; most companies don’t have anywhere near the level of granularity that you’ll need and you may even encounter politics when trying to find out.

Gameplay
  • Trading.  Maps are a form of knowledge capital and they tend to have value.  Don’t expect people to just share them with you.  You’ll need to trade or create your own.
----

Next Chapter in Series [to be published soon]
GitBook link [to be published soon]

Saturday, September 03, 2016

Finding a new purpose

Chapter 7

In 2007, I was at home.  Unemployed.  I twiddled my thumbs for a couple of days, did some DIY and then set about thinking on my future.  This is code for watching my bank balance plummet whilst not doing anything useful.  I was exhausted; running a company, inspiring a future and being broadsided had taken their toll.  However, whilst I wasn’t ready to immerse myself in a new role, I couldn’t just sit idle.  So, I undertook a few paid speaking gigs, did some advisory work, wrote a few articles, ghost wrote a few more and researched.  At least it would keep the wolves at bay for a bit.

I was convinced that there was some mileage in the mapping concept but I had two major problems.  First, I had failed to create that bright future with it.   Second, I had no real evidence to support it.  I had collected data that hinted components evolved but the evolution axis was no more than a pattern that I had observed and talked about at Euro Foo in 2004.   Maybe it was completely wrong?  Maybe that’s why I failed?  Maybe that’s why no-one else seemed to be talking about these concepts?  I decided my library wasn’t big enough to answer these questions and became a reader at the British Library.  I collected, collated and trawled through a huge volume of written work in pursuit of my answers.  At the very least, it was keeping me busy and providing time to recoup.

As I read more into the subject of strategy, I noticed that disquiet over the field was palpable.  Phil Rosenzweig, in The Halo Effect (2007), pointed to the cause being a marriage of convenience: “Managers are busy people, under enormous pressure to deliver higher revenues, greater profits and ever larger returns for shareholders. They naturally search for ready-made answers, for tidy plug-and-play solutions that might give them a leg up on their rivals. And the people who write business books – consultants and business school professors and strategy gurus – are happy to oblige.”

I wanted to change this, to somehow give people the tools they needed to learn themselves by exposing that secret tome of strategy to everyone.  I wanted to be free of this marriage of convenience and I still believed there was a secret tome back in 2007 and that it was probably guarded in the halls of business schools.  I started to think about doing an MBA, shuddered at the expense and borrowed copious notes and books from friends who had.  However, I was disappointed.  Beyond basic concepts in financial, marketing and operational “strategy” there was no discussion of landscape or context.  Maybe the tome was guarded in the halls of strategy consultancies themselves?

I applied for a job with one of the most prestigious consultancy firms and was invited to a competitive interview process with dozens of other candidates.  We would be put through our paces in a number of rounds in a Darwinian battle, a survival of the fittest.  In my first round I was asked a question - “A news media company is looking at divesting itself of its print and distribution business. What things should it consider?”

I immediately started mapping out the landscape, pointing to opportunities and impacts from loss of control through physical capital, to provision of distribution as a public utility, to redirecting print capabilities into printed electronics - those large scale printers have the potential to be tomorrow’s Intel!  There was a wealth of potential but before making a choice we needed to understand the landscape more.  I started to dig, asking questions about the user, their needs and what we understood about the landscape.  I met a wall of silence combined with “it’s not relevant”.  The company had already decided to take this action.  It was part of its strategy.  My role was to give some input into how to achieve this.  I asked what this strategy was based upon and an argument ensued.  Needless to say, I didn’t make it past round one and was the very first to leave the competition.  Mapping had failed on its second outing.  So I carried on researching.

It was at this time that I was also becoming quite well known in certain technology circles as a speaker on open source, web 2.0 and cloud computing.  I kept being invited to more and more conferences to present and discuss technology changes within companies.  I was flattered but quickly discovered that I needed to keep things simple.  I was told the mapping concepts were just too ‘confusing’ and so I restricted myself to talking about the impacts in more general terms.  However, here I hit a snag.  General concepts, such as the world moving towards more utility provision of IT, were often brushed aside for lacking any understanding of ‘real’ business, and the maps I needed to demonstrate why this would happen were considered ‘too confusing’.  I felt increasingly trapped in a Paul Valéry paradox of "Everything simple is false. Everything which is complex is unusable”.  I found myself sitting in rooms listening to conversations of the form: -

CTO: “All the new servers are installed; systems are running fine”.

CIO: “Excellent. Apparently the latest thing is cloud, hence I’ve asked Simon to come along. According to this business magazine then numerous successful companies are considering future pilots that might use it.  We should look into it and whether it’s worth considering as part of our long term strategy.”

CTO: “We’ve already examined the subject.  Cloud just means virtualisation of the data centre.  The latest research I have says that virtualisation has entered the plateau of performance and provides an extremely efficient mechanism of infrastructure provision over our existing data centre technology.  Our technology partners have virtualisation based products in this space that we should consider buying.”

CIO: “Excellent work. Well let’s look at getting this up and running.  There’s some business interest and I’d like to tell the CEO we’ve been using cloud if it comes up in conversation.  We don’t want to be left behind in this technology war.  Any thoughts Simon?”

It sounded so simple but it was so wrong, my heart always sank. To explain why, I’m going to perform a mental translation that I started to do by converting IT speak into military speak. 

Captain: “All the new cannons arrived. We installed them and fired them this morning.”

Colonel: “Excellent. Apparently the latest thing is bombing hills, hence I’ve asked Simon to come along. According to General’s weekly then numerous successful military leaders are considering future campaigns that might use it.  We should look into it and whether it’s worth considering as part of our long term strategy.”

Captain: “We’ve already examined the subject.  Bombing hills just means using mortars. The latest research I have says that mortars have entered the plateau of performance and provide an extremely efficient mechanism of killing compared to our existing technology.  Our technology partners have mortar based products in this space that we should consider buying.”

Colonel: “Excellent work. Well let’s look at getting this up and running.  There’s some military interest and I’d like to tell the general we’ve been bombing hills if it comes up in conversation.  We don’t want to be left behind in this technology war.  Any thoughts Simon?”

There seemed to be an overwhelming predilection towards copying others, technology faddism and buying pieces of kit rather than dealing with the problems at hand.  There was no discussion of the users, the landscape or how it was changing.  When I would raise how cloud was simply the evolution of an existing act from product to more industrialised utility models, and as such was more a change of business model than buying some tech ... well, it was almost like I had spoken heresy in gobbledygook.

Business and IT both seemed to be operating in an environment that they did not understand, often with an assumption that buying more high tech wins the day.  But this is flawed.  Low tech can be used to overcome a high tech opponent that has poor situational awareness.  The U.S. Seventh Cavalry, with access to Gatling guns and “hi-tech” weaponry, suffered a severe defeat at the Battle of the Little Bighorn against bows, arrows and stone clubs.  Occasionally I would let my guard down and deep dive into the topic, thereby hitting the other side of Valéry’s paradox.  Nearly every time I did this, I was dismissed by the simple question “what evidence do you have that evolution works in this way?”

A new purpose

Unbeknownst to me, I had just been given a new purpose by others.  I had my own crusade, to explain topographical intelligence to the world of business and to provide an “uncommon sense to the common world of strategy”. It wasn’t quite as catchy as “Pre-shaved Yaks” but it became the title of my first failed attempt to write a book on mapping in 2007. 

I needed to demonstrate or disprove the concept of evolution in technology and mapping itself.  I had no clue how to do this but that didn’t stop me becoming a bit obsessed.  My beard grew longer and I’m pretty sure I was mumbling about mapping in my sleep.  The reason my purpose became all-consuming was that it had two other things that mattered.  First, it had a defined scope that was tangible and could be understood i.e. I was looking at the validity of this mapping technique.  Second, it also had a moral imperative; I was rebelling against the hordes of management consultants that enslaved us with 2x2s in this marriage of convenience!  It felt good.  I had: -

Purpose: Explain topographical intelligence to the world of business.
Scope: Demonstrate or disprove the concept of evolution and mapping.
Imperative: Rebel against the hordes of management consultants that enslave us by enabling ordinary people to learn.

Being mindful of this purpose, I could now start thinking about the potential users of mapping and try to define what their needs might be. The users would need some way of exploiting mapping, some way of learning how to map given the complexity of the topic and also some sort of confirmation or validation that mapping was based upon something sensible. There was a chain of needs from purpose to user need (the very anchor of mapping) which I’ve drawn in figure 59.

Figure 59 — Purpose



Given I had user needs, the very least I could do was map out that environment.  Taking “Confidence that mapping can benefit us” from above, I’ve created a map of what is involved in figure 60.  I’ll use this to describe some particular points on mapping itself.  One thing you will notice is that the x-axis I’m using here is slightly different.  Whilst I normally just use the descriptions for activities (genesis to commodity), in this case, because we’re talking about knowledge, I’ll add the descriptions for the different stages of evolution (see figure 8, chapter 1) as well.

Figure 60 — Map of mapping


From the map above:

Point 1 – from “confidence that mapping can benefit us” we had two high level user needs: a means to learn mapping and some form of validation.

Point 2 – learning mapping requires not only the ability to create a map of the landscape but also an understanding of common economic patterns, doctrine and even context specific gameplay.  Whilst common economic patterns are often discussed in a multitude of different economic sources, the issue of context specific gameplay is fairly unique and rarely covered.

Point 3 – the map itself is based upon user needs (anchor), which is reasonably well discussed, a value chain (position), which itself is a common topic in business books, but also evolution (movement).  This last topic was rarely discussed back in 2007 other than in vague and hand waving notions.  There were certainly concepts and competing hypotheses on how things evolved but no clear definitive path.

One of the first things that struck me was that there existed a chain of needs above my users. When I am a supplier of a component to others (e.g. providing nuts and bolts to a car manufacturer) then my map extends into their map.  However, it also extends into my own purpose.  In other words, any map is part of a wider chain of needs.  

In figure 61, I’ve drawn an extended map from my purpose through to my user and their needs.  I’ve reverted back to the more typical x-axis because you should by now be familiar with the idea that multiple types (activities, practices, data and knowledge) can be used on a map, and it makes the map less busy to just show the evolution terms for activities rather than for all types.

Figure 61 — The chain



From the map above:

Point 1 – we have my needs i.e. my purpose, my scope and my moral imperative.  This is my why of purpose expressed as a chain of needs e.g. be the world’s best tea shop or teach everyone to map.  Naturally, I’d hope that my purpose would lead to others doing something and hence there would be users.  In 2007, my scope was relatively novel as few seemed to be talking about mapping.  However, my imperative wasn’t quite so unique.  There were many rallying against the imposed consultancy industry.

Point 2 – whilst I hadn’t expressed this before, I had an unwritten need to survive, to make revenue and a profit.  This is a very common and well understood need.  In my case, I hoped that I could achieve this by meeting my users’ needs of either teaching them how to map or helping them create advantage over others. 

Point 3 – my users had needs themselves.  If my needs (i.e. purpose) didn’t fit in some way with the needs of my users, then this mismatch was likely to cause problems.  For example, if my highest purpose was to make profit rather than explain topographical intelligence, then I would be focusing on extracting money from my users (this is not one of their core needs) rather than providing a means of learning mapping and creating advantage (which is a core user need).  You should always strive to generate revenue and profit as a direct consequence of meeting users’ needs and providing value to them.   

There are a few other, subtler things worth noting about the map above.  First, my purpose is part of a chain of needs and as such it is influenced by the underlying components as they evolve.  Over time, if mapping and the related activities become more industrialised, then a scope of “demonstrate the concepts of evolution and mapping” ceases to be relevant.  Even my moral imperative might disappear if the world becomes one where everyone maps, learns about their environment and has rebelled against management consultants with their 2x2s.  If you think back to the strategy cycle, this is simply a reflection of the issue that as you act, as your landscape changes, then your purpose, scope, moral imperative and even how you survive have to adapt.  Nothing is permanent.

The second thing to note is that everything is evolving.  At some point in the future, I will need to adapt my scope not only because the underlying components have evolved but also because my scope itself has become industrialised.  There would come a point where you would be able to read endless free guides on how to map and even Wikipedia articles.  If at that point my scope isn’t something else designed to meet users’ needs and provide value to them, then I’ll be attempting to survive against free.

The final issue is the potential balancing act for conflict between different user needs.  I thought I had learned that lesson in my past doomed attempt to build a platform future by ignoring one set of very powerful users (the board), but I repeated the same mistake in my strategy consultancy interview.  I was trying to engage in a discussion on the environment whereas they needed a financial and HR analysis of the impacts caused by a disposal.  Any play I created may have been right but without the support of these users it didn’t matter.

This concept of conflict is worth exploring a bit more.  Let us take a trawl back through time and imagine you’re the boss of a hypothetical gun company just when we’re learning how to industrialise mechanical components.  We’re moving away from a world of highly customised mechanical components built by a cottage industry to having things like more uniform pipes and bolts.  Let us imagine that you’ve made a bold move and started to buy more standard bolts and pipes (for barrels).  You then use these components in the manufacture of your guns by combining them with your skills and practice as gunsmiths.  I’ve highlighted this in a map in figure 62.  Remember, it’s a hypothetical and I’ve no idea how to actually make a gun.

Figure 62 — The Hypothetical Gun company


You are the gun company (point 1) and you’re reliant upon bolts (point 2) from a company that manufactures them (point 3).  The first thing to note is that a map can cover many different organisations if you wish it to.  Each of those organisations could be expanded into more detail.  When you map an environment, you’re only looking at a fraction of a vast chain of needs.  Hence the importance of defining a scope that is tangible rather than attempting to map an entire industry in detail.  You will learn over time how to simplify maps but to begin with, keep your ambitions small.  Think small! (see chapter 4, doctrine).

In the above, I’ve highlighted that guns are evolving and heading towards more of a commodity.  This can create conflict with your own desire to survive and your shareholders’ desire for profit as the revenue per unit decreases.  Such change can be compensated for by volume but the desire is always to keep the same margin whilst increasing units.  We almost want the thing to become ubiquitous but be seen as unique.  There are ways of achieving this through branding i.e. persuading users that your “commodity” is somehow special or by adding new features to it.  It’s not just a gun, it’s a special gun that makes you popular with others etc.

At the same time, you want the components that you consume in manufacturing your gun to become more commodity like in order to reduce cost.  Obviously the shareholders of the bolt company would like to have volume operations but maintain the margin per unit.  They’ll be hoping their management use branding to try and persuade you that their “commodity” is somehow special.  It’s not just a bolt, it’s a special bolt that makes you popular with others etc. There will inherently be conflict throughout the landscape. 

But that conflict doesn’t even require another person.  Your own purpose can create its own conflict when faced with an evolving landscape.  Take for example my map of mapping above (figure 61).  My moral imperative was to rebel against the hordes of consultants that enslave us.  By definition I wanted mapping to spread far and wide.  But as mapping spreads, my ability to make revenue from teaching others how to map will ultimately diminish, especially as basic guides on mapping become free.  I could either pursue a path of “it’s not just a map, it’s a special map that makes you popular with others” or I would have to find another way of surviving e.g. selling context specific forms of gameplay rather than just teaching people how to map.

Fortunately, context specific forms of gameplay aren’t just one thing.  If I taught people how to exploit ecosystems with an ILC (innovate-leverage-commoditise) model, then I should expect that play to become industrialised over time.  However, mapping is itself a means of exploring and learning about new forms of context specific gameplay i.e. there should be a constant pipeline of new forms of gameplay as long as I was willing to learn.

I’ve drawn this map up in figure 63 below.  Whilst teaching mapping will ultimately industrialise (point 1), there is also a constant pipeline of gameplay (point 2) with new forms of gameplay emerging.  I could create a business with a strong purpose and, though it would have to adapt as components changed, there would be other opportunities for me to exploit.  Even if I open sourced the mapping method to encourage it to spread (which I did by making it all creative commons), I knew that I could create a future as an “arms dealer” of gameplay.

Figure 63 — Mapping the landscape.



There was a weakness however to this plan, caused by point 3.  The whole play would depend upon some sort of validation of mapping and at that time I had nothing to back up my evolution axis, no success stories and no users.  I also needed users with success stories to entice other users, because no one would risk their business on it without success stories.  It was a chicken and egg moment and I had nothing to encourage someone to try.

The trouble with maps

I had to find some way of either showing the evolution scale had merit or disproving it and hence getting on with my life.  I thought this was going to be easy and I couldn’t have been more wrong.  In his 1962 book on the Diffusion of Innovations, Everett Rogers explained a theory of how new technology spreads through cultures.  These innovations are communicated over time through various social structures from early adopters to late adopters (or laggards) and are consequently either spread through adoption or rejected in a society.  This spread is measured as adoption versus time through what are known as diffusion curves.  As Rogers noted, not all innovation spreads: even where an innovation has apparent usefulness, a number of factors can influence its adoption.  In 1991, Geoffrey Moore refined these concepts and noted that there was a chasm between the early adopters of an innovation and the early majority.  Many innovations failed to cross this chasm.  Numerous effects would impact the probability that the chasm would be crossed, from the positioning of the product to its target market, to distribution channels, to product pricing and even to marketing.

It seemed self-evident to me that as something diffused, crossing the chasm on the way to the mass majority, it would become more of a commodity.  All I had to do was find the percentage of adoption at which things on a diffusion curve started to evolve i.e. at what percentage did something become a product or a commodity? – see figure 64.

Figure 64 — When does a diffusing thing change?


Unfortunately, as simple as it sounded, it was just plain wrong.  If you take something like a smartphone and ask people whether it’s a product or more of a commodity, then today you’ll probably get a range of answers and some disagreement.  However, there are more smartphones in the US than people, so we can say it’s widely diffused despite the lack of clarity over whether it’s a product or a commodity.  But, if I ask people whether a gold bar is a commodity then they’ll say yes.  This is bizarre because only a tiny fraction of the population actually own gold bars.  On one hand, you have a thing which is widely diffused but not a commodity whilst on the other hand you have something which is uncommon but is a commodity.

I spent weeks collecting diffusion curves for different activities and found there was no clear correlation between adoption and when something became a commodity.  I was unable to make statements such as “when 10% of the population have this it’ll become a product”. Hence, I looked at the time axis.  Surely, if it wasn’t adoption then we must be able to measure this evolution over time?  I took the diffusion curves and hypothesised that we could measure over time when the transition between stages would occur e.g. the first five years would be genesis and in the next three years we would see custom built examples – see figure 65.
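To make that failed test concrete, here is a minimal Python sketch of the kind of check I was attempting.  The activities and percentages are invented purely for illustration (they are not my original data); the point is simply that if a reliable adoption threshold existed, the spread of values at each transition would be small rather than enormous.

```python
# Hypothetical test: is there a consistent adoption level at which
# activities become a product or a commodity?  Numbers are illustrative only.
from statistics import mean, stdev

# Adoption (% of population) at which each activity was judged to have
# become a product / a commodity -- invented values for illustration.
observations = {
    "telephone":  {"product": 5.0,  "commodity": 70.0},
    "television": {"product": 12.0, "commodity": 85.0},
    "gold_bars":  {"product": 0.5,  "commodity": 2.0},
    "smartphone": {"product": 8.0,  "commodity": 95.0},
}

for stage in ("product", "commodity"):
    values = [obs[stage] for obs in observations.values()]
    print(f"{stage}: mean={mean(values):.1f}%  spread={stdev(values):.1f}%")

# A spread that is as large as the mean suggests there is no single adoption
# threshold marking the transition -- which is what the real data showed.
```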

Figure 65 — When does a diffusing thing change?


However, when looking at the rate of diffusion, it turned out not to be constant, and comparisons over time demonstrated a wide variety of adoption curves and different timescales for how things evolved.  I was stuck.  I couldn’t seem to use time or adoption to measure evolution.

My problem was that I had been lulled into a belief that we somehow understood the process of change.  The popular view tends to be that innovations appear somewhat randomly, either through the efforts of a group of people or often by happenstance e.g. a fortuitous accident such as the discovery of inkjets through the placing of a hot syringe in a well of ink.  These innovations then diffuse as above, some succeeding and crossing the chasm whilst others fail.  We often have competing examples – AC vs. DC electricity or BetaMax vs. VHS – until one becomes more established and dominant.  Over time, the same innovation becomes a commodity.  It feels simple and logical.

However, the rate of diffusion is not constant and we cannot measure evolution over adoption.  Furthermore, whatever process was occurring was not always continuous.  As highlighted by Christensen’s work on disruptive innovation, an existing industry can be disrupted by the introduction of a new technology that offers different performance attributes from those established.  In other words, the diffusion of one thing can be substituted for another.  For example, hydraulic excavators disrupted cable excavators and their associated suppliers.  However, the same process could also be continuous.  These innovations could be improving and sustaining e.g. a better car, a better phone, a better computer or a more efficient means of steel manufacturing such as the Bessemer convertor.

It seemed that organisations were competing in an ecosystem with others and the desire to differentiate was driving the creation of innovations that diffuse, forcing all companies to adapt (the Red Queen effect, chapter 3).  The innovations themselves appear somewhat randomly, often by fortuitous accident, and whilst some innovations disrupt, others sustain.  Furthermore, the innovations themselves might be novel or represent an incremental improvement on some past innovation e.g. a better car rather than the first car.  The process of diffusion itself is complex, varying in rate and containing pitfalls such as the chasm.  Given this complexity, how could I hope to describe a process of evolution?

Given such an environment, how could any CEO be anything but bewildered and lost by the apparent randomness of competition?  Where will the next great innovation appear?  Will it be sustaining or a disruptive change?  How quickly will it spread?  Will it not spread?  Will it jump the chasm?  Will it impact me?  Should we be early adopters or not?  Is it any wonder that our ability to predict the future is often lamentable?  Is it any surprise that given the fluid nature of our environment we are reduced to hoping to keep afloat by catching the latest wave of change?  Is it really that shocking that in practice we’re forced to copy what others are doing, to go with the market as we all swarm around new concepts?  

All of these thoughts were swirling through my mind as I looked at that evolution axis of genesis, custom, product and commodity.  It seemed so simple.  I had obviously been seduced by this.  I could find no evidence to support this pattern.  I had probably wasted months trying to solve an impossible problem.

That first question

The standard model I’ve outlined contains the random appearance of innovation, different rates of diffusion and both sustaining and disruptive change.  Whilst it sounds simple, it is hopelessly complex in practice.  It was probably a day or two after I had decided that this was probably a lost cause that I thought of the first question that I needed to ask.  What actually constitutes an innovation?

Whether something is an innovation or not partially depends upon the perspective of the observer. Hence, the Bessemer convertor was a process improvement to iron and steel manufacturers but a product innovation to suppliers of equipment for those industries.  Equally, the modern day provision of computing resources through large utility suppliers (such as Amazon’s EC2 service) is certainly a new business model for those suppliers but for its consumers then the use of computing resources in business is hardly new. 

Jan Fagerberg defined innovation as the “first attempt to put an idea into practice”.  Unfortunately, this equally applies to something novel, a feature improvement or a new business model for an existing activity.  However, is a feature improvement to a phone really the same as the creation of the first phone?  Is this equivalent to the introduction of a rental service for phones?  They are all called innovations but are they really the same or are we using one word to describe many different types of change?  Maybe this was the confusion?  I was looking at the diffusion of innovations but maybe we were talking about the diffusion of different types of innovation?

Somehow, in a mad frenzy of writing on whiteboards, I connected three pieces of information to challenge my view of random and equivalent innovation impacting across society.  Rogers and Kincaid in “Towards a new Paradigm of Research” published the first piece of the puzzle in 1981.  When examining continuous and sustaining technological innovation, they noted that the rate of improvement tends to be slow and then accelerates until reaching a plateau of a more mature and slow improving stage.  Each improved version increasingly adds definition, eventually providing something that can be considered feature complete, mature and generally well understood. The insight here is that the maturing of a technology requires multiple improved versions with each reducing uncertainty about the technology.

The second piece of the puzzle was published in 1996 by Paul Strassmann, a great and often under-acknowledged economist.  In “The value of computers, information & knowledge”, Strassmann showed that within business there was no correlation between IT spending and the value it created for the business.  The work demonstrated that IT wasn’t one thing but instead consisted of many activities, some of which appeared to create value whilst others did not.  The insight here is that organisations consist of multiple components, some of which create value whilst others do not.

The third piece was a Harvard Business Review paper, “IT Doesn’t Matter”, published by Nicholas Carr in 2003.  This paper discussed the IT industry and demonstrated that as certain aspects of IT became widespread and common they had diminishing differential value and became more of a cost of doing business.

In isolation the three pieces were interesting to note but in combination they implied something remarkable but obvious in hindsight about how activities (i.e. the things we do) change.

Activities evolved through multiple improving versions.

Activities were not uniform; any component could contain multiple components which were at different stages of evolution i.e. there was no “IT” but a mass of components that made “IT”.

The characteristics of activities changed as they evolved; as they became more commonplace they had diminishing differential value, became more of a cost of doing business and more certain. The improving versions of the same activity would have different characteristics. 

These were the same economic patterns that I had noticed (chapter 3) and so it seemed to imply independent corroboration that evolution was occurring, but somehow I just couldn’t get the pattern to fit with diffusion.  I felt that I must be wrong.  Then I started to realise that maybe these two processes were related but separate.

Maybe I had just got stuck on trying to tie the diffusion of innovation to evolution?  What if instead evolution consisted of multiple waves of diffusion e.g. the diffusion of the first innovation of the act followed by waves of diffusion of improving innovations?  Maybe those waves were different?  An examination of historical records clearly showed that technology tends to mature through multiple waves of diffusion of ever-improved versions.  The pattern of evolution was there and I had collected a wealth of data over the years which suggested it.  I just had to break out of the shackles of diffusion.

Uncertainty is the key

I started to think in terms of multiple diffusion curves.  Let us take an activity, which we shall call A – it could be television or the telephone, it doesn’t matter.  Now let us assume this activity will evolve through several versions – A1, A2, A3, A4 and A5.  Each version might be disruptive or sustaining to the previous and each will diffuse on its own diffusion curve – see figure 66.

Figure 66 — Multiple waves of diffusion.


Whilst each version of the act diffuses to 100% adoption of its market, those applicable markets could be different sizes for different versions. Hence the market for the first phones might not be the same market for later, more evolved phones. The time for diffusion of each version could also be different.
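For those who like to see the structure spelt out, the following Python sketch (with invented starting points, timescales and market sizes) mimics figure 66: each version A1 to A5 diffuses on its own logistic S-curve through its own applicable market.

```python
# Illustrative sketch of multiple waves of diffusion -- all parameters invented.
import math

def adoption(t, start, duration, market):
    """Logistic diffusion: units adopted within the applicable market at time t."""
    midpoint = start + duration / 2
    rate = 10 / duration          # steepness scaled to the diffusion duration
    return market / (1 + math.exp(-rate * (t - midpoint)))

# (start year, years to diffuse, applicable market size) for each version.
versions = {
    "A1": (0,  10, 1_000),
    "A2": (6,  8,  10_000),
    "A3": (11, 6,  100_000),
    "A4": (15, 5,  1_000_000),
    "A5": (18, 4,  10_000_000),
}

for year in range(0, 31, 5):
    row = {name: round(adoption(year, *params)) for name, params in versions.items()}
    print(year, row)
```

Note how the later, more evolved versions diffuse faster and into much larger markets, which is the point the figure is making.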

I had been assuming that by looking at adoption in a population we could determine how evolved something was because of how “ubiquitous” it had become.  This idea had come from the notion that a commodity is commonplace.  But what if the applicable markets were fundamentally different?  Maybe ubiquity for gold bars meant 2% of the total population owning them, whereas ubiquity for smartphones meant everyone owning three of them?  What I needed to measure was adoption within its ‘ubiquitous’ market but, unless the thing had become a commodity, I wouldn’t know whether a more improved version with a larger market wasn’t just around the corner.

In the above figure, I have drawn a connection between evolution (moving from A1 to A5) and multiple waves of diffusion.  As something evolved it would become more “ubiquitous” (the market for A5 is larger than that for A1) and I also knew that it would become more feature complete, better understood and more mature.  I somehow had to connect this all together.

By pure serendipity, it was just at this time that I stumbled upon the Stacey Matrix.  This is a mechanism for classifying types of agreement between groups and the appropriate mechanisms of management.  At one extreme, you had groups that were far from agreement with high levels of uncertainty; this was the domain of chaos and anarchy.  At the other extreme, you had groups that were close to agreement with high degrees of certainty; this was the domain of the simple.  What struck me with the Stacey Matrix (see figure 67) was the polar opposite nature of the domains and how the language was not dissimilar to the apparent process of evolution.

Figure 67 — Brenda Zimmerman’s simplified version of the Stacey Matrix


With evolution, at one extreme we had the more chaotic world of the novel and new with high degrees of uncertainty and unpredictability over what would be created.  At the other extreme, more evolved activities were well understood.  The matrix mimicked the same sort of conversations that I was having, where people could agree that a commodity was a commodity but disagreed vehemently on what stage of evolution less evolved components were in.  It occurred to me that maybe these sorts of discussions and arguments would be occurring in journals and that somehow I might be able to use this to get an idea of how evolved something was.

I headed back to the Library.  In 2007, I spent a great deal of time trying to determine a measure of certainty for an act.  It was by looking in detail at journals and papers on various activities that I noted how the publications changed.  In examining a core set of activities and 9,221 related articles, I was able to categorise the articles into four main stages – see figure 68.

Figure 68 — Changing nature of publications


To begin with, articles would discuss the wonder of the thing e.g. the wonder of radio.  This would then be replaced with articles discussing building, construction and awareness e.g. how to build your own radio. These would then be replaced by articles discussing operation, maintenance and feature differentiation e.g. which radio is best.  Finally, it would become dominated by use e.g. the Radio Times and what programs to listen to.  Using stage II & III publications I developed a certainty scale.  On the figure above, I’ve also marked the point of stability, the moment when publications changed from being dominated by operations, maintenance and feature differentiation to being dominated by use.
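Mechanically, the categorisation works something like the following toy Python sketch.  The keyword lists and article titles are invented for illustration (the real exercise meant reviewing thousands of publications by hand), but the idea is the same: bucket each article into one of the four stages and look for the year in which stage IV (use) begins to dominate stages II and III.

```python
# Toy sketch of bucketing publications into the four stages -- keywords and
# article titles are invented for illustration only.
STAGE_KEYWORDS = {
    "I: wonder":      ["wonder", "marvel", "future of"],
    "II: building":   ["how to build", "construction", "awareness"],
    "III: operation": ["maintenance", "operation", "which", "best"],
    "IV: use":        ["programme guide", "what to listen"],
}

def classify(title: str) -> str:
    """Return the first stage whose keywords appear in the title."""
    lowered = title.lower()
    for stage, keywords in STAGE_KEYWORDS.items():
        if any(keyword in lowered for keyword in keywords):
            return stage
    return "unclassified"

# Hypothetical publications about radio, grouped by year.
articles_by_year = {
    1900: ["The wonder of the wireless"],
    1920: ["How to build your own radio set"],
    1935: ["Which radio is best for your home", "Radio maintenance tips"],
    1950: ["This week's programme guide", "What to listen to tonight"],
}

for year, titles in articles_by_year.items():
    counts = {}
    for title in titles:
        stage = classify(title)
        counts[stage] = counts.get(stage, 0) + 1
    print(year, counts)

# The point of stability is the first year in which stage IV articles
# dominate stages II & III.
```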

I felt I was getting close to something but I still couldn’t quite describe how evolution worked or why.  I’m not sure what possessed me to do this, but I started to look at that point of stability and to determine how diffused something was in the marketplace (see figure 69).  I did this for radios, for TVs and for all sorts of other common appliances.  I defined this marketplace as the point of ubiquity i.e. the applicable market at the point at which something had become stable.

Figure 69 — The point of ubiquity


All of these markets were of different sizes with different percentages of adoption, and there was no obvious connection.  By pure chance, whilst experimenting with this, I took a wild stab in the dark and decided to plot ubiquity versus certainty for a range of activities.

For each activity, I determined the point of stability by looking at the publication changes.  I defined this point as 100% certain on my certainty scale.  I could then trace back through time and determine how certain that act was relative to that point.  I also looked up the applicable market at the time that the act reached the point of stability and defined this market as the point of ubiquity i.e. 100% ubiquitous.  I could then trace back through history to determine how ubiquitous an act was relative to that point.  Then, for a range of activities, I traced back both ubiquity and certainty.  Finally, I plotted ubiquity versus certainty and the result is provided in figure 70.
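In code, the normalisation step looks something like this sketch (the history values are invented): find the point of stability, treat the market size and certainty measure at that point as 100%, and express every earlier observation as a fraction of those values.

```python
# Illustrative normalisation for a single activity -- values are invented.
# year -> (applicable market size, certainty score)
history = {
    1880: (1_000,     5),
    1900: (50_000,    25),
    1920: (400_000,   60),
    1940: (2_000_000, 100),   # the act reaches its point of stability here
}

stability_year = max(history)                       # latest year in the record
market_at_stability, certainty_at_stability = history[stability_year]

# Each point becomes (ubiquity, certainty) as a fraction of the stable values.
trajectory = [
    (market / market_at_stability, certainty / certainty_at_stability)
    for market, certainty in history.values()
]
print(trajectory)   # points along the ubiquity versus certainty path
```

Repeating this for every activity puts them all on the same 0 to 1 axes, which is what allowed the common path to show up.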

Figure 70 — Ubiquity versus Certainty.


I spent several hours staring at the result trying to understand what it meant.  It suddenly dawned on me that every activity seemed to be following the same path; there was a strong correlation here.  I then went back and overlaid those different stages of publication onto the graph and extended both ends, as activities emerge before people start writing about them and continue well after becoming a commodity.  I then gave each stage a generic term e.g. product for stage III and commodity for stage IV.  The result was the evolution curve in figure 71, which I published in various, often simpler, guises (e.g. Butler Group Review, Mar 2008, Why Nothing is Simple in Management) and talked about at numerous conferences.

Figure 71 — The evolution curve.


Evolution begins with the genesis of an activity e.g. the first battery, the first phone, the first television or the first digital computer such as the Z3 in 1943.  If it is successful, then it will diffuse in its market.  If it is useful then others will copy and custom-built examples of that activity will appear (e.g. systems such as LEO – Lyons Electronic Office).  These will also diffuse in what tends to be a larger market.  As the activity spreads through custom-built systems, pressure mounts for adoption and products are produced.  These products themselves diffuse through a wider market, often with constant improvements or disruptive changes, each diffusing in turn and growing the market.  As the act becomes more widespread and well understood, alternative models of provision appear such as rental services.  Eventually the act becomes so widespread and well defined that it becomes “ubiquitous”, well understood and more of a commodity.  It will tend to be standardised with little feature differentiation between offerings.  At this stage of volume operations, utility services are likely to have appeared, especially if the act is suitable for delivery by such a mechanism.  There is no time or adoption axis on the evolution curve, only ubiquity (to its market) versus certainty.  It may take ten years or a hundred for something to make its journey from genesis to commodity.  It may become a commodity when 2% of the population own it or when everyone has three.  However, regardless of this, I know the path it will take.

What causes things to take that journey turns out to be simple competition, represented in two forms:

  • Demand competition: consumer desire for anything that is useful, makes a difference and creates an advantage is the driver for ubiquity (i.e. anything useful spreads).
  • Supply competition: the desire of providers to supply an activity to consumers is the driver for feature completeness and improvement of an activity.  For example, an average car today includes as standard a wide variety of what were once novel feature differentiations such as electric windows, air bags, an alarm system, a stereo, seat belts, roll bars, windscreen wipers etc.  It’s the desire to differentiate and to make things better, combined with the acts of competition and copying, that drives things to become more uniform and complete.

It is important not to confuse evolution with diffusion even though both patterns have an S-curve shape.  The pattern of diffusion is one of adoption of a specific change over time, whether that change is something novel, a feature differentiation or a particular business model.  The first telephone diffused, a better method of producing glass known as the Pilkington float glass method diffused, new and improved washing powders diffused and a utility model for the provision of electricity diffused, but these examples are all different in nature.  The pattern of evolution deals with this changing nature.  It does not concern itself with the adoption of a specific change (i.e. a better computer) but instead shows how the activity itself evolved e.g. from custom built to more of a product.

Diffusion and evolution are of course connected.  Evolution of an act can consist of thousands of diffusion curves of improving versions of that act, each with their own chasms.  As an activity evolves, each more evolved version will diffuse from early adopters to laggards through its own applicable market.  That market can and does grow as the act becomes more accessible to a larger audience.  For example, with the first computing products you had early adopters and laggards within a relatively small market.  As the products improved through constant incremental changes the applicable market grew significantly and later versions diffused through a much broader public market. Today, computing infrastructure is “ubiquitous” which is why we have utility services like Amazon EC2.

It’s important to reiterate that unlike diffusion, evolution cannot be determined over time but instead over the ubiquity of the act versus its certainty i.e. how complete, well understood and fit for purpose it is.  Whilst we can use the evolution curve to say that a specific product will evolve over an undetermined amount of time to become more of a commodity, we cannot say precisely when this will happen, only what will happen.  Also, the evolution curve can only be precisely determined for the past i.e. the act needs to become stable (i.e. reach the point of certainty) for us to determine its point of ubiquity and therefore calculate the path.  This means we cannot accurately determine where something is on the evolution curve until it has become a commodity, at which point we can determine where it was in the past.  Hence, we are forced to rely on a cheat sheet based upon changing characteristics (chapter 2) along with weak signal analysis to estimate where something is.  There is, unfortunately, no crystal ball to the future and we have to embrace a degree of uncertainty until the act reaches the point of stability and becomes certain.

As evolution deals with the change to the act itself, it does not care whether some specific change is incremental or disruptive to the past.  A company may produce a better product (e.g. a better cable excavator) or instead a product may be substituted by another (e.g. cable vs. hydraulic excavators) but the act of “digging holes” doesn’t change.  Instead we simply have a more evolved way of doing this.  Today, the evolution of computing infrastructure from product to utility is disruptive for the past product industry but the act of consuming computing infrastructure isn’t new, it is simply more evolved. 

Every activity I have examined throughout history follows this path.
  • The genesis of the humble screw can be traced back to Archytas of Tarentum (400 BC).  The principle was later refined by Archimedes and also used to construct devices to raise water.  Over the next two thousand years most screws (and any associated bolts) were cut by hand, however demand for screw threads and fasteners created increasing pressure for a more industrialised process.  J and W Wyatt had patented such a concept in 1760 and Jesse Ramsden in 1770 introduced the first form of screw-cutting lathe.  However, without a practical means of achieving industrialisation and with no standards, the industry continued largely as it was.  Maudslay then introduced the first industrially practical screw-cutting lathe in 1800, combining elements such as the slide rest, change gears and lead-screw to achieve the effect.  However, whilst screws and bolts could now be manufactured with interchangeable components, the lack of any standards thwarted general interchangeability.  In 1841, Joseph Whitworth collected a large number of samples from British manufacturers and proposed a set of standards including the angle of thread and threads per inch.  The proposals became standard practice in 1860 and a highly standardised and industrialised sector developed that we recognise today.
  • The history of electrical power generation can be traced from its genesis with the Parthian battery (around 200AD) to custom-built examples of generators such as the Hippolyte Pixii (1832) to the first products such as Siemens Generators (1866) to Westinghouse’s utility provision of AC electricity (1886) and the subsequent standardisation of electricity provision from the introduction of the first standard plugs and sockets to standards for transmission and the formation of national grids (UK National Grid, 1926).
  • The history of modern computing infrastructure can be traced from its genesis with the Z3 computer (1943) to custom built examples such as LEO or Lyons Electronic Office (1949) to the first products such as IBM 650 (1953) to rental services such as Tymshare (1964) to commodity provision of computing infrastructure and more recently utility provision with Amazon EC2 (2006).

It’s also worth noting the hockey stick effect of the graph.  What happens is that first a novel activity appears but it evolves mainly through understanding rather than rapidly spreading.  As our understanding of the activity increases, we reach a tipping point where the act rapidly spreads through multiple waves of custom built examples and then products.  As the act becomes widespread, our understanding of it increases until it becomes embedded in our social systems and in many cases almost invisible.  We no longer consider how it is constructed; it is almost a given and can in many cases be buried in higher order systems as a component e.g. the nut and bolt hidden in the machine or the car or the toaster.

For interest, this hockey stick pattern is similar to that found by Boisot, Canals and MacMillan in their simulation of I-Space, an agent-based approach to modelling knowledge flows.  Their work looked at modelling how knowledge spreads through economic and social systems by examining the interactions of agents (i.e. individuals).  One of the things they demonstrated confirmed a previous expectation that knowledge is first abstracted and codified before it rapidly diffuses (see figure 72).  The same pattern, where our understanding and certainty over an activity first increases (i.e. it is abstracted and codified) before the activity rapidly becomes widespread, occurs in the evolution curve.

Figure 72 — Simulation of I-Space




The pattern of evolution that I used as the x-axis of my map had some sense of validity in history.  I could with some confidence describe how things would evolve even though I couldn’t say precisely when.

Looking back, I could now see that the term “innovation” does appear to be used to describe changes at different stages of evolution.  Some things described as “innovations” are genuinely novel, new and hence uncertain activities (i.e. genesis).  By virtue of being uncertain, the appearance of these is almost impossible to predict and you cannot know with certainty what will appear.  However, many things described as “innovations” are simply improvements to an existing activity and part of a visible process of evolution that is driven by competition.  Whilst you cannot predict when these changes will occur, as evolution cannot be plotted over time, you can predict what will happen.  This notion is contrary to the more random perception of “innovation” i.e. Amazon EC2, a utility computing infrastructure service (commonly known as cloud computing), wasn’t a random accident but instead it was inevitable that some company would provide utility computing infrastructure.

Far from being like navigators in a storm constantly coping with the maelstrom around us, it appears that the sea has structure.  Mapping seemed to have merit and I had a purpose, to teach everyone who would listen.  Alas, it was now mid 2008 and I was fast running out of cash. I would have to turn mapping to a profit one way or another.

----
Next Chapter in Series: Keeping the wolves at bay
GitBook link [to be published soon]