Wednesday, September 07, 2016

Keeping the wolves at bay

Chapter 8

[Draft version. The completed version is on Medium]

To keep funding my research, I took a few more paid gigs, which basically meant becoming a gun for hire.  Cloud computing was colliding with the technology scene and there was plenty of confusion about it.  This meant a constant stream of conferences - including some that actually paid - along with plenty of opportunity for piecemeal work.  This wild west also resulted in some fairly shady practices and exploitation.  I tried not to cause harm and instead gave my time to community events like Cloud Camp London.  Hang on, don't you either decide to do harm or not?  Where's the ambiguity?  My problem was simplicity.

Making it simple

One of the issues with mapping is that people found it complex.  This is not really surprising because you're exposing a world that people are unfamiliar with.  It takes time and practice.  However, confusion over cloud computing had also created plenty of opportunity for a new way of thinking.  Alas, piling complexity onto an already confused person doesn't help them, hence I looked for ways to simplify, to make it more palatable and more familiar.  I started spot painting.

Spot painting companies

To help people understand maps, I produced mini cheat sheets (see figure 73) for each of the stages of evolution and then, rather than produce a map, I would paint on whatever existing diagrams they had.  I'll use the following figures from a Butler Group conference in 2008 to explain.  I've taken the liberty of updating the older terms of innovation & bespoke to the modern terms of genesis and custom built.

Figure 73 — A cheat sheet for commodity


The cheat sheet had a number of features.  It had the evolution scale (point 1) to give you an idea of what we were talking about (e.g. commodity) and how it fitted into evolution.  It had a basic list of primary characteristics (point 2) and a benefit curve (point 3).  This benefit curve reflected the changing differential benefit of an activity to an organisation e.g. research was an overall cost, commodity provided little differentiation and the differential value of products declined as they became more widespread.  I then used these mini cheat sheets to annotate existing business process diagrams – see figure 74.

Figure 74 — Annotated diagram



These diagrams have certain advantages and disadvantages over maps.  First, they are more familiar and unthreatening i.e. you can take an existing process diagram and colour it in.  They certainly (in combination with the cheat sheets) help you question how you're building something.  But, as they have no anchor, there is no position relative to a user, which means people don't tend to focus on the user.  Movement also cannot be clearly seen but has to be implied through the changing of colours.  These diagrams enabled me to introduce the concepts of evolution but, without position and movement, they were unhelpful for learning economic patterns and forms of gameplay.  However, the simplicity made them moderately popular with a few clients.

Taking it too far

Unfortunately, I didn't stop there.  The next diagram I'm a bit loath to show.  I wasn't trying to cause harm and I hope it hasn't.  In order to teach mapping, I simplified the evolution curve and focused on the extremes – see figure 75 from the Butler Group ADLM conference 2008.

Figure 75 — Polar opposite IT


The idea conveyed was one of IT consisting of polar opposite extremes, which is perfectly reasonable (Salamon & Storey, Innovation Paradox, 2002).  These extremes of the uncharted (chaotic) and industrialised (linear) domains exist.  But there's also the transition in the middle, which has different characteristics.  The graph certainly helped introduce concepts of evolution such as why one size doesn't fit all, and it also proved quite popular due to its simplicity.  The danger occurs if you take it too literally and start organising in this manner.  In my defence, I did publish articles at the time that emphasised that you needed to deal with the transition, though it's fair to say I could have been much clearer, often reducing entire concepts to single lines (see Exhibit 1).

Exhibit 1 : Lack of clarity crimes committed by the Author.
Butler Group Review, Dec 2007, The Sum of all Fears

While the use of utility services removes obstacles to new IT innovations, it may create an organisational challenge for IT departments. Innovations are dynamic problems and require dynamic methodologies such as agile development and a more worth-focused VC-like approach to financing. By contrast, the use of utility services requires a focus on cost, standards, and static methodologies. Unless you intend to stop innovating and give up on the long-term source of profit for any organisation, then the IT department must find a way to manage both of these extremes. As commoditisation is ongoing, you’ll also need to continuously deal with the transition between these two extremes. To make matters worse, as “X as a service” grows, barriers to participation in many industries will reduce, causing greater competition and accelerating the rate of new innovations into the marketplace. 

Overall, the pattern suggests reductions in non-strategic costs, more competition for information businesses (including competition from consumers), a faster rate of release in the marketplace, and increasing pressure on IT as it deals with the conflicting demands of two polar opposites in focus.

The problem with simple

If I believed these simple versions to be unhelpful then why did I use them?  It's a question of balance and a trade-off.  The problem is Ashby's law of requisite variety.  Basically, the law states that in a stable system the number of states in its control mechanism must be greater than or equal to the number of states in the system being controlled i.e. the controlling mechanism must represent the complexity of what is being controlled.  Organisations are very complex things and whilst mapping provides you a window onto this, you need to have a management capability able to cope with that complexity.

There is unfortunately another solution to Ashby's law.  Rather than cope with the complexity, you pretend that what is being managed is simple.  We tend to like things such as 2x2 diagrams not because they represent reality but because they obscure it and hence are simple to understand.  We trade off our ability to learn and to understand the environment for an illusion of simplicity and easy manageability.  This is why we use one size fits all approaches or apply KPIs (key performance indicators) across an entire organisation even when they are not appropriate.  When it comes to the KISS principle, do remember that keeping it simple can make us act stupidly.

Eventually, I was faced with a choice.  Do I keep it simple, thereby making it more accessible, and just accept the flaws, or do I take a slower path and try to push organisations towards a higher understanding of position and movement?  This opened the door to another compromise.  I could also do the heavy lifting for others!  I could just give them the result!  However, this would make them dependent upon me, the normal consultant path.  My purpose was to free people from the shackles of consultants and not to chain them up even more.  This compromise was out of the question.  I'd like to think that I stood my ground here but with almost no-one mapping, bills mounting and clients taking an interest in the simplified concepts, it's fair to say that I was starting to wobble.

Finding my mojo

My salvation was a piece of paid work that I'm particularly fond of.  It concerned the issue of efficiency versus effectiveness and to have any hope of explaining it we need to introduce three concepts – worth based development, pricing granularity and flow.

Worth based development

In 2003, the company that I ran built and operated small systems for others.  There were no big systems; these were more of the £100k - £2M scale, covering a few million users.  Our clients usually wanted to write a detailed specification of exactly what they needed to ensure we delivered.  That doesn't sound too bad but even at this small scale some of the components in these projects would be in the uncharted space and hence no-one knew exactly what was wanted.  Unfortunately, back then, I didn't have the language to explain this.  Hence we built and operated the systems and inevitably we had some tension over change control and arguments over what was in or out of a particular contract.

During one of these discussions, I pointed out to the client that we were sitting around a table arguing over what was in or out of a piece of paper but not one of us was talking about what the users of the system needed.  The contract wasn't really the customer here; the client's end users were.  We needed to change this discussion and focus on the end user.  I suggested that we should create a metric of value based upon the end user, something we could both work towards.  The idea fell on deaf ears as the client was pre-occupied with the contract, but at least the seed was planted.  It wasn't long after this that another project provided an opportunity to test the idea.  The client gave me a specification and asked how much it would cost to build a system to do this.  I replied – "How does free sound?"

They were a bit shocked but then I added "However, we will have to determine a measure of value or worth and I'll get paid on that".  There was a bit of um and ah but eventually we agreed to try out this method of worth based development.  In this case, the goal of the system was to provide leads for an expensive range of large format printers (LFPs).  The client wanted more leads.  Their potential end users wanted a way of finding out more about these printers along with a way of testing them.  I could build something which would marry the two different sets of needs.  But rather than the client paying up front and taking all the risk, I would build it for free and take a fee on every new lead created.

We (as in the client and my company) were no longer focused on what was in or out of a contract but on the single task of creating more leads.  We both had an incentive for this.  I also had a new incentive for cost effectiveness because the more efficient I made the system, the more profit I retained.  We agreed and so I built and operated a system which enabled people to upload an image, test it on a large format printer and get delivery of their print plus information on the kit's performance plus a sales call.  The system soared.

In three months we had generated more leads than the client normally had in a year and this was accelerating.  It was stunning.  The client’s revenue was rocketing but so was my revenue as the system was based upon a metric of leads.  The more success they had, the more success I had.  It was a win-win situation.  Alas, this actually created two problems and one headache.

The problems were caused by the client being unprepared for this level of interest and by internal budgeting systems that weren't designed to cope with such variable success.  What has budgeting got to do with this?  Well, the client's success was more leads, which translated into more revenue.  This was good from a budgeting point of view.  But the more success the client had, the more my fee increased as it was also based on leads.  This was bad from a budgeting point of view.  The system became so successful that it exceeded an internal budget figure the client had set for costs and this caused an internal conflict with demands to switch off the system until new budget was allocated (a very lengthy process).  Switch off a revenue generating system because it's doing better than expected and has passed some arbitrary budget figure?  This is what happens when an inflexible one size fits all approach hits reality.

Before you go "this is daft", actually it's not.  Over time companies tend to build up a body of work and processes – the corporate corpus – designed to stop past failure.  It's all done with reasonable intentions: the desire to spend money effectively and the desire to know that resources are being well used.  That mass of good intentions is often the cause of many problems when you try to change the system.  The corpus can become a zombie, killing off innovation wherever it is found.  I had attempted to change the system by introducing a worth based approach and I should have known that this would cause tensions with the corpus.  I learned that lesson quickly.

Today, these worth based techniques are normally called "outcome" based or something of that ilk.  I've used them many times over the last decade; in fact, I prefer them.  Whilst they tend to solve the issue of an excessive focus on contracts, they have invariably hit other roadblocks such as a client not being able to describe the value or purpose of the system, or even conflict and politics within internal processes.  You need to be aware of this and to mitigate it.

Those were the problems – lack of preparation, the corporate corpus – but the headache that worth based approaches caused was always mine.  There was some financial risk associated with these projects and some investment needed.  I had to be concerned with not only the development but also the operations.  This included lots of capital intensive investment along with costs that either weren't truly variable or that we could only guesstimate at.  To minimise the risk we shared data centres and other common components, but in a large heterogeneous application environment this just complicates the allocation of costs.  How much a user visiting our application would cost us in terms of compute, power and data centre usage was an incredibly tough question.

In my risk models, we also had no clear way of determining operational costs as the system scaled.  We had to make lots of estimates on stepwise changes and on how much compute resource would be used by an application we hadn't yet built.  The financial model was more akin to art than any form of science.  Some of that uncertainty certainly ended up as "padding" in the metric e.g. the price per lead that I would charge.  Other areas had better cost models.  In the LFP example above, distribution and even printing were more variable (i.e. price per print or price per package) because we had experience of running an online photo and printing service.  This brings me to the next topic of pricing granularity.

Pricing granularity

With a worth based approach, I have a strong incentive to:

  • reduce the operational cost of the project because the cheaper it is, the more profit I make.
  • provide reliability because if the system goes down, I'm not making any money.
  • ensure the system maximises the value metric, which in the LFP case was "generating leads".

But I also had questions on where to invest.  In the case of LFP, it was doing very well and so I decided to invest an additional $100K.  But where do I best put the money?  Improving the site reliability?  Reducing the operational cost of the application through better code?  Maximising the number of users through marketing?  Improving the conversion of users to leads?  Which choice brings me the better return?  This is particularly tough to answer if you can't effectively determine the operational cost of an application beyond hand waving or if other data is also guessed at.

One of the huge benefits of Zimki (our platform as a service play) was not only its serverless nature and how you could simply write code through an online IDE but also that its pricing granularity was down to the function.  Any application is nothing more than a high level function that calls other functions.  If I developed a function in Zimki, then whenever that function was called I could see exactly how much it had cost me.  I was charged on the network, storage and compute resources used by that function.  This was quite a revelation.  It changed behaviour significantly because suddenly, in the sea of code that is my application, I could find the individual functions that disproportionately cost me more.  I'll talk more about this change of practice in the next chapter but for now, just being aware of it is enough.

So, for a developer on Zimki, I had price granularity down to the running of a single function.  As far as I know this was unparalleled in the world of IT and we didn't see its like again until AWS Lambda.  Now, obviously I was also the provider of Zimki and behind the scenes sat a complex array of basket of goods concepts and all manner of financial instruments to be able to provide those cost figures.  But this was abstracted away from the developer.  All they saw was a cost every time their function ran, no matter how much it scaled.  There was no capital investment and this turned the operational cost of an application into a manageable variable.
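To make that idea concrete, here is a minimal sketch in Python (purely illustrative – the function names, resource rates and metering records are invented, and this is not the actual Zimki billing engine) of how per-function metering turns the cost of an application into a simple sum over invocations, with the disproportionately expensive functions floating to the top:

from collections import defaultdict

# Illustrative per-unit utility rates (entirely made up for this sketch).
RATES = {"compute_ms": 0.000002, "storage_kb": 0.000001, "network_kb": 0.0000015}

def invocation_cost(usage):
    # Cost of a single function call given the resources it consumed.
    return sum(amount * RATES[resource] for resource, amount in usage.items())

# Hypothetical metering records: (function name, resources used by one call).
calls = [
    ("microsite.render", {"compute_ms": 12, "storage_kb": 0, "network_kb": 40}),
    ("testing_app.upload_image", {"compute_ms": 250, "storage_kb": 4096, "network_kb": 5000}),
    ("microsite.render", {"compute_ms": 11, "storage_kb": 0, "network_kb": 38}),
]

cost_per_function = defaultdict(float)
for name, usage in calls:
    cost_per_function[name] += invocation_cost(usage)

# The functions that cost the most appear first.
for name, cost in sorted(cost_per_function.items(), key=lambda item: -item[1]):
    print(f"{name}: £{cost:.6f}")

The point is simply that once every call carries a price, the expensive corners in a sea of code become visible without any capital outlay or guesswork.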

Flow

What we're now going to do is combine the ideas of worth based (outcome) development and pricing granularity to introduce an idea known as flow.  In order to do this, we're also going to have to use scope and the idea that a map can have multiple users, as introduced in chapter 7.  After this, I'll show you how flow was used to question efficiency vs effectiveness and why those simplified maps (e.g. the spot diagrams) are ok but not ideal.

Revisiting LFP

To begin with, we're going to revisit the LFP project but with a map and the knowledge of what a utility platform can bring.  In figure 76, I've created a map of the worth based LFP project.  Back when we were working on the project, I hadn't developed the mapping concept fully and so this is post event analysis.  I won't mark up points on this map; hopefully you've enough experience now to start reading them.

Figure 76 — Map of the worth based project


The map begins with our client who has a need for more leads and ultimately for companies buying their product.  The conversion from lead to actually purchasing a printer is beyond the scope of this project as that sat within the client's sales organisation; we're focused solely on generating leads.  The other type of user in this map is the consumer who hopefully will buy one of these expensive printers.  They have different needs: they want to find out about the right sort of printer for their commercial operations and to test it before buying something they will use.  At that time, this was all done through onsite or showroom visits or glitzy brochures.  We aimed to provide an online mechanism for the consumer to find out about the printer (a microsite) and to test it (the testing application).

The test would be a high resolution image that the potential customer would upload, which was then printed out on the printer of their choice.  Their poster (this was large format) would be distributed to the potential consumer along with a standard graphical poster (showing the full capabilities) and relevant marketing brochures, and a sales call would be arranged.  Each of the components on the map can be expanded into more detail if we wish e.g. platform needs compute which needs a data centre, but this map is good enough for our purpose.  The platform space was the source of my headaches due to my inability to provide a variable operational cost for an application.  But the platform space was evolving towards more of a utility service – in fact, I was the person causing this.

So, let us look at the map again but move further into the future in which a utility platform has emerged.  I'm going to add some financial indicators onto this map.  See figure 77.

Figure 77 — Finance of the worth based project


From the map, we hope to have visitors to our microsite which will extol the virtues of owning large format printing and hopefully persuade some of these visitors to go and test it out.  The act of turning a visitor into an actual lead requires the user to test a printer.  So we have multiple conversion rates e.g. from microsite to testing application and from visitor to lead.  At the start these will be unknown.  We can guess.

Normally, operating a microsite involves all those hard to calculate costs, but in a utility platform world your application is simply a function running on the platform and I'm charged for use.  The operational cost of my microsite is basically the number of visitors x the average cost of the microsite function.  Remember, an application consists of many functions and users can navigate around it, which means some "wandering" users turn out to be more expensive than others.  But we can cope with that by taking an average for our microsite.

The same applies to my testing application but in this case there will be direct visitors plus converted visitors from the microsite i.e. those we've persuaded of the benefits of LFP and hence encouraged to go and test out a printer.  Every use of the testing application (a function) will incur a cost.  The two function costs (microsite and testing application) could be wildly different depending upon what the applications did and how well the code was written, but at least we had a granular price for every call.

I could now say:
  • We have a number of visitors [V1] to the microsite
  • Each call to the microsite costs on average C1
  • The total cost of the microsite would be V1 x C1
  • Of the visitors V1, a percentage (the conversion rate R1) would visit the testing application
  • Each call to the testing application costs on average C2
  • The total cost of the testing application would be (V1 x R1) x C2
  • Of the (V1 x R1) visitors to the testing application, a percentage would try a printer (the conversion rate R2)
  • Those visitors who tried a printer (V1 x R1 x R2) are leads
  • Each lead incurs a distribution cost (C3) for the brochure and print, which also incurs a printing cost (C4)
  • The total cost of distribution and printing would be (V1 x R1 x R2) x (C3 + C4)
  • Each lead would generate a revenue of P1 (the agreed price)
  • The total revenue generated would be P1 x (V1 x R1 x R2)
  • The total cost of generating that revenue would be
    (V1 x C1)
    + (V1 x R1) x C2
    + (V1 x R1 x R2) x (C3 + C4)
  • Operating profit =
    P1 x (V1 x R1 x R2) - the total cost of generating it

This is like manna from heaven for someone building a business model.  Certainly I had investment in developing the code but with the application being a variable operational cost, I can make a money printing machine if I set the price (P1) right.  No big shocks and no capital investment step.  In fact, any investment can be directed to making that equation more profitable – increasing the conversion rates, reducing the cost of an application function call, getting more visitors etc.  Of course, this wasn't the only path.  The visitor might not come to the microsite but instead go directly to the testing application.  There were a number of potential flows through the map.
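As a rough illustration of the model above, here is a minimal sketch in Python (with entirely invented figures rather than the real LFP numbers) that computes the operating profit and compares two hypothetical uses of the same additional investment – marketing to raise the number of visitors versus improving the conversion rate from test to lead:

def operating_profit(V1, C1, R1, C2, R2, C3, C4, P1):
    # Operating profit for the lead generation flow described above.
    leads = V1 * R1 * R2
    revenue = P1 * leads
    cost = (V1 * C1) + (V1 * R1) * C2 + leads * (C3 + C4)
    return revenue - cost

# Entirely illustrative figures, not the real project numbers.
base = dict(V1=50_000, C1=0.002, R1=0.10, C2=0.01, R2=0.20, C3=8.0, C4=5.0, P1=40.0)
print("baseline profit:", operating_profit(**base))

# Two hypothetical ways of spending the same investment.
more_visitors = {**base, "V1": base["V1"] * 1.3}       # marketing raises V1
better_conversion = {**base, "R2": base["R2"] * 1.3}   # site improvements raise R2
print("more visitors:", operating_profit(**more_visitors))
print("better conversion:", operating_profit(**better_conversion))

In practice you would also subtract the cost of each investment option and remember that the conversion rates start out as guesses, but the shape of the comparison is the point.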
When you look at a map, there can be many forms of flow within it, whether financial or otherwise.  It could be flows of revenue to the provider or flows of risk.  For example, if the utility platform dies due to some catastrophic event then it'll impact my microsite and my testing application, which will impact the consumer needs and stop any lead generation, incurring a financial penalty for me in terms of lost revenue.  Whereas, if I run out of brochures, this impacts distribution and I have a choice on whether to send out the prints now or delay until the brochures are available.  In figure 78, I've given an example of a flow within a map from potential consumer through their need to microsite to testing application to distribution.

Figure 78 — Flow of the worth based project


It's important to note that the interfaces between components in a map represent flows of capital.  Such capital can be physical, financial, information, knowledge, risk, time or social.  It could be anything which we can trade.  Often people talk about the marvellous "free" web services that they're interacting with, which provide storage for photos or online blogs or a "free" encyclopaedia.  These are rarely free.  You're trading something, whether it's information for the service, social capital (e.g. loyalty to a scheme) or even just your time (e.g. to create new entries, to edit the content).  The activity that someone else provides to meet your needs has a price, even if you don't visibly notice it.

By using the concept of flow, it is relatively simple to build a financial model for an entire system. I’ve created the skeleton of such a model for the map above in figure 79.

Figure 79 — Building a financial model


Now, back when we built LFP in 2004, there wasn't a utility platform, I didn't have maps and I didn't have the concept of flow.  Instead my CFO and I had a mass of spreadsheets trying to calculate what the above does and to cope with all the stepwise investments and capital costs needed.  What was a nightmare then is now child's play.

Whenever you build something novel, the game is to favour operational expense over capital expense as much as possible in order to reduce the risk of the system either not being used or growing rapidly.  With any worth based system, when you're gambling on an uncertain outcome, you want to tie costs as closely as possible to the path of revenue generation.  However, there will always be some investment e.g. writing the application, marketing the microsite.  This sort of modelling can help you identify which options you should consider for the future.

The rage today is all about “DevOps” in the technology world, a moniker combining development and operations.  This shift towards utility platforms is starting to occur in earnest and over the coming years the world of finance and engineering will go through an exciting but disruptive marriage.  I do hope someone comes up with a better phrase than “DevFin” or “DevOps 2.0” or “NoOps” though.


Efficiency vs effectiveness

So there I was in 2008 with an understanding of the importance of maps and of the flow of capital within them.  This helped me explain efficiency versus effectiveness in one of my client’s projects that I was quite proud of.  There is unfortunately a problem.  I can’t explain it to you.

Hopefully, you’re discovering that maps are quite a powerful strategic tool.  The information they contain can be very sensitive.  Even in Government projects, the maps are rarely shared outside of Government itself.  I’m certainly not going to break the trust of a private client by exposing their dirty laundry.  This is why many of the maps that I use in this book are slightly distorted and don’t identify the original owner unless I was the one running the show.  I don’t mind you knowing all the mistakes and failings that I’ve made.  If you’re uncomfortable with that and you need the reassurance of “big company X does this with maps, here’s the map” then I suggest you stop reading or find someone else to help you.  Hopefully, you’ve got enough ideas from what I’ve written to justify your time invested so far.  

The next section covers a hypothetical that blends a story related to a modern company, reset into a technology context, to help tell a past story.  Yes, maps are part of storytelling or, as J.R.R. Tolkien said of writing The Lord of the Rings, "I wisely started with a map".

Our story begins, as many do, with a challenge.  The company was expanding and needed to increase its compute resources.  It had created a process flow diagram for this (figure 80), which ran from a request for more compute through to the actions needed to meet that demand.  The process however had a bottleneck.  Once servers were delivered at "goods in" they needed to be modified before being racked.  This was time consuming and sometimes prone to failure.  The company was focused on improving the efficiency of the process flow as it was important for its future and revenue generation.  A proposal was on the table to invest in robotics to automate the modification step.  Whilst the proposal was expensive, the benefits were considerably greater given the future revenue (of a not insignificant scale) that was at risk.

Figure 80 — The process flow




I want you to consider the above for a moment and decide whether a proposal to invest in improving the efficiency of an inefficient process makes sense, particularly when the benefits of the proposal vastly outweigh the costs and your future revenue stream is at risk.

I had met the company in 2008, talked about the concept of evolution and introduced the "spot" diagram.  We agreed to take a look at the proposal.  I've taken those same first steps (see figure 81) and "spotted" the process.  Whilst the ordering and goods in parts of the process were quite industrialised, the modify part was very custom.  Have a look at the figure and see if you notice anything interesting or odd before continuing with this story.

Figure 81 — Spot diagram of process.


What was interesting to note was that the racks were considered custom.  On investigation, it turned out the company had custom built racks.  It had always used custom built racks, it had a friendly company that even made them for it and this was just part of its corporate corpus.  This was a body from a long gone past that still haunted the place.  Even in 2008, racks were standardised. 

The modifications were needed because the standard servers that they bought fitted standard racks.  They didn't fit the custom built racks that had been so lovingly built.  Hence additional plates needed to be added, holes drilled etc.  Let us be clear: on the table was a proposal to invest in robotics in order to customise standard servers to fit into custom built racks which the company was also paying to have built.  Does the proposal still make sense?  Is it a good investment?  Are there alternatives?

Before you shout "use standard racks", let us map this space out, starting from the user need of more compute.  This actually involves two needs: the ordering of a server and the racking of the server once it has been delivered.  Of course racking (i.e. mounting, adding power and cabling) needs the server to be modified.  Both of these chains are connected at the point of goods in – see figure 82.

Figure 82 — Mapping the proposal


Now the question is whether we should just use standard racks.  This obviously moves racks towards the commodity (which is where they should be) and the modification part disappears, though we still have mounting, cabling and power.  It seems a lot better (see figure 83).

Figure 83 — Using standard racks


However, you still have a problem, which is the legacy estate.  Are you going to migrate all the racks?  What about our sunk costs?  How are we going to maintain our existing systems?  There will be a long list of reasons to counter the proposed change.  Before you go "this is daft", remember the budget example and the corporate corpus.  Don't expect to change a system without some resistance.

In this case, despite resistance, we should go a step further.  Computing was becoming a commodity provided by utility services.  We can simplify this whole flow by just adopting those utility services.  We don't need to think about robotic investment or even about converting to standard racks (itself a cost which might be prohibitive).  This entire chunk of the value chain should go, along with any additional costs it might be hiding (see figure 84).

Figure 84 — Hidden costs and removing parts of the value chain


These hidden costs can be vast.  Today, when someone provides me with a proposal for building a private cloud, the first question I ask them is what percentage of the cost is power.  The number of times I've been told "that's another budget" is eye opening.  Power is a major factor in the cost of building such a system.  However, that's another story for later and I'm digressing.

The issue above is that we started with a proposal to invest in robotics based upon improving the efficiency of an existing process.  It sounded reasonable on the surface but if they had taken that route then they would have invested more in maintaining a highly ineffective process.  In all likelihood, it would have exacerbated the problem later because the corporate corpus would have expanded to include it.  If some future person had said "we should get rid of these custom racks" then the response would have been "but we've always done this and we've invested millions in robotics".

The "efficient" thing to do might be investing in robotics but the "effective" thing to do was to get rid of this entire part of the value chain.  It's a bit like the utility platform area: I can either invest in making my infrastructure and platform components more efficient through automation or I can just get rid of that entire part of the value chain.  Often the "efficient" thing to do is not the "effective" thing.  You should be very careful of process efficiency and "improvement".  You should also be aware of the corporate corpus.

The company in question was a manufacturing company, the problem had nothing to do with computing and yes, they were about to spend many millions making a highly ineffective process more efficient.  They didn't; they are alive and doing well.  I also kept the wolves at bay.  That's what I call a "win-win", except obviously for the vendors who lost out.

Before we move on

In the last two chapters, we’ve been sneaking around the strategy cycle again covering mainly purpose and then landscape.  You should be familiar enough with the strategy cycle that I can represent it in a slightly different form just to reinforce the different types of Why (purpose and movement) and the connections between the parts in this book – see figure 85.  In the next section we will focus on climate including common economic patterns and anticipation.  We will keep on looping around this, sometimes diving into interconnections as we go.  Anyway, this will be the last time that I’ll mention that.

Figure 85 — The strategy cycle



We should recap on some of the ideas from this chapter.

Landscape

  • Be careful of simplicity.  There’s a balancing act here caused by Ashby’s Law.  Be aware that you’re often trading your ability to learn for easier management.  In some cases, you can simplify so far that it becomes harmful e.g. one size fits all, group wide KPIs.
  • The map contains flows of capital which are represented by the interfaces.  There are usually multiple flows in a single map.  Such capital can be physical, financial, information, risk, time or social.  It could be anything which we trade.
  • Maps are a means of storytelling.  Despite my dour attitude to storytelling (especially the hand waving kind of verbal garbage often found in strategy), maps are a form of visual storytelling.

Doctrine

  • Focus on the outcome, not the contract.  Worth (outcome) based tools can be useful here but be warned, they can also expose flaws in the understanding of value and become stymied by internal procedures e.g. budgeting processes and inability to cope with variable charging.
  • Use appropriate tools.  When using maps, if I’m looking at financial flows then I’ll often dive into financial modelling when considering multiple investment paths e.g. focus on increasing visitors through marketing or the conversion rate from a microsite.  Equally, if I’ve identified multiple “wheres” that I can attack, then I’ll often dive into business model canvas to compare them.  Don’t be afraid to use multiple tools.  Maps are simply a guide and learning tool.
  • Optimise flow.  Often when you examine flows then you’ll find bottlenecks, inefficiencies and profitless flows.  There will be things that you’re doing that you just don’t need to. Be very careful here to consider not only efficiency but effectiveness.  Try to avoid investing in making an ineffective process more efficient when you need to be questioning why you’re doing something and uncovering hidden costs.  Also, don’t assume that an “obvious” change will be welcomed.  Beware the corporate corpus.
  • When it comes to managing flow, granularity is your friend.  Be prepared though; most companies don't have anywhere near the level of granularity that you'll need and you may even encounter politics when trying to find out.

Gameplay
  • Trading.  Maps are a form of knowledge capital and they tend to have value.  Don’t expect people to just share them with you.  You’ll need to trade or create your own.
----

Next Chapter in Series [to be published soon]
GitBook link [to be published soon]