Thursday, August 25, 2016

Getting started yourself

Chapter 6

I often talk about that wise SVP I met in the Arts hotel of Barcelona. I’ll jump ahead of the story and let you in on a little secret. He didn’t have a clue either. However, I didn’t find this out until 2011. I had always assumed that there was some secret tome but it turns out that much of industry is fighting battles with a poor understanding of the landscape. It’s like generals fighting without maps. It boils everything down to luck and individual heroism. When I discovered this, I started to question the trove of business strategy books in my small library. It was an onerous task, going through it all and categorising individual pieces as doctrine, climatic pattern, context specific or just plain luck.

These days, when someone tells me they know strategy, I ask them for a map of their business. If they can’t show me one, then regardless of their claims I take a skeptical position. They probably don’t know as much as they hope they do. They might even be more dangerous than this, as it’s rarely the unknown that gets you but what we think we know but don’t. This doesn’t mean I think people are daft but instead that understanding your landscape, the context that you’re competing in and having a modicum of situational awareness is not a luxury for strategy, it is at the very core of it. Inspiring vision statements, well trained forces, a strong culture and good technology will not save you if you fail to understand the landscape, the position of forces and their size and capabilities. Colonel Custer is a worthy lesson here and even he had maps which were better than most corporates have today. I've seen billions wasted by companies that have charged into battles they have no hope of winning. I've seen endless SWOT diagrams, stories and other magical thinking used to justify such actions. I've also seen others tear apart industries with ease.

Unfortunately, for those who lack some form of military background, situational awareness is rarely a topic of discussion. It’s often a struggle to make executives appreciate that it might matter, that you have to apply thought to the process and that the secrets of success might not work everywhere. In more recent years, I started to recommend that executives spend a month or two in some form of coaching that involves playing a massive multiplayer online role playing game (MMORPG) such as World of Warcraft (WoW). You might think that this sounds like goofing off from the real work of business but for the uninitiated, there are some basic practices that an MMORPG will teach you. These include: -

The importance of maps. Before launching your team of elves and dwarves into the midst of a battle, the first thing you do is scout out the landscape and improve your situational awareness. Understanding the landscape is critical to strategic play, to learning, to using force multipliers and to not getting spanked i.e. beaten soundly by the opponent. Play the game long enough and you’ll know this by instinct, along with moaning at players who haven’t bothered to look at the map and hence waste both their time and yours with constant questions of “Where is this?” or “How do we get there?”

The importance of aptitude. The biggest battles require a multitude of aptitudes from damage (those who do our spanking usually from range) to tanking (defensive protection) to healing (those tanks get spanked a lot and need healing) to crowd control (those mage sleep spells aren't there for just looking at). The way you play and how the roles are deployed depends upon the scenario. Of course, without situational awareness then you're at a huge disadvantage as you can often turn up with precisely the wrong sort of forces.

The importance of team play. A multitude of roles requires team play which means communication, co-ordination and acting in the interests of the team.

The importance of preparation. There's no point turning up to the fight with an assortment of weapons if you don’t know how to use them. The largest guilds in some of these MMORPGs have many hundreds to thousands of players supported with extensive wikis, communication mechanisms, training and development, tactical game plays, UI engineering, structure, leadership, specialist cells and information systems.

So, how does an MMORPG compare to business? In general, we don't have maps. Most companies suffer from poor situational awareness and are caught out by predictable changes. The most telling factor here is that business strategy is normally a tyranny of action - how, what and when - as opposed to awareness - where and why. On the whole, we do a bit better at recognising that multiple aptitudes are needed. However, we often fall down by not considering attitude, the context and isolation (operation in silos). We certainly try when it comes to team play, often having team building exercises which can be a bit hit or miss.

We also tend to complain about communication despite the plethora of tools available. This can usually be traced back to poor situational awareness - if we don't know the landscape and fail to create a plan of attack based upon it, replacing it instead with vague notions of vision or a story, then it becomes difficult to communicate how things are actually going. It’s far better that the question “Where are you?” receives a response of a co-ordinate on a map than a response of “I’ve just walked along a path, I’m by a tree and I can see lots of orcs”.

In fact, abundant communication mechanisms rather than efficient communication can itself become a problem without good situational awareness as new players constantly ask “where should we go” as they run around in a daze. This can take up valuable time from other team members and weaken your overall strength. Preparation itself is almost non-existent in corporates. In some areas we might attempt scenario planning and a few exec games about imagining you're a startup trying to disrupt your business but on the whole we're often so busy with immediate work such as firefighting and keeping up with competitors that we create little time to prepare. 

There's an awful lot to be said for learning about these aspects from online games. Anyone under the illusion that business is some bastion of strategic play should spend a few minutes watching an experienced group run an organised raid. Those people tend to use levels of strategic and tactical play that businesses can only dream of. Fortunately, in business we're often up against other organisations that equally lack situational awareness, suffer from isolation, have weak team play, abundant but ineffective communication and lack preparation. The effect is remarkably similar to a group of inexperienced World of Warcraft players just charging at each other with cries of "Attack" followed by "Will someone heal me!" An exciting brawl of chaos, often with single participants - hero players, the Steve Jobs of your Elven army - making the difference. Of course, pit either team or in fact both teams against an experienced and well-rehearsed group and it stops being a brawl and becomes a massacre. Opposing healers get wiped first, followed by crowd control, tanks and then the poor and undefended damage dealers.

In the world of business, there are some really dangerous groups out there. Don't expect to go up against them with the usual 'Here’s the vision, we’ve got great people … now Charge!' approach. It’s far more sensible to find a profitable exit in order to fight another day. That's a hint to those gaming companies starting to be concerned about Amazon's encroachment into their space with Lumberyard. Either start learning from your own online players or find a new industry to bunker down in. Don’t expect to just read a few chapters on mapping or play a couple of games and instantly transform into a master of strategy; there is a long journey ahead of you. Finally, I don’t care that you’ve been doing strategy for several decades. Show me a map first.

Tips for mapping

There are a couple of general tips, common terms and diagrammatic forms that I apply to mapping itself. My tips include: -

All models are wrong; some are merely useful. 
Mapping is not the answer, it’s simply a guide. Hence don’t try to create the perfect map; the key is to produce one that is good enough, and this requires you to share it and open yourself up to challenge. Also, you’re likely to use other tools alongside mapping when scenario planning and examining the viability of different points of attack. These can range from financial models to my current favourite, the business model canvas.

Where before why
When thinking about strategy, the first thing you need to do is identify where you can attack before asking why, as in why attack here over there. It’s all about position (y-axis) and movement (x-axis).

Iterative and continuous learning
The entire strategy cycle is iterative and you’re going to have to follow the same path, which means mapping is not going to be a one-off exercise but something that happens all the time. Again, the temptation is to map the entire landscape in some sort of “deathstar” – large scale, all encompassing, doomed to fail – effort in order to create that perfect answer. Instead you have to embrace the uncertainty, think small and start somewhere. If you’re using mapping and it’s taking a long time and doesn’t seem to help answer any questions, then stop. Don’t be afraid to find a better way of doing this.

Learn yourself
If you are responsible for strategy, then you need to learn to play the game yourself. I often give strategy consultants a hard time but this doesn’t mean they don’t have a use. Don’t however rely on third parties to give you an answer, instead use them to help you challenge your strategy and to learn new forms of gameplay.

There are numerous terms associated with mapping. I’m often guilty of using them without clearly explaining to others, so in order to rectify this I’ve provided the most common in figure 52.

Figure 52 – Terms

Maps are obviously visual and whilst they are far from the Ordnance Survey maps of geography, it’s useful to have a common lexicon of symbols. In figure 53, I’ve provided the ones I most commonly use.

Figure 53 – Symbols

A nod to early terms
Mapping itself has evolved over time, hence the terms I used in the past are slightly different to those I use today. These cosmetic changes are purely to help refine the craft; the underlying meaning has remained constant. However, for clarification, these changes include: -

The original terms were Innovation, Bespoke, Product (+rental) and Commodity (+utility). The problem with this was confusion over the term innovation. An innovation is simply the first(ish) attempt to put an idea into practice but this can mean the genesis of a new thing, a feature differentiation of a product or a shift in business model from product to utility. To clarify, I used the term genesis instead, as in the first attempt to create an act, a practice, a set of data or a type of knowledge.

The original term was … blank. I didn’t feel the need to spell it out but it became clear that I needed to after being asked many times “What does the Y-Axis represent?” Hence, value chain and the marks for visible & invisible were added.

The original terms were chaotic and linear. However, this created some confusion with mathematical terms and wasn’t very descriptive. Hence I replaced it with uncharted and industrialised.

The original terms were development, framework and systems which became pioneer, coloniser and town planner as the business was folded into the structure. However, coloniser never really stuck and it morphed into settler over time. 

Implementing mapping

Most organisations have structures in place that can be used to embed mapping, whether it’s an architectural group, an office of the CEO, a business relationship function or some other home. Typically, in a distributed organisation you have business units that are responsible for delivery, some form of executive function that covers policy, approval and accountability and a common or shared services supply group that provides some element of commonality, as per figure 54.

Figure 54 – Common structure

However, the common components provided are often a bit hit or miss. Without a form of mapping then it’s difficult to find what is duplicated and how it should be provided. It will often degenerate into plucking things from the air. There also tends to be an element of political conflict between the business units and the shared services and in the worst cases the shared services function can be viewed as a hindrance.

To resolve this, we need to separate out the delivery of shared services from the identification of what is common. I’ve found the best way to achieve this is not to remove budget from the business units (often a political bone of contention) but instead to introduce a co-ordination function. The role of the co-ordination function is to encourage compliance to policy (doctrine) often via a spend control mechanism and to enable sharing between the business units through the use of maps. This doesn’t require some big bang overhaul but usually the formalisation of an existing structure e.g. Office of an executive function or an architectural board into this role. When spend control is used then a policy limit (e.g. £500K) should be set above which any project must be mapped and the map sent to the co-ordination function. The function can then analyse the map, make recommendations and introduce elements of transparency and challenge within the organisation. As more maps are gathered then the function can also identify patterns for common services. This should become a relatively quick process lasting a few hours from initiation to recommendation. 
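The spend control gate described above can be sketched as a trivial rule. This is a minimal illustration only, assuming the example £500K limit from the text; the `Project` fields and return strings are hypothetical names invented for this sketch, not a prescribed system:

```python
# Minimal sketch of the spend-control gate: projects above the policy
# limit must be mapped, and the map goes to the co-ordination function.
# The threshold and field names are illustrative assumptions.
from dataclasses import dataclass

SPEND_CONTROL_LIMIT = 500_000  # example policy limit in GBP


@dataclass
class Project:
    name: str
    estimated_cost: int  # in GBP
    has_map: bool        # has a map been produced for this project?


def review(project: Project) -> str:
    """Decide what happens to a project under spend control."""
    if project.estimated_cost <= SPEND_CONTROL_LIMIT:
        return "proceed"                    # below the limit, no gate applies
    if not project.has_map:
        return "blocked: map required"      # above the limit, no map submitted
    return "send map to co-ordination"      # map is analysed, recommendations made


print(review(Project("IoT pilot", 120_000, has_map=False)))
print(review(Project("Platform rebuild", 2_000_000, has_map=True)))
```

The point of the rule is not the number itself but that anything expensive enough to matter cannot proceed unmapped, which is what creates the flow of maps the co-ordination function learns from.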

It’s through such a function that other forms of doctrine such as cell based structure, use of Pioneer-Settler-Town Planner along with more context specific gameplay can be introduced into the business units. I’ve summarised this in figure 55, adding in the co-ordination function (point 1). I’ve also noted that your shared service (point 2) should be elevated to a business unit and not just limit itself to provision of common components within a federated organisation particularly if you intend to run an ecosystem model such as ILC. If it’s important enough for you to create a shared and common service, then there either exists an outside market opportunity or you’re just rebuilding what already exists in the market.

Figure 55 – Adding co-ordination

Over time, your shared services group is likely to become dominated by small cells of town planners providing industrialised components. Your business units will tend to become dominated by cells of pioneers and settlers providing custom to product and rental services. Your co-ordination function will mainly become settlers focused on ensuring transparency and learning within the organisation itself.  However, this is over time. To begin with, simply start with a small co-ordination team (a handful of people) helping other business units create, share maps and learn from them. You will probably find that some business units start to offer their own home grown capabilities as common components to other business units. Don’t discourage these emergent behaviours. Whilst there may be an element of opportunistic “empire building” involved, if units are sharing and learning from maps then this is supportive. You can always migrate those components to a shared services group at a later date. The one thing to be careful of is business units trying to subvert the process e.g. trying to find exclusions to sharing or spend control. 

Often some will claim they are “too busy to write a map” or “it’s too complex”. For me, the idea that someone could be willing to spend £500K on something they can’t map sets alarm bells ringing. For such an expense we should know what the user needs are and what is involved. Mapping provides us the means to reflect on this, to challenge the assumptions, to question what is being considered and to demonstrate we have thought about it. Be warned however, these excuses are often code for resistance to sharing due to concerns that it will reduce their power base within an organisation. Knowledge is power often translates to shared knowledge is less power for me! If you ever want to stop the self-harm that occurs in corporations, from the endless duplication and bias to the poor gameplay, then you need to counter this with force. However, check for yourself first. Despite arguments that you’ll hear that “we have architecture groups” or “good communication”, just look around. Most federated organisations have hundreds of duplicated examples of the same thing being built. Ask yourself, how many pet IoT or AI projects doing roughly the same thing are actually going on in your organisation right now? If you're of any size the answer is "don't know" and from experience, it's going to be vastly more than whatever number you just thought of. Without a communication tool such as mapping and some form of co-ordination function then you’re unlikely to find out. Any resistance to sharing requires you to wield the stick, not the carrot, with ferocity otherwise others will clamour for exclusions, protection of silos and sharing will be lost. Organisations tend to be full of corporate antibodies that will try to kill off any change to existing structures.

To the question of whether the co-ordination function should be part of the executive function, I’d answer yes. In my company, the co-ordination function was the executive team. In a larger company you might want to start with a separate topographical intelligence function. Remember, you are unlikely to have any maps of your landscape and your SVPs / VPs won’t be able to magic them out of thin air. They’ll need support and help.

Continuous learning

This entire book is a process of continuous learning, however it’s more important for me to demonstrate how to achieve this (the strategy cycle) rather than the specifics of particular patterns. Once you have the basics, you will learn the patterns for yourself. However, it’s also worth me recapping as we go along this journey. In figure 56, I’ve provided the basic patterns examined so far.

Figure 56 – Patterns covered


I’m a great believer in using anti-patterns to examine the effect of not doing something. In this case, what are the anti-patterns for not mapping? In general, they will be the reverse of the doctrine that is developed from mapping, along with a failure to cope with climatic patterns and incorrect use of context specific play. We can use this to describe what an organisation that doesn’t understand its landscape looks like. I often use this as a way of analysing competitors but be careful, there’s a whole topic of misdirection that we haven’t touched upon yet. The anti-pattern organisation will look something like this.

Fails to focus on user needs.
Has an inability to describe its user needs and often confuses its own needs – profitability, revenue, data acquisition – with those of its user.

Fails to use a common language. 
Uses multiple different ways of describing the same problem space e.g. box and wire diagrams, business process diagrams and stories. Often suffers from confusion and misalignment.

Fails to be transparent. 
Has difficulty in answering basic questions such as “How many IoT projects are we building?” Information tends to be guarded in silos.

Fails to challenge assumption. 
Action is often taken based upon memes or HiPPO (highest paid person’s opinion). Often parts of the organisation will admit to building things they know won’t work.

Fails to remove duplication and bias. 
The scale of duplication is excessive and exceeds in practice what people expect. Any investigation will discover groups custom building what exists as a commodity in the outside world, their very own Thomas Thwaites toaster. Resistance is often given to changing this on the grounds that it is somehow unique, despite the group's inability to explain user needs.

Fails to use appropriate methods. 
Tends towards single size methods across the organisation e.g. “outsource all of IT” or “use Agile everywhere”. This can often be accompanied with a yo-yo between one method (the old emperor) and a new naked emperor based upon its success in a specific example (outcome bias).

Fails to think small.
Tends toward big scale efforts (e.g. Death Star projects) and big departments. This can include major platform re-engineering efforts or major re-organisations.

Fails to think aptitude and attitude. 
Tends to consider all of a specific aptitude (e.g. finance, operations or IT) as though it’s one thing. Promotes a mantra of there is only “IT” rather than a nuanced message of multiple types. Tends to create general training courses covering the entire subject e.g. “Let’s send everyone on six sigma training”

Fails to design for constant evolution.
Tends to bolt on new organisational structures as new memes appear. A cloud department, a digital department, a big data group etc.

Fails to enable purpose, mastery and autonomy. 
There is often confusion within the organisation over its purpose combined with feelings of lacking control and ability to influence.

Fails to understand basic economic patterns.
Often conducts efficiency or innovation programmes without realising the connection between the two. Assumes it has choice on change (e.g. cloud) where none exists. Fails to recognise and cope with its own inertia caused by past success.

Fails to understand context specific play. 
Has no existing language that enables it to understand context specific play. Often uses terms as memes e.g. open source, ecosystem, innovation but with no clear understanding of when they are appropriate.

Fails to understand the landscape. 
Tends to not fully grasp the components and complexity within its own organisation. Often cannot describe its own basic capabilities. 

Fails to understand strategy. 
Tends to be dominated by statements that strategy is all about the why but cannot distinguish between the why of purpose and the why of movement. Has little discussion on position and movement combined with an inability to describe where it should attack or even the importance of understanding where before why. Often strategy is little more than a tyranny of action statements based upon meme copying and external advice.

Books to read

There aren’t any books that deal with topographical intelligence in business that I’m aware of, which is why after almost eight years of badgering I'm finally getting around to writing one. I'm a very reluctant writer and this is not a comfortable activity for me. However, there are lots of other books that I’d recommend reading because of the general concepts they provide. I don’t necessarily agree with everything that is said but these are definitely worth exploring, especially because of later chapters in this book. I find that all of them are worth spending time with.

The Art of Warfare by Sun Tzu (Roger Ames translation)
Science, Strategy and War by Frans P.B. Osinga
Atlas of Military Strategy 1618–1878 by David Chandler
The Simplicity Cycle by Dan Ward
Accidental Empires by Robert X. Cringely
Hierarchy Theory: The Challenge of Complex Systems by Howard H. Pattee
The Evolution of Technology by George Basalla
Diffusion of Innovations by Everett Rogers
Customer Driven IT by David Moschella
Digitizing Government by Alan Brown, Jerry Fishenden and Mark Thompson
Learn or Die by Edward D. Hess
The Oxford Handbook of Innovation by Jan Fagerberg, David Mowery and Richard Nelson
The Starfish and the Spider by Ori Brafman and Rod Beckstrom
Does IT Matter? by Nicholas Carr
Technological Revolutions and Financial Capital by Carlota Perez
The Entrepreneurial State by Mariana Mazzucato
The Intelligent Investor by Benjamin Graham
Cybernetics by Norbert Wiener
The Age of Discontinuity by Peter F. Drucker
The Red Queen by William P. Barnett

However, before you bury your head in books and I continue writing, can I strongly recommend that you go and play World of Warcraft if you have any doubts over the importance of situational awareness. I understand that Fernando Flores runs an executive training course on this.

Next Chapter in Series [to be published soon]
GitBook link [to be published soon]

Sunday, August 21, 2016

The play and a decision to act

Chapter 5

In chapters one to four I've covered the basics of mapping, common economic patterns and doctrine. However, these Wardley maps of business don’t tell you what to do any more than a geographical map tells an admiral what to do. The maps are simply a guide and you have to decide what move you’re going to make, where you’re going to attack and how you navigate your ship through the choppy waters of commercial competition. In other words, you have to apply thought, decide to act and then act. In this chapter we’re going to cover my journey through this part of the strategy cycle – see figure 42.

Figure 42 – The play and a decision to act

Identifying opportunity

There exist two different forms of why in business – the why of purpose (i.e. win the game) and the why of movement (i.e. move this piece over that one). The why of movement is what I'm going to concentrate on here but in order to examine it, we must first determine where we can attack.

In the past, I had sat in many meetings where options were presented to me and my executive team and then we made a choice based upon financial arguments and concepts of core. We had never used a landscape to help determine where we could attack. This was a first for us and very much a learning exercise. I’ve taken that earliest map from 2005 and highlighted on it the four areas that we considered had potential. There were many others but for the sake of introduction, I thought I’d keep it simple. These four wheres are shown in figure 43.

Figure 43 – Wheres

Where 1 – we had an existing online photo service that was in decline but which we could concentrate on. There existed many other competitors in this space, many of which were either well financed (e.g. Ofoto) or ahead of us in terms of offering (e.g. Flickr). There were also unmet needs that we had found. As a company we had acquired many capabilities and skills, not necessarily in the online photo business as the group developed many different types of systems. We also had an internal conflict with our parent company’s online photo service which we built and operated.  Whilst our photo service was open to the public, the parent company’s service was focused on its camera owners and we had to tread a careful game here as our own service was sometimes considered a competitor.

Where 2 – we had anticipated that a coding platform would become a utility.  We had ample skills in developing coding platforms but most importantly, we had also learned what not to do through various painful all-encompassing "Death Star" projects. There would be inertia to this change among product vendors that would benefit us in our land grab.  To complicate matters many existing product customers would also have inertia and hence we would have to focus on startups though this required marketing to reach them.  There was also a potential trade-off here as any platform would ultimately be built on some form of utility infrastructure similar to our own Borg system (a private utility compute environment providing virtual machines on-demand based on Xen) and this would reduce our capital investment.  Our company had mandates from the parent to remain profitable each and every month and to keep headcount fixed.  I had no room to expand and any investment made would have to come out of existing monthly profit despite the reserves built up in the bank.  A platform play offered the potential to reduce the cost and increase the speed of development of our other revenue generating projects hence freeing up more valuable time until a point where the platform itself was self-sustaining. 

Where 3 – we had anticipated that a utility infrastructure would appear.  We had experience of doing this but we lacked any significant investment capability.  I was also mindful that in some circles of the parent company we were considered a development shop on the end of a demand pipeline and they were heavily engaged with an external hosting company.  This might cause conflict and unfortunately I had painted ourselves into this corner with my previous efforts to  simply “survive”.  If we made the move then in essence many of these problems were no different from the platform space except the agility benefits of platform were considered to be higher.  The biggest potential challenge to us would not be from existing product (e.g. server manufacturers) or rental vendors (e.g. hosting companies) but the likes of Google entering the space.  This we expected to happen in the near future and we certainly lacked the financial muscle to compete if it did.  It seemed more prudent to prepare to exploit any future move they made.  However, that said it was an attractive option and worth considering.  One fly in the ointment was concerns that had been raised on issues of security and misuse of our systems by various members of my own team.  It seemed we would have our own inertia to combat due to our own past success with using products (i.e. servers) and despite the existence of Borg.

Where 4 – we could instead build something novel and new based upon any utility environments (either infrastructure or platform) that appeared. We understood that using utility systems would reduce our cost of investment i.e. the gamble in the space.  However, any novel thing would still be a gamble and we’d be up against many other companies.  Fortunately, we were very adept at agile development and we had many crazy ideas we could pursue generated by the regular hack days we ran.  It might be a gamble in the dark but not one we should dismiss out of hand.

Looking at the map, we had four clear “wheres” we could attack.  We could discuss the map, the pros and cons of each move in a manner which wasn’t just “does this have an ROI and is it core?” Instead we were using the landscape to help us anticipate opportunity and points of attack.  I suddenly felt our strategy was becoming more meaningful than just gut feel and copying memes from others.  We were thinking about position and movement.  I was starting to feel a bit like that wise SVP I had met in the lift in the Arts hotel in Barcelona when he was testing that junior (i.e. me) all those years ago.  It felt good but I wanted more. How do I decide? 

The dangers of past success

The problems around making a choice usually stem from past success and the comfort it brings.  We had an existing photo service along with other lines of business which generated a decent revenue. Would it not be better for me to just continue doing what we were doing?  I’d be taking a risk changing the course we were on.  However, I had recently watched another company fail to manage change and was acutely aware of the dangers of not taking a risk.  That company was Kodak.

Being an online photo service, I had a ringside seat to the fundamental shift happening in the image market between 2000 to 2005.  The photo had been seen as something with value to customers due to its costs in terms of time and money to produce - the visit to the photo lab, the cost of processing and the wait for it to be delivered via the post.  Film was at the centre of this and the only thing more annoying than waiting for it to be processed was not having enough film to take that next shot on holiday.  Many times in the past, I had to make choices over which picture I took due to  a limited number of shots left.  However, the image and the film were really just components to delivering my overall need which was sharing my experiences.  The image was also evolving from analog film to a new digital world in which I could take pictures and delete the ones I didn’t like.  I might have a limit in terms of memory card but I could always download to a computer and share with others.  There was no film processing required.

I’ve created a map for that changing landscape in figure 44 and as I go through more of my experience with the Kodak story I’ll make references to that map.  The old world was one of analog film (Point 1).  Sharing a moment was about sitting on the sofa with friends and family and passing round the photo album.  The film itself needed some mechanism of fulfilment, such as the photo lab.  However, the camera industry was rapidly commoditising with good enough disposable cameras.  The analog world of images was also changing to one which was more digital (Point 2).  Digital still cameras (DSCs) were becoming more common and I could share an image by simply emailing it to others.  Kodak had led the charge into this brave new world with early research in the mid 1970s but somehow it also seemed to be losing ground to others such as Sony and Canon.

Figure 44– Kodak

The growth of digital images and the spread of the internet had enabled the formation of online photo services (Point 3). These provided simple ways of printing out your images along with easier means of sharing them with others.  There was a very noticeable shift occurring from printing to sharing. You could create social networks to share images about hobbies or instead share with a close circle of friends.  One of the early pioneers in this space was Ofoto, which Kodak had acquired in 2001. Kodak’s messaging had also changed around that time; it had become more about sharing experiences and moments.

However, Kodak wasn’t the only competitor in the space and, unlike many others, Kodak had a problem: it made significant revenue from film processing.  Whilst it had a strong position in digital still cameras and online photo services, it didn’t seem to be maximising this. Others were quickly catching up and overtaking.  I can only assume that its past success with film had created inertia (Point 4) to change within the organisation.  To an outside observer, Kodak seemed to be in conflict with itself.  The first signs of this were apparent in the late 90s with the release of the Advantix camera system, a curious blend of digital camera which produced film for processing.  There were also conflicting messages coming out of Kodak: whilst one part of the organisation seemed to be pushing digital, another part seemed to be resisting it.

In 2003, Kodak introduced the Easyshare printer dock 6000, which enabled consumers to produce Kodak photo prints at home from digital images. When I first heard of this, it felt as though Kodak had finally overcome its inertia through a compromise between the fulfilment and the digital business (Point 5).  The future was one of a self-contained Kodak system from digital still camera to online service to photo printer.  But there was a problem here.  On our own online site we had already witnessed the rapid growth of images taken with mobile phones (Point 6).  Though camera phones were still uncommon, they seemed to herald a future where people would take pictures with their phones and share online. There was no mass market future for print, only a niche compared to an enormous market of shared digital images.  It seemed as though Kodak had overcome its inertia through a compromise which meant investing in exactly where the future market wasn’t going to be.  By early 2005, from our perspective, the future of the entire industry from fulfilment to photo printers to cameras to film to digital still cameras (Point 7) was starting to look grim.  For us, the future of pictures looked more like figure 45 and printed photos were barely worth mentioning unless you intended to specialise in a profitable niche.

Figure 45– A future picture

In any choice I was going to make, I had to be careful of inertia and past success. Simply standing where we were might be the comfortable option but it didn’t mean we would have a rosy future.  Our fraught issues around our parent's photo service could grow if we embraced a camera phone future, as this would put us in direct conflict with its core DSC business.  However, Kodak was a clear example of what could go wrong if you didn’t move fast enough into the future, allowed inertia to slow you down or compromised by placing your bets in the wrong place. But maybe there was another future we could find. How far into the future should we peek?

The near, the far and the crazy

Back in the late 90s, I had taken a deep interest in 3D printing.  It was the main reason why I had originally joined the near-bankrupt online photo service: I envisaged a future where images of physical things would be shared and I wanted to learn about the space of sharing images.  When we were acquired by one of the world’s largest printer manufacturers, I was overjoyed.  I assumed that they too would share my passion.  I gave numerous presentations on the topic, both externally and internally within the parent company, and to my disappointment it was always the external crowd that got more excited.  In 2004, I gave a presentation at Euro Foo on the future of 3D printers. The subject was a pretty hot topic at the time and one member of the audience that I was fortunate enough to meet was Bre Pettis, who was demonstrating his felt-tip pen printer, the DrawBot. Why fortunate? Bre founded MakerBot and subsequently rocked the world of 3D printing.

Whilst 3D printing was a passion, I also had an interest in printed electronics, especially the work of Sirringhaus and Kate Stone. I started to use these concepts to describe a future world of how manufacturing would change. The basics are provided in figure 46 and we will go through each step of this map.

Figure 46 – The near, the far and the crazy

First, let us start with the user need for some device (Point 1).  I’ll leave it generic because I want to cover manufacturing itself and not the specific use of one device over another.  Our device would have physical elements, including electronics, along with any software that interacts with it.  The physical and electronic elements are commonly described through some form of computer aided design (CAD) diagram, which provides instructions on what to build, and this is combined with our software, which is simply our code (Point 2). 

The physical form would normally be manufactured in a factory which generally used common machinery in significantly customised processes.  However, this was starting to change with concepts such as digital factories and even 3D printers, which were becoming less magical and more common (Point 3).  This promised a future of highly industrialised factories without extensive re-tooling for each product run.  Also, since Sirringhaus’s first inkjet-printed transistors in 2001, a new field of plastic and printed electronics had been rapidly growing (Point 4). Electronics manufacture was on the path to becoming industrialised: I would simply print the electronics I needed rather than combine a mix of commodity and non-commodity components on my own circuit board, created on some assembly line that changed with every product run.

For me, the interesting aspect of this was the combination of both physical and electronic forms.  In 2005, I had become aware of several university-led efforts to create hybrid objects, including junction boxes where both the physical form and electrical components were printed (Point 5). This too would become industrialised, leading to a world in which I printed my entire device rather than relied on factories which assembled it.  Along with the potential for creating novel materials and components, this also had the opportunity to fundamentally change the concept of design. 

The function of a device is a combination of its physical form, its electronics and any software that interacts with them.  As hybrid printers industrialise, this function becomes described by purely digital means – the CAD (an instruction set) which is printed and the code (an instruction set) which is run. When we wish to change the function of a device, we need to change one of those two instruction sets along with considering the interaction between the two. Normally, we try to make changes in software because it’s the less costly option but as hardware becomes more malleable then that equation changes.  More than this, we are now in a position to simply describe the function of the device that we want and allow a compiler to determine how that should be instantiated in the instruction sets. My desire to add a sundial to my phone could be achieved through software, electronic or physical means or a combination of all three - a compiler could work out that decision tree for me. This opens up the possibility of an entirely new form of programming language that compiles down to physical, electronic and coding forms and where designers concentrate on describing the function of the thing and even object inheritance in the physical world.  I called this theoretical programming language SpimeScript (Point 6) in honour of the marvellous book by Bruce Sterling, Shaping Things.  This topic was the central theme of a talk I gave at Euro OSCON in 2006.
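SpimeScript was never built, so the decision it would have automated can only be sketched. Here is a toy illustration in ordinary JavaScript, with every name, medium and cost invented: a miniature "compiler" that, given a desired function, picks the cheapest medium (software, electronic or physical) able to provide it.

```javascript
// Purely hypothetical sketch -- SpimeScript never existed as a language.
// The idea: describe the function you want and let a "compiler" decide
// whether to realise it in software, printed electronics or physical
// form, based on estimated cost. All names and costs are invented.
function chooseInstantiation(feature, candidates) {
  // Keep only the media that can provide the requested feature.
  const viable = candidates.filter(c => c.provides.includes(feature));
  if (viable.length === 0) throw new Error(`no medium provides ${feature}`);
  // Pick the cheapest viable medium.
  return viable.reduce((best, c) => (c.cost < best.cost ? c : best));
}

const media = [
  { medium: "software",   provides: ["sundial", "clock"], cost: 1 },
  { medium: "electronic", provides: ["clock"],            cost: 5 },
  { medium: "physical",   provides: ["sundial"],          cost: 20 },
];

// A sundial on my phone could be software or a physical gnomon;
// the compiler resolves the decision tree in favour of software.
console.log(chooseInstantiation("sundial", media).medium); // "software"
```

The same call with a feature that only hardware provides would select a hardware medium instead, which is the whole point: the designer describes function, the toolchain chooses instantiation.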

However, I had previously raised these discussions within the parent company and had become aware that whilst we might be able to make far future anticipations of change, they were built on ever more layers of uncertainty and were increasingly unfamiliar and uncomfortable to others.  The further out we went, the crazier the ideas sounded and the more concerned people became.  This itself creates a problem if you intend to motivate a team towards a goal.  Hence, if I was going to choose a course of action, it needed to push the boundary but not so far that it seemed like science fiction.

I was starting to feel uncomfortable with: - 
Where 1 - focus on the online photo service, for reasons of inertia and conflict.
Where 4 - build something novel and new based upon future industrialised services, for being too far reaching. 

The question now became, given our choices could we influence the market in any way to benefit us? Could that help us decide why here over there?

Learning context specific gameplay

Context specific play: Accelerates, decelerators and constraints

I understood that everything evolved due to competition and had plenty of evidence to show past examples, from electricity to nuts and bolts. The question was: could I somehow influence this?  By coincidence, from the very early days of 2001 we had not only been users of open source but also contributors to it.  We supported the Perl language and many other open source projects.  I had purposefully used these as fertile hunting grounds to recruit my amazing team during 2002-2005.  But I had also observed how open source efforts, through collaboration with others, had produced stunning technology that surpassed proprietary efforts in many fields.  In many cases, open source technology was becoming the de facto standard and even the commodity in a field.  It seemed that the very act of open sourcing, if a strong enough community could be created, would drive a once magical wonder towards becoming a commodity.  Open source seemed to accelerate competition for whatever activity it was applied to. 

I had also witnessed how counter forces existed, such as fear, uncertainty and doubt.  This was often applied by vendors to open source projects to dissuade others by reinforcing any inertia they had to change.  Open source projects were invariably accused of being insecure, open to hackers (as though that’s some form of insult), of dubious pedigree and of being a risk.  However, to us, and the millions of users who consumed our services, they were an essential piece of the jigsaw puzzle. By chance, the various battles around open source had increased my awareness of intellectual property.  I became more acutely conscious of how patents were regularly used for ring-fencing, to prevent a competitor from developing a product.  This was the antithesis of competition and it was stifling.  I started to form an opinion that certain actions would accelerate competition and drive a component towards a commodity whilst others could be used to slow its evolution.  The landscape could be manipulated.

At the same time, I had noticed that as certain activities became more industrialised and therefore more widespread, it often became difficult to find people with the right skills or there were shortages of underlying components.  The evolution of a component could therefore be constrained by a component it depended upon.  I’ve summarised these points in figure 47 by applying them to our first map.

Figure 47 – Accelerators, decelerators and constraints

Point 1 – the evolution of a component can be accelerated by an open approach, whether open source or open data.

Point 2 – the evolution of a component can be slowed down through the use of fear, uncertainty and doubt when crossing an inertia barrier or through the use of patents to ring-fence a technology.

Point 3 – the evolution of a component can be affected by constraints in underlying components e.g. converting compute to a utility would potentially cause a rapid increase in demand (due to new uncharted components that are built upon it or the long tail of unmet business needs) but this requires building data centres. Whilst the provision of virtual machines could be rapid, the building of data centres is not.
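To make that constraint concrete, here is a small sketch with purely illustrative numbers (not data from the time): demand for a new utility grows exponentially each quarter, while capacity arrives in lumps because a new data centre lands only every few quarters. The function reports the first quarter in which demand outruns installed capacity.

```javascript
// Illustrative model of a supply constraint: virtual machines can be
// provisioned quickly, but the data centres behind them cannot.
function firstShortfall(initialDemand, growthPerQuarter,
                        capacityPerDC, dcEveryNQuarters, horizon) {
  let demand = initialDemand;
  let capacity = capacityPerDC; // one data centre at launch
  for (let q = 1; q <= horizon; q++) {
    demand *= growthPerQuarter;                      // demand compounds
    if (q % dcEveryNQuarters === 0) capacity += capacityPerDC; // lumpy supply
    if (demand > capacity) return q;                 // the constraint bites
  }
  return null; // capacity kept pace over the horizon
}

// Demand doubling every quarter vs one new data centre per year
console.log(firstShortfall(1000, 2, 10000, 4, 12)); // → 5
```

With demand doubling quarterly, even a ten-fold capacity headroom at launch is exhausted in just over a year, which is why a price war forcing demand up faster than any single supplier can build was a plausible counter play.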

I started to explore the map further, looking for other opportunities we could exploit. 

Context specific play: Innovate, Leverage and Commoditise

I have frequently been told that it is better to be a fast follower than a first mover.  But is that true? The map told me a slightly more complex story.  Certainly, when exploring an uncharted space there is lots of uncertainty and a huge cost of R&D.  It seemed better to let others incur that risk and then somehow acquire that capability.  But researchers and companies were constantly creating new things and so there was also a cost of discovering that new successful thing in all the noise.  We wouldn’t be the only company trying to play that game and any acquisition cost would reflect this.  If we wanted to play that game, then somehow we needed to be able to identify future success more effectively than others.

By comparison, when taking a product to a utility, the component was already quite well known. It was defined and there was an existing market but, yes, there would be inertia.  I realised there was a connection between the two extremes and we were sitting on the answer.  Our pioneer – settler – town planner structure had enabled us to cope with evolution and connect them.  The settlers’ role was simply to identify future successful patterns and learn about them by refining a product or library component.  In 2005, we actually referred to our settlers as the framework team and their success came from understanding the patterns within what the pioneers - our development team - had built. The pioneers were our gamblers.

However, what if our pioneers weren’t us but instead other companies?  Could our settlers discover successful patterns in all that noise?  The problem of course was where would we look?  Like any product vendor we could perform some marketing survey to find out how people were using our components but this seemed slow and cumbersome.  Fortunately, our online photo service gave us the answer.

For many years we had exposed parts of the photo service through URL requests and APIs to others. It wasn’t much of a leap to realise that if we monitored consumption of our APIs then we could identify, in real time, which other companies were being successful without resorting to slow and expensive marketing surveys.  This led to the innovate – leverage – commoditise (ILC) model.  Originally, I called this innovate – transition – commoditise and I owe Mark Thompson a thank you for persuading me to change transition to something more meaningful.  The ILC model is described in figure 48 and we will go through its operation.

Figure 48 – ILC

Take an existing product that is relatively well defined and commonplace and turn it into an industrialised utility (Point A1 to A2). This utility should be exposed as an easy to use API. Then encourage and enable other companies to innovate by building on top of your utility (Point B1 ). You can do this by increasing their agility and reducing their cost of failure, both of which a utility will provide.  These companies building on top of your utility are your pioneers.

The more companies you have building on top of your utility (i.e. the larger your ecosystem), the more things your “outside” pioneers will be building and the wider the scope of new innovations. Your “outside” ecosystem is in fact your future sensing engine.  By monitoring meta data, such as the consumption of your utility services, you can determine what is becoming successful.  It’s important to note that you don’t need to examine the data of those companies but purely the meta data, hence you can balance security concerns with future sensing.  You should use this meta data to identify new patterns that are suitable for provision as industrialised components (B1 to B2). Once you’ve identified a future pattern, you should industrialise it to a discrete component service (B3) provided as a utility and exposed through an API.  You’re now providing multiple components (A2, B3) in an ever growing platform for others to build upon (C1). You then repeat this virtuous circle. 
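The sensing step above can be sketched in a few lines. This is illustrative only, with invented component names and an invented growth threshold, not anything from the actual Zimki code: compare per-component API consumption across two periods and flag the fastest growers as candidates for industrialisation.

```javascript
// Aggregate per-component API call counts across two periods and flag
// components whose consumption is growing fastest -- the "leverage"
// step of ILC. Names and thresholds are invented for illustration.
function findRisingComponents(previous, current, growthThreshold) {
  const rising = [];
  for (const [component, calls] of Object.entries(current)) {
    const before = previous[component] || 0;
    // A brand-new component counts as unbounded growth.
    const growth = before === 0 ? Infinity : calls / before;
    if (growth >= growthThreshold) {
      rising.push({ component, growth, calls });
    }
  }
  return rising.sort((a, b) => b.growth - a.growth); // highest growth first
}

// Example: monthly call counts per exposed API component
const april = { storage: 100000, billing: 20000, imageResize: 5000 };
const june  = { storage: 150000, billing: 90000, imageResize: 6000 };

const candidates = findRisingComponents(april, june, 3);
console.log(candidates.map(c => c.component)); // only billing grew 3x or more
```

Note that only the meta data (call volumes) is inspected, never the content of anyone's requests, which is exactly the security balance the text describes.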

Obviously, companies in any space that you’ve just industrialised (B2 to B3) might grumble – “they’ve eaten our business model” – so, you’ll have to carefully balance acquisition with implementation.  On the upside, the more discrete components you provide in your platform then the more attractive it becomes to others.  You'll need to manage this ecosystem as a gardener encouraging new crops to grow and being careful not to harvest too much.  Do note, this creates an ever expanding platform in the sense of a loose gathering of discrete component services (e.g. storage, compute, database) which is distinct from a coding platform (i.e. a framework in which you write code). 

There is some subtle beauty in the ILC model.  If we take our ecosystem to be the companies building on top of our discrete component services, then the larger the ecosystem is: -

the greater the economies of scale in our underlying components
the more meta data exists to identify future patterns
the broader the scope of innovative components built on top and hence the wider the future environment that we can scan

This translates to an increasing appearance of being highly efficient, as we industrialise components to commodity forms with economies of scale, but also highly customer focused, due to leveraging meta data to find the patterns others want.  Finally, others will come to view us as highly innovative through the innovation of others.  All of these desirable qualities will increase with the size of the ecosystem as long as we mine the meta data and act as an effective gardener.

Being constantly the first mover to industrialise a component provides a huge benefit in enabling us to effectively be a fast follower to future success and wealth generation.  The larger the ecosystem we build, the more powerful the benefits become.  This model stood in stark contrast to what I had been told – that you should be a fast follower and that you could only be one of highly innovative, efficient or customer focused.  Looking at the map, I knew that with a bit of sleight of hand I could build the impression that I was achieving all three by being a first mover to industrialise and a fast follower to the uncharted.  I normally represent this particular form of ecosystem model (there are many different forms) with a set of concentric circles that describe the process – see figure 49.

Figure 49 – Circular view of ILC

Using context specific gameplay: the play

It was at this point, with some context specific gameplay in hand, that I started to run through a few scenarios with James, my XO and my Chief Scientist in our boardroom.  Our plan started to coalesce and was enhanced by various experiments that the company had conducted.  Not the least of these was the head of my frameworks team walking in to tell me that they had just demonstrated we could develop entire applications (front end and back end) in JavaScript.  

At the same time as refining our play, I had encouraged the group to develop component services under the moniker of LibApi, as in liberation API i.e. our freedom from endlessly repeated tasks and our existing business model.  To say I was rapturous over this experiment would be to underestimate my pure delight.  This fortuitous event helped cement the plan, which is summarised in figure 50.  I’ll break it down and go through each point in detail.

Figure 50 – The Plan

Point 1 – the focus of the company would be on providing a coding platform as a utility service alongside an expanding range of industrialised component services for common tasks such as billing, messaging, an object store (a key-object store API), email etc.  All components would be exposed through APIs and the service would provide the ability to develop entire applications in a single language – JavaScript.  JavaScript was chosen because of its common use, the security of the JS engine and the removal of the translation errors that come from writing front and back end code in different languages. The entire environment would be charged for on the basis of JavaScript operations, network usage and storage. There would be no concept of a physical or virtual machine. 
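A minimal sketch of that charging model follows, with invented rates (the actual Zimki tariff is not recorded here): the bill is a straight linear function of operations, network and storage, with no machine-based pricing anywhere in it.

```javascript
// Sketch of utility platform billing with invented rates -- charge on
// JavaScript operations, network usage and storage consumed, never on
// physical or virtual machines.
function monthlyCharge(usage, rates) {
  return usage.jsOps * rates.perOp
       + usage.networkGb * rates.perGb
       + usage.storageGb * rates.perStoredGb;
}

const rates = { perOp: 0.000001, perGb: 0.10, perStoredGb: 0.05 };
const usage = { jsOps: 50e6, networkGb: 20, storageGb: 100 };

// ≈ 57: 50 (operations) + 2 (network) + 5 (storage)
console.log(monthlyCharge(usage, rates));
```

The significance of the model is that cost tracks what an application actually does rather than what it reserves, which is what makes the worth-based plays later in the chapter possible.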

Point 2 – to accelerate the development of the platform, the entire service would be open sourced. This would also enable other companies to set up competing services but this was planned for and desirable.

Point 3 – the goal was not to create one Zimki service (the name given to our platform) but instead a competitive marketplace of providers.  We were aiming to grab a small but very lucrative piece of a large pie. We would seed the market with our own utility service and then open source the technology.  To prevent companies from creating divergent product versions, the entire system needed to be open sourced under a licence which enabled competition at an operational level but minimised feature differentiation of the product set – GPL seemed to fit the bill.  Since our development process used test driven development and the entire platform was exposed through APIs, we were already creating a testing suite. This testing suite would be used to distinguish between community platform providers and certified Zimki providers through a trademarked image.  By creating this marketplace, we could overcome one source of inertia (reliance on a single provider) whilst enabling companies to try their own platform in-house first, and develop new opportunities for ourselves, from an application store to market reporting to switching services to brokerage capability.
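The real Zimki test suite is long gone, so here is an invented miniature of the certification idea: a provider passes only if every API behaviour in the suite matches the reference behaviour, allowing competition on operations while blocking feature differentiation. All names and checks are hypothetical.

```javascript
// Toy conformance checker: run a suite of behavioural checks against a
// provider and certify it only if every check passes. Hypothetical names.
function certify(provider, suite) {
  const failures = suite.filter(test => {
    try {
      return !test.check(provider);
    } catch (e) {
      return true; // an exception counts as a failed conformance check
    }
  });
  return { certified: failures.length === 0,
           failed: failures.map(t => t.name) };
}

const suite = [
  { name: "storage put/get roundtrip",
    check: p => { p.put("k", "v"); return p.get("k") === "v"; } },
  { name: "storage get of a missing key returns null",
    check: p => p.get("absent") === null },
];

// A toy in-memory provider that conforms to the suite
const provider = (() => {
  const store = new Map();
  return { put: (k, v) => store.set(k, v),
           get: k => (store.has(k) ? store.get(k) : null) };
})();

console.log(certify(provider, suite).certified); // true
```

A provider that changed the API's behaviour, however cleverly, would fail certification and fall back to community status, which is how the trademark and the test suite work together.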

Point 4 – we needed to build an ecosystem to allow us to identify future services we should create and hence we had to create an ILC model. Obviously we could only directly observe the consumption data for those who built on our service but what about other Zimki providers?  By providing common services such as GUBE (generic utility billing engine) along with an application store, a component library (a CPAN equivalent) and ultimately some form of brokerage capability then we intended to create multiple sources of meta data.  We had a lot of discussion here over whether we could go it alone but I felt we didn’t have the brand name.  We needed to create that marketplace and the potential was huge.  I had estimated that the entire utility computing market would be worth $200bn a decade later in 2016.  Our longer term prize was to be the market enabler and ultimately build some form of financial exchange.  We would require outside help to make this happen given our constraints.

Point 5 – we needed to make it easy, quick and cheap for people to build entire applications on our platform.  We had to ruthlessly cut away all the yak shaving (pointless, unpleasant and repeated tasks) involved in developing.  When one of the development team built an entirely new form of wiki, with client-side preview, and went from idea to launching live on the web in under an hour, I knew we had something with potential.  Pre-shaved Yaks became the catch-phrase to describe the service.

Point 6 – we anticipated that someone would provide a utility infrastructure service. We needed to exploit this by building on top of them.  We had become pretty handy at building worth-based services (i.e. ones we charged for as a percentage of the value they created) over the years and I knew we could balance our charging for the platform against any variable operational cost caused by a utility infrastructure provider.  It would also have the advantage of cutting them off from any meta data other than that our platform was growing.  If I played the game well enough then maybe that would be an exit play for us through acquisition.  If we were truly going to be successful then I would need to break the anchor of the parent company at some point in the future.

Point 7 – we knew there would be a constraint in building utility services and that compute demand was elastic.  This gave options for counter play, such as creating a price war to force demand up beyond the ability of any one supplier to provide.  But in order to play one supplier off against another we needed to give competitors a route into the market.  Fortunately, we had our Borg system and though we had talked with one large, well known hardware provider (who had been resistant to the idea of utility compute) we could open source (Point 8) this space to encourage that market to form.  I had counter plays I could use if needed.

The option looked good based upon our capabilities. It was within the realm of possibilities and mindful of the constraints we had.  This seemed to provide the best path forward. It would mean refocusing the company, removing services like our online photo site and putting other revenue services into some form of minimal state until the platform business grew enough that we could dispose of them. I was ready to pull the trigger but there was one last thing I needed.

Impacts on purpose

The decision to act can impact the very purpose of your company – the strategy cycle is iterative and it’s a cycle.  In this case our purpose was going from a “creative solutions group”, a meaningless juxtaposition of words, to a “provider of utility platforms”.  Nevertheless, if I wanted to win this battle then I needed to bring everyone onboard and create a crusade.  Our crusade became “pre-shaved Yaks”.  We intended to rid the world of the endless tasks which got in the way of coding.  We would build that world where you just switched on your computer, opened up a browser and started coding. Everything from worrying about capacity planning to configuring packages to installing machines would be gone.  Every function you wrote could be exposed as a web service.  Libraries of routines written by others could be added with ease through a shared commons and you could write entire applications in hours, not days or weeks or months. This was our purpose. It was my purpose. And it felt good.

What happened next?

We built it.  On the 18th Feb 2006 we had the platform, core API services, the billing system, the portal and three basic applications for others to copy.  We launched in March 2006, a full two years before Google appeared on the scene with AppEngine.  By the 18th April 2006, we had 30 customers, 7 applications and a monthly rate of 600K API calls. By the 19th June 2006, we had 150 customers, 10 applications and a run rate of 2.8M API calls. We were growing!

On August 25, 2006 it wasn’t Google but Amazon that launched with EC2. I was rapturous once again.  Amazon was a big player and we immediately set about moving our platform onto EC2.  Every time we presented at events our booths tended to be flooded with interest.  The company had embraced the new direction (there were still a few stragglers) but there was a growing buzz.   We still had a mountain to climb but we had announced the open sourcing, secured a top billing at OSCON in 2007 and the pumps were primed.  But Houston, we had a problem.

What went wrong?

The problem was me. I had massively underestimated the intentions of the parent company. I should have known better, given that I had spent over three years (2002–2005) trying to persuade the parent company that 3D printing would have a big future, along with my more recent attempts to convince them that mobile phones would dominate the camera market. The parent company had become pre-occupied with SED televisions and with focusing on its core market (cameras and printers). Despite the potential that I saw, we were becoming less core to them and they had already begun removing R&D efforts in a drive for efficiency. They had brought in an outside consultancy to look at our platform, which concluded that utility computing wasn’t the future and the potential for cloud computing (as it became known) was unrealistic. The parent company’s future involved outsourcing our lines of business to a systems integrator (SI) and, as I was told, “the whole vision of Zimki was way beyond their scope”.

I had several problems here. First, they wouldn’t invest in our service because apparently a decision had been made higher up within the parent company on what was core. What they were concerned with was the smooth movement of our lines of business to the SI. That supported their core aims and their needs. When I raised the idea of external investment then the problem became they couldn’t keep a stake in something which they said was not core. When I raised the idea of a management buy-out, they would always go to the unrealistic $200bn market figure I had predicted for 2016. Surely, I would be willing to pay a hefty sum based upon this future market as a given for a fledgling startup in a fledgling market? No venture capital firm would take such an outrageous one-sided gamble. In any case, I was told the discussion could always be left until after the core revenue services were transferred to the SI. This was just short hand for “go away”.

The nail in the coffin came when I was told by one of the board that the members had decided to postpone the open sourcing of our platform and that they wanted me to immediately sign contracts cancelling our revenue generating services at an unspecified date to be filled in later. As the person who normally chaired the board meeting, I was annoyed at being blindsided, at the choice and at myself. Somehow, in my zeal to create a future focused on user needs and a meaningful direction, I had forgotten to gain the political capital I needed to pull it off. I might have created a strong purpose and built a company capable of achieving it but I had messed up big time with the board. It wasn’t their fault; they were focusing on what was core to the parent company. The members were all senior executives of the parent company and it should have been obvious that they were bound to take this position. I realised that I had never truly involved them in our journey and had become pre-occupied with building a future for others. I had not even fully explained our maps to them, relying instead on stories, but this was because I still hadn’t realised how useful maps really were. In my mind, maps were nothing more than my way of explaining strategy because I hadn’t yet found that magic tome that every other executive learnt at business school. This was a powerful group of users — my board and the parent company — with needs that I had not considered. Talk about a rookie mistake. I had finally been rumbled as that imposter CEO.

There was no coming back from this, they were adamant on their position and had all the power to enforce it. I was about to go on stage at OSCON (O’Reilly open source conference) in 2007 and rather than my carefully crafted message, I had to somehow announce the non-open sourcing of our platform and the non-creation of a future competitive utility market. I was expected to break a promise I had made to our customers and I was pretty clear that postpone was a quaint way of saying “never”. I couldn’t agree with the direction they had chosen and we were at loggerheads. My position was untenable and I resigned.

The company’s services were quickly placed on the path to being outsourced to the SI and the employees were put through a redundancy program which all started a few days after I resigned. The platform was disbanded and closed by the end of the year. The concepts however weren’t lost as a few of these types of ideas made their way through James Duncan into ReasonablySmart (acquired by Joyent) and another good friend of mine James Watters into Cloud Foundry. I note with a wry smile that Pivotal and its platform play is now valued at over $2.5bn and serverless is a rapidly growing concept in 2016. As for SED televisions? Well, some you win, some you lose. As for the consultancy, any frustration I might have is misdirected because I was the one who failed here. It was my job to lead the company and that didn’t just mean those who worked for me but also the board.

In these first chapters, I’ve hopefully shown you how to understand the landscape you’re competing in, anticipate the future, learn to apply doctrine, develop context specific gameplay, build the future and then finally blow it by ignoring one set of users. Would Zimki have realised its potential and become a huge success? We will never know but it had a chance. This was my first run through the strategy cycle and at least I felt as though I had a vague idea as to what I was doing rather than that naïve youth of “seems fine to me”. I was still far from the exalted position of that confident SVP that I had met all those years ago and I was determined to get better next time. Fortunately for me, there was a next time. But that’s another part of the story.

I have to reiterate that every time I’ve gone around the cycle, I’ve got better at playing the game. As we travel along the same path I’ll be adding in more economic patterns, more doctrine and more context specific gameplay, along with deep diving on some of the parts I’ve glossed over or that were merely general concepts in those early days. This first section of five chapters describes my Beginning and ends in July 2007. The next three sections each cover a following part of my journey including My wilderness, then Ubuntu to Better for Less and finally my LEF days. The latter will show you just how advanced mapping has become. But as with all journeys, let us stick to the path and no short cutting. Every step is valuable; every landscape is an opportunity to learn from.

Before we start on our next part of the journey, I want to clean up some terms and provide some basic tips for mapping.


Tuesday, August 16, 2016


Chapter 4

I had created my first map and applied an understanding of some basic climatic patterns that might influence it. These patterns were the ones that I could not stop but could anticipate. Whether I liked it or not, the components on my map would evolve through the actions of the market. However, whilst I had no choice over the market, that didn’t mean I had no choice over my actions. I might be able to influence the landscape through action: I could decide how I organised myself, the principles that I emphasised within the company and our manner of operating.

Some of my choices might be context specific i.e. a decision to flank an opponent requires an opponent to be in a known position.  This doesn’t mean that everything is context specific.  There could exist in business generally useful principles that everyone should apply.  These principles are doctrine and in this chapter we’re going to examine that part of our journey – see figure 26.

Figure 26 – Doctrine

Learning doctrine

Doctrine is the set of basic universal principles that are applicable to all industries regardless of the landscape. This doesn’t mean that the doctrine is right but instead that it appears to be consistently useful for the time being. There will always exist better doctrine in the future. As with climatic patterns, we will go through some basic forms and refine them in future passes through the strategy cycle.

Doctrine: Focus on user need
Any value we create is through meeting the needs of others. Even our ability to understand our environment by creating a map requires us to first define the user need as it is the anchor for the entire map – see figure 27. Alas, a mantra of "not sucking as much as the competitors", whilst rarely explicitly stated, is surprisingly common. This is not acceptable. We must be the best we can be and to do that we must understand what it is we need to be. Despite this, the usual response I receive when asking a company or a specific project to explain its user needs is a blank stare. I have seen many large projects in excess of $100M with endless specification documents where the scale of spending and paperwork is only matched by the inability of the group to explain its most basic user needs. It should be obvious that failing to meet the needs of your users, especially when competitors do manage to achieve this, is usually a bad idea.

Figure 27 – Focus on user needs

But how do we work out those user needs?  This is extremely tricky because we bring our own biases to the table.  The first thing to do is to understand that you're talking about user needs not your needs i.e. you might need to make revenue and profit but that is NOT your user need.  By meeting the needs of your users then you hope to make revenue and profit, not the other way around.

The best way I've found for determining user needs is to start by looking at the transactions that an organisation makes with the outside world.  This will tend to give you an idea of what it provides and what is important.  The next step is to examine the customer journey when interacting with those transactions.  By questioning this journey and talking with customers then you will often find pointless steps or unmet needs or unnecessary needs being catered for.  Another mechanism I've also found to be exceptionally useful, especially when your users are in fact other corporations, is to go and map out their landscape. In most cases I find these users have a poor idea of what they actually need.  If you're a supplier to such a company then discussions tend to degenerate to things they want and things they think are necessary rather than things they need.  By mapping out their landscape, you can often clarify what is really needed along with finding entire new opportunities for business.
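As a toy illustration of the transaction and journey technique above, the sketch below flags journey steps that no external transaction supports. Everything here is invented for illustration: the function name, the example transactions and the journey steps are my own, not from any real system.

```python
# Hypothetical sketch: derive candidate user needs by comparing the
# organisation's external transactions with the customer journey.
def build_need_chain(transactions, journey_steps):
    # Steps backed by a transaction are (probably) genuine needs.
    needs = {t: "met by a transaction" for t in transactions}
    for step in journey_steps:
        if step not in needs:
            # A journey step with no transaction behind it deserves a
            # challenge: is it a pointless step or an unmet need?
            needs[step] = "challenge: pointless step or unmet need?"
    return needs

chain = build_need_chain(
    transactions={"store photos online", "manipulate images"},
    journey_steps=["store photos online", "post in a paper form"],
)
```

The code only surfaces where to look; talking with customers is what resolves each challenge.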

Discussion and data collection are a key part of determining user needs and so talk with your consumers and talk with experts in the field. However, there is a gotcha. In many cases they both turn out to be wrong! Gasp! What do you mean they're wrong? There are two important areas where the users and the experts are usually wrong in describing their own needs. By happenstance, both are crucial for strategic gameplay.

The first area is when a component is moving between stages of evolution e.g. when something shifts from custom built to product or, more importantly, from product to commodity (+utility). The problem is that the pre-existing installed base causes inertia to the change. Invariably users will be fixated on a legacy world and hence they will have a bias towards it. This is the equivalent of a user saying to Henry Ford – “we don’t want a car; we want a faster horse!” The bias is caused by a climatic pattern known as co-evolution but for the time being you simply need to be wary of the legacy mindset.

The second area to note is that of the uncharted domain. These needs are both rare and highly uncertain and this means you're going to have to gamble.  There is no consistent way of determining what the user actually needs with something novel because they don’t know themselves.  Hence be prepared to pivot. You might think you’re building a machine that will stop all wars (the Wright Brothers original concept for the airplane) but others will find alternative uses – the fighter plane, the bomber.

When it comes to dealing with needs, there are three different approaches according to the domains of uncharted, transitional and industrialised. In the uncharted domain you have to gamble; users and experts don't actually know what is needed beyond vague hand waving. In the transitional domain you have to listen; users and experts can guide you to what they need. In the early days of the industrialised domain you have to be mindful of user and expert bias caused by the inertia of past success. You already know what is needed but it has to be provided on a volume operations and good enough basis.
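The three approaches above condense neatly into a small lookup. A minimal sketch, with the wording paraphrased from the paragraph; the names are mine.

```python
# Sketch: how to treat user needs in each of the three domains.
APPROACH = {
    "uncharted": "gamble; users don't know what they need, be prepared to pivot",
    "transitional": "listen; users and experts can guide you",
    "industrialised": "beware inertia of past success; deliver good enough at volume",
}

def approach_for(domain):
    return APPROACH[domain]
```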

Doctrine: Use a common language
Instead of using multiple different ways of explaining the same thing between different functions of the company then try to use one e.g. a map.  If you’re using business process diagrams on one side and IT systems diagrams on another then you’ll end up with translation errors, misalignment and confusion.  If you can't map what you are doing, then I recommend you hold back from acting and spend a few hours mapping it.

Doctrine: Be transparent
Sharing a map will enable others to challenge and question your assumptions. This is essential because it helps us to learn and refine our maps. The downside of sharing is that it allows others to challenge and question your assumptions. Many people find this uncomfortable. As the CEO of the company, did I really want one of my juniors ripping apart my strategy using the map that I had created? Yes. I’d rather someone point out to me that our strategy involved walking through a minefield than let me discover this for myself. However, don’t underestimate how difficult this transparency is within an organisation.

Doctrine: Challenge assumptions
There is little point in focusing on user needs, creating a common language through the use of a map and sharing it transparently in the organisation if no-one is willing to challenge it.  This act should be a duty for everyone in the company.  I didn’t care if it was my pet project, I needed people to openly and honestly tell me where they thought I was going wrong.  This requires not only transparency but also trust.  Any form of retribution or bias against someone for challenging is a deadly sin that will harm your company.  As the CEO, I made my CFO the XO back in 2004. One of his duties was to challenge my choices.

Doctrine: Remove duplication and bias
You should not only share maps, you should collate them in an effort to remove duplication and bias i.e. rebuilding the same thing or custom building that which is a commodity.  Mapping is itself an iterative process and you’ve probably been making decisions for a long time without understanding the landscape.  So you don’t need to map the entire landscape to start making decisions but rather think of maps as a guide which tells us more the more we use it. 

With your first map you can probably challenge whether you’ve adequately met user needs or how you’re treating components. As you collect more maps of different systems or lines of business, you start to discover that the same component appears on multiple maps. I’ve marked some examples in figure 28 in green.

Figure 28 – Duplication

Now, the same component being on different maps is fine except when we’re saying it’s a different instance of that component. For example, if you have ten maps all with database as a component then that’s not necessarily a problem, but it might be if you’re actually saying we have 10x different databases running on 10x different systems. In large organisations such as petrochemical or banking companies with committees of architects you don’t normally see duplication on a scale of tenfold. Instead, from experience, what I commonly find in a single global organisation built by acquisition with a federation of business units is more on the scale of 380x isolated teams custom building 380x ERP systems to meet the same user needs with 380x different systems (a chemical company). The worst case example I know has a duplication in excess of 740x (an energy company). These days, I dream of meeting a large global organisation which has duplication down at the scale of tens or even single units. Most companies have no idea of what their duplication levels really are and significantly underestimate the problem.

One technique I find useful in helping to highlight this problem is to create a profile diagram. I simply collate maps together, identify commonly described components and then place them onto the profile. This gives me an idea of both duplication and bias. From the profile diagram below in figure 29, the following points are noted: -

Figure 29 – Profile

Point 1 – for each common component you record how many times it is repeated. A high number of repetitions is not necessarily a problem as there may be a legitimate reason, or it could be the same component appearing in different maps. In this case, our maps show seven references to websites.

Point 2 – recording how evolved a component is can provide you with an idea of bias within the organisation. For example, there are 6 examples of user registration in the maps, one of which is distanced from the others. This could be because one group simply thought in their map that user registration was a unique activity (it isn’t) or alternatively, you might have five groups using a common service and one group custom building their own.

Point 3 – collating maps often helps in creating a common lexicon. The same thing is often described with different terms in a single organisation.

Point 4 – there are 7 references to email within the maps. Hopefully (though alas not always the case) this refers to one email system used in different places. There is also some bias with most groups considering email to be more commodity.

Point 5 – there are 5 references to data centres. Again hopefully this refers to a couple built for specific geographical reasons.  Alas, a popular sport in many large enterprises seems to be building data centres as though they’re the first ones ever built.  In the worst cases, I have been shown around a lovingly created data centre and then gone to the shop floor to find a sad, solitary rack standing in the middle of a large empty hall.
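The collation step behind a profile can be sketched mechanically. The example maps below are invented, and I've assumed evolution can be scored as a number from 0 (genesis) to 1 (commodity), which is my own simplification rather than anything stated in the text.

```python
from collections import Counter

# Invented example maps: component -> rough evolution (0 genesis, 1 commodity).
maps = [
    {"website": 0.90, "user registration": 0.80, "email": 0.95},
    {"website": 0.85, "email": 0.90, "data centre": 0.70},
    {"website": 0.90, "user registration": 0.30, "email": 0.95},
]

def profile(maps):
    counts = Counter()   # point 1: how often a component repeats
    stages = {}          # point 2: where each group placed it
    for m in maps:
        for component, evolution in m.items():
            counts[component] += 1
            stages.setdefault(component, []).append(evolution)
    return counts, stages

counts, stages = profile(maps)
# A wide spread in placement hints at bias, e.g. one group custom building
# what everyone else consumes as a common service.
spread = max(stages["user registration"]) - min(stages["user registration"])
```

High repetition counts and wide evolution spreads are exactly the things worth challenging in the profile.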

The maps and the profile are simply guides to help you remove duplication and bias.  This is a necessity for efficient operations.  However, duplication should not be solely considered as a financial cost because it impacts our ability to develop more complex capabilities.   Another technique I find useful in a dispersed structure is to determine what capabilities we need as a group.  For example, in figure 30, a map is provided that explicitly highlights both the customer journey and the associated capabilities. I’ve derived this map from a real world example used by the Methods Group. In this map the customer journey (described as service patterns) is more clearly highlighted and we’re focusing not only on the technology required to meet higher order system needs but also those higher order systems e.g. manage call, determine sponsorship. For reasons of confidentiality, I’ve changed and removed many of the terms.

Figure 30 – Map with customer journey

By aggregating many of these maps together you can develop a picture of what the company actually does and what its existing capabilities are through a capability profile - see figure 31.  

Figure 31 – Capability Profile

You may find that common capabilities are often assumed to be custom (e.g. offer a selection of investments) when in reality they should be far more defined.  You may also find that you have a plethora of duplicated and custom built technology providing a single capability which should be streamlined.  It never fails to surprise me how a simple business with limited capabilities is made incredibly complex and slow by a smorgasbord of duplicated custom built solutions underneath.

Doctrine: Use appropriate methods
One of the climatic patterns we examined in the previous chapter (see figure 20) was how no one size fits all method exists. Assuming you are removing bias in your maps, either by challenging directly or with the aid of a profile built from multiple maps, the next question becomes what methods are suitable? The most common mistake that I find is with outsourcing. The issue with outsourcing isn’t that the concept is wrong but instead that we have a tendency to outsource entire systems for which we do not understand the landscape. This is often done in the hope that someone else will effectively take care of it.

Let us imagine a system with multiple components spread across the evolution axis. What we will tend to do is apply a single highly structured process, often through a contract detailing what should be delivered. Unfortunately, some of those components will be in the uncharted domain and hence are uncertain by nature. They will change and hence we will incur some form of change control cost. These costs can be significant in any complex system that contains many uncharted components. As a result, arguments tend to break out between the buyer and the supplier. Unfortunately, the supplier has the upper hand because they can point to the many components that did not change as evidence of efficient delivery, while attributing the cost to the components that changed. The old lines of “if you had specified it correctly in the first place” to “you kept on changing your mind” get trotted out and the buyer normally feels some form of guilt. It was their fault and if only they had specified it more! This is a lie and a trap.

The problem was not that a highly structured process with detailed specification was correctly applied to industrialised components but that the same technique was also incorrectly applied to components that were by their very nature uncertain and changing.  The buyer could never specify those changing components with any degree of certainty and excessive change control costs caused by a structured process are inevitable.  The fault is with the supplier who should have the experience to know that one size fits all cannot work.  Unfortunately, and there is no polite way of saying this, it’s a lucrative scam.  Even better, if the scam works – especially if the supplier waives some cost as a gesture of goodwill – then the next time the buyer will try even harder to specify the next system in more detail. They’ll often pay the supplier or a friendly consultancy to help them do this. Unfortunately, once again it will contain uncharted components which will change incurring cost.  The only way to avoid this is to break the system down into components and treat them with appropriate methods e.g. figure 32.

Figure 32 – Use appropriate methods.

In the above example from 2005, power should be outsourced to a utility provider whereas CRM, platform, data centre and compute should use off the shelf products or rental solutions (e.g. hosting) with minimal change where possible. The online photo storage and image manipulation components, which are going to rapidly change, should ideally be built in-house with our own engineers using an agile approach. Whilst we might use more detailed and specific contracts for items such as data centre (hosting), we are also mindful that we cannot fully specify image manipulation at this time. If in 2005 we had outsourced the entire system in the figure above under a single highly structured approach using a detailed specification, then I could almost guarantee that we would have ended up with excessive change costs around image manipulation and photo storage.
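A crude way to express "use appropriate methods" in code is a threshold function over evolution. The thresholds, labels and component scores below are illustrative assumptions of mine, not canonical values.

```python
# Sketch: pick a method by how evolved a component is (0 genesis, 1 commodity).
def method_for(evolution):
    if evolution < 0.40:
        return "build in-house with our own engineers, agile"
    if evolution < 0.75:
        return "use off the shelf products or rental solutions, minimal change"
    return "outsource to a utility provider, structured process"

components = {
    "image manipulation": 0.20,   # rapidly changing, uncharted
    "CRM": 0.60,                  # product / rental
    "power": 0.95,                # commodity utility
}
choices = {name: method_for(ev) for name, ev in components.items()}
```

The point is not the particular cut-offs but that the method varies per component rather than one size fitting the whole system.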

The problem of inappropriate outsourcing is so rife that it’s worth doing a simple example to reinforce this point. In figure 33, I’ve provided a box and wire diagram (commonly used in IT systems) for a self-driving car. However, I’ve translated the description of the components into elvish because let’s face it most IT is elvish to people in business.  I’d like you to look at the diagram and answer the questions in point 1 and point 2.

Figure 33 – Elvish self-driving car (box and wire)

Now, in figure 34, I’ve provided exactly the same diagram in a mapping format. It’s still in elvish. See if you can answer point 1 and 2.

Figure 34 -  Elvish self-driving car (map)

You should find you can say something reasonable about how you treat point 1 and 2.  If you’re struggling look at figure 20, chapter 3.  

For reference, point 1 should probably be built in-house with our own engineers in an agile fashion whereas point 2 should be either outsourced with a structured and well defined process or some sort of commodity consumed.  In figure 35, I’ve provided the same diagram without the elvish so you can check your thinking.

Figure 35 -  A self-driving car

What enables you to do this feat of elvish sensibility is the movement axis of evolution. Unfortunately, in most outsourcing arrangements that I’ve seen, diagrams such as box and wires or business process maps (see figure 36) tend to dominate. These lack that all important movement characteristic. Box and wires and business process maps are not actually maps; you are relying solely on contextual information from the words (i.e. knowing that GPS is a commodity). The diagrams themselves will not provide you with a guide as to what you should or should not outsource.

Figure 36 -  A business process diagram

Before you go and ask your friendly consultancy or vendor to make a map for you, remember that their interests are not necessarily your own.  Equally, it’s important to challenge any bias in your maps.  A team building our own home grown electricity supply may well argue that electricity is not a commodity but instead we need to custom build our own supply.  Along with common sense, the cheat sheet (figure 15, chapter 2) and those profile diagrams built from aggregated maps (figure 29) should give you ample evidence to challenge this.  

At this point someone normally tells me - “that’s obvious, we wouldn’t do that” – however, ask yourself how many enterprise content management (ECM) systems do you have?  If you’re of any scale and a typical global company built by acquisition, then experience would dictate that you’ll probably say 5-8x.   In practice it is often more likely to be 40-250x customised versions with probably 3-5x separate groups building a global ECM whilst being unaware that the other groups exist. The problem is, most of you won’t know how much duplication you have.  Of course, there are a wide range of excuses that are deployed for not breaking up entire systems into components and then applying more appropriate methods.  My favourite ones include: -  

“we need better experts and specification” – that’s called not dealing with the problem. It’s like saying our death star project to clean up the mess of failed death star projects has failed; we need a new death star!  There’s a famous quote about repeating the same thing and expecting different results which is relevant here.

“it’s too complex, splitting into parts will make it unmanageable” – the age old effort to pretend that a system containing 100 different moving parts doesn’t actually contain 100 different moving parts.  We don't build cars by pretending they are one thing; in fact, we often have complex supply chains meeting the different needs of different components with appropriate measurement and contracts deployed based upon the component. Yes, it does make for a bit more work to understand what is being built but then if you’re spending significant sums it is generally a good idea to know this.

 "It will cause chaos" – cue the old "riots on the street" line.  Given construction, automotive and many other industries have no problem with componentisation then I can't see how anyone ever jumps to this notion of chaos.  The truth is usually more of a desire to have “one throat to choke” though there is nothing stopping a company from using one supplier to build all the components with appropriate methods.

"You’ll end up with hundreds of experimental startups" – at this point we’re getting into the surreal. If you break a complex system into components, then some of the uncharted components are going to be experimental. For those components you're likely to do the work in-house with agile techniques or use a specialist company focused on more agile processes. But you won't hand everything over to such companies because the majority of components tend to be highly industrialised and hence you’ll use established utility providers such as Amazon for computing infrastructure. I'm not sure how people make the jump from componentisation to giving it all to "hundreds of experimental startups". In general, this stinks of nonsense and a desire to keep the current status quo.

“complexity in managing interfaces” –  this is my favourite excuse which takes surreal to a whole new level. Pretending that a complex 100 component system with uncharted and industrialised components that have interfaces between them is in fact one system with a one size fits all method and non-existent interfaces is the very definition of fantasy.  Those components are there, same as the interfaces. The complexity doesn't go away simply by "outsourcing". All you've done is try and pretend that the complex thing you're building is somehow simple because then it's easier to manage. It would be like BMW or Apple outsourcing their entire product lines to someone else and trying to have no involvement because it makes management simple.

Doctrine: Think small
In order to apply appropriate methods, you need to think small. You can’t treat the entire system as one thing; you need to break it into components. I will often extend this to using small contracts localised around specific components and even small teams such as cell based structures. Probably the best known approaches to using small teams are Amazon’s Two Pizza model and Haier’s Cell based structure.

Such teams should be given autonomy in their space and this can be achieved by the team providing well defined interfaces for others to consume along with defined boundaries often described through some form of fitness function i.e. the team has a goal around a specific area.  Maps themselves can be useful in helping you identify not only the teams you should build but also the interfaces they need to create - see figure 37.

Figure 37 – Think small
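The idea of carving a map into cells with well defined interfaces can be sketched in code. This is purely illustrative: the dependency graph and the cells_with_interfaces helper are invented for the example, not taken from the text.

```python
# Invented dependency graph: each component becomes a small cell exposing a
# well defined interface for the cells above it to consume.
dependencies = {
    "online photo storage": ["platform"],
    "platform": ["compute"],
    "compute": [],
}

def cells_with_interfaces(deps):
    return {
        component: {
            "team": f"cell: {component}",
            "provides": f"{component} interface",
            "consumes": [f"{d} interface" for d in needed],
        }
        for component, needed in deps.items()
    }

cells = cells_with_interfaces(dependencies)
```

Each cell then has autonomy within its boundary: it owns one component, publishes one interface and depends on others only through theirs.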

Doctrine: Think aptitude and attitude
Now let us suppose you embark on a cell based structure and you’re thinking small.  Then each cell is going to require different skills i.e. aptitudes. However, there's another factor at play here - attitude.  When we look at a map, we know that activities evolve from the uncharted to industrialised domain and the methods and techniques we need are different.  The genesis of something requires experimentation and whilst you might need the aptitude of engineering you need a specific form i.e. agile techniques.  Conversely the type of engineering you need to build a highly industrialised act requires a focus on volume operations and removing deviation such as six sigma.  Hence, we have one aptitude of engineering that requires different attitudes.  It doesn’t matter what aptitude we examine - finance, engineering, network or marketing – the attitude also matters.  There isn't such a thing as IT or finance or marketing but instead multiples of.

To resolve this problem, you need to populate the cells with different types of people - pioneers, settlers and town planners.  It's not realistic to think that everyone has the same attitude, some are much more capable of living in a world of chaos, experimentation and failure whilst others are much more capable of dealing with intensive modelling, the rigours of volume operations and measurement. You need brilliant people with the right aptitudes (e.g. engineering, finance) and different attitudes (e.g. pioneers, settlers). 

Pioneers are brilliant people. They are able to explore the never before discovered concepts, the uncharted land. They show you wonder but they fail a lot. Half the time the thing doesn't work properly. You wouldn't trust what they build. They create 'crazy' ideas. Their type of innovation is what we describe as core research. They make future success possible.  Most of the time we look at them and go "what?", "I don't understand?" or "is that magic?".  They built the first ever electric source (the Parthian Battery, 400AD) and the first ever digital computer (Z3, 1943).  In the past, we often burnt them at the stake.

Settlers are brilliant people. They can turn the half-baked thing into something useful for a larger audience. They build trust. They build understanding. They make the possible future actually happen.  They turn the prototype into a product, make it possible to manufacture it, listen to customers and turn it profitable.  Their innovation is what we tend to think of as applied research and differentiation. They built the first ever computer products (e.g. IBM 650 and onwards), the first generators (Hippolyte Pixii, Siemens Generators). 

Town Planners are brilliant people. They are able to take something and industrialise it taking advantage of economies of scale. This requires immense skill.  You trust what they build.  They find ways to make things faster, better, smaller, more efficient, more economic and good enough.  They create the components that pioneers build upon. Their type of innovation is industrial research. They take something that exists and turn it into a commodity or a utility (e.g. with Electricity, then Edison, Tesla and Westinghouse). They are the industrial giants we depend upon.

In 2005, we knew that one culture didn't seem to work and enabling people to gain mastery in one of these three attitudes seemed to make people happier and more focused.  Taking one attitude and placing them in a field which requires another attitude is never a good idea.  Try it yourself, identify a pioneer software engineer used to a world of experimentation and agile development and send them on a three week ITIL course.  See how miserable they come back.  Try the same with a town planner and send them on a three week course of hack days & experimentation with completely uncertain areas and lots of failure.  When using a map, you should not only break into components and build small cells around this, you should also consider attitude – see figure 38.

Figure 38 – Aptitude and Attitude
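The pioneer, settler and town planner split can be sketched as a simple function of evolution. The equal thirds are my own assumption for illustration; nothing in the text fixes the boundaries so precisely.

```python
# Sketch: assign an attitude to a component by its evolution (0 genesis, 1 commodity).
def attitude_for(evolution):
    if evolution < 1 / 3:
        return "pioneer"       # experimentation, failure, agile
    if evolution < 2 / 3:
        return "settler"       # productise, listen to customers, make profitable
    return "town planner"      # industrialise, volume operations, six sigma
```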

Now, this idea is not new. A bit of digging will bring you to Robert X. Cringely's book, Accidental Empires, 1993. Cringely described three different types of companies: commandos, infantry and police. The PST (pioneer, settler and town planner) structure is a direct copy of that idea applied to a single company and put into practice in 2005. To quote from his book, which I strongly recommend you buy -

“Whether invading countries or markets, the first wave of troops to see battle are the commandos. Commandos parachute behind enemy lines or quietly crawl ashore at night. Speed is what commandos live for. They work hard, fast, and cheap, though often with a low level of professionalism, which is okay, too, because professionalism is expensive. Their job is to do lots of damage with surprise and teamwork, establishing a beachhead before the enemy is even aware they exist. They make creativity a destructive art.

[Referring to software business] But what they build, while it may look like a product and work like a product, usually isn't a product because it still has bugs and major failings that are beneath the notice of commando types. Or maybe it works fine but can't be produced profitably without extensive redesign. Commandos are useless for this type of work. They get bored.

It's easy to dismiss the commandos. After all, most of business and warfare is conventional. But without commandos you'd never get on the beach at all. Grouping offshore as the commandos do their work is the second wave of soldiers, the infantry. These are the people who hit the beach en masse and slog out the early victory, building on the start given by the commandos. The second wave troops take the prototype, test it, refine it, make it manufacturable, write the manuals, market it, and ideally produce a profit.  Because there are so many more of these soldiers and their duties are so varied, they require an infrastructure of rules and procedures for getting things done - all the stuff that commandos hate. For just this reason, soldiers of the second wave, while they can work with the first wave, generally don't trust them, though the commandos don't even notice this fact, since by this time they are bored and already looking for the door. While the commandos make success possible, it's the infantry that makes success happen.

What happens then is that the commandos and the infantry advance into new territories, performing their same jobs again. There is still a need for a military presence in the territory. These third wave troops hate change. They aren't troops at all but police. They want to fuel growth not by planning more invasions and landing on more beaches but by adding people and building economies and empires of scale”.

Doctrine: Design for constant evolution
Everything is evolving due to competition. The effects of this on business can be seen in continual restructuring to cope with new outside paradigms.  Today's presidents of cloud and social media are no different from the former presidents of electricity and telephony that most companies once employed.  Today's bolt-ons include the Chief Digital Officer.  This new stuff is tomorrow's legacy and this creates a problem.  We might introduce a cell based structure with consideration for not only aptitude but attitude, however the map isn't static.  We need to somehow mimic that constant state of evolution in the outside world but within a company. The solution seems to be to introduce a mechanism of theft in which new teams form and steal the work of earlier teams, i.e. the settlers steal from the pioneers and productise the work. This forces the pioneers to move on. Equally the town planners steal from the settlers and industrialise it, forcing the settlers to move on but also providing component services to enable the pioneers. This results in a cycle shown in figure 39.

Figure 39 – Design for constant evolution

Point 1 – The Town Planners create some form of industrialised component that previously existed as a product. This is provided as a utility service.

Point 2 – The Pioneers can now rapidly build higher order systems that consume that component.

Point 3 – As the new higher order systems evolve, the Settlers identify new patterns within them and create a product or some form of library component for re-use. 

Point 4 – As the product or library component evolves, the Town Planners complete the cycle by creating an industrialised form (as per Point 1). This results in an ever-expanding platform of discrete industrialised components upon which the pioneers can build.
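The four points above form a loop. As a purely illustrative toy sketch (the component name, stage labels and code structure are my own, not part of the original text), the cycle of theft could be expressed as:

```python
# Toy sketch of the "mechanism of theft" cycle. Each component moves
# through three stages as successive teams steal and evolve the work:
#   pioneers build it (genesis) -> settlers productise it (product)
#   -> town planners industrialise it (commodity), growing the platform
#      of components the pioneers then build upon.

def evolve(component, platform):
    """Advance a component one step through the PST cycle."""
    if component["stage"] == "genesis":
        # Settlers steal from the pioneers and productise the work.
        component["stage"] = "product"
    elif component["stage"] == "product":
        # Town planners steal from the settlers and industrialise it,
        # adding a new utility component to the shared platform (Point 1).
        component["stage"] = "commodity"
        platform.append(component["name"])
    return component

platform = []  # the ever-expanding set of industrialised components
c = {"name": "search", "stage": "genesis"}  # built by pioneers (Point 2)

evolve(c, platform)  # Point 3: settlers productise (genesis -> product)
evolve(c, platform)  # Point 4: town planners industrialise (product -> commodity)

print(c["stage"], platform)
```

The point of the sketch is simply that the same component changes hands as it evolves, and each completed pass leaves the platform one component larger for the pioneers to exploit.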

Maps are a useful way to kick-start this process. They also give purpose to each cell, as people know how their work fits into the overall picture. The cell based structure is an essential element and the cells need to have autonomy in their space; they must be self-organising. The interfaces between the cells are therefore used to help define the fitness functions, but if a cell sees something it can take tactical advantage of in its space (remember it has an overview of the entire business through the map) then it should exploit it. The cells are populated with not only the right aptitude but also the right attitude (pioneers, settlers and town planners). This enables people to develop mastery in their area and allows them to focus on what they're good at. You should let people self-select their type and change at will until they find something they're truly comfortable with. Reward them for being really good at that. Purpose, Mastery and Autonomy are the subjects of the book Drive by Daniel H. Pink.

As new things appear in the outside world they should flow through this system. This structure doesn't require a bolt-on which you need to replace later. No chief digital, chief telephony, chief electricity or chief cloud officer required.  The cells can grow in size but ultimately you should aim to subdivide into smaller cells and maps can help achieve this. You will increasingly have to structure the communication between cells using a hierarchy and yes, that means you need a hierarchy on top of a cell based structure.  I've found an executive structure which mimics the organisation to be of use, i.e. a CEO, a Chief Pioneer, a Chief Settler and a Chief Town Planner. However, you'll probably use more traditional sounding names such as Chief Operating Officer, Chief Scientist etc.  We did.  I'm not sure why we did and these days I wouldn't bother; I'd just make it clear.  You will also need separate support structures to reinforce the culture and provide training to each group. The structure causes three separate cultures to flourish. This runs counter to general thinking because here culture results from the structure and not the other way around. It also means you don't have a single company culture but multiple cultures that you need to maintain. I've described the basic elements of this within figure 40.

Figure 40 – Culture

Lastly, PST is a structure that I've used to remarkable effect in a very small number of cases.  That's code for 'it might just be a fluke'.  However, in the last decade I've seen nothing which comes close and instead I've seen endless matrix or dual systems that create problems.  Will something better come along - of course it will.  However, to invoke Conway's law, if you don't mimic evolution in your communication mechanisms (e.g. through a mechanism of theft) then you're never going to cope with evolution outside the organisation.

So how common is a PST structure? Outside certain circles it's non-existent.  At best I see companies dabbling with cell based structures which, to be honest, are pretty damn good anyway and probably where you should go.  Telling a company that it needs three types of culture, three types of attitude, a system of theft, a map of its environment and high levels of situational awareness is usually enough to send managers running.  It doesn't fit into a simple 2 x 2.  It also doesn't matter for many organisations because you only need high levels of situational awareness and adaptive structures if you're competing against organisations who have the same, or you're at the very sharp end of ferocious competition.  For most companies, I'd recommend reading "Boiling Frogs" from GCHQ, which is an outstanding piece of work.  It will give you more than enough ideas and it contains a very similar structure.

I will note that in recent years I've heard plenty of people talk about dual structures. I have to say that, from my perspective and experience, these are fundamentally flawed and you're being led up the garden path.  It's not enough to deal with the extremes; you must manage the transition in between.  Fail to do this and you will not create an organisation that copes with evolution.  If you focus on the extremes then you will diminish the all-important middle, you will tend to create war between factions and, because the components of the pioneers never evolve (the town planners will describe these systems as "flaky"), you will create a never-growing platform and, on top of it, an increasing spaghetti junction of the new built upon the new.  I experienced this myself back in 2003, along with the inevitable slow grinding halt of development and the calls for a death star project of immense scale to build the "new platform for the future".  I've never seen that work.

Using doctrine with our first map

So let us recap the basic forms of doctrine we've covered. These are universal, applicable to all landscapes as far as I can tell, though many require you to use a map in order to exploit them.  As with climatic patterns, this is not an exhaustive list but enough for now.  In later chapters we will loop back around to this section, refining the concepts and adding more doctrine. The basics are: -

Focus on user need
Use a common language
Be transparent
Challenge assumptions
Remove duplication and bias
Use appropriate methods
Think small
Think aptitude and attitude
Design for constant evolution
Enable purpose, autonomy and mastery

When you read the list, they mainly sound like common sense. Most of them are but, then again, they're very difficult to achieve. You really have to work hard at them.  In the case of "remove duplication and bias", you can't apply it to your first map because it requires multiple maps.  However, even with a simple map, you can apply many of these doctrines. In figure 41 I've taken our first map, which we applied common economic patterns to (Chapter 3, figure 25), and shown where doctrine is relevant.

Figure 41 – Applying doctrine and economic patterns to our first map

Point 1 – Focus on user needs. The anchor of the map is the user.

Point 2 – The map provides a common language. It provides a mechanism to visually challenge assumptions.

Point 3 – Use appropriate methods (agile, lean and six sigma, or in-house vs outsource) and don't try to apply a single method across the entire landscape.

Point 4 – Treat the map as small components, e.g. small teams (team 4).

Point 5 – Consider not only aptitude but attitude (pioneers, settlers and town planners).

Point 6 – Design for constant evolution. The components will evolve and this might require new teams (e.g. team 8) with new attitudes. 

It’s worth taking a bit of time to reflect on figure 41. What we have is not only the user needs, the components meeting those needs and the common economic patterns impacting this. We also have anticipation of change, the organisational structure that we will need and even the types of methods and culture that are suitable.  All of this is in one single diagram.  In practice, we normally only show the structures on the map that are relevant to the task at hand i.e. if we’re anticipating change then we might not show cell structure, attitude and hence cultural aspects.  However, it’s worth noting that they can all be shown and with practice you will learn when to include them or not.  After a few years you will find that much of this becomes automatic and the challenge is to remember to include structures for those that are not initiated in this way of thinking. 

We are now in a position of understanding our landscape, being able to anticipate some forms of change due to climatic patterns, and having an understanding of basic universal doctrine to help us structure ourselves. We're finally at a point where we can start to learn the context specific forms of gameplay which are at the heart of strategy. With a few basic lessons about gameplay, we will be ready to act.

Next Chapter in Series : The play and a decision to act
GitBook link [to be published soon]