Once you start mapping out environments,
you can quickly start to discover common economic patterns, basic rules of competition and repeating forms of gameplay.
A typical basic pattern is how supply and demand competition drives the evolution of a component (whether practice, data or activity) to a more industrialised form, which not only improves its efficiency but, through the provision of stable interfaces, can enable rapid development of novel higher order systems. Those novel higher order systems may also turn out to be new sources of wealth but they are highly uncertain and unpredictable (i.e. uncharted). However, those that succeed will evolve and the cycle will repeat. See figure 1.
Figure 1 - Competition enables new higher order systems
From the above, a component (either an activity, practice or data) evolves from A1 to A2 to A3, for example the evolution of electricity from the Parthian battery (A1) to Siemens generators (A2) to utility provision by Westinghouse (A3).
As it evolves from the uncharted space (e.g. A1, where it is rare, uncertain and constantly changing) to a more industrialised form (A3) then it becomes more efficient, defined, stable and standardised. This process enables the creation of higher order systems (e.g. B1, C1, D1) built upon standard interfaces e.g. standard electricity (A3) enabled lighting (B1), radio (C1) and television (D1).
Those newly created, novel and highly uncertain components (e.g. the uncharted B1, C1, D1) then start to evolve, if they are successfully adopted, via the same forces of supply and demand competition (e.g. D1 to D2). The cycle then repeats.
You can trace this effect throughout history (see figure 2) and it is the combination of this pattern with inertia and co-evolution which creates economic cycles both at a macro (k-waves) and micro economic scale. But, we've covered this many times before and that's not the purpose of this post.
Figure 2 - Cycle throughout history.
Now all of this is relatively dull stuff and there are dozens of common patterns you need to understand in order to play even the most basic strategic games. However, it's worth noting a couple of things even with this simple pattern.
For example, in figure 3, I've added some more detail on the characteristics of components at each stage of evolution.
Figure 3 - Treating Components
First, with A2 to A3 we're talking about the industrialisation of an existing act, which is all about volume operations, efficiency, operational improvements and measurement. Whilst people often talk about the advantage of being a fast follower, in this case there's an additional source of value for being the first to industrialise. The value is derived from others building on top of the utility services you build and I'll explain why a bit later.
Let us assume you have industrialised some act (e.g. A2 to A3), for example the shift of computing infrastructure from computing as a product (A2) to computing as a public utility (A3). This can enable others to build novel but uncertain higher order systems on top of your utility services (e.g. B1, C1, D1).
Those novel higher order systems might be uncertain but they are potential sources of future worth. Since they're constantly changing (i.e. we're exploring the potential) then successful creation is both costly in terms of research and development and unknown in terms of outcome. In this case you ideally want to be a fast follower and let others incur the cost of R&D. But how do you know what to follow? How can you detect success?
Fortunately, due to competition, successful acts will start to mature through multiple diffusing waves of ever improving products. This pattern is detectable through consumption i.e. if the novel systems are built on top of your utility services then the diffusion of ever maturing and hence successful systems (e.g. D1 to D2) can be detected by simple consumption of your underlying subsystem (i.e. consumption of your utility service A3).
This provides you with an opportunity.
If you commoditise an act (A2 to A3) to a more industrialised form which enables others to innovate (B1, C1, D1) then you can leverage the consumption of your underlying component (A3) by others to detect successful changes (e.g. D1 to D2). You can then commoditise any identified successful component (e.g. D2) to a more industrialised form in order to repeat the process. Hence, by being a first mover to commoditise (A2 to A3) and by exploiting consumption information, you are constantly in a position to be a fast follower (D1 to D2) to any successful change without incurring the heavy R&D risk, because everyone else is innovating for you.
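To make the leverage step concrete, here is a toy sketch in code. The component names, consumption figures and growth threshold are entirely hypothetical - the only point is that a simple ranking over consumption data of your own utility is enough to spot a diffusing success:

```python
# Toy sketch of the 'leverage' step in ILC: rank higher order components
# by the growth in their consumption of your underlying utility service.
# All components, figures and the threshold are invented for illustration.

def consumption_growth(series):
    """Ratio of recent to earlier consumption for one component."""
    half = len(series) // 2
    earlier, recent = sum(series[:half]), sum(series[half:])
    return recent / earlier if earlier else float("inf")

def candidates_to_commoditise(usage, threshold=2.0):
    """Components whose consumption of the utility is diffusing fastest."""
    scored = {name: consumption_growth(series) for name, series in usage.items()}
    return sorted((n for n, g in scored.items() if g >= threshold),
                  key=lambda n: -scored[n])

# Hypothetical monthly calls against the utility, per consuming component.
usage = {
    "B1": [100, 110, 105, 120, 115, 125],   # flat - ignore
    "C1": [10, 12, 15, 14, 16, 18],         # mild growth - watch
    "D1": [5, 10, 25, 60, 140, 300],        # diffusing - commoditise next
}

print(candidates_to_commoditise(usage))   # → ['D1']
```

The signal here is deliberately crude (recent versus earlier consumption); the post's argument only requires that *some* such signal is cheap to compute when the consumption flows through you.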
This is a model known as Innovate - Leverage - Commoditise (ILC) and it's fairly old hat, having first been applied around 2005 (the earliest model was called innovate - transition - commoditise, however it was later renamed ILC at the suggestion of Mark Thompson. I'm not very good at naming things and transition was a bit bleh). There are many critical factors to the model including :-
1) Speed of information. Whilst the model can be applied in the product space (e.g. A2), the problem is that the speed at which you can gain consumption information is limited by market surveys. Utility models (e.g. A3), especially where services are provided through an API, are more apt because you gain direct consumption information.
2) Size of ecosystem. Your apparent rate of innovation, your ability to deliver what customers want and your efficiency all depend upon the size of the consuming ecosystem for your underlying components (e.g. consumption of A3).
This ecosystem consists not only of your own employees but also any consuming company. The efficiency of provision of A3 depends upon economies of scale i.e. how big your consuming ecosystem is. Your apparent rate of innovation (since you're not doing the innovation but fast following others) depends upon the number of companies innovating on top of your component (e.g. B1, C1, D1). Your ability to deliver what customers want (i.e. spot successful new things) depends upon your ability to leverage the ecosystem to spot success (e.g. D1 to D2). In a well run model, your apparent rate of innovation, customer focus and efficiency should all increase with the size of the consuming ecosystem.
I've provided an example of figure 3's pattern in figure 4, using a circle model where the centre is your core component services (your platform) surrounded by an ecosystem of consuming companies. Such circle models are woefully inadequate for strategic play but they act as a useful visual reminder that effective play involves exploiting others.
Figure 4 - Ecosystem Size
3) Relevance of component. When commoditising a component, the potential size of the consuming ecosystem depends upon how relevant that component is in other value chains. Hence it's advisable to focus on components that are widely used e.g. computing infrastructure rather than highly specialised to an industry.
4) Speed of action. There's little point in using an ILC model if you don't exploit it to create new components and grow the ecosystem. Obviously, each time you do (whether through copying or acquisition) then you'll get accused of eating the ecosystem but the counter to this is you provide an increasing number of component services (i.e. a platform) which makes the environment more attractive to others. This harvesting of the ecosystem does need careful management.
5) Efficiency in provision. When you commoditise a component to a more industrialised form (e.g. A2 to A3) then your ability to encourage others to build on top of it depends upon how much you reduce their risk of failure and increase their speed of development. Hence efficiency and standardisation of interface are very important in this process.
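As an aside on factor 1: the reason utility provision through an API trumps market surveys is that consumption information falls out of simply serving requests. A minimal sketch (the class and metering scheme are hypothetical, not any particular provider's API):

```python
# Minimal sketch of why an API-based utility yields direct consumption
# information: every request can be metered per consumer and per higher
# order component, with no market survey needed. Names are hypothetical.

from collections import Counter

class MeteredUtility:
    def __init__(self):
        self.calls = Counter()              # (consumer, component) -> count

    def handle(self, consumer, component):
        """Serve a request and meter it as a side effect."""
        self.calls[(consumer, component)] += 1
        # ... actual service provision would happen here ...

    def consumption_by_component(self):
        """Aggregate metered calls per higher order component."""
        totals = Counter()
        for (_, component), n in self.calls.items():
            totals[component] += n
        return totals

u = MeteredUtility()
for _ in range(3):
    u.handle("acme", "D1")
u.handle("beta", "B1")
print(u.consumption_by_component())   # D1 dominates
```

A product vendor would have to survey the market to learn this; a utility provider reads it off the meter.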
Now, when correctly played, you can build a constantly expanding platform of highly industrialised component services in which your rate of innovation, customer focus and efficiency are proportional to your ecosystem size and not your physical company size. Your 'platform' is simply your set of component services and it serves the purpose of interacting with and exploiting an ecosystem. The value is in the ecosystem. As you repeat the model, building more component services, the attractiveness of your platform to others increases. Also, by being a first mover to commoditise an act to a more industrialised form, you gain highly stable, highly predictable volume based revenue. Finally, exploiting consumption information (e.g. use of A3) to always be the fast follower to the novel but uncertain sources of future worth (e.g. B1, C1, D1) enables you to maximise your future opportunity by only selecting success (e.g. D1 to D2).
Simultaneously increasing your apparent rate of innovation, attractiveness to others, customer focus, efficiency and stability of revenue whilst maximising future opportunity is a powerful combination. Using an ILC type model is a no brainer ... except ... unless you map out your environment (i.e. have good situational awareness) and understand the rules of the game then you just won't know where to start, other than sticking your finger in the air and saying 'this looks like a good one' or doing what most people do and copying others (i.e. '67% of companies do cloud, big data and social media' and hence 'so must we!').
You're just as likely to undermine a barrier to entry into your own business and encourage attack by others as you are to successfully build an ILC model. The first rule of playing chess is always 'look at the board', which is why building a map (a snapshot at a moment in time of the situation you find yourself in) is not only about effective management (see figure 5) and scenario planning but should always be the first step before you embark on any form of strategic play.
Figure 5 - An example Map
Before anyone shouts what about 'two factor markets', 'supplier ecosystems' etc - this post is about one aspect of ecosystems and not the entire field. Before anyone else shouts 'this is complex' - well if strategic gameplay was easy then it wouldn't be fun.
--- 29th August 2014
Just to re-emphasise this. The purpose of a platform (and hence an API) is to create an ecosystem. The value is in the ecosystem. The ecosystem is a future sensing engine. Correctly used (under an ILC model) you can create network effects whereby ...
- Your apparent rate of innovation
- Your customer focus
- Your efficiency
- Your stability of revenue
- Your ability to maximise opportunity
... all increase, SIMULTANEOUSLY, with the size of your ecosystem and NOT the physical size of your company.
For example, let us hypothesise that Amazon plays an ILC game. The value to Amazon is that the ecosystem creates a constant future sensing engine of all beneficial change. AMZN's ability to sense successful change, to respond to customer needs and its efficiency would all be related to ecosystem size. This in turn would create a feedback loop, with Amazon exploiting the ecosystem to identify useful patterns (everyone screams "they are eating the ecosystem again") and then providing each pattern as a service, which helps grow the ecosystem, which Amazon would exploit to … and so the cycle continues and accelerates. The bigger Amazon's ecosystem got, the more Amazon would be able to exploit it to find successful patterns and hence the more beneficial and attractive the platform would become for everyone. In order to test this, you would expect to see Amazon accelerating in innovation, customer focus and efficiency as the ecosystem grew (beyond the ability of companies of equivalent physical size) along with complaints that they've eaten the ecosystem. Do remember, no-one is going to tell you they are playing this game - you'd have to detect it.
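The feedback loop hypothesised above can be caricatured in a few lines of code. Everything here is invented for illustration - the parameters, the linear 'patterns detected per consumer' rate and the attractiveness multiplier are assumptions, not a model of Amazon:

```python
# Toy model of the ILC feedback loop: harvesting successful patterns into
# new platform services makes the platform more attractive, which grows
# the ecosystem, which surfaces more patterns. All parameters are invented.

def run_cycles(ecosystem=100.0, services=1, cycles=8,
               patterns_per_member=0.02, growth_per_service=0.10):
    history = []
    for _ in range(cycles):
        # More consumers -> more detectable successful patterns to harvest.
        new_services = int(ecosystem * patterns_per_member)
        services += new_services
        # More component services -> a more attractive platform -> growth.
        ecosystem *= 1 + growth_per_service * new_services
        history.append((round(ecosystem), services))
    return history

for size, services in run_cycles():
    print(size, services)
```

Even with these crude assumptions the compounding is visible: growth is slow for the first few cycles and then accelerates sharply, which is the 'beyond equivalent physical size companies' signature you'd look for.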
If you're involved in strategy and are sitting there going - "wow this is new" - then you really need to think about what you're doing if you're this far behind. This stuff is not new. This has been basic gameplay with ecosystems as force multipliers for many of us for about a decade.