Sunday, January 27, 2013

Ecosystems

When you consider an organization and the value chains that describe it, there are five groups of other organizations and individuals with which any company interacts.

There are the inputs into the value chain (suppliers), the outputs of the value chain (consumers), the people who operate and manage components of the value chain (employees), equivalent organizations that the company competes and co-operates with (competitors and alliances), and sources of learning or potential improvement to the value chain (the wider business and academic environment).

These groups are the company’s ecosystem.

Each of these ecosystems provides many opportunities for discovery, improvement and manipulation of the landscape. The techniques involved often change depending upon how evolved the activities, practices and data models are and how they are used.

For example, let us consider a company whose output is more of a completed product (e.g. a kettle) rather than a component product (e.g. a nut and bolt) and whose product is sold to the public.

The consumer ecosystem (in this case public consumers such as you and I rather than other organizations) can provide information on improvement, quality control, reliability and price sensitivity. This is normally achieved through secondary sources, i.e. not derived directly from interaction with the product itself but instead from surveys, warranty cards, sales volumes and customer services, and it can even extend to co-creation of the product.

The ecosystem can also be influenced through marketing, branding and association of the product with other values (e.g. buying this kettle will make you look cool, save a rainforest etc).

An ecosystem of consumers provides ample opportunities for manipulation and learning. There exist plenty of learned tomes on this subject and so I will assume the reader is familiar with this already.

The same opportunities also exist with all the other ecosystems, and various models exist for benefiting from them, such as the whole Enterprise 2.0 approach and the use of wikis and internal social media tools with the employee ecosystem.

In this section, I want to concentrate on four specific issues with ecosystems – one is a model known as ILC, the others are two factor markets, alliances and the focus of competition.

The ILC model
This model is most frequently used when the output of a value chain is a component of other value chains (e.g. in the technology industries - a software development suite used by other companies to develop other software products or provision of a software API which is consumed by multiple other systems).

The operation of the model is fairly simple. The supplier provides the component that others consume in their value chains, hence creating an ecosystem. Through efficiency in provision, the provider encourages the ecosystem to create new activities (i.e. genesis) by reducing the cost of failure. Genesis is by its nature highly uncertain and risky; hence reducing the cost of failure becomes a way of encouraging innovation.

As any of these new activities spread, the supplier can detect this diffusion through consumption of the underlying component thereby leveraging the ecosystem to spot future sources of wealth. The supplier then commoditizes these newly detected activities to components hence enabling the development of new higher order systems. 

In effect, the supplier eats part of the ecosystem (i.e. those diffusing higher order activities) in order to provide new components that help the ecosystem to grow.

For example, the provider of a software development environment as a product can monitor the rapid growth in consumption of the product (i.e. buying licenses) and investigate those industries to identify any new activities being built. This is not actually that effective a technique because the cost involved in monitoring is high and there are significant time delays.

But let us suppose you were a provider of utility computing infrastructure services (e.g. something like Amazon EC2). Not only does the provision of these services enable rapid creation of higher order systems, by encouraging experimentation through a reduction in the cost of failure, but the supplier also has direct access to information on consumption.

Let us suppose that one of these new higher order systems (e.g. “big data” systems built with Hadoop) started to diffuse. Through consumption of the component infrastructure service you could detect this diffusion in close to real time and hence rapidly decide to commoditize any new activity to your own component service, e.g. in this case by introducing something like Amazon Elastic MapReduce.
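
As a rough illustration of how such detection might work (this is not Amazon's actual tooling; the workload tags, figures and threshold below are all hypothetical), a supplier could simply watch month-on-month growth in consumption of its component service, grouped by the kind of workload running on it:

```python
# Hypothetical workload tags, usage figures and growth threshold.
monthly_usage = {
    # workload tag: usage totals (e.g. instance hours) per month
    "web_hosting": [90_000, 95_000, 99_000, 102_000],
    "hadoop_jobs": [1_000, 4_000, 16_000, 60_000],
}

GROWTH_THRESHOLD = 2.0  # flag anything at least doubling month on month

def is_diffusing(series, threshold=GROWTH_THRESHOLD):
    """True if usage grows by at least the threshold factor every month."""
    growth = [later / earlier for earlier, later in zip(series, series[1:]) if earlier > 0]
    return bool(growth) and min(growth) >= threshold

for tag, series in monthly_usage.items():
    if is_diffusing(series):
        print(f"{tag}: candidate for commoditization into a new component service")
```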

Naturally, you’d be accused of eating the ecosystem if you did this repeatedly but at the same time your new component services would help grow the ecosystem and create new higher order services. 

The operation of the model is shown in figures 36 to 38 and it exploits componentization effects as well as the changing characteristics of activities as they evolve.

In essence the supplier encourages others to innovate (take the high risk gamble associated with chaotic activities), leverages the ecosystem to spot diffusion of successful changes and then commoditizes rapidly to component services. The cycle of Innovation – Leverage – Commoditise (ILC) is then repeated for subsequent higher order systems.

Figure 36 – A Standard View of Evolution

Figure 37 – Ecosystems and ILC



Figure 38 – A Map View of ILC


The component services are in effect your “platform” (though I prefer the term “garden”) around which you carefully nurture and grow an ecosystem. Like any gardener you’d have to balance this eating (or harvesting) of the ecosystem with the benefits that new components bring in growing it and the overall health of the garden (i.e. level of disquiet over the occasional munching session). 

The effectiveness of this model depends upon a wide range of different factors: -

Scope of the component: how broadly useful the component is. Is it a specialized component (e.g. a software service giving train times for a specific train station) or is it used in a wide variety of value chains (e.g. a nut and bolt, electricity or computing infrastructure)? The essential measures here are volume (how much it is used) and variation (i.e. the number of value chains consuming it) – a small sketch of both measures follows this list of factors.

Speed of feedback: ideally information needs to be captured directly on consumption of the activity rather than through secondary sources such as surveys. For this reason, it’s ideally suited to the world of utility provision where the supplier can directly detect the consumption of a component.

Ability of the supplier to act: is the supplier able to capture the information and are they willing to leverage the ecosystem to its benefit?

Efficiency of provision: how efficiently is the underlying component provided? Critical to this game is reducing the cost of failure within the ecosystem and hence encouraging experimentation and the creation of new activities by others. Certainly, provision of former products as utility services will bring benefits from reduced capital expenditure by consumers; however, efficient provision will also exploit volume effects, and the larger the ecosystem the higher the rate of genesis.

Management of the ecosystem: the act of commoditizing to new component services is one of eating the existing ecosystem (either through acquisition or copying). The purpose is to provide new component services that help the ecosystem to grow and increase the usefulness of the entire range of components provided by the supplier. Care should be taken not to eat the ecosystem too aggressively, otherwise organizations may become wary of developing with the components.
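
As a small illustration of the volume and variation measures mentioned under scope (the consumer names, value chain tags and figures below are purely hypothetical), both can be derived directly from a supplier's own consumption records rather than from secondary sources:

```python
# Hypothetical consumption records: (consumer, value chain tag, units consumed).
from collections import Counter

usage_records = [
    ("acme",  "photo_sharing", 120),
    ("acme",  "photo_sharing",  80),
    ("beta",  "genomics",      400),
    ("gamma", "web_analytics",  50),
]

volume = sum(units for _, _, units in usage_records)        # how much it is used
variation = len({chain for _, chain, _ in usage_records})   # how many value chains consume it

by_chain = Counter()
for _, chain, units in usage_records:
    by_chain[chain] += units

print(f"volume = {volume} units, variation = {variation} value chains")
print("usage by value chain:", dict(by_chain))
```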

The effects of this model are extremely powerful but it needs to be managed carefully. For the supplier, the rate of innovation (i.e. genesis of novel activities) is no longer dependent upon the physical size of the supplier but the size of the ecosystem of consumers. 

Equally, the ability to spot new and useful activities is extended beyond the supplier and its interaction with its consumers to an ecosystem of consumers who in turn supply activities to other consumers, i.e. this much wider ecosystem is all consuming the base component activity and providing information on diffusion. Finally, the efficiency of provision depends not only on the volume required by the consumers but also on this wider ecosystem where consumers themselves are suppliers to other consumers.

Hence the rate of innovation, customer focus (i.e. spotting new and useful trends) and efficiency of the supplier all increase with the size of the ecosystem itself rather than the physical size (as in number of employees) of the supplier. 

Through the use of a model like ILC, it’s entirely possible for a company to appear to be simultaneously innovative, customer focused and efficient, which is counter to the popular management doctrine of choosing one. Furthermore, the rates of each (if properly managed) can increase with the size of the ecosystem, which itself can increase at a faster rate than the physical size of the company.

In other words, the bigger the company gets the more innovative, efficient and customer focused it becomes.

For many of the followers of my writings over the last decade this will be a “where’s the good stuff?” moment. However, for others this model might be quite surprising and hopes of competitive advantage might appear. So, I want to once again bring things down to earth because ILC can overexcite some people.

The origin of the technique above started around 2002. It was an essential part of the Zimki strategy in 2005 and so well described by 2010 that I included it in part in “The Better for Less” paper, which in turn had some influence in formulating UK Government ICT strategy. ILC and techniques of exploiting ecosystems are powerful but becoming increasingly common. Using and growing them is more a matter of survival today than of gaining advantage.

There are three other aspects of ecosystems that I also want to mention.

Two Factor Markets
The two-factor market is a special case of ecosystem that brings suppliers and consumers together (hence two factor). Examples would include a farmers market, an exchange and Amazon’s online retail site. These not only provide ample opportunity for exploitation but they have powerful network effects as the consumers attract suppliers and the suppliers attract consumers.
Alliances
In cases where you’re either competing or may compete against a large and threatening ecosystem, or where you simply want to prevent this scenario from occurring or to nullify any advantage, the only way to do so is to build a bigger ecosystem. However, you don’t have to do this alone but can operate in an alliance with a view of taking a “small piece of a big pie” rather than a “big piece of a small pie”.

In the case of Zimki, the stated purpose of open sourcing the technology was to create a large pool of suppliers that competed on service with switching between them in order to overcome consumer concerns on lock-in. The focus for Fotango was to take “a small piece of a big pie” whilst building an exchange (a two factor market) of Zimki suppliers and consumers.

Now, creating such alliances can be tricky because individual suppliers (especially those with a product mind-set) will attempt to differentiate on features rather than service, which in turn will limit switching, hence raising consumer concerns whilst weakening the overall ecosystem. Equally, suppliers will also be concerned over any loss of strategic control or dependency upon a third party, i.e. a captured rather than a free market.

Hence with Zimki, the technology could have been provided as a proprietary offering but each Zimki supplier would then have been dependent upon Fotango. We would in effect have exerted a tax on the market, both in terms of the licensing of the technology and in controlling its future direction; furthermore, we would have increased barriers to adoption due to these constraints. The upside is that we would have limited any differentiation by suppliers.

By open sourcing the technology, we would remove the constraints, barriers to adoption and any tax on the market. However, we would open the door to differentiation by the suppliers on feature rather than service thereby weakening switching and the overall ecosystem.

To balance this, we needed to use a technique of assurance through trademarked images.

By open sourcing the entire platform technology, we would enable other competitors to become Zimki providers, remove barriers to entry and help establish a market. The trademarked image was only to be available to those who complied with our monitoring service, and hence we could provide assurance that a provider hadn’t differentiated the service by function in a way that would leave consumers unable to switch.
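
To make the assurance idea concrete, here is a minimal sketch of what such a monitoring check might look like. The endpoints, reference calls and compliance rule are hypothetical (this is not Zimki's actual monitoring service); the point is simply that identical reference calls should produce functionally identical results on any provider carrying the trademark:

```python
# Hypothetical endpoints and reference calls for an assurance/compliance check.
import requests

REFERENCE = "https://reference.example.org/api"   # hypothetical reference implementation
PROVIDER  = "https://provider.example.com/api"    # hypothetical candidate provider

REFERENCE_CALLS = [
    ("GET", "/objects", {"limit": "10"}),
    ("GET", "/runtime/version", {}),
]

def conforms(method, path, params):
    """Compare status code and body between the reference and the provider."""
    ref = requests.request(method, REFERENCE + path, params=params, timeout=10)
    got = requests.request(method, PROVIDER + path, params=params, timeout=10)
    return ref.status_code == got.status_code and ref.json() == got.json()

if all(conforms(*call) for call in REFERENCE_CALLS):
    print("provider may carry the trademark: switching is preserved")
else:
    print("provider has differentiated by function: trademark withheld")
```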

This mix of open sourced technology and assurance through monitoring and a trademarked image is a way of balancing the needs of suppliers (i.e. low barrier to entry, a free rather than a captured market controlled by one player), the needs of consumers (a competitive market with switching) and the needs of the company forming the market (a wide and healthy ecosystem which can compete against alternatives).

Recently, similar examples of such play have appeared e.g. CloudFoundry, a platform as a service offering provided by VMware, is not only open sourced but provides a trademarked assurance service through CloudFoundry Core. 

Equally, Google, whose value chain around data was under potential threat from Apple and the walled garden created by iOS, has an open source technology (Android), a trademarked image on Android, the Open Handset Alliance and a mechanism of assurance through Android’s compatibility test suite. The Android ecosystem has rapidly risen to now dominate the smartphone market.

The importance of the control mechanism and careful management is in negating any “collective prisoner’s dilemma”, where the members of an alliance, in an act of self-mutilation, attempt to differentiate in their own immediate interests, weakening the entire ecosystem and their own long-term interests in the process.

The game is also highly nuanced. For example, when facing an existing and effective competitive ecosystem it is often better to co-opt rather than differentiate from it in the first place, a model of “embrace and extend”.

Hence in the case of utility infrastructure provision, if you were to go up against Amazon and its well-developed and highly effective ecosystem then co-opting the ecosystem would be the first order of the day. Since the ecosystem (and all the higher order activities created) are built upon the standard interfaces (APIs) of Amazon this means in effect providing identical APIs in both form and function.

Fortunately, under both European and US law, APIs are not currently copyrightable (being principles) whereas the code that implements them is (being expression). Hence APIs can be re-implemented through reverse engineering. We’ve seen this practice with groups such as Eucalyptus and CloudStack (a Citrix project which is part of the Apache Foundation), both of which have clearly stated directions of emulating the Amazon APIs.

Creating a competitive alliance is not a simple task, and neither is competing against an established ecosystem. There are plenty of pitfalls, including the breakdown of an alliance through a collective prisoner’s dilemma. However, as in the case of Android, when it works it’s a highly powerful tool.
The focus of competition
The last thing I wish to mention is the focus of competition. The process of evolution is driven by consumer and supplier competition, and those consumers and suppliers can be either individuals (as in the general public) or companies. This is not a static situation; it is fluid.

For example, let us consider the use of computers. To begin with both the suppliers and consumers of computers were companies. The sale, provision and competition around computers with the first products (such as the IBM 650) were all focused on business to business (B2B).

However, computers were made of components such as processing, storage and then networks that were evolving and becoming more of a commodity. The rate of evolution of those different underlying components affected the path that computing took. For example, because processing and storage commoditized faster than the network, the industry went through a transition of Mainframes to Mini computers to PCs to Tablets. However, had networks commoditized faster relative to processing and storage then an entirely different path of Mainframes to Personal Terminals to Tablets would have been possible. The rate of evolution of components can alter the path that higher order systems take.

However, what also happened is that the focus of competition in part shifted from being governed by B2B to being governed by Business to Public Consumers (B2C) as companies sold personal computers. In the same way, email (which started as primarily an academic and then business focused tool) shifted to the public consumer market with the introduction of services such as AOL.

What is important to understand is that the rate of evolution is not uniform between the business ecosystem and the public consumer ecosystem. Hence, as the competition around email shifted to the public consumer market (with the introduction of services such as Yahoo and Google Mail), the public consumer market developed highly commoditized email services. In many cases these were vastly more commoditized and efficient than the equivalent activity in the business ecosystem, which was often provided through products.

Pressure mounted for those business consumers of email to adapt (and in many cases adopt) these more “consumerized” services available to the members of the public. 

This shift of competition and hence evolution from being governed by B2B, where companies represent both the suppliers and consumers of the activity, to one governed by competition in the public consumer space is known as “Consumerization” (as described by Doug Neal, LEF in 2001). See Figure 39.

Figure 39 – Consumerization


Now, not all activities undergo this process. Many activities remain governed by competition in one ecosystem, i.e. between companies, with companies being both the consumers and suppliers. An example of this would be financial ERP systems.

Equally, consumerization is not a one-way street. Activities that evolve and are governed by competition in the public consumer space can be forced into the business ecosystem. An example of this would be radio broadcasting equipment, once a vibrant and rapidly developing activity in the public consumer space with many hobbyists creating and sharing capabilities, which was forced under the control of companies through legislative control of the radio frequency spectrum.

The point to note is that the rate of evolution can rapidly change if competition around an activity switches focus from the business to the public consumer ecosystem. Now, I won’t detail all the aspects of ecosystems, mainly because there are numerous books covering Enterprise 2.0, use of social media and supply chain relationships, and the above should provide the reader with the basics for the mapping exercises later in this work.

It’s enough to be aware that various forms of ecosystem exist, that exploitation can have powerful effects, that the rate of evolution of components can affect the path that higher order systems follow and that the rate of evolution can rapidly change as the focus of competition around an activity switches from one ecosystem to another. By now, if you've followed the entire series, you should have a good appreciation of the complexity of change and also why, without maps, it's no wonder that strategy becomes vague hand waving.

One final note, you should also have some understanding of the difference between the terms Consumerization (the process by which the focus of competition shifts from business to consumer ecosystem), Commoditisation (the process of evolution for activities) and Commodification (the process by which an idea with social value becomes instantiated as an activity with economic value, an idea from Marxist Political Theory). Endless confusion abounds because those entirely different concepts are constantly jumbled together as though they were the same.

Since I’ve already mentioned open source in this section, I will now turn our attention to the use of open as a competitive weapon and then finally, we can get back to maps.

---

Post 16 of 200 on the Management and Strategy series.

Next post in series ... Open

Previous post in series ...  The Next Generation

Beginning of the Management and Strategy series ... There must be some way out of here


Saturday, January 26, 2013

The Next Generation

In 2005, I had the basics of evolution and mapping. By 2007, I had enough supporting data to call it a weak hypothesis (correlation, causation and thousands of data points). What I lacked beyond the use of the mapping technique in predicting market and competitor changes were more general predictions.

However, the cycle of change was pretty clear on the co-evolution of practice and how new organizations formed. The industry was already going through one change caused by the commoditization of the means of mass communication (e.g. The Internet) that had all the normal patterns plus a new form of organization, the Web 2.0.

What I wanted to know was whether we could catch the next wave. Would the shift of numerous IT based activities to more utility services create a new organizational form? Could I catch this?

Timing was critical and, unlike my earlier work in genetics where populations of new bacteria could be grown rapidly, I had to wait. So wait, I did.

By 2010, the signals were suggesting that this was happening, so at the LEF (Leading Edge Forum) we undertook a project in 2011 (published in the same year) to examine this. Using population genetics techniques, we were looking for whether a statistically different population of companies had emerged and whether their characteristics (phenotypes) were starting to diffuse. It was a hit or miss project; we’d either find the budding population or it was back to the drawing board.

We already knew two main populations of company existed in the wild - the Traditional enterprise and the Web 2.0. The practices from the Web 2.0 were already diffusing throughout the entire environment. Most companies used social media, thought about network effects, and used highly dynamic and interactive web based technology and its associated practices. The two populations were hence blurring through adoption of practices (i.e. the Traditional were becoming more Web 2.0 like) but also partially because past companies had died. But was there now a Next Generation budding, a new Fordism?

In early 2011, I interviewed a dozen companies that we thought would be reasonable examples of Traditional and Web 2.0, along with a couple of highly tentative Next Generation candidates. We developed a survey from those companies, removed them from the sample population to be examined and then interviewed over 100 companies, divided roughly equally among those that described themselves as Web 2.0 and those that called themselves more Traditional. We examined over 90 characteristics, giving a reasonable volume of data.

From the cycle of change and our earlier interviews, we had guessed that our Next Generation was likely to be found in the Web 2.0 group and that, in terms of strategic play, they would tend to be focused on disruption (the war phase) rather than profitability (the peace phase). From our earlier interviews we had developed a method of separating the sample out into candidate populations.

So, we separated the population sample out into these categories and looked at population characteristics - means and standard deviations. Were there any significant differences? Were the differences so significant that we could describe them as different populations, i.e. just as in a sample of mice and elephants there exist significant characteristics that can be used to separate out the two populations?
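
By way of illustration only (the figures below are invented and our published study was not reduced to a single test), a check of whether two candidate populations differ on one characteristic can be as simple as comparing means and standard deviations and applying Welch's t-test:

```python
# Invented survey scores for one characteristic across two candidate populations.
from statistics import mean, stdev
from scipy import stats

traditional = [2, 1, 2, 3, 2, 1, 2, 2, 3, 1]
next_gen    = [4, 5, 4, 4, 5, 3, 5, 4, 4, 5]

print("means:   ", mean(traditional), mean(next_gen))
print("std devs:", round(stdev(traditional), 2), round(stdev(next_gen), 2))

# Welch's t-test: is the difference in means larger than random sampling from
# a single population would plausibly produce?
t_stat, p_value = stats.ttest_ind(traditional, next_gen, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# Very small p-values repeated across many independent characteristics are
# what suggest two distinct populations (mice and elephants) rather than one.
```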

I ran our analysis and waited. It was an edgy moment, one of those I’m well used to. Had we found something or as per many attempts before had we found nothing? I tend to assume nothing and when there is something, I tend to doubt it.

The populations all contained a mix of medium and huge companies and within this we found statistically significant population differences across a large number of the characteristics. I re-examined, looked through my work, tested, sought the advice of others and tested again - but it remained.

For example, I examined each company’s view on open source and whether it was primarily something that meant relatively little to them, a mechanism for cost reduction, something they relied upon, something they were engaged in or a tactical weapon to be used against competitors. The result is provided in figure 32 with the subdivision by population type. Whilst the traditional companies mainly viewed open source as a means of cost reduction and something they relied upon, this Next Generation viewed it as a competitive weapon and something they were heavily engaged in. The Web 2.0 group had a broader view, from cost to weapon.

Figure 32 – Views on Open Source by Population type



This difference in population was repeated throughout many characteristics spanning strategy, tactics, practice, activities and form. The odds of achieving the same results due to random selection of a single population were exceptionally low. We had found our candidate Next Generation.

To describe this Next Generation, it is best to examine them against the more Traditional. Some of the characteristics show overlap, as would be expected. For example, in examining the highest priority focus for provision of technology by a company – whether it’s profitability, enhancement of existing products and services, innovation of new products and services, enabling other companies to innovate on top of their products and services, or creating an engaged ecosystem of consumers – overlaps exist (see figure 33).

Figure 33 – Percentage of Companies ranking the following focus as high priority by population type.



Traditional companies were mostly focused on profitability (a peace phase mentality) whereas the Next Generation are mostly focused on building ecosystems.

In other areas, the differences were starker. For example, consider the examination of computing infrastructure and whether the company tended to use enterprise class servers, more commodity servers or a mix of both (see figure 34).

Figure 34 – Type of Servers used by Population Type.
However, it should never be expected that there are no common characteristics or overlaps; what matters is a significant difference in specific characteristics (i.e. mice have two eyes, same as elephants).

Using these populations, we then characterized the main differences between Traditional and Next Generation just to highlight them. There are also significant differences between Next Generation and Web 2.0 but naturally they are smaller than those in comparison with Traditional enterprises, which formed in an earlier cycle of change.

Figure 35 gives the main differences (though not all) and we’ll go through several of these differences in turn.

Figure 35 – Difference between Next Generation and Traditional



Organizational Form

Structure
Traditional organizations used a departmental structure, often by type of activity (IT, Finance, Marketing) or by region. The Next Generation used smaller cell based structures (with teams typically of less than twelve), often with each cell providing services to other cells within the organization. Each cell operated fairly autonomously, covering a specific activity or set of activities. Interfaces were well defined between cells.

Culture
In traditional organizations, culture is seen as relatively fixed, difficult to change and often a source of inertia. In the Next Generation, culture is viewed as more fluid and gameable.

Strategy / Tactical Considerations

Focus
Traditional organizations tend to focus on profitability (a peace phase mentality) whereas the Next Generation is primarily focused on disruption of pre-existing activities (a war phase mentality). This is not considered to be a long-term distinction.

Open Source (including Open Data, Open APIs etc)
In traditional organizations, the use of open systems (whether source, data, APIs or other) is viewed primarily as a means of cost reduction. In some cases technology or data is provided in an open manner.

In Next Generation, open is viewed as a competitive weapon, a way of manipulating or changing the landscape through numerous tactical plays from reducing barriers to entry, standardization, eliminating the opportunity to differentiate, building an ecosystem and even protecting an existing value chain.

Learning
Traditional organizations tend to use analysts to learn about their environment and changes that are occurring. The Next Generation use ecosystems to more effectively manage, identify and exploit change (more on this in the next section).

“Big Data”
Traditional organizations use big data systems and are focused primarily on the data issue. The Next Generation makes extensive use of modeling and algorithms; the focus is not on the data per se but on the models, and these systems are not simply used by the company – they run it.

Practices & Activities

Architecture and Infrastructure
Traditional organizations tend to use architectural practices such as scale-up (bigger machines) for capacity planning, N+1 (more reliable machines) for resilience and single, time critical disaster recovery tests for testing of failure modes. These architectural practices tend to determine a choice of enterprise class machinery.

The Next Generation has entirely different architectural practices, from scale-out (or distributed systems) for capacity planning, to design for failure for resilience, to the use of chaos engines (i.e. the deliberate and continuous introduction of failure to test failure modes) rather than single, time critical disaster recovery tests. These mechanisms enable highly capable systems to be built using low cost commodity components.
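
As a rough sketch of the chaos engine idea (the instance names and the terminate call below are hypothetical stand-ins, not any particular tool), the core of such an engine is little more than a loop that continuously removes a randomly chosen instance so that failure handling is exercised all the time:

```python
# Hypothetical instance pool and terminate call; deliberate, continuous failure.
import random
import time

def terminate_instance(instance_id):
    # Hypothetical: a real engine would call the cloud provider's API here,
    # scoped to tagged instance groups, working hours and so on.
    print(f"terminating {instance_id}")

def chaos_loop(instance_ids, interval_seconds, rounds):
    """Each round, pick one random instance from the pool and terminate it."""
    for _ in range(rounds):
        terminate_instance(random.choice(instance_ids))
        time.sleep(interval_seconds)

# Demo run: three terminations, one every two seconds.
chaos_loop(["app-01", "app-02", "app-03"], interval_seconds=2, rounds=3)
```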

Development
Traditional companies tend to focus on singular management techniques for development (e.g. Agile or Six Sigma) and often operate on a change control or regular process of updates. The Next Generation tends towards mixed methods depending upon what is being done, and the process of development of novel aspects is usually continuous, with no specific time period for releases.

The LEF published the work in Dec 2011 and since then we have observed the diffusion of many of these changes. However, I very much don’t want you to read the above list and get the impression that “this is how we create an advantage!” but instead to be realistic. The above characteristics are already diffusing and evolving, tens if not hundreds of thousands of people and their companies are well aware. You’ll need to adapt simply to survive. Any real advantage has already been taken and the only advantage to be gained is over those who are slower to adapt.

However, the point of this exercise is not what the new organizational forms are (many books have been or are being written by others on this subject of the new Fords) but that a new organizational form could be predicted to emerge in the first place.

The model suggested this in 2005 but I had to wait until 2011 to confirm this in its first instance (such is the slow nature of experimentation with companies). The above appear to be the characteristics of the New Fordism though it’ll take a decade or more to really confirm this.

By which time, if the model holds, the next wave of change (related to commoditization of the manufacturing process itself) will itself have created a new Next Generation, a “New New Fordism” so to speak. In much the same way, every previous wave has created its own Fords – the Systeme General, the Plymouth and the American System, Fordism etc.

Now, who are the New Fords and is there any pattern to where this evolution is heading? Well, the former I’ll keep to myself (though many will be able to name several who are) whilst the latter I’ll discuss briefly when we talk about the future. 

For now, it’s enough to know that co-evolution of practice can lead to new organizational forms and this is happening today. In the next section, I want to turn my attention specifically to the subjects of ecosystems and open source, after which we can revisit our map and get on with the really interesting stuff.

---

Post 15 of 200 on the Management and Strategy series.

Next post in series ... Ecosystems

Previous post in series ... No Reason to Get Excited

Beginning of the Management and Strategy series ... There must be some way out of here


No Reason To Get Excited

By now, the reader should understand that things are created (genesis) which are uncertain, rare, constantly changing and hence chaotic by nature. These things diffuse through society through various constantly improving iterations (evolution) driven by competition (consumer and supply). Ultimately they become a more common, well-defined and standardized (i.e. linear) commodity. 

This process of evolution impacts activities (things we do), practices (how we do things) and data (through models of understanding).

Where those things can become components of higher order systems (e.g. nuts and bolts with machines) then as they evolve (become more linear) they accelerate the genesis of those higher order systems through componentization. This extends our value chains. Hence evolution is associated with increasing efficiency (of what is evolving) and increasing rates of speed and agility in the creation of higher order systems. Genesis begets evolution begets commodity components begets higher order genesis, ad nauseam.

The process is a continuous cycle that we commonly describe as “progress”.

The new higher order systems are sources of future wealth and hence we see flows of capital from the past to the new (creative destruction). However, the process is not smooth because practices tend to co-evolve with activities and hence we see inertia to change due to legacy constraints. 

Equally suppliers have inertia due to past success, so the later stages of evolution (in particular the switch from product to utility) are associated with new entrants.

However, the change is inevitable as consumers are in constant competition and the benefits of efficiency, increased agility in building higher order systems and new sources of wealth turn a trickle into a flood. All companies have to adapt just to stand still relative to an evolving and surrounding environment (Red Queen).

This pace of change will often catch out suppliers as they are lulled by consumer inertia to the change and the previous more peaceful, slow moving stage of relative competition. Hence we can describe the transition of competition around an activity as one of relative peace to one of war to one of wonder and creation of new higher order marvels. 

The peace state can be characterized as one of incumbent giants with relative competition where sustaining change exceeds disruptive change. The war state is one of new entrants, a fight for survival and where disruptive change exceeds sustaining.

However, the progress from peace to war is not unexpected and there is no reason (from culture to inertia) why the past giants cannot be prepared. Disruption of this kind, unlike the case of unexpected changes in the market, is entirely preventable but rarely prevented.

Of course, the change reduces barriers to entry and allows for new things that can impact value chains in unexpected ways (from gas lamps to light bulbs, from naturally harvested ice to ice making machines). Hence some indirect disruption is unpredictable and the innovator’s dilemma runs rampant. 

This cycle of changing states (wonder, peace, war), created by the interaction of inertia and the economic pressures of evolution (efficiency, agility and new sources of wealth), which is itself driven by competition (user and supply) and the need to adapt to competition (Red Queen), appears at both a local and a macroeconomic scale.

At the macroeconomic scale we tend to call these Ages, as in the Industrial Age, the Mechanical Age, the Internet Age. Each has a time of Wonder, Peace and War.

In certain cases that which is evolving can accelerate the entire process for all things by improving communication e.g. postage stamp, telephone, printing press, the Internet. In all cases, the drive towards more evolved and higher order systems consumes greater quantities of energy (though our waste vastly outweighs this).

Beyond creating inertia, the co-evolution of practice with activities will result in new organizational forms from Fordism (the age of electricity) to Web 2.0 (the age of the Internet). In all cases, these new organizational forms are more adapted to this changing world of higher order systems and are more effective at managing the flow of change from chaotic to linear. 

Each age can be associated with the evolution of organisations themselves.

However, our systems are far from perfect. Our tendency towards one size fits all (one of the solutions to Ashby’s Law of Requisite Variety) tends to create a yoyo between extremes, whether in project management (agile vs six sigma), marketing (push vs pull) or structure (networked vs hierarchical). A better balance can be found through embracing both, and as organizations evolve we tend towards this balance.

Our confusion over this simple pattern stems mainly from terminology and our inability to see it. We use the word innovation to mean genesis, a feature differentiation of a product or even utility provision of a pre-existing model. Our use of the word hides the obvious pattern of evolution in a fog of “Everything’s an Innovation”.

The same problem extends to other parts of our language. The process of evolution (often called commoditization) is different from the process by which an idea gains economic value by implementation into a tradable thing (i.e. idea or concept turned into something real). Alas, the process that represents a conversion of social (idea) to financial (tradable thing) capital is called commodification and whilst it is entirely different from the process of evolution, the terms of commodification and commoditization are often used to mean the same thing. It’s a bit like using the word chalk to mean cheese.

Hence in a world where obvious patterns are clouded by the misuse of terms, where companies often compete without any means of understanding the landscape they exist within, we often believe that things are a lot more random than they are.

Strategy often becomes one of “do what others are doing” and vague hand waving notions. We grasp at concepts like inertia and disruptive innovation as though this explains all - “We couldn’t help ourselves it was an unexpected change, we were caught by the Innovator’s dilemma”.

In some cases you are, in many cases you are not. You could have survived.

And so the cycle continues: new activities appear and evolve, creating new inertia barriers (due to success), and a new war results from the inevitable march of competition. The same lessons are repeated, new forms of organization appear and we marvel at the changes.

The plethora of new activities created also results in new forms of data we have yet to understand, it is unmodelled or unstructured (if you insist). We stare in amazement at our progress as though somehow this time of wonder was any more wondrous than any previous time of wonder. The cycle continues.

The cycle has occurred numerous times over the last three hundred years. Alas, “the one thing we learn from history is that we never learn from history”. In the hope that we learn this time, I’ve drawn the cycle in figure 31, taking the liberty of removing the value chain axis and drawing it as a cycle. Each time we move through the cycle, value chains extend.

Figure 31 – A Frequently Repeated Cycle.

So, let us bring ourselves to our modern day.

Driven by consumer and supply competition, the activity of computing infrastructure has evolved from products to more of a utility. It is so widespread and so well defined it can now support the volume operations needed.

New entrants not encumbered by pre-existing business models (such as Amazon) have initiated this change and a resultant state of war has developed in an environment that was once governed by relatively peaceful competition between incumbent product giants (Dell, HP, IBM). 

We see an explosion in the genesis of novel higher order systems created on these utility services, a flow of capital into the new higher order systems and we marvel at the speed and agility of creation. Endless books are written on creative destruction, componentization and change.

As expected, practices have co-evolved with the activities. We talk of distributed systems, design for failure and chaos engines (or monkeys if you like). An entire movement known as “devops” has developed around these changes. 

Consumers of the past models of provision (i.e. computing products such as servers) have also shown inertia to change. Citing all the typical risks and the issue of the legacy estates, they want all the benefits of agility, efficiency and new sources of wealth but without the cost of transition due to re-architecture. They want the new world but provided in an old way. They want the old dressed up as the new. They talk of enterprise clouds that are more rental services than utility.

Many of these consumers are oblivious to the issue that those benefits (efficiency, agility, wealth) are also pressures for adaption which will force them to change as competitors do. It’s not a question of “If”, it never has been. It’s a question of “When”.

Their suppliers encumbered by past business models race to provide this “old” world dressed up as new. They, suffering from their own inertia, are unaware that the trickle to the new world will become a flood at a pace they are not expecting. They watch Amazon thinking it will tail off, that it’s really only for new start-ups and green field sites. This is wishful thinking.

Along with changing practices and movements such as “DevOps”, new forms of organization appear. New structures, new ways of operating diffuse and evolve. Tomorrow’s Fordism has been with us for many years and it’s spreading.

As expected, for any student of history, we have also seen an explosion (as in genesis) of new data. Whilst the scramble to provide “big data” systems focuses on the issues of volume, it is the un-modelled nature of the data that is key. It wasn’t simply the volume of natural history data or the explosion in the number of books through printing presses that changed our world; it was the models of understanding that altered us. 

This data will become modeled and we will progress in understanding but not without arguments of the Structured vs Unstructured or Dewey Decimal vs Cutter type beforehand. We blissfully ignore that all data starts as unstructured and it is through modeling that the structure is found.

It’s like our assumptions of innovation. It’s never the innovation of something that changes the world; it’s commodity provision as good enough components (e.g. nut and bolts, electricity, computing). 

It’s not the volume of data that matters; it’s our model of understanding.

So, cloud is all about utility provision of pre-existing activities to commodity components, explosions in the creation of higher order systems, new sources of wealth, new practices, new forms of organisations and disruption of past models stuck behind inertia barriers and indirect disruption through changing value networks and lowering barriers to entry? Yes.

This was all perfectly clear in 1966 when Douglas Parkhill wrote the book “The Challenge of the Computer Utility”. It was only a question of when.

By 2005, the weak signals of “when” were screaming loud. The “when” was NOW! 

None of this should come as a surprise.

The CEOs of the past giants should have leaned over their shoulders and pulled down from their bookcases their “What to do when Computing Infrastructure is ready to be a utility” playbooks. These playbooks should have been crafted and refined over the decade beforehand when the weak signals shouted “getting closer”. 

By the time Amazon launched, the past giants should have prepared to launch at a massive scale. Culture is gameable and should have been gamed. Inertia is manageable and should have been managed.

By 2010, Amazon should have been crushed. The past giants should have dominated the market. They had all the advantages they needed. But that’s not what happened. Those past giants hadn’t mapped this change. 

They were not prepared for the expected.

Many will suffer the same fate as previous companies that failed to prepare for the expected, from Blockbuster to Kodak. But before the normal round of excuses begins, the inevitable rush of executives to the safety of the "innovator's dilemma" and claims of unexpected changes, let me be blunt.

Those companies failed because their executives failed. Not culture, not inertia, not unexpected changes but instead a total failure of strategy. They were simply not up to the job or as Gandalf might say “fool of a Took”.

As I said in the beginning, this work is not about gaining advantage but about surviving and mostly that’s surviving the expected. The cycle continues today, as it has in the past and as it will tomorrow.

So on the assumption that you’re not one of those facing oblivion through some gross failure of past executive play, let us turn to the new forms of organization and practice that you’ll need to deal with today.
---

Post 14 of 200

Next post in series ... The Next Generation

Previous post in series ... Revisiting our Map

Beginning of series ... There must be some way out of here

Sunday, January 20, 2013

Gartner's Wandering Hype Cycle Axis

Many moons ago I deconstructed Gartner's hype cycle from the underlying market dynamics and demonstrated how it couldn't be based upon any form of physical measurement but was instead an aggregation of analysts' opinions. The effects of the hype cycle pattern appear real enough; it's just that the hype cycle graph isn't an actual measurement of change, merely of perceptions.

As an aggregate of Gartner analysts' opinions this has some value, if you assume analysts know more about the market than your own people. If not, then any large company should simply build its own hype cycles. However, that's not the point of this post.

The point of this post is that whilst all the hype cycles have identical patterns, shapes and phases, the axes keep on wandering, i.e. we have:

1. Visibility over Maturity.

2. Visibility over Time.

3. Expectation over Time.

You could make a case for this being simple renaming, as in Visibility and Expectation being the same. I have difficulty accepting that. However, my difficulty reaches new heights with the change of Time to Maturity, as though the process of evolution (i.e. how something becomes mature or fit for purpose) had a linear relationship with time.

I know, I know ... the Hype cycle isn't "real" as in it's not a scientific measurement of a physical characteristic but instead it is just aggregated opinion and therefore I'm being petty. It's just a visualisation of perceptions and is obviously evolving itself.

I understand all of this. 

It's just that it gives the impression of being an actual physical measurement, a graph of something tangible that can be directly investigated. Hence changing the axes drives me nuts.

I should shrug it off and look forward to Expectation over Maturity (which they seem to have missed) and which, bizarrely enough, is the pattern that appears to be most "real". Another story, another day.

Thursday, January 17, 2013

A Pause ...

Ok, I've already done thirteen posts on this journey into strategy and mapping, covering the basics of maps, the first map, some general lessons and revisiting that first map in more detail.

This brings me to the point where the reader should now have an idea of how to map an organisation and anticipate some basic forms of change.

The next sections will cover new forms of organisations, the fundamental importance of ecosystems, the use of open as a tactical weapon, numerous defensive strategies such as tower and moat, along with attacking strategies and how to put together an overall battle plan. The mapping technique is essential to this because without it those plays are reduced to more hand waving and unclear notions. It's the map which tells us the why of ecosystems, for example.

Anyway, thank you for the feedback so far. Much appreciated and I hope you find this useful and not too slow. I'm now going to take a break for a bit, as other things require my attention but just to recap ...

---- Journey so far.

Post 1 : The start of my journey
A young CEO caught in the headlights of change.
http://blog.gardeviance.org/2013/01/there-must-be-some-way-out-of-here-said.html

Post 2 : The importance of maps 
An exploration of why maps are important and a question - where are our business maps?
http://blog.gardeviance.org/2013/01/the-importance-of-maps.html

Post 3 : There's too much confusion. 
My quest for maps in business begins in earnest and starts with the concept of value chains.
http://blog.gardeviance.org/2013/01/theres-too-much-confusion.html

Post 4 : Evolution
Explores how things evolve and why diffusion is only part of the puzzle. Evolution is an essential part to mapping an organisation.
http://blog.gardeviance.org/2013/01/evolution.html

Post 5 : A first map. 
Uses both value chain and evolution to create a rudimentary map of a business. In this case, Fotango in 2005.
http://blog.gardeviance.org/2013/01/a-first-map.html

Post 6 : Businessmen they drink my wine
The journey from chaotic to linear and how the characteristics of activities, practices and data change as they evolve.
http://blog.gardeviance.org/2013/01/businessmen-they-drink-my-wine.html

Post 7 : Why one size never fits all
Examines why the change of characteristics impacts techniques and how one technique is not suitable for all activities, leading to debates such as six sigma vs agile.
http://blog.gardeviance.org/2013/01/why-one-size-never-fits-all.html

Post 8 : Of perils and Alignment
Examines why changing characteristics create problems for business from the pitfalls and perils of outsourcing to the issues of business alignment.
http://blog.gardeviance.org/2013/01/of-perils-and-alignment.html

Post 9 : Everything Evolves.
Explores the common path of evolution whether Practice, Activities or Data and how practices can co-evolve to create inertia to change. Covers Cynefin, Co-Evolution and Inertia.
http://blog.gardeviance.org/2013/01/everything-evolves.html

Post 10 : Evolution begets Genesis begets Evolution
Examining how there is a cycle of commoditisation and genesis with new activities being built on past activities. Covers componentisation, volume effects and disruption.
http://blog.gardeviance.org/2013/01/evolution-begets-genesis-begets.html

Post 11 : Inertia
Examining why customers and companies have inertia to change, what the causes and symptoms are and why it is so dangerous. Covers disruption of the past, transition to new and agency of the new.
http://blog.gardeviance.org/2013/01/intertia.html

Post 12 : Revolution
Using evolution and inertia we explore why revolutions (such as the industrial revolution) occur and what the common consequences of this are. Covers Kondratiev waves, prediction and time, disruption.
http://blog.gardeviance.org/2013/01/revolution.html

Post 13 : Revisiting our first map
Taking the lessons and principles we've covered and re-applying them to that first map of Fotango back in 2005, to explain how Fotango predicted the changes brought about by Cloud.
http://blog.gardeviance.org/2013/01/revisiting-our-map_17.html

Revisiting our Map

Let us now return to 2005 and put ourselves in the shoes of the CEO of a software company that has a consultancy based business model that won't scale under the given constraints (due to a corporate wide freeze on recruitment), has an uncertain future due to planned outsourcing, but also has an existing value chain and a talented engineering team. This is easy for me seeing as I just have to wind the clock back.

I've replicated that same value chain (or the parts of it that we will examine) mapped against the state of evolution in figure 29.

Figure 29 – Value Chain vs Evolution of Fotango


We already know that: -
  • All the activities, practices and data on the map are evolving due to consumer and supply competition.
  • Existing consumers and suppliers will have inertia to change.
  • The shift from products to utility services would initiate a change from peace to war. Many of those past giants will fail to act in a timely fashion, lulled into a false sense of gradual change whilst others will apply strategies such as cost reduction in a hope of recreating past norms.
  • Component activities that become more commodity-like will accelerate the genesis of higher order systems (componentization) and benefit from volume effects.
  • The combined benefits of increased agility, efficiency and the flow of capital towards new higher order systems will turn a trickle of adoption into a flood.
  • Commodity activities are suitable for outsourcing – for example infrastructure to utility providers. Novel practices will develop to deal with this.
Hence two of the above components stood out as being of immediate interest – infrastructure and platform. In both cases there was a large existing market mainly served by products. In both cases the activities were widespread and fairly well defined. In both cases business consumers were grumbling over the costs involved and no one seemed to consider either activity as providing differential value.

However, whilst we had some capital we didn’t have the investment capability to build a large-scale infrastructure service (though we ran our own small private virtual data centre). But a large-scale computer utility would be useful to us if we focused on the platform layer as many of the capital costs would be taken care of.

I talked to a couple of huge hardware manufacturers about this computer utility idea and, unsurprisingly, we were given the cold shoulder. Fortunately, I knew that because the activity of computing infrastructure was so widespread and well defined, it was likely that someone would play that game soon. That someone wouldn't be an existing hosting or hardware company (both with inertia) but instead a former consumer. In 2005, I expected that player to be Google but in 2006 it turned out to be Amazon.

Hence in anticipation of these changes we focused on commoditizing the platform space. We knew that consumers would have inertia to the change, so we looked for means of mitigating this. 

The fastest approach was to simultaneously offer a trial service (a public platform service) and allow businesses to build the system in their own data centres (a private platform service). This would give us time to build a customer base whilst we solved those concerns over lock-in, security of supply and pricing competition by creating a competitive market of suppliers with easy switching between them.

Such switching would require semantic interoperability i.e. you needed to be able to take your code and data from one platform supplier to another and know that it works. In order to make this happen, all the suppliers would have to be in effect running the same core system with competition based not on differentiation in features but in operation.

To achieve this, we would have to open source the system and provide some means of monitoring to ensure the suppliers were complying with the standard. We could play this assurance game with a trademarked image to show compliance.

Hence the plan was an open source platform play focused on creating a competitive market of suppliers with a higher order exchange that built upon lower order components supplied by future utility providers of infrastructure. We knew that inertia of existing platform suppliers would be high, so we had little to fear there and the competitive market would help reduce consumer concerns.

We also knew the importance of building a wide ecosystem of consumers and tools (more on ecosystems later), so everything had to be provided through APIs. This would also help with the exchange concepts.

Finally, and most critical of all, was the speed of creating higher order systems. It had to be fast, it had to be accessible, it had to be easy, it had to provide useful metrics and it had to cover as wide a range of potential consumers as possible. However, developers (like everyone else) have inertia to change and there were going to be huge educational barriers. We needed a hook.

In 2005, the team started exploring and building out this space. It took three weeks for the hook to be found, on a sunny afternoon when I was sitting in the boardroom plotting our play. Two of my outstanding engineers (they were all outstanding) walked in and said they could build a platform that used one language – JavaScript. The idea was that developers would build entire applications, front and back end, in this single language.

The CIO promptly came in to lend support to this “crazy” idea, expecting some sort of fight. A fight happened but not the type they were expecting. The hook of JavaScript was perfect and our argument stemmed from me pushing them to open source everything from the get-go. Rather than resistance they found enthusiasm.

The JavaScript hook was essential because whilst many developers would have inertia, there were a vast number of front-end developers with JavaScript skills who considered the back end a world of mystery. By taking this away, giving them one language and removing all the unpleasant tasks of building a system (what is commonly called yak shaving), we had a new audience who could embrace the change whilst others adapted.

We could give the world a commoditized platform built on APIs with a single language, entirely open sourced with a competitive market and exchange. We could remove all those horrible yak shaving tasks of capacity planning, worrying about scaling etc. We could give the world “pre-shaved” Yaks. 

And so, Zimki was born.

But what about why we were doing this? After all, the vagueness of why is where we started this whole discussion. Well, that bit is easy because we had a map. We knew our existing consultancy-based value chain was going away, we could see how the market would evolve, and we were positioning ourselves to exploit this, to create new revenue streams around platforms and to take advantage of the anticipated actions of new players in infrastructure and the inertia of past giants. Our map gave us the ability to see our choices and decide – the how, what and when were now harder than the why.

We weren’t some clumsy general bombarding a hill because every other general in every other battle was bombarding a hill. We had precise and clear situational awareness; we knew where to attack. But how long did it take Fotango to map out these changes and so accurately predict the movement and impact of cloud half a decade in advance? Months? Weeks? Roughly a few hours, though we did go overboard.

Now, whilst I’ve mainly talked about IT (principally because this story is based around a software company), the same exercise of mapping can be done for anything from an insurance company or a law firm to a car parking company. All have activities, practices and data that are evolving.

So what happened with Fotango?

By 2006, a utility platform as a service provided through APIs had been built. Libraries for common routines were created, a simple NoSQL object store was developed and templating systems were added; detailed metrics on pricing right down to the individual function, automatic conversion of new routines to web services, a GUI, exchange capabilities and a host of other useful features were ready.
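To give a flavour of what that meant for a developer, here is a minimal modern-day sketch of the single-language idea – an ordinary JavaScript function exposed as a web service, with no servers, capacity planning or scaling to worry about. It uses Node's built-in http module purely for illustration; it is not Zimki's actual API (Zimki predated Node) and all names are hypothetical.

```javascript
// Illustrative sketch only - not Zimki's API.
const http = require('http');

// Ordinary application logic: just a JavaScript function.
function greet(name) {
  return { message: `Hello, ${name}` };
}

// The platform's job (crudely simulated here) was to publish such routines
// as web services automatically - the developer wrote the function and
// nothing else.
const routines = { greet };

http.createServer((req, res) => {
  const url = new URL(req.url, 'http://localhost');
  const routine = routines[url.pathname.slice(1)]; // e.g. GET /greet?name=world
  if (!routine) {
    res.writeHead(404);
    res.end('unknown routine');
    return;
  }
  res.writeHead(200, { 'Content-Type': 'application/json' });
  res.end(JSON.stringify(routine(url.searchParams.get('name') || 'world')));
}).listen(3000);
```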

The speed of development was lightning fast. One system, a new form of wiki, was built and delivered live on the web from scratch in under an hour. Nothing came close in terms of speed and flexibility. We launched. 

At the same time Amazon launched and we couldn’t have been happier. Not only had this big player immediately re-affirmed the market but we could also start to reposition the system to run on EC2. Our open source plans were announced in late 2006 and everything was timed for OSCON 2007.

I’ve plotted many of these changes in figure 30 showing inertia barriers, higher order components and anticipated actions.

Figure 30 – Examining the map and change


Our map also told us what to watch for. We knew that some of the higher order systems to be built would become highly valuable. Our platform actually gave us a way of determining diffusion and success through customer consumption, so we had the opportunity to spot these new sources of value.

We knew that practices would change (co-evolve) because of more utility provision and we had to keep an eye out for this and adapt rapidly. We knew the existing industry would resist and be dismissive. Education was going to be key to turn the trickle into the flood. 

We knew that the techniques used to build utility services were fundamentally different from those needed for the genesis of the novel and new, so we re-organised around this. In fact we re-organised the entire business around the flow of evolution, but more on that later.

By early 2007 we were riding high. My CIO had done the first installation of Zimki onto EC2. The open source plans were ready and we had demonstrated the exchange capabilities and how you could switch from a private to a public supplier. We had the keynote at OSCON and Zimki was rapidly growing. At our last event in 2006 the crowd had been packed five or six layers deep around our booth. It was insane. We had hit a home run and we were ready to take on those past giants and become the new Titans.

But we failed. 

Not because the strategy or map was wrong; it wasn’t. Today, the growth of services in the platform space shows this. We didn’t fail because of the engineering team or because of weaknesses in the technology, as we were far ahead of any competitor and the team was outstanding. We didn’t fail to anticipate future consumer needs or competitor actions - all of this was spot on.

We failed because I failed. 

I had estimated in 2005 that this future utility market (called cloud today) would be worth $200 billion by 2016, something that more recent analyst reports have borne out. However, in 2005 it seemed like a crazy concept to many people that the world could change so rapidly. Those people included my board.

It was too far out of their comfort zone, it seemed incredible and, despite my running a profitable company, I lacked the political capital to pull this transformation off. In their minds, IT was best dealt with through outsourcing whilst the parent company concentrated on its core products along with new products like TVs. They were uncomfortable with the whole notion, our tactical use of open source ran counter to their normal experience of IP and they were set on their own path.

By the time of the keynote at OSCON, the open sourcing of Zimki had been stopped, the plans to outsource the group had become clear, and Zimki and any notions of a management buy-out were a dead duck. These were perfectly rational decisions based upon the parent’s focus on its current core and the internal messaging and changes at that time. This is where the power of mapping comes into play because, as Nokia has shown repeatedly, today’s core is not tomorrow’s.

Would Zimki have succeeded if it had continued? Well, ask VMware’s Cloud Foundry, Salesforce’s Heroku or Google’s App Engine, all of which followed much later. The answer is … we will never know. That time has passed.

By now, I’d hope the reader has some idea of the value of mapping value chains, how activities (and practices and data) evolve, how economies move through cycles, how characteristics change, why different techniques are needed at different stages of evolution and why companies have inertia.

In terms of progression, let us imagine a karate grading system of strategy. If we accept that in the outside world there are black belts (1st dan and above) in strategy then, assuming this was all new to you at the beginning, we’ve probably just earned our first junior belt (7th kyu, Yellow). We at least have some idea of how to map the environment, some notions of what an organization is and how things change. We have a long way to go and even at the end of this series, we will be barely scraping past 5th kyu. But continue we will.

I’m going to use the mapping techniques and fundamentals we’ve developed to explore new forms of organization, the fundamental importance of ecosystems, the use of open as a tactical weapon, numerous plays from defensive strategies such as tower and moat to attacking strategies, and the basics of putting together a battle plan. The mapping technique is important because it will help us see why this stuff matters rather than relying on the usual unclear and hand-waving notions that abound around ecosystems and the like.

I cannot emphasize enough that this series won’t make you a master strategist, and such people do exist – I work with several at the Leading Edge Forum who are truly frightening in terms of capability. But this series might help you and your company to survive in today’s battles.

---

Post 13 of 200

Next post in series ... No reason to get excited

Previous post in series ... Revolution

Beginning of series ... There must be some way out of here