Monday, February 18, 2013

Openness, Innovation and Maps.

In an examination of the 500 most active open-source projects, Krzysztof Klincewicz calculated that 99 percent focused on an existing technology or on modifying one for a new market, whilst only 1 percent represented the creation of a genuinely new idea, i.e. rates of genesis are little or no different from those of proprietary approaches.

BUT, since evolution results in the development of new low-cost building blocks (components), it does increase the genesis of higher-order systems as a secondary effect, and open technologies are seen to substantially increase “innovation” overall by providing platforms on which the novel and new is created. 

An example would be the Apache Web Server that has, by providing a low-cost building block (component) for web serving, accelerated the genesis of higher-order systems such as web sites. In effect, Apache has provided a ‘platform for web innovation’.

However, there is little difference between open and proprietary approaches in the higher order systems created with this component, and many “innovative” web sites are proprietary despite consuming underlying open components. Naturally, the use of an ‘open’ approach will drive the new higher order system rapidly to a more evolved state.

Hence an ‘open’ approach is not associated with increased rates of genesis for an activity but instead with rapid evolution, including feature completeness (through collaboration), market adoption (through reduced barriers to adoption) and the genesis of higher order systems (a ‘platform for innovation’).

The problem with the term ‘innovation’ is that it is used to mean all of these things. An open approach enables some forms of 'innovation' (product, market and the genesis of higher order systems through a platform) but has no significant impact on other forms (i.e. the genesis of an activity).

If we look at our map, the effect of open can be seen more clearly: how driving an activity to a more evolved state, along with efficiency, can result in the rapid generation of higher order systems (see figure 41).

Figure 41 Mapping view of Open’s impact


However, there is more to open technology than just this. Open technology approaches can be used to solve semantic interoperability issues by creating a standard, which in turn can enable competitive markets of suppliers, associated exchanges and assurance industries. By driving evolution, open can also be used to undermine a competitor's barrier to entry or remove a differential a competitor may have. ‘Open’ can be used as a powerful recruitment tool, a negotiation tool or a means of circumventing an existing obstacle such as purchasing procedures. It can even be used to protect an existing value chain.

Google’s core business is based upon its data value chain. It accesses data from many sources and then uses it to more accurately sell advertising space associated with specific words and actions. The company seeks to expand into just about any market where data can be modeled to its advantage. Dominance of the data value chain, the algorithms that model the data and the systems needed to run those algorithms are all critical to Google’s success and highly proprietary. While Google has been heavily involved in open technology and has enabled ecosystems to flourish on its APIs, Google’s core systems and algorithms are guarded secrets. In general, Google uses open approaches when they help keep the data flowing.

The rapid growth of smart phones posed a threat to Google’s business. The iPhone’s dominance meant that a significant part of the data value chain was being locked away in a ‘walled garden’. But by not open-sourcing iOS, Apple exposed itself to a counter-play around a common mobile OS and an ecosystem of hardware manufacturers. This is what Google’s Android has achieved, gaining the majority of the global unit market share, although Apple still makes the great bulk of the profits.

As with other ecosystems, this market benefits from higher rates of “innovation” than a single supplier could achieve. Apple, a company that once could have been thought of as leading the pack, today increasingly looks like a fast follower.

Whilst Android can be seen as a highly successful counter through open technology, it is governed and developed through a company-controlled process. Google uses its Compatibility Test Suite (CTS) to ensure interoperability of devices and to limit a collective prisoner’s dilemma scenario in which members of the Open Handset Alliance (OHA) all differentiate and thus weaken the core. CTS is behind the current Acer vs. Google row in China over Acer’s plans to offer a ‘forked’ version of the Android platform for the Chinese market. However, success in limiting one threat to Google’s data value chain has also created a new and probably unanticipated threat, as Amazon has taken Android and used it to create another ‘walled garden’ based on tablets: the Kindle Fire.

At the same time as this battle is occurring, Facebook’s open technology project on building large data centres (known as the Open Compute Project) can simultaneously be seen as driving efficiency, as a tool for negotiation with its own suppliers and as a means of weakening any company that depends upon highly efficient data centres as a barrier to entry into its own business. One such company is Google (see figure 42).

Figure 42 Playing the Game with Open Technology


The point to note is that open technology approaches can be used for more than just building a platform or encouraging efficiency. One question we should be asking is who actually thinks in these terms?


---

Post 18 of 200 on the Management and Strategy series.

Next post in series ... Openness vs Strategy

Previous post in series ... Open

Beginning of the Management and Strategy series ... There must be some way out of here


Open

When Linus Torvalds launched the Linux project in 1991, few people could have imagined that free software developed by open communities would ever match, let alone surpass, the efforts of industry giants such as IBM and Microsoft. Yet in a wide range of areas this has been precisely the case.

The full market value of this open technology is, however, impossible to calculate because it is not formally accounted for. How can we measure the value of Wikipedia’s commons of open content? In one study by O’Reilly Media, the global value of open source to just the computer hosting market was estimated at over $100 billion, and this is only a part of the overall open technology industry. According to research by Black Duck, there are over 600,000 open software projects using over 100 billion lines of code and drawing on 10 million person-years of effort.

Studies have also shown that roughly 90 percent of large organizations already use open source (and the rest probably don’t realize that they do), and that 75 percent of the world’s top 10,000 websites are built on open-source technology. Whichever way you look, open technologies are having a powerful impact.

Along with these extraordinary achievements, it has also long been speculated that the ‘open’ meme would eventually spread to non-IT sectors. There are three main reasons why this was likely. First, software has become increasingly important in virtually every industry, and thus it seems logical that the dynamics of the software industry would spread to other sectors. Second, as the Internet is now the backbone of most modern businesses, much more open forms of community innovation are now possible in just about every industry. Third, cost and innovation pressures are now so great in so many sectors that new approaches must increasingly be considered.

While these forces have existed for some time, it appears that open approaches are now starting to become widespread across the broader economy. In addition to many open source software projects, we have recently seen a growing emphasis in universities on open science and open curricula; increasing government commitments to open data; and, perhaps most intriguingly, impressive demonstrations of the power of open manufacturing designs when combined with 3D printing. There is now ‘open’ activity at virtually every level of business and IT.

However, for many, the word ‘open’ as in ‘open source’ still conjures up concepts of hippy idealism where people give away their work to others for nothing in a spirit of generosity and passion. In today’s world, this view is increasingly naive. An open approach is a powerful weapon in the hands of experienced strategists. It can be used to remove barriers to entry into an opponent’s business, to encourage standardization around your own practice, to develop an ecosystem that strengthens your position, as part of a land grab for new sources of value, and even as a source of new talent.

I use the term open technology above because these ‘open’ approaches are not limited to open-source software but also include open data, open hardware, open APIs, open science and open processes.

But how did something that began two decades ago with a simple bulletin board message, inviting people to collaborate on an operating system project, come to have such a powerful and transformative impact?

It is essential to understand that when Linus Torvalds posted that message he asked others to contribute and comment. By doing so, Torvalds created a community of contributors and launched a new ‘open’ method of community development.

The key is not only the openness of what is created (in terms of the licensing of the code or data) but also the collaborative approach to working. These two aspects, when combined, drive the evolution of any activity, practice or data. The openness drives ubiquity by removing barriers to adoption; the collaborative working drives feature completeness, understanding and hence certainty (see figure 40).

Figure 40 Impact of Open



The consequences of evolution should by now be familiar to the reader. As anything evolves it becomes more efficient and can enable the rapid development of higher order systems through componentization effects. Hence the effects of making something open are fourfold: rapid adoption, improved feature completeness, higher efficiency and the rapid genesis of higher order systems.

Whilst these are entirely separate effects, they are often lumped together under the single term ‘innovation’, as in product innovation (feature completeness), marketing innovation (adoption) and the genuine creation of something novel and new (i.e. genesis). It’s worth keeping these separate, and I’ll explain why in the next section.

---

Post 17 of 200 on the Management and Strategy series.

Next post in series ... Openness, Innovation and Mapping

Previous post in series ...  Ecosystems

Beginning of the Management and Strategy series ... There must be some way out of here


Friday, February 01, 2013

Something Shocking

This morning I was reading Glyn Moody's piece on the potentially huge FCO contract for Oracle.

I was pretty stunned by this, having, as with many others, seen the recent praise of Government IT by the NAO (National Audit Office), the praise of UK Gov IT by Tim O'Reilly and the reforms being undertaken. This felt like a huge step backwards, though there may be more to the story.

Just reading the story as is (and hence with none of the background), you could be forgiven for thinking the FCO has just stuck up two fingers to Francis Maude, The Cabinet Office, Government IT Strategy, The Austerity Agenda, George Osborne and the Treasury. Pretty shocking.

However, it wasn't the most shocking thing I heard this week.

During a session at Cloud Expo Europe (which was an excellent conference, outstanding as always and not to be confused with SYS-Con's vendor fest), another tale of IT horror in procurement appeared. This one floored me, so to speak.

The accusation was that a particular large company had transferred a highly expensive system (think lots and lots of millions) from one vendor to another, not because of any real strategic reasoning but for the purpose of CV boosting, i.e. the CIO was leaving and wanted the vendor's name on his CV.

My initial reaction was that this must be a joke. So I tweeted it and received a flood of messages from various sources, all giving similar stories: "I saw a CIO force a massive contract and implementation entirely for such personal-reputation reasons".

You are kidding?

So I thought I'd put together a very quick survey which I'd run this evening and see what happened. The results will be published here.

RESULTS

First of all, a huge thank you to all those who replied. I had 41 responses this evening, and whilst this is an extremely small sample which may well be biased, it does make interesting reading.

The graphs are as follows:-

Survey 1 - The question of Why in IT Strategy

This question asked whether you agreed that most IT strategy contains much about what, when and how, but that the why of action (the "real" strategy part) often appears vague and depends upon notions such as "others are doing it" or "the business wants it". 85% of respondents agreed or strongly agreed with the statement provided.


Survey 2 - The question of CV boosting

This question examined whether the reason "why" some purchasing decisions were made had nothing to do with strategy but was instead due to personal self-interest, such as boosting a CV with new skills. A quite shocking 73% of respondents agreed or strongly agreed with the statement. If this were actually representative of the industry, it would imply that an awful lot of technology investment is suspect.



Survey 3 - The question of influence.

The question examined how much "the business wants it", "other companies are doing it" or "CV boosting" impacts IT purchasing decisions. Respondents indicated whether each of these factors impacted 0% of IT purchasing decisions or one of several bands (1-25%, 26-50% etc.).

Interestingly, over 60% of respondents indicated that "other companies are doing it" impacted 50% or more of all IT purchasing decisions, which implies that case studies must have quite a significant impact on choice and that there's a strong "follow everyone else" mentality.

40% of respondents viewed "the business wants it" as impacting 26% to 50% of IT purchasing decisions, whilst 45% viewed it as impacting 50% or more, implying the business has a strong influence over what IT buys.

As for CV boosting, 44% of respondents indicated it impacted somewhere between 1% and 25% of all IT purchasing decisions, whilst over 48% indicated it impacted 25% or more. This would imply (if the sample is representative) that the practice is far more widespread than I had originally thought. Perhaps I was very naive to be shocked by a CIO spending millions of a company's financial capital in order to make their CV look good.


Now, as I've said, this was an extremely small sample, it may well be affected by sample bias and the survey was very rough and ready. However, it does seem to warrant further investigation.

The reason for further investigation is that the survey implies (if it is representative, which it may well not be) that much of IT strategy is anything but strategy and that purchasing decisions are being influenced by the business, by what others are doing and by self-interest.

This is a far cry from the use of IT as a competitive and strategic weapon which is exhibited in some companies.

Sunday, January 27, 2013

Ecosystems

When you consider an organization and the value chains that describe it, there are five groups of other organizations and individuals with which any company interacts.

There are the inputs into the value chain (suppliers), the outputs of the value chain (consumers), the people that operate and manage components of the value chain (employees), equivalent organizations that the company competes and co-operates with (competitors and alliances) and sources of learning or potential improvement to the value chain (wider business and academic environment).

These groups are the company’s ecosystem.

Each of these ecosystems provides many opportunities for discovery, improvement and manipulation of the landscape. The techniques involved often change depending upon how evolved the activities, practices and data models are and how they are used.

For example, let us consider a company whose output is more of a completed product (e.g. a kettle) than a component product (e.g. a nut and bolt), and whose product is sold to the public.

The consumer ecosystem (in this case public consumers such as you and I rather than other organizations) can provide information on improvement, quality control, reliability and price sensitivity. This is normally achieved through secondary sources, i.e. not directly derived from interaction with the product itself but instead through surveys, warranty cards, sales volumes and customer services; it can even extend to co-creation of the product.

The ecosystem can also be influenced through marketing, branding and association of the product with other values (e.g. buying this kettle will make you look cool, save a rainforest etc).

An ecosystem of consumers provides ample opportunities for manipulation and learning. There exist plenty of learned tomes on this subject and so I will assume the reader is familiar with this already.

The same opportunities also exist with all the other ecosystems, and various models exist for benefiting from them, such as the whole Enterprise 2.0 approach and the use of wikis and internal social media tools within the employee ecosystem.

In this section, I want to concentrate on four specific aspects of ecosystems: a model known as ILC, two-factor markets, alliances and the focus of competition.

The ILC model
This model is most frequently used when the output of a value chain is a component of other value chains (e.g. in the technology industries - a software development suite used by other companies to develop other software products or provision of a software API which is consumed by multiple other systems).

The operation of the model is fairly simple. The supplier provides a component that others consume in their value chains, hence creating an ecosystem. Through efficiency in provision, the supplier encourages the ecosystem to create new activities (i.e. genesis) by reducing the cost of failure. Genesis is by its nature highly uncertain and risky; hence reducing the cost of failure becomes a way of encouraging innovation.

As any of these new activities spread, the supplier can detect this diffusion through consumption of the underlying component, thereby leveraging the ecosystem to spot future sources of wealth. The supplier then commoditizes these newly detected activities to components, hence enabling the development of new higher order systems.

In effect, the supplier eats part of the ecosystem (i.e. those diffusing higher order activities) in order to provide new components that help the ecosystem to grow.

For example, the provider of a software development environment sold as a product can monitor rapid growth in consumption of that product (i.e. license purchases) and investigate the relevant industries to identify any new activities being built. This is not actually that effective a technique, because the cost of monitoring is high and there are significant time delays.

But let us suppose you were a provider of utility computing infrastructure services (e.g. something like Amazon EC2). The provision of these services not only enables the rapid creation of higher order systems, by encouraging experimentation through a reduction in the cost of failure, but also gives the supplier direct access to information on consumption.

Let us suppose that one of these new higher order systems (e.g. “big data” systems built with Hadoop) started to diffuse. Through consumption of the component infrastructure service you could detect this diffusion in close to real time and hence rapidly decide to commoditize the new activity to your own component service, e.g. in this case by introducing something like Amazon Elastic MapReduce.

Naturally, you’d be accused of eating the ecosystem if you did this repeatedly, but at the same time your new component services would help grow the ecosystem and create new higher order services.
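To make the leverage step concrete, here is a minimal sketch, in Python, of how a utility provider might scan consumption data to spot a higher order activity diffusing across many customer value chains. The record format, activity tags and thresholds are hypothetical illustrations, not any provider's actual system.

```python
from collections import Counter

# Hypothetical telemetry: one record per consuming system, tagged with the
# higher order activity it implements. In a real utility such tags might be
# inferred from machine images, traffic patterns or marketplace metadata.
usage_records = [
    {"customer": "acme", "activity": "hadoop-cluster", "instance_hours": 1200},
    {"customer": "globex", "activity": "hadoop-cluster", "instance_hours": 800},
    {"customer": "initech", "activity": "web-app", "instance_hours": 300},
    {"customer": "umbrella", "activity": "hadoop-cluster", "instance_hours": 450},
]

def diffusing_activities(records, min_customers=3, min_hours=2000):
    """Return activities consumed by many distinct customers at volume:
    candidates for commoditization into a new component service."""
    customers = {}      # activity -> set of distinct customers
    hours = Counter()   # activity -> total instance hours
    for r in records:
        customers.setdefault(r["activity"], set()).add(r["customer"])
        hours[r["activity"]] += r["instance_hours"]
    return [
        activity for activity, consumers in customers.items()
        if len(consumers) >= min_customers and hours[activity] >= min_hours
    ]

print(diffusing_activities(usage_records))  # ['hadoop-cluster']
```

The two measures it checks, how many distinct customers consume an activity and at what volume, are exactly the volume and variation factors discussed under 'Scope of the component' below.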

The operation of the model is shown in figures 36 to 38; it exploits componentization effects as well as the changing characteristics of activities as they evolve.

In essence the supplier encourages others to innovate (take the high risk gamble associated with chaotic activities), leverages the ecosystem to spot diffusion of successful changes and then commoditizes rapidly to component services. The cycle of Innovation – Leverage – Commoditise (ILC) is then repeated for subsequent higher order systems.

Figure 36 – A Standard View of Evolution

Figure 37 – Ecosystems and ILC



Figure 38 – A Map View of ILC


The component services are in effect your “platform” (though I prefer the term “garden”) around which you carefully nurture and grow an ecosystem. Like any gardener you’d have to balance this eating (or harvesting) of the ecosystem with the benefits that new components bring in growing it and the overall health of the garden (i.e. level of disquiet over the occasional munching session). 

The effectiveness of this model depends upon a wide range of different factors: -

Scope of the component: how broadly useful the component is. Is it a specialized component (e.g. a software service giving train times for a specific station) or is it used in a wide variety of value chains (e.g. a nut and bolt, electricity or computing infrastructure)? The essential measures here are volume (how much it is used) and variation (the number of value chains consuming it).

Speed of feedback: ideally information needs to be captured directly on consumption of the activity rather than through secondary sources such as surveys. For this reason, it’s ideally suited to the world of utility provision where the supplier can directly detect the consumption of a component.

Ability of the supplier to act: is the supplier able to capture the information and are they willing to leverage the ecosystem to its benefit?

Efficiency of provision: how efficiently is the underlying component provided? Critical to this game is reducing the cost of failure within the ecosystem, hence encouraging experimentation and the creation of new activities by others. Certainly, the provision of former products as utility services will bring consumers the benefit of reduced capital expenditure; however, efficient provision will also exploit volume effects, and the larger the ecosystem, the higher the rate of genesis.

Management of the ecosystem: the act of commoditizing to new component services is one of eating the existing ecosystem (whether through acquisition or copying). The purpose is to provide new component services that help the ecosystem to grow and increase the usefulness of the entire range of components provided by the supplier. Care should be taken not to eat the ecosystem too aggressively; otherwise organizations may become wary of building with the components.

The effects of this model are extremely powerful but it needs to be managed carefully. For the supplier, the rate of innovation (i.e. genesis of novel activities) is no longer dependent upon the physical size of the supplier but the size of the ecosystem of consumers. 

Equally, the ability to spot new and useful activities extends beyond the supplier and its interaction with its consumers to an ecosystem of consumers who in turn supply activities to other consumers, i.e. this much wider ecosystem is all consuming the base component activity and providing information on diffusion. Finally, the efficiency of provision depends not only on the volume required by the consumers but also on this wider ecosystem, where consumers themselves are suppliers to other consumers.

Hence the rate of innovation, customer focus (i.e. spotting new and useful trends) and efficiency of the supplier all increase with the size of the ecosystem itself rather than the physical size (as in number of employees) of the supplier. 

Through the use of a model like ILC, it’s entirely possible for a company to appear simultaneously innovative, customer focused and efficient, which runs counter to the popular management doctrine of choosing one. Furthermore, the rates of each (if properly managed) can increase with the size of the ecosystem, which itself can grow at a faster rate than the physical size of the company.

In other words, the bigger the company gets the more innovative, efficient and customer focused it becomes.

For many of the followers of my writings over the last decade this will be a “where’s the good stuff?” moment. However, for others this model might be quite surprising, and hopes of competitive advantage might arise. So, I want to once again bring things down to earth, because ILC can overexcite some people.

The origin of the technique above dates to around 2002. It was an essential part of the Zimki strategy in 2005 and so well described by 2010 that I included it in part in “The Better for Less” paper, which in turn had some influence in formulating UK Government ICT strategy. ILC and techniques of exploiting ecosystems are powerful but becoming increasingly common. Using and growing them is today more a matter of survival than of gaining advantage.

There are three other aspects of ecosystems that I also want to mention.

Two Factor Markets
The two-factor market is a special case of ecosystem that brings suppliers and consumers together (hence two-factor). Examples would include a farmers’ market, an exchange and Amazon’s online retail site. These not only provide ample opportunity for exploitation but also have powerful network effects, as consumers attract suppliers and suppliers attract consumers.
Alliances
Where you are competing (or may compete) against a large and threatening ecosystem, or you simply want to prevent this scenario from occurring or to nullify any advantage, the only way to do so is to build a bigger ecosystem. However, you don’t have to do this alone; you can operate in an alliance with a view to taking a “small piece of a big pie” rather than a “big piece of a small pie”.

In the case of Zimki, the stated purpose of open sourcing the technology was to create a large pool of suppliers that competed on service, with switching between them, in order to overcome consumer concerns over lock-in. The focus for Fotango was to take “a small piece of a big pie” whilst building an exchange (a two-factor market) of Zimki suppliers and consumers.

Now, creating such alliances can be tricky, because individual suppliers (especially those with a product mind-set) will attempt to differentiate on features rather than service, which in turn will limit switching, hence raising consumer concerns whilst weakening the overall ecosystem. Equally, suppliers will be concerned over any loss of strategic control or dependency upon a third party, i.e. a captured rather than a free market.

Hence with Zimki, the technology could have been provided as a proprietary offering, but each Zimki supplier would then have been dependent upon Fotango. We would in effect have exerted a tax on the market, both in licensing the technology and in controlling its future direction; furthermore, we would have increased barriers to adoption due to these constraints. The upside is that we would have limited any differentiation by suppliers.

By open sourcing the technology, we would remove the constraints, the barriers to adoption and any tax on the market. However, we would open the door to differentiation by suppliers on features rather than service, thereby weakening switching and the overall ecosystem.

To balance this, we needed to use a technique of assurance through trademarked images.

By open sourcing the entire platform technology, we would enable other competitors to become Zimki providers, remove barriers to entry and help establish a market. The trademarked image was only to be available to those who complied with our monitoring service, and hence we could provide assurance that a given provider hadn’t differentiated the service by function in a way that would leave consumers unable to switch.

This mix of open sourced technology and assurance through monitoring and a trademarked image is a way of balancing the needs of suppliers (i.e. low barrier to entry, a free rather than a captured market controlled by one player), the needs of consumers (a competitive market with switching) and the needs of the company forming the market (a wide and healthy ecosystem which can compete against alternatives).

Recently, similar examples of such play have appeared e.g. CloudFoundry, a platform as a service offering provided by VMware, is not only open sourced but provides a trademarked assurance service through CloudFoundry Core. 

Equally, Google, whose value chain around data was under potential threat from Apple and the walled garden created by iOS, has an open source technology (Android), a trademarked image for Android, the Open Handset Alliance and a mechanism of assurance through Android’s compatibility test suite. The Android ecosystem has rapidly risen to dominate the smartphone market.

The importance of the control mechanism and careful management lies in negating any “collective prisoner’s dilemma”, where the members of an alliance, in an act of self-mutilation, attempt to differentiate in their own immediate interests, weakening the entire ecosystem and their own long-term interests in the process.

The game is also highly nuanced. For example, when facing an existing and effective competitive ecosystem it is often better to co-opt rather than differentiate from it in the first place, a model of “embrace and extend”.

Hence, in the case of utility infrastructure provision, if you were to go up against Amazon and its well-developed and highly effective ecosystem, then co-opting that ecosystem would be the first order of the day. Since the ecosystem (and all the higher order activities created) is built upon Amazon’s standard interfaces (APIs), this means in effect providing identical APIs in both form and function.

Fortunately, under both European and US law, APIs are not currently copyrightable (being principles) whereas the code that implements them is (being expression). Hence APIs can be re-implemented through reverse engineering. We’ve seen this practice with groups such as Eucalyptus and CloudStack (a Citrix project which is part of the Apache Foundation), both of which have clearly stated aims of emulating the Amazon APIs.
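As a small illustration of what co-option in form and function makes possible, a standard AWS client library (here Python's boto3) can simply be pointed at an API-compatible endpoint. The endpoint URL and credentials below are hypothetical placeholders.

```python
import boto3

# Because an EC2-compatible cloud emulates the Amazon API in form and
# function, the same client code can target either provider simply by
# swapping the endpoint: the essence of co-opting an ecosystem rather
# than differentiating from it.
ec2 = boto3.client(
    "ec2",
    endpoint_url="https://compute.example-compatible-cloud.com",  # hypothetical
    region_name="us-east-1",
    aws_access_key_id="EXAMPLE_KEY_ID",          # placeholder credentials
    aws_secret_access_key="EXAMPLE_SECRET_KEY",
)

# Identical calls work against both providers, so higher order systems
# built upon the API can switch suppliers with minimal change.
response = ec2.describe_instances()
for reservation in response["Reservations"]:
    for instance in reservation["Instances"]:
        print(instance["InstanceId"], instance["State"]["Name"])
```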

Creating a competitive alliance is not a simple task, and neither is competing against an established ecosystem. There are plenty of pitfalls, including the breakdown of an alliance through a collective prisoner’s dilemma. However, as in the case of Android, when it works it’s a highly powerful tool.
The focus of competition
The last thing I wish to mention is the focus of competition. The process of evolution is driven by consumer and supplier competition, and those consumers and suppliers can be either individuals (as in the general public) or companies. This is not a static situation; it is fluid.

For example, let us consider the use of computers. To begin with both the suppliers and consumers of computers were companies. The sale, provision and competition around computers with the first products (such as the IBM 650) were all focused on business to business (B2B).

However, computers were made of components such as processing, storage and then networks that were evolving and becoming more of a commodity. The rate of evolution of those different underlying components affected the path that computing took. For example, because processing and storage commoditized faster than the network, the industry went through a transition of Mainframes to Mini computers to PCs to Tablets. However, had networks commoditized faster relative to processing and storage then an entirely different path of Mainframes to Personal Terminals to Tablets would have been possible. The rate of evolution of components can alter the path that higher order systems take.

However, what also happened is that the focus of competition in part shifted from being governed by B2B to being governed by Business to Public Consumers (B2C) as companies sold personal computers. In the same way, email (which started as primarily an academic and then business focused tool) shifted to the public consumer market with the introduction of services such as AOL.

What is important to understand is that the rate of evolution is not uniform between the business ecosystem and the public consumer ecosystem. Hence, as the competition around email shifted to the public consumer market (with the introduction of services such as Yahoo and Google Mail), the public consumer market developed highly commoditized email services. In many cases these were vastly more commoditized and efficient than the equivalent activity in the business ecosystem, which was often provided through products.

Pressure mounted for those business consumers of email to adapt (and in many cases adopt) these more “consumerized” services available to the members of the public. 

This shift of competition, and hence evolution, from being governed by B2B (where companies represent both the suppliers and consumers of the activity) to being governed by competition in the public consumer space is known as “Consumerization” (as described by Doug Neal of the LEF in 2001). See figure 39.

Figure 39 Consumerization


Now, not all activities undergo this process. Many remain governed by competition in one ecosystem, i.e. between companies, with companies being both consumers and suppliers. An example of this would be financial ERP systems.

Equally, consumerization is not a one-way street. Activities that evolve under competition in the public consumer space can be forced into the business ecosystem. An example of this would be radio broadcasting equipment, once a vibrant and rapidly developing activity in the public consumer space, with many hobbyists creating and sharing capabilities, which was forced under the control of companies through legislative control of the radio frequency spectrum.

The point to note is that the rate of evolution can rapidly change if competition around an activity switches focus from the business to the public consumer ecosystem. Now, I won’t detail all the aspects of ecosystems, mainly because there are numerous books covering Enterprise 2.0, the use of social media and supply chain relationships, and the above should provide the reader with the basics for the mapping exercises later in this work.

It’s enough to be aware that various forms of ecosystem exist, that exploiting them can have powerful effects, that the rate of evolution of components can affect the path that higher order systems follow, and that the rate of evolution can rapidly change as the focus of competition around an activity switches from one ecosystem to another. By now, if you've followed the entire series, you should have a good appreciation of the complexity of change, and also why, without maps, it's no wonder that strategy becomes vague hand waving.

One final note: you should also have some understanding of the difference between the terms Consumerization (the process by which the focus of competition shifts from the business to the consumer ecosystem), Commoditisation (the process of evolution for activities) and Commodification (the process by which an idea with social value becomes instantiated as an activity with economic value, an idea from Marxist political theory). Endless confusion abounds because these entirely different concepts are constantly jumbled together as though they were the same.

Since I’ve already mentioned open source in this section, I will now turn our attention to the use of open as a competitive weapon and then finally, we can get back to maps.

---

Post 16 of 200 on the Management and Strategy series.

Next post in series ... Open

Previous post in series ...  The Next Generation

Beginning of the Management and Strategy series ... There must be some way out of here


Saturday, January 26, 2013

The Next Generation

In 2005, I had the basics of evolution and mapping. By 2007, I had enough supporting data to call it a weak hypothesis (correlation, causation and thousands of data points). What I lacked, beyond the use of the mapping technique in predicting market and competitor changes, was a set of more general predictions.

However, the cycle of change was pretty clear on the co-evolution of practice and how new organizations formed. The industry was already going through one change caused by the commoditization of the means of mass communication (e.g. The Internet) that had all the normal patterns plus a new form of organization, the Web 2.0.

What I wanted to know was whether we could catch the next wave. Would the shift of numerous IT-based activities to more utility services create a new organizational form? Could I catch this?

Timing was critical, and unlike my earlier work in genetics, where populations of new bacteria are grown rapidly, I had to wait. So wait I did.

By 2010, the signals were suggesting that this was happening, so at the LEF (Leading Edge Forum) we undertook a project in 2011 (published the same year) to examine this. Using population genetics techniques, we looked for whether a statistically distinct population of companies had emerged and whether their characteristics (phenotypes) were starting to diffuse. It was a hit-or-miss project: we’d either find the budding population or it was back to the drawing board.

We already knew that two main populations of company existed in the wild: the Traditional enterprise and the Web 2.0. The practices of Web 2.0 were already diffusing throughout the entire environment. Most companies used social media, thought about network effects, and used highly dynamic and interactive web-based technology and its associated practices. The two populations were hence blurring, through adoption of practices (i.e. the Traditional were becoming more Web 2.0-like) but also partially because past companies had died. But was there now a Next Generation budding, a new Fordism?

In early 2011, I interviewed a dozen companies that we thought would be reasonable examples of Traditional and Web 2.0, and where a couple of highly tentative Next Generation examples might exist. We developed a survey from those companies, removed them from the sample population to be examined, and then interviewed over 100 companies divided roughly equally between those that described themselves as Web 2.0 and those that called themselves more Traditional. We examined over 90 characteristics, giving a reasonable volume of data.

From the cycle of change and our earlier interviews, we had guessed that our Next Generation was likely to be found in the Web 2.0 group and that, in terms of strategic play, they would tend to be focused on disruption (the war phase) rather than profitability (the peace phase). From our earlier interviews we had developed a method of separating companies into candidate populations.

So, we separated the population sample into these categories and looked at the population characteristics: means and standard deviations. Were there any significant differences? Were the differences so significant that we could describe them as distinct populations, i.e. just as in a sample of mice and elephants there exist significant characteristics that can be used to separate out the two populations?
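To illustrate the style of analysis involved, here is a generic sketch using made-up scores rather than the actual survey data: a two-sample test on a single characteristic, of the kind that could be repeated across all ninety-plus characteristics.

```python
import numpy as np
from scipy import stats

# Hypothetical, illustrative data: one surveyed characteristic scored 1-5
# (e.g. open source viewed as "means little" through to "competitive
# weapon") for two candidate populations of companies.
traditional = np.array([2, 1, 2, 3, 2, 2, 1, 3, 2, 2])
next_gen = np.array([4, 5, 4, 4, 5, 3, 5, 4, 4, 5])

# Welch's two-sample t-test asks whether the difference in means is larger
# than random sampling from a single population would plausibly produce.
t_stat, p_value = stats.ttest_ind(traditional, next_gen, equal_var=False)

print(f"mean (traditional) = {traditional.mean():.2f}")
print(f"mean (next gen)    = {next_gen.mean():.2f}")
print(f"p-value            = {p_value:.6f}")

# A very small p-value repeated across many independent characteristics is
# what suggests two distinct populations (mice and elephants), not one.
```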

I ran our analysis and waited. It was an edgy moment, one of those I’m well used to. Had we found something or, as with many attempts before, had we found nothing? I tend to assume nothing, and when there is something, I tend to doubt it.

The populations all contained a mix of medium and huge companies, and within this we found statistically significant population differences across a large number of the characteristics. I re-examined, looked back through my work, tested, sought the advice of others and tested again, but the result remained.

For example, I examined each company’s view on open source: whether it was primarily something that meant relatively little to them, a mechanism for cost reduction, something they relied upon, something they were engaged in, or a tactical weapon to be used against competitors. The result is provided in figure 32, subdivided by population type. Whilst the traditional companies mainly viewed open source as a means of cost reduction and something they relied upon, the Next Generation viewed it as a competitive weapon and something they were heavily engaged in. The Web 2.0 group held a broader spread of views, from cost to weapon.

Figure 32 – Views on Open Source by Population type



This difference in population was repeated throughout many characteristics spanning strategy, tactics, practice, activities and form. The odds of achieving the same results through random selection from a single population were exceptionally low. We had found our candidate Next Generation.

To describe this Next Generation, it is best to examine them against the more Traditional. Some characteristics show overlap, as would be expected. For example, when examining a company’s highest-priority focus for the provision of technology (whether profitability, enhancement of existing products and services, innovation of new products and services, enabling other companies to innovate on top of their products and services, or creating an engaged ecosystem of consumers), overlaps exist (see figure 33).

Figure 33 – Percentage of Companies ranking the following focus as high priority by population type.



Traditional companies were mostly focused on profitability (a peace phase mentality) whereas the Next Generation are mostly focused on building ecosystems.

In other areas, the differences were starker, for example in an examination of computing infrastructure and whether the company tended to use enterprise-class servers, more commodity servers or a mix of both (see figure 34).

Figure 34 – Type of Servers used by Population Type.

However, it should never be expected that there are no common characteristics or overlap; instead one looks for significant differences in specific characteristics (i.e. mice have two eyes, same as elephants).

Using these populations, we then characterized the main differences between Traditional and Next Generation in order to highlight them. There are also significant differences between Next Generation and Web 2.0, though naturally these are smaller than the differences from Traditional enterprises, which formed in an earlier cycle of change.

Figure 35 gives the main differences (though not all) and we’ll go through several of these differences in turn.

Figure 35 – Difference between Next Generation and Traditional



Organizational Form

Structure
Traditional organizations used a departmental structure, often by type of activity (IT, Finance, Marketing) or region. The Next Generation used smaller cell-based structures (with teams typically of fewer than twelve), often with each cell providing services to other cells within the organization. Each cell operated fairly autonomously, covering a specific activity or set of activities. Interfaces between cells were well defined.

Culture
In traditional organizations, culture is seen as relatively fixed, difficult to change and often a source of inertia. In the Next Generation, culture is viewed as more fluid and gameable.

Strategy / Tactical Considerations

Focus
Traditional organizations tend to focus on profitability (a peace phase mentality) whereas the Next Generation is primarily focused on disruption of pre-existing activities (a war phase mentality). This is not considered to be a long-term distinction.

Open Source (including Open Data, Open APIs etc)
In traditional organizations, the use of open systems (whether source, data, APIs or other) is viewed primarily as a means of cost reduction. In some cases technology or data is provided in an open manner.

In Next Generation, open is viewed as a competitive weapon, a way of manipulating or changing the landscape through numerous tactical plays from reducing barriers to entry, standardization, eliminating the opportunity to differentiate, building an ecosystem and even protecting an existing value chain.

Learning
Traditional organizations tend to use analysts to learn about their environment and changes that are occurring. The Next Generation use ecosystems to more effectively manage, identify and exploit change (more on this in the next section).

“Big Data”
Traditional organizations use big data systems and focus primarily on the data issue. The Next Generation is run by extensive use of modeling and algorithms; the focus is not on the data per se but on the models, and these systems are not simply used by the company, they run it.

Practices & Activities

Architecture and Infrastructure
Traditional organizations tend to use architectural practices such as scale-up (bigger machines) for capacity planning, N+1 (more reliable machines) for resilience and single, time-critical disaster recovery tests for testing failure modes. These architectural practices tend to determine a choice of enterprise-class machinery.

The Next Generation has entirely different architectural practices: scale-out (distributed systems) for capacity planning, design-for-failure for resilience and the use of chaos engines (i.e. the deliberate and continuous introduction of failure to test failure modes) rather than single, time-critical disaster recovery tests. These mechanisms enable highly capable systems to be built using low-cost commodity components.
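As a flavour of what a chaos engine involves, here is a minimal sketch; the instance-listing and termination helpers are hypothetical stand-ins for cloud API calls, not any particular tool.

```python
import random
import time

def list_instances():
    """Hypothetical helper: return identifiers of the running instances
    in the target group (in practice, a cloud API call)."""
    return ["web-01", "web-02", "web-03", "worker-01"]

def terminate(instance_id):
    """Hypothetical helper: kill the given instance (in practice, a cloud
    API call). Design-for-failure means the system must recover on its own."""
    print(f"terminating {instance_id}")

def chaos_engine(kill_probability=0.1, interval_seconds=3600, dry_run=True):
    """Continuously introduce failure: each interval, perhaps kill one
    randomly chosen instance. Surviving this routinely replaces the single,
    time-critical disaster recovery test."""
    while True:
        if random.random() < kill_probability:
            victim = random.choice(list_instances())
            if dry_run:
                print(f"[dry run] would terminate {victim}")
            else:
                terminate(victim)
        time.sleep(interval_seconds)

# chaos_engine(dry_run=True)  # run only against systems designed for failure
```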

Development
Traditional companies tend towards singular management techniques for development (e.g. Agile or Six Sigma) and often operate on a change-control or regular update process. The Next Generation tends towards mixed methods depending upon what is being done, and the development of novel aspects is usually continuous, with no specific time period for releases.

The LEF published the work in December 2011, and since then we have observed the diffusion of many of these changes. However, I very much don’t want you to read the above list and get the impression that “this is how we create an advantage!” Instead, be realistic. The above characteristics are already diffusing and evolving; tens if not hundreds of thousands of people and their companies are well aware of them. You’ll need to adapt simply to survive. Any real advantage has already been taken, and the only advantage to be gained is over those who are slower to adapt.

However, the point of this exercise is not to ask what the new organizational forms are (many books have been or are being written by others on this subject of the New Fords) but to show that a new organizational form could be predicted to emerge in the first place.

The model suggested this in 2005 but I had to wait until 2011 to confirm this in its first instance (such is the slow nature of experimentation with companies). The above appear to be the characteristics of the New Fordism though it’ll take a decade or more to really confirm this.

By which time, if the model holds, the next wave of change (related to commoditization of the manufacturing process itself) will itself have created a new Next Generation, a “New New Fordism” so to speak. In much the same way, every previous wave has created its own Fords: the Systeme General, the Plymouth and the American System, Fordism etc.

Now, who are the New Fords and is there any pattern to where this evolution is heading? Well, the former I’ll keep to myself (though many will be able to name several who are) whilst the latter I’ll discuss briefly when we talk about the future. 

For now, it’s enough to know that co-evolution of practice can lead to new organizational forms and this is happening today. In the next section, I want to turn my attention specifically to the subjects of ecosystems and open source, after which we can revisit our map and get on with the really interesting stuff.

---

Post 15 of 200 on the Management and Strategy series.

Next post in series ... Ecosystems

Previous post in series ... No Reason to Get Excited

Beginning of the Management and Strategy series ... There must be some way out of here


No Reason To Get Excited

By now, the reader should understand that things are created (genesis) which are uncertain, rare, constantly changing and hence chaotic by nature. These things diffuse through society through various constantly improving iterations (evolution) driven by competition (consumer and supply). Ultimately they become a more common, well-defined and standardized (i.e. linear) commodity. 

This process of evolution impacts activities (things we do), practices (how we do things) and data (through models of understanding).

Where those things can become components of higher order systems (e.g. nuts and bolts within machines), then as they evolve (become more linear) they accelerate the genesis of those higher order systems through componentization. This extends our value chains. Hence evolution is associated with increasing efficiency (of what is evolving) and increasing speed and agility in the creation of higher order systems. Genesis begets evolution begets commodity components begets higher order genesis, ad nauseam.

The process is a continuous cycle that we commonly describe as “progress”.

The new higher order systems are sources of future wealth and hence we see flows of capital from the past to the new (creative destruction). However, the process is not smooth because practices tend to co-evolve with activities and hence we see inertia to change due to legacy constraints. 

Equally suppliers have inertia due to past success, so the later stages of evolution (in particular the switch from product to utility) are associated with new entrants.

However, the change is inevitable as consumers are in constant competition and the benefits of efficiency, increased agility in building higher order systems and new sources of wealth turn a trickle into a flood. All companies have to adapt just to stand still relative to an evolving and surrounding environment (Red Queen).

This pace of change will often catch out suppliers, lulled as they are by consumer inertia to the change and by the previous, more peaceful, slow-moving stage of relative competition. Hence we can describe the transition of competition around an activity as one from relative peace, to war, to wonder and the creation of new higher order marvels.

The peace state can be characterized as one of incumbent giants and relative competition, where sustaining change exceeds disruptive change. The war state is one of new entrants and a fight for survival, where disruptive change exceeds sustaining change.

However, the progression from peace to war is not unexpected, and there is no reason (from culture to inertia) why the past giants cannot be prepared. This form of disruption, unlike the case of unexpected changes in the market, is entirely preventable yet rarely prevented.

Of course, the change reduces barriers to entry and allows for new things that can impact value chains in unexpected ways (from gas lamps to light bulbs, from naturally harvested ice to ice making machines). Hence some indirect disruption is unpredictable and the innovator’s dilemma runs rampant. 

This cycle of changing states (wonder, peace, war), created by the interaction of inertia with the economic pressures of evolution (efficiency, agility and new sources of wealth), itself driven by competition (user and supply) and the need to adapt to that competition (Red Queen), appears at both a local and a macroeconomic scale.

At the macroeconomic scale we tend to call these Ages, as in the Industrial Age, the Mechanical Age and the Internet Age. Each has its time of Wonder, Peace and War.

In certain cases that which is evolving can accelerate the entire process for all things by improving communication e.g. postage stamp, telephone, printing press, the Internet. In all cases, the drive towards more evolved and higher order systems consumes greater quantities of energy (though our waste vastly outweighs this).

Beyond creating inertia, the co-evolution of practice with activities will result in new organizational forms from Fordism (the age of electricity) to Web 2.0 (the age of the Internet). In all cases, these new organizational forms are more adapted to this changing world of higher order systems and are more effective at managing the flow of change from chaotic to linear. 

Each age can be associated with the evolution of organisations themselves.

However, our systems are far from perfect. Our tendency towards one-size-fits-all (one of the solutions to Ashby’s Law of Requisite Variety) tends to create a yoyo between extremes, whether in project management (agile vs six sigma), marketing (push vs pull) or structure (networked vs hierarchical). A better balance can be found through embracing both, and as organizations evolve we tend towards this balance.

Our confusion over this simple pattern stems mainly from terminology and our inability to see it. We use the word innovation to mean genesis, a feature differentiation of a product or even utility provision of a pre-existing model. Our use of the word hides the obvious pattern of evolution in a fog of “Everything’s an Innovation”.

The same problem extends to other parts of our language. The process of evolution (often called commoditization) is different from the process by which an idea gains economic value through implementation as a tradable thing (i.e. an idea or concept turned into something real). Alas, the process that represents the conversion of social capital (an idea) into financial capital (a tradable thing) is called commodification, and whilst it is entirely different from the process of evolution, the terms commodification and commoditization are often used to mean the same thing. It’s a bit like using the word chalk to mean cheese.

Hence, in a world where obvious patterns are clouded by the misuse of terms, and where companies often compete without any means of understanding the landscape they exist within, we often believe that things are a lot more random than they are.

Strategy often becomes one of “do what others are doing” and vague hand-waving notions. We grasp at concepts like inertia and disruptive innovation as though they explain all: “We couldn’t help ourselves, it was an unexpected change, we were caught by the Innovator’s Dilemma”.

In some cases you are, in many cases you are not. You could have survived.

And so the cycle continues: new activities appear and evolve, creating new inertia barriers (due to success), and a new war results from the inevitable march of competition. The same lessons are repeated, new forms of organization appear and we marvel at the changes.

The plethora of new activities created also results in new forms of data we have yet to understand, it is unmodelled or unstructured (if you insist). We stare in amazement at our progress as though somehow this time of wonder was any more wondrous than any previous time of wonder. The cycle continues.

The cycle has occurred numerous times over the last three hundred years. Alas “the one thing we learn from history is that we never learn from history”. In the hope that we learn this time, I’ve drawn the cycle in figure 31 and I’ve taken the liberty of removing the axis of value chain and drawing it as cycle. Each time we move through the cycle, value chains extend.

Figure 31 – A Frequently Repeated Cycle.

So, let us bring ourselves to our modern day.

Driven by consumer and supply competition, the activity of computing infrastructure has evolved from products to more of a utility. It is so widespread and so well defined it can now support the volume operations needed.

New entrants not encumbered by pre-existing business models (such as Amazon) have initiated this change and a resultant state of war has developed in an environment that was once governed by relatively peaceful competition between incumbent product giants (Dell, HP, IBM). 

We see an explosion in the genesis of novel higher order systems created on these utility services and a flow of capital into those new higher order systems, and we marvel at the speed and agility of creation. Endless books are written on creative destruction, componentization and change.

As expected, practices have co-evolved with the activities. We talk of distributed systems, design for failure and chaos engines (or monkeys if you like). An entire movement known as “devops” has developed around these changes. 

Consumers of the past models of provision (i.e. computing products such as servers) have also shown inertia to change. Citing all the typical risks and the issue of legacy estates, they want all the benefits of agility, efficiency and new sources of wealth but without the cost of transition due to re-architecture. They want the new world but provided in an old way. They want the old dressed up as the new. They talk of enterprise clouds that are more rental services than utility.

Many of these consumers are oblivious to the fact that those benefits (efficiency, agility, wealth) are also pressures for adaptation which will force them to change as competitors do. It's not a question of "If"; it never has been. It's a question of "When".

Their suppliers, encumbered by past business models, race to provide this "old" world dressed up as new. Suffering from their own inertia, they are unaware that the trickle to the new world will become a flood at a pace they are not expecting. They watch Amazon thinking it will tail off, that it's really only for new start-ups and green field sites. This is wishful thinking.

Along with changing practices and movements such as “DevOps”, new forms of organization appear. New structures, new ways of operating diffuse and evolve. Tomorrow’s Fordism has been with us for many years and it’s spreading.

As expected, for any student of history, we have also seen an explosion (as in genesis) of new data. Whilst the scramble to provide "big data" systems focuses on the issues of volume, it is the unmodelled nature of the data that is key. It wasn't simply the volume of natural history data or the explosion in the number of books through printing presses that changed our world; it was the models of understanding that altered us.

This data will become modelled and we will progress in understanding, but not without arguments of the Structured vs Unstructured or Dewey Decimal vs Cutter type beforehand. We blissfully ignore that all data starts as unstructured and that it is through modelling that the structure is found.
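A trivial sketch of that claim (the log lines and field names below are invented): the same bytes are "unstructured" right up until we impose a model on them, at which point structure falls out.

```python
import re

# Invented, illustrative "unstructured" data: just lines of text.
raw = [
    "2013-01-17 09:02:11 user=alice action=login",
    "2013-01-17 09:05:42 user=bob action=upload size=1024",
]

# The regex is the model: once chosen, the same bytes yield structure.
pattern = re.compile(
    r"(?P<date>\d{4}-\d{2}-\d{2}) (?P<time>\d{2}:\d{2}:\d{2}) (?P<rest>.*)"
)

records = []
for line in raw:
    m = pattern.match(line)
    if m:
        record = {"date": m.group("date"), "time": m.group("time")}
        # Split the remaining key=value pairs into named fields.
        record.update(kv.split("=", 1) for kv in m.group("rest").split())
        records.append(record)

print(records)  # structured records, recovered purely by modelling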

It's like our assumptions about innovation. It's never the innovation of something that changes the world; it's commodity provision of good enough components (e.g. nuts and bolts, electricity, computing).

It’s not the volume of data that matters; it’s our model of understanding.

So, is cloud all about utility provision of pre-existing activities as commodity components, explosions in the creation of higher order systems, new sources of wealth, new practices, new forms of organisation, disruption of past models stuck behind inertia barriers and indirect disruption through changing value networks and lowered barriers to entry? Yes.

This was all perfectly clear in 1966 when Douglas Parkhill wrote "The Challenge of the Computer Utility". It was only a question of when.

By 2005, the weak signals of “when” were screaming loud. The “when” was NOW! 

None of this should come as a surprise.

The CEOs of the past giants should have leaned over their shoulders and pulled down from their bookcases their “What to do when Computing Infrastructure is ready to be a utility” playbooks. These playbooks should have been crafted and refined over the decade beforehand when the weak signals shouted “getting closer”. 

By the time Amazon launched, the past Giants should have prepared to launch at a massive scale. Culture is gameable and should have been gamed.  Inertia is manageable and should have been managed.

By 2010, Amazon should have been crushed. The past giants should have dominated the market. They had all the advantages they needed. But that’s not what happened. Those past giants hadn’t mapped this change. 

They were not prepared for the expected.

Many will suffer the same fate as previous companies who failed to prepare for the expected, from Blockbuster to Kodak. But before the normal round of excuses begins, the inevitable rush of executives to the safety of the "innovator's dilemma" and claims of unexpected changes, let me be blunt.

Those companies failed because their executives failed. Not culture, not inertia, not unexpected changes but instead a total failure of strategy. They were simply not up to the job or, as Gandalf might say, "fool of a Took".

As I said in the beginning, this work is not about gaining advantage but about surviving and mostly that’s surviving the expected. The cycle continues today, as it has in the past and as it will tomorrow.

So on the assumption that you’re not one of those facing oblivion through some gross failure of past executive play, let us turn to the new forms of organization and practice that you’ll need to deal with today.
---

Post 14 of 200

Next post in series ... The Next Generation

Previous post in series ... Revisiting our Map

Beginning of series ... There must be some way out of here

Sunday, January 20, 2013

Gartner's Wandering Hype Cycle Axis

Many moons ago I deconstructed Gartner's hype cycle from the underlying market dynamics and demonstrated how it couldn't be based upon any form of physical measurement but was instead an aggregation of analysts' opinions. The effects of the hype cycle pattern appear real enough; it's just that the hype cycle graph isn't an actual measurement of change, merely perceptions.

As an aggregate of Gartner analysts' opinions this has some value, if you assume the analysts know more about the market than your own people. If not, then any large company should simply build their own hype cycles. However, that's not the point of this post.
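(As an aside, "build your own" needn't be complicated. A minimal sketch, with invented analysts and scores: survey your own people for an expectation score per technology at each review point, then plot the aggregate and the disagreement around it.)

```python
from statistics import mean, stdev

# Invented survey data: each inner list is one analyst's expectation
# scores (0-100) for "cloud" at successive review points.
scores = {
    "cloud": [
        [70, 85, 90],
        [60, 80, 95],
        [75, 90, 85],
    ],
}

for tech, rows in scores.items():
    # Aggregate opinion per review point: the curve is the mean,
    # the fog around it is the disagreement between analysts.
    for point in zip(*rows):
        print(tech, round(mean(point), 1), "+/-", round(stdev(point), 1))
```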

The point of this post is that whilst all the hype cycles have identical patterns, shapes and phases, the axes keep on wandering i.e. we have :-

1. Visibility over Maturity.

2. Visibility over Time.

3. Expectation over Time.

You could make a case for this being simple renaming, as in Visibility and Expectation being the same thing. I have difficulty accepting that. However, my difficulty reaches new heights with the change from Time to Maturity, as though the process of evolution (i.e. how something becomes mature or fit for purpose) had a linear relationship with time.

I know, I know ... the Hype Cycle isn't "real" as in it's not a scientific measurement of a physical characteristic but instead just aggregated opinion, and therefore I'm being petty. It's just a visualisation of perceptions and is obviously evolving itself.

I understand all of this. 

It's just that it gives the impression of being an actual physical measurement, a graph of something tangible that can be directly investigated. Hence changing the axes drives me nuts.

I should shrug it off and look forward to Expectation over Maturity (which they seem to have missed), which bizarrely enough is the pattern that appears to be most "real". Another story, another day.

Thursday, January 17, 2013

A Pause ...

Ok, I've already done thirteen posts on this journey into strategy and mapping, covering the basics of maps, the first map, some general lessons and revisiting that first map in more detail.

This brings me to a point where the reader should now have an idea of how to map an organisation and anticipate some basic forms of change.

The next sections will cover new forms of organisation, the fundamental importance of ecosystems, the use of open as a tactical weapon, numerous defensive strategies such as tower and moat along with attacking strategies, and how to put together an overall battle plan. The mapping technique is essential to this because without it those plays are reduced to more hand waving and unclear notions. It's the map which tells us, for example, the why of ecosystem.

Anyway, thank you for the feedback so far. Much appreciated and I hope you find this useful and not too slow. I'm now going to take a break for a bit, as other things require my attention but just to recap ...

---- Journey so far.

Post 1 : The start of my journey
A young CEO caught in the headlights of change.
http://blog.gardeviance.org/2013/01/there-must-be-some-way-out-of-here-said.html

Post 2 : The importance of maps 
An exploration of why maps are important and a question - where are our business maps?
http://blog.gardeviance.org/2013/01/the-importance-of-maps.html

Post 3 : There's too much confusion. 
My quest for maps in business begins in earnest and starts with the concept of value chains.
http://blog.gardeviance.org/2013/01/theres-too-much-confusion.html

Post 4 : Evolution, 
Explores how things evolve and why diffusion is only part of the puzzle. Evolution is an essential part to mapping an organisation.
http://blog.gardeviance.org/2013/01/evolution.html

Post 5 : A first map. 
Uses both value chain and evolution to create a rudimentary map of a business. In this case, Fotango in 2005.
http://blog.gardeviance.org/2013/01/a-first-map.html

Post 6 : Businessmen they drink my wine
The journey from chaotic to linear and how the characteristics of activities, practices and data change as they evolve.
http://blog.gardeviance.org/2013/01/businessmen-they-drink-my-wine.html

Post 7 : Why one size never fits all
Examines why the change of characteristics impacts techniques and how one technique is not suitable for all activities, leading to debates such as six sigma vs agile.
http://blog.gardeviance.org/2013/01/why-one-size-never-fits-all.html

Post 8 : Of perils and Alignment
Examines why changing characteristics create problems for business, from the pitfalls and perils of outsourcing to the issues of business alignment.
http://blog.gardeviance.org/2013/01/of-perils-and-alignment.html

Post 9 : Everything Evolves.
Explores the common path of evolution, whether Practice, Activities or Data, and how practices can co-evolve to create inertia to change. Covers Cynefin, Co-Evolution and Inertia.
http://blog.gardeviance.org/2013/01/everything-evolves.html

Post 10 : Evolution begets Genesis begets Evolution
Examining how there is a cycle of commoditisation and genesis, with new activities being built on past activities. Covers componentisation, volume effects and disruption.
http://blog.gardeviance.org/2013/01/evolution-begets-genesis-begets.html

Post 11 : Inertia
Examining why customers and companies have inertia to change, what the causes and symptoms are and why it is so dangerous. Covers disruption of the past, transition to new and agency of the new.
http://blog.gardeviance.org/2013/01/intertia.html

Post 12 : Revolution
Using evolution and inertia we explore why revolutions (such as the industrial revolution) occur and what the common consequences of this are. Covers Kondratiev waves, prediction and time, disruption.
http://blog.gardeviance.org/2013/01/revolution.html

Post 13 : Revisiting our first map
Taking the lessons and principles we've covered and re-applying them to that first map of Fotango back in 2005, to explain how Fotango predicted the changes brought about by Cloud.
http://blog.gardeviance.org/2013/01/revisiting-our-map_17.html