Monday, February 28, 2011

Private vs Enterprise Clouds

There is a debate raging at the moment over different types of cloud, hence I thought I'd stick my oar in. First, let's clear up some simple concepts:-

What is cloud?
Cloud computing is simply a term used to describe the evolution of a multitude of activities in IT from a product to a utility service world. These activities occur across the computing stack.

These activities are evolving because they've become widespread and well-defined enough to be suitable for utility provision, the technology to achieve this provision exists, the concept of utility provision is widespread, and there has been a sea change in business attitude, i.e. a willingness to accept these activities as a cost of doing business and commodity-like.

There is nothing special about this evolution, it's bog standard and many activities in many industries have undergone this change. During this change, the activities are moving from one model to another which creates a specific set of risks (both real and perceived) around trust in the new providers, transparency, governance issues and concerns over security of supply. There are other risks (including outsourcing and disruption risks) but these are irrelevant for the specific question of private vs enterprise clouds. Nevertheless, all of these "risks" help increase inertia to the change. For clarity's sake, I've summarised this evolution in figure 1.

Figure 1 - Lifecycle and Risk.


Why private clouds?

Private clouds (where the service is dedicated i.e. "private" to a specific consumer) are generally used as part of a hybrid strategy which combines multiple public sources with a private source. The purpose of a hybrid strategy is simply to mitigate transitional risks (concerns over governance such as data governance, trust etc). This is a normal supply chain management tactic and occurs frequently with this type of evolution. Even within the electricity industry you can find a plethora of hybrid examples in the early days of formation of national grids.

Fundamentally, a hybrid strategy mitigates risk but incurs the costs of both lesser economies of scale and additional resource focus when compared to a pure public play. It's a simple trade-off between benefits and risks which can often be justified for a time.


Why Enterprise Clouds?

Enterprise clouds need a bit more explaining and to understand why they exist we first have to start with architecture. To keep things really simple, I'm going to focus on infrastructure.

As per above, the use of computing infrastructure has undergone a typical evolution from innovation to commodity forms and more recently the appearance of utility services. In the earlier stages of evolution, the solution to architectural problems was bound to the physical machine. Scaling was solved through buying "a bigger machine" i.e. Scale-Up whereas resilience involved hot-swap components, multiple PSUs and other redundant physical elements (the N+1 model). This was essential because of the long lead time for replacement of any physical machine i.e. there existed a high MTTR (mean time to recovery) for a physical server. Applications therefore developed on the assumption of ever bigger and ever more resilient machines. These architectures spread (specific sets of knowledge behave just like activities) and became best practice.

As computing infrastructure became commodity-like, novel system architectures such as Scale-Out (a.k.a. horizontal scaling) developed. These solved scaling by distributing the application over many smaller, standardized machines. The new architectural solution was therefore "buy more machines" rather than "buy a bigger machine". This scale-out architecture rapidly spread as infrastructure became more generally accepted as a commodity.

As we entered the utility phase, infrastructure has in effect become code – that is, we can create and remove virtual infrastructure through API calls. The MTTR of a virtual machine provided through a utility service is inherently lower than its physical counterpart and novel architectures called "design for failure" have emerged that exploit this. The technique involves monitoring a distributed system and then simply adding new virtual machines when needed.
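To make this concrete, here is a minimal sketch of such a monitoring loop in Python. It assumes a hypothetical cloud API client exposing list_instances() and launch_instance() calls; the names are purely illustrative and don't correspond to any particular provider's API:

    # A minimal sketch of "design for failure": monitor the fleet and replace
    # failed virtual machines rather than repairing them. The cloud client and
    # its methods are hypothetical.
    import time

    DESIRED_INSTANCES = 4          # how many workers the service needs
    CHECK_INTERVAL_SECONDS = 30    # how often to reconcile actual vs desired

    def reconcile(cloud):
        # Count the healthy machines and launch replacements for any shortfall.
        healthy = [vm for vm in cloud.list_instances(tag="web") if vm.is_healthy()]
        missing = DESIRED_INSTANCES - len(healthy)
        for _ in range(max(0, missing)):
            # MTTR is low because a new virtual machine is just an API call away.
            cloud.launch_instance(image="web-server", tag="web")

    def monitor(cloud):
        while True:
            reconcile(cloud)
            time.sleep(CHECK_INTERVAL_SECONDS)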

In the cloud world, application scaling and resilience are solved in software, whereas in the legacy world they were often solved by physical means. This is a huge difference and I've summarized these concepts in figure 2.

Figure 2 - Evolution of Architectures


By necessity, public cloud infrastructure is based upon volume operations – it is a utility business after all. Virtual compute resources come in a range of standard sizes for each provider and are based upon low-cost commodity hardware. These virtual resources are typically less resilient than their top-of-the-line, enterprise-class physical counterparts but they are naturally vastly cheaper.

It should be noted that a ‘design for failure’ approach can take advantage of these low cost components to create a far higher level of resilience at any given price point through software.

By way of example, the general rule of thumb is that each additional 9 roughly doubles the physical cost. Hence let's take a scenario of a base machine designed to give 99% uptime, with a machine designed to provide 99.9% uptime costing twice the amount, and so on.

In the above scenario, using four base machines in a distributed architecture provides a theoretical uptime of 99.999999%, though naturally it will suffer periods of degraded performance due to single, dual or triple machine failure. In a utility computing environment this impact is negligible due to the low MTTR of creating new virtual machines, and in such a case you have a low-cost environment (4x base unit), a high level of resilience against total failure (1 in 100 million) and fast recovery from periods of degraded performance due to the low MTTR.

Now compare this to a single physical machine (ignoring all scaling, network and persistence issues etc). The equivalent physical machine would be 16x more costly than the distributed environment (i.e. 64x the base unit, assuming the 2x rule holds) and, in the rare situation that complete system failure occurred, the MTTR would be high. In reality, MTTR would be high for all its components unless spares were kept.
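As a back-of-the-envelope check of the arithmetic above, here is a rough sketch in Python. It assumes independent machine failures and the "each additional 9 roughly doubles the cost" rule of thumb, both of which are simplifications:

    # Back-of-the-envelope check, assuming independent failures and the
    # "each additional 9 roughly doubles the cost" rule of thumb.
    base_availability = 0.99      # base machine: two 9s
    base_cost = 1.0               # one base unit
    n_machines = 4

    # Probability that all four base machines are down at the same time.
    p_total_failure = (1 - base_availability) ** n_machines   # 0.01^4 = 1e-8
    cluster_availability = 1 - p_total_failure                 # 99.999999%, eight 9s
    cluster_cost = n_machines * base_cost                       # 4x base unit

    # A single machine offering the same eight 9s needs six extra 9s over the
    # base machine, i.e. six doublings of cost under the rule of thumb.
    single_machine_cost = base_cost * 2 ** 6                    # 64x base unit

    print(cluster_availability)                # 0.99999999
    print(single_machine_cost / cluster_cost)  # 16.0, the "16x more costly"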

Now for reasons of brevity, I'm grossly simplifying the problem and taking a range of liberties with concepts but the principle of using vast numbers of cheap components and providing resilience in software is generally sound. We've seen many examples of this including RAID. These cloud architectures are no different except they extend the concept to the virtual machine itself.

The critical point to understand is that these two extremes of architecture have fundamentally different models based upon the differences in the underlying components, i.e. the architectural practice co-evolved with the activity. The cloud model is based upon volume-operations provision of commodity, good-enough components, with application architectures exploiting this through scale-out and design for failure. Now in the following figure I've tried to highlight this difference by providing a greyscale comparison between a traditional data centre and a public cloud on a number of criteria.

Figure 3 - Data Centre vs Cloud.


OK, now we have the basics, let's look at the concept of Enterprise Cloud. The principal origin of this idea is that whilst many Enterprises find utility pricing models desirable, there is a problem when shifting legacy environments to the cloud. When companies talk of shifting an ERP system to the cloud they often mean replacing their own infrastructure with utility-provided infrastructure. The problem is that those legacy environments often use architectures based upon principles of scale-up and N+1, and infrastructure provided on a utility basis doesn't conform to these high levels of individual machine resilience. The problem is simply that people are trying to shift applications built with best architectural practice for product-based infrastructure to a world where infrastructure is a utility with its own, different best practice. It's not that legacy is wrong; it's just that in this case legacy means built with best practice for a product world, and that practice is no longer relevant.

At which point the company has two options: either re-architect to take advantage of volume operations through scale-out and design for failure concepts, or demand higher-level, more resilient virtual infrastructure, i.e. try to make the new world act like the old world.

This is where Enterprise Cloud comes in. It's like cloud, except the virtual infrastructure has higher levels of resilience, at a high cost when compared to using cheap components in a distributed fashion. So why Enterprise Cloud? The core reason behind Enterprise Cloud is often to avoid the transitional costs of redesigning architecture, i.e. it's principally about reducing disruption risks (including previous investment and political capital) by making the switch to a utility provider relatively painless. However, this ease comes at a hefty operational penalty. The real gotcha is that those transitional costs of redesign increase over time as the system itself becomes more interconnected and used.

In principle, Enterprise Clouds are used to minimise disruption risks, but they will ultimately be subject to a transition to the new architecture because of their high operational costs. There are other reasons for an "Enterprise Class" cloud but most of these, such as where data resides, can also be addressed by using a "Private" cloud built with commodity components until suitable "Public" clouds are available.

The difference in economic models is what separates private / public compute utilities from enterprise clouds; I've highlighted this on the greyscale in figure 4. Actually, I prefer to use the original term virtual data centre rather than enterprise cloud because that's ultimately what we're talking about.


Figure 4 - Enterprise vs Private Cloud.


Do Enterprise Clouds have a future? Sort of, but they'll ultimately and quickly tend towards niche (specific classes of SLAs, network speeds, security requirements etc). Their role is principally in mitigating disruption risks, but the transitional costs they seek to avoid can only be deferred at an increasing operational penalty. It should be noted that there is a specific tactic where an enterprise cloud can have a particularly beneficial role: the "sweating" of an existing legacy system prior to switching to a SaaS provider (i.e. "dumping" the legacy). In most cases, Enterprise Clouds will become an expensive folly.

Do Private Clouds have a future? Yes, they have a medium-term (i.e. next few years) benefit in mitigating transitional risks (such as issues over data governance) through the use of a commodity-based model. However, be careful what you're building and remember that the impact of private clouds will diminish as the public markets develop. I say "be careful what you're building" but in practice this means don't build. The vast majority of companies lack the necessary skills and management capability, and though you may be trying to build a "commodity"-based private cloud, the chances are you'll end up with something very "enterprise cloud"-like.

Do Public Clouds have a future? Absolutely, this is the long-term economic model and it will become the dominant mechanism. Public utilities should also encourage clearing houses, exchanges and brokerages to form.

Last thing, I said I would focus on infrastructure to keep things simple. The rules change when we move up the computing stack ... but that's another post, another day.


--- 8th February 2016

Note to self.

Five years later and ... oh, my past self would not want to know.

People are still building private clouds and trying to push enterprise class cloud despite the obviousness of the changes. The whole co-evolution of practice finally became firmly established as DevOps but has gone a little out of control in terms of making somewhat grand claims of cultural change. The points of inertia and the overstating of risks continue in corporations but we're finally getting to that point in the punctuated equilibrium where this will all get washed away.

Overall, it has been a torrid journey. Many are still confused about basic concepts such as the change of practice with activities. We're still having this discussion today! I have to caution that legacy isn't flawed so much as built with best practice for a product world. Many are trying to "create" new paradigm shifts in order to re-establish flagging business models by "having another go". In many cases, the strategic play of many former IT giants has been next to hopeless. There's quite a list of well-known names circling the spiral of death, cutting costs to restore profitability whilst the underlying revenues are in decline.

It has become plainly clear that the level of blindness to the environment (i.e. poor situational awareness) is incredibly high in most corporations at the executive level and far worse than I could have possibly anticipated. Inertia will always need to be managed but that such companies failed to manage predictable changes with so much warning is ... stunning. 

Saturday, February 26, 2011

Will Cloud Computing help the business align to the market?

I was recently at a private conference of CIOs when, in the midst of discussing activities such as CRM and ERP, it became clear that not only did everyone understand these applications, but also that every company had them. Almost all of those companies have expensive customization programmes in place, tailoring these systems to fit their needs. This behaviour is puzzling, but to explain why, we'll need to first look at the concept of business evolution.

All business activities share a common evolutionary pathway, a lifecycle; from the innovation of an activity, to custom built systems implementing that activity, to the provision of products, to eventually commodity provision and the appearance of utility services.

It is unusual to find activities that are commonplace, well understood, well defined and accepted as a cost of doing business being treated as though they were in an earlier stage of evolution with heavy customization. But each CIO told how that was exactly what was happening at their company, and the discussion became surreal when we discovered that many of the customizations being made to systems like CRM were common as well.

Surely, these activities are best served by being provided as a standardized commodity ideally through a market of utility services? Isn't that the whole point of cloud computing? So why aren't we all flocking to consume a host of common activities through the utility services of cloud providers? Well, a diverse range of companies already are.

However, several of the CIOs who had looked into the issue said these services didn't work for their company. On further questioning, it became clear that it wasn't IT but the business that was pushing for the customization in order to fit in with their way of working. Several CIOs had even used the standardized services of cloud providers to challenge this - asking the business to justify the additional orders-of-magnitude cost of a customized system by demonstrating the differential value that 'their way' created.

Whilst we often talk about business and IT alignment, this shouldn't mean IT just delivering what the business asks for. In circumstances where there is no differential value, then it's better to 'fit to the model' rather than 'fit the model to us'. We do the former all the time - from banking to electricity provision; we don't go and build our own customized solutions, we just find a way to work with the dominant market standards.

Will cloud computing help the business to treat common, cost-of-doing-business activities as though they are common and a cost of doing business? Or, will some businesses continue to spend vast sums customizing that which really makes no difference?

Could the cloud help the business re-align itself with the market?

Reprinted from LEF site

Friday, February 25, 2011

VMware as an acquisition target?

Back in 2009, I proposed that VMware (or more importantly its master EMC) would eventually divest itself of its virtualisation business. The reason for my thinking was as follows :-
  1. Whilst the majority of VMware's revenue was based upon its virtualisation technology, this was an area that was ripe for disruption through two points of attack - open source systems such as KVM and the formation of marketplaces offering utility based virtual infrastructure. The latter almost certainly requires open source reference models to avoid issues around loss of strategic control and when combined with aggressive service competition around price, this doesn't leave much room for a license based proprietary technology.

  2. A dominant position in the enterprise is no guarantee of future success - see Novell Netware and the IPX/SPX vs TCP/IP battle. Critical in such battles is the development of wide public ecosystems, and open source inherently has a natural advantage here. However, given the revenue position of VMware, it could not afford to take this route.

  3. The obvious route for VMware would be to develop a platform play, most likely an open source route with an extensive range of value add services - from assurance to management. The current business would be used to fund the development of this approach until such time as the company could split into two parts - virtualisation & platform.

  4. Given the likely growth of private clouds as a transitional model in the development of the cloud industry, VMware would position the virtualisation business in this space for a high value sale, benefiting from its strength in the enterprise. This is despite it being unlikely that VMware would become the de facto public standard, that the technology was likely to be disrupted, that hybrid clouds are a transitional strategy superseded by the formation of competitive markets and that many "private clouds" would be little more than rebranded virtual data centres.

  5. During this time, there would be signals of confusion over the VMware strategy precisely because it would be using a time limited cash cow to fund a new venture whilst preparing to jettison the cash cow prior to disruption.
Given the confusion over cloud, the often central (but ultimately misconceived) role that virtualisation is given in the industry, the generally disruptive effects of this change and the wealth of many competitors, then in my opinion, with luck, timing and good judgement, a buyer could be found.

So, in my opinion:
  • For VMware, it would mean creating a strong platform business funded by its current revenue stream before jettisoning the virtualisation business at high value prior to its disruption.

  • For the buyer, it would mean ... whoops ... well that's capitalism for you. Next time, pay more attention.
Of course, this is just my opinion but I haven't changed my view over the years. I'm expecting to see an increasingly clear division within VMware between platform and virtualisation in the next year or so.

I'm hence curious to know what others think. Do you believe that VMware would sell its virtualisation business?

Should I listen to my users?

Following on from my previous post on lifecycle, I thought I'd discuss the question of when to listen to users. I'm going to use my lifecycle and profile graphs to illustrate the points I'll be raising, so if this is not familiar to you then please read my most recent post on lifecycle.

Figure 1 - Lifecycle

Figure 2 - Profile

Before I start, a word of apology. All of this stuff is blindingly obvious and has been for many, many years - hence please don't take offence.

Any new activity starts in the chaotic phase and then as it evolves through its lifecycle it enters a more linear phase. As it moves, its characteristics change from uncertain, deviating and a source of worth to known, standard and a cost of doing business.

Lest confusion creep in, let's just reiterate that a new activity is an innovation. Whilst we tend to abuse the term innovation, a feature differentiation is a feature differentiation, a process improvement is a process improvement and an operational efficiency is an operational efficiency. You can call every process improvement, operational efficiency and feature differentiation an innovation if you want, but good luck trying to make sense of things if you do.

As explained in earlier posts, as an activity's characteristics change then the methods by which you can effectively manage it change as well i.e. we shift from agile to six sigma for example. This is why there is no one size fits all.

Equally, in the chaotic stage the approaches taken are about gambling, experimentation, potential future worth and novel practice (i.e. it's highly uncertain), whereas when that same activity has entered the linear stage it's all about conformity to standards, defined processes, price vs quality of service and best practices (i.e. it's predictable). In between these two extremes is where ROI (return on investment) matters because, unlike the chaotic stage, a market exists to provide some basis for trending and, unlike the linear stage, you have a choice over whether to implement it since it is not yet a cost of doing business.

When it comes to users then :-

  • In the chaotic stage, the only reason why you would listen to potential users is the same reason why you might collaborate with others - serendipity, i.e. the chance encounter of a better idea. Of course, whether the idea is better or not won't actually be known until you "experiment", i.e. put it out into the market. You have as much chance of identifying the future successful innovation as any user, and there are no methods of guaranteeing that any innovation will be successful. Like it or not, you have to gamble. The rule of thumb is: listen to yourself.

  • In the transition phase, listening to users is essential because a market has formed, users are becoming familiar with the activity, customer feedback and trending are possible and competitors can be analyzed. The rule of thumb is: listen to your ecosystem.

  • In the linear stage, the activity is a commodity and it's all about price vs QoS. Now, assuming you're not going to embark on a disinformation campaign and attempt to persuade users that a commodity is actually some form of innovation (a common tactic), then the only thing you really need to focus on is your position against competitors, i.e. faster, more reliable, cheaper etc. Hence the rule of thumb is: pricing & quality comparison.

So, should you listen to users? The answer is "yes and no"; which applies depends upon the stage of lifecycle of the activity in question.

[Added Note 26/02/11] I've just come across this Fred Wilson quote : "Early in a startup, product decisions should be hunch driven. Later on, product decisions should be data driven". It's pretty much spot on.

Friday, February 18, 2011

Deconstructing Gartner's Hype Cycle

This is a piece of work that I did many years ago but, given my recent post on Lifecycle (née evolution), I thought I'd revisit it. I will assume the reader is entirely familiar with the concepts of commoditisation.

In figure 1, I've taken part of the evolution curve and modeled onto it a differential benefit curve (differential value - cost of implementation). This latter curve shows how the benefit of an activity changes as it evolves from its early innovation (where it is a strain on company resources) to a late product stage where the activity is ubiquitous and of insignificant differential value between competitors.

Figure 1 - Lifecycle & Differential Benefit



When a new activity appears, you often get whitepapers written about it. These are generally done at a time when the activity is showing a highly positive differential benefit. Obviously there is a delay between the collection of data, the publication of the whitepaper, a user reading the whitepaper, the decision to do something and then implementation of the activity.

By the time an activity is implemented, the actual differential benefit may be vastly different. This creates a delta for expectation, i.e. a difference between what we thought we would get and what we got. Figure 2 provides a graphical notation of this.

Figure 2 - Delta in Expectation

Modelling this delta, with an awful lot of assumptions, provides the expectation curve shown in figure 3. There is a complex set of assumptions and conditions around this; however, for the purpose of this post I'm happy to say this curve is somewhat valid ... caveat, caveat ... except where it's not :-)

For the sake of this post, let's just pretend it is. What the curve shows is the early stages start with low expectations (but possibly high hopes), expectations are then quickly exceeded and continuously rise to reach a plateau after which expectations rapidly become unfulfilled leading to a trough of disillusionment before eventually levelling.
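To make the hand-waving slightly more concrete, here is a toy sketch of that delta in Python. The benefit curve, the lag and the sample points are all illustrative assumptions; the only point is that a lag between reading about an activity and implementing it turns a rising-then-falling benefit curve into a swing from exceeded expectations to disappointment:

    # A toy model of the delta in expectation. The benefit curve, the lag and
    # the sample points are illustrative assumptions only.
    def differential_benefit(t):
        # Differential value minus cost of implementation at normalised time t:
        # a simple hump that rises and then falls as the activity becomes ubiquitous.
        return max(0.0, 4.0 * t * (1.0 - t))

    LAG = 0.2  # delay between the whitepaper's data and our implementation

    def expectation_delta(t):
        # What we thought we would get (benefit when we read about it) minus
        # what we actually got (benefit by the time we implemented it).
        return differential_benefit(t - LAG) - differential_benefit(t)

    # Early on the delta is negative (results exceed what was promised);
    # later it turns positive (implementations disappoint).
    for t in [0.2, 0.4, 0.6, 0.8, 1.0]:
        print(t, round(expectation_delta(t), 2))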

Figure 3 - Expectation Curve



This all sounds strangely familiar and so it should. The expectation curve quite neatly maps to Gartner's hype cycle - see figure 4. So, we have a potential basis for explaining the underlying forces behind the hype cycle except of course, we have all the assumptions and I'm talking about expectation and not visibility. I'm not even sure what visibility actually means but given it's a hype cycle, I'll assume high hopes (expectations) means high visibility. Clocking up another assumption here.

Figure 4 - Hype Cycle & Expectation Curve




Does our new underlying "basis" of the hype cycle shed any new light on the subject? Well, yes. However before I show this, I'd like to examine the final stages of lifecycle.

The evolution of an activity from products to utility services invokes its own expectation curve not through differential value (the creation of a new activity) but operational efficiency (a more efficient means of providing an existing activity).

In figure 5, I've provided the later stages of lifecycle including the transition from products to utility services and modeled an operational benefit curve (operational efficiencies over competitors - cost) of a transition to utility services. Again, lots of assumptions.

Figure 5 - Lifecycle and Operational Benefit


NB: this benefit curve is the same shape as the earlier differential curve, being derived from the benefit created over competitors, the number of competitors exploiting the change and a changing cost of implementation due to maturity. Hence the transition to utility services starts with a period of investment, followed by a rapid benefit over competitors gained by those creating or exploiting such services, and then a decline in benefit over competitors as more companies switch to a utility model.

The reason why I mention this is that whilst Cloud Computing is all about volume operations for ubiquitous and well-defined activities (i.e. the use of computer resources in business) and is hence all about commodities, this transition will create a similar expectation curve around operational efficiency in much the same way that a genuine innovation creates an expectation curve around differential value. This is shown in figure 6, and the result is the same delta in expectation curve shown beforehand.

Figure 6 - Delta in Expectation



Our first "insight" is that Gartner's hype cycle tends to show both innovative activities and transitional effects on the same graph. We should be careful to distinguish between operational efficiency (doing the same thing better) and differential value (doing a new thing) lest we start to confuse Cloud Computing with Innovation.

Hence, in the following hype cycle I've highlighted several activities, including :-
  • cloud computing: more efficient provision of the existing activity of "using computer resources in business"
  • social network analysis: a relatively new activity and a potential differential

Figure 7 - Expectation and Hype Cycle




Since we started with differential value or operational efficiency, we can now map "value" zones onto the hype cycle. I've done this below.

Figure 8 - Hype Cycle & Value Zones



What this suggests is that the early stages of the hype cycle have an increasing benefit. In the trough of disillusionment, there is still benefit to be gained but it's diminishing. However, in the slope of enlightenment and the plateau of productivity there is little or no differential or operational benefit over competitors since everyone else is doing it. That's our second "insight".

The lesson of this story has been known in military circles for a long time. An imperfect plan executed today is better than a perfect plan executed tomorrow i.e. if you wait until the activity can be easily and effectively implemented (the plateau of productivity), it'll provide little competitive benefit to you.

Fortune favours the brave.

[A final few comments]

To generate the expectation curve I had to create a model over time. This required lots of assumptions because the evolution (lifecycle) curve does not have a time axis (i.e. you can't predict when something will evolve). There are hence a few points I'd like to make clear.
  1. You can't simply overlay the expectation curves of different activities on top of each other - i.e. the axis of time is different (some are stretched, some are shortened). Gartner's curve doesn't define its time axis and we can therefore assume they're referring to a general shape which appears over an undetermined length of time.
  2. The Gartner curve specifically refers to the technology trigger. We can assume this is when the technology starts to spread and ignores any early stage effects (invention etc).
  3. If the Gartner curve were based upon the measurement of some physical property, it would be possible to reverse the process, i.e. go from Gartner curve to expectation curve to evolution lifecycle and accurately state where an activity is along the uncertainty axis. By definition this is impossible. I can currently only state where something was in the past once it has become a commodity. Hence I have to conclude that Gartner's curve is not based upon some external measurement of a physical property but is instead more likely produced by a process of expert review (i.e. averaging where forecasters think something is on the curve) or, even more simply, analysts placing dots on the curve.
  4. The Hype Cycle in its current form can have both novel activities and the evolution of the same activity to more commodity forms represented at the same position on the curve at different times, i.e. a single activity may well go through the peak of inflated expectations multiple times. In the first case, this will refer to the differential value of the novel activity but later on, the same activity will appear in the same position due to the operational efficiency of a more commodity form. This occurs because the same activity can be given multiple different memes, i.e. x86 architecture, client/server, hosting or cloud, and whilst those memes are different, the underlying activity (i.e. provision of computing infrastructure) is the same. You cannot therefore use the Hype Cycle to determine evolution, notwithstanding the issue that it is time based and evolution cannot be measured over time (we have no crystal ball).
  5. The expectation curve matches the Gartner curve in certain circumstances. I cannot conclude much about the Gartner curve other than to say that its shape appears to have some validity in specific circumstances and I can approximate the value zones where the curve does match. This doesn't say anything about where the dots are placed on the curve, just that the pattern of increasing, then diminishing, then plateauing at negligible benefit seems broadly right. That up, down, almost flat shape seems to have merit.
-- Update 10th Feb 2015

This uses an old form of the evolution graph. I've subsequently refined the terms innovation, custom built, product and commodity to genesis, custom built, product (+rental), commodity (+utility). The problem was that whilst I used "innovation" to mean the first ever attempt to put an activity in practice, the common use of the word "innovation" (used liberally to mean genesis of something, feature differentiation of a product, introducing a utility model for an existing activity) made that meaningless. Hence, I changed to use "Genesis" to re-assert the point.

-- Update 7th Sept 2015

Gartner Hype cycles no longer use Visibility vs Time but instead Expectations vs Time. The "time" axis seems to be only a generic sense of travel as the items on the hype cycle are given times to reach the plateau e.g.


I have no idea why they've made this change. Obviously, I was using an expectation curve and so again I broadly agree with the shape. I'm somewhat concerned that the axis changed but the curve didn't, but given that the dots are just aggregated opinion, I suspect the curve is just an opinion as well. This doesn't mean it's not useful, as long as you keep in mind it's just opinion all the way down.

Friday, February 11, 2011

Pioneers, Town Planners and those missing Settlers.

All business activities evolve; they share a fairly common lifecycle described in the following diagram: from innovation, to custom-built examples, to productisation (including the appearance of rental services) and finally to commodity (including utility services).

Figure 1 - Lifecycle

As those activities evolve, their properties change from a chaotic to a linear extreme. In the chaotic stage, the activity :-
  • deviates from what has existed before and is a novel practice.
  • is dynamic and constantly changing.
  • is rare and poorly understood.
  • has high levels of uncertainty and it is not possible to predict future outcomes.
  • has no market data, competitor analysis or well understood trends.
  • has characteristics which emerge as we learn about it.
  • is strongly affected by serendipity, chance encounters and discovery.
  • is a potential source of future worth, differential and hence competitive advantage.
  • is a gamble.

By the linear stage, that same activity has evolved and:-
  • is mature and rarely changes.
  • is standardised with a wealth of best practice.
  • is commonplace and well understood.
  • has a high degree of certainty and known impacts.
  • has an abundance of market data, competitor analysis and trends are well known.
  • has well defined characteristics.
  • has well defined procedures and plans for implementation.
  • is a cost of doing business with little or no differential advantage except through operational efficiencies.
  • is a known quantity.

Now all businesses consist of a mass of activities, each of which may be at a different stage of its lifecycle (stage of evolution). You can map a single business by examining the components involved in a line of business and their stage of lifecycle. You can also examine broader effects by plotting the frequency of activities at different stages of lifecycle, thereby creating a profile for an organisation or an industry. This is shown in the figure below, to which the chaotic, linear and in-between transition stages have been added.
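For the curious, a profile is trivial to sketch in code. Here is a minimal Python example; the activities and their stages are illustrative assumptions, and the only point is that the profile is just a frequency count of activities by stage of lifecycle:

    # A minimal sketch of a profile: a frequency count of activities by stage.
    # The activities and their stage labels are illustrative assumptions.
    from collections import Counter

    STAGES = ["innovation", "custom built", "product", "commodity"]

    activities = {
        "novel analytics feature": "innovation",
        "in-house CRM extensions": "custom built",
        "ERP system": "product",
        "payroll": "commodity",
        "compute infrastructure": "commodity",
    }

    profile = Counter(activities.values())
    for stage in STAGES:
        print(f"{stage:>14}: {'#' * profile[stage]}")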

Figure 2 - Profile



The techniques which you use to manage each of the phases of the profile (chaotic, transition, linear) are entirely different because the fundamental characteristics are different, which is why no one-size-fits-all approach to management exists. For example, agile development approaches are ideal for the innovation (chaotic) and early transition phases but are superseded by more structured approaches such as six sigma in the late transition and commodity (linear) stages. You can't apply one size fits all without either hampering innovation or impacting efficiency. You need multiple techniques, multiple types of people and even multiple cultures. Alas, we ignore this.

In many areas of management, this creates a constant yo-yo between one extreme approach and another: agile vs six sigma, networked vs hierarchical, push vs pull. The answer is invariably that you need a balance of both. The trick is to learn when to use each.

Given all this, here are my questions :-

1. Since lifecycle is constant and the properties of activities change as they evolve through their lifecycle, why do we organise ourselves around types of activity (e.g. IT, Finance, Operations), especially as a "typed" approach leads to the outsourcing of inappropriate activities, misapplied techniques and alignment issues between groups?

2. Why don't we organise ourselves instead by lifecycle, with specialist groups managing each stage of lifecycle regardless of type, i.e. an organisation based upon Pioneers, Settlers and Town Planners?

3. Most companies have Research & Development groups (equivalent to Pioneers) and common or shared service groups (equivalent to Town Planners) but Settlers seem to be invisible. Why is this? Who manages the transition from innovation to commodity in your organisation?


-- Update 8th March 2015

A bit of digging in the old memory banks brings me to Robert X. Cringely's book, Accidental Empires, reissued in 1996, pages 235-238. Copying some quotes from that book (which I recommend people go buy and read), the ideas of pioneers, settlers and town planners are all there. I knew it had come from somewhere.

Think of the growth of a company as a military operation, which isn't a stretch, given that both enterprises involve strategy, tactics, supply line, communication, alliances and manpower.

Whether invading countries or markets, the first wave of troops to see battle are the commandos. Commandos parachute behind enemy lines or quietly crawl ashore at night. Speed is what commandos live for. They work hard, fast, and cheap, though often with a low level of professionalism, which is okay, too, because professionalism is expensive. Their job is to do lots of damage with surprise and teamwork, establishing a beachhead before the enemy is even aware they exist. They make creativity a destructive art.

[Referring to software business] But what they build, while it may look like a product and work like a product, usually isn't a product because it still has bugs and major failings that are beneath the notice of commando types. Or maybe it works fine but can't be produced profitably without extensive redesign. Commandos are useless for this type of work. They get bored.

It's easy to dismiss the commandos. After all, most of business and warfare is conventional. But without commandos you'd never get on the beach at all. Grouping offshore as the commandos do their work is the second wave of soldiers, the infantry. These are the people who hit the beach en masse and slog out the early victory, building on the start given by the commandos. The second wave troops take the prototype, test it, refine it, make it manufacturable, write the manuals, market it, and ideally produce a profit. Because there are so many more of these soldiers and their duties are so varied, they require an infrastructure of rules and procedures for getting things done - all the stuff that commandos hate. For just this reason, soldiers of the second wave, while they can work with the first wave, generally don't trust them, though the commandos don't even notice this fact, since by this time they are bored and already looking for the door. While the commandos make success possible, it's the infantry that makes success happen.

What happens then is that the commandos and the infantry advance into new territories, performing their same jobs again. There is still a need for a military presence in the territory. These third wave troops hate change. They aren't troops at all but police. They want to fuel growth not by planning more invasions and landing on more beaches but by adding people and building economies and empires of scale.

Robert X. Cringely, Accidental Empires, 1996 (the reissued, I don't own the original).

What Robert called Commandos, Infantry and Police is what I later called Pioneers, Settlers and Town Planners. The two are identical, except Robert was there much, much earlier - in fact 1993, a good ten years before I implemented the tri-modal structure.

I owe Robert Cringely a debt of thanks and hence the update.

-- Update 4th May 2016

These days I use terms such as uncharted to describe the more chaotic and industrialised to describe the more linear.