Friday, February 21, 2014

If I was Sauron - ODF vs OOXML

Some time ago, I wrote about the rather tortuous path that the UK government has faced over open standards and the likely battle that was going to develop over OpenXML vs ODF. Well, it seems that, with MSFT rallying its supporters to respond to the Cabinet Office consultation, that battle is finally upon us.

When I published the post, I also wrote some notes on what I would do if I was a lobbyist in charge and how I would persuade a Government that it should lock itself in further. I called it, rather jokingly, Sauron PR: 'we've got an eye on your future'.

The notes were based upon a set of techniques, from messaging, to a late surge, to the creation of fear, to occupying the middle ground. The last point is of particular interest because one common technique is to attempt to establish two extremes with your viewpoint as the centre ground. Hence, you try to create one extreme of proprietary and pro-IPR, and a counter extreme of open source and anti-IPR, and then promote yourself as the more reasoned middle. Of course, the extreme of open source and anti-IPR is almost entirely fictitious, as open source is very much pro certain types of IPR (which is why we have open source licenses). But when it comes to lobbying and perception, you should never let a bit of reality spoil the party, and if you can't find an extreme then you can always manufacture it.

Anyway, the post from MSFT made me smile - it is almost golden. I don't know whether this is part of a crafted campaign but it hit many of the points that I would have raised, hence I thought I'd take some time to go through it.

It opens with ...

"You may not be aware, but the UK government is currently in the process of making important selections about which open standards to mandate the use of in future. These decisions WILL likely impact you; either as a citizen of the UK, a UK business or as a company doing or wanting to do business with government"

First, it's absolutely spot on with these points. Yes, the UK Government is in the process of making selections and yes, it's supposed to have an impact. Governments don't tend to do things unless they intend to have an impact. The whole point of open standards is to enable a more competitive market where users have choice and know that if they switch from one software system to another, things work. Any work you have to do in the switch is a cost of being locked into one system. As MSFT points out, the UK government has significant lock-in to MSFT, estimated by them at around £500 million. That's an awful lot.

The UK Government obviously would like to have choice - we're talking word documents and spreadsheets after all - and there's no sensible reason for the UK Gov to continue increasing its liability by remaining locked into a format that isn't an open standard.

'An important current proposal relates to sharing and collaborating with government documents. The government proposes to mandate Open Document format (ODF) and exclude the most widely supported and used open standard for document formats, Open XML (OOXML).'

This is pure Machiavellian genius - if I was Sauron PR then I would hire this person straightaway.

When I write a .docx file in Office 2013 then the file has one of two possible OpenXML (aka OOXML) formats - transitional and strict.  Yes, you heard me right - OpenXML (OOXML) has two forms.

The transitional format is the default for Office 2013 out of the box and is also the version used to write .docx in Office 2010. It's absolutely right to say that transitional OpenXML is a popular format and many documents are written in transitional OpenXML (well, almost - and then there's the issue of extensions). However, and this is the neat bit, the ISO approved 'open' standard is strict OpenXML, and the transitional format is only supposed to have been ... wait for it ... transitional.

So, you can say that the Office default for .docx (and hence one of the popular formats) is transitional OpenXML, and that strict OpenXML is an ISO approved open standard. But of course any lobbyist worth their salt would reduce this by dropping the words strict and transitional to arrive at: the popular format is OpenXML, which is an ISO approved open standard.

Did you see the trick?

Whilst OpenXML is most definitely a popular format and whilst OpenXML is an ISO approved open standard, the popular OpenXML (transitional) is not the open standard but the 'very format the global community rejected in September 2007, and subsequently marked as not for use in new documents', whilst the ISO approved open standard of OpenXML (strict) is not the most popular - in fact, to save a document in .docx (strict) you have to navigate through the save options in Office 2013.
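For the curious, the two forms are easy to tell apart programmatically. A .docx file is just a zip archive, and (broadly speaking) strict OpenXML uses different XML namespace URIs from transitional in its main document part. The sketch below assumes only the two namespace URIs from the published specifications - it's an illustration, not a full conformance checker:

```python
import zipfile

# Namespace URIs for the main WordprocessingML schema. Transitional OpenXML
# keeps the original 2006 namespace, whilst strict OpenXML (the ISO approved
# 'open' form) was moved to a purl.oclc.org namespace.
TRANSITIONAL_NS = "http://schemas.openxmlformats.org/wordprocessingml/2006/main"
STRICT_NS = "http://purl.oclc.org/ooxml/wordprocessingml/main"

def classify_wordprocessingml(document_xml: str) -> str:
    """Classify a document.xml part as strict, transitional or unknown."""
    if STRICT_NS in document_xml:
        return "strict"
    if TRANSITIONAL_NS in document_xml:
        return "transitional"
    return "unknown"

def classify_docx(path: str) -> str:
    """A .docx is a zip archive; read its main document part and classify it."""
    with zipfile.ZipFile(path) as z:
        return classify_wordprocessingml(z.read("word/document.xml").decode("utf-8"))
```

Run that over a folder of .docx files written by default installs of Office and you'll see the trick in the wild: almost everything comes back transitional.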

It's a bit like the old gag: you're both witty and original, it's just a shame that the original bits aren't witty and the witty bits aren't original. When it comes to perception, though, this is pure PR genius. Of course, I'm ignoring the issue of extensions, how the standard itself is being modified and the whole question of whether it is an open standard.

'We believe this will cause problems for citizens and businesses who use office suites which don’t support ODF, including many people who do not use a recent version of Microsoft Office or, for example, Pages on iOS and even Google Docs. Microsoft Office has supported ODF since 2007, but adoption of OOXML has been more widespread amongst other products than ODF'

Ok, first of all, industry has to adapt to de facto standards and there is no doubt that the not open transitional OpenXML format for .docx is fairly pervasive. However, proprietary formats create lock-in (as MSFT pointed out, such lock-in will cost UK Gov around £500 million) and the Government consultation isn't about increasing lock-in but about adopting an open standard in order to create a more competitive market. Fortunately, an open standard such as ODF exists, and Microsoft Office and many others do support it.

'This move has the potential to impact businesses selling to government, who may be forced to comply. It also sets a worrying precedent because government is, in effect, refusing to support another internationally recognised open standard and may do so for other similar popular standards in the future, potentially impacting anyone who wishes to sell to Government.'

I love this bit, it's pure fear and fantasy. 

By conflating strict OpenXML and transitional OpenXML to come up with the risible message that the popular format is OpenXML, which is an ISO approved open standard, you can of course portray the actions of a Government which decides not to choose the popular and open choice as a source of concern. However, fortunately the Government are no fools and are likely to know full well that transitional OpenXML - both the popular format and the default for Office 2013 and Office 2010 - is not an open standard and that its continued use will only increase lock-in.

By allowing OpenXML to stand as an open standard then, purely because of interactions with others (Office 2010 only writes transitional but reads strict, whilst Office 2013 defaults to transitional), transitional OpenXML (the not 'open' standard) for .docx will continue to grow. MSFT has had plenty of time to get rid of transitional OpenXML and it has chosen not to - for obvious reasons. That won't stop endless copy drones repeating the message at Government though.

'We believe very strongly that the current proposal is likely to increase costs'

Well, adoption of the open standard strict OpenXML will require everyone to use Office 2013 and also make sure they're using a compatible operating system - Windows 7 or 8. So, it's likely that choosing it will incur a lot of costs.

Naturally, because a lot of documents are in non open standard formats such as the transitional OpenXML format of .docx, there is a high degree of lock-in and the move towards an open standard will have some impact. This liability created by proprietary formats will however only increase if we continue to use them.

Fortunately there are plenty of solutions. There are many alternative systems, like LibreOffice, but Microsoft Office 2013 is also capable of writing and reading ODF. So, you could quite easily adopt ODF as the file format and allow people to use Microsoft Office or any other ODF system they wish - which, after all, is the point of open standards. It's the proprietary formats that have created lock-in costs, which is why this liability needs to be managed, and you don't manage it by continuing to use them.
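As an aside, ODF is just as easy to verify mechanically: an ODF package (.odt, .ods and so on) is a zip archive whose 'mimetype' entry declares the format in plain text. A minimal sketch, using an in-memory package for illustration:

```python
import io
import zipfile

def odf_mimetype(data: bytes) -> str:
    """Read the declared media type of an ODF package (.odt, .ods, etc.).
    Per the ODF specification, the package is a zip archive containing a
    'mimetype' entry holding the media type as plain text."""
    with zipfile.ZipFile(io.BytesIO(data)) as z:
        return z.read("mimetype").decode("ascii")

# Build a minimal ODF-like package in memory to demonstrate.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("mimetype", "application/vnd.oasis.opendocument.text")

print(odf_mimetype(buf.getvalue()))  # application/vnd.oasis.opendocument.text
```

One format, one declared media type, readable by anything that can open a zip file - which is rather the point.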

'To be very clear, we are not calling for the government to drop its proposal to use ODF. Nor are we calling for it to use only Open XML. What we are saying is that the government include BOTH Open XML and ODF. To do so offers it most flexibility, the widest compatibility and the lowest Total Cost of Ownership for everyone – government, businesses and citizens alike.'

The above is a gem. This is what I call the reasoned middle ploy. Before I explain it I thought I'd better explain my personal position. 

Most of my working life deals with competition. I find open standards extremely useful in ensuring you have a competitive market. I view open source as an excellent means of driving an activity to a more industrialised form by reducing barriers to entry and encouraging collaboration. I view proprietary technology as having strengths in areas such as differentials. For me, open vs proprietary is always the wrong question, as both have natural strengths and weaknesses.

For a mature activity (such as word processing), you ideally want to create a competitive market in which multiple proprietary and open source solutions can compete. The adoption of an open standard is all about reducing lock-in and encouraging a competitive market; it's not about choosing one technology over another. I personally use Microsoft Office and I view the product as more than good enough to compete in a freely competitive market based upon an open standard document format like ODF.

Unfortunately OpenXML is not that standard, because its most popular form - transitional OpenXML, the default - is not open. The best we have for an open standard is ODF, which has also been adopted by Portugal and organisations like NATO.

However, if I was the Sauron of lobbying then I'd be promoting the image of extremes (proprietary vs open) with our choice as the reasoned middle ground. Being evil (as Sauron is), I'd even get some of my cohorts to create a fictitious extreme. In this case you can't do that, because the Government has been clear that its focus is on competitive markets and that this is not about technology solutions (open vs proprietary) but about document formats.

Hence you're limited to promoting choice, i.e. you should be free to choose any 'open standard' you wish, even when the 'open standard' has two versions of which the popular one isn't open. That's the problem with OpenXML. It is unlikely to reduce lock-in and enable that competitive market because it contains the transitional OpenXML format (the not open, default version), which is likely to dominate.

Microsoft should have removed transitional OpenXML but it chose not to. Its only option is to persuade people to vote for it and gloss over the issue of strict vs transitional. This means it has to adopt the tone of the reasonable middle ground even when its position isn't that. Which leads me to my final comment on the post.

'please do take a few minutes to have your voice heard and respond before the consultation closes on 26th February 2014.'

This is what I called the late surge. As Sauron, I would prepare lots of groundwork beforehand, set up a media storm and then whip up a frenzy. Lobbyists tend to be a fairly smart and devious bunch (it comes with the turf). If your case is weak (and you know this) then with careful messaging, some fear, a bit of the reasoned middle and a late surge you can often sway the day.

Does it surprise me that MSFT has left it to the last few days to rally the troops? Not really but it could be coincidence. Will they win? Will they convince UK Government to abandon its desire to see a competitive market formed based upon open standards which help reduce lock-in and allow for truly competitive products both open source and proprietary? 

Well, that really does depend upon you. It's time for you to do your part and take a bit of your time and respond to the consultation. You've got until the 26th February which isn't long.

As for Microsoft - well, I happen to use Microsoft Office products, particularly Microsoft Excel, and so I hope they just adopt ODF and compete on better products. They're a great company. They've a new CEO - Satya Nadella - whom I once spent some time talking with over the whole issue of strategic play, competition and evolution (see below). He's a really smart cookie, a decent chap and I've got high hopes for them.

I do hope MSFT embraces a more positive approach to standards. Microsoft is more than good enough to compete using ODF. I wish they would just compete, because they do a ton of cool stuff from Kinect to the much anticipated and somewhat magical IllumiRoom. I'm hoping for more magic in the future.

i.e. a bit more Gandalf and a lot less Sauron.

Sunday, February 16, 2014

The danger of mega trends ...

I recently read this McKinsey post on digital mega trends by Paul Willmott. Ok, to be honest, I rarely read McKinsey reports or posts because they're not often relevant to my particular line of work, but occasionally I do as general background (same with Gartner). Normally I wouldn't respond but in this case I feel the need to, because the post is potentially dangerous despite its obvious attempt to be helpful.

The premise of the work is that 'Large digital players (e.g., Amazon, Alibaba) can create cost, talent and data advantages, which in turn can be used to price competitively, innovate rapidly and acquire further market share'. Whilst that's perfectly true, the post goes on to miss how this occurs, and therein lie the dangers.

To explain why, I'm going to need to cover some fairly basic stuff; for those of you who've read my blog extensively, this is the point to jump to the conclusion.

The Basics

Point 1) Activities, practices and data don't just diffuse; they also evolve through a common pathway (see figure 1) due to supply and demand competition causing multiple waves of diffusing and ever-improving examples. As they evolve, their properties change from uncharted to industrialised (see figure 2).

Figure 1 - Evolution (in this case applied to activities).

Figure 2 - Changing properties

Point 2) Organisations consist of many value chains built from multiple components, whether activities, practices or data. Those value chains constantly evolve, but you can map out an organisation by examining value chain vs evolution at a point in time (see figure 3). Such maps are effective in communication and you can use them not only to determine how you should treat something at a point in time but also for strategic gameplay and for learning economic patterns.

Figure 3 - Value chain vs Evolution map for HS2

Point 3) There are many core economic patterns. One of these is componentisation: how the evolution of a component not only increases efficiency but can also enable higher order systems to appear - see figure 4. This pattern (along with many others such as economic cycles, inertia, the relative importance of strategy vs culture, creative destruction, co-evolution of practice and how new organisational forms appear) occurs throughout history.

Figure 4 - Evolution begets Genesis

Ok, so our organisation (and our competitors) consist of value chains built from evolving components (activities, practices and data), and as those components evolve, not only do their properties change but they can enable new higher order systems (and new value chains) which become new sources of value but are highly uncertain by nature.

Point 4) The interplay of these forces creates an issue in competition called the Salaman and Storey Innovation Paradox. As components evolve they become more efficient and you have to adapt to this in order to compete effectively today; i.e. if you're a car manufacturer then you have to use common components like standard nuts and bolts, headlights, wheels, airbags and modular PCBs (exploiting the value chains of other providers in this space) rather than building your own from raw ingredients. However, at the same time that you have to treat certain components efficiently in order to be cost effective and survive today, you also need to differentiate with the novel in order to survive tomorrow by creating those future sources of perceived value, e.g. self-drive, automatic parking etc. Naturally, past novel items - electric windows, seat belts, airbags - have become today's standard components.

This combination of efficient treatment and differentiation requires polar opposite styles of management in the same organisation, hence the paradox.

”Survival requires efficient exploration of current competencies and ‘coherence, coordination and stability’; whereas innovation requires discovery and development of new competencies and this requires the loosening and replacement of these erstwhile virtues”

So whenever you examine your value chain (or chains) which define your business then you have to compare with competitors and adapt to both more efficient provision and the creation of the novel (see figure 5).

Figure 5 - Efficiency and Differentiation.

From the above, for a value chain within an industry (consisting of components A to F), a company is compared to its competitors. The competitors have a differential B which the company lacks, along with more efficient provision of C. The company has an efficiency benefit in D.

The company needs to consider both an efficiency drive around C and examine the inclusion of differential B into its offerings in order to remain competitive.

Point 5) The map can be manipulated. When comparing a company with its competitors, you can accelerate the rate of evolution of a component through open means or decelerate it by limiting competition (e.g. patents, regulation, acquisition, use of constraints). You can also take a deliberate position and exploit competitors' inertia to change. In the above example, you might choose to drive component C to a more utility service (C'), exploiting competitors' inertia to change (due to existing models and practices) and enabling more rapid development of higher order systems built on this component (see figure 6).

Figure 6 - Changing an Environment

Whenever you compare your value chains with competitors, you often find multiple opportunities and points of 'Where' you can attack, whether through adoption of novel activities, removal of inefficiencies or deliberate manipulation of the environment and exploitation of competitors' constraints and inertia. Understanding 'Where' you can attack is essential for determining 'Why', as why is a relative statement (why here over there). Once you have determined where you can attack, and from this derived why you would choose one course of action over another, then the how, what and when become relatively trivial exercises.

Point 6) Predictability of what to do varies, which makes management complex. One of the major issues with scenario planning is the issue of predictability. For example, certain changes (such as the evolution of a component) are highly predictable in terms of what is going to happen; i.e. you can say the shift from product to utility involves: disruption of past industries that are stuck behind inertia barriers, co-evolution of practice if the evolving component is an activity, reduction of potential barriers to entry in secondary industries and rapid increases in higher order systems and un-modelled data.

This is how back in 2005, many of us were able to clearly and precisely predict the changes of cloud computing, the growth of devops and big data, the rapid increases in novel systems built upon this and disruption of past h/w vendors. More importantly, many of us were able to game this to our favour.

Unfortunately, whilst what is going to happen is highly predictable, when this change will happen depends upon competitors' actions and upon whether the component is suitable for provision in the next stage (i.e. ubiquitous and well defined enough), whether the technology exists, whether the concept exists and whether a prevailing attitude of dissatisfaction with the current mechanism of provision exists. Fortunately there are a range of weak signals you can use to determine this.

When it comes to the genesis of novel components, then by their very nature they are highly uncertain and unpredictable. There is an inverse relationship between future differential value and certainty, which means we always have to gamble. When examining a value chain you have to bear in mind that predictability varies with the state of evolution; I've summarised this in figure 7.

Figure 7 - Predictability and Evolution.

This leads to the question of whether we should be a fast follower or a first mover to change.

Point 7) Should I be a first mover or a fast follower? Being a fast follower to the genesis of a novel component has certain strong advantages, allowing others to expend research and development on the uncertain whilst you cherry pick only that which is starting to evolve and become successful. But should I be a fast follower to a component that is evolving from product to utility?

In direct contrast to novel components, there is a strong advantage in being the first mover to shift from product to utility due to componentisation effects. This is exemplified by a model known as ILC (innovate - leverage - commoditise) or what I call the 'Wardley Thompson Technique'.

By being a first mover to industrialise a component to a utility, then assuming you allow for public consumption of it, you enable other companies to build higher order systems on top of it. Those higher order systems will contain many novel components, and the more efficiently you provide the utility service, the more you will encourage others to build on it by reducing the cost of failure and experimentation.

These other companies are your ecosystem. Fortunately for you, as examples of those novel higher order systems start to diffuse and new, improved versions appear, you will be able to detect this through consumption of your utility service. This enables you to get others to innovate for you (i.e. deal with the uncertain) and then leverage consumption data in your ecosystem to spot success. Once success is spotted, you can then move to industrialise the new components to a utility service.

For example, if you provided utility services for infrastructure (such as Amazon EC2), then as others built novel big data systems on top of this, you could leverage the ecosystem to spot success and commoditise it to a utility service (such as Elastic MapReduce). Whether Amazon uses such a model we won't know, but the model has some profound impacts which are detectable.

Under the model, your rate of apparent innovation, customer focus, efficiency, ability to maximise future opportunity and stability of revenue will all become dependent upon the size of your ecosystem rather than the physical size of your company. Effective exploitation of the model - which requires extensive data analysis of your ecosystem, speed of data feeds (i.e. utility consumption data is far more effective than marketing surveys) and an ability to act - means you can create a company which continuously and simultaneously appears to grow in terms of innovation, customer focus and efficiency at a faster rate than its physical size. Of course, the process of running the model does mean you will occasionally feed upon (or harvest) your ecosystem, either through acquisition or copying. See figure 8.

Figure 8 - Innovate, Leverage and Commoditise
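The 'leverage' step of this model is, at heart, simple analytics on consumption data. A toy sketch (all names, numbers and thresholds below are entirely hypothetical) of flagging higher order components in an ecosystem whose spread and growth suggest they are candidates to commoditise next:

```python
# Toy sketch of the 'leverage' step in ILC: given utility consumption data
# across your ecosystem, flag higher order components whose adoption is
# spreading widely and growing quickly - candidates to commoditise next.
# All names and thresholds here are hypothetical.

def commoditisation_candidates(usage, min_adopters=3, min_growth=0.5):
    """usage maps component -> {company: (last_period, this_period)} consumption.
    Returns components adopted by many companies with strong aggregate growth."""
    candidates = []
    for component, per_company in usage.items():
        adopters = sum(1 for _, cur in per_company.values() if cur > 0)
        prev_total = sum(prev for prev, _ in per_company.values())
        cur_total = sum(cur for _, cur in per_company.values())
        growth = (cur_total - prev_total) / prev_total if prev_total else float("inf")
        if adopters >= min_adopters and growth >= min_growth:
            candidates.append(component)
    return candidates

usage = {
    "big_data_framework": {"a": (10, 30), "b": (5, 20), "c": (0, 8), "d": (2, 6)},
    "niche_widget": {"a": (3, 3), "b": (0, 1)},
}
print(commoditisation_candidates(usage))  # ['big_data_framework']
```

The real advantage lies not in the arithmetic, which is trivial, but in having the consumption data in the first place - something only the utility provider sees.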

As a rule of thumb you always want to be a first mover to industrialise but a fast follower to the uncharted (i.e. genesis of the novel and uncertain).

Point 8) Gameplay is not uniform. Despite many talking about the importance of strategy, the level of gameplay and situational awareness varies wildly between companies. In an examination back in 2011 of 160 different companies, the Players (which demonstrated high levels of situational awareness, strategic gameplay and action) and to a lesser extent the Thinkers (which demonstrated high levels of situational awareness and gameplay but were less prone to action) significantly outperformed the Chancers and Believers (both of which show low levels of gameplay) in terms of market cap growth - see figure 9.

Figure 9 - Strategic Gameplay vs the Use of Openness to compete (action)

More details on this can be found here.

Point 9) This is the tip of the iceberg. I've spent the past decade researching this field and either using the patterns in anger within companies or teaching members of the LEF (a private research group) to manipulate their environment. There's a whole range of highly predictable patterns, from economic cycles (e.g. peace, war and wonder) to how new organisations evolve, along with a mass of different gameplay (from sweat and dump to tower and moat) and common economic effects from punctuated equilibria to the Red Queen. There are also some very good game players out there, along with a number of companies who have shockingly poor situational awareness at the executive layer.

Even basic questions, from the different forms of inertia (see figure 10) to 'culture eats strategy for breakfast', turn out to be complex and often vary with evolution (see figure 11).

Figure 10 - Different forms of inertia

Figure 11 - Culture vs Strategy


We operate in a highly complex environment where situational awareness is critical. Our companies are composed of value chains consisting of masses of evolving components (from activities to data). The means by which we manage, how we govern and even strategic gameplay all vary with how evolved those components are. Even the importance of strategic gameplay relative to culture varies with how evolved the components are.

Understanding your position, the position of your competitors and where you can attack is critical in today's economic climate, but the reality is that many companies appear to have poor situational awareness, which is why these Chancers and their industries are quickly overwhelmed by more skillful Players.

Situational awareness requires an understanding of your environment, and this is not something generic strategy advice can give you; instead it is something you have to acquire through an understanding of your industry. You have to learn to play the game - it is a skill, like playing a game of chess.

The McKinsey post is a generic list of useful stuff e.g. digital firms have a cost advantage, large digital firms have better access to talent, digital leaders are amassing vast quantities of data, digital firms require fewer people to operate and consumers tend to use fewer brands online. 

Yes, it is absolutely correct that there is a difference between traditional and next generation firms (see figure 12).

Figure 12 - Delta between Traditional and Next Generation (2011)

However, the problem with the post (and the same problem with the above list that I produced) is that it might tempt companies to go - 'we need to be more like these digital firms', 'we need to be more like Silicon Valley'.

Don't get me wrong, I'm not having a dig here at Willmott, as the post sets out a reasonable set of changes. The issue is that companies might just adopt it, and here's the rub. By blindly attempting to emulate 'Amazon's example' without good situational awareness, implementing such actions is just as likely to encourage evolution of components in your value chain, undermine barriers to entry, reduce constraints protecting your industry, make yourself a more attractive target for a player to attack by laying down the groundwork for a utility model, and potentially hasten your decline. Playing a game of chess without looking at the board and just copying others' actions is a disaster in the making - it's like a general bombarding a hill because some report says that '67% of successful generals bombard hills'.

You need to think very carefully about your environment before embarking on such actions. There are some extremely skilful players out there, it's easy to get massacred with poor situational awareness - be warned.

Friday, February 14, 2014

Does Maturity Matter?

In 2009, the designer Thomas Thwaites attempted to build a common household toaster from scratch. Beginning with mining the raw materials, he aimed to recreate a product that is normally built from common and highly standardised components and sold for a few pounds in the local supermarket. This ambitious project required “copper, to make the pins of the electric plug, the cord, and internal wires. Iron to make the steel grilling apparatus, and the spring to pop up the toast. Nickel to make the heating element. Mica (a mineral a bit like slate) around which the heating element is wound and of course plastic for the plug and cord insulation, and for the all important sleek looking casing”.

After nine months and at a cost of several thousand pounds, Thomas finally managed to create a sort of toaster. However, along the journey he had been forced to resort to using all sorts of other complex devices – from microwaves to leaf blowers – in order to achieve his goal.

Our society, the wondrous technologies that surround us and those that we create are all dependent upon the provision of standard components. Whenever you attempt to remove this and go back to first principles, such as building your own nuts and bolts for a home-made toaster, the wheel of progress grinds very slowly and becomes very costly.

But nuts and bolts weren’t always a standard component. The invention of the first screw thread, often dated to around 400 BC, is credited to Archytas of Tarentum (428 BC - 350 BC). Early versions of this and the subsequent nut and bolt were custom made by craftsmen, with each nut fitting one bolt and no other.

In the 1800s, the introduction of Maudslay’s Screw Cutting lathe enabled repeated production of uniform nuts and bolts with the same threads where one nut fitted many bolts.  The artisan skill of building the perfect nut and bolt was replaced by more mass produced and interchangeable components.  Whilst those artisans might have lamented the loss of their industry, those humble components also enabled the rapid creation of more complex machinery and new industry.

Volume production of uniform mechanical components enabled faster building of ships, guns and other forms of machinery.  It also allowed for the introduction of novel manufacturing systems that took advantage of these components such as the Portsmouth System (which later became the American System).  Without this change of the artisan nut and bolt to more industrialised and mass produced forms then we would all be following the example of Thomas Thwaites and toasters would be a luxury few could afford.

However, the progression of the nut and bolt wasn’t smooth. Whilst they could be manufactured in volume with interchangeable components, the lack of any agreed standards thwarted general interchangeability. For example, the railways of Great Britain all used different screw threads and, whilst some companies' in-house standards spread within their industries, there was no agreed standard.

In 1841, Joseph Whitworth collected a large number of samples from British manufacturers and proposed a set of standards including the angle of thread and the number of threads per inch. This was rapidly adopted in industry and became known as the "British Standard Whitworth". But how much of an effect could this make? The following quotation from an obituary of Joseph Whitworth in the Times, 24 January 1887, should be fairly illuminating.

“The Crimean War began, and Sir Charles Napier demanded of the Admiralty 120 gunboats, each with engines of 60 horsepower, for the campaign of 1855 in the Baltic. There were just ninety days in which to meet this requisition, and, short as the time was, the building of the gunboats presented no difficulty. It was otherwise however with the engines, and the Admiralty were in despair. Suddenly, by a flash of the mechanical genius which was inherent in him, the late Mr John Penn solved the difficulty, and solved it quite easily. He had a pair of engines on hand of the exact size. He took them to pieces and he distributed the parts among the best machine shops in the country, telling each to make ninety sets exactly in all respects to the sample. The orders were executed with unfailing regularity, and he actually completed ninety sets of engines of 60 horsepower in ninety days – a feat which made the great Continental Powers stare with wonder, and which was possible only because the Whitworth standards of measurement and of accuracy and finish were by that time thoroughly recognised and established throughout the country.”

The standardisation of basic mechanical components had a profound effect in enabling more complex systems such as ships to be built. But all those components had originated as something novel, new, different and without standards.  We live in a world where there’s a constant flow of change, where the novel and different becomes commonplace, standard and mature. These more industrialised components then enable novel systems of greater complexity and the cycle repeats.

In the Theory of Hierarchy[1], Herbert Simon showed how the creation of a system is actually dependent upon the organisation of its subsystems.  As an activity evolves and becomes provided as ever more standardised components, it not only allows for efficiency in use but also increasing speed of implementation, rapid change, diversity and agility of systems that are built upon it.

In other words, it’s faster and cheaper to build a house with more commodity components such as bricks, wooden planks and plastic pipes than it is to start from first principles with a clay pit, a clump of trees and an oil well.  Furthermore the diversity and volume of different housing structures is a consequence of these standard components. This is the same with electronics and every other field you care to look at.  It’s also the same with nature.

This doesn't mean that change stops with the standard components. Take for example brick making or electricity provision or the manufacture of windows, there is a still significant improvement hidden behind the "standard".  However the "standard" acts as an abstraction layer.  The float glass method introduced by Pilkington changed how windows were produced but not what windows were.  Equally, just because my electricity supplier has introduced new sources of power generation (e.g. wind turbine, geothermal) doesn't mean I wake up one morning to find that we're moving from 240V 50Hz to something else.

If the constant operational improvements were not abstracted behind the standard then all dependent higher order systems would need to continuously change. For example, all consumer electronics would need to continuously change as operational improvements were made in electricity supply. The entire system would either collapse in a mess or at the very least technological progress would be hampered. Hence standard interfaces, once they’ve emerged, rarely change. There are exceptions to this but they usually involve significant upheaval and often Government initiatives e.g. changing electricity standards, decimalisation of currency or simply switching from analogue to digital transmission of TV.

The importance of separation by the introduction of an interface is equally relevant with evolution in biology.  The rapid growth and diversity of life is a function of the underlying standard building blocks that have evolved to allow higher order systems.  If there weren’t underlying components from DNA to RNA messaging to transcription to translation to basic cell structures within more complex organisms then you and I would never have evolved in the time frame.  The interfaces provide a separation from the evolution of higher orders to evolutionary improvements of lower orders and are critical to progress overall.

So let us now consider a business. An organisation consists of a mass of activities, practices and data but those don’t stand still as new things are constantly introduced and diffuse – someone invents a telephone, a computer, a fax machine or the nut and bolt.  These new objects not only diffuse but through waves of ever improving examples the activity they represent seems to mature – the custom built nut and bolt becomes the British Standard Whitworth.  It’s this maturation or evolution to a more industrialised form that enables profound change in building more complex systems. 

The humble nut and bolt enabled machinery like generators that in turn enabled standardised electricity supply and this in turn enabled lighting, radio and computing. Hence, when you consider a business it not only consists of a mass of activities, practices and data but all of this is evolving to a more industrialised form and as it does so it enables new activities, practices and data. 

This leads to another question which has a critical importance in understanding change.  What matters more in our society, the invention of something new or the provision of something in a more industrialised form?

I'll examine this in a later post.

[1] Herbert Simon, The Architecture of Complexity, American Philosophical Society, Vol 106, 1962

Start of the series

The start of a journey ...

I'm currently working on a range of techniques to identify and respond to future changes in an oncoming ‘Age of Wonder’.  The title is somewhat of a conceit as there is nothing unique about the changes that are occurring. In fact, throughout history we’ve experienced many 'Ages of Wonder' and a more apt term would therefore be stage because it is a repeating pattern caused by general economic forces.

Behind the work is a question and like all good questions it starts a journey of discovery. The question was "How do we navigate through a future of change?"

In order to properly explore this subject then I need to lay out some ground work on the general forces that drive our society. To begin with, we need to ask ourselves the question of "What is change?"

In the 1962 book Diffusion of Innovations[1], Everett Rogers defined a model for how an innovation is adopted over time among the members of a social system. In this case, an innovation is defined as an idea, practice or object that is perceived as new. The idea of diffusion itself wasn’t new, having first been pioneered by Gabriel Tarde in 1903. However, Rogers developed this work, demonstrating how most changes showed a common S-Curve shape with adoption spreading through common groups (from innovators to early adopters to early majority to late majority to laggards). The only significant difference between innovations was the variance of the slope of the curve (see figure 1).

Figure 1 - A diffusion curve.

This pattern occurs both for the diffusion of an object (e.g. a specific product example of a phone such as Ericsson’s Bakelite Telephone introduced in the 1930s) and the diffusion of the activity that the object relates to (e.g. use of a phone). Whilst incredibly useful, there are a number of key considerations with the idea of diffusion which make it problematic for exploring the future. These are:

1)  The rate of diffusion is not constant: comparisons over time provide a wide range of adoption curves and a general observation that the diffusion of innovations is accelerating.

2)  Not all innovation spreads: even where an innovation has utility (usefulness), a number of factors can influence its adoption. As Geoffrey Moore noted[2] there is a chasm between the early adopters of an innovation and the early majority.

3) Diffusion is not continuous: highlighted by Christensen’s work[3] on disruptive innovation, the diffusion of one innovation can be disrupted by the introduction of a new technology that offers different performance attributes from those established in existing value networks.

4) Diffusion of an activity consists of multiple waves: innovations tend to spread through multiple waves of improved objects such as products. In the early stages of a technological change, this rate of improvement tends to be slow and then accelerates until reaching a more mature and slow improving stage[4]. One consequence of the diffusion and maturing of a technological innovation is that increased information about the technology reduces uncertainty[5] about the change. Each improved version increasingly adds definition, eventually providing a system that can be considered feature complete, mature and generally well understood.

Hence whilst diffusion is a powerful concept, it unfortunately doesn’t provide us with a means of understanding future change i.e. we cannot say how something will mature, only that multiple waves of diffusion over an unspecified length of time are involved in something maturing.
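To make the shape concrete, the S-curve can be loosely sketched as a logistic function. This is a common way of modelling cumulative adoption rather than anything specific to Rogers' data, and the parameter names below are my own invention:

```python
import math

def adoption(t, k, t_mid):
    """Fraction of a population that has adopted an innovation by time t.

    k     - slope of the S-curve (the main thing Rogers observed
            varying between innovations)
    t_mid - time at which half the population has adopted
    """
    return 1.0 / (1.0 + math.exp(-k * (t - t_mid)))

# A steep curve (fast diffusion) versus a shallow one (slow diffusion):
fast = [adoption(t, k=1.0, t_mid=5) for t in range(11)]
slow = [adoption(t, k=0.3, t_mid=5) for t in range(11)]
```

Both curves pass through 50% adoption at t_mid; only the slope differs, which is exactly the variance noted above.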

Of course, this leads to our next question "Does maturity matter?"

I'll examine this in the next post.

[1] Everett Rogers, Diffusion of Innovations, 4th Edition, Free Press, 1995
[2] Geoffrey A. Moore, Crossing the Chasm, Harper, 1991.
[3] Clayton M. Christensen, The innovator's dilemma. Harvard Business Press, 1997
[4] D. Sahal, Patterns of Technology Innovation, Addison-Wesley, 1981
[5] Rogers and Kincaid, Towards a new Paradigm of Research, 1981

Thursday, February 13, 2014

The slow but overwhelming flood of progress

Back in 2004 - 2005, development in Fotango (a company I used to run) had improved significantly through the efforts of Artur Bergman (now CEO of Fastly), James Duncan (now with UK Gov) and a host of others. By 2005:

1) our infrastructure was provided by a virtualised environment known as the Borg. A mass of racks providing standardised virtual machines (i.e. 'commodity' infrastructure provided with Xen), all controlled by the Borg Queen whose job it was to create virtual machines on demand, configure them (we happened to use CFEngine), install them and monitor the applications and estate health.

2) we'd been using test driven development for a considerable amount of time and had built extensive test scripts for our applications. 

3) the process of delivering applications to live was relatively simple. A developer pushed an application from the code repository (we used Subversion; Chia-liang Kao used to work for us) to production, the test scripts confirmed the system was in an acceptable state and the Borg Queen took over. The configuration files determined which other services were consumed and the destination of the system, and the Borg Queen managed graceful replacement and rollback if necessary. We had nightly rebuilds and validation of the entire estate.

4) the organisation itself was broken into three core groups which had started with IT. We had development (pioneers) who created the novel. Frameworks (settlers) who identified common patterns for provision of new web services. Systems (town planners) who managed the estate in terms of core systems and services (i.e. Borg, testing agents, monitoring agents etc).

5) we recruited from around the world and mined open source communities (especially the Perl world) for talent. LPM used to even call us the Borg as we assimilated so many ... well, it's your own fault for being so good.

6) the company had extensively used web services for many years, most of the systems ran on web services and we'd even started experimenting with the idea of providing a more extensive list of public web services. Later on in 2005 this became our utility platform service known as Zimki, one of the first public PaaS environments, which provided a server-side JavaScript development environment with common services (a NoSQL-like object store etc), migration between different Zimki installations, automagic translation of functions to web services and even billing information down to the function. The closest you would come to Zimki today would be a combination of CloudFoundry and the Node.js buildpack. The vision behind it was all 'Pre-shaved Yaks' and there is a long list of people to thank for its creation, especially Tom Insam (who later built Dopplr and Lanyrd) and Mark Fowler.
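The delivery flow in point 3 above (push, test, graceful replacement, rollback) can be sketched loosely as follows. All names are invented for illustration; this is not Fotango's actual code:

```python
# A generic sketch of a push -> test -> deploy-with-rollback pipeline.
def deploy(app, run_tests, activate, rollback):
    """Deploy an app only if its test suite passes; roll back on failure."""
    if not run_tests(app):
        return "rejected: tests failed"
    try:
        activate(app)    # graceful replacement of the old version
        return "deployed"
    except Exception:
        rollback(app)    # restore the previous known-good state
        return "rolled back"

# Example: a passing test suite and a successful activation.
result = deploy("photo-service",
                run_tests=lambda a: True,
                activate=lambda a: None,
                rollback=lambda a: None)
```

The point of the shape is that the human stops at the push; tests and the controlling system (the Borg Queen, in the description above) decide what happens next.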

But even in 2005 we had our own legacy IT, a lot of horrors that remained from the past such as an ill-fated SAN effort and a tortuously complex past platform. We'd also gone through some painful lessons e.g. agile everywhere. However, we weren't alone in this and we continued to improve, implementing by end 2006 a more open working environment, hackdays (every other Friday), agile design (paper prototyping), BYOD (in fact we had a store cupboard of help-yourself Macs in case you needed one), events (opening up our Old Street offices to the local tech community ... this was back in the days before Old Street became the technical powerhouse it is today) and a host of other techniques (including mapping).

For us, all the above was just normal, it was how stuff was done. So why do I mention this? 

Well, today all the rage is about continuous deployment, design for failure, cloud, server-side JavaScript, PaaS, NoSQL, open source, BYOD and agile techniques. Whilst these fields have progressed extensively in the last decade, it always surprises me to hear how many companies are so far behind the game. It does appear that you can measure the delta between the leading edge and the laggards in terms of decades of technology. It's like Enterprise 2.0, a term coined in 2006 by Andrew McAfee to describe a set of technology changes that Euan Semple was busily implementing in the BBC before 2001 and with which some companies are only just getting started today.

However, the really fascinating part of this is how the changes tend to be exponential and create a punctuated equilibrium with the past. So what starts as a trickle becomes a flood. That's the thing about cloud, devops, big data, enterprise 2.0 and all these related topics. Today is not about the beginning of a subject but the overwhelming flood that you have to adapt to. Alas for some companies it's already too late.

Out of interest, it was the work we did back then that enabled me to develop the models of how organisations evolve and later test that this was happening. I mention that because Matt Asay's article on the Future of DIY IT is spot on. That flood is upon us and 'traditional IT is dead. Not just a little bit dead. Dead dead.'

Ok, technically it won't completely disappear (things rarely do) as it'll whimper on in some niches for quite a time. There are still niches for modern-day swordsmiths but they're niches. Have no illusions though, the flood is here and like it or not, you're going to have to change.

Tuesday, February 11, 2014

This is the age of disruption ... oh, give it a rest.

What we can demonstrate

1) Companies consist of many value chains comprising components (activities, practices and data) that evolve due to competition.

2) As those components evolve, their properties change from uncharted to industrialised.

3) As those components evolve we develop inertia to change.

4) The interplay of inertia and competition creates three economic states for any component - peace, war and wonder.

5) When those components have a broad effect (i.e. are part of many value chains) then those changes are seen at a macro economic scale. We call these ages or more appropriately Kondratiev waves.

6) Every age begins with commoditisation of a pre-existing act, disruption of past industries stuck behind inertia barriers (the casualties of war), co-evolution of practice (leading to new forms of organisation), an explosion in higher order and novel systems (the wonder), reduction of barriers to entry and rapid increases in unmodelled data.

7) After a period of re-organisation, which occurs during the war and wonder stages, the affected industries settle down and the novel activities created mature. Past practices and companies unable to adapt die off.

This is how, back in 2005, we knew that cloud computing (commoditisation of a range of IT acts) would lead to explosion in higher order systems, co-evolution of practice (devops), rapid increases in unmodelled data (Big Data), new forms of organisation (next generation), reduction of barriers to entry and disruption of past industries unable to adapt.

This is nothing more than a repetition of every cycle that we've been through countless times before ... and guess what, commoditisation of manufacturing through 3D printing will have exactly the same effects. Figure 1 gives a simplified overview of that repeating cycle.

Figure 1 - A repeating cycle (2007-2008 research)

Now, sometimes a means for communication becomes more of a commodity (postage stamp, phone, internet etc) and as a result the entire process of evolution speeds up. It's not that we've become more innovative but instead the speed at which something genuinely novel becomes a commodity has accelerated.

Ok, so what's wrong with the 'Age of Disruption'.

For starters, it's not an age. It's nothing more than a phase of an economic cycle. If you really want to call it the 'Age of Disruption' then you should call it the 'Age of Disruption v6.0' because it's about the 6th major one we've been through in the last 300 years. Actually, we've been through several hundred more smaller and localised ones, so in construction you could probably call it the 'Age of Disruption v57.0' - it's a really poor title.

It's also worth remembering that the reason why companies are being disrupted is not because of some unexpected change. This form of disruption is highly predictable and preventable. Alas, those companies being disrupted for the most part share one common characteristic - very poor strategic play.

In figure 2 (from a project a few years back), I examined the level of strategic play versus willingness to use openness as a competitive tool among more than 100 different companies. The thing worth noting is that those companies showing a high level of strategic play (in particular the Players) had high market cap growth over the last seven years. Those which showed low levels of strategic play (in particular the Chancers) had low levels of market cap growth, often negative and with some going bankrupt.

Figure 2 - Market Cap, Strategic Play (2011-2012 research)

Notes on graph

1) Each quadrant is given a label - thinkers (high strategic play, low levels of openness used as a means to compete), players (high strategic, high open), believers (low strategic, high open) and chancers (low strategic, low open)

2) I've marked two groups in grey, the top group shows higher market cap growth and the bottom group shows lower levels of market cap growth. The players were strongest and the chancers were weakest.

3) Size of bubbles equals volume of companies.

Whilst culture is important all the time, strategic gameplay appears more important in specific states of the economic cycle. IT is currently in one of the states where it matters: the state of war.

However, we're already entering the state of wonder; those past players will be cleared out and a new dominant form of organisation (as per Fordism in the Electricity Age or the American System in the Mechanical Age) will emerge. This new form, which we call the next generation, has the following characteristics (see figure 3).

Figure 3 - The Next Generation (2010-2011 research)

The first problem with the 'Age of Disruption' is it implies something new. It isn't. It's just a repetition of what has happened before governed by exactly the same mechanics (competition, evolution and inertia) with the same results. So, please take the effort to make it clear that this is not unusual.

The second problem is that this type of disruption is predictable. When we talk of disruption as in product vs product substitution then that's hard to predict and the interaction of culture and inertia becomes critical for survival for a company. In this case, because the change is predictable then the issue of culture and inertia are solvable well in advance and strategic gameplay is what matters most. 

The fundamental reason why companies WILL be disrupted by this predictable change is not because of culture and inertia (i.e. 'we need to be more like a Silicon Valley company' won't help you) but instead because you've got a bad case of poor strategic play.

So, if you want to call it an age (which it really isn't) then a more accurate title might be the 'Age with quite a number of out of their depth CEOs driving companies needlessly to disruption'. Now, obviously this isn't going to be a popular title and so I'm sure the 'Age of Disruption' will stick and numerous other factors (culture, inertia, unexpected change, customers) will be blamed as the principal cause of failure.

However, just remember that this 'Age of Disruption' is a repeating phenomenon throughout history and don't confuse it with the type of unpredictable product vs product disruption (such as the different format hard drives that Christensen talked about).

Oh, and what follows next? An age of wonder, followed by an age of peace, followed by an age of war (aka age of disruption), followed by an age of wonder, followed by .... and on, and on, and on.

A pet favourite - Inertia

Organisations consist of a mass of evolving activities, practices and data. As those components evolve from uncharted to industrialised then their properties change. It's that change of properties which means that one size fits all methodologies don't work despite organisations endlessly pursuing simple techniques (agile vs six sigma, insource vs outsource, push vs pull, network vs hierarchical, Hayek vs Keynes etc).

Figure 1 provides a simple view of organisation demonstrating a set of components at different states of evolution, for more on mapping read here or watch the detailed video on the LEF site. Figure 2 provides a list of common characteristics.

Figure 1 - An organisational map

Figure 2 - Changing properties

Now maps are not only useful for increasing situational awareness and an understanding of how things should be managed but they can also be used for various forms of scenario planning, strategic gameplay and economic learning.

For example, by using mapping then it can be quickly discovered that as components evolve they also go through different economic states - one of peace, one of war and one of wonder.  Depending upon how widespread the components are in other value chains, then sometimes these states manifest themselves as macro economic waves (known as Kondratiev waves) but in the majority their effect is usually much more localised (e.g. a specific industry). More details on this can be found here.

The different economic states are important because in the state of peace sustaining change tends to exceed disruptive change, whilst in the state of war disruptive change tends to exceed sustaining. The states also have different levels of predictability; for example, the war state is highly predictable in terms of what is going to happen but not when (that depends upon actors' actions). This means you can prepare for the war state many years in advance, yet we often find companies being disrupted by highly predictable and ultimately defensible change. Cloud computing is an example of this.

You can also very roughly characterise the different states with wonder being breakthrough, peace being incremental and war being disruptive. Naturally our response to this depends upon the components in our value chain, how evolved they are, competitors actions and the economic state. This is all part of gameplay.

Unfortunately, if you can't see the map then business is like playing a game of chess without seeing the board - every action is haphazard - sometimes outsourcing works, sometimes it doesn't. It doesn't have to be like this.

The economic states are also useful in predicting how organisations will evolve. It's no coincidence that every major age starts with commoditisation of a pre-existing act, that commoditisation results in a state of war and co-evolution of practice, and that those companies which survive have a different set of characteristics. The Electricity Age didn't start with the Parthian battery but with Tesla and Westinghouse and it led to Fordism. The Internet Age gave us Web 2.0. The Mechanical Age gave us the American System. Cloud has given us the Next Generation.

With experience of mapping, it turns out that many aspects (not all) of change are surprisingly predictable, defendable against and manageable.  Even those areas of high uncertainty such as the genesis of the novel and new can be managed by exploiting others (see ecosystems).  Of course, for those who don't look at the chessboard then everything seems highly confusing, random and difficult to understand.

In such circumstances, terms like ecosystem are completely misunderstood, strategy is often a mess of operational, tactical, purchasing and implementation details with little or no 'why'. Managers grab onto truisms like 'culture eats strategy for breakfast' or 'be a fast follower' or simply grasp the latest fad - agile vs six sigma,  'Open by default' without any understanding. These Chancers only survive because competitors are equally as blind. 

These Chancers also tend to get caught out by inertia to change. Now inertia is critical to gameplay and can be used to set up an environment where the competitor's biggest threat becomes itself.  The use of inertia is one of my favourite specialities. It's far beyond the more common tactical plays (tower and moats, ecosystems, alliances etc), the dark arts (manipulation of constraints, misdirection etc), altering competition (the use of open as a weapon, changing buyer vs supplier relationships, lowering barriers to entry) and the use of effective management. There's a special type of wicked delight that comes from setting in play a situation where a competitor self implodes.

However inertia is much more than just a tool; it is also a key part of governing economic states. The states occur due to an interplay of competition (supply and demand) which drives evolution, combined with the build up of inertia due to existing models, relationships, business and practice. Understanding when and how to use inertia is essential to the finer points of gameplay. Of course there are many forms of inertia.

In figure 3, I've characterised many of the main forms of inertia. All of them (with practice) can be exploited and certainly are things which an organisation should watch for.

Figure 3 - Types of Inertia.

Friday, February 07, 2014

Something for the future II

A list of companies in two groups. I'm putting this here in order to return to the lists in 2020.

It's a prediction test (useful for me, hence I'm putting it somewhere public) but probably not useful for anyone else. There is no obvious relevance to the order (it's actually two different scoring systems).

This is simply two lists.

[System 1.0 No Change]

Group 1
  1. Amazon
  2. Google
  3. Samsung
  4. BP
  5. Baidu
  6. China Telecom
  7. EMC
  8. Lloyds Bank
  9. ARM
  10. Netflix
  11. eBay
  12. Yahoo
  13. Intel
  14. Facebook
  15. BAE Systems
  16. Lenovo
  17. Salesforce
  18. Time Warner
  19. Huawei
  20. Canonical
  21. Citrix
  22. Fastly
  23. Bromium
  24. Opscode
  25. Juniper Networks
[System 2.0 Moderate Change]

Group 2
  1. DuPont
  2. GSK
  3. Microsoft
  4. WalMart
  5. Berkshire Hathaway
  6. Goldman Sachs
  7. Barclays Bank
  8. Walt Disney
  9. Cable and Wireless
  10. Canon
  11. Dell
  12. SAP
  13. IBM
  14. Twitter
  15. Cisco
  16. Oracle
  17. CouchBase
  18. HP
  19. Puppet Labs
  20. RedHat
  21. Apple
  22. PistonCloud
  23. Rackspace
  24. Nokia
  25. Zynga

Thursday, February 06, 2014

OpenStack ... this is the year! Again!

Every year, I hear people tell me that this is the year that OpenStack will make it big and challenge Amazon.

Every year, I repeat the same response.

If OpenStack is ever going to be big, it's going to need to create a competitive public market. It needs a player (or players) with a brutal commodity focus and smart gameplay to make a massive investment (today, probably around the order of $5 - $10 billion+) in order to create public clouds that are AWS clones. It's going to need those players to lead the project and go head to head with Amazon.

It doesn't need a company trying to enforce its own API regardless of the market as part of an on-ramp to its own services.

It doesn't need a primary focus on private cloud, a transitional space which will come under increasing pressure over the years. It must look at the public market i.e. AWS and GAE.

It doesn't need endless discussions and a collective prisoner's dilemma differentiating with each other.

It doesn't need a free-for-all and a lack of strong leadership.

As more time passes without that player (or players) emerging then, despite all the noise about OpenStack, my view remains the same.

Tuesday, February 04, 2014

Context, Situation, Components, PaaS, Dead or Alive … it's all semantics isn't it?

tl;dr Caveat Emptor

Figure 1 – HS2

Figure 1 provides a visual view of an IT system related to the HS2 project.  The map is created by taking a value chain of components required to meet some user need and then plotting those components against how evolved they are. This is not a permanent view but a snapshot in time because the many components that make up the system are evolving due to competition (both supply and demand side).  

The components themselves include activities, practices and data and as each component evolves it moves from one defined set of characteristics (known as uncharted) to another (known as industrialised). Since the characteristics of the component change with evolution then how you treat a component depends upon how evolved it is.
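For those who prefer code to pictures, a map of this kind can be loosely sketched as components positioned on two axes. The component names, stage labels and treatment rule below are my own illustrative assumptions, not a formal schema:

```python
# A loose sketch of a map: each component has a position in the value
# chain (visibility to the user) and a stage of evolution.
STAGES = ["genesis", "custom-built", "product", "commodity"]

class Component:
    def __init__(self, name, visibility, stage):
        self.name = name              # e.g. "Web Site", "Infrastructure"
        self.visibility = visibility  # 1.0 = visible user need, 0.0 = hidden
        self.stage = stage            # index into STAGES

# A toy snapshot loosely echoing figure 1 (all values invented):
snapshot = [
    Component("User need",      1.0, STAGES.index("custom-built")),
    Component("Web Site",       0.8, STAGES.index("product")),
    Component("Infrastructure", 0.2, STAGES.index("commodity")),
]

def treatment(c):
    """Situational awareness: how evolved a component is suggests
    how it should be treated."""
    if c.stage < STAGES.index("product"):
        return "build in-house (agile)"
    return "consume as product/utility"
```

Because the snapshot changes as components evolve, any such structure is a point-in-time view, not a permanent description of the system.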

On Situational Awareness
Situations or events occur in a context i.e. there are surrounding facts pertaining to the actual event. Being aware that an event has context is known as contextual awareness. Being aware of what that context was is known as situational awareness.

Knowing that the state of evolution impacts characteristics and hence changes how a component should be treated is known as contextual awareness i.e. we know that the context, the state of evolution, has an influence. Knowing that a component like Infrastructure is in a commodity stage of evolution is situational awareness i.e. we know what the context is and how it should be treated.

Contextual and Situational awareness are not the same.

On Composability
Each component is normally part of one or more value chains.  The value chain described by the map is therefore composed of many components and we describe it as being composable.

However each value chain is normally a component of one or more other value chains i.e. the output of one (e.g. brick manufacture) is normally a part of another (e.g. housebuilding). Hence the entire value chain may in fact be part of a larger composable system.

Furthermore the components of the map may indeed represent their own value chains.  Hence when you look at the map in figure 1, the component ‘Web Site’ is in fact likely to be an entire value chain consisting of many components (from content, data to web farm).

On Componentisation
From figure 1, at the top of the value chain is the user need that we are attempting to provide. At the bottom of the value chain are the myriad of sub components that are consumed to enable this.  There is a link between evolution and value chain.  It is known as componentisation (from Herbert Simon's Theory of Hierarchy). 

As components evolve to provide ever more mature and standard components then they enable the rapid development of higher order systems i.e. standard nuts and bolts enabled machines, standard building materials enabled housebuilding, standard electricity provision enabled consumer goods.

Syntax and Semantics within a Composable System
The components of a system need to interact (i.e. communicate) with each other. There are two important forms of this interaction – semantic and syntactic.

Take a simple tap. The tap has certain physical properties such as size and weight and an interface for use i.e. an angular force. We apply a clockwise force to the tap and it turns (we hope). The syntax refers to the interfaces i.e. the method by which we communicate. In this case the message between one component (such as ourselves or some controlling machine) and the tap is through the application of angular force.

Semantics refers to the understanding of meaning. For example, I apply a clockwise force because I wish the tap to turn off.  My meaning is ‘turn off’, the method of communication is ‘clockwise force’. Whether the tap actually turns off or on depends upon how it’s designed and the screw thread. So it’s quite possible that I might mean ‘turn off’, I apply a clockwise force to convey this ‘message’ but the tap ‘understands’ it to mean turn on.

In the above the syntax might be understood but the meaning is not. Hence when talking about a system we often refer to the level of syntactic and semantic interoperability between components.

In computing terms, syntactic interoperability refers to such things as parameter passing mechanisms and timing assumptions and relates to the ability of one component to communicate with another. Semantic interoperability refers to the issue of common understanding or meaning of what is communicated between the different components e.g. when you pass data to an API that the receiving system understands the data in the same way.
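As a rough illustration of the distinction (the class and method names here are invented for the example, not from any real API), two taps can share exactly the same syntax yet invert the semantics:

```python
class ClockwiseCloseTap:
    """A tap whose semantics are: clockwise force means 'close'."""
    def __init__(self):
        self.is_open = True

    def apply_clockwise_force(self):   # the syntax: how we communicate
        self.is_open = False           # the semantics: what it means here


class ClockwiseOpenTap:
    """Same syntax (interface), opposite semantics (meaning)."""
    def __init__(self):
        self.is_open = True

    def apply_clockwise_force(self):
        self.is_open = True            # clockwise means 'open' for this tap


def turn_off(tap):
    """A controller whose meaning is 'turn off', conveyed via the shared syntax."""
    tap.apply_clockwise_force()


a, b = ClockwiseCloseTap(), ClockwiseOpenTap()
turn_off(a)   # syntax understood, meaning understood: the tap closes
turn_off(b)   # syntax accepted, meaning inverted: the tap stays open
```

The controller's call succeeds on both taps (syntactic interoperability) but only one tap does what was meant (semantic interoperability).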

On Substitution
Whilst composability is the ability to assemble components into various combinations of systems,  substitution is the ability to substitute one of those components for another.

Take any Meccano set. You have a mass of different components that can be assembled through an instruction set (a booklet on how to build) into various forms with components which have interfaces (e.g. application of angular force) and which operate as expected (e.g. clockwise force means tighten nut). 

If you look at the set you also have many duplicate items i.e. many identical nuts and bolts with the same apparent properties. However, the instruction set doesn't tell you which one of the identical pieces to use at a specific point and instead you can use any of the 'identical' pieces. This is a concept known as substitution.

I quote 'identical' because the components aren't actually the same, they are just different instances (substantiations) of the same thing i.e. you're not replacing a nut or bolt with the same nut or bolt but with a different nut or bolt which is hopefully identical. Substitution simply refers to changing one substantiation of a component in a system for another substantiation of the same component. Syntax and semantics are again important here.

For example, I can substitute one substantiation of a component (i.e. a tap which is opened by clockwise force) for a syntactically compatible substantiation of the same component (i.e. another tap operated by angular force) whose semantics are different and which hence operates in a different way (i.e. anti-clockwise force to open rather than clockwise). When you substitute a component for one that is not syntactically and semantically compatible then this often requires a change to the overall system. Such a change means work.

For example, suppose you have a chemical plant and you change one component for a syntactically but not semantically compatible version. Well, at the point your control system might want to cut off flow to a reaction chamber, you might get a nasty surprise and hence you're going to have to spend a bit of time adapting the entire system to this changed component. 

The greater the degree of syntactic and semantic compatibility that exists between different versions of the component the less work is needed to change a system. Since work normally involves time and resources then the sensible answer is not to redesign a plant but to use a component which is compatible (i.e. an identical replacement).

The same effects are important in computing. For example, let us assume I'm using Amazon EC2 and I decide to change one m1.large machine instance for another. If the two instances are not syntactically and semantically compatible then this will require a change to the overall system, including management systems, orchestration etc.

Fortunately with Amazon EC2 both machine instances are syntactically and semantically compatible from the point of view of the user. This is even true across regions. Hence a machine instance in one region uses the same API and operates in the same way as one in another region, as opposed to there being different EC2 APIs for different regions.

Of course, if they weren't syntactically and semantically compatible then the work needed could be alleviated by the introduction of a translation system i.e. an abstraction of the actual interfaces and provision of a common interface which translates to the various incompatible forms. This is always inefficient compared to compatibility between the different substantiations of the same component. 

I emphasise 'from the point of view of the user' because it is entirely possible that the systems that Amazon runs in different regions aren't actually syntactically and semantically compatible and the Amazon EC2 API is acting as a translation layer to different underlying interfaces. From the point of a user you won't know unless Amazon tells you. 

Unfortunately, whilst we have high degrees of compatibility within and between different regions of AWS, there are varying degrees of compatibility between AWS and other cloud providers. Hence you're always faced with either the work of substitution or the use of a translation layer, unless you stick with the one provider.
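A minimal sketch of such a translation layer, using entirely hypothetical provider interfaces (ProviderX, ProviderY and their methods are invented for illustration, they are not real cloud APIs):

```python
class ProviderX:
    """One provider's (made-up) interface for launching compute."""
    def boot_server(self, size):
        return ("X", size)


class ProviderY:
    """Another provider's (made-up) and incompatible interface."""
    def create_vm(self, flavour):
        return ("Y", flavour)


class ComputeAbstraction:
    """A translation layer: one common interface over incompatible syntaxes.
    Every call pays a translation cost, which is why native compatibility
    between substantiations is always more efficient."""
    def __init__(self, backend):
        self.backend = backend

    def launch(self, size):
        # translate the common call into each backend's own syntax
        if isinstance(self.backend, ProviderX):
            return self.backend.boot_server(size)
        return self.backend.create_vm(size)


print(ComputeAbstraction(ProviderX()).launch("large"))   # ('X', 'large')
print(ComputeAbstraction(ProviderY()).launch("large"))   # ('Y', 'large')
```

The user sees one interface, whether or not the substantiations underneath are compatible, which is exactly the point made above about not knowing what sits behind an API unless the provider tells you.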

So why would you consider using another provider? Well, substitution is also important in business terms for various reasons such as pricing competition, balancing buyer and supplier power and second sourcing options.  However, substitution is equally important in economic terms. 

If we consider componentisation and its ability to enable us to rapidly develop higher order systems then that depends upon three factors :- 

1) higher order system being composed of components
2) interactions between the components
3) substitution of components for different compatible instances
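These three factors can be sketched together in a toy model (the Bolt and Plate names and the M4 thread check are invented for illustration):

```python
class Bolt:
    """A standard component type: shared syntax and semantics across instances."""
    thread = "M4"


class Plate:
    def __init__(self):
        self.fixings = []

    def attach(self, bolt):            # (2) interaction between components
        assert bolt.thread == "M4"     # the agreed interface
        self.fixings.append(bolt)


def build_bracket(bolts):
    """(1) a higher order system composed of lower order components."""
    plate = Plate()
    for bolt in bolts:
        plate.attach(bolt)
    return plate


# (3) substitution: any four 'identical' substantiations will do, in any order
bracket = build_bracket([Bolt() for _ in range(4)])
print(len(bracket.fixings))   # 4
```

Because every Bolt instance is a compatible substantiation of the same type, the build plan never needs to name a specific bolt, which is the Meccano point made below.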

Without the ability to replace a component with a different substantiation of the same component, then in Meccano terms there would only be one instance of every type of nut and bolt i.e. every nut and bolt would be different. It would be the equivalent of having only one instance of a brick and hence every brick being different.  Under such circumstances it would be impossible to have common architectural plans. You could not rebuild any model that I built as you'd have to use different components. This would incur severe costs in terms of the work of building any higher order system.

We even have a name for highly interoperable components that can be substituted for compatible versions - we call these commodities.  It's the provision of commodity components like bricks, electricity, nuts and bolts that has enabled rapid higher order system development and created the wealth of architectural building, consumer electronics and mechanical devices that we experience today.

It should be noted that large degrees of variation in the compatibility of underlying subsystems can have seriously negative effects on our ability to build. We call this sprawl. However, one nut and bolt doesn't fit all purposes (i.e. we require specific properties for medical components) and there are also issues of systemic failure (i.e. if all rice was the same type then the entire species could be eliminated by a single type of pathogen).

Hence for reasons of stability and agility then systems normally tend towards a limited range of types of a component with each type having defined semantic and syntactic interoperability with other components of the system. Each type of component also has syntactic and semantic compatibility between multiple substantiations of the same component ideally through multiple sources. 

Hence with nuts and bolts we have a range of standard types with defined properties.  Each standard type is produced in volume with ‘identical’ nuts and bolts produced by multiple providers.

It's important to understand that these commodities represent a limitation of choice i.e. there is a limited range of types of bricks or nuts and bolts. It's that limitation of choice which enables our agility in building higher order systems.

On Degeneracy
There is another very important term we also need to consider and this (in biology) is known as degeneracy.  It's the ability of one component to take on the role of another component within a system. In engineering terms, it's the ability to redeploy a system for another purpose such as turning your refrigerator into a heating device.  Degeneracy is very important for adaptability to changing circumstances i.e. turning a wagon train into a defensible but makeshift fort.

On Context
If we look at figure 1 again, we can see it contains many components interacting with each other and higher order systems built from lower order subsystems. However, the overall characteristics of the components change as they evolve.

In the genesis state, a component is relatively unique. Our models of understanding are only just developing (i.e. it's a time of exploration). These components are not ideal for building higher order systems but they can consume lower order components. They generally show low levels of syntactic and semantic interoperability with other components and there is little or no substitution.

As the component evolves (due to competition) and we start to see custom-built examples then our model of understanding of what this component is matures.  In this stage we normally see early forms of interoperability with other components along with attempts at building higher order systems with it. For example, the early custom-built generators (such as Hippolyte Pixii's) were used to conduct all sorts of experiments in creating higher order systems like lighting.

As the component continues to evolve then we start to see the first products. Syntactic and semantic interoperability with other components starts to improve. Increasingly the product is used as a component of something else, for example Siemens generators being used to power machinery. Our models of understanding of what the component is become reasonably mature and even common understanding appears with expected norms of behaviour. We see early examples of substitution but syntactic and semantic compatibility between different substantiations of the same component from different providers is rare.  However, the importance of communication with other systems often leads to standards for communication between components. Hence whilst products can often be used and communicated to in the same way, substitution of one for another is often complex. 

As the component continues to evolve it eventually becomes more of a commodity and suitable for utility provision. Whilst our understanding of the component is very mature and expected norms commonplace, there is a period of transition during this change as we move from a product mentality (one of feature differentiation) to a commodity mentality (one of operational efficiency).  In this stage, syntactic and semantic interoperability with other components is well established. Syntactic and semantic compatibility between different substantiations of the same component from different providers develops strongly over time.  This provision of standard forms of the component with standard interfaces enables a rapid acceleration of building higher order systems.

The connection between these is provided in figure 2.

Figure 2 - Evolution, Syntax and Semantics.

The Importance of Evolution
It’s important to understand the process of componentisation and how evolution enables this to happen in order to make sense of the change that occurs around us in business.  If you have any doubts about evolution then I suggest you pick a commodity item and go to your local library and look up its history. You’ll find that the types of publications around the item have changed over time from ‘wonder’ through ‘building’ through ‘operation and maintenance’ through to ‘use’. I’ve expanded the certainty axis from a standard evolution graph (figure 3) into figure 4 in order to give you some pointers.

Figure 3 – Evolution 

Figure 4 – Evolution and Type of Publication

There are those who would have you believe that evolution doesn’t exist and that the progress from genesis to commodity doesn’t happen or it's governed by some magic that only they know the secrets of.  Contrary to such mysticism, evolution is not a belief but instead a model of a surprisingly simple, repeatable and discoverable process driven by competition. You can discover it for yourself by taking that trip to the library and simply looking at how things have changed e.g. from early abundant guides on 'How to Build a Radio Set' to later dominance by use such as radio listings like the Radio Times. 

You live in a world where yesterday's rare wonders become today's invisible, commonplace and commodity subsystems.

Evolution also has impacts. It creates cycles of change, it causes inertia, it drives things to a more standardised form, it can be manipulated and its course accelerated through open means. If you don’t understand how things evolve then it’s practically impossible to gain strong situational awareness in business. The lack of this has demonstrable negative impacts.

A Case in Point
I wanted to specifically outline the importance of limitation of choice in the above because there is a current vogue for arguments over ‘PaaS is dead’, ‘App Containers are PaaS’ and ‘App Containers are the future of PaaS and all other PaaS is old hat’. 

Most of these arguments appear to be based upon a flawed understanding of evolution, componentisation and the importance of limitation. So, I want to first look at an example of what a PaaS should be – such as Heroku, Google App Engine or Cloud Foundry. 

If you examine a system like Cloud Foundry then its focus is on the developer rapidly creating higher order systems.  This is achieved through a limitation of choice in underlying components i.e. specific buildpacks, defined services for common activities etc. The focus of the developer is pushed towards writing code, building data and consuming services.

Of course, with any new system that is built then the code and data can be packaged into a product and ultimately, if Cloud Foundry observes the ecosystem carefully then new services can be determined from this and provided to all.

Now, containerisation per se (examples being Docker, Warden, LXC) is a reasonable approach to the isolation of compatibility issues in underlying infrastructure systems. PaaS environments like Cloud Foundry use containerisation under the covers.  I've summarised this in figure 5 and it's an example of what I would describe as a strong PaaS play combining componentisation, limitation and consideration of how things evolve.

Figure 5 – PaaS and Limitation

However, an alternative view is the idea of App Containerisation. Unfortunately, containerisation is often conflated with shipping containers and how that industry changed through their use. However, shipping changed not because of the introduction of containers (there were a plethora of different shaped cargo containers in the past) but through the limitation of choice and the introduction of highly standardised containers.

The problem with the idea of App Containers is that the application, the framework and the configuration are all contained within it; rather than limiting choice it allows for a wide range of permutations (see figure 6). This is often promoted as flexibility but such flexibility is the antithesis of componentisation and of any desired agility in creating higher order systems. 

Figure 6 – App Containers

Admittedly App Containers are better than what exists in some firms today i.e. building everything yourself where everything is flexible, but they are far weaker in comparison to a PaaS that limits choice for what are common activities. Now, there are ways of solving the problems of App Containers by harvesting the ecosystem to identify common App Containers and weeding out all counter examples but that's a highly skilled and difficult game.  Given that App Containers are often touted for Private PaaS environments, such harvesting is also unlikely to occur in many firms.

In all probability, just as in the past when every major company seemed to have its own home-grown Linux distro, we're likely to see major companies with their own permutations of application, development framework and configuration for every single activity, even when it's common. The sprawl will become horrendous and that sprawl will have negative consequences and cost.

The Pig and the Brick House
The best example of this was a thought model used by Herbert Simon to explain componentisation and it's based upon the story of the three pigs and the big bad wolf. Imagine you're a pig and you've decided to build a brick house.

Now unfortunately you've only time to do twelve things before the big bad wolf appears. Those things could include making a brick or cementing two things together. Anything which isn't completed and stable is blown down. The house has four walls, each wall is ten bricks high and ten bricks long.

Now, unfortunately you need to start by building bricks from raw ingredients but fortunately each brick is a complete unit. So in the first turn, before the wolf turns up, you can build twelve bricks. You'll need 4 (walls) x 10 x 10 (bricks) = 400 bricks. Hence it'll take 34 turns (and 34 visits of the wolf) to get enough bricks. In fact, in the last turn you'll have 8 moves left over to do other stuff.

Now, you have a problem. If you try to build the whole house in one go then whatever you put together gets blown down by the wolf when it visits. You've got 400 bricks and you can't turn them into a house in one turn! As an alternative you can first cement ten bricks together into a line. Each line is a stable component which will resist the power of the wolf. Whatever you do with the other bricks doesn't matter but at least after the next visit you have a stable line of bricks.

After ten more visits of the wolf, you have ten stable lines and so you can use the next visit to create a stable wall by cementing all ten stable lines together (one on top of the other). Repeating this process, after the 78th visit of the wolf you have 4 walls. Before the 79th visit you can cement your four walls together to create the stable component of a house and then be safe from the wolf forever more.
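The turn counting above can be replayed in a few lines (the numbers simply restate the story's arithmetic):

```python
import math

MOVES_PER_TURN = 12
BRICKS = 4 * 10 * 10                    # 4 walls, each 10 x 10 bricks

# Turns spent just making bricks, at twelve moves per turn
brick_turns = math.ceil(BRICKS / MOVES_PER_TURN)      # 34 turns
spare_moves = brick_turns * MOVES_PER_TURN - BRICKS   # 8 moves spare in turn 34

# Assembly, one stable subsystem per turn as in the story
line_turns = 4 * 10   # one turn per stable line of ten bricks, ten lines per wall
wall_turns = 4        # one turn cementing each wall's ten lines together
visits_to_four_walls = brick_turns + line_turns + wall_turns

print(brick_turns, spare_moves, visits_to_four_walls)  # 34 8 78
# before the 79th visit, the four walls are cemented into a stable house
```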

The point of this thought experiment is to show that building through stable subsystems is essential for development of higher order systems. In practice it has an exponential effect. 

An even better scenario for the above would be a BricksRUs service where you can just buy-in pre-built bricks, lines and walls.  Of course, those pre-built components will limit your choice - a line is 10 bricks, a wall is 10 lines and a brick is a specified shape and size etc. But use of it will accelerate your rate of development. I could even have my house built in one turn, before the Wolf even gets there.

The key thing to understand is there exists a trade-off between flexibility of the lower order and agility in building higher order systems. The limitation of choice is essential for rapid development of more complex systems.

In the case of systems like Cloud Foundry then they are deliberately limiting choice by provision of defined subsystems e.g. services to be consumed, development environment to build in etc. This is a really good thing to do and is analogous to the BricksRUs example.  In the case of App Containers there is no limitation nor enforcement of such and there is only flexibility - you can make any brick you want, any shape, any material etc. This is a really bad idea as the permutations here are vast and this is what causes sprawl and limits agility.

I cannot emphasise enough how important an understanding of how things evolve, how things change with evolution, the benefits of componentisation and the necessity of limitation of choice are to navigating a safe path through the turmoil of today. 

PaaS has a bright future when we’re talking about Heroku, GAE, Azure, Cloud Foundry and equivalent systems. I'm also heavily in favour of industrialising apps and any common component where possible, something which CSC's Dan Hushon has recently talked about. Also, I'm very positive about the future of underlying tools and components like Docker.

Unfortunately there's a lot of stuff out there trying to pass itself off as PaaS and a lot of misunderstanding of componentisation. Whilst components like Docker are extremely useful (and deserve to spread), there are those trying to portray it as a key defining characteristic of a PaaS.

Forget it. Docker will become a highly useful but also invisible component of PaaS, and the success of PaaS will depend upon the limitation of choice, certainly not upon the exposure of underlying systems like Docker to end users. It's extremely easy to take a path that will lead you down a route of sprawl. There are some very exceptional edge cases where you will need such flexibility but these are niches. I'm afraid, however, that some businesses will probably get suckered into these dead ends. 

Hence the message of today is ... Caveat Emptor.