Friday, February 21, 2014

If I was Sauron - ODF vs OOXML

Some time ago, I wrote about the rather tortuous path that the UK government has faced over open standards and the likely battle that was going to develop over OpenXML vs ODF. Well, it seems that with MSFT rallying its supporters to respond to the Cabinet Office consultation, that battle is finally upon us.

When I published the post, I also wrote some notes on what I would do if I was a lobbyist in charge and how I would persuade a Government that it should lock itself in further. I called it, rather jokingly, Sauron PR: 'we've got an eye on your future'.

The notes were based upon a set of techniques, from messaging, to a late surge, to the creation of fear and to occupying the middle ground. The last point is of particular interest because one common technique is to attempt to establish two extremes with your viewpoint as the centre ground. Hence, you try to create one extreme of proprietary, pro-IPR and a counter extreme of open source, anti-IPR, and then promote yourself as the more reasoned middle. Of course, the extreme of open source and anti-IPR is almost entirely fictitious, as open source is very much pro certain types of IPR (which is why we have open source licenses). But when it comes to lobbying and perception, you should never let a bit of reality spoil the party, and if you can't find an extreme then you can always manufacture it.

Anyway, the post from MSFT made me smile; it is almost golden. I don't know whether this is part of a crafted campaign but it hit many of the points that I would have raised, hence I thought I'd take some time to go through it.

It opens with ...

"You may not be aware, but the UK government is currently in the process of making important selections about which open standards to mandate the use of in future. These decisions WILL likely impact you; either as a citizen of the UK, a UK business or as a company doing or wanting to do business with government"

First, it's absolutely spot on with these points. Yes, the UK Government is in the process of making selections and yes, it's supposed to have an impact. Governments don't tend to do things unless they intend to have an impact. The whole point of open standards is to enable a more competitive market where users have choice and know that if they switch from one software system to another, things will work. Any work you have to do in the switch is a cost of being locked into one system. As MSFT points out, the UK government has significant lock-in to MSFT, estimated by them at around £500 million. That's an awful lot.

The UK Government obviously would like to have choice - we're talking word documents and spreadsheets after all - and there's no sensible reason for the UK Gov to continue increasing its liability by remaining locked into a format that isn't an open standard.

'An important current proposal relates to sharing and collaborating with government documents. The government proposes to mandate Open Document format (ODF) and exclude the most widely supported and used open standard for document formats, Open XML (OOXML).'

This is pure Machiavellian genius - if I was Sauron PR then I would hire this person straightaway.

When I write a .docx file in Office 2013, the file has one of two possible OpenXML (aka OOXML) formats - transitional and strict. Yes, you heard me right - OpenXML (OOXML) has two forms.

The transitional format is the default for Office 2013 out of the box and is also the version used to write .docx in Office 2010. It's absolutely right to say that transitional OpenXML is a popular format and many documents are written in it (well, almost - and then there's the issue of extensions). However, and this is the neat bit, the ISO approved 'open' standard is strict OpenXML and the transitional format was only ever supposed to be ... wait for it ... transitional.

So, you can say that the Office default for .docx, and hence one of the popular formats, is transitional OpenXML, and that strict OpenXML is an ISO approved open standard. But of course any lobbyist worth their salt would reduce this by dropping the words strict and transitional to arrive at: the popular format is OpenXML, which is an ISO approved open standard.

Did you see the trick?

Whilst OpenXML is most definitely a popular format and whilst OpenXML is an ISO approved open standard, the popular OpenXML (transitional) is not the open standard but the 'very format the global community rejected in September 2007, and subsequently marked as not for use in new documents', whilst the ISO approved open standard of OpenXML (strict) is not the most popular - in fact, to save a document in .docx (strict) you have to navigate through the save options in Office 2013.
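As an aside for the technically curious, you can actually check which variant a given .docx uses: a .docx is just a zip archive, and strict and transitional documents declare different XML namespaces in word/document.xml. Here's a minimal sketch in Python, assuming the ISO/IEC 29500 namespace URIs below are the right ones to test against (worth verifying against the spec before relying on this):

```python
# Rough sketch: tell the two OOXML variants apart by inspecting the main
# wordprocessingml namespace of word/document.xml inside a .docx archive.
# The namespace URIs are my assumption based on ISO/IEC 29500.
import zipfile
import xml.etree.ElementTree as ET

TRANSITIONAL_NS = "http://schemas.openxmlformats.org/wordprocessingml/2006/main"
STRICT_NS = "http://purl.oclc.org/ooxml/wordprocessingml/main"

def classify_namespace(ns: str) -> str:
    """Map the root namespace of document.xml to a variant name."""
    if ns == STRICT_NS:
        return "strict"
    if ns == TRANSITIONAL_NS:
        return "transitional"
    return "unknown"

def docx_variant(path: str) -> str:
    """Open a .docx (a zip), parse word/document.xml and report the variant."""
    with zipfile.ZipFile(path) as zf:
        root = ET.fromstring(zf.read("word/document.xml"))
    # ElementTree tags look like '{namespace}document'; extract the namespace.
    ns = root.tag[1:].split("}", 1)[0] if root.tag.startswith("{") else ""
    return classify_namespace(ns)
```

If the assumptions hold, running `docx_variant(...)` on a default Office 2013 save should report transitional, which is rather the point of the section above.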

It's a bit like the old gag: you're both witty and original; it's just a shame that the original bits aren't witty and the witty bits aren't original. When it comes to perception though, this is pure PR genius. Of course, I'm ignoring the issue of extensions, how the standard itself is being modified and the whole question of whether it is an open standard.

'We believe this will cause problems for citizens and businesses who use office suites which don’t support ODF, including many people who do not use a recent version of Microsoft Office or, for example, Pages on iOS and even Google Docs. Microsoft Office has supported ODF since 2007, but adoption of OOXML has been more widespread amongst other products than ODF'

Ok, first of all, industry has to adapt to de facto standards and there is no doubt that the not 'open' transitional OpenXML format for .docx is fairly pervasive. However, proprietary formats create lock-in (as MSFT pointed out, such lock-in will cost UK Gov around £500 million) and the Government consultation isn't about increasing lock-in but about adopting an open standard in order to create a more competitive market. Fortunately, an open standard such as ODF exists, and Microsoft Office and many others do support it.

'This move has the potential to impact businesses selling to government, who may be forced to comply. It also sets a worrying precedent because government is, in effect, refusing to support another internationally recognised open standard and may do so for other similar popular standards in the future, potentially impacting anyone who wishes to sell to Government.'

I love this bit, it's pure fear and fantasy. 

By conflating strict OpenXML and transitional OpenXML to come up with the risible message that the popular format is OpenXML, which is an ISO approved open standard, you can of course portray the actions of a Government which decides not to choose the popular and open choice as a source of concern. However, fortunately the Government are no fools and are likely to know full well that transitional OpenXML (both the popular format and the default for Office 2013 and Office 2010) is not an open standard and that its continued use will only increase lock-in.

If OpenXML is allowed to stand as an open standard then, purely because of interactions with others (Office 2010 only writes transitional but reads strict, whilst Office 2013 defaults to transitional), transitional OpenXML (the not 'open' standard) for .docx will continue to grow. MSFT has had plenty of time to get rid of transitional OpenXML and it has chosen not to - for obvious reasons. That won't stop endless copy drones repeating the message at Government though.

'We believe very strongly that the current proposal is likely to increase costs'

Well, adoption of the open standard strict OpenXML will require everyone to use Office 2013 and also to make sure they're using a compatible operating system - Windows 7 or 8. So, it's likely that choosing it will incur a lot of costs.

Naturally, because a lot of documents are in non open standard formats such as the transitional OpenXML format of .docx, there is a high degree of lock-in and the move towards an open standard will have some impact. This liability created by proprietary formats will however only increase if we continue to use them.

Fortunately there are plenty of solutions. There are many alternative systems, such as LibreOffice, but Microsoft Office 2013 is also capable of writing and reading ODF. So, you could quite easily adopt ODF as the file format and allow people to use Microsoft Office or any other ODF system they wish - which, after all, is the point of open standards. It's the proprietary formats that have created lock-in costs, which is why this liability needs to be managed, and you don't manage it by continuing to use them.

'To be very clear, we are not calling for the government to drop its proposal to use ODF. Nor are we calling for it to use only Open XML. What we are saying is that the government include BOTH Open XML and ODF. To do so offers it most flexibility, the widest compatibility and the lowest Total Cost of Ownership for everyone – government, businesses and citizens alike.'

The above is a gem. This is what I call the reasoned middle ploy. Before I explain it I thought I'd better explain my personal position. 

Most of my working life deals with competition. I find that open standards are extremely useful in ensuring you have a competitive market. I view open source as an excellent means of driving an activity to a more industrialised form by reducing barriers to entry and encouraging collaboration. I view proprietary technology to have strengths in areas such as differentials. For me, open vs proprietary is always the wrong question as both have natural strengths and weaknesses.

For a mature activity (such as word processing), ideally you want to create a competitive market where multiple proprietary and open source solutions can compete. The adoption of an open standard is all about reducing lock-in and encouraging a competitive market; it's not about choosing one technology over another. I personally use Microsoft Office and I view the product as more than good enough to compete in a freely competitive market based upon an open standard document format like ODF.

Unfortunately OpenXML is not that standard, because its most popular form, transitional OpenXML (the default), is not open. The best we have for an open standard is ODF, which has also been adopted by Portugal and organisations like NATO.

However, if I was the Sauron of lobbying then I'd be promoting the image of extremes (proprietary vs open) with our choice as the reasoned middle ground. Being evil (as Sauron is), I'd even get some of my cohorts to create a fictitious extreme. In this case you can't do that, because the Government has been clear that its focus is on competitive markets and document formats, not on technology solutions (open vs proprietary).

Hence you're limited to promoting choice, i.e. you should be free to choose any 'open standard' you wish, even when the 'open standard' has two versions of which the popular one isn't open. That's the problem with OpenXML. It is unlikely to reduce lock-in and enable that competitive market because it contains the transitional OpenXML format (the not open, default version), which is likely to dominate.

Microsoft should have removed transitional OpenXML but it chose not to. Its only option is to persuade people to vote for it and gloss over the issue of strict vs transitional. This means it has to adopt the tone of the reasonable middle ground even when that isn't its position. Which leads me to my final comment on the post.

'please do take a few minutes to have your voice heard and respond before the consultation closes on 26th February 2014.'

This is what I called the late surge. As Sauron, I would prepare lots of groundwork beforehand, set up a media storm and then whip up a frenzy. Lobbyists tend to be a fairly smart and devious bunch (it comes with the territory). If your case is weak (and you know this) then with careful messaging, some fear, a bit of the reasoned middle and a late surge you can often sway the day.

Does it surprise me that MSFT has left it to the last few days to rally the troops? Not really but it could be coincidence. Will they win? Will they convince UK Government to abandon its desire to see a competitive market formed based upon open standards which help reduce lock-in and allow for truly competitive products both open source and proprietary? 

Well, that really does depend upon you. It's time for you to do your part and take a bit of your time and respond to the consultation. You've got until the 26th February which isn't long.

As for Microsoft. Well, I happen to use Microsoft Office products particularly Microsoft Excel and so I hope they just adopt ODF and compete on better products. They're a great company. They've a new CEO - Satya Nadella - whom I once spent some time talking with over the whole issue of strategic play, competition and evolution (see below). He's a really smart cookie, a decent chap and I've got high hopes for them.

I do hope MSFT embraces a more positive approach to standards. Microsoft is more than good enough to compete using ODF. I wish they would just compete, because they do a ton of cool stuff from Kinect to the much anticipated and somewhat magical IllumiRoom. I'm hoping for more magic in the future.

i.e. a bit more Gandalf and a lot less Sauron.

Sunday, February 16, 2014

The danger of mega trends ...

I recently read this McKinsey post on digital mega trends by Paul Willmott. Ok, to be honest, I rarely read McKinsey reports or posts because they're not often relevant to my particular line of work, but occasionally I do as general background (same with Gartner). Normally I wouldn't respond but in this case I feel the need to, because the post is potentially dangerous despite its obvious attempt to be helpful.

The premise of the work is that 'Large digital players (e.g., Amazon, Alibaba) can create cost, talent and data advantages, which in turn can be used to price competitively, innovate rapidly and acquire further market share'. Whilst that's perfectly true, it goes on to miss how this occurs, and therein lie the dangers.

To explain why, I'm going to need to cover some fairly basic stuff, and for those of you who've read my blog extensively, this is the point to jump to the conclusion.

The Basics

Point 1) Activities, practices and data don't just diffuse; they also evolve through a common pathway (see figure 1), due to supply and demand competition causing multiple waves of diffusing and ever improving examples. As they evolve, their properties change from uncharted to industrialised (see figure 2).

Figure 1 - Evolution (in this case applied to activities).

Figure 2 - Changing properties

Point 2) Organisations consist of many value chains built from multiple components, whether activities, practices or data. Those value chains constantly evolve, but you can map out an organisation by examining value chain vs evolution at a point in time (see figure 3). Such maps are effective in communication and you can use them not only to determine how you should treat something at a point in time but also for strategic gameplay and learning economic patterns.

Figure 3 - Value chain vs Evolution map for HS2
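To make the idea concrete, here's a toy sketch of a map as a data structure: each component gets a position on the two axes (visibility to the user, and stage of evolution), and the stage then suggests how you might treat that component. The components, numbers and treatment choices below are invented purely for illustration and are not taken from the HS2 map:

```python
# Toy sketch of a value chain vs evolution map. Each component has a
# visibility (how close it sits to the user, 1.0 = most visible) and a
# stage of evolution. All values here are made up for illustration.
STAGES = ["genesis", "custom", "product", "commodity"]

def treatment(stage: str) -> str:
    """One possible mapping of evolution stage to treatment: experiment
    with the novel, buy the productised, consume the industrialised as
    utility. This mapping is an assumption, not doctrine."""
    return ["experiment in-house", "build in-house",
            "buy off-the-shelf", "consume as utility"][STAGES.index(stage)]

value_chain = [
    {"component": "online booking", "visibility": 0.9, "stage": "product"},
    {"component": "custom routing", "visibility": 0.5, "stage": "custom"},
    {"component": "compute",        "visibility": 0.1, "stage": "commodity"},
]

# Walk the chain from the user downwards and note the suggested treatment.
for c in sorted(value_chain, key=lambda c: -c["visibility"]):
    print(f'{c["component"]}: {c["stage"]} -> {treatment(c["stage"])}')
```

The value of the map comes not from the data structure itself but from comparing it against competitors' maps, as the following points discuss.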

Point 3) There are many core economic patterns. One of these is componentisation: how the evolution of a component not only increases efficiency but can also enable higher order systems to appear - see figure 4. This pattern (along with many others, such as economic cycles, inertia, the relative importance of strategy vs culture, creative destruction, co-evolution of practice and how new organisational forms appear) occurs throughout history.

Figure 4 - Evolution begets Genesis

Ok, so our organisation (and competitors) consist of value chains built from evolving components (activities, practices and data) and as they evolve then not only do their properties change but they can enable new higher order systems (and new value chains) which become new sources of value but are highly uncertain by nature.

Point 4) The interplay of these forces creates an issue in competition called the Salaman and Storey Innovation Paradox. As components evolve they become more efficient and you have to adapt to this in order to compete effectively today, i.e. if you're a car manufacturer then you have to use common components like standard nuts and bolts, headlights, wheels, airbags and modular PCBs (exploiting the value chains of other providers in this space) rather than building your own from raw ingredients. However, at the same time as you have to treat certain components efficiently in order to be cost effective and survive today, you also need to differentiate with the novel in order to survive tomorrow, by creating those future sources of perceived value e.g. self drive, automatic parking etc. Naturally, past novel items - electric windows, seat belts, airbags - have become today's standard components.

This combination of efficient treatment and differentiation requires polar opposite styles of management in the same organisation, hence the paradox.

”Survival requires efficient exploitation of current competencies and ‘coherence, coordination and stability’; whereas innovation requires discovery and development of new competencies and this requires the loosening and replacement of these erstwhile virtues”

So whenever you examine your value chain (or chains) which define your business then you have to compare with competitors and adapt to both more efficient provision and the creation of the novel (see figure 5).

Figure 5 - Efficiency and Differentiation.

From the above, for a value chain within an industry (consisting of components A to F), a company is compared to its competitors. The competitors have a differential B which the company lacks, and also more efficient provision of C. The company has an efficiency benefit in D.

The company needs to consider both an efficiency drive around C and examine the inclusion of differential B into its offerings in order to remain competitive.

Point 5) The map can be manipulated. When comparing a company with its competitors, you can accelerate the rate of evolution of a component through open means or decelerate the rate by limiting competition (e.g. patents, regulation, acquisition, use of constraints). You can also take a deliberate position and exploit competitors' inertia to change. In the above example, you might choose to drive component C to a more utility service (C'), exploiting competitors' inertia to change (due to existing models and practices) and enabling more rapid development of higher order systems built on this component (see figure 6).

Figure 6 - Changing an Environment

Whenever you compare your value chains with competitors, you often find multiple opportunities and points of 'Where' you can attack, whether through adoption of novel activities, removal of inefficiencies or deliberate manipulation of the environment and exploitation of competitors' constraints and inertia. Understanding 'Where' you can attack is essential for determining 'Why', as why is a relative statement (why here over there). Once you have determined where you can attack, and from this derived why you would choose one course of action over another, the how, what and when become relatively trivial exercises.

Point 6) Predictability of what to do varies, which makes management complex. One of the major issues with scenario planning is predictability. Certain changes (such as the evolution of a component) are highly predictable in terms of what is going to happen, i.e. you can say the shift from product to utility involves disruption of past industries that are stuck behind inertia barriers, co-evolution of practice if the evolving component is an activity, reduction of potential barriers to entry in secondary industries and rapid increases in higher order systems and un-modelled data.

This is how, back in 2005, many of us were able to clearly and precisely predict the changes of cloud computing, the growth of devops and big data, the rapid increase in novel systems built upon these and the disruption of past h/w vendors. More importantly, many of us were able to game this to our favour.

Unfortunately, whilst what is going to happen is highly predictable, when this change will happen depends upon competitors' actions and upon whether the component is suitable for provision in the next stage (i.e. ubiquitous and well defined enough), whether the technology exists, whether the concept exists and whether a prevailing attitude of dissatisfaction with the current mechanism of provision exists. Fortunately there are a range of weak signals you can use to determine this.

When it comes to the genesis of novel components, they are by their very nature highly uncertain and unpredictable. There is an inverse relationship between future differential value and certainty, which means we always have to gamble. When examining a value chain you have to bear in mind that predictability varies with the state of evolution; I've summarised this in figure 7.

Figure 7 - Predictability and Evolution.

This leads to a question: should we be a fast follower or a first mover to change?

Point 7) Should I be a first mover or fast follower? Being a fast follower to the genesis of a novel component has certain strong advantages in allowing others to expend research and development on the uncertain and then cherry picking only that which is starting to evolve and become successful. But should I be a fast follower to a component that is evolving from product to utility?

In direct contrast to novel components, there is a strong advantage in being the first mover to shift from product to utility due to componentisation effects. This is exemplified by a model known as ILC (innovate - leverage - commoditise) or what I call the 'Wardley Thompson Technique'.

By being a first mover to industrialise a component to a utility, and assuming you allow for public consumption of it, you enable other companies to build higher order systems on top of it. Those higher order systems will contain many novel components, and the more efficiently you provide the utility service, the more you will encourage others to build on it by reducing the cost of failure and experimentation.

These other companies are your ecosystem. Fortunately for you, as examples of those novel higher order systems start to diffuse and new, improved versions appear, you will be able to detect this through consumption of your utility service. This enables you to get others to innovate for you (i.e. deal with the uncertain) and then leverage consumption data in your ecosystem to spot success. Once success is spotted, you can move to industrialise the new components to a utility service.

For example, if you provided utility services for infrastructure (such as Amazon EC2), then as others built novel big data systems on top of this you could leverage the ecosystem to spot success and commoditise to a utility service (such as Elastic MapReduce). Whether Amazon uses such a model we don't know, but the model has some profound impacts which are detectable.

Under the model, your rate of apparent innovation, customer focus, efficiency, ability to maximise future opportunity and stability of revenue all become dependent upon the size of your ecosystem rather than the physical size of your company. Effective exploitation of the model, which requires extensive data analysis of your ecosystem, speed of data feeds (i.e. utility consumption data is far more effective than marketing surveys) and an ability to act, means you can create a company which continuously and simultaneously appears to grow in terms of innovation, customer focus and efficiency at a faster rate than its physical size. Of course, running the model does mean you will occasionally feed upon (or harvest) your ecosystem, either through acquisition or copying. See figure 8.

Figure 8 - Innovate, Leverage and Commoditise
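A toy sketch of the 'leverage' step may help: if you run the utility, consumption data tells you which higher order components your ecosystem is building on top of you and which of those are growing quickly, and hence which are candidates to industrialise next. The component names, figures and growth threshold below are entirely invented for illustration:

```python
# Toy sketch of leveraging ecosystem consumption data in the ILC model:
# components whose utility consumption is growing rapidly between two
# periods are flagged as candidates for commoditisation. The data and
# the 2x growth threshold are made up for illustration.
def commoditisation_candidates(usage_prev, usage_now, growth_threshold=2.0):
    """Return components whose consumption grew past the threshold."""
    candidates = []
    for component, now in usage_now.items():
        prev = usage_prev.get(component, 0)
        if prev and now / prev >= growth_threshold:
            candidates.append(component)
    return sorted(candidates)

usage_prev = {"map-reduce jobs": 100, "queueing": 400, "image resize": 50}
usage_now  = {"map-reduce jobs": 350, "queueing": 450, "image resize": 60}

# Map-reduce consumption grew 3.5x; the others grew far more slowly.
print(commoditisation_candidates(usage_prev, usage_now))
```

The point of the sketch is only that consumption data gives the utility provider a far faster and cheaper signal of what is succeeding than any survey could.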

As a rule of thumb you always want to be a first mover to industrialise but a fast follower to the uncharted (i.e. genesis of the novel and uncertain).

Point 9) - Gameplay is not uniform. Despite many talking about the importance of strategy, the level of gameplay and situational awareness varies wildly between companies. In an examination back in 2011 of 160 different companies, the Players (which demonstrated high levels of situational awareness, strategic gameplay and action) and, to a lesser extent, the Thinkers (which demonstrated high levels of situational awareness and gameplay but were less prone to action) significantly outperformed the Chancers and Believers (both of which show low levels of gameplay) in terms of market cap growth - see figure 9.

Figure 9 - Strategic Gameplay vs the Use of Openness to compete (action)

More details on this can be found here.

Point 10) - This is the tip of the iceberg. I've spent the past decade researching this field and either using the patterns in anger within companies or teaching members of the LEF (a private research group) to manipulate their environment. There's a whole range of highly predictable patterns from economic cycles (e.g. peace, war and wonder) to how new organisations evolve, along with a mass of different gameplay (from sweat and dump to tower and moat) and common economic effects from punctuated equilibriums to the Red Queen. There are also some very good game players out there, along with a number of companies who have shockingly poor situational awareness at the executive layer.

Even basic questions like the different forms of inertia (see figure 10) to 'culture eats strategy for breakfast' turn out to be complex and often vary with evolution (see figure 11).

Figure 10 - Different forms of inertia

Figure 11 - Culture vs Strategy


We operate in a highly complex environment where situational awareness is critical. Our companies are composed of value chains consisting of masses of evolving components (from activities to data). The means by which we manage, how we govern and even our strategic gameplay vary with how evolved those components are. Even the importance of strategic gameplay relative to culture varies with how evolved the components are.

Understanding your position, the position of your competitors and where you can attack is critical in today's economic climate, but the reality is that many companies appear to have poor situational awareness, which is why these Chancers and their industries are quickly overwhelmed by more skilful Players.

Situational awareness requires an understanding of your environment, and this is not something generic strategy advice can give you; it is something you have to acquire through an understanding of your industry. You have to learn to play the game; it is a skill, like playing a game of chess.

The McKinsey post is a generic list of useful stuff e.g. digital firms have a cost advantage, large digital firms have better access to talent, digital leaders are amassing vast quantities of data, digital firms require fewer people to operate and consumers tend to use fewer brands online. 

Yes, it is absolutely correct that there is a difference between traditional and next generation firms (see figure 12).

Figure 12 - Delta between Traditional and Next Generation (2011)

However, the problem with the post (and the same problem with the above list that I produced) is that it might tempt companies to go - 'we need to be more like these digital firms', 'we need to be more like Silicon Valley'.

Don't get me wrong, I'm not having a dig at Willmott here, as the post sets out a reasonable set of changes. The issue is that companies might just adopt it, and here's the rub. By blindly attempting to emulate 'Amazon's example' without good situational awareness, you're just as likely through such actions to encourage evolution of components in your value chain, undermine barriers to entry, reduce the constraints protecting your industry, make yourself a more attractive target for a player to attack by laying down the groundwork for a utility model, and potentially hasten your decline. Playing a game of chess without looking at the board and just copying others' actions is a disaster in the making - it's like a general bombarding a hill because some report says that '67% of successful generals bombard hills'.

You need to think very carefully about your environment before embarking on such actions. There are some extremely skilful players out there, it's easy to get massacred with poor situational awareness - be warned.

Friday, February 14, 2014

Does Maturity Matter?

In 2009, the designer Thomas Thwaites attempted to build a common household toaster from scratch. Beginning with mining the raw materials, he aimed to recreate a product that is built from common and highly standardised components and sold for a few pounds in the local supermarket. This ambitious project required “copper, to make the pins of the electric plug, the cord, and internal wires. Iron to make the steel grilling apparatus, and the spring to pop up the toast. Nickel to make the heating element. Mica (a mineral a bit like slate) around which the heating element is wound and of course plastic for the plug and cord insulation, and for the all important sleek looking casing”.

After nine months and at a cost of several thousand pounds, Thomas finally managed to create a sort of toaster. However, along the journey he had been forced to resort to using all sorts of other complex devices – from microwaves to leaf blowers – in order to achieve his goal.

Our society, the wondrous technologies that surround us and those that we create are all dependent upon the provision of standard components. Whenever you attempt to remove this and go back to first principles, such as building your own nuts and bolts for a home made toaster, the wheel of progress grinds very slowly and becomes very costly.

But nuts and bolts weren't always standard components. The invention of the first screw thread, around 400 BC, is often credited to Archytas of Tarentum (428 BC - 350 BC). Early versions of this and the subsequent nut and bolt were custom made by craftsmen, with each nut fitting one bolt and no other.

In the 1800s, the introduction of Maudslay’s Screw Cutting lathe enabled repeated production of uniform nuts and bolts with the same threads where one nut fitted many bolts.  The artisan skill of building the perfect nut and bolt was replaced by more mass produced and interchangeable components.  Whilst those artisans might have lamented the loss of their industry, those humble components also enabled the rapid creation of more complex machinery and new industry.

Volume production of uniform mechanical components enabled faster building of ships, guns and other forms of machinery.  It also allowed for the introduction of novel manufacturing systems that took advantage of these components such as the Portsmouth System (which later became the American System).  Without this change of the artisan nut and bolt to more industrialised and mass produced forms then we would all be following the example of Thomas Thwaites and toasters would be a luxury few could afford.

However, the progression of the nut and bolt wasn't smooth. Whilst they could be manufactured in volume with interchangeable components, the lack of any agreed standard thwarted general interchangeability. For example, the railways of Great Britain all used different screw threads, and whilst some companies' in-house standards spread within their industries, there was no agreed standard.

In 1841, Joseph Whitworth collected a large number of samples from British manufacturers and proposed a set of standards including the angle of thread and the number of threads per inch. This was rapidly adopted in industry and became known as the British Standard Whitworth. But how much of an effect could this make? The following quotation from an obituary of Joseph Whitworth in the Times, 24 January 1887, should be fairly illuminating.

“The Crimean War began, and Sir Charles Napier demanded of the Admiralty 120 gunboats, each with engines of 60 horsepower, for the campaign of 1855 in the Baltic. There were just ninety days in which to meet this requisition, and, short as the time was, the building of the gunboats presented no difficulty. It was otherwise however with the engines, and the Admiralty were in despair. Suddenly, by a flash of the mechanical genius which was inherent in him, the late Mr John Penn solved the difficulty, and solved it quite easily. He had a pair of engines on hand of the exact size. He took them to pieces and he distributed the parts among the best machine shops in the country, telling each to make ninety sets exactly in all respects to the sample. The orders were executed with unfailing regularity, and he actually completed ninety sets of engines of 60 horsepower in ninety days – a feat which made the great Continental Powers stare with wonder, and which was possible only because the Whitworth standards of measurement and of accuracy and finish were by that time thoroughly recognised and established throughout the country.”

The standardisation of basic mechanical components had a profound effect in enabling more complex systems such as ships to be built. But all those components had originated as something novel, new, different and without standards.  We live in a world where there’s a constant flow of change, where the novel and different becomes commonplace, standard and mature. These more industrialised components then enable novel systems of greater complexity and the cycle repeats.

In the Theory of Hierarchy[1], Herbert Simon showed how the creation of a system is actually dependent upon the organisation of its subsystems.  As an activity evolves and becomes provided as ever more standardised components, it not only allows for efficiency in use but also increasing speed of implementation, rapid change, diversity and agility of systems that are built upon it.

In other words, it’s faster and cheaper to build a house with more commodity components such as bricks, wooden planks and plastic pipes than it is to start from first principles with a clay pit, a clump of trees and an oil well.  Furthermore the diversity and volume of different housing structures is a consequence of these standard components. This is the same with electronics and every other field you care to look at.  It’s also the same with nature.

This doesn't mean that change stops with the standard components. Take, for example, brick making, electricity provision or the manufacture of windows: there is still significant improvement hidden behind the "standard". However, the "standard" acts as an abstraction layer. The float glass method introduced by Pilkington changed how windows were produced but not what windows were. Equally, just because my electricity supplier has introduced new sources of power generation (e.g. wind turbine, geothermal) doesn't mean I wake up one morning to find that we're moving from 240V 50Hz to something else.

If the constant operational improvements were not abstracted behind the standard then all dependent higher order systems would need to continuously change. For example, all consumer electronics would need to continuously change as operational improvements were made in electricity supply. The entire system would either collapse in a mess or at the very least technological progress would be hampered. Hence standard interfaces, once they've emerged, rarely change. There are exceptions to this but they usually involve significant upheaval and often Government initiatives, e.g. changing electricity standards, decimalisation and the changing of currency, or even simply switching from analogue to digital transmission of TV.

The importance of separation by the introduction of an interface is equally relevant with evolution in biology.  The rapid growth and diversity of life is a function of the underlying standard building blocks that have evolved to allow higher order systems.  If there weren’t underlying components from DNA to RNA messaging to transcription to translation to basic cell structures within more complex organisms then you and I would never have evolved in the time frame.  The interfaces provide a separation from the evolution of higher orders to evolutionary improvements of lower orders and are critical to progress overall.

So let us now consider a business. An organisation consists of a mass of activities, practices and data but those don’t stand still as new things are constantly introduced and diffuse – someone invents a telephone, a computer, a fax machine or the nut and bolt.  These new objects not only diffuse but through waves of ever improving examples the activity they represent seems to mature – the custom built nut and bolt becomes the British Standard Whitworth.  It’s this maturation or evolution to a more industrialised form that enables profound change in building more complex systems. 

The humble nut and bolt enabled machinery like generators that in turn enabled standardised electricity supply and this in turn enabled lighting, radio and computing. Hence, when you consider a business it not only consists of a mass of activities, practices and data but all of this is evolving to a more industrialised form and as it does so it enables new activities, practices and data. 

This leads to another question which has a critical importance in understanding change.  What matters more in our society, the invention of something new or the provision of something in a more industrialised form?

I'll examine this in a later post.

[1] Herbert Simon, The Architecture of Complexity, Proceedings of the American Philosophical Society, Vol 106, 1962

Start of the series

The start of a journey ...

I'm currently working on a range of techniques to identify and respond to future changes in an oncoming 'Age of Wonder'. The title is somewhat of a conceit as there is nothing unique about the changes that are occurring. In fact, throughout history we've experienced many 'Ages of Wonder' and a more apt term would therefore be 'stage', because it is a repeating pattern caused by general economic forces.

Behind the work is a question and like all good questions it starts a journey of discovery. The question was "How do we navigate through a future of change?"

In order to properly explore this subject, I need to lay out some groundwork on the general forces that drive our society. To begin with, we need to ask ourselves the question "What is change?"

In the 1962 book Diffusion of Innovations[1], Everett Rogers defined a model for how an innovation is adopted over time among the members of a social system. In this case, an innovation is defined as an idea, practice or object that is perceived as new. The idea of diffusion itself wasn't new, having been first pioneered by Gabriel Tarde in 1903, but Rogers developed this work, demonstrating how most changes showed a common S-curve shape with adoption spreading through common groups (from innovators to early adopters to early majority to late majority to laggards). The only significant difference between innovations was the slope of the curve (see figure 1).

Figure 1 - A diffusion curve.

This pattern occurs both for the diffusion of an object (e.g. a specific product example of a phone such as Ericsson's Bakelite telephone introduced in the 1930s) and the diffusion of the activity that the object relates to (e.g. use of a phone). Whilst incredibly useful, there are a number of key considerations with the idea of diffusion which make it problematic for exploring the future. These are:

1)  The rate of diffusion is not constant: comparisons over time provide a wide range of adoption curves and a general observation that the diffusion of innovations is accelerating.

2)  Not all innovation spreads: even where an innovation has utility (usefulness), a number of factors can influence its adoption. As Geoffrey Moore noted[2] there is a chasm between the early adopters of an innovation and the early majority.

3) Diffusion is not continuous: highlighted by Christensen’s work[3] on disruptive innovation, the diffusion of one innovation can be disrupted by the introduction of a new technology that offers different performance attributes from those established in existing value networks.

4) Diffusion of an activity consists of multiple waves: innovations tend to spread through multiple waves of improved objects such as products. In the early stages of a technological change, this rate of improvement tends to be slow and then accelerates until reaching a more mature and slow improving stage[4]. One consequence of the diffusion and maturing of a technological innovation is that increased information about the technology reduces uncertainty[5] about the change. Each improved version increasingly adds definition, eventually providing a system that can be considered feature complete, mature and generally well understood.
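The S-curve shape that Rogers observed is commonly approximated by a logistic function. The sketch below is purely illustrative (the parameter names `k` and `t_mid` are my own, not Rogers'): it shows how the same curve shape accommodates both fast- and slow-diffusing innovations simply by varying the slope, which is point 1 above.

```python
import math

def adoption(t, k=1.0, t_mid=0.0):
    """Fraction of the population that has adopted at time t.

    Cumulative adoption is commonly approximated by a logistic
    (S-shaped) curve: k sets the slope, t_mid the point at which
    half the population has adopted. Illustrative only.
    """
    return 1.0 / (1.0 + math.exp(-k * (t - t_mid)))

# A larger k models a faster-diffusing innovation; the overall
# S-shape of the curve stays the same.
slow = [adoption(t, k=0.5) for t in range(-10, 11)]
fast = [adoption(t, k=2.0) for t in range(-10, 11)]

assert abs(adoption(0) - 0.5) < 1e-9   # half adopted at the midpoint
assert fast[-1] > slow[-1] > 0.5       # both saturate towards 1
```

Note that the model describes the shape of diffusion after the fact; as discussed below, it tells us nothing about how long any particular diffusion will take.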

Hence whilst diffusion is a powerful concept, it unfortunately doesn't provide us with a means of understanding future change i.e. we cannot say how something will mature, we can only say that multiple waves of diffusion over an unspecified length of time are involved in something maturing.

Of course, this leads to our next question "Does maturity matter?"

I'll examine this in the next post.

[1] Everett Rogers, Diffusion of Innovations, Free Press, 1962 (4th Edition, 1995)
[2] Geoffrey A. Moore, Crossing the Chasm, Harper, 1991.
[3] Clayton M. Christensen, The innovator's dilemma. Harvard Business Press, 1997
[4] D.Sahal, Patterns of Technology Innovation, AddisonWesley, 1981
[5] Rogers and Kincaid, Communication Networks: Toward a New Paradigm for Research, Free Press, 1981

Thursday, February 13, 2014

The slow but overwhelming flood of progress

Back in 2004 - 2005, development at Fotango (a company I used to run) had improved significantly through the efforts of Artur Bergman (now CEO of Fastly), James Duncan (now with UK Gov) and a host of others.

1) our infrastructure was provided by a virtualised environment known as the Borg. A mass of racks providing standardised virtual machines (i.e. 'commodity' infrastructure provided with Xen), all controlled by the Borg Queen, whose job it was to create virtual machines on demand, configure them (we happened to use CFEngine), install them and monitor the applications and estate health.

2) we'd been using test driven development for a considerable amount of time and had built extensive test scripts for our applications. 

3) the process of delivering applications to live was relatively simple. A developer pushed an application from the code repository (we used Subversion; Chia-liang Kao used to work for us) to production, the test scripts confirmed the system was in an acceptable state and the Borg Queen took over. Part of the configuration files determined what other services were consumed and the destination of the system, and it managed graceful replacement and rollback if necessary. We had nightly rebuilds and validation of the entire estate.

4) the organisation itself was broken into three core groups which had started with IT. We had development (pioneers) who created the novel. Frameworks (settlers) who identified common patterns for provision of new web services. Systems (town planners) who managed the estate in terms of core systems and services (i.e. Borg, testing agents, monitoring agents etc).

5) we recruited from around the world and mined open source communities (especially the Perl world) for talent. LPM used to even call us the Borg as we assimilated so many ... well, it's your own fault for being so good.

6) the company had extensively used web services for many years, most of the systems ran on web services and we'd even started experimenting with the idea of providing a more extensive list of public web services. Later on in 2005 this became our utility platform service known as Zimki, one of the first public PaaS environments, which provided a server side Javascript development environment with common services (a NoSQL-like object store etc), migration between different Zimki installations, automagic translation of functions to web services and even billing information down to the function. The closest you would come to Zimki today would be a combination of CloudFoundry and the Node.js buildpack. The vision behind it was all 'Pre-shaved Yaks' and there is a long list of people to thank for its creation, especially Tom Insam (who later built Dopplr and Lanyrd) and Mark Fowler.
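The delivery flow in point 3 - push, run the test scripts, gracefully replace, roll back if necessary - can be sketched in outline. This is a minimal, purely hypothetical sketch; none of the names below come from the actual Fotango system:

```python
# Hypothetical sketch of a push -> test -> replace -> rollback flow.
# Not Fotango's code; just an illustration of the pattern described above.

def deploy(new_version, current_version, run_tests, activate):
    """Attempt to put new_version live; return whichever version ends up live."""
    if not run_tests(new_version):
        # Test scripts gate the release before anything goes live.
        return current_version
    try:
        activate(new_version)            # graceful replacement
        return new_version
    except Exception:
        activate(current_version)        # roll back to the known-good state
        return current_version

# Example: a release whose tests fail never reaches production.
live = deploy("v2", "v1", run_tests=lambda v: False, activate=lambda v: None)
assert live == "v1"
```

The point of the pattern is that the known-good version is always retained until the new one has both passed its tests and activated cleanly, so failure at any step leaves the estate in a working state.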

But even in 2005 we had our own legacy IT, a lot of horrors that remained from the past such as an ill fated SAN effort and a tortuously complex past platform. We'd also gone through some painful lessons, e.g. agile everywhere. However, we weren't alone in this and we continued to improve, implementing by end of 2006 a more open working environment, hackdays (every other Friday), agile design (paper prototyping), BYOD (in fact we had a store cupboard of help-yourself Macs in case you needed one), events (opening up our Old Street offices to the local tech community ... this was back in the days before Old Street became the technical powerhouse it is now) and a host of other techniques (including mapping).

For us, all the above was just normal, it was how stuff was done. So why do I mention this? 

Well, today all the rage is about continuous deployment, design for failure, cloud, server side javascript, PaaS, nosql, open source, BYOD and agile techniques. Whilst these fields have progressed extensively in the last decade, it always surprises me to hear how many companies are so far behind the game. It does appear that you can measure the delta between the leading edge and the laggards in decades of technology. It's like Enterprise 2.0, a term coined in 2006 by Andrew McAfee to describe a set of technology changes that Euan Semple was busily implementing in the BBC before 2001 and that some companies are only just getting started with today.

However, the really fascinating part of this is how the changes tend to be exponential and create a punctuated equilibrium with the past. So what starts as a trickle becomes a flood. That's the thing about cloud, devops, big data, enterprise 2.0 and all these related topics. Today is not about the beginning of a subject but the overwhelming flood that you have to adapt to. Alas for some companies it's already too late.

Out of interest, it was the work we did back then that enabled me to develop the models of how organisations evolve and later test that this was happening. I mention that because Matt Asay's article on the Future of DIY IT is spot on. That flood is upon us and 'traditional IT is dead. Not just a little bit dead. Dead dead.'

Ok, technically it won't completely disappear (things rarely do) as it'll whimper on in some niches for quite a time. There are still niches for modern-day swordsmiths, but they're niches. Have no illusions though, the flood is here and like it or not, you're going to have to change.

Tuesday, February 11, 2014

This is the age of disruption ... oh, give it a rest.

What we can demonstrate

1) Companies consist of many value chains comprising components (activities, practices and data) that evolve due to competition.

2) As those components evolve, their properties change from uncharted to industrialised.

3) As those components evolve we develop inertia to change.

4) The interplay of inertia and competition creates three economic states for any component - peace, war and wonder.

5) When those components have a broad effect (i.e. are part of many value chains) then those changes are seen at a macro economic scale. We call these ages or more appropriately Kondratiev waves.

6) Every age begins with commoditisation of a pre-existing act, disruption of past industries stuck behind inertia barriers (the casualties of war), co-evolution of practice (leading to new forms of organisation), an explosion in higher order and novel systems (the wonder), reduction of barriers to entry and rapid increases in unmodelled data.

7) After a period of re-organisation, which occurs during the war and wonder stages, the affected industries settle down and the novel activities created mature. Past practices and companies unable to adapt die off.

This is how, back in 2005, we knew that cloud computing (commoditisation of a range of IT acts) would lead to an explosion in higher order systems, co-evolution of practice (devops), rapid increases in unmodelled data (Big Data), new forms of organisation (next generation), reduction of barriers to entry and disruption of past industries unable to adapt.

This is nothing more than a repetition of every cycle that we've been through countless times before ... and guess what, commoditisation of manufacturing through 3D printing will have exactly the same effects. Figure 1 gives a simplified overview of that repeating cycle.

Figure 1 - A repeating cycle (2007-2008 research)

Now, sometimes a means for communication becomes more of a commodity (postage stamp, phone, internet etc) and as a result the entire process of evolution speeds up. It's not that we've become more innovative but instead the speed at which something genuinely novel becomes a commodity has accelerated.

Ok, so what's wrong with the 'Age of Disruption'.

For starters, it's not an age. It's nothing more than a phase of an economic cycle. If you really want to call it the 'Age of Disruption' then you should call it the 'Age of Disruption v6.0' because it's about the 6th major one we've been through in the last 300 years. Actually, we've been through several hundred more smaller and localised ones, so in construction you could probably call it the 'Age of Disruption v57.0' - it's a really poor title.

It's also worth remembering that the reason why companies are being disrupted is not because of some unexpected change. This form of disruption is highly predictable and preventable. Alas, those companies being disrupted for the most part share one common characteristic - very poor strategic play.

In figure 2 (from a project a few years back), I examined the level of strategic play versus willingness to use openness as a competitive tool across more than 100 different companies. The thing worth noting is that those companies showing a high level of strategic play (in particular the Players) had high market cap growth over the last seven years. Those which showed low levels of strategic play (in particular the Chancers) had low levels of market cap growth, often negative and with some going bankrupt.

Figure 2 - Market Cap, Strategic Play (2011-2012 research)

Notes on graph

1) Each quadrant is given a label - thinkers (high strategic play, low levels of openness used as a means to compete), players (high strategic, high open), believers (low strategic, high open) and chancers (low strategic, low open)

2) I've marked two groups in grey, the top group shows higher market cap growth and the bottom group shows lower levels of market cap growth. The players were strongest and the chancers were weakest.

3) Size of bubbles equals volume of companies.

Whilst culture is important all the time, strategic gameplay appears more important in specific states of the economic cycle. IT is currently in one of those states where it matters: the state of war.

However, we're already entering the state of wonder; those past players will be cleared out and a new dominant form of organisation (as per Fordism in the Electricity Age or the American System in the Mechanical Age) will emerge. This new form, which we call the next generation, has the following characteristics (see figure 3).

Figure 3 - The Next Generation (2010-2011 research)

The first problem with the 'Age of Disruption' is that it implies something new. It isn't. It's just a repetition of what has happened before, governed by exactly the same mechanics (competition, evolution and inertia) with the same results. So, please make the effort to make it clear that this is not unusual.

The second problem is that this type of disruption is predictable. When we talk of disruption as in product vs product substitution, that's hard to predict and the interaction of culture and inertia becomes critical for a company's survival. In this case, because the change is predictable, the issues of culture and inertia are solvable well in advance and strategic gameplay is what matters most.

The fundamental reason why companies WILL be disrupted by this predictable change is not because of culture and inertia (i.e. 'we need to be more like a Silicon Valley company' won't help you) but instead because you've got a bad case of poor strategic play.

So, if you want to call it an age (which it really isn't) then a more accurate title might be the 'Age with quite a number of out-of-their-depth CEOs driving companies needlessly to disruption'. Now, obviously this isn't going to be a popular title and so I'm sure the 'Age of Disruption' will stick and numerous other factors (culture, inertia, unexpected change, customers) will be blamed as the principal cause of failure.

However, just remember that this 'Age of Disruption' is a repeating phenomenon throughout history and don't confuse it with the type of unpredictable product vs product disruption (such as the different format hard drives that Christensen talked about).

Oh, and what follows next? An age of wonder, followed by an age of peace, followed by an age of war (aka age of disruption), followed by an age of wonder, followed by .... and on, and on, and on.

A pet favourite - Inertia

Organisations consist of a mass of evolving activities, practices and data. As those components evolve from uncharted to industrialised, their properties change. It's that change of properties which means that one-size-fits-all methodologies don't work, despite organisations endlessly pursuing simple techniques (agile vs six sigma, insource vs outsource, push vs pull, network vs hierarchical, Hayek vs Keynes etc).

Figure 1 provides a simple view of organisation demonstrating a set of components at different states of evolution, for more on mapping read here or watch the detailed video on the LEF site. Figure 2 provides a list of common characteristics.

Figure 1 - An organisational map

Figure 2 - Changing properties

Now maps are not only useful for increasing situational awareness and an understanding of how things should be managed but they can also be used for various forms of scenario planning, strategic gameplay and economic learning.

For example, by using mapping it can quickly be discovered that as components evolve they also go through different economic states - one of peace, one of war and one of wonder. Depending upon how widespread the components are in other value chains, these states sometimes manifest themselves as macro economic waves (known as Kondratiev waves), but in the majority of cases their effect is much more localised (e.g. a specific industry). More details on this can be found here.

The different economic states are important because in the state of peace we experience that sustaining change tends to exceed disruptive change, whilst in the state of war disruptive change tends to exceed sustaining. The states also have different levels of predictability; for example, the war state is highly predictable in terms of what is going to happen but not when (it depends upon actors' actions). This means you can prepare for the war state many years in advance, but alas we often find companies being disrupted by highly predictable and ultimately defensible change. Cloud computing is an example of this.

You can also very roughly characterise the different states, with wonder being breakthrough, peace being incremental and war being disruptive. Naturally, our response to this depends upon the components in our value chain, how evolved they are, competitors' actions and the economic state. This is all part of gameplay.

Unfortunately, if you can't see the map then business is like playing a game of chess without seeing the board - every action is haphazard - sometimes outsourcing works, sometimes it doesn't. It doesn't have to be like this.

The economic states are also useful in predicting how organisations will evolve. It's no coincidence that every major age starts with commoditisation of a pre-existing act, that commoditisation results in a state of war and co-evolution of practice, and that those companies that survive have a different set of characteristics. The Electricity Age didn't start with the Parthian battery but with Tesla and Westinghouse, and it led to Fordism. The Internet Age gave us Web 2.0. The Mechanical Age gave us the American System. Cloud has given us the Next Generation.

With experience of mapping, it turns out that many aspects (not all) of change are surprisingly predictable, defendable and manageable. Even those areas of high uncertainty, such as the genesis of the novel and new, can be managed by exploiting others (see ecosystems). Of course, for those who don't look at the chessboard then everything seems highly confusing, random and difficult to understand.

In such circumstances, terms like ecosystem are completely misunderstood, and strategy is often a mess of operational, tactical, purchasing and implementation details with little or no 'why'. Managers grab onto truisms like 'culture eats strategy for breakfast' or 'be a fast follower', or simply grasp the latest fad - agile vs six sigma, 'open by default' - without any understanding. These Chancers only survive because their competitors are equally blind.

These Chancers also tend to get caught out by inertia to change. Now inertia is critical to gameplay and can be used to set up an environment where the competitor's biggest threat becomes itself.  The use of inertia is one of my favourite specialities. It's far beyond the more common tactical plays (tower and moats, ecosystems, alliances etc), the dark arts (manipulation of constraints, misdirection etc), altering competition (the use of open as a weapon, changing buyer vs supplier relationships, lowering barriers to entry) and the use of effective management. There's a special type of wicked delight that comes from setting in play a situation where a competitor self implodes.

However, inertia is much more than just a tool; it is also a key part of governing economic states. The states occur due to an interplay of competition (supply and demand), which drives evolution, combined with the build up of inertia due to existing models, relationships, business and practice. Understanding when and how to use inertia is essential to the finer points of gameplay. Of course, there are many forms of inertia.

In figure 3, I've characterised many of the main forms of inertia. All of them (with practice) can be exploited and certainly are things which an organisation should watch for.

Figure 3 - Types of Inertia.