Sunday, November 16, 2014

How we used to organise stuff ...

In the early days, we used to apply a one size fits all methodology because we didn't know better (see figure 1). We used to yo-yo between methods but in 2002, most of us in the open source world had gone "all agile" whilst the enterprise was mainly "all waterfall and all outsourcing". Under such a model, the IT organisation is one large department and one attitude. Be warned, single methods fail to deal with evolution and tend to run into excessive change control cost overruns.

Figure 1 - circa 2002 - One size rules all, OK!

By 2004, in the open source world, we had learned that one size fits all didn't work. We had started to work towards the use of multiple methods. We knew agile was suitable in the early stages of evolution of an act but we also knew six sigma was better for more evolved activities. The foolhardy amongst us had started to organise in this bimodal fashion with groups such as common services or systems and development (see figure 2). Under such a model, the IT organisation is two groups and two polar opposite (usually competing) attitudes. Be warned, bimodal structures tend to fail to hand over activities and lead to a lack of industrialisation of common components, spaghetti junction and platform rewrites.

Figure 2 - circa 2004 - Bimodal rules all, OK!

By 2005 to early 2006, many of us had learned that the jump between the novel and the industrialised was too large. We needed a third group and a different set of methods / techniques and attitudes to cope with this. Structures that took this into consideration started to appear; in many cases we formalised the "missing" group in our bimodal structures e.g. we went from development and common services to development, framework and common services. This three party system was applicable across the entire business as a pioneer, settler and town planner structure (see figure 3). Under such a model, the IT organisation is three groups and three overlapping attitudes which can be made to work in concert through the use of theft (i.e. enforced evolution). Be warned, such trimodal structures still tend to create large groups, which creates communication and other issues.

Figure 3 - circa 2005 / 2006 - Trimodal rules all, OK!

By 2007 to 2009, we had learned that we needed to divide our organisations into self organising cell based structures e.g. a starfish model, two pizza models etc (see figure 4). Under such a model, the IT organisation is many small groups but no specific attitude and no means of dealing with evolution. Be warned, whilst many of the communication and growth issues are better handled, the lack of means of managing evolution can create problems.


Figure 4 - circa '07 to '09 - Cell based structure rules all, OK!

By 2012 to early 2013, a few of us had started to look at combining both cell based structures and trimodal concepts, heavily using techniques developed in the military to create more adaptive structures (see figure 5). Under such a model, the IT organisation is many small groups and three overlapping attitudes which can be made to work in concert through the use of theft (i.e. enforced evolution). 

Figure 5 - circa 2012 / 2013 - Adaptive structure rules all, OK!

By 2014 ... be warned, this process is ongoing. The above leads naturally to administrative structures (covering training, culture and attitude) along with what can be loosely described as "battle groups" formed to implement a line of business. The closest thing to something that remotely resembles this sort of structure can be found in military organisations. The majority of organisations are still at the very beginning: one department and one attitude.

Three things to note ...

1) Don't think for one second anyone knows the best way of organising stuff - we don't. All we know are better ways than the past. Anyone who tells you they have the perfect way is talking horse.

2) Mapping an environment really helps with organisation or at least the exploration of the possible ways we might organise.

3) The use of mapping and high levels of situational awareness is a necessity for coping with evolution unless you have exceptionally talented people who understand that things evolve (i.e. they have their own mental models, can cope with inertia etc). Without extremely good situational awareness, I'd suggest you go for a cell based structure and hire rock star developers who work well with others.

Friday, November 14, 2014

How to get to Strategy ... in ten steps!

I was asked by someone whether I could help with a particular strategy. My response was rather simple ... you do steps 0-9, I'll help you with step 10 ... all in graphical form. There really isn't much point in starting with step 10 until you've done all the previous steps.

Thursday, November 13, 2014

Bimodal IT - the new old hotness

I've recently come across a concept known as Bimodal IT. I couldn't stop howling with laughter. It's basically 2004 dressed up as 2014 and it's guaranteed to get you into a mess. So we'd better go through it.

I'm going to start with a map. Now, as we know, the map of a business contains many components organised into a chain of needs (the value chain) with the components at differing states of evolution. As components evolve, their properties change (from uncharted to industrialised) and different methods of management become appropriate. Hence components in the uncharted space suit in-house development, agile techniques, quick release cycles, high levels of iteration etc. whereas more industrialised components tend to suit six sigma, ITIL, long release cycles, heavy standardisation etc.

When it comes to organising, each component not only needs different aptitudes (e.g. engineering + design) but also different attitudes (i.e. engineering in genesis is not the same as engineering in the industrialised stage). To solve this, you end up implementing a trimodal structure known as pioneers, settlers and town planners, which is governed by a process of theft. OK, this is all old hat, circa 2005, and I've summarised it in figure 1.

Figure 1 - Map of components in a business


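As a toy illustration of methods following evolution, here's a minimal sketch in Python. To be clear, the component names, evolution scores and thresholds below are all invented for illustration - this is not any formal mapping tool:

```python
# Toy sketch: pick an attitude and a broadly appropriate method for each
# component of a map from a crude evolution score (0 = genesis,
# 1 = commodity). All names and thresholds are invented.

def attitude(evolution: float) -> tuple[str, str]:
    if evolution < 0.35:
        return "pioneer", "in-house, agile, rapid iteration"
    elif evolution < 0.7:
        return "settler", "productise, iterate towards maturity"
    else:
        return "town planner", "six sigma / ITIL, utility or outsource"

# Hypothetical components of a value chain with guessed scores.
components = {
    "novel customer feature": 0.2,
    "content management": 0.5,
    "compute infrastructure": 0.9,
}

for name, score in components.items():
    who, how = attitude(score)
    print(f"{name:25s} -> {who:13s} ({how})")
```
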
So let's speed on and discuss what's wrong with bimodal.

The problem with bimodal (e.g. pioneers and town planners) is that it lacks the middle component (the settlers) which performs an essential function in ensuring that work is taken from the pioneers and turned into mature products before the town planners turn these into industrialised commodities or utility services. Without this middle component then yes, you cover the two extremes (e.g. agile vs six sigma), but the new things you build never progress or evolve.

For example, you start off with a new platform designed for the future on top of which your pioneers build new things. But unfortunately pioneers aren't usually so good at making rock solid and mature products - they're more pioneering. Of course, your town planners won't want to industrialise something which isn't a rock solid and mature product; it's too uncertain for volume operations. You end up with a stalemate in which the new thing never gets turned into an industrialised service. Instead, what happens is that something even newer gets built on top of your non industrialised component. And this continues, with new things built on layers of components that are not fully industrialised. It becomes kludge on top of kludge on top of kludge.

Eventually, after five years or so, this creates spaghetti junction. It is an environment which becomes unmanageable - performance sucks, reliability disappears and costs spiral. You get everyone together and the new idea is always to build the new "platform for the future". You then spend vast sums of money, create the new platform and repeat this exercise all over again.

This has occurred in almost every company that I've seen in the last decade using bimodal approaches for any period of time. I've seen many billions blown on the same mistake of pursuing solutions which fail to cope with evolution - we will have R&D over here and shared services over there and it will all work! Think again. It's why people use alternatives such as cell based structures or a trimodal system (e.g. pioneers, settlers and town planners) or a combination of both.

The whole purpose of the settlers is to productise i.e. to take the early stage work of the pioneers and iteratively mature it until the town planners can finally industrialise it. They are absolutely essential to the functioning of such structures. Remove them at your peril.

Of course, bimodal is still better than the madness of one size fits all (e.g. agile everywhere or six sigma everywhere). But you're still going to end up with SNAFU. So, think trimodal. Be a bit more 2005 than 2004, even though it's 2014. Create a virtuous cycle of theft (see figure 2).

Figure 2 - Pioneers, Settlers and Town Planners


On that note, I can't believe people are flogging Bimodal IT as something new in 2014 - that's taking the piss. Next week Gartner researchers will be telling us they've just invented the "internet" or the "wheel". Be under no illusion, bimodal plus time can go horribly wrong - seen it, done it, bought the T-shirt. Let me just reiterate that point. I'm telling you from the experience of having used both bimodal and trimodal forms over the last decade that bimodal is not what you want to do.

Apparently though, Gartner's bimodal model has both pioneers and settlers in the 'mode 2' group and town planners in the 'mode 1' group. Whilst the language Gartner use - industrialisation, uncertain, non linear - is all familiar, the justification for combining these groups to form a bimodal is highly suspect in my view.

When I wrote my Butler Group Review article in May 2008 on the need for the use of multiple methods within an organisation (e.g. Agile and Waterfall), we already knew back then that you couldn't organise in this bimodal fashion because of the problems it had caused. You need that missing middle (the settlers) which deals with the transition.

Now, I haven't read Gartner's recent research on this subject (I'm not a subscriber) and it seems weird to be reading "research" about stuff you've done in practice a decade ago. Maybe they've found some magic juice? Experience however dictates that it'll be snake oil and you really need to separate out the three groups. I feel like the old car mechanic listening to the kid saying that his magic pill turns water into gas. I'm sure it doesn't ... maybe this time it will ... duh, suckered again.

--- Update 14th Nov 2014
One thing I haven't made clear in this post and need to is where did the idea of a three party, trimodal way of working such as pioneer, settler and town planner come from?

From my perspective then, the origin of the idea was James Duncan, now a CTO within UK Gov. It started all those years ago, back in the days of Fotango when I was CEO. I was frustrated by several things - how the organisation worked, the lack of any competitive map to describe our environment, the complete blah of what was called strategy etc. I'd shown James my work on mapping and evolution. We worked together on this to refine the whole Fotango plan. It was a time of exploration for me and James was my cohort in this (which is why I occasionally use the term Wardley-Duncan map. Well, credit is due).

However, it was James (who was the CIO) who proposed a three party structure within IT. It sounded right, I was happy to go with it and I was delighted by the outcomes that resulted. I then started to fold the rest of the company into this model. Now, I'm not saying others didn't have a similar model - just that, from my perspective, it was James Duncan who started this particular aspect of my journey. Sometime around 2004.

It's also true that back then I couldn't fully explain why the trimodal approach worked so well - in fact many of our successful experiments were hard to explain. All we had was the practice and the result. It took a bit longer for all the aspects of mapping and evolution to become clearer before the reasons why it was a good move stood out. I mention this because mapping, the use of multiple methods, the work on ecosystems, open source and organisations - all started with practice & observed outcomes first. Understanding why came a bit later.

--- Update 14th Nov 2014
One final thing to mention: my disapproval of bimodal IT assumes that CIOs are trying to use it to fix a problem, albeit with a flawed approach. I've now come across an example where it is instead being used to "bolt on innovation", in much the same way that some companies are not adapting to digital but instead "bolting on a CDO". This is akin to adding lipstick to the pig.

I'd argue that this indicates an additional organisational need, which is the removal of its technology leadership. In such cases, a CDO is not only needed but should replace the CIO. But how do you know?

Well, next time you have the chance to chat with the CIO, just ask what they think of the Bimodal IT approach. If they're positive, that's fair enough - it's not too deadly a sign. But use the vernacular and ask "When our Sprinters have created something new, then over time it'll become like the stuff the Marathon runners provide. How are we going to manage that?"

If the response is "they'll just hand it over" or you get hand waving warble or any sign of confusion, then you know you probably own a well lipsticked pig.

Monday, November 10, 2014

The Enterprise is slower than you think

In the late 1990s, I had taken an interest in 3D printing. It was one of the main reasons I joined Fotango (a small but failing online photo startup), given my related interest in the distribution of images.

In 2001, I was the CIO of Fotango and we were acquired by Canon. 

By 2002, we became an open source and agile (XP) development shop. We extensively used and provided web services. We lived online. We started to get involved more directly in open source projects particularly Perl. We built a centre of gravity to attract talent to the barren technological wasteland that was Old Street.

By 2003, I was the CEO. We operated an environment which had started to use mixed methods (i.e. we learned that Agile wasn't appropriate everywhere). We had introduced paper prototyping for design and worth based development techniques (known as outcome based techniques today).  We focused on the user need. We had BYOD, a wireless office, remote working. We had replaced the company intranet with a wiki and had started to explore alternative mechanisms of communication (core parts of what is now called Enterprise 2.0). We built multiple systems for others and we were profitable. 

By 2004, we had started developing our own private IaaS (though it wasn't called that back then). The system, which included creation of virtual machines through APIs and configuration management tools (based upon CFEngine) was known as Borg. We had developed continuous deployment mechanisms, extensive monitoring and started mapping our own environment to determine new game plays and opportunities. We had introduced hack days and ran mini conferences. We had introduced conference funding, started to promote our open source work and started working on the idea of providing public API services.

By 2005,  we had simplified internal procedures such as HR (removed timesheets, holiday forms etc) and looked towards using commodity services where possible. We had launched the first public Platform as a Service (known as Zimki). We focused on industrialisation of key aspects of building a platform. We understood how to play an open source game, create a competitive market and exploit an ecosystem and their network effects. We had converted most projects to outcome based metrics, we had started to introduce a new organisational structure based on evolution, we had common web services running through dozens of large public facing systems.

So why do I mention this? 3D printing, continuous deployment, agile, mixed development methods, open source, building centres of gravity, cloud, building and stitching together small discrete web services (microservices), ecosystems, IaaS, PaaS, BYOD, Enterprise 2.0, Hack days, focus on user needs, outcome based approaches ... these are all 'hot' words in the Enterprise today. 

But these "words" weren't born out of ideas but out of practice from a decade ago. Oh, if you think we were first - you must be kidding. We thought we were slow compared to our compatriots and competitors.

That is, I'm afraid, the point. I hear these words often spoken as something new within the Enterprise. Well, it may be new to an Enterprise and it may be new to some of your competitors, but don't kid yourself that you're doing anything other than trying to catch up with where the edge of the market was a long time ago. The cutting edge of the Enterprise market is about a decade behind the edge of the market. An early adopter in the Enterprise world is still a laggard.

But why? Competition. 

You don't need to be near the edge unless your competitors are there as well. Which is probably why some traditional enterprise companies do so badly when companies like Amazon or Google move into their space. They're not prepared for the level of competition needed.

But that's why we need to adopt "3D printing, continuous deployment, agile, mixed development methods, open source, building centres of gravity, cloud, building and stitching together small discrete web services (microservices), ecosystems, IaaS, PaaS, BYOD, Enterprise 2.0, Hack days, focus on user needs, outcome based approaches" I hear some cry - or more importantly, their anointed consultants cry.

It won't help you. Instead you need to think of the above list as stuff you've been doing for a decade (where the puck was) and work out what new things you'd have built on top of this during that time (where the puck is). Then you need to look forward five years (where the puck will be). That's where you need to be heading.

Thursday, October 16, 2014

Of Peace, War and Wonder vs Company Age.

One of the more interesting discussions in recent times has been Prof Jill Lepore's arguments against Clayton Christensen's concept of disruptive innovation. In her now famous New Yorker article, Lepore argued that disruptive innovation doesn't really explain change but is instead mostly an artefact of history, a way of looking at the past, and that it is unpredictable. This really is a non-argument because both Christensen and Lepore are correct. The problem stems from the fact that there are two forms of disruption - one of which is predictable and one of which isn't.

The two main forms of potential disruption are product to product substitution and product to utility business model substitution. 

With product to product substitution, the predictability of when (dependent upon individual actors' actions) and of what (the genesis of some new feature or capability) is low (see figure 1). This means a new entrant can at any time create a disruptive product but a company will have no way of ascertaining when that will occur or what it will be. So whilst disruption will occur (as Christensen points out), it is unpredictable (as Lepore points out). Apple's iPhone disrupting the Blackberry is a good example of this type of disruption.

Figure 1 - Predictability.


With product to utility substitution the "what" and "when" can be anticipated. Hence a new entrant can more effectively target a change to disrupt others. However, it also means an existing player can effectively mount a defence, having prior knowledge of the change and time to prepare. Fortunately for the new entrants, the inertia faced by incumbents - existing business models, developed practices, technological debt, behavioural norms, financial incentives, Wall Street expectations and self interest - is often insurmountable, so the start-ups often win. Hence, whilst the change is entirely defendable against (often with many decades of prior warning), companies fail to do so. This form of disruption is entirely predictable and it is here that Christensen's theory excels.

Now product to utility substitution is a key part of the commonly repeating cycle of peace, war and wonder that we've discussed extensively. The 'war' element is highly predictable and the cycle occurs at both a macro and micro-economic level.

You can even model out the potential impact of this cycle on company age. Create a simulation with 1,000 actors; assume all actors are in competition, that the largest companies start with an age of 45 years, that there exists some unpredictable disruption from product to product substitution and that there is no peace/war/wonder cycle; you can then graph the emergent change in company age of the top 400 as new entrants arrive and previously successful companies fail. I've provided the output of such a simulation in figure 2.

Figure 2 - No Peace / War / Wonder cycle



In the above, the average age remains fairly constant over time (shown as sequence steps of the model on the x-axis, with each step of the model being analogous to a year). This is because whilst companies age, there is some substitution by new entrants counteracting the growing age.
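
For the curious, here's a minimal sketch of this sort of actor-based simulation. To be clear, the mechanics below (uniform random starting ages, a per-step disruption probability, and using the 400 oldest actors as a proxy for the 'top 400') are my guesses at a simple version, not the actual model used:

```python
import random

def simulate(n_actors=1000, start_age=45, steps=150, p_substitution=0.03,
             war_period=None, war_length=10, p_war_disruption=0.0,
             seed=None):
    """Toy actor-based simulation of average company age.

    Each step (~1 year) every actor ages by one; with some probability
    it is disrupted and replaced by a new entrant (age 0). If a
    peace/war/wonder cycle is enabled, extra disruption applies during
    'war' phases. Returns the average age of the 400 oldest actors at
    each step (a crude proxy for the 'top 400').
    """
    rng = random.Random(seed)
    ages = [rng.randint(0, start_age) for _ in range(n_actors)]
    history = []
    for step in range(steps):
        at_war = war_period is not None and step % war_period < war_length
        p = p_substitution + (p_war_disruption if at_war else 0.0)
        ages = [0 if rng.random() < p else age + 1 for age in ages]
        top400 = sorted(ages, reverse=True)[:400]
        history.append(sum(top400) / len(top400))
    return history

# Baseline (figure 2): no cycle, just ~3% p.a. product substitution.
baseline = simulate(p_substitution=0.03, seed=1)

# With a peace/war/wonder cycle (figure 3): ~30 steps for an act to
# evolve to commodity, then ~10 steps of 'war'. The extra disruption
# rate during war (0.1) is a made-up figure.
with_cycle = simulate(p_substitution=0.03, war_period=30, war_length=10,
                      p_war_disruption=0.1, seed=1)
```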

This requires a set of specific conditions, including a moderate level of disruption from product to product substitution (3% p.a.), but I'll use this simulation as our baseline. By adding in the peace/war and wonder cycle, starting with a condition of 30 steps (e.g. years) for an act to evolve from genesis to commodity and 10 steps (e.g. years) for a commodity to disrupt an existing industry, the following pattern (figure 3) emerges.

Figure 3 - Company Age with Peace / War and Wonder Cycle



What's happening now is a constant undulation in average company age as the environment moves through these cycles. It constantly attempts to return to a higher average age but the constant 'wars' and disruption by new entrants (on top of the normal product to product substitution) keep this in check.

Of course, one of the interesting aspects of the peace, war and wonder cycle is that it not only affects all activities, practices and data but some of these components are themselves communication mechanisms. Such communication mechanisms (e.g. the telephone, the postage stamp, the Internet etc) increase the rate of diffusion of information, which impacts the speed at which evolution occurs. This in turn accelerates the speed of the peace, war and wonder cycle.

Rather than us becoming more 'innovative' as a species, it appears that the speed at which things industrialise (i.e. evolve to more commodity and utility forms), and hence the rate at which we are forced to adapt and move onto the next wave, has accelerated. If you now add this communication impact into the simulation (i.e. assume some of those peace, war and wonder cycles impact communication, causing a subsequent higher speed of future cycles) then the following pattern emerges (see figure 4).

Figure 4 - Company Age with Peace / War and Wonder Cycle plus Communication impacts.


What's happening is the system is constantly trying to maintain an age but the peace / war and wonder cycle is causing oscillations around this (due to new entrants and the failure of past giants). However, the acceleration of the cycle (due to commoditisation of the means of communication) is causing a shift downwards to a lower age (and a new stable plateau around which age will oscillate).

I mention this because the same pattern in a simulation - which is derived from supply and demand competition causing evolution, and the interplay with inertia causing the peace, war and wonder cycle - can also be seen in Foster's graph of S&P 500 company age over time (see figure 5).

Figure 5 - Variation in average company age with S&P500


You have the same undulation that is caused by peace, war and wonder cycles plus a decline in average age which would be expected from commoditisation of the means of communication and acceleration of the cycle (e.g. Telecommunication, the Internet etc).

So, why mention this? Well, I'd argue that what we're experiencing is all perfectly normal. The system is rebalancing to a new company age around which it will oscillate. There are many ways of countering this effect by exploiting predictable change (which is unknown to many) and through the use of ecosystems but that's another post for another day.

The good news is that most companies have appalling situational awareness and so it's very easy to exploit. I've only been involved in three startups (all sold to large companies) and I've used these techniques in working for Canonical (we stole the cloud from RedHat) and Governments. It's amazing how much power a little situational awareness and understanding of common economic patterns gives you.

--- Update 17th October

After talking with Florian Otel, a smart cookie and always a good chat, I decided to run the simulation above several times to see if we could get it to mimic real life.

After a bit of trial and error, I set the starting company age at 50 years, with each step of the model representing 270 days, a low product to product substitution rate (1%), a higher rate of disruption from the 'war' phase of the cycle (13%), a base time to industrialise of 30 years, a time to disrupt once industrialisation starts of 15 years, a set of peace/war and wonder waves each impacting communication, a 7 year rolling average of the top 400 companies, 1,000 competing companies (actors) and an initiation time of April 1937.

I ran the simulation 10 times, because each step in the simulation is probability based and each actor therefore has a chance to age or be disrupted or disrupt others in each step. Hence after each simulation is completed, different actors have died or taken over etc. No two simulation runs are identical.
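
Reusing the simulate sketch from earlier, those ten runs would look something like this. The parameters are the ones given above; the conversion of years into 270-day steps, the run length and the rolling average are my interpretation:

```python
def years(y):
    """Convert years into 270-day model steps."""
    return round(y * 365 / 270)

window = years(7)  # 7 year rolling average
runs = []
# Ten runs; each step is probability based, so no two runs are identical.
for i in range(10):
    # Run length (~80 years) is a guess; other figures are from the text.
    raw = simulate(n_actors=1000, start_age=50, steps=years(80),
                   p_substitution=0.01, war_period=years(30),
                   war_length=years(15), p_war_disruption=0.13, seed=i)
    smoothed = []
    for t in range(len(raw)):
        lo = max(0, t - window + 1)
        smoothed.append(sum(raw[lo:t + 1]) / (t + 1 - lo))
    runs.append(smoothed)
```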

Figure 6 provides the overlapping results of the 7 year rolling average of company age for all ten simulations on a single graph. A very strong emergent pattern can be seen, which doesn't do a bad job of mimicking real life.

Figure 6 - Approximation of Real Life through Simulation


It's not a bad approximation to Foster's graph but it's by no means perfect. There's some variation in the simulation when re-run (as can be seen by the different lines in the graph). The times are not perfect (often being out from real life by many years) nor is the shape identical; however, it should be remembered that in the simulation the agents' actions are random whereas in real life we have the ability to anticipate change.

I've overlaid both Foster's graph and the simulation on the same company age / time axis scale in figure 7.

Figure 7 - Comparison of Foster to Actor (Agent) based Simulation


The simulation also pointed to three critical underlying dates when the industrialisation of communication mechanisms started to occur. These are 1967, 1991 and 2008 but obviously do keep in mind the simulation isn't perfect.

Now, not a lot can be inferred from this; an agent (actor) based simulation which creates an emergent pattern that approximates real life and points to the formation of the internet (1991), cloud computing (2008) and the information technology revolution (1967) as key moments in the industrialisation of communication could be just happy coincidence.

However, it's not bad and gives food for thought.

Sunday, October 12, 2014

On maps, component class, pipelines, markets, inertia and economic states.

When drawing maps, I often use different symbols to represent different aspects of competition. For example, since activities, practices, data and even knowledge evolve then I'll often mark these aspects on the map (see figure 1).

Figure 1 - Activities, practices, data and knowledge.


With most maps, I tend not to mark up the different component classes unless it is useful. When it is useful, I might add a legend to show the different classes of components (activities to knowledge), but I tend not to fully write out the process of evolution for each class on the evolution axis - it becomes unwieldy. I simply use the evolution of activities to stand in for the equivalent stages of the other classes (e.g. publication type I to type IV).

It's worth remembering that maps (even geographical maps) are simply a representation of the space. 

In some cases, within a map there will be a pipeline of constant change e.g. content for publishing. I'll normally mark this on (as with the case of the TV industry in figure 2).

Figure 2 - A content pipeline


When scenario planning, I'll tend to add on different markets to show comparison to the market in focus. I'll also add on further contextual information such as price elasticity, known forces (buyer vs supplier), known constraints and known difference between the company and the market (usually a dotted red line identifying a delta or a solid red line indicating a difference). An example of this is shown in figure 3.

Figure 3 - Comparison to market.


I'll also add on potential force multipliers (e.g. ecosystems), potential sources of inertia, likely points of change (a grey dotted line) and general comments or areas of interest.

Figure 4 - Ecosystems, inertia, points of change and areas of interest.


Lastly, I'll add different competitive states (peace, war and wonder), current and future states, competitive forces and potential for impacts. See figure 5.

Figure 5 - Competitive states and competitive forces.


The final maps I produce tend to contain elements of all the above. They are complex but then, so is competition.

When scenario planning, all of these components - from activities, practices and data to inertia, competitive forces, constraints, economic state, points of change, ecosystems, comparison to other markets, buyer vs supplier relationships, pipelines, elasticity and other compound effects (co-evolution, Jevons' paradox etc) - need to be considered. Trying to do this in your head without a map (i.e. a way of visualising and discussing a landscape) is almost impossible for any complex business.

Saturday, October 04, 2014

Something that will change the world of competition ...

One of the most powerful force multipliers in competition is the use of an ecosystem model known as ILC built around a utility service. This model has been in operation for about a decade and can be shown to create network effects in terms of innovation, efficiency, customer service and stability of revenue. There's nothing quite like it but since it's old hat, I won't go through it again.

However, the ILC model doesn't work quite so well in the product space (because the capture of consumption data requires expensive market research) nor in the physical commodity space (again there is no way of capturing consumption data).

This is all about to change. 

Sensors are getting to the point of being industrialised to commodity components that will capture and centrally store data through a "Sensor as a Service". Future products, even physical commodities will contain multiple "Sensor as a Service" components. This provides the capability for ecosystem games like ILC to be played out in the physical world. 

Supplier companies will start providing low cost commodity sensors with an attached Sensor as a Service capability as a highly industrialised platform. Other companies will deploy these components into their products and new inventions, and hence an ecosystem will build around these Sensor as a Service components. The benefit for the deploying companies is that the sensors will be low cost and the Sensor as a Service will provide data aggregation, market comparisons (performance compared to other sensors) and a range of other useful capabilities. Whilst useful for lowering the cost of experimentation and product implementation for the deployer, the real beneficiary is the supplier.

The supplier can play the same trick that happens in the digital world of not interrogating what the sensor is doing (that'll be private to the company deploying it) but simply monitoring consumption through the Sensor as a Service to identify the spread of new successful innovations (whether genesis of a new act or a product differential). It's no different to the ILC model but now played out in the physical world and it will have the same impacts. 

From figure 1 below - you industrialise a component activity representing a sensor (A2) to a more commodity form (A3) which is provided with a Sensor as a Service data capability. Other companies then build new inventions / feature differentiation (B1, C1, D1) on top of the sensor (A3) because you provide it at very low cost and hence reduce their risk of experimentation. You then simply monitor consumption data to identify which changes have been successful and, once identified, you aim to commoditise that component (D2) to a future sensor service and repeat the game - you get everyone else to Innovate, you Leverage the ecosystem to spot success, you Commoditise - ILC.

Figure 1 - ILC Model


For example, let us suppose we were Amazon (they are very good at ecosystem games). With big data becoming a rapidly industrialised component (services like BigTable and EMR already exist) and CCDs being fairly commodity components, let us hypothesise that we introduced a CCD Sensor as a Service. Makers of devices which include CCDs would get low cost CCDs and a service telling them about the performance of their CCDs in the wild, and maybe some other data aggregation capability (even to the point of customisation to location / time given environmental conditions). Of course, as the supplier, we would get to know which products (in which our sensors are deployed) are rapidly growing and being used, regardless of who is making or selling them or the data being transferred. This is achieved by simply looking at consumption of the service, the actual sensor data being private to the deploying company. This is incredibly useful for the supplier and why ecosystems are powerful future sensing engines.
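
A toy sketch of that 'Leverage' step: rank deployer products purely by growth in aggregate consumption of the sensor service, without ever looking at the (private) sensor readings. All product names and figures below are invented:

```python
# Toy 'Leverage' step of ILC: flag rapidly growing deployments from
# consumption of the sensor service alone. The supplier sees only call
# volumes per deployment, never the private sensor data.

# Hypothetical monthly API-call counts per product using our sensors.
consumption = {
    "drone-cam-x": [1_000, 4_000, 16_000, 64_000],
    "smart-doorbell": [50_000, 52_000, 51_000, 53_000],
    "wearable-gizmo": [200, 180, 150, 140],
}

def growth_rate(series):
    """Average per-period growth multiple across the series."""
    ratios = [b / a for a, b in zip(series, series[1:]) if a]
    return sum(ratios) / len(ratios)

# The fastest growers become candidates for commoditisation - the 'C'
# in ILC - after which the game repeats on top of the new component.
for product in sorted(consumption,
                      key=lambda p: growth_rate(consumption[p]),
                      reverse=True):
    print(f"{product:15s} growth x{growth_rate(consumption[product]):.2f}")
```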

The net effect will be the same as the digital world. The supplier will start to simultaneously exhibit :-
  • rapidly growing rates of innovation - it's in fact the ecosystem that is doing the innovation for it. 
  • rapidly growing rates of customer satisfaction - by using consumption data to pick successful changes in the ecosystem and then providing this as new components to everyone else
  • high rates of efficiency - simply economies of scale
  • high stability of revenue - through provision of industrialised components and reduced risk of experimentation (everyone else is taking the risk)
  • eventual grumbling - as other companies start to complain "they've eaten our business model again"
There's a whole new world approaching where ecosystem games (from game theory to open source as a weapon) can be re-applied in the physical world. Competition from physical engineering to healthcare is going to get seriously interesting. We've seen early starts in this space over the last couple of years but it is building. Key to success, of course, will be to position yourself as the supplier of commodity sensors with the Sensor as a Service attached i.e. you need to identify those sensors suitable for such a game, industrialise them to components and start building the ecosystem of other companies on top (see figure 2 for a rough simplification of the game).

Figure 2 - high level map of the game


That's the really interesting thing about the Internet of Things. The real battle will be over the underlying components and the ecosystems that are built around them. Sensors are sexy - well, if you're a competition nut like myself. You don't want to be the device manufacturer, you want to be the component sensor as a service in every other manufacturer's device.

Oh, and the best news is ... most of the competitors in this space probably won't see this coming (poor situational awareness); they'll focus on the device and communication between devices whilst you can start to build up in underserved markets. When it finally hits, then combined with inertia, this will be one of those predictable forms of disruption that any start-up can have a field day in. There are a few billion to hundred billion dollar companies to be made in this space.

How do I know this? Well, I don't - well, not for definite. With another 7,200 days of data collection I could be more conclusive (or not) but alas, that's another story. At the moment, I'd advise taking the above with a pinch of salt, as with any other prognostication on the future - unless you're a start-up, in which case I'd say 'you become what you disrupt'. It'll take 10-15 years before this space really kicks off, so it's time to start building now.

There's a lot of future value in sensors and sensor ecosystems.

On the future ...

The Chinese philosopher Lao Tzu once stated, “Those who have knowledge don’t predict. Those who predict don’t have knowledge”. However, let us assume that there is future change that is known to everyone and change that is unknown to all (i.e. we’re forced to speculate and predict). Advantage can be created by a business through change that is known only to a few and “unevenly distributed”. 

This raises a question: can we determine a means of identifying future change that is knowable but not known to all? When examining literature, we can often cite examples of past science fiction novels that appear prophetic. However, the sheer volume of publications (in excess of 70K novels & short stories per annum) means that this can be attributable to pure coincidence, and often such predictions suffer from interpretation effects (i.e. we read into them prophecy where there is none - the prognosticators' equivalent of the P.T. Barnum effect).

The challenge is whether we could develop a means of more accurately predicting change beyond random coincidence. Could we predict the predictable because the knowledge was already there, even if we were only vaguely aware of it? Could we create a more 'prophetic' story? To exploit the future, we need to somehow create a framework that allows us to uncover knowable change. Such a framework must be inherently holistic, interdisciplinary, relative, repeatable and useful :-
  • Holistic because of the potential for combinations to be greater than the individual change (e.g. standard electricity plus material science leading to computing).
  • Interdisciplinary because it impacts not just technological, economic and physical systems but also social systems. 
  • Relative because the changes may be different depending upon the observers’ viewpoint i.e. the impacts in one industry may not be the same in all. 
  • Repeatable because the validity of single, one off predictions provides no method of testing beyond the scope of the single prediction. 
  • Useful because vague generalisations and known effects provide no means of exploitation.
Is this possible? The answer turns out to be ... sort of, maybe ... but I'll leave that to another day ... well, to be precise ... another 7,200 days approximately in order to gather the data to validate it.

On disruption and executive failure

When examining the issue of predictability, it becomes fairly obvious that there are two important axes to consider - the predictability of what and the predictability of when. By examining change on these axes (see figure 1) then one pattern becomes clear - there is more than one form of disruption.

Figure  1 - Predictability of What vs When


The two main forms are disruption by Product to Product substitution (which is unknowable i.e. unpredictable) and disruption by Product to Utility substitution (which is knowable) :-

The Unknowable (e.g. product to product substitution). In this form, a change in the value chain (caused by some new component added to the product e.g. size) is unpredictable. The entrant has no idea whether it will be successful, nor does the incumbent. However, the incumbent has inertia caused by past business success and, if the change is successful, then the speed of adoption tends to be non-linear. This means the incumbent has little time to react, lots of inertia to overcome and is unsure whether the change will dominate. Once they realise the change is inevitable then it's usually too late to do anything about it. This type of disruption can only be accurately seen post event and in this case, Jill Lepore is correct that disruption is a post event classification and unpredictable.

The Knowable (e.g. product to utility substitution). In this form, the change is highly predictable and can be prepared for many years (or decades) in advance. Cloud computing fits into this category. This predictable disruption can be determined pre-event and, for the same reasons, can be defended against - it shouldn't disrupt. The incumbent will have inertia but they also have plenty of time to prepare - from years to decades. The reason why incumbents become disrupted is due to one of two variations.

The first variation is that the change goes unseen due to poor situational awareness & scenario planning on behalf of the executive. Less than 30% of companies have any form of scenario planning and less than 4% have any means of visibly seeing the landscape, hence such blindness is rife. In this scenario the incumbent fails to prepare for a predictable change and suffers the same consequences as though the change was unknowable. For an attacker, this is highly attractive as the change is highly predictable and therefore a new entrant can target a space with a good degree of certainty of success. If you can find a product to utility substitution for which all the weak signals indicate it is ready to happen and for which incumbent executives show low levels of situational awareness ... well, it's like stealing candy from a child. For extra icing on the cake, you can often use incumbents' inertia against them. This is how Canonical managed to walk in and help itself to the entire future cloud market against the vastly better resourced and funded competitor that was Red Hat.

The second variation is that the change is seen but the incumbent still fails to act. In some cases (quite rare) this failure to act is due to extremely strong and entrenched inertia to change. This is what happened with Blockbuster vs Netflix - Blockbuster saw the change, was first with video online and on demand, but completely failed to deal with the inertia caused by its shops. The same happened with Kodak - first in with DSC and online photos but it failed to deal with the inertia caused by its analog fulfilment systems. Not seeing a predictable change (due to poor situational awareness) is a failure on behalf of the executive; however, seeing the change and failing to act is shockingly poor.

The points to take home are 1) disruption occurs (as described by Christensen) and 2) there are two forms of disruption - unknowable and knowable. The unknowable type is difficult to defend against due to its unpredictability. So, in the case of Apple vs RIM, you can't really point the finger at the executives. It's just down to chance, a gamble, and it's not surprising that companies get hit by this. Lepore is spot on regarding lack of predictability and post event classification with this form.

However, the knowable form is highly defendable against - it is easy to adapt to. That companies get disrupted by this is either a failure of executives in terms of situational awareness or, even worse, a failure to act upon what is known. There isn't much of an excuse for this.

I mention this because 'disruptive innovation' appears to have become a common excuse for company failure. In many cases, the companies were disrupted because of executive failure to either see or act upon a visible change i.e. the storm was visible; they either failed to look or failed to move the company out of the way. This shouldn't happen but unfortunately it does.

On Prognostication

The Internet did not catastrophically collapse in 1996 as Robert Metcalfe (co-inventor of the Ethernet) predicted. Apple was not ‘dead’ by 1997 as Fortune, Business Week and the New York Times told us. Neither wireless nor airplanes made war impossible – Marconi and the Wright brothers got this wrong.

I don’t use a nuclear powered vacuum cleaner nor do I live in Arthur C. Clarkes’ vision of the autonomous home that flies into the air to head south for the winter. Apparently I was supposed to in 2001 when not travelling around in my Fred Freeman 1999 Rocket Belt or taking saucer shaped flying ships to NASA’s permanent moon base which still hasn’t appeared after 50 years of prediction. Edison’s prognostication that Gold would become worthless never came true. My automated vehicle of the 1980s that David Rorvik assured us that we would get, never turned up. Without a getaway vehicle, it’s fortunate that Donald Michael’s warning that by the 1980s we would be replaced by intelligent machines never actually happened. Of course, I could have escaped to my 2014 underwater city of Isaac Asimov … except it’s not there. Nor am I taught by electrocution, Fireman do not have flying wings and I’m still waiting for my personal helicopter in my garage (I was supposed to get that in 1968).

Prognostication is a very sorry business.

Friday, October 03, 2014

When to use a curve, when to use a map

The mapping technique is based upon two axes - one which describes a chain of needs (the value chain) from the user needs (visible to the user) to the underlying components (whether activities, practices or data) that meet those needs (invisible to the user) versus evolution. A description of how the evolution axis was developed including its relationship to publication types can be found in this post.

Now generally, when examining change in any complex system, you'd use a map because the system contains many evolving components and the interaction between them is also important (see figure 1).

Figure 1 - A map


The history of each component is simply a movement from left to right on the map and this can also be viewed on the evolution curve - for example, you can examine the change of computing infrastructure from its genesis with the Z3 to commodity forms today (see figure 2).

Figure 2 - Evolution of a single component (computing infrastructure)

However, sometimes it is useful to simply focus on the current state of change and view the position of all the components of a current map on the evolution curve - see figure 3. This is part of a process known as profiling.

Figure 3 - The components on the Figure 1 map on the evolution curve


Now, the components shown in figure 3 are all independent and they are all evolving along the same pathway. Creating profiles is an extremely useful technique for competition and one I've mentioned before; however, for the time being I simply want to flag it.

Monday, September 29, 2014

Paradoxes of Organisation

I've just seen Dave Gray's post on the paradoxes of organisation. Oh, this is a wonderfully rich area and something I'm looking forward to reading.

Over the years (based upon evolution), I've noticed several patterns, from conflicts (innovation vs efficiency, agile vs six sigma) to misunderstandings to paradoxes. From Ashby's law to Jevons' paradox to the impacts of componentisation, there is lots of interesting material.

Four of my favourite paradoxes in business (with each argument described by a scale from 0 to 1) are :-

Predictability (of what) + Predictability (of when) ≈ 1 (i.e. we can often say what will happen but not when or when stuff will happen but not what)

Potential for future value + Certainty of value  ≈ 1 (i.e. the more something has potential, the less certain we are about it. If we're certain about it then so is everyone else and the value is correspondingly smaller)

Simplification of management + Effectiveness of management ≈ 1 (impact from Ashby's law of requisite variety i.e. the more we pretend management is simple, the less effective it becomes)

Organisational focus on survival today + Probability of survival tomorrow ≈ 1 (impact from Salamon & Storey i.e. survival today requires coherence, co-ordination and stability whilst survival tomorrow requires 'innovation' and hence a replacement of those values)