Tuesday, September 15, 2015

How commodity is something?

tl;dr There are a number of different ways you can refine your maps but in most cases you shouldn't need to go beyond aggregated views. Most of the benefit can be gained by a group of people sitting around sharing and discussing the map.

Everything (whether activities, practices, data or knowledge) evolves through the application of competition (supply and demand side). Each instance of an evolving act diffuses through a market over time and hence evolution can be seen as a series of diffusion curves (often many hundreds) with each diffusion curve having its own chasm.

Those diffusion curves however have different markets and different time spans i.e. diffusion of the first phones was not the same as diffusion of later, more evolved, more mature phones. So when examining an act (A) which is evolving through ever more mature examples of the act (A1 to A5), you see a diffusion pattern that looks like figure 1.

Figure 1 - Diffusion of an activity A

At any moment in time in the market you may have many examples of the act in existence i.e. older phones, modern phones and the latest phone can all exist in the market at the same time. You cannot simply look at a market, see that 50% of households have something and make a claim about how evolved it is. Whilst evolution contains many diffusion curves, evolution does not equal diffusion and you don't know which diffusion curve you're on.

However, there is a pattern to evolution which you can measure by examining the ubiquity and the certainty (i.e. maturity and completeness) of a thing. Unfortunately, the measurement can only be done for the past. Something has to become a commodity before its past can be measured accurately. This means that whilst I know the direction of travel of something (e.g. the genesis of an act becomes custom built becomes product becomes a commodity - see figure 2), I cannot directly measure where something is but only where it was.

Figure 2 - Evolution

The reason for this is the certainty axis itself and the role of individual actors in the market. I can only say where something is with certainty once it becomes certain. The future unfortunately acts as an information barrier and until something has become certain (and I can therefore measure its past), I have to guess unless I invoke some form of crystal ball.

Now this evolution curve provides the x-axis used in mapping. So, how do I know where to place something (e.g. activity A) on the x-axis if I can't measure it? (see figure 3).

Figure 3 - Where on the map is it?

There are a number of techniques you can use.

Ask yourself what this thing is.
The first step is to ask yourself whether this thing is :-
a) rare and poorly understood i.e. genesis?
b) uncommon and somewhat understood, normally provided as custom built examples, often by consultants?
c) quite common and reasonably understood, often provided as a product or rental service through product vendors?
d) well understood and ubiquitous, provided via utility services or as undifferentiated commodities?

If you take a number of people (say 2-4) with experience of the field then you can get a better answer by asking the group. This also helps when there are arguments because that usually signals the act consists of several components at different stages of evolution.
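
If you want to make the group exercise mechanical, a toy sketch follows (Python, purely illustrative - the stage labels and the splitting heuristic simply encode the rules of thumb above):

```python
from collections import Counter

STAGES = {"genesis", "custom built", "product", "commodity"}

def classify(votes):
    """Each person answers with one of the four stages (a to d above).
    Unanimity gives a placement; disagreement usually signals the act
    is really several components at different stages of evolution."""
    assert all(v in STAGES for v in votes), "answers must be one of the four stages"
    tally = Counter(votes)
    stage, count = tally.most_common(1)[0]
    if count == len(votes):
        return f"consensus: {stage}"
    return f"no consensus {dict(tally)} - consider splitting the act into components"

print(classify(["product", "product", "product"]))
print(classify(["genesis", "product", "commodity"]))
```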

Look at the properties of the thing.
The second step is to examine the characteristics of the thing. As it evolves its characteristics will change from the uncharted space to the industrialised (with a transitional space in between). See figure 4.

Figure 4 - Characteristics.

Hence look at the activity, for example, and ask yourself :- Is this rapidly changing? Is this a differential? Is this exciting? Is it stable? Does it seem chaotic? Is this well defined? Am I experimenting? etc.

Look at other maps.
Obviously everyone has bias, hence ideally you should use a group to determine how evolved something is. However, if you have several maps then you can use an aggregated view (i.e. a summary of common points on different maps) to determine how this thing should be treated. You do this by looking for clusters in order to remove bias (see figure 5).

Figure 5 - Aggregated View

The above graphic is a great way of removing duplication in an organisation and stopping groups from custom building the sort of thing which should be a commodity e.g. user registration.
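
As a rough sketch of the mechanics (Python, with made up maps and an arbitrary spread threshold), an aggregated view is little more than averaging each component's position across maps and flagging the ones where people disagree:

```python
from statistics import mean, pstdev

# Positions on the evolution axis (0 = genesis, 1 = commodity).
# These maps and numbers are invented for illustration.
maps = [
    {"user registration": 0.92, "recommendation engine": 0.35},
    {"user registration": 0.88, "recommendation engine": 0.15},
    {"user registration": 0.95, "recommendation engine": 0.60},
]

for component in sorted({c for m in maps for c in m}):
    positions = [m[component] for m in maps if component in m]
    avg, spread = mean(positions), pstdev(positions)
    verdict = "clear cluster" if spread < 0.1 else "disputed - check for bias or split it"
    print(f"{component}: mean={avg:.2f} spread={spread:.2f} -> {verdict}")
```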

Look at publication types.
At this point I'd probably say you're going overboard. You don't need to get maps this accurate in order for them to be useful tools for organisational learning, collaboration, communication, scenario planning and strategic gameplay. However, if you want to go the extra mile then start examining the publication types (see figure 6).

Figure 6 - Publication Types

E.g. examine the frequency of different key publication types to determine roughly where you are. Please note, even this measure is rough until the act has become well defined (i.e. certain), in which case the volume of publication types can be used to determine where it was (past tense) on the certainty axis. You can use this as a weak signal but I mostly use it in anticipating future change.

NB. You have to be careful when examining publication type because certain words / phrases appear in multiple stages. For example people talk about platforms when describing a product (i.e. you can build on my product) and they also talk about platforms when describing utility services. The two are not the same.
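
To give a flavour of the mechanics, here's a toy sketch (Python). The keyword buckets are my own invented stand-ins for the four publication types, not the coding scheme of the original study, and as noted above a real classifier would have to cope with words like "platform" appearing in multiple stages:

```python
# Toy publication type counter. The keywords are invented stand-ins
# for the four types, not the original study's coding scheme.
TYPE_KEYWORDS = {
    "I (wonder)":         ["wonder", "marvel", "the future of"],
    "II (build)":         ["how to build", "construct", "make your own"],
    "III (operation)":    ["versus", "comparison", "which is best"],
    "IV (guides to use)": ["guide to", "getting started", "tips for"],
}

def count_types(titles):
    counts = dict.fromkeys(TYPE_KEYWORDS, 0)
    for title in titles:
        for ptype, words in TYPE_KEYWORDS.items():
            if any(w in title.lower() for w in words):
                counts[ptype] += 1
    return counts

titles = ["The wonder of radio", "How to build a crystal set",
          "Radio X versus Radio Y", "A beginner's guide to radio"]
print(count_types(titles))
```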

At this point you're definitely going overboard and trying to create a perfect map. Don't bother, your map will change and you shouldn't spend more than a few hours to a day creating it. The only time you should be stepping into weak signals is when you're anticipating a very specific change and working out roughly when to attack a market.

There are a number of signals that can be used, for example when crossing a boundary you need to overcome inertia and that requires a number of factors to be in place - concept, suitability, technology and attitude.

For example, in figure 7 we have an evolving act. Every time it evolves from one stage (e.g. custom built) to another (e.g. product) and hence crosses a boundary, there's inertia to the change created by the incumbent group. Hence the originator (for genesis to custom built), consultants (for custom built to product) and product vendors (for product to commodity) all resist a change that impacts their market. The size of the inertia barrier increases as the act becomes more evolved (and more established).

Figure 7 - evolution and inertia.

In order to overcome an inertia barrier you need the four factors in place. The concept of providing the act in the next stage must exist. It has to be suitable (i.e. widespread and defined enough). The technology to achieve this must exist and finally there must be an attitude of acceptance for such a change by the customers. The latter is normally represented by the customers being dissatisfied with the existing arrangement.
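
A minimal sketch of that checklist (Python; the yes/no judgements are of course the hard part and nothing here automates them):

```python
def crossing_ready(concept, suitability, technology, attitude):
    """All four factors must be in place to overcome the inertia barrier:
    the concept of the next stage exists, the act is widespread and
    defined enough (suitability), the technology to provide it exists,
    and customers are dissatisfied enough to accept the change (attitude)."""
    factors = {"concept": concept, "suitability": suitability,
               "technology": technology, "attitude": attitude}
    missing = [name for name, present in factors.items() if not present]
    return "ready to cross" if not missing else f"blocked - missing {missing}"

# e.g. the concept and technology exist and customers are dissatisfied,
# but the act isn't yet widespread and defined enough
print(crossing_ready(concept=True, suitability=False, technology=True, attitude=True))
```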

You can go further into weak signals, for example the rate at which higher order systems can be built from lower order components accelerates as the underlying systems become more industrialised. You can even use timing based upon common economic cycles such as peace, war and wonder (see figure 8). However, these are overkill for a simple mapping exercise and are more suitable when directing an attack.

Figure 8 - Economic cycles & speed of building higher order systems.

There are a number of different ways you can refine your maps but in most cases you shouldn't need to go beyond aggregated views. Most of the benefit can be gained by a group of people sitting around sharing and discussing the map.

Wednesday, September 09, 2015

The art of strategy ... it doesn't matter.

Strategy is fundamentally about answering the question of "Why?"

But "Why" is a relative statement as in "Why here over there?"

But "Here over there?" requires you to understand "Where you could attack"

And this requires situational awareness.

The process of strategy is inherently iterative. You start by roughly understanding your purpose including your scope, how you interact with others and your users. You then expand into situational awareness by understanding your users' needs, your value chains and their context (i.e. a map of the environment). You then apply doctrine including tactical approaches such as removing duplication and bias, improving flow and using appropriate methods. You learn about the environment, your competitors, the common repeatable economic patterns and forms of gameplay. You structure yourself around this including your capabilities.

You then set a direction through a mixture of anticipation and scenario planning and through acting you refine your purpose, your situational awareness, your use of doctrine and your learning. You constantly repeat this process of observing (purpose, situational awareness), orientating around the landscape (doctrine), deciding (strategic play) and acting. This is summarised in figure 1.

Figure 1 - Strategic Play.

Except you probably don't live in that sort of organisation, there's less than a 0.5% chance that you do. For most of you, I can happily describe your organisation without even meeting you.

Your organisation has :-
  • A weak definition of purpose often not widely understood by people in your organisation
  • Little to no clear understanding of your users and their needs
  • Little to no situational awareness including any detailed understanding of the value chains in your market and how they are evolving
  • Significant amounts of duplication and waste, you frequently discover the same projects being built by different teams.
  • Considerable misapplication of doctrine from the inappropriate use of outsourcing or agile or six sigma and a tendency towards one size fits all methods.
  • No effective mechanism of organisational learning particularly for common economic patterns and repeatable gameplay. You're probably nervous of being disrupted but don't even realise there are multiple forms.
  • Endless talk on data, on being hypothesis driven and how your company uses this but you'll have this nagging feeling of "how do we know if this is the right data?"
  • A non adaptive structure requiring endless bolt-ons for new capabilities e.g. digital
  • A tendency to yo-yo between innovation and efficiency with internal programs.
  • Significant internal inertia to change without a clear understanding of the causes and yet somehow a belief that your systems are as efficient as, if not better than, competitors'.
  • A strategy based upon backward causality (i.e. meme copying others especially in the Tech field whether it's ecosystems or open source or uberize the economy), gut feel and with little direction or anticipation (beyond gut feel). It's probably represented with volumes of papers to cover up its weakness.
  • A vague notion of your culture but awareness that it's really important. You probably tell yourself that culture eats strategy for breakfast.
Your process of strategy (if you're in the Tech industry) is more likely to be akin to figure 2 (even though this is said in jest). You'll probably deny it due to the considerable amount of effort and time expended on this - you'll add in a few more steps, write a business case, create a SWOT, mention something about capabilities and justify to yourself that it works.

Figure 2 - Your strategy process

Without situational awareness then you are simply waiting around for some company to take an interest in your space and then take you out. Fortunately your competitors are usually in the same boat and so ... the art of strategy ... it doesn't matter. Focus on execution, work on gut feel, copy those memes, listen to the analysts, get yourself in their magic quadrants, sell a good story, focus on your vague notions of culture (or at least tell everyone that's really important to you) and run away from those competitors that you suspect play a different game. 

As long as you build a product that is vaguely useful then you can survive for a considerable amount of time on gut feel, copying others and just by executing well. It's ok to suck as long as your competitors do. However, just because the art of strategy doesn't matter that much today, that doesn't mean it won't matter tomorrow. Slowly but surely, there are signs that companies are getting better at this. As more adapt then it'll become harder to survive just by copying the memes of others.

Tuesday, September 08, 2015

The continued search for Spime Script

Back in 2006, I gave a talk on the likely development of a future language - Spime Script - that would encode both digital and physical structure. This would enable a developer to describe the function of something and allow a compiler to determine what should be expressed physically and what should be expressed digitally. In other words, I describe the function of a smart watch and a compiler expresses this through 3D printed physical form, printed electronics and digital code. The compiler decides what bits of the function are expressed in what form and it's the interaction of those forms that creates the function we're looking for. I regularly keep an eye out for this and how the industry is progressing slowly in that direction.

A key part of this change is the development of hybrid printers capable of creating both physical form and electronic structure. There are a couple of things to note. According to weak signals, the span of time at which hybrid printing becomes industrialised (i.e. the printers become widespread, well understood and commodity like) should be around 2035-2040 - see table 1.

Table 1 - Point of Industrialisation (the war)

This might seem a long way off but we're only talking 20 years and this is bang on course for the current pace of evolution which stands at 20-30 years from genesis to point of industrialisation (post 2012). The first hybrid 3D printers capable of physical and electronic printing were circa 2006.

Now, I know everyone thinks that in this "digital" age everything goes superfast but that's mainly an illusion created by the overlap of many points of industrialisation (the wars) across many activities and value chains. Yes, the pace of evolution accelerates with industrialisation of the means of communication to more commodity forms (printing press, postage stamp, telephone, the internet etc) and yes, it has recently accelerated from around 30-60 years to around 20-30 years but that's all. Do remember that the first 3D printer in a research setting was in 1967. We've got many rounds of product development for hybrid printers until they become widespread and well defined enough to become suitable for more industrial (i.e. commodity) provision.

The second thing to note is the common characteristics of the point of industrialisation (i.e. the "war"). These common characteristics include :-

1) A punctuated equilibrium as we move from product to more commodity forms. This means a non linear pattern of change, an exponential growth pattern. Hence, though adoption will be exponential in the first 5-10 years, by the end of a decade these hybrid printers will still only represent 3%-5% of the market (somewhere between 2040-2050). It's the next five years, in which they grow to 30%-50% of the market, that catches everyone out.

2) An explosion of higher order systems i.e. new, more complex systems built with these industrialised hybrid printers.

3) A co-evolution of practice in the manufacturing industry i.e. new methods and new techniques based upon more industrialised provision.

4) Disruption of past vendors stuck behind inertia barriers i.e. existing incumbents in the manufacturing industry that are hampered by their existing business models, practices, political and financial capital and can't see the change until it's too late to react. The new entrants in this space almost always become the future giants.

There are many other characteristics but we'll leave it at those four.

The third thing to note, from the above list, is the co-evolution of practice leading to new methods and new techniques. This is where Spime Script will come into play. An entirely new development language that will compile to both physical and digital form. We'll probably even create a whole new meme for this change - I hope it isn't ManuOps or something daft.

Now, as with all such changes, the seeds of change are set well in advance. Hence whilst SpimeScript (or whatever it's called) will emerge and then dominate this change impacting value chains everywhere circa 2035-2055, the origin of Spime Script will start much earlier.

When this change hits, have no doubt that its impact will be astronomical in terms of our quality of life and economics. This change impacts almost every value chain and it'll make cloud computing, the internet and most other things look like small potatoes. Everything we build and all the related industries and supply chains will alter. Basically, everything is impacted from agriculture to industry to services - from the cars you drive (or will be driven for you) to the house you live in to the gadgets you use.

Bizarrely, it looks like this change is going to hit at the same time as bio manufacturing and aspects of meta material science (other reasonably large changes) start to industrialise and so that period 2035-2040 is set to become rather spectacular, a time of notable 'wonder'. You'll have all forgotten about this "digital" stuff by then anyway and what's left of it will have become legacy.

So, in rough order

2015 - 2035 : we should see continued product development of hybrid printers including potential minor waves of disruption from product vs product substitution.
2035 - 2040 : we should see new entrants providing more industrialised forms of hybrid printers, an explosion of new activities built on this, co-evolution of practice including the rise of a new language (e.g. SpimeScript) and exponential change.
2040 - 2050 : the new entrants should achieve 3-5% of the market, a meme will exist for the new form of manufacturing, past incumbents will show visible signs of inertia and be generally dismissive.
2045 - 2055 : the new form of manufacturing will dominate all value chains, past incumbents will be disrupted.

Alas, by the end of this time, I'll be in my seventies at the youngest. So, there's every chance that I won't be around to see what should be one of the most spectacular changes in the history of mankind. Robots, drones, AI, big data ... bah humbug, small fry ... components of a much larger change that includes the entire process of how we make stuff.

Hence my interest in finding Spime Script early; it should be fascinating to watch it grow.

P.S on the off chance that you don't know what a Spime is then please go read Bruce Sterling's book.

Sunday, September 06, 2015

Some things are different, some remain the same.

Back in 2011, I published a population study of companies showing that a next generation of company seemed to be emerging with a different phenotype. Details of this can be found here. I've summarised the main changes in table 1.

Table 1 - Differences between Traditional vs Next Generation. 

NB, these characteristics describe what was significantly different between the populations at that time and of course, we will only be able to tell whether these characteristics become universal with the application of more time. However, despite the differences there were many things that remained the same i.e. did not distinguish between populations. A selection of these "neutral" characteristics that remained the same is included in table 2.

Table 2 - Similarities between Traditional vs Next Generation. 

I mention this because I've noticed a tendency of people to bundle all sorts of characteristics under the title "digital" and claim fairly definitively that this is the future of the firm. Before embarking on such exercises, I would suggest that you understand the concepts of evolution & co-evolution, along with the common repeating economic patterns that tend to form new organisational structures, and then do some population studies to confirm / reject your hypothesis across numerous characteristics. Once you've done this, you then have to wait many years to see if a characteristic turns out to be a pointless fad or the consequence of an echo chamber (i.e. it won't become supported in the wider environment).

From social media to blogging CEOs to culture to structure to the adoption of specific methods, there are quite a few trendy ideas which seem to be on questionable grounds. I'm not saying these things are without merit and certainly we should experiment but that's a far cry from picking a characteristic from a couple of "leading companies" and declaring it the future of all firms. You need to be wary of backward causality; just because Uber uses "surge pricing" doesn't mean it's the future of all.

What my study showed was that whilst there were differences, there were many more similarities between the traditional and the next generation. As time passes, the traditional will adapt even on the small number of characteristics that showed any significant difference. Of course for me that's important because in a decade or so's time, I'll be able to go back and confirm whether a next generation did truly emerge or whether it was some other aberration.

Friday, September 04, 2015

VMware, EMC ... the rumour mill is in fine flow.

In a discussion over China and the USA, the conversation turned to all the tech rumours floating around VMware & EMC. I shrugged my shoulders and said I retired from cloud almost five years ago and I have very little interest. 

Alas, people kept needling the point. 

Eventually, I added that I thought EMC had left it late to jettison VMware. I'd have looked to maximise the capital last year.  I had an old "Escape button" bet from 2009 that...

By the end of 2014, VMWare will have divided into two companies, one focused on existing virtualisation technology (IaaS Group) and the other focused on PaaS technology. The IaaS group will have been sold off to another company.

Now whilst I don't expect to win the bets, I do intend to get the direction of travel roughly right. Alas, whilst I expected an open source platform play and this to be broken out as a separate group which has happened with Pivotal, there has been no clear movement on spinning out the virtualisation business. Hence my bet was lost. I'm no fortune teller and so don't ask me what is going on with EMC and VMware.

Alas, people kept needling the point.

In annoyance, I explained that I took the view that EMC must suspect that everything from the virtualisation business to storage would be heading towards the cliff. I had anticipated that VCE would be a potential vehicle to extract EMC from this but alas that didn't gain traction. The timing a year ago seemed right with all the marketing about private and hybrid cloud being the future. This looked like a perfect pig in a poke play i.e. take a business that you anticipate will be disrupted, dress it up as the future and sell it for as much as possible. Nothing happened. As I said, I'm no fortune teller. When it comes to the rumours floating around VMware and EMC ... don't ask me, I haven't got a clue. 

Alas, people kept needling the point.

In frustration, I started drawing maps with endless muttering under my breath. I explained that if I was in Joe Tucci's position then I'd probably try something even more ambitious at this stage. Despite what you might think, a lot of the market is based upon sentiment about the future and if you make the story big enough then people can get carried away with it. The problem with just spinning out VMware is the market might well be suspicious. You'd need a reason, a conflict, a message - something like your own IaaS play creating conflict with the existing business. You'd need to somehow portray an image that you really want to hold onto it but alas, reluctantly you have to let it go. 

When I look around, there are a number of companies that I don't personally consider to be well positioned for the future - VMware, the storage components of EMC, Cisco, RedHat and HP. Most of these I view rather uncharitably as drowning men who just don't know it yet. However, just because I don't think they're well positioned doesn't mean I can't find a way of maximising return with a good enough story.

So, after about ten minutes I finished my scribbles and muttering. I looked up and said "Dry land is not just our destination, it is our destiny!"

In the film Waterworld, the Deacon muttered these immortal words because with a good enough story, you can get everyone rowing in the right direction without anyone asking the awkward questions. For me, that story would be around converged infrastructure.

Behind the scenes, I'd be working on a play for a merger between VMware, Cisco and the soon to be split off Enterprise component of HP, combined with an acquisition of EMC storage and EMC's stock in VMware, and possibly even an acquisition of RedHat, all funded through debt. Well, if you're going to go all out then make it as large a play as possible.

For me, this gets EMC off the hook at top dollar and the new vehicle portrays a big enough story that the market will get excited about the beast without considering that many of the components are going to have a tough time in cloud. I could spin the story of synergy between the groups, converged infrastructure, one stop shop, large enterprise customer base, efficiency, hybrid environments combining HP, Cisco and VMware capabilities and a host of other reasons under a tagline of the future. I could do this until the cows come home, which by my reckoning means the next 18 months, before the cloud battle gets too horrific. I could craft a story here that the Deacon would be proud of, the troops would get excited about and just hide the map from prying eyes.

Do I think it would be a good idea? Of course not and for two reasons. Firstly, I spent about ten minutes on it. Secondly, even that ten minutes told me it is about as daft as you can get in my book. Drowning men grabbing hold of each other doesn't solve the problem of drowning. But could I sell the story to the market and get away with it? Probably.

But then I'm not involved in any of those companies nor do I have any idea of what plans they might have or not. As far as I can tell, the stories about VMware and EMC are all rumours and I've little interest in investigating. At best, you can consider the rumours to be nothing more than a weak signal that someone, somewhere is exploring something. Who, what? ... no idea, don't care. Ask a fortune teller.

Monday, August 31, 2015

A lament to the Enterprise of yesteryear

We're being hit by disruptive innovation! 

Our industry is being commoditised! 

Our business is complex! 

We're behind the curve!

We're going to innovate!

We've created a strategy!

Hired the best!

They said they were experts!

Marketing is key!

And the future is Private!

Or Enterprise!

Or Hybrid!

But we need to re-organise!

We have the solution!

Our strategy tells us so!

If we just fix our culture!

Then this time, it'll be different!

Or I will rend thee in the gobberwarts with blurglecruncheon, see if I don't!

An ode to a small lump of enterprise I found in my portfolio one midsummer morning, a lament to the Enterprise of yesteryear. Best repeated with the true poetic wit of a Vogon constructor fleet.

We're being hit by disruptive innovation! 
Our industry is being commoditised! 
Our business is complex!
We're behind the curve!
We're going to innovate!

We've created a strategy!
Hired the best!
They said they were experts!
Marketing is key!
And the future is private! 
Or Enterprise!
Or Hybrid!

But we need to re-organise!
We have the solution!
Our strategy says so!
If we just fix our culture!
Then this time, it'll be different!
Or I will rend thee in the gobberwarts with my blurglecruncheon, see if I don't!

Anyone looking for a short cut out of this, I'm afraid I can't help you. However, I would suggest mapping your landscape, ideally going on a get fit regime and cleaning up the enterprise, then afterwards applying some thought. Adapting to the changing technological-economic environment is not a choice.

Thursday, August 27, 2015

Amazon and the last man standing

I often talk about the 61 repeatable forms of gameplay in the market and I know I'm a bit behind on doing those posts. I don't normally stray off the path but I thought I'd cover a well known game called last man standing. The reason why I want to talk about this is that there seems to be continued misunderstanding about Amazon and what's likely to happen. Now there are two possible reasons - either I'm wrong or lots of other people are.

Hence, I'll put my stall forward.

Amazon is likely to be supply constrained when it comes to AWS and EC2. What I mean by this is that it takes time, money and resources to build data centres. You can't magic them out of the air. With AWS already doubling in physical size (or close to) each year, this creates considerable pressure and if AWS were to drop the price too quickly then demand would outstrip supply (i.e. it just won't be able to build data centres fast enough). Hence Amazon would have to control pricing in order to control demand.

I know that people talk about AWS being a low margin business but I'll stick with older figures and say that Amazon is probably making a gross (not net) margin of 80%+.  Let us look at revenue and for this, I'll turn to an old model from my Canonical days (see figure 1) after which we will cover a couple of key points in time that are coming up in that model.

Figure 1 - Estimate of Forward Revenue Run Rate.

By my past reckoning then by 2014, AWS would have a forward run rate of around $8Bn, which means in 2015 it would make around $8Bn or more in revenue. Currently people are estimating around $5-6Bn, so I count that as pretty damn good to get into the right order of magnitude. However, this is not about how accurate or inaccurate I might have been. This is about the steps and what roughly will happen.
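
I haven't reproduced the model itself, but the arithmetic of the steps below is easy to sketch. In this toy version (Python) the starting run rate is the $8Bn above; the gently levelling growth multipliers are assumptions chosen to land near the figures I quote:

```python
# Toy forward run rate model. The 2014 base comes from the text above;
# the declining annual multipliers are assumptions picked to roughly
# reproduce the yearly figures discussed below.
run_rate = 8.0  # $Bn forward run rate, end of 2014
multipliers = {2015: 2.0, 2016: 1.9, 2017: 1.7, 2018: 1.4, 2019: 1.3}

for year, m in multipliers.items():
    print(f"{year}: revenue ~${run_rate:.0f}Bn+")
    run_rate *= m  # exit the year at a higher forward run rate
```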

1) In 2015, I expected AWS to clock a revenue of $8Bn+, a gross margin of 80%+, for Amazon still to be supply constrained and for a few examples of some large companies reliant on cloud (i.e. what we now call data centre zero companies)

2) In 2016, I expected AWS to clock a revenue of $16Bn+, a gross margin near to 80%, for Amazon still to be supply constrained, a very visible movement of companies towards using AWS and the market around AWS skills to heat up. I expected by the end of the year for the wheels to start coming off the whole private cloud market (which is why I've warned about this being the crunch time).

3) In 2017, I expected AWS to clock a revenue of $30 Bn+, a gross margin near to 80% and Amazon still to have to control pricing. However, by the end of the year I expected this supply tension to reduce as the growth rate would show signs of levelling. This will provide more opportunity to reduce pricing to keep physical growth to doubling. I expect AWS skills to be reaching fever pitch and the wheels to be flying off the private cloud market.

4) In 2018, I expected AWS to clock a revenue of $50Bn+. I expected gross margin (and prices) to start coming down fairly rapidly as Amazon has significantly more price freedom (i.e. is far less price constrained than is currently the case). Data centre zero companies will become prevalent and there will still be a fever pitch around AWS skills.

5) In 2019, I expected AWS prices to be rapidly dropping, the growth rates to continue levelling, the fall-out to start biting into hardware competitors, the private cloud industry to have practically vanished and the remaining laggards to be making a desperate dash into cloud.

6) By 2020, the game is not only all over (last chance saloon was back in 2012) but we start chalking up the casualties. 

This doesn't mean there won't be niches - there will be and it's in these spaces that some open source efforts will hopefully hide out for future battles. This doesn't mean that some geographic regions won't try and hold out for spurious reasons - they will and at the same time harm their own competitive industries. This doesn't even mean I think my own figures or timing will be right, remember this model is ages old. I'm no fortune teller and at best I view it as being in the right direction. However, until someone gives me a better direction then this is the one that I've stuck with and so far, it seems to be fairly close.

Oh, and the last man standing? Well, in the last few years of the model when the price is dropping then it is all about last man standing. Many competitors won't be in a position to cope with how low the prices will go. The economies of scale will start to really tell here. Many will fall and it won't be gentle and graceful like. It'll be more brick like, as in a brick fired from a howitzer pointing downwards from the top of a building.

P.S. Before someone tells me the big hardware vendors are going to make a difference in infrastructure ... please don't. It's over. It has been over for some time. Even if I had $50 Bn, I'd need to build the system, build the team and build the data centres before I launched and at any reasonable scale (even with using acquisition as a short cut) I'd be talking two years+ at lightning fast speed. I'd be walking into this market as a well funded startup against a massive behemoth who owned the ecosystem. Even those ex-hardware vendors with existing cloud efforts have too little, too late. No amount of money is going to save them here. These companies are just going through the motions of hanging on for as long as they can. There's a platform play but that's a different post.

P.P.S There will be some cloud players left - AWS will dominate followed by MSFT and then Google and a player like Alibaba. There'll be some jostling for position and geographic advantages.

Wednesday, August 26, 2015

The Open Source Cloud, start playing the long game.

Back in 2007, I gave a keynote at OSCON on the future importance of open source and open standards to create competitive utility computing markets (i.e. the cloud). We had a chance for an early land grab to make that happen in what is called Infrastructure as a Service but we lost that battle to AWS (and to a lesser extent MSFT and Google). There are numerous reasons why, far too many to go through in this post.

Just because the battle was lost, doesn't mean the war was. Yes, because of the punctuated equilibrium, we're likely to see a crunch in the 'private' cloud space and near dominance of the entire space by AWS with MSFT following. Yes, Amazon plays a very good ecosystem game and they are a tough competitor. However, in about 10-15 years in what will feel like the wilderness, we will get another opportunity, in much the same way that Linux clawed its way back against the near total domination of Microsoft. There are numerous reasons for this, again too many to go through in this post and of course, there could be many twists and turns (e.g. the somewhat unlikely open sourcing of AWS technology).

For the time being, the open source cloud world (and yes, by that I mean systems like OpenStack) needs to hunker down, to firmly entrench itself in niches (e.g. network equipment), to build up and mature and prepare for the long fight and I do mean a LONG fight. A couple of encouraging signs were @jbryce's comment at OpenStack SV 2015 on "having a reason" to build a cloud and not just because it's cool technology, along with the discussion on maturity vs adoption of technology. This was good. But afterwards some of the conversations seemed to slip into "the path to Cloud Native", "democratising IaaS", "a platform for containers" (an attempt to re-invent again but around Kubernetes), "the problem is you" (as in IT depts not adopting it), "open source is a competitive advantage" (depends upon the context) and on and on.

You need to remember that for companies who might use these services their focus should (and increasingly will) be on meeting some need with speed (i.e. quickness of delivery), agility (applying more or less resources to the problem as needed) and efficiency (being cost competitive to others). Yes, things like mobility matter from the point of buyer / supplier relationships and in some niches there are location constraints. However, no business under competition is going to last if they sacrifice speed, agility and efficiency in order to gain mobility. To survive, any open approach needs to solve these problems and deal with any issue created by Amazon's huge ecosystem advantage. There is lots of good stuff out there such as Docker and in particular Kubernetes but the strongest plays today in the open source world are around the platform with Cloud Foundry and in the operating system where Ubuntu dominates with some competition from the challenger CoreOS. 

The battle for IaaS may be lost but the war is far from over and yes, I hear that this or that paradigm shift will change the game again - oh, please don't bother. The open source world will get another chance at the infrastructure game as long as they focus on the long term. Probably the best route of attack in the long term starts with Kubernetes but that's another post.

P.S. People ask why I think CloudStack has a shot. Quite simply, the Apache Software Foundation (ASF) can play the long term game. I'm not convinced that, after the crunch, OpenStack will be in such a position. We shall see.

P.P.S. People ask why am I so against OpenStack? This might surprise you but I'm not. However, OpenStack needs to hunker down against the storm and play the long term game. I'm not convinced by its earlier examples of gameplay that it either understands this or is willing to do anything about it.

On Diffusion and Evolution

I recently saw this tweet and unfortunately, despite good intentions, there's a lot wrong with it. I've taken the main image (unknown copyright) for figure 1 and I'll go through what is wrong.

Figure 1 - Evolution mixed with diffusion

The fundamental problem with this image is it conflates diffusion with evolution. Whenever we examine an activity, practice or form of data then yes, it tends to diffuse. But, it also evolves through the diffusion of ever more mature, more complete and more certain forms of the act. Hence in the evolution of an act there may be hundreds if not thousands of diffusion curves involved. The problem you have with trying to tie diffusion curves to evolution is in trying to determine which diffusion curve you're on.

For example, let us take an activity A[x]. Let us suppose it evolves through multiple diffusing instances of the act (e.g. if A[x] was telephony then A[1], A[2], A[3] and so forth would represent ever better phones). I've added these diffusion curves into figure 2 below.

Figure 2 - Diffusion of an activity A[x]

Now each of these diffusion curves can cover different time periods and different applicable markets. Each will have a chasm i.e. in the evolution of A[x] there will be many chasms to cross and not just one. So, when examining the question of early adopters to laggards then we have to ask, which diffusion curve are we on? The laggards of A[1] are not the same as the laggards of A[5].
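
To make the point concrete, here's a small simulation (Python with numpy; every number is invented for illustration). Each instance A[1] to A[5] diffuses as its own logistic curve with its own timing and speed, so a single adoption reading at one moment tells you almost nothing about which curve you're on:

```python
import numpy as np

def logistic(t, midpoint, steepness):
    """Fraction of one instance's own applicable market adopted at time t."""
    return 1.0 / (1.0 + np.exp(-steepness * (t - midpoint)))

# Five diffusing instances of activity A[x]; midpoints and speeds are
# invented for illustration.
instances = {"A[1]": (5, 1.0), "A[2]": (12, 0.8), "A[3]": (20, 0.7),
             "A[4]": (30, 0.6), "A[5]": (40, 0.5)}

for t in (10, 25, 45):
    readings = {name: round(float(logistic(t, mid, k)), 2)
                for name, (mid, k) in instances.items()}
    print(f"t={t}: {readings}")
# Several instances co-exist at different adoption levels at any moment,
# so "50% of households have one" doesn't say which curve you're on.
```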

The normal response is to say - "well, we will measure the overall one i.e. when it becomes ubiquitous". Here you have an immediate problem because if I ask the question whether gold is a commodity (i.e. well defined, understood, standardised) then most would respond yes. But if I ask the question "Does everyone own some gold?" then most would respond no. The problem is that ubiquity is relative to a market and so you can't just say "measure its ubiquity" because you need to understand the market first. Hence in some cases, a ubiquitous market is 30% of the population owning an example of this thing (i.e. that's all it will ever get to) but in other cases a ubiquitous market is everyone in the market owning fifty examples of this thing.

So how do you determine the appropriate market? Actually, this was the trick I discovered back in 2006 to 2007. As things evolve, they become more defined and certain and the type of publications associated with the act changes. There are four basic types of publications, shown in figure 3.

Figure 3 - Publication Types.

So when something appears e.g. radio, then we first write about the wonder of radio, then how to build and construct a radio crystal set, then we move on to differences between radios until finally being dominated by guides for use. I used just over 9,000 articles to determine these four types and used this to develop a certainty axis (shown in the figure above and developed from type II and type III publications) and a bit more detail on this is provided here and here.
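
To illustrate the idea (a toy sketch; the yearly counts are invented, not from the 9,000 article dataset), you can watch the mix of publication types shift and look for the point where type IV overtakes type III - the transition discussed next:

```python
# Invented yearly publication counts for some act:
# (type II "build", type III "operation/comparison", type IV "use")
counts = {1: (40, 10, 0), 5: (25, 30, 5), 10: (10, 45, 20),
          15: (5, 30, 35), 20: (2, 15, 60)}

for year in sorted(counts):
    t2, t3, t4 = counts[year]
    if t4 > t3:
        print(f"type IV overtakes type III around year {year} - the point of ubiquity")
        break
```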

Now, the transition from Type III to Type IV in the graph above is critical because this defines the point of ubiquity in a market. If I take this as the point of ubiquity and plot back through history over both ubiquity and certainty then you get the following (figure 4).

Figure 4 - Ubiquity vs Certainty

The figure above represents a large range of different activities from telephones to TV to fridges etc. If you now overlay the different publication types (i.e. type I, II, III and IV) then you create the evolution curve (see figure 5). What drives this evolution is competition (supply and demand) and that's marked on as well.

Figure 5 - Evolution

We can now go back to our diffusion curves in figure 2 and plot them onto the evolution curve. I've illustrated this in figure 6 (nb. this particular graph is just an illustration, not based upon data).

Figure 6 - Diffusion on Evolution

So when we look at A[1] from a diffusion point of view we might have crossed the chasm and the laggards may be joining but it's very much in the early stages of evolution. We know from the publication types that despite the act reaching close to 100% adoption of its market, the market is nowhere near evolved. But at A[5] the act is very evolved and we already know from the publication types that we've reached the point of ubiquity in the market. It might not be the case that everyone has this item but this is what the ubiquitous market for this item looks like and it is now a commodity.

Now with evolution I can add all sorts of changing characteristics i.e. genesis is very different from commodity (see figure 7). So for example, I know those activities or components in the genesis phase are uncertain, rare, risky, a point of differentiation, poorly understood, chaotic, deviating from the past, a source of worth, rapidly changing etc.

Figure 7 - Properties

So, I understand what the original image and tweet were trying to convey but alas - as simple and as seductive as it sounds, it's just plain wrong. You can't just mix diffusion and evolution together in that manner. For those wanting to use the diagrams above, they all date from 2007 onwards and are creative commons licensed. The original work (i.e. data collection and testing) was done in 2006 & 2007 and the concepts date back much earlier in case you're interested (e.g. I was using the "pattern" of evolution back in '04/'05 though at that time it was just a noticed pattern rather than something with substance).

Monday, August 24, 2015

On the common fallacy of hypothesis driven business.

TL;DR Look before you leap.

There's a lot wrong with the world of software engineering but then again, there's always been a lot wrong with it. Much of this can be traced back to the one size fits all mentality that has pervaded our industry - be agile, be lean, be six sigma, outsource.

However, there is a universal one size fits all which actually works. It's called look before you leap or in other words, observe the environment before you decide to take any action. In the case of OODA loops there are even two whole steps - orientate and decide - before you get from observe to the action bit. Alas, in many companies action seems to be the default. Our strategy is delivery! Delivering what exactly? Who cares, deliver!

Your organisation, your line of business and even a discrete system consist of many components. All of those components are evolving through supply and demand competition from the uncharted space of the uncertain, unknown, chaotic, emerging, changing and rare to become industrialised over time. The industrialised have polar opposite characteristics to the uncharted, something we've known about since Salaman & Storey's Innovation Paradox of 2002. If you want to accelerate the speed at which you operate and create new things then you have to break down complex systems into stable components and treat those components appropriately.

So, how do you manage this? Well, since most companies fail to observe the environment then they will resort to the only thing possible which is backward causality or meme copying - "Everyone else is doing this thing, so lets adopt DevOps, Agile, Lean, Digital, Cloud, APIs, Ecosystems, Open Source, Microservices" and on and on. Each of these approaches has certain benefits if used in the right context but in most cases, the context is missing. Furthermore in today's software world various claims are given to being more scientific, to being driven by hypothesis but many of these ideas are misguided without context.

To understand why, we need to explore the game of chess. A key part of the game of chess is understanding the board i.e. where the pieces are (position) and where they can move to (movement). You don't actually have to physically see the board if you're good enough. You can create a mental model of the board and play the game in your mind. But the board is there, it's an essential element of the game. Though each game may be different, you can learn from each game and use these lessons in future games. This is because you can understand the context at hand (the position of pieces and where they can move) and can learn consequences from the actions you take. You can apply such lessons to future contexts. This is in fact how we learn how to play chess and why practice is so important.

Now, imagine you have no concept of the board but instead all you see is a range of computer characters on the screen (see figure 1). Yes, you can play the game by pressing the characters but you have no understanding of position or movement. Yes, over time you can grab the sequences of thousands of games and look for secrets of success in all those presses e.g. the sequence pawn, pawn, queen, queen tends to win. You will by nature tend to copy other successful players (who also have no context) and in a world dominated by such chess play, memes will prevail - expect books on the "power of the rook". Action (i.e. pressing the key) will dominate, there is little to observe other than the sequence of actions (previous presses) and all these players exist in a low level situational awareness environment.

Figure 1 - Chess with low levels of situational awareness.

If you ever come up against a player who can see the context (i.e. the board) then two things will happen. First, you will lose rapidly despite having access to thousands of games containing millions of data points from sequences of action. Second, you'll be bewildered. You'll start to grab for the spurious. Naturally, you'll try and copy their moves (you'll lose), you'll look for all sorts of weird and wonderful connections such as the speed at which they pressed the button (you'll lose), whether they had a good lunch or not (you'll lose) and whether they're a happy person or not (you'll lose). It's like the early days of astronomy where without any understanding we collected all sorts of things, such as whether it was a windy day. Alas you will continue to be utterly outplayed because the opponent has much higher levels of situational awareness and hence they understand the context better than you. To make matters worse, with every game your opponent will actually discover new patterns, new ways of defeating you and they will get better with time. I've tried to show an example of low vs high situational awareness in figure 2.

Figure 2 - low vs high situational awareness.

The player who understands the board will be absorbed by first observing the environment, understanding it (i.e. orientate and decide) and then making a move (i.e. acting). Terms like position and movement will matter in their strategy. Their strategy (the why of action) will be based upon why here over there i.e. why this move over that move. 

Most businesses exist in the low level situational awareness environment described by figure 1. They have no context, they are rife with meme copying, magic sequences and are dominated by action. We already know that this has an impact, not only from individual examples but from examination of a range of companies. It turns out that high levels of situational awareness appear to be correlated with positive market cap changes over a seven year period (see figure 3).

Figure 3 - Situational Awareness and Market Cap changes.

So what has this got to do with hypothesis driven business? Hypothesis without context is often akin to saying "If we press the pawn button will it give us success?"

The answer to that question is it might in that context (which you're unaware of) but as the game changes with every move then there is nothing of real value to learn. Without understanding context you cannot learn patterns of play to use from one game to another. To give an example of this, let us examine The Scenario as described in an earlier post. This scenario has all the information you require to create a map and to start learning from previous conflicts and repeating patterns. However, most companies have no idea how to map and hence have no mechanism of past learning through context. 

It is certainly possible without context to create multiple hypotheses for the scenario e.g. expand into Brazil or maybe attempt to differentiate with a new feature? These can be created and tested. Some may well show a short term benefit. However, if you take the same scenario and map it - as done in the Analysis post - then a very different picture appears. Past and repeatable patterns such as co-evolution, ILC & punctuated equilibriums can be applied and it shows the company is in a pretty miserable state. Whilst a hypothesis around differentiating with a new feature might show some short term benefit and be claimed as successful, we already know it's doomed to fail. The hypothesis therefore appears to be right (short term) but before acting, from the context, we already know it's wrong and potentially fatal (long term). It's the equivalent of knowing that if you move the Queen you might capture a pawn (i.e. success from a hypothesis of pressing the queen button) but at the same time you expose the King to checkmate (which you'd see from the context, the board).

The mapping technique described is about the equivalent of a Babylonian clay tablet but it's still better than having no map as it provides some idea of context covering position (relative to a user need) and movement (i.e. evolution). There will be better mapping techniques created over time but at this moment, this is the best we have. Many of us over the last decade have developed a sophisticated enough mental model of the environment, principles and repeatable patterns that we can just apply them to a scenario without mapping it first. In much the same way, if you get really good at playing chess, you don't even have to look at the board. However, most have no understanding of the board, of position, of movement, of context or the numerous repeatable patterns (a subset of these, 61 patterns, is provided below in figure 4).

Figure 4 - An example list of repeatable patterns / gameplays

Without understanding context, most have no mechanism of anticipation or learning and cannot even use weak signals to refine this. In such cases, you can make an argument that hypothesis driven business is better than nothing at all but it's a very poor substitute for understanding the context. Even if your hypothesis appears to be right, it can be completely the wrong thing to do.

This is the fallacy of hypothesis driven business. Without a mechanism of understanding context then any hypothesis is unlikely to be repeatable as the context will likely change. Yes, you can try and claim it is more scientific (hey, we've pinched a word like hypothesis and we're using data!) but it's the equivalent of saying "If I have a good lunch every day for a month then the heavenly bodies will move!" ... I had a hypothesis, I've eaten well for a month, look those stars moved ... success! Oh dear, oh dear, oh dear. Fortunately, astronomers also built maps.

This doesn't mean there is no role for hypothesis, of course there is! For example it is extremely useful for exploring the uncharted spaces where you have to experiment or for the testing of repeatable patterns or even for refinements such as identifying user needs. But understand the context first, understand where you can attack and then develop your hypothesis. The context is your route to continued learning.

Observe (i.e. look at the context) then Orientate & Decide (i.e. apply thought to that context) then Act (i.e. do stuff in that context). 

Saturday, August 15, 2015

The Analysis

Ok, this post provides a quick analysis of the Scenario. As a guide, this sort of analysis should take about 30 minutes. To get the most out of this exercise, read the scenario post, write your plan and then read this analysis. In a final post, we will go through gameplay.

The Analysis

First, let's start by creating a basic map. Our users are data centre operators; they have a need for a mechanism of improving Data Centre efficiency in electricity consumption; and we have our software product, which is based upon best practice use of an expensive sensor that we purchase and a custom set of data. This is shown in figure 1.

Figure 1 - Initial Map

In this exercise, I'm going to slowly build up the map. Normally, I would just dive into the end state and start the discussion but that'll be like one of those "it's therefore obvious that" exercises in maths which often confound others.

First of all, I'm going to add some bits I know e.g. we anticipate an opportunity to sell into Brazil (I'll mark as a red dotted line) and we have a US software house in our market selling a more commodity version as a utility service (I'll mark as a solid red line as it's something that is definitely happening). From the discussion with the head of sales (who was rather dismissive of the US effort) and the strategy, I already know we're going to have inertia to any change, so I may as well add that in (a black bar).

Figure 2 - Brazil and US.

However, we also know that the US version provides a public API and has a development community building on top of it. The US company is also harvesting this, probably through an ILC like model. The consequence of this is that the US company will start to exhibit higher rates of apparent innovation, customer focus and efficiency in proportion to the size of their ecosystem. Those companies building on top of their API act as a constant source of differential for them. I've added that in the figure below.

Figure 3 - ILC play.

Given the US company's growth last year and that a shift from product to utility is often associated with a punctuated equilibrium, I can now take the figures and put together a P&L based upon some reasonable assumptions. Of course, we're missing a lot of data here, in particular the development cost of the software etc. However, we'll lump that into SG&A.

Figure 4 - P&L and Market.

Ok, so what I now know is that we seem to be a high gross margin company (i.e. a juicy target) and a good chunk of our revenue is repeating software licenses. If this is a punctuated equilibrium (which seems likely) then I expect to see a crunch time in 2020 between us and our US competitor as we will both have around 50% MaSh. Unfortunately, when that happens they're likely to have higher rates of efficiency, apparent innovation and customer focus due to their ecosystem play. Furthermore, I'm going to have inertia to any change probably due to existing practices, business and salespeople compensation.
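
The crunch date is simple arithmetic. A toy sketch (Python), assuming the utility player's roughly 3% share doubles each year - the doubling being the punctuated equilibrium assumption:

```python
# Punctuated equilibrium arithmetic: the US utility player holds ~3%
# of the market in 2015 and its share is assumed to double each year.
share, year = 0.03, 2015
while share < 0.5:
    year += 1
    share = min(share * 2, 0.5)  # capped at the 50/50 crunch point
    print(f"{year}: utility at ~{share:.0%} of the market")
print(f"crunch (both at ~50% MaSh) around {year}")
```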

If I do make a utility play then I'm going to need to gain the capability to do this, raise the capital needed to build a utility and launch fast. Let us suppose this takes two years. Then I'll be entering a market where my competitor has 8 years of experience with a large & growing ecosystem and 100% MaSh of the utility business (worth £30M to £60M). I'll have no ecosystem, no MaSh and a salesforce probably fighting me and pointing out how our existing business is worth £144M to £173M. In the worst case, if I haven't explained the play properly then they could even be spreading FUD about my own utility service and trying to get customers to stick with the product.

Even my own board could well push against this move and the talk will be of cannibalisation or past success.  Alas, I know our existing business is a dead man walking. Post 2020 things are going to be grim and by that I mean grim for us. Despite the competitor only being 3% of the market, I've already left it late to play this game. I've got some explaining to do to get people on board.

Unfortunately there is more bad news. Let us look at the other changes in the market, such as the shift of sensors.

Figure 5 - Change of sensors.

Now, we've already seen signs of inertia in our organisation to using these sensors. As the product manager says, they're not as good as the old ones. However, we also know that as an act becomes a commodity, practices co-evolve and new methods of working emerge. Hence the future systems probably won't have one sensor in the data centre but dozens of cheap ones scattered around. Unfortunately, our software encodes best practice around the expensive product based sensor and if the practice evolves then our software is basically legacy. I've added this to the diagram below, however rather than using a solid red line (something we know is happening), in this case I've added a dotted line (something we anticipate or an opportunity).

Figure 6 - Co-evolution

So, our business is being driven to a utility and we don't have much time. Even if we get started now then by the time we launch we'll be up against an established player with a growing ecosystem. Our own people will fight this change but even worse our entire system will become legacy as commodity sensors lead to co-evolved practice and new software systems designed around this. So along with my head of sales and marketing fighting me, I'm pretty sure I can add the product manager and a good chunk of an engineering team that has built skills around the old best practice. 

Now, if you're used to mapping then you'll have spotted both the punctuated equilibrium and the danger of co-evolution. As a rule of thumb, these forms of co-evolution can take 10-15 years to really bite (unless some company is deliberately accelerating the process). Hence, even if we somehow survive our current fight in the next five years, we're going to be walking smack bang into another one five years later.

Of course, at this point I need to start to consider the other players on the board e.g. the US competitor. They're already providing a utility play, so we can assume that they have some engineering talent in this space. This sort of capability means they're likely to be pre-disposed to building and using more commodity components. The chances are, they're already thinking about the commodity sensors and building a system to exploit this. That could be a real headache. I could spend a couple of years getting ready to launch a cloud based service based upon the expensive product sensors and suddenly find I'm not only behind the game but the competitor has pulled the rug from under me by launching a service based upon commodity sensors. I'll be in no man's land.

The other thing I need to look at is that conversion data issue. I know it's evolved to a product but it could easily be pushed to more of a commodity or provided through some API, with someone playing some form of open data ecosystem game on me. I've shown this in the following diagram.

Figure 7 - Data Ecosystem

I've now got a reasonable picture of the landscape and something I can discuss with others. Before I do, let us check the proposed "Growth and sustainability in the data centre business" strategy.

First up was expansion into Brazil. This will require investment and marketing but unfortunately it doesn't deal with the issues in our existing market. At worst, we could spend a lot of cash laying the groundwork for the US company to chew up Brazil after they've finished chewing us up. Still, we need to consider expanding but if we do so in our current form then we're likely to lose.

Second was building a digital service including a cloud based provision for our software system that enables aggregated reporting and continues the licensing model. Ok, one of the killer components of the US system is the API and the ecosystem it has built around this. We could easily invest a significant sum and a few years building a cloud based service, enter the market and be outstripped by the incumbent (the US company) because of their ecosystem, and even worse, find our entire model is now legacy (because of co-evolved practice). I know it's got the words "digital" and "cloud" in the strategy but as it currently stands, this seems to be a surefire way to lose.

Thirdly, the strategy called for investment in sales and advertising. Well, we've plenty of cash but promoting a product model which as it stands is heading for the cliff and may become entirely irrelevant seems a great way of losing cash.

Lastly, we're to look into the use of the data conversion product. Ok, this one doesn't seem so bad but maybe we should drive that market to more of a commodity? Provide our own Data API? 

On top of all this, we have lots of inertia to deal with. Now that we understand the landscape a bit better then we can craft a strategy which might actually work. Of course, I'll cover that in another post. However, in the meantime I'd like you to go and look at the scenario, look at your original plan and work out how you might modify it.

Happy Hunting.