Wednesday, April 29, 2015

AWS and Gross Margin.

AWS has now reported and there is a lot of noise over margins including ample confusion over operating margin vs gross margin.

A couple of things to begin with. Back in my Canonical days I plotted a forward run rate for revenue of AWS. This was based upon lots of horrendous assumptions and, to be honest, I'm more interested in the direction of travel than the actual figures. A copy of the output of that model is provided in figure 1.

Figure 1 - forward rate.


Now, what the model says is that after 2014, the revenue for AWS should exceed $8Bn in each subsequent year. After 2015, the revenue for AWS should exceed $16Bn in each subsequent year and so forth. Don't ask me what the actual revenue will be - I don't care. I'm more interested in the speed of change of the punctuated equilibrium that is occurring.
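To make that concrete, here's a minimal sketch (my own illustration, not the original spreadsheet) of a doubling forward run rate used as a floor to test against rather than a point forecast. The base year and base run rate are assumptions for illustration only.

```python
# Minimal sketch of a doubling forward run rate used as a floor test.
# Base year and base run rate are illustrative assumptions.

def run_rate_floor(year, base_year=2014, base_run_rate=8e9):
    """Forward run rate ($ p.a.) at the end of `year`, doubling annually."""
    return base_run_rate * 2 ** (year - base_year)

def exceeds_floor(reported_annual_revenue, year):
    """Hypothesis test: does revenue in `year` exceed the prior year-end run rate?"""
    return reported_annual_revenue > run_rate_floor(year - 1)

print(run_rate_floor(2014))        # 8000000000.0 -> every year after 2014 should exceed $8Bn
print(run_rate_floor(2015))        # 16000000000.0 -> every year after 2015 should exceed $16Bn
print(exceeds_floor(9.0e9, 2015))  # True if 2015 revenue comes in above the $8Bn floor
```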

A couple of things to note. Compute is price elastic (it has been for 30 odd years). What this means is that as prices drop then volume increases. Today, I can buy a million times more compute than I could in the 1980s for $1,000. This doesn't mean my IT budget has reduced a million fold in that time, quite the opposite. What has happened is I've ended up doing vastly more stuff.

This is the thing about dropping prices in a price elastic market: demand goes up. But if you're already doubling in physical size (or, as AMZN has stated, increasing capacity by 90% per year) due to a punctuated equilibrium and a shift from a product model to utility services, then you have to be very careful of constraints. For infrastructure there is a constraint - the time, materials and land required to build a data centre. What this means is that it is highly likely that Amazon has to carefully manage price reductions. It would be easy to drop prices and cause an increase in demand beyond Amazon's ability to supply. This 'weakness' was the one I told HP / Dell & IBM to exploit back in 2008 in order to fragment the market. They didn't - silly sods.

However, over time the market will level out to a more manageable pace of change i.e. the ravages of the punctuated equilibrium will have passed and we're down to good old price elasticity and operational efficiency. It is really useful therefore to get an idea of how far prices can drop.

The reason for this is rather simple. Cloud is not about saving money - never was. It's about doing more stuff with exactly the same amount of money. That can cause a real headache in competition. For example, let us say your company has an annual revenue of $10 Bn and spends around 1% of its revenue on infrastructure, platforms and related technology - say $100M p.a.

Now, what matters is the efficiency delta between your provision and utility services like AWS. Many people claim they can be price equivalent to AWS for infrastructure but often I find that the majority of the costs (e.g. power, air conditioning & other services, building, cost of money, maintenance, spare capacity) are discounted by claiming they belong to another budget or are simply ignored (in the case of capacity cost). This is why I tell companies that they really need to set up their IT services as a separate company and force it to run a P&L. Hardware and software costs usually only account for 20%-30% of the actual cost of the services 'sold'.
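As a purely hypothetical illustration of why running IT as a P&L matters (every number below is invented to show the shape of the problem, not taken from any real provider), add the commonly 'forgotten' costs back in and see what fraction hardware and software really are:

```python
# Hypothetical annual cost breakdown for an internal 'private cloud' service.
# All figures are illustrative assumptions.

annual_costs = {
    "hardware & software":           2_500_000,
    "power & cooling":               1_200_000,
    "building, space & facilities":  1_400_000,
    "staffing & maintenance":        2_800_000,
    "cost of money (capital)":         900_000,
    "spare capacity held for peaks": 1_100_000,
}

total = sum(annual_costs.values())
hw_sw_share = annual_costs["hardware & software"] / total

print(f"Total cost of service: ${total:,}")
print(f"Hardware & software share: {hw_sw_share:.0%}")  # roughly 25% in this example
```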

Oh, as a hint if you're a CEO / CFO and your CIO says they're building a private cloud comparable to AWS then the first question you should ask when looking at the cost comparison is "What % of the cost is power?" If they bluster or say it's covered elsewhere then you're likely to be building a dud. Start digging into it.

The other problem is that people compare to AWS prices today and ignore future pricing. The problem here is that if AWS has a high gross margin then, as the constraints become more manageable, prices will drop to compensate and increase demand. When you look at the problem through a lens of future pricing and actual cost then in some cases you can easily reach a 20x differential.

But what's the big deal? If your competitor reduces their infrastructure, platform and related technology costs from $100M to $5M, that's only a $95M saving, while what is at stake is the whole $10Bn of revenue. Sounds like a risk not worth taking? Wrong.

Your competitor won't reduce their cost through efficiency, they'll do more stuff. So, they'll spend $100M p.a. but do vastly more with it. For you to keep up using an "old" and inefficient model, you'll need to be spending $2Bn p.a. That's not going to happen. What is going to happen instead is your competitor will be able to differentiate and provide a wealth of new services faster and more cheaply than you in the market until you are forced to adapt. But by then you'll have lost market share and the damage will be done - not least of all because marketing & biz will be at the throats of IT more than ever. You have no choice about cloud unless you can somehow get the actual costs down to match that future pricing. Very few have the scale and capability to do this.

So, how low can that future pricing go? Looking at the AWS report, some are saying they only make a 17% margin. First of all, that's operating margin, which covers all operating expense (i.e. all costs bar tax and interest). This will include, unless US reporting rules are somewhat different to what I remember :-

1) cost of providing Amazon's own estate.
2) cost / capital leases / staffing costs / depreciation for any future build - NB given AMZN is doubling in capacity each year, this will be significant.
3) SG&A costs which tend to be high when building up a business.
4) development costs for introduction of new industrialised services.

Many of these operating costs are likely to reduce as a percentage as we pass through this punctuated equilibrium (i.e. as we move towards using utility services as the norm). The speed of building new data centres and investment in future capacity will become more manageable (controlled by price elasticity alone). The amount spent on sales and marketing to persuade people of the benefit of cloud will reduce (we will just be using it) etc.

To give an idea of what the potential future pricing might be, you need to look at gross margin i.e. revenue minus cost of goods sold. However, AWS doesn't give you those figures (and for good reasons). Furthermore AWS is made up of many different services - compute, storage etc - and the gross margin is likely to be very different on each of those.

Now, if you simply look at the revenue changes then AWS accounts for 37% of the growth of AMZN in 1Q. By taking the operating expense items covering technology, fulfilment, marketing and SG&A, and making the awful assumption that all areas of the business are equal (likely to be a huge underestimation), you get a gross margin figure of around 50% for AWS.
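The shape of that back-of-the-envelope calculation, with placeholder figures standing in for the reported numbers (do not treat any of these as Amazon's actuals), is roughly:

```python
# Back-of-the-envelope gross margin estimate for a segment.
# Every figure is a placeholder assumption, not a reported number.

segment_revenue = 1.57e9    # assumed quarterly revenue for the segment
group_revenue   = 22.7e9    # assumed total quarterly revenue for the group
cogs_like_opex  = 11.35e9   # assumed group-wide cost-of-sales-like expense items

# The awful simplifying assumption from the text: spread costs evenly by revenue share.
segment_share = segment_revenue / group_revenue
segment_cogs  = cogs_like_opex * segment_share
gross_margin  = (segment_revenue - segment_cogs) / segment_revenue

print(f"Implied gross margin: {gross_margin:.0%}")  # ~50% with these placeholder figures
```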

You could get a more accurate picture by profiling the lines of business based upon past reports etc but I can't be bothered to spend more than ten minutes on this as it's not my area of interest. However, I don't think it's unreasonable to expect AWS gross margins to be north of 60% based upon this and experience. This matters because it gives you an idea of how much future pricing cuts could be and that's not even factoring in efficiency in supply, Moore's law etc.

If you're looking at the AWS figures and thinking that a 17% operating margin is high but there isn't much scope for price cuts, then you're in for a shock. Consider yourself warned and put some effort into actually analysing the figures.

NB. I retired from Cloud back in 2010. Don't ask me to put any effort into detailing this more. I have bigger fish to fry and have close to zero interest in this subject. I only put this up because I keep seeing elementary mistakes being made. This stuff doesn't even cover the enormous ecosystem advantage that Amazon creates or the basic benefits of componentisation. Don't underestimate those either - you will get spanked if you're trying to compete without understanding this stuff.

Monday, April 27, 2015

How I will vote

Trying to decide who to vote for has been a tortuous exercise. I'm not comfortable with the potential SNP pact / deal by Labour, not because I object to Nicola Sturgeon or much of their manifesto (in places it is very positive) but because of the more independence-minded members of that party.

I'm not happy with the past performance of the Liberal Democrats and I can't see me voting in support of their manifesto. The Greens, well, some of the manifesto is good but in other aspects it seems like wishful thinking and lacking in reality. I am uncomfortable with the Conservative party manifesto not simply because of my ideological opposition but because of right to buy and past policies such as the bedroom tax and reduction in higher tax rates.

What a pickle and not a good one as in spread across a slice of stinking bishop on top of good old toasted spelt bread.

Trying to make a choice has been hard. The Labour party manifesto is in my opinion the better of all the manifestos I've seen but I have doubts whether Labour is ready yet. I'm sure Miliband will become a fine statesman (as per Foot) however it feels there's too much of Blair's legacy left in the party. However a weak Labour minority potentially in hock to the SNP or the SNP + Liberals does not fill me with enthusiasm. Minority Governments without a coalition (as per Con + Lib) don't exactly have a great record and even with the most recent coalition then the Liberals were too weak. My preference is strongly towards a Lab + Con coalition but no-one wants to consider this.

But it's time to choose and I now know the path by which I'll vote. My vote will be decided in the last week, based upon what events happen.

1) If Labour offers to or agrees to form a coalition with the Conservatives then I'll vote Labour because it'll be clear that the party is willing to put national interests above political. A Lab + Con coalition would be fine.

2) If the Conservatives offer to form a coalition with Labour but Labour rejects this then I'll vote Conservative principally because despite my ideological opposition I value national interests over political. I'd hope to see a Con + Lib coalition given no Lab + Con coalition remains possible. I'd hope we avoid the horror of UKIP being involved or a Con majority.

3) If neither party offers then I'll vote on the basis of manifesto and vote Labour. I'd prefer a Lab + Lib coalition but if we end up with Lab + SNP then I'll take comfort in that it could have been worse e.g. UKIP anything. Don't get me wrong, I fully understand what Lab + SNP will mean. As Gordon Brown said their mission was not to “deliver social justice” but “deliver chaos and constant crisis” and a second referendum. However, if the break up of the Union is the way we need to go then so be it. My biggest concern is that we end up with an unfair deal due to the weak negotiating position of Labour resulting in the rest of the Union being lumbered with the debts etc.

Well, at least I'm now decided on how I'm going to vote. My actual choice will depend upon what happens in the last week.

--- 8th May 2015

Well, we got a Conservative majority. The upside is we should see continued reform in IT, focus on R&D, removal of waste in Government, the focus on the economy and the good parts of the manifesto. The downside is the continuation of policies like the bedroom tax, expansion of right to buy and a probable lean to more libertarian views. I cannot emphasise enough how important Government is for the competitive state of the nation nor the importance of social cohesion and providing a safety net for the most vulnerable. I would have preferred a Con-Lab coalition because the more centrist elements (that dominate) could have ignored the extremes. That was never likely to happen but I could hope.

Well, in the next five years we will discover the true character of Compassionate Conservatism. I hope that the ideas of the Big Society and Compassion are genuinely balanced with our need to grow the economy, reduce waste and to be competitive. The bedroom tax and the in effect "cleansing" of London of the poor are not encouraging signs.

Most of all, I'm concerned about long term competition and the danger of the EU referendum. There are numerous self-interest groups that would find it advantageous for Europe to be weaker. This is firmly not in the interests of the citizens of Europe. However, with enough money it should be possible to persuade the citizens of the UK to vote for that which is not in their long term interest. This, along with social cohesion, concerns me.

Political and Management Dogma in IT

When I helped write the Better For Less paper (with Liam Maxwell, Mark Thompson, Jerry Fishenden and others) there was no political dogma associated with the paper just a desire to overcome past management dogma (the outsourcing of everything, the loss of engineering capability within Government and the waste associated with IT).

For those who don't know me, I'm Old Labour. My economic viewpoint is that of the middle ground of Keynes, Hayek and Adam Smith rather than the extremes such as Friedman or Marx. I view the market as a necessary tool for competition supported by a strong Government. The market fails frequently, inertia can be rampant in organisations and Governments are essential for future competition. I take the pragmatic road of Deng Xiaoping - "It doesn't matter if the cat is black or white, as long as it catches mice". Competition is key from a national stance.

It's for these reasons that much of the paper was about reducing waste, investment in capabilities, growing the intelligent customer, understanding the landscape we exist within and exploiting this to our favour. It's why I teach people within Government how to map, how to use multiple methods and how to strengthen our position.

It's also why I'm strongly in support of the concept of Government as a Platform, not just GDS providing services to others but also Departments providing utility services to other Departments. I see no reason why if DWP (Dept of Work and Pensions) is good at fraud detection that such capabilities cannot be provided to other Departments. I also see no reason why such services can't be provided through G-Cloud and even to the wider public.

The goal of all of this is not just improving Government efficiency but enhancing the commercial market. As Sally Howes, the NAO's executive leader, once said “the government, Parliament and my own organisation, the NAO, were very aware of how the old fashioned world of long, complex IT projects limited value for money”. The market can be extremely ineffective, I see this in the commercial world all the time - endless duplication, bias, inefficiencies, poor strategic play and oodles of inertia. If you're looking for the largest centrally planned bureaucracies with endless structures of command and control then often the best place to find them is within large technology vendors. 

There are exceptions and it's those exceptions - the use of cell based structures, transparency, a focus on user needs, improving situational awareness, creation and exploitation of ecosystems, provision of open source and open data - that Government is learning from and often leading. Don't for one second think that the market is the bastion of forward thinking - it's not. Most companies aren't like this. Many industries strive to protect existing positions without adapting. In many cases it's Government that forces the market to adapt or creates new breakthroughs in technology and practices that markets then apply.

Effective competition on a national level seems to require that balance of market with Government. The question is always how much, but that's a debate on finer details between the schools of Hayek, Smith and Keynes. This is the social capitalist model behind China's meteoric rise, despite the assertions of "they're just copying", "cheap labour" and all the other gross simplifications and downright falsehoods.

However, I have noticed a disturbing sign in UK politics. It started with this whole BDO paper which espoused a return to the more purely "market" approaches that had failed UK Government so badly in the past. As someone who specialises in competition, I find the idea that UK Government should abandon its digital transformation and hand it back to the market to be one of the most misguided arguments that I've read in a long time.

Reading around the subject, I did discover there are often worries that UK Gov is just copying the latest memes.  Don't get me wrong, endless meme copying is rife within the commercial world. The vast majority of companies have little to no situational awareness and most strategy is based upon copying others. When UK Government announced its need for Chief Digital Officers (and there are reasons for this) then before you knew it the private sector was falling over itself to appoint Chief Digital Officers for reasons that no-one was quite sure of other than everyone else was getting one.

Even a cursory examination of the changes so far in Government would show efficiencies and improvements in both GDS and the Departments. The level of strategic play that I've seen in some Departments has outstripped much of the private world. Government is not following but leading in many cases. Of course, there will always be problem projects but even for those we do not yet know the cause of the failure. 

There is no clear reason to abandon the current course except for one - political dogma - a belief that somehow the private sector knows best, that small (rather than efficient) Government is good. It's certainly an idea that runs counter to the practice of the last five years - greater investment in staff, use of transparency, more visible accountability, intense focus on user needs, use of SMEs, use of open source, use of open data and greater challenge to spending.

Yes, Government has moved away from its dependency on the private market and so far, it has shown significant benefits (except to those vendors and their lobbyists).  I can find little evidence to support this idea of doing a u-turn and returning more fully to the private market (with the command and control bureaucracies of large technology vendors) and instead lots of evidence that we should keep on with our current course.

Sunday, April 26, 2015

Experiment or Plan?

Whenever I examine a project, system or line of business, the first step that I normally take is to map it. Mapping is an easy process; with experience, creating a basic map should take no more than a couple of hours. To map, you start with user needs, then determine the components required to meet those needs and map them according to how evolved they are (see figure 1).

Figure 1 - A Map from HS2


Now, there are lots of reasons for mapping but in this post I'd like to focus on the question of experimentation versus planning. If you have a map, then the uncharted space (see the diagram above from HS2 - high speed rail) is where you experiment and the industrialised space is where you plan. To make this clear, I've marked this on figure 2.

Figure 2 - Where to Plan, where to Experiment


A couple of things to note.

1) The web site is marked as commodity but uses Agile development - why? The reality is the web site is a mass of components, some of which are commodity plus others (such as content / structure / data) which are novel. Hence this particular point was broken down into its own map. Whenever you deal with high level concepts there is a loss of granularity - no different from an atlas losing detail on roads / buildings etc. Maps (both geographical and this type of Wardley map) are imperfect representations of what is there.

2) All the components are evolving. What you started by experimenting with will over time become something you plan.
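For those who prefer to see the shape of this as data rather than a picture, here's a minimal sketch of how a map's components might be captured. The component names, dependencies and evolution scores are made up for illustration and are not taken from the HS2 map.

```python
# Minimal sketch of a map's components as data.
# Evolution runs 0..1 (genesis -> commodity); all values are illustrative.

components = {
    "user need: passenger journey": {"evolution": 0.80, "depends_on": ["web site"]},
    "web site":                     {"evolution": 0.75, "depends_on": ["content & structure", "compute"]},
    "content & structure":          {"evolution": 0.20, "depends_on": []},
    "compute":                      {"evolution": 0.95, "depends_on": []},
}

# The uncharted space is where you experiment, the industrialised space is where you plan.
uncharted      = [name for name, c in components.items() if c["evolution"] < 0.33]
industrialised = [name for name, c in components.items() if c["evolution"] > 0.66]

print("experiment:", uncharted)
print("plan:", industrialised)
```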

In general, as a rough guide, when you have a map, the following methods and purchasing techniques are applicable.

Figure 3 - Methods and Purchasing techniques.


WORDS OF WARNING

Those who are used to mapping (and that varies from Silicon Valley startups to large commercial companies to huge Government departments) don't need to be told that one size fits all methods don't work. Many others make the same mistakes over and over again.

There are entire industries of book sellers and consultants out there trying to flog you a one size fits all method such as Agile or Lean or Six Sigma. It's misguided. You need to use all three approaches with any large complex system. Each can point to examples of how their technique beat the others, and the opposing camps can do the same. All three techniques are actually useful.

The first step in an OODA loop is OBSERVE and that's the bit mapping tackles, it provides a communication mechanism to describe an environment between multiple groups. It doesn't tell you what to do or how to do it which is why once you have a map, you need to apply THOUGHT and orientate around it.

Agile & in-house development tends to be best for the uncharted space, the novel and new, the areas where you need to experiment because you lack information and things will change rapidly.

Lean & off the shelf tends to work best when you have some information, you need to remove waste and focus on delivery of a product (i.e. the transitional space between the uncharted and industrialised).

Six Sigma, outsourcing and use of utility providers tend to work best for the common and well understood, when you have reasonable information and need to focus on removing defects and operating at scale i.e. the industrialised space.
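As a crude sketch of that rough guide (the thresholds are arbitrary illustrations and no substitute for thought):

```python
# Crude illustration of the rough guide above: pick a method and purchasing
# approach by how evolved a component is. Thresholds are arbitrary assumptions.

def approach(evolution):
    """Return (method, purchasing) for a component's evolution score (0..1)."""
    if evolution < 0.33:   # uncharted: novel, uncertain, rapidly changing
        return ("agile, in-house development", "build it yourself")
    if evolution < 0.66:   # transitional: some information, focus on removing waste
        return ("lean", "off-the-shelf products")
    # industrialised: well understood, focus on defects and operating at scale
    return ("six sigma", "outsource / utility services")

for score in (0.1, 0.5, 0.9):
    print(score, approach(score))
```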

Any significant project will have components at each stage of evolution and require all three methods. Each method is good at what it does but apply a one size fits all method or purchasing technique and you won't run as effectively as you could.  Before you say - "aren't you trying to flog me mapping?" - the entire technique is Creative Commons Share Alike. There is nothing to flog. If I'm trying to persuade you of anything then it is "look before acting" and "use all the methods according to their strengths".

The only people who can map a business are the people who are running, operating and working in the business i.e. YOU. There is no need for consultants as you only need yourself and others within the company. Those who map have already discovered this. I need to emphasise this - YOU have all the power you need to map, to learn common economic patterns and to learn how to play strategic games in business.

Of course, you'll have to go and learn those individual project methods and purchasing techniques such as agile, lean and six sigma, and for that you'll probably end up requiring consultants and books, but then I'd consider that investment in training in specific methods, not a ONE SIZE FITS ALL to be applied across the organisation.

So when it comes to experiment or plan and which should you do, then the answer is BOTH for anything of significant scale (i.e. beyond a pet project).

Cue the endless noise from one size fits all consultants claiming their method Agile, Lean or Six Sigma is applicable everywhere and the other methods aren't.

Friday, April 24, 2015

Gov should start handing over large wads of cash to us, preferably in a truck

The latest piece of craft from Kat Hall on how "GDS Monopoly leaves UK.gov at risk of IT cock-ups" was interesting, to say the least. I'm sure Kat Hall is under pressure to write articles, I've seen the Register help create some very fine tech journalists (see @mappingbabel) and I have no doubt Kat will follow the same path. However, this one instance was not her finest hour.

I’ll leave it at that though because my real interest lies with the report and not a debate over "what is journalism". Before promoting a report, I tend to ask some questions - why was it written, why now, who wrote it, what was it based upon and how will it help? I do this because a lot of stuff in the ether is just PR / lobbying junk dressed up as helpful advice.

At this moment in time (due to the election), Civil Servants are governed by the Purdah convention which limits their ability to respond. What this means is that any old lobbying firm can publish any old tat knowing they’re unlikely to get a response. Launching an attack on a department at this time is about as cowardly as you can get. These people are public servants, they work hard for us and a bunch of paid lobbyists or consultants taking swipes is not appropriate.

The report “GOVERNMENT DIGITAL SERVICE 2015” was written by BDO. They’re a big consultancy working in the public and commercial sector, with a glossy web site and lots of pictures of smiling, happy and clapping people. They talk a lot about innovation models, exceptional client service and “value chain tax planning”.

The report starts by saying GDS has been an effective catalyst for transformation (basically, let's be nice to you and pretend we're friends before we bring the punches out) and then goes on to proclaim major risks which need to be sorted! I'm already starting to get that icky feeling that "major risks which need to be sorted" is code for "pay us lots of money".

Ok, the three major risks are highlighted as accountability, commercial and efficiency. We will go through each in turn.


THE ACCOUNTABILITY RISK: 
“GDS’s hands-on approach to advising programmes reduces its independence as a controls authority”.

A bit of background here. Many years ago, before writing the Better for Less paper, I visited a number of departments. All these departments suffered from excessive outsourcing i.e. they had outsourced so much of their engineering capability that they were unable to effectively negotiate with vendors, as the department was often little more than project managers. In the Better for Less paper we talked about the need for intelligent customers, that the current environment had to be rebalanced, and that we had to develop skills again in Government. Now, this excessive form of outsourcing wasn't a political dogma but a management dogma. It's why we used to be paying through the nose for stuff which often wasn't fit for purpose. With a bit more internal skill, I've seen £1.7M contracts tumble to £96,000. Yes, 95% savings are not unheard of.

However, it's not just GDS. There are many Departments, the Tech Leaders Network and systems like G-Cloud which have made a difference. A very important factor in this was OCTO (Spend Control) and their introduction of a policy of challenging spending.

The report says “Accountability is the key to risk management and accountability must always be with the department that holds the budget and is mandated with the service” and that has always been the case. The Departments are accountable and they hold the budget.  However, CHALLENGE is an essential part of effective management and that requires the skills necessary to challenge. 

To explain why this is important, I'll give you an example from a Dept / Vendor negotiation which in essence was little more than :-

Dept. “What options do we have for building this system?”
Vendor “Rubbish, Rubbish, Us”
Dept. “Oh, we better have you then. How much?”
Vendor “£180 million”
Dept “Could you do it for £170 million?”
Vendor “Ok”

It wasn't quite like that as the vendor had to write some truly awful specification documents and options analysis, which it charged an eye-watering price for under a fixed preferred supplier agreement. There was a semblance of a process but no effective challenge. You couldn't blame the department either; the past mantra had been to outsource everything and they didn't have the skills to know what was reasonable. I've seen exactly the same problem repeated in the commercial world numerous times - departments operating in isolation, alone, without the skills required. They are easy pickings.

GDS and Spend Control changed that by forcing some challenge in the process. Of course, if you’re used to chowing down on Government as an easy lunch then those changes probably haven’t been very welcome. Whilst some Departments were bound not to like being asked hard questions - “but, it’s our budget” - others responded by skilling up with necessary capabilities. 

You can’t separate a control authority (the point of challenge) from the skills needed to challenge unless your goal is to pay oodles of cash to outside vendors for poor delivery. I can see the benefit for a consultancy delivering services but not to a Government serving the public interest.


THE COMMERCIAL RISK: 
“GDS’s preference for input based commercial arrangements rather than a more traditional outcomes-based commercial approach”

First, as someone who created outcome based models for development a decade ago, I can clearly state this is not traditional unless the outcome is delivery to a specification document. This is an important distinction to understand.

One of the key focuses of GDS has been on user need i.e. identifying the volume of transactions Government has, identifying the user needs of those transactions and building to meet the user need. This is a huge departure from the past model where the user need was often buried in a large specification document and the goal was delivery to the specification whether it met user needs or not. So, you first need to ask which outcome you are focused on - user need or delivery to a specification?

When you are focused on user need, you soon realise you’ll need many components to build that user need. Some of the components will be novel and some will be industrialised (i.e. commodity like). The methods and techniques you will use will vary. I could give examples from the Home Office and others but I’ll use an example map from HS2 (high speed rail) to highlight this point.

Example map


The user need is at the top. There are many components. The way you treat them will be different according to how evolved those components are. This sort of mapping technique is becoming more popular because it focuses on efficient provision of user needs. Doing this involves multiple different types of inputs from products to utility services to even custom built components and applying appropriate methods.

Now, in the traditional approach which is building to a specification then there is usually very little description of the user need (or where it exists it’s buried in the document) and almost certainly no map. This delivery mechanism normally involves a very structured method to ensure delivery against the specification i.e. the focus is not “did we deliver what the user needed” but “did we deliver what was in the specification / contract”. Consultants love this approach and for good reasons which I'll explain. 

Take a look at the map from HS2 again. Some of the components are in the uncharted space (meaning unknown, novel, constantly changing) whilst others are more industrialised (well defined, well understood, common). Whilst the industrialised components can be specified in detail, no customer can ever specify that which is novel and unknown. Hence, we tend to use methods like six sigma, detailed specifications, utility services and outsourcing for the industrialised components of the project but at the same time we use agile, in-house development for the novel & unknown.

Oh, and btw the maps I use are a communication tool between groups. With the sort of engineers you have at GDS and other Depts, this sort of thinking is often just second nature. You use commodity components / utility services and products where appropriate. You build only what you need and you use the right approaches to do so.

The beauty of forcing a specification document on everything is you force the customer into trying to treat all the components as the same, as though everything is industrialised. You are literally asking the customer to specify the unknown and then you crucify them later on through change control costs. The vendor can always point the finger and blame the customer for “not knowing what they wanted” but then the reality is they couldn’t know. The massive cost overruns through change control are not the fault of change but instead the structured process and the use of specifications where not appropriate.

Hence you have to be really careful here. If someone is asking you to sign up to an outcome based traditional model which in fact means delivery against a defined specification document for the entirety of a large complex system using a very structured process THEN you’ll almost always end up with massive cost overruns and happy vendors / consultants.

I have to be clear, IMHO this is a scam and has been known about for a long time.

So which way does the report focus? The report talks about documentation, highlighting the example of MPA, and promotes pushing control to CCS (Crown Commercial Service). Hence we can be pretty confident that this will break down into specification documents. It argues "While GDS focuses on embedding quality staff within programmes, MPA pursues more formalised and documented processes" and then it promotes the view of MPA as the solution.

This argument is not only wrong, it is mischievous at best. GDS focuses on user needs and using high quality staff to build complex projects. It does a pretty good job of this and its output is functioning systems. MPA focuses on ensuring the robustness & soundness of projects that are undertaken. It does a pretty good job of this and its output is formal documents. You can't say "they write documents, we like specification documents and therefore you should use those sorts of documents" as the context is completely different. Some parts of a large complex project can and should be specified because they are known. Other parts are going to have to be explored. Some parts will need an outcome based approach. You're going to need good "quality" engineers to know and do this, along with specialists in procurement to support them.

The report then adds another twist - "As a matter of urgency, in order to manage commercial risk, all commercial activities within GDS should be formally passed over to the newly transformed Crown Commercial Service (CCS)". Let us be clear on what this means. In all probability, we're going to end up forcing specification documents (an almost inevitable consequence of trying to get 'good value' from a contract) even where they are not appropriate, and handing them over to procurement specialists who are unlikely to have the necessary engineering skills to challenge what the vendors say. This is exactly what went wrong in the past.

IMHO, a more honest recommendation would be “As a matter of urgency, Gov should start handing over large wads of cash to us, preferably in a truck”.

For reference, if you want to know how to deal with a complex system then once you have a map, I find the following a useful guide. Please note that for both methods and procurement techniques, multiple approaches are needed in a large complex system. This is also another reason why you map: to break complex systems into components so you can treat them effectively. I cannot emphasise enough how important it is to have purchasing specialists supporting the engineering function. You don't want to lose those skills necessary to challenge. NB the diagram is not a replacement for thought, it's just a guide.

Methods & Purchasing.



THE EFFICIENCY RISK:
“With a monopoly position and a client-base compelled to turn to GDS for advice, there is a risk that they could become an inefficient organisation”

Should we roll the clock back and see what it was like before GDS and talk about inefficient organisation? I think Sally Howes, the NAO's executive leader, sums it up very politely with the statement “the government, Parliament and my own organisation, the NAO, were very aware of how the old fashioned world of long, complex IT projects limited value for money”. 

To put it bluntly in my language, we were being shafted. We're nowhere near the end of the journey with GDS and the report completely ignores how Departments are adapting and growing capabilities. There's not much I can find to like in the report, some bits did make me howl though.

I loved the use of "proven methods" in the paper followed by "excellent opportunity for CCS to show that it can meet the needs of a dynamic buying organisation". So basically, we believe in evidence and because of that statement we recommend you experiment with something unproven that smells a lot like the past? Magic.

However it is only surpassed by "This paper has no evidence to suggest that GDS is too big or too expensive to achieve its aims" which followed a rant on "Is this meeting the needs of the government departments or is this excessive? Are they the right staff? Are they being paid enough? Do they have the appropriate skills?"

That’s consultant gold right there. I’m going to create a whole bunch of doubts about a problem I’ve no evidence exists in order to flog you a solution you probably don’t need. Here, have my wallet - I’m sold!

The paper then goes on to talk about “To ensure market-driven efficiency of the remaining advisory function, this paper recommends that the advisory function form a joint venture with the private sector, allowing it to grow fast and compete for work alongside other suppliers”. Hang on, we have G-Cloud, we have GDS, we have growing Departmental skills and we should hand advisory to the private sector because it previously provided “limited value for money”? 

I’m guessing they are after more than one truck load of cash. I’m pretty sure this isn’t the “high level vision of the future” that the Government is after.

Now don't take this post to mean that GDS is perfect, far from it. There’s plenty of good discussion to be had about how to make things better and about how departments can provide services to other departments. There has been some misinterpretation (e.g. the Towers of SIAM) and there has been some oversteering (e.g. a tyranny of agile) but that’s normal in such a complex change. The achievements already have been pretty remarkable but no-one should be under any illusion that it can’t be better. It can.

However reasonable discussion or debate doesn't involve a consultancy publishing a report flogging a bunch of dubious and outdated methods - let’s take skill away from challenge, lets hand over advisory to private sector, let’s focus on specification documents - as solutions to risks which aren't even quantified. There's nothing to debate, it's just mudslinging. I'm guessing that's why they published it at a time when no-one could respond.

But what about the motivations of the authors? I see one is head of a government consultancy practice and so is the other. I'm guessing they're hoping to be on the advisory board and paid handsomely for such pearls of wisdom.

I note that Andy Mahon has “wide experience in public sector procurement” gained from his 28 years at BDO, Grant Thornton, KPMG and Capita covering initial business case to PFI. I’m not convinced that someone with so much experience of flogging to Government and working for a consultancy flogging to Government can ever be considered impartial when it comes to advising Government on how not to be flogged.

Now Jack Perschke is a different matter. He has a long background in different areas, plus he worked for the ICT reform group and was a Programme Delivery Director for the Student Loans Company Transformation Programme. Which makes this report a bit odd, given his background.

From the minutes of the Student Loans Company (though Jack had just left), the board even took time to praise GDS, noting "the engagement with Government Digital Services (GDS) had been very helpful" and "GDS had improved the understanding of the work required, particularly around the build/buy options". Further minutes talk about ongoing discussion, challenge and support e.g. from "responding to the conditions set by the Government Digital Service (GDS), including the benchmark for Programme costs" to the Board noting that "GDS were a key partner in the Programme".

Surely this is how things should work? I'm surprised Jack Perschke didn't see that. I can't see how you'd conclude this was a bad thing.

Well, if there is some good to come from the document, some silver lining then IMHO this document provides further indirect evidence of why Government should develop its own capability, skills and situational awareness throughout GDS and the departments. These sorts of reports and outside consultancy engagements rarely bring anything of value other than for the companies writing them.

I think my earlier hunch that "major risks which need to be sorted" is code for "pay us lots of money" is about spot on.

I'll come back to this next week as I want to see what else crawls out of the woodwork here. I don't like civil servants being attacked especially by self interested outside consultants at a time when civil servants can't respond.

Thursday, April 23, 2015

Pick a course, adapt as needed.

Ok, a bit of history to begin with. When I took over running Fotango (a Canon Europe subsidiary), it was a loss making organisation. It took me a year to make it profitable. We grew the business by taking our skills and applying them to relevant areas. In the end we were managing, developing and operating over a dozen major systems with millions of users.

However, we had constraints, the two most challenging of which were head count and profitability. We had to operate on a basis of no head count increase (this was due to a parent wide rule) which forced us to automate more, re-use and find ways to create space for development. The second constraint was we had to be profitable - every month. The latter is a real headache when you have millions in the bank but can't invest. Any investment we wanted to make had to come through operational efficiency, which in no small part was why we ended up implementing some of the first web based private infrastructure as a service, auto configuration, continuous deployment and self healing tools between 2003-2005.

In the board room, James and I used a map to determine where we could attack, to plot our path. I've taken a version of that map and rolled it forward to mid 2007 in order to illustrate some points. The map is provided in the following figure.


Now the map gives us the position of things in a value chain (from visible user need to hidden components) versus movement (i.e. how things evolve). On this map is one of the lines of business we had.

From the map, there are several points we could attack.

Point 1 - Attack compute provision as a utility. We actually had a system called Borg which ran our private IaaS. We had offered this to other vendors and planned to open source it later in 2007. Whilst we couldn't build a public IaaS (due to the capital investment required and the constraints we had), that didn't mean we didn't want to see a fragmented market of providers in this space.

Point 2 - Attack platform provision as a utility. We had actually embarked on this route based upon the earlier maps and launched the first public platform as a service, known as Zimki. We had all the capabilities necessary to build it and back in 2005 we had anticipated someone else would launch a public IaaS. I thought it was going to be Google; it turned out to be Amazon. The importance of a public IaaS for us was that it would get us over our investment constraint. We planned to open source the space in late 2007, had the components for an exchange and a rapidly growing environment etc. The play itself was almost identical to Cloud Foundry today.

Point 3 - Attack CRM as a service. We had looked at this in 2005, decided we didn't have the skills and others were moving into the space.

Point 4 - Attack Apps on Smart Phones. Back in 2004 we were working on mobile phones as cameras, however there was no way to anticipate the development of the iPhone. In 2007, we might have made a play in this space based upon past skills but we had effectively removed those parts of the value chain from the organisation. We had to concentrate somewhere in 2005; we had the constraint of resource growth and we had to make a choice. That choice was the platform play. But in 2007, it could be an option.

Point 5 - Build something new. We certainly had the capability to experiment; we used hack days and other tools to come up with a range of marvellous ideas. However the resource constraint meant we needed to industrialise the platform and get ourselves and others to build on top of this. We could therefore use ecosystem effects to sense and identify future success.

Now, I've simplified a lot of the thought processes along with the actual map, but the point I want to make is that we had multiple points of attack - the WHEREs.  The WHY was a discussion over which we could exploit given our constraints such as resource & investments along with capabilities. This gave us our DIRECTION. 

Each node or point on a map actually breaks down into a more complex map of underlying components. Some of those were novel (the uncharted) and some were more commodity (the industrialised). We knew how to apply multiple methods (agile, six sigma etc) appropriately, how to build and exploit ecosystems and a vast range of tactical games we could use. 

However, once we determined our DIRECTION, we moved deliberately along that path. Yes, we had very fast deployment and development cycles. Multiple builds to live in a single day for components was nothing special. However, that tempo wasn't uniform. Releases in the uncharted space would happen continuously. In the more transitional space (between uncharted and industrialised) the tempo slowed down considerably, and by the time you reached industrialised then releases could be monthly and much more regimented. We had been running as an API shop since 2003 and we had long learned the lesson that you couldn't go around changing the APIs of the deep underlying components ten times a day without causing friction and cost in the higher order systems.

This is why we'd look to move those more industrialised, lower order components to outside and stable utility providers. Unfortunately, though we anticipated their development, none existed in 2004 - 2005. There wasn't an Amazon, Scalr, RightScale, Chef or any of the other infrastructure, management, configuration and monitoring environments. We had to build all this just to get to the platform layer and our speed depended upon the stability of lower order interfaces.

Take something today like Netflix. They could not have existed if Amazon changed the core APIs of EC2 twenty to thirty times a day. Stability of the interfaces of lower orders is critical for development of higher order - this is basic componentisation. 

Now Fotango's story ended due to a sorry tale of strategic advice, which is why you often find me in conferences throwing rubber chickens with the words "Situational Awareness" at big name consultancies as they mumble "blah, disruption, blah, digital, blah, cloud, blah, ecosystem, blah, innovation, blah". Especially at Big Data conferences where they seem to gather to flog blobs of "wisdom" to the unsuspecting masses.

However, there are some things I do want to remind people of.

Have a DIRECTION. 
This is one of the most important parts of mapping & improving situational awareness. You not only need to learn to use multiple methods (e.g. agile, lean and six sigma) but you also need to understand the landscape and steer your way through it. Maps are dynamic and yes, sometimes you have to pivot based upon changing conditions. However, Agile is not a solution for indecisive and variable management. When moving in uncharted space you still need a DIRECTION and to adapt to what you discover. You don't need a captain who can't keep a decision for more than five minutes without changing it. If the reason you're using Agile is because your manager is going Fire! Aim! Change Course! Don't Fire! Did we Fire? Fire! No, Don't Fire! Change Course! Change Course! Don't Fire ... wait .. FIRE! ... No! Change Course! Then you've got bigger problems than methods.

Move APPROPRIATELY fast.
Yes, continuous release processes are great for exploring uncharted spaces and building higher order systems. However, you need stability of interfaces at lower order systems (which includes not just syntax but semantics). For anyone who doesn't understand this, hire a crew of electricians to replace all the sockets in the buildings & data centre with sockets and transformers to supply power equivalent to a different region. Call it a 'release' and watch the expressions of horror when nothing works / plugs in. After they scream murder at you and finally get around to setting some stuff up, send your electricians around to replace it all with another region. Do try shouting at everyone that you were only being adaptive & innovative whilst they beat you with their dead computers.

Focus on USER needs
That's the first step of mapping and hardly worth repeating because you should be doing this. Of course, if you weren't actually doing this you might run around changing plug sockets to a different region. Ditto some of the changes I see in the online world.

Before anyone says "Oh but we can make special adaptors to cope with the change" which invariably leads to a host of different competing standards and then someone creating the standard of standards ... just give up.

Use APPROPRIATE methods
I'll use one diagram and go - enough said.


If anyone feels like going "Have you considered using dual operating / twin speed IT / bimodal?" or any of the other "organise by the ends" approaches - don't even go there.

Wednesday, April 22, 2015

AWS to report

Many many years ago back in the days I worked at Canonical (2008-2010), I calculated a forward run rate for AWS. This was based upon a few existing analyst guesses of revenue, a period of exponential change (a punctuated equilibrium), some expectation of price elasticity and a lot of voodoo & jiggery-pokery. 

I said that eventually Amazon would have to report the AWS earnings (e.g. due to 10% reporting rules SFAS 131) though I expected this to be 2016. I would occasionally add on analyst predictions each year to confirm / deny the change but the problem was - no-one really had a clue. It was all speculation.

So looking at the model, where did I have 1Q2015 pegged? I had it pegged at a forward run rate of $2.38 billion per quarter but that figure is fairly meaningless as it's based upon an annual estimate. So what about the end of 2015? Well, here I was expecting Amazon to have an annual forward run rate of $16 billion p.a. and hence for each subsequent year to make more than $16 billion p.a. in revenue. For the end of 2014, I had the forward run rate at $8 billion, which means every year after I would expect AWS to exceed $8Bn in revenue (e.g. 2015 should be above $8 billion).

If you think this sounds an odd way of doing things - that's because it is a bit odd. The model is based upon a future test of a hypothesis that something is greater than a certain value rather than based upon trying to calculate what a value is at some specific point in time. There is a reason for this but it's rather obscure and not what is of interest.

Figure - Forward Run Rate


My interest is not so much in tomorrow's reporting (and I suspect there will be gasps in some quarters) but in the subsequent quarters and the rate of change. My interest is in just how fast the punctuated equilibrium is moving.

I do get asked what do I think the revenue reported will be? I haven't got a clue.

If I took the forward run rates of the model for that quarter and the previous one and simply averaged them, it would put revenue at around $2.2 billion. But this ignores any variation due to price changes, any seasonality impacts (I really only concern myself with the magnitude of the annual figures), the annualised nature of the forecast and the fact that it concerns forward run rates. Given that this model was written many years ago, is based upon a lot of assumptions and that actual revenues depend upon competitors' actions, then even if it's close that would be more luck than judgement.

I'll be happy if we're talking about AWS revenue using $Bn's because that at least demonstrates the change was not linear and the punctuated equilibrium is in full effect. Still, the waiting should be over. We should find out soon enough but I'll need a few more quarters of data to get a really clear picture.

Tuesday, April 21, 2015

Devops ... we've been here before, we will be back again.

In this post I want to explore the causes of DevOps and how you can use such knowledge to your advantage in other fields. I'm going to start with a trawl back through history and four snippets from a board pack in early 2007. These snippets describe part of the live operations of Fotango, a London based software house, in 2006.

Snippet 1


We were running a private infrastructure as a service with extensive configuration management, auto deployment and self healing (design for failure) of systems based upon cfengine. We were using web services throughout to provide discrete component services and had close to continuous development mechanisms. In 2006, we were far from the only ones doing this but it was still an emerging practice. I didn't mention agile development in the board pack ... that was old hat.

Snippet 2


To be clear, we were running a private and a public platform as a service back in 2006. This was quite rare but still more of a very early emerging practice.

Snippet 3


In early 2007, we had switching of applications between multiple installations of platform as a service from our own private infrastructure as a service (Borg) to one we had installed on the newly released EC2. This was close to a novel practice.

Snippet 4


By early 2007 we were working on mechanisms to move applications or data between environments based upon cost of storage, cost of transfer and cost of processing. In some cases it was cheaper to move the data to the application, in other cases the application to the data. We were also playing some fairly advanced strategic games based upon tools like mapping. However, one of my favourite changes (which we barely touch on today) is when you have pricing information down to the function. This can significantly alter development practices i.e. we used to spend time focusing on specific functions because they were costly compared to other functions. You could literally watch the bill racking up in the real time billing system as your code was running and one or two functions always stood out. This always helps concentrate the mind and this was in the realm of novel practice in 2007.
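To give a flavour of how pricing down to the function changes behaviour, here's a toy sketch (emphatically not Zimki's actual billing system, and the rates are invented) that attributes an assumed cost to every invocation so the expensive functions stand out:

```python
import functools
import time

# Toy sketch of function-level cost awareness. Rates are invented assumptions:
# cost = elapsed time * processing rate + a flat per-invocation fee.

PROCESSING_RATE = 0.0002   # assumed $ per second of compute
CALL_FEE = 0.000001        # assumed flat $ per invocation
bill = {}                  # running total per function name

def metered(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            elapsed = time.perf_counter() - start
            bill[fn.__name__] = bill.get(fn.__name__, 0.0) + elapsed * PROCESSING_RATE + CALL_FEE
    return wrapper

@metered
def resize_image(pixels):
    return sum(range(pixels)) % 255   # stand-in for real work

for _ in range(1000):
    resize_image(10_000)

# The costly functions stand out immediately - which is where you focus your effort.
print(sorted(bill.items(), key=lambda kv: -kv[1]))
```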

Much of what we talk about regarding DevOps and the changes in practice today is not new. It is simply becoming good practice in our industry. For the majority of these changes, the days of novel and emerging practice have long gone. Many companies are however only just starting their journey and whilst most will get some things right - design for failure, distributed systems, use of good enough components, continuous deployment, compartmentalising systems and chaos engines - many are almost certainly doomed to repeat the same mistakes we made long ago - single size methods (agile everywhere), bimodal and API everything (some things just aren't evolved enough yet). Much of that failing will come from our desire to apply single methods without truly understanding the causes of change ... but we will get to that shortly.

The above is all perfectly normal and so is the timeframe. On average, it can take 20 to 30 years for a novel practice to become defined as a best practice. We're actually a good 10-15 years into our journey (in some cases more), so don't be surprised if it takes another decade for the above to become common best practice. Don't also be surprised by the clamouring for skills in this area, that's another normal effect as every company wakes up to the potential and jumps on it at roughly the same time. Demand always tends to outstrip supply in these cases because we're lousy at planning for exponential change.

However, this isn't what interests me. What fascinates me is the causes of change (for reasons of strategic gameplay). To explain this, I need to distinguish between two things - the act (what we do) and the practice (how we do stuff). I've covered this before but it's worth reiterating that both activities and practices evolve through a common path (see figures 1 & 2) driven by competition.

Figure 1 - Evolution of an Act


Figure 2 - Evolution of Practice


Now, what's important to remember is that the practice is dependent upon but distinct from the act. For this reason practices can co-evolve with activities. To explain, the best architectural practice around servers is based upon the idea of compute as a product (the act). These practices include scale-up, N+1 (due to a high MTTR - mean time to recovery) and disaster recovery tests. However, best architectural practice around IaaS is based upon the idea of compute as a utility i.e. volume operations of good enough components with a low MTTR. These practices include scale-out, design for failure and chaos engines. In general, best practice for a product world is rarely the same as best practice for a utility world.
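As a rough illustration of that difference (a sketch using assumed names, not anyone's production system), in the utility world you assume any single instance may die, so a request is simply retried against another equivalent instance, and a crude "chaos" step deliberately kills instances to prove the design holds:

# Toy illustration of design for failure plus a crude chaos step.
# Instance names and failure rates are assumptions made for the sketch.
import random

class Instance:
    def __init__(self, name):
        self.name = name
        self.alive = True

    def handle(self, request):
        if not self.alive:
            raise ConnectionError(self.name + " is down")
        return self.name + " served " + request

def call_with_failover(instances, request):
    """Scale-out thinking: any one instance may fail, so try another."""
    for instance in random.sample(instances, len(instances)):
        try:
            return instance.handle(request)
        except ConnectionError:
            continue
    raise RuntimeError("all instances down")

def chaos(instances, kill_probability=0.3):
    """Crude chaos engine: randomly kill instances to test the design."""
    for instance in instances:
        if random.random() < kill_probability:
            instance.alive = False

fleet = [Instance("node-%d" % i) for i in range(4)]
chaos(fleet)
print(call_with_failover(fleet, "GET /orders"))

The equivalent product-world sketch would instead concentrate on keeping a single named machine alive - N+1 spares, scale-up and scheduled disaster recovery tests.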

However, those practices have to come from somewhere and they evolve through the normal path of novel, emerging, good and best practice. To tie this together, figure 3 shows how practice evolves with the act, using compute as the example.

Now, normally with a map I use an evolution axis of genesis, custom built, product (+rental) and commodity (+utility). However, practices, data and knowledge all evolve through the same pattern of ubiquity and certainty. So on the evolution axis I could use :-

Activities : Genesis, Custom Built, Product, Commodity
Practices : Novel, Emerging, Good, Best
Data : Unmodelled, Divergent, Convergent, Modelled
Knowledge : Concept, Hypothesis, Theory, Accepted

For simplicity's sake, I always use the axis of activities but the reader should keep in mind that activities, practices, data and knowledge can all be drawn on any map. In this case, also for the sake of simplicity, I've removed the value chain axis.
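As an aside, if you ever wanted to encode those labels - say, for a home-grown mapping tool - a minimal sketch might look like this; the structure and names are mine, not any standard mapping library:

# Hypothetical encoding of the shared evolution axis - one set of stages,
# with different labels per type of map element. Invented for illustration.
EVOLUTION_LABELS = {
    "activity":  ["genesis", "custom built", "product (+rental)", "commodity (+utility)"],
    "practice":  ["novel", "emerging", "good", "best"],
    "data":      ["unmodelled", "divergent", "convergent", "modelled"],
    "knowledge": ["concept", "hypothesis", "theory", "accepted"],
}

def stage_label(element_type, stage):
    """Translate a stage number (1-4) into the label for that element type."""
    return EVOLUTION_LABELS[element_type][stage - 1]

print(stage_label("activity", 4))   # commodity (+utility)
print(stage_label("practice", 3))   # good

The point the structure makes is simply that every element type shares the same four stages; only the labels change.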

Figure 3 - Coevolution of practice with the act


From the above, the act of computing infrastructure evolves to a product and new architectural practices for scaling, capacity and testing develop around the concept of a product (i.e. a server). These practices evolve until they become best practice for the product world. As the underlying act now evolves to a more industrialised form, a new set of architectural practices appears. These evolve until they become best practice for that form of the act. This gives the following steps outlined in the above :-

Step 1 - Novel architectural practices evolve around compute as a product
Step 2 - Architectural practices evolve becoming emerging and good practice
Step 3 - Best architectural practices develop around compute as a product
Step 4 - Compute evolves to a utility
Step 5 - Novel architectural practices appear as compute becomes a commodity and is treated as a utility
Step 6 - Architectural practices evolve becoming emerging and good practice
Step 7 - Ultimately these good practices (DevOps) will evolve to become best practice for a utility world.

When we talk about legacy in IT, we're generally talking about applications built with best architectural practice for a product world. When we talk about DevOps, we're generally talking about applications built with good to best architectural practice for a utility world. Both involve "best" practice; it's just that the "best" practices are different because the underlying act has evolved.

This process of co-evolution of practice with activity has occurred throughout history, whether in engineering, finance or IT. When the act that is evolving has a significant impact on many different and diverse value chains, its evolution can cause macro-economic effects known as K-waves or ages. With these ages, the new co-evolved practices that emerge tend to be associated with new forms of organisation. Hence in the mechanical age, the American System was born. With the electricity age, we developed Fordism.

Knowing this pattern of change enabled me to run a set of population experiments on companies back in 2011 to confirm the model and identify a new phenotype of an emerging company form (the next generation). The results are shown in table 1.

Table 1 - Next generation vs Traditional organisations


It's precisely because I understood this pattern and how practices evolved that back at Canonical (2008-2009) we knew we had to attack not just the utility compute space but also the emerging practice space (a field which became known as DevOps). It was actually one of my only causes of disagreement with Mark during my time there, as I was adamant we should be adopting Chef (a system developed by a friend of mine, Jesse Robbins). However, Mark had good reasons to focus elsewhere and at least we could have the discussion.

When it comes to attacking a practice space, natural talent and mindset are key. In the old days of Fotango, I captured a significant proportion of talent in the Perl industry through the creation of a centre of gravity (a post for another day). It was that talent that not only created the systems but also discovered the architectural practices required to make them work. Artur Bergman (now the CEO of Fastly) developed many of the systems and was subsequently influential in the Velocity conference (along with Jesse). Those novel practices were starting to evolve in 2008.

In the Canonical days, I employed a lesser-known but highly talented individual who was working on the management space of infrastructure - John Willis (Botchagalupe). Again, my focus was deliberate: I needed someone to help capture the mindset in that space and John was perfect for the role. I didn't quite get to play the whole centre of gravity game at Canonical and there were always complications, but enough was done. John himself has gone on to become another pillar of the DevOps movement.

Now, this pattern of co-evolution of practice and activity repeats throughout history and we have many future examples heading our way in different industries. All the predictable forms of this type of change are caused by the evolution of underlying activities to more industrialised forms. Manufacturing, for example, should be a very interesting case circa 2025-2035 as the commoditisation of underlying components through 3D printing, printed electronics and hybrid printing enables new manufacturing practices. It even promises an entirely new form of language - SpimeScript - which is why the Solid conference by O'Reilly is so interesting to me. Any early signs are likely to appear there.

It's worth diving a bit deeper into this whole co-evolution subject. So let us go back in time to when the first compute products were introduced, i.e. the IBM 650. Back then, there was no architectural practice for how to deal with scaling, resilience and disaster recovery. These weren't even things in our mindset. There was no book to read, there was no well-trodden path and we had to discover these practices. What became obvious later was unknown, undiscovered and uncharted.

Hence people would build systems with these products and discover issues such as capacity planning and failure - we acted, we observed and then we had to respond to what we found. We had to explore what the causes of these problems were and create models and practices to try and cope. As our understanding of this space grew, those practices developed. We built expertise in this space and the tools to manage it. We talked of bottlenecks and throughput, of N+1, of load and of capacity. We started to anticipate the problems before they occurred - running out of storage space became a sign of poor practice. We sensed our environment with a range of tools, analysed for points of failure and responded before failures happened. Books were written and architectural practice moved firmly into the space of the good. We then started to automate more - RAID, hot standby, clusters and endless tools to monitor and manage a complex environment of products (compute as services). Our architectural practice became best practice.

But as the underlying act evolved from compute as a product to compute as more of a commodity and ultimately a utility, the entire premise on which our practices were based changed. It wasn't about THE machine, it was about volume operations of good enough. We had to develop new architectural practices. But there was no book, no well-trodden path and no expertise to call on. We had to once again use these environments, observe what was happening and respond accordingly. We created novel architectural practices which we refined as we understood more about the space. We learnt about design for failure, distributed systems and chaos engines - we had to discover and develop these.

As we explored this new field, we developed tools and a greater understanding. We started to have an idea of what we were looking for. The practices started to emerge and later develop. Today, we have expert knowledge (the DevOps field), a range of tools and well-practised models. We're even starting to automate many aspects of DevOps itself.

The point to note is that even though architectural practice developed to the point of being highly automated, best practice and "obvious" in the product world, this was not the end of the story. The underlying act evolved to a more industrialised form and we went through the whole process of discovering architectural practices again.

Now, a change of practice (and related governance structures) is one of the sixteen forms of inertia that companies have to change. However, because of competition dynamics, this change is inevitable (the Red Queen effect). We don't get a choice about this and that gives me an advantage. To explain why, I'll use the example of a company providing workshops.

The Workshop

This example concerns a company that provides workshops and books on best practice in the environmental field. It's a thriving business which provides expert knowledge and advice (embodied in those workshops and books) about the effective use of a specific domain of sensors. I have to be a bit vague here for reasons that will become obvious. The sensors used are quite expensive products but new, more commoditised forms are appearing, mainly in Asia. At first glance, this appears to be beneficial because it'll reduce operating costs and is likely to expand the market. However, there is a danger.

To explain the problem, I'm going to use a very simple map on which I've drawn both activity and practice to describe the business (see figure 4).

Figure 4 - The Business



The user need is to gain best practice skills in the use of the sensors; the company provides this through workshops and associated materials, such as books, based upon best practice. Now the sensors are evolving. This will have a number of effects (see figure 5).

Figure 5 - Impact of the Change


From the above,

Step 1 : the underlying sensor becomes a commodity
Step 2 : this enables a novel practice (based upon commodity sensors) to appear. This practice will evolve, becoming emerging and then good.
Step 3 : the existing workshop business will become legacy
Step 4 : a workshop business based upon these more evolved practices will develop, and this is the future of the market.

This change is not just about reducing the operational costs of sensors; instead the whole business of the company will alter. The materials (books, workshops, tools etc.) that they have will become legacy. Naturally the company will resist this change as they have a pre-existing business model, past revenues to justify the existing practices and a range of current skills, knowledge and relationships developed in this space. However, it doesn't matter, because competition has driven the underlying act to more of a commodity and hence a new set of practices will emerge and evolve and the existing business will become legacy regardless.

Fortunately this hasn't happened yet. Even more fortunately, with a map we can anticipate what is going to happen, we can identify our inertia, we can discuss and plan accordingly. We know those novel practices will develop and we can aim to capture that space by developing talent in that area. We know we can't write those practices down today and we're going to have to experiment, to be involved, to act / sense and respond.

We can prepare for how to deal with the legacy practices, possibly aiming to dispose of part of this business. Just because we know the legacy practice will be disrupted doesn't mean others do, and if we have a going concern then we can maximise capital by flogging off this future legacy to some unsuspecting company or spinning it off in some way. Of course, timing will be critical. We will want to develop our future capability (the workshops, tools, books and expertise) related to the emerging practice, extract as much value from the existing business as possible and then dump the legacy on the market at a time of maximum revenue / profit, without the wider industry being aware of the change. If you've got a ticking bomb, never underestimate the opportunity to flog it to the market at a high price. Oh, and when it goes off, don't miss out on the opportunity of scavenging the carcass of whatever company took it on for other things of value, e.g. poaching staff.

There's lots we can do here, maybe spread a bit of FUD (fear, uncertainty and doubt) about the emerging practices to compound any inertia that competitors have. We know the change is inevitable but we can use the FUD to slow competitors and also give ourselves an ideal reason (internal conflict) for diversifying the business (i.e. selling off the future "legacy" practice). There's actually a whole range of games we can play here, from building a centre of gravity in the new space and disposal of the legacy (known as pig in a poke) to ecosystem plays and misdirection.

This is why situational awareness and understanding the common patterns of economic change are so critical in strategic gameplay. The moves we make (i.e. our direction) based upon an understanding of the map (i.e. the position and movement of pieces) will be fundamentally different from those made without understanding the landscape, thinking solely that commodity sensors will just reduce our operational costs. This is also why maps tend to become highly sensitive within an organisation (which is why I often have to be vague).

When you think of DevOps, don't just think about the changes in practice in this one instance. There's a whole set of common economic patterns it is related to, and those patterns are applicable to a wide variety of industries and practices. Understanding the causes and the patterns is incredibly useful when competing in other fields.

DevOps isn't the first time that a change of practice has occurred and it won't be the last. These changes can be anticipated well in advance and exploited ruthlessly. That's the real lesson from DevOps and one that almost everyone misses.