Sunday, April 26, 2015

Experiment or Plan?

Whenever I examine a project, system or line of business, the first step I normally take is to map it. Mapping is an easy process; with experience, creating a basic map should take no more than a couple of hours. To map, you start with user needs, then determine the components required to meet those needs and position them according to how evolved they are (see figure 1).

Figure 1 - A Map from HS2


Now, there are lots of reasons for mapping but in this post I'd like to focus on the question of experimentation or planning. If you have a map, then the uncharted space (see the diagram above from HS2 - high speed rail) is where you experiment and the industrialised space is where you plan. To make this clear, I've marked this on figure 2.

Figure 2 - Where to Plan, where to Experiment


A couple of things to note.

1) The web site is marked as commodity but uses Agile development - why? The reality is the web site is a mass of components, some of which are commodity plus others (such as content / structure / data) which are novel. Hence this particular point was broken down into its own map. Whenever you deal with high level concepts there is a loss of granularity - no different from an atlas losing the detail of roads, buildings etc. Maps (both geographical and this type of Wardley map) are imperfect representations of what is there.

2) All the components are evolving. What you started by experimenting with will over time become something you plan.

In general, as a rough guide, when you have a map then the following methods and purchasing techniques are applicable.

Figure 3 - Methods and Purchasing techniques.


WORDS OF WARNING

Those who are used to mapping (and that varies from Silicon Valley startups to large commercial companies to huge Government departments) don't need to be told that one-size-fits-all methods don't work. Many others make the same mistakes over and over again.

There are entire industries of book sellers and consultants out there trying to flog you a one-size-fits-all method such as Agile or Lean or Six Sigma. It's misguided. You need to use all three approaches with any large complex system. Each camp can point to examples of how their technique beat the others; the opposing camps can do the same. All three techniques are actually useful.

The first step in an OODA loop is OBSERVE and that's the bit mapping tackles; it provides a communication mechanism for describing an environment between multiple groups. It doesn't tell you what to do or how to do it, which is why once you have a map, you need to apply THOUGHT and orientate around it.

Agile & in-house development tends to be best for the uncharted space - the novel and new, the areas where you need to experiment because you lack information and where things will change rapidly.

Lean & off the shelf tends to work best when you have some information and you need to remove waste and focus on delivery of a product (i.e. the transitional space between the uncharted and the industrialised).

Six Sigma, outsourcing and the use of utility providers tend to work best for the common and well understood, where you have reasonable information and need to focus on removing defects and operating at scale, i.e. the industrialised space.
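If it helps to make the rough guide concrete, here's a minimal sketch of it as a lookup table (Python; the stage names and pairings are my paraphrase of the three paragraphs above and figure 3, a guide rather than gospel):

# A rough encoding of the guide above - the pairings paraphrase the text;
# it's a guide, not a replacement for thought.
GUIDE = {
    "uncharted":      ("agile, in-house development", "build it yourself"),
    "transitional":   ("lean", "off the shelf products"),
    "industrialised": ("six sigma", "outsourcing / utility providers"),
}

def suggest(stage):
    """Return the (method, purchasing technique) suggested for a stage."""
    return GUIDE[stage.lower()]

method, purchasing = suggest("transitional")
print(method, "|", purchasing)  # lean | off the shelf products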

Any significant project will have components at each stage of evolution and require all three methods. Each method is good at what it does but apply a one-size-fits-all method or purchasing technique and you won't run as effectively as you could. Before you say - "aren't you trying to flog me mapping?" - the entire technique is Creative Commons Share Alike. There is nothing to flog. If I'm trying to persuade you of anything then it is "look before acting" and "use all the methods according to their strengths".

The only people who can map a business are the people running, operating and working in the business i.e. YOU. There is no need for consultants as you only need yourself and others within the company. Those who map have already discovered this. I need to emphasise this - YOU have all the power you need to map, to learn common economic patterns and to learn how to play strategic games in business.

Of course, you'll have to go and learn those individual project methods and purchasing techniques such as agile, lean and six sigma, and for that you'll probably end up requiring consultants and books. But I'd consider that investment in training in specific methods, not a ONE SIZE FITS ALL to be applied across the organisation.

So when it comes to experiment or plan and which you should do, the answer is BOTH for anything of significant scale (i.e. beyond a pet project).

Cue the endless noise from one-size-fits-all consultants claiming their method (Agile, Lean or Six Sigma) is applicable everywhere and the other methods aren't.

Friday, April 24, 2015

Gov should start handing over large wads of cash to us, preferably in a truck

The latest piece of craft from Kat Hall on how a “GDS Monopoly leaves UK.gov at risk of IT cock-ups” was interesting, to say the least. I’m sure Kat Hall is under pressure to write articles; I’ve seen the Register help create some very fine tech journalists (see @mappingbabel) and I have no doubt Kat will follow the same path. However, this one instance was not her finest hour.

I’ll leave it at that though because my real interest lies with the report and not a debate over "what is journalism". Before promoting a report, I tend to ask some questions - why was it written, why now, who wrote it, what was it based upon and how will it help? I do this because a lot of stuff in the ether is just PR / lobbying junk dressed up as helpful advice.

At this moment in time (due to the election), Civil Servants are governed by the Purdah convention which limits their ability to respond. What this means is that any old lobbying firm can publish any old tat knowing they’re unlikely to get a response. Launching an attack on a department at this time is about as cowardly as you can get. These people are public servants, they work hard for us and a bunch of paid lobbyists or consultants taking swipes is not appropriate.

The report “GOVERNMENT DIGITAL SERVICE 2015” was written by BDO. They’re a big consultancy working in the public and commercial sector, with a glossy web site and lots of pictures of smiling, happy and clapping people. They talk a lot about innovation models, exceptional client service and “value chain tax planning”.

The report starts with “GDS has been an effective catalyst for transformation” (basically, let's be nice to you and pretend we're friends before we bring the punches out) and then goes on to proclaim major risks which need to be sorted! I’m already starting to get that icky feeling that “major risks which need to be sorted” is code for “pay us lots of money”.

Ok, the three major risks are highlighted as accountability, commercial and efficiency. We will go through each in turn.


THE ACCOUNTABILITY RISK: 
“GDS’s hands-on approach to advising programmes reduces its independence as a controls authority”.

A bit of background here. Many years ago, before writing the Better for Less paper, I visited a number of departments. All these departments suffered from excessive outsourcing i.e. they had outsourced so much of their engineering capability that they were unable to effectively negotiate with vendors, as the department was often little more than project managers. In the Better for Less paper we talked about the need for intelligent customers, that the current environment had to be rebalanced, and that we had to develop skills again in Government. Now, this excessive form of outsourcing wasn’t a political dogma but a management dogma. It’s why we used to pay through the nose for stuff which often wasn’t fit for purpose. With a bit more internal skill, I’ve seen £1.7M contracts tumble to £96,000. Yes, 95% savings are not unheard of.

However, it's not just GDS. There are many Departments, the Tech Leaders Network and systems like G-Cloud which have made a difference. A very important factor in this was OCTO (Spend Control) and their introduction of a policy of challenging spending.

The report says “Accountability is the key to risk management and accountability must always be with the department that holds the budget and is mandated with the service” and that has always been the case. The Departments are accountable and they hold the budget.  However, CHALLENGE is an essential part of effective management and that requires the skills necessary to challenge. 

To explain why this is important, I'll give you an example from a Dept / Vendor negotiation which in essence was little more than :-

Dept. “What options do we have for building this system?”
Vendor “Rubbish, Rubbish, Us”
Dept. “Oh, we better have you then. How much?”
Vendor “£180 million”
Dept “Could you do it for £170 million?”
Vendor “Ok”

It wasn’t quite like that, as the vendor had to write some truly awful specification documents and options analysis, for which it charged an eye-watering price under a fixed preferred supplier agreement. There was a semblance of a process but no effective challenge. You couldn’t blame the department either; the past mantra had been to outsource everything and they didn't have the skills to know what was reasonable. I’ve seen exactly the same problem repeated in the commercial world numerous times - departments operating in isolation, alone, without the skills required. They are easy pickings.

GDS and Spend Control changed that by forcing some challenge in the process. Of course, if you’re used to chowing down on Government as an easy lunch then those changes probably haven’t been very welcome. Whilst some Departments were bound not to like being asked hard questions - “but, it’s our budget” - others responded by skilling up with necessary capabilities. 

You can’t separate a control authority (the point of challenge) from the skills needed to challenge unless your goal is to pay oodles of cash to outside vendors for poor delivery. I can see the benefit for a consultancy delivering services but not for a Government serving the public interest.


THE COMMERCIAL RISK: 
“GDS’s preference for input based commercial arrangements rather than a more traditional outcomes-based commercial approach”

First, as someone who created outcome based models for development a decade ago, I can clearly state this is not traditional unless the outcome is delivery to a specification document. This is an important distinction to understand.

One of the key focuses of GDS has been user need i.e. identifying the volume of transactions Government has, identifying the user needs behind those transactions and building to meet those needs. This is a huge departure from the past model where the user need was often buried in a large specification document and the goal was delivery to the specification whether it met user needs or not. So, you first need to ask which outcome you are focused on - user need or delivery to a specification?

When you are focused on user need, you soon realise you’ll need many components to meet that need. Some of the components will be novel and some will be industrialised (i.e. commodity-like). The methods and techniques you use will vary. I could give examples from the Home Office and others but I’ll use an example map from HS2 (high speed rail) to highlight this point.

Example map


The user need is at the top. There are many components. The way you treat them will be different according to how evolved those components are. This sort of mapping technique is becoming more popular because it focuses on efficient provision of user needs. Doing this involves multiple different types of inputs - from products to utility services to even custom built components - and applying appropriate methods.

Now, in the traditional approach of building to a specification, there is usually very little description of the user need (or where it exists it’s buried in the document) and almost certainly no map. This delivery mechanism normally involves a very structured method to ensure delivery against the specification i.e. the focus is not “did we deliver what the user needed” but “did we deliver what was in the specification / contract”. Consultants love this approach, and for good reasons which I'll explain.

Take a look at the map from HS2 again. Some of the components are in the uncharted space (meaning unknown, novel, constantly changing) whilst others are more industrialised (well defined, well understood, common). Whilst the industrialised components can be specified in detail, no customer can ever specify that which is novel and unknown. Hence, we tend to use methods like six sigma, detailed specifications, utility services and outsourcing for the industrialised components of the project but at the same time we use agile, in-house development for the novel & unknown.

Oh, and btw, the maps I use are a communication tool between groups. With the sort of engineers you have at GDS and other Depts, this sort of thinking is often just second nature. You use commodity components / utility services and products where appropriate. You build only what you need and you use the right approaches to do so.

The beauty of forcing a specification document on everything is you force the customer into trying to treat all the components as the same, as though everything is industrialised. You are literally asking the customer to specify the unknown and then you crucify them later on through change control costs. The vendor can always point the finger and blame the customer for “not knowing what they wanted” but the reality is they couldn’t know. The massive cost overruns through change control are not the fault of change but of the structured process and the use of specifications where they are not appropriate.

Hence you have to be really careful here. If someone is asking you to sign up to an outcome based traditional model which in fact means delivery against a defined specification document for the entirety of a large complex system using a very structured process THEN you’ll almost always end up with massive cost overruns and happy vendors / consultants.

I have to be clear: IMHO this is a scam and it has been known about for a long time.

So which way does the report lean? The report talks about documentation, highlighting the example of MPA, and promotes pushing control to CCS (Crown Commercial Services). Hence we can be pretty confident that this will break down into specification documents. It argues “While GDS focuses on embedding quality staff within programmes, MPA pursues more formalised and documented processes” and then it promotes the view of MPA as the solution.

This argument is not only wrong, it is mischievous at best. GDS focuses on user needs and uses high quality staff to build complex projects. It does a pretty good job of this and its output is functioning systems. MPA focuses on ensuring the robustness & soundness of the projects that are undertaken. It does a pretty good job of this and its output is formal documents. You can’t say “they write documents, we like specification documents and therefore you should use those sorts of documents” as the context is completely different. Some parts of a large complex project can and should be specified because they are known. Other parts are going to have to be explored. Some parts will need an outcome based approach. You're going to need good "quality" engineers to know and do this, along with specialists in procurement to support them.

The report then adds another twist - “As a matter of urgency, in order to manage commercial risk, all commercial activities within GDS should be formally passed over to the newly transformed Crown Commercial Service (CCS)”. Let us be clear on what this means. In all probability, we're going to end up forcing specification documents (an almost inevitable consequence of trying to get 'good value' from a contract) even where they're not appropriate and handing them over to procurement specialists who are unlikely to have the necessary engineering skills to challenge what the vendors say. This is exactly what went wrong in the past.

IMHO, a more honest recommendation would be “As a matter of urgency, Gov should start handing over large wads of cash to us, preferably in a truck”.

For reference, if you want to know how to deal with a complex system then once you have a map, I find the following a useful guide. Please note that for both methods and procurement techniques, multiple approaches are needed in a large complex system. This is also another reason why you map: to break complex systems into components you can treat effectively. I cannot reiterate enough how important it is to have purchasing specialists supporting the engineering function. You don't want to lose those skills necessary to challenge. NB the diagram is not a replacement for thought, it's just a guide.

Methods & Purchasing.



THE EFFICIENCY RISK:
“With a monopoly position and a client-base compelled to turn to GDS for advice, there is a risk that they could become an inefficient organisation”

Should we roll the clock back, see what it was like before GDS and talk about inefficient organisations? I think Sally Howes, the NAO's executive leader, sums it up very politely with the statement “the government, Parliament and my own organisation, the NAO, were very aware of how the old fashioned world of long, complex IT projects limited value for money”.

To put it bluntly in my language, we were being shafted. We're nowhere near the end of the journey with GDS and the report completely ignores how Departments are adapting and growing capabilities. There's not much I can find to like in the report, though some bits did make me howl.

I loved the use of “proven methods” in the paper followed by “excellent opportunity for CCS to show that it can meet the needs of a dynamic buying organisation”. So basically, we believe in evidence and because of that we recommend you experiment with something unproven that smells a lot like the past? Magic.

However it is only surpassed by “This paper has no evidence to suggest that GDS is too big or too expensive to achieve its aims” which followed a rant of “Is this meeting the needs of the government departments or is this excessive? Are they the right staff? Are they being paid enough? Do they have the appropriate skills?”

That’s consultant gold right there. I’m going to create a whole bunch of doubts about a problem I’ve no evidence exists in order to flog you a solution you probably don’t need. Here, have my wallet - I’m sold!

The paper then goes on to recommend “To ensure market-driven efficiency of the remaining advisory function, this paper recommends that the advisory function form a joint venture with the private sector, allowing it to grow fast and compete for work alongside other suppliers”. Hang on, we have G-Cloud, we have GDS, we have growing Departmental skills and we should hand advisory over to the private sector because it previously provided “limited value for money”?

I’m guessing they are after more than one truck load of cash. I’m pretty sure this isn’t the “high level vision of the future” that the Government is after.

Now don't take this post to mean that GDS is perfect, far from it. There’s plenty of good discussion to be had about how to make things better and about how departments can provide services to other departments. There has been some misinterpretation (e.g. the Towers of SIAM) and there has been some oversteering (e.g. a tyranny of agile) but that’s normal in such a complex change. The achievements already have been pretty remarkable but no-one should be under any illusion that it can’t be better. It can.

However, reasonable discussion or debate doesn't involve a consultancy publishing a report flogging a bunch of dubious and outdated methods - let’s take skill away from challenge, let’s hand over advisory to the private sector, let’s focus on specification documents - as solutions to risks which aren't even quantified. There's nothing to debate, it's just mudslinging. I'm guessing that's why they published it at a time when no-one could respond.

But what about the motivations of the authors? I see one is the head of a government consultancy practice and so is the other. I’m guessing they’re hoping to be on the advisory board and paid handsomely for such pearls of wisdom.

I note that Andy Mahon has “wide experience in public sector procurement” gained from his 28 years at BDO, Grant Thornton, KPMG and Capita covering initial business case to PFI. I’m not convinced that someone with so much experience of flogging to Government and working for a consultancy flogging to Government can ever be considered impartial when it comes to advising Government on how not to be flogged.

Now Jack Perschke is a different matter. He has a long background in different areas; he also worked for the ICT reform group and was a Programme Delivery Director for the Student Loans Company Transformation Programme. Which makes this report a bit odd, given his background.

From the minutes of the Student Loans Company (though Jack had just left), the board even took time to praise GDS, noting “the engagement with Government Digital Services (GDS) had been very helpful” and “GDS had improved the understanding of the work required, particularly around the build/buy options”. Further minutes talk about ongoing discussion, challenge and support, e.g. from “responding to the conditions set by the Government Digital Service (GDS), including the benchmark for Programme costs” to the Board noting that “GDS were a key partner in the Programme”.

Surely this is how things should work? I’m surprised Jack Perschke didn’t see that. I can't see how you'd conclude this was a bad thing.

Well, if there is some good to come from the document, some silver lining, then IMHO it is that this document provides further indirect evidence of why Government should develop its own capability, skills and situational awareness throughout GDS and the departments. These sorts of reports and outside consultancy engagements rarely bring anything of value other than for the companies writing them.

I think my reading of “major risks which need to be sorted” as code for “pay us lots of money” is about spot on.

I'll come back to this next week as I want to see what else crawls out of the woodwork here. I don't like civil servants being attacked especially by self interested outside consultants at a time when civil servants can't respond.

Thursday, April 23, 2015

Pick a course, adapt as needed.

Ok, a bit of history to begin with. When I took over running Fotango (a Canon Europe subsidiary), it was a loss making organisation. It took me a year to make it profitable. We grew the business by taking our skills and applying them to relevant areas. In the end we were managing, developing and operating over a dozen major systems with millions of users.

However, we had constraints. The two most challenging of these were head count and profitability. We had to operate on a basis of no head count increase (this was due to a parent wide rule), which forced us to automate more, re-use and find ways to create space for development. The second constraint was that we had to be profitable - every month. The latter is a real headache when you have millions in the bank but can't invest. Any investment we wanted to make had to come through operational efficiency, which in no small part is why we ended up implementing some of the first web based private infrastructure as a service, auto configuration, continuous deployment and self healing tools between 2003-2005.

In the board room, James and I used a map to determine where we could attack, to plot our path. I've taken a version of that map and rolled it forward to mid 2007 in order to illustrate some points. The map is provided in the following figure.


Now the map gives us the position of things in a value chain (from visible user need to hidden components) versus movement (i.e. how things evolve). On this map is one of the lines of business we had.

From the map, there are several points we could attack.

Point 1 - Attack compute provision as a utility. We actually had a system called Borg which ran our private IaaS. We had offered this to other vendors and planned to open source it later in 2007. Whilst we couldn't build a public IaaS (due to the capital investment required and the constraints we had), that didn't mean we didn't want to see a fragmented market of providers in this space.

Point 2 - Attack platform provision as a utility. We had actually embarked on this route based upon the earlier maps and launched the first public platform as a service, known as Zimki. We had all the capabilities necessary to build it and back in 2005 we had anticipated someone else would launch a public IaaS. I thought it was going to be Google; it turned out to be Amazon. The importance of a public IaaS for us was that it would get us over our investment constraint. We planned to open source the space in late 2007, had the components for an exchange, a rapidly growing environment etc. The play itself was almost identical to Cloud Foundry today.

Point 3 - Attack CRM as a service. We had looked at this in 2005, decided we didn't have the skills and others were moving into the space.

Point 4 - Attack Apps on Smart Phones. Back in 2004 we were working on mobile phones as cameras; however there was no way to anticipate the development of the iPhone. In 2007, we might have made a play in this space based upon past skills but we had effectively removed those parts of the value chain from the organisation. We had to concentrate somewhere in 2005 - we had the constraint of resource growth and we had to make a choice. That choice was the platform play. But in 2007, it could be an option.

Point 5 - Build something new. We certainly had the capability to experiment; we used hack days and other tools to come up with a range of marvellous ideas. However, the resource constraint meant we needed to industrialise the platform and get ourselves and others to build on top of it. We could therefore use ecosystem effects to sense and identify future success.

Now, I've simplified a lot of the thought processes along with the actual map, but the point I want to make is that we had multiple points of attack - the WHEREs. The WHY was a discussion over which of these we could exploit given our constraints (such as resources & investment) and our capabilities. This gave us our DIRECTION.

Each node or point on a map actually breaks down into a more complex map of underlying components. Some of those were novel (the uncharted) and some were more commodity (the industrialised). We knew how to apply multiple methods (agile, six sigma etc) appropriately, how to build and exploit ecosystems, and we had a vast range of tactical games we could use.

However, once we determined our DIRECTION, we moved deliberately along that path. Yes, we had very fast deployment and development cycles. Multiple builds to live in a single day for components was nothing special. However, that tempo wasn't uniform. Releases in the uncharted space would happen continuously. In the more transitional space (between uncharted and industrialised) the tempo slowed down considerably, and by the time you reached the industrialised then releases could be monthly and much more regimented. We had been running as an API shop since 2003 and we had long learned the lesson that you couldn't go around changing the APIs of the deep underlying components ten times a day without causing friction and cost in the higher order systems.

This is why we'd look to move those more industrialised, lower order components to outside, stable utility providers. Unfortunately, though we anticipated their development, none existed in 2004 - 2005. There wasn't an Amazon, Scalr, RightScale, Chef or any of the other infrastructure, management, configuration and monitoring environments. We had to build all this just to get to the platform layer, and our speed depended upon the stability of lower order interfaces.

Take something today like Netflix. They could not have existed if Amazon changed the core APIs of EC2 twenty to thirty times a day. Stability of the interfaces of lower orders is critical for the development of higher orders - this is basic componentisation.

Now, Fotango's story ended due to a sorry tale of strategic advice, which is why you often find me in conferences throwing rubber chickens with the words "Situational Awareness" at big name consultancies as they mumble "blah, disruption, blah, digital, blah, cloud, blah, ecosystem, blah, innovation, blah". Especially at Big Data conferences, where they seem to gather to flog blobs of "wisdom" to the unsuspecting masses.

However, there are some things I do want to remind people of.

Have a DIRECTION. 
This is one of the most important parts of mapping & improving situational awareness. You not only need to learn to use multiple methods (e.g. agile, lean and six sigma) but you also need to understand the landscape and steer your way through it. Maps are dynamic and yes, sometimes you have to pivot based upon changing conditions. However, Agile is not a solution for an indecisive and variable management. When moving in uncharted space you still need a DIRECTION and to adapt to what you discover. You don't need a captain who can't keep a decision for more than five minutes without changing it. If the reason you're using Agile is because your manager is going Fire! Aim! Change Course! Don't Fire! Did we Fire? Fire! No, Don't Fire! Change Course! Change Course! Don't Fire ... wait .. FIRE! ... No! Change Course! then you've got bigger problems than methods.

Move APPROPRIATELY fast.
Yes, continuous release processes are great for exploring uncharted spaces and building higher order systems. However, you need stability of interfaces at lower order systems (which includes not just syntax but semantics). For anyone who doesn't understand this, hire a crew of electricians to replace all the sockets in the buildings & data centre with sockets and transformers supplying power equivalent to a different region. Call it a 'release' and watch the expressions of horror when nothing works / plugs in. After they scream murder at you and finally get around to setting some stuff up, send your electricians around to replace it all with another region. Do try shouting at everyone that you were only being adaptive & innovative whilst they beat you with their dead computers.

Focus on USER needs
That's the first step of mapping and hardly worth repeating because you should be doing this. Of course, if you weren't actually doing this you might run around changing plug sockets to a different region. Ditto some of the changes I see in the online world.

Before anyone says "Oh but we can make special adaptors to cope with the change" which invariably leads to a host of different competing standards and then someone creating the standard of standards ... just give up.

Use APPROPRIATE methods
I'll use one diagram and go - enough said.


If anyone feels like going "have you considered using dual operating / twin speed IT / bimodal?" or any of the other "organise by the ends" lines ... don't even go there.

Wednesday, April 22, 2015

AWS to report

Many, many years ago, back in the days when I worked at Canonical, I calculated a forward run rate for AWS. This was based upon existing analyst predictions of revenue, a period of exponential change (a punctuated equilibrium), some expectation of price elasticity and a lot of voodoo & jiggery-pokery.

I said that eventually Amazon would have to report the AWS earnings (e.g. due to the 10% reporting rule, SFAS 131) though I expected this to be in 2016. I would occasionally add on analyst predictions each year to confirm / deny the change but the problem was - no-one really had a clue. It was all speculation.

So, looking at the model, where did I have 1Q2015 pegged? I had it pegged at a forward run rate of $2.38 billion per quarter. By the end of 2015, I was expecting Amazon to have an annual forward run rate of $16 billion p.a. and hence for each subsequent year to make more than $16 billion p.a. in revenue.

If you think this sounds an odd way of doing things - that's because it is a bit odd. The model is based upon a future test of a hypothesis that something is greater than a certain value rather than based upon trying to calculate what a value is at some specific point in time. There is a reason for this but it's rather obscure and not what is of interest.

Figure - Forward Run Rate


My interest is not so much in tomorrow's reporting (and I suspect there will be gasps in some quarters) but in the subsequent quarters and the rate of change. My interest is in just how fast the punctuated equilibrium is moving.

I do get asked what I think the reported revenue will be. I haven't got a clue.

If I took the forward run rates of the model for that quarter and the previous one, then simply taking an average would have revenue reporting at around $2.2 billion. But this ignores any variation due to price changes and any seasonality impacts (I really only concern myself with the magnitude of the annual figures). And given this model was written many years ago - it's based upon a lot of assumptions, and actual revenues depend upon competitors' actions - then even if it's close, that's more luck than judgement.
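As a back-of-envelope illustration (the prior quarter's run rate below is my inference so the numbers line up, not a figure from the model):

# Rough illustration of the averaging. $2.38Bn is the model's 1Q2015 forward
# run rate; the ~$2.0Bn prior quarter is assumed here purely so the average
# lands near the ~$2.2Bn mentioned above.
q1_2015 = 2.38   # $Bn per quarter, from the model
q4_2014 = 2.00   # $Bn per quarter, assumed for illustration

estimate = (q1_2015 + q4_2014) / 2
print(f"rough revenue estimate: ${estimate:.2f}Bn")   # ~$2.19Bn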

I'll be happy if we're talking about AWS revenue in $Bns because that at least demonstrates the change was not linear and the punctuated equilibrium is in full effect. Still, the waiting should be over. We should find out soon enough but I'll need a few more quarters of data to get a really clear picture.

Tuesday, April 21, 2015

Devops ... we've been here before, we will be back again.

In this post I want to explore the causes of DevOps and how you can use such knowledge to your advantage in other fields. I'm going to start with a trawl back through history and four snippets from a board pack in early 2007. These snippets describe part of the live operations of Fotango, a London based software house, in 2006.

Snippet 1


We were running a private infrastructure as a service with extensive configuration management, auto deployment and self healing (design for failure) of systems based upon cfengine. We were using web services throughout to provide discrete component services and had close to continuous development mechanisms. In 2006, we were far from the only ones doing this but it was still an emerging practice. I didn't mention agile development in the board pack ... that was old hat.

Snippet 2


To be clear, we were running a private and a public platform as a service back in 2006. This was quite rare and still a very early emerging practice.

Snippet 3


In early 2007, we had switching of applications between multiple installations of platform as a service from our own private infrastructure as a service (Borg) to one we had installed on the newly released EC2. This was close to a novel practice.

Snippet 4


By early 2007 we were working on mechanisms to move applications or data between environments based upon cost of storage, cost of transfer and cost of processing. In some cases it was cheaper to move the data to the application, in other cases the application to the data. We were also playing some fairly advanced strategic games based upon tools like mapping. However, one of my favourite changes (which we barely touch on today) came from having pricing information down to the function. This can significantly alter development practices i.e. we used to spend time focusing on specific functions because they were costly compared to other functions. You could literally watch the bill racking up in the real time billing system as your code was running, and one or two functions always stood out. This always helps concentrate the mind, and this was in the realm of novel practice in 2007.

Much of what we talk about regarding DevOps and the changes in practice today is not new. It is simply becoming good practice in our industry. For the majority of these changes, the days of novel and emerging practice have long gone. Many companies are however only just starting their journey and whilst most will get some things right - design for failure, distributed systems, use of good enough components, continuous deployment, compartmentalising systems and chaos engines - many are almost certainly doomed to repeat the same mistakes we made long ago - single size methods (agile everywhere), bimodal and API everything (some things just aren't evolved enough yet). Much of that failing will come from our desire to apply single methods without truly understanding the causes of change ... but we will get to that shortly.

The above is all perfectly normal and so is the timeframe. On average, it can take 20 to 30 years for a novel practice to become defined as a best practice. We're actually a good 10-15 years into our journey (in some cases more), so don't be surprised if it takes another decade for the above to become common best practice. Don't also be surprised by the clamouring for skills in this area, that's another normal effect as every company wakes up to the potential and jumps on it at roughly the same time. Demand always tends to outstrip supply in these cases because we're lousy at planning for exponential change.

However, this isn't what interests me. What fascinates me is the causes of change (for reasons of strategic gameplay). To explain this, I need to distinguish between two things - the act (what we do) and the practice (how we do stuff). I've covered this before but it's worth reiterating that both activities and practices evolve through a common path (see figures 1 & 2) driven by competition.

Figure 1 - Evolution of an Act


Figure 2 - Evolution of Practice


Now, what's important to remember is that the practice is dependent upon but distinct from the act. For this reason practices can co-evolve with activities. To explain, the best architectural practice around servers is based upon the idea of compute as a product (the act). These practices include scale up, N+1 and disaster recovery tests. However, best architectural practice around IaaS is based upon the idea of compute as a utility i.e. volume operations of good enough components. These practices include scale out, design for failure and chaos engines. In general, best practice for a product world is rarely the same as best practice for a utility world.

However, those practices have to come from somewhere and they evolve through the normal path of novel, emerging, good and best practice. To tie this together I've provided an example of how practice evolves with the act in figure 3 using the example of compute. 

Now, normally with a map I use an evolution axis of genesis, custom built, product (+rental) and commodity (+utility). However, practices, data and knowledge all evolve through the same pattern of ubiquity and certainty. So on the evolution axis I could use :-

Activities : Genesis, Custom Built, Product, Commodity.
Practices : Novel, Emerging, Good, Best.
Data : Unmodelled, Divergent, Convergent, Modelled.
Knowledge : Concept, Hypothesis, Theory, Accepted.

For simplicity's sake, I always use the axis of activities but the reader should keep in mind that on any map, activities, practices, data and knowledge can all be drawn. In this case, also for reasons of simplicity, I've removed the value chain axis.
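For those who like these terms written down as data, here are the same stages per class as a trivial structure (my own encoding, nothing more):

# The evolution stages per class, using the terms listed above. A component's
# stage index (0-3) can then be translated into the right term for its class.
EVOLUTION_TERMS = {
    "activities": ("genesis", "custom built", "product", "commodity"),
    "practices":  ("novel", "emerging", "good", "best"),
    "data":       ("unmodelled", "divergent", "convergent", "modelled"),
    "knowledge":  ("concept", "hypothesis", "theory", "accepted"),
}

def term(klass, stage):
    """Return the evolution term for a class at a stage index (0-3)."""
    return EVOLUTION_TERMS[klass][stage]

print(term("practices", 1))   # emerging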

Figure 3 - Coevolution of practice with the act


From the above, the act evolves to a product and new architectural practices for scaling, capacity and testing develop around the concept of a product. These practices evolve until they become best practice for the product world. As the underlying act now evolves to a more industrialised form, a new set of architectural practices appears. These evolve until they become best practice for that form of the act. This gives the following steps outlined in the figure :-

Step 1 - Novel architectural practices evolve around compute as a product
Step 2 - Architectural practices evolve becoming emerging and good practice
Step 3 - Best architectural practices develop around compute as a product
Step 4 - Compute evolves to a utility
Step 5 - Novel architectural practice evolves as compute becomes a commodity and is treated as a utility
Step 6 - Architectural practices evolve becoming emerging and good practice
Step 7 - Ultimately these good practices (DevOps) will evolve to become best practice for a utility world.

When we talk about legacy in IT, we're generally talking about applications built with best architectural practice for a product world. When we talk about DevOps, we're generally talking about applications built with best architectural practice for a utility world. Both involve "best" practice, it's just the "best" practices are different because the underlying act has evolved.

This process of co-evolution of practice with activity has occurred throughout history, whether in engineering, finance or IT. When the act that is evolving has a significant impact on many different and diverse value chains then its evolution can cause macro economic effects known as k-waves or ages. With these ages, the new co-evolved practices that emerge tend to be associated with new forms of organisation. Hence in the mechanical age, the American System was born. With the electricity age, we developed Fordism.

Knowing this pattern of change enabled me to run a set of population experiments on companies back in 2011, to confirm the model and identify a new phenotype of an emerging company form (the next generation). The results are shown in table 1.

Table 1 - Next generation vs Traditional organisations


It's precisely because I understood this pattern and how practices evolved that back at Canonical (2008-2009) we knew we had to attack not just the utility compute space but also the emerging practice space (a field which became known as DevOps). It was actually one of my only causes of disagreement with Mark during my time there, as I was adamant we should be adopting Chef (a system developed by a friend of mine, Jesse Robbins). However, Mark had good reasons to focus elsewhere and at least we could have the discussion.

When it comes to attacking a practice space, natural talent and mindset are key. In the old days of Fotango, I captured a significant proportion of the talent in the Perl industry through the creation of a centre of gravity (a post for another day). It was that talent that not only created the systems but discovered the architectural practices required to make them work. Artur Bergman (now the CEO of Fastly) developed many of the systems and subsequently was influential in the Velocity conference (along with Jesse). Those novel practices were starting to evolve in 2008.

In the Canonical days, I employed a lesser known but highly talented individual who was working on the management space of infrastructure - John Willis (Botchagalupe). Again my focus was deliberate; I needed someone to help capture the mindset in that space and John was perfect for the role. I didn't quite get to play the whole centre of gravity game at Canonical and there were always complications, but enough was done. John himself has gone on to become another pillar of the DevOps movement.

Now, this pattern of co-evolution of practice and activity repeats throughout history and we have many future examples heading our way in different industries. All the predictable forms of this type of change are caused by the evolution of underlying activities to more industrialised forms. For example, manufacturing should be a very interesting example circa 2025-2035 due to commoditisation of underlying components through 3D printing, printed electronics and hybrid printing enabling new manufacturing practices. It even promises an entirely new form of language - SpimeScript - which is why the Solid conference by O'Reilly is so interesting to me. Any early signs are likely to appear there.

It's worth diving a bit deeper into this whole co-evolution subject and for that I'm going to use Dave Snowden's Cynefin framework. For those who don't know this framework, I suggest reading up on it. In figure 4, I've provided a general image to describe the framework.

Figure 4 - Cynefin.


CC3.0 SA by Dave Snowden

So let us go back in time to when the first compute products were introduced i.e. the IBM 650. Back then, there was no architectural practice for how to deal with scaling, resilience and disaster recovery. These weren't even things in our mindset. There was no book to read, there was no well trodden path and we had to discover these practices. What became obvious later was unknown, undiscovered and uncharted.

Hence people would build systems with these products and discover issues such as capacity planning and failure - we acted, we sensed and then we had to respond to what we found. We had to explore what the causes of these problems were and create models and practices to try and cope. Those practices were as emerging in the late 1960s as the practices of Fotango were in the mid 2000s. As our understanding of this space grew, those practices developed. We built expertise in this space and the tools to manage it. We talked of bottlenecks and throughput, of N+1, of load and of capacity. We started to anticipate the problems before they occurred - running out of storage space became a sign of poor practice. We sensed our environment with a range of tools, we analysed for points of failure and we responded before failure happened. Books were written and architectural practice moved firmly into the space of the good. We then started to automate more - RAID, hot standby, clusters and endless tools to monitor and manage a complex environment of products (compute as services). Our architectural practice became best practice.

But as the underlying act evolved from compute as a product to compute as more of a commodity and ultimately a utility, the entire premise on which our practices were based changed. It wasn't about THE machine, it was about volume operations of good enough. We had to develop new architectural practices. But there was no book, no well trodden path and no expertise to call on. We had to once again use these environments, sense what was happening and respond accordingly. We created novel architectural practices which we refined as we understood more about the space. We learnt about design for failure, distributed systems and chaos engines - we had to discover and develop these.

As we explored we developed tools and a greater understanding. We started to have an idea of what we were looking for. The practices started to emerge and later develop. Today, we have expert knowledge (the DevOps field), a range of tools and well practiced models. We're even starting to automate many aspects of DevOps itself. 

The point to note is that even though architectural practice developed to the point of being highly automated, best practice and "obvious" in the product world, this was not the end of the story. The underlying act evolved to a more industrialised form and we went through the whole process of discovering architectural practices again.

Now, that change of practice (and the related governance structures) is one of the sixteen forms of inertia to change that companies face. However, because of competition dynamics, this change is inevitable (the Red Queen effect). We don't get a choice about this and that gives me an advantage. To explain why, I'll use an example from a company providing workshops.

The Workshop

This example relates to a company that provides workshops and books related to best practice in the environmental field. It's a thriving business which provides expert knowledge and advice (embodied in those workshops and books) about effective use of a specific domain of sensors. I have to be a bit vague here for reasons that will become obvious. The sensors used are quite expensive products but new more commoditised forms are appearing, mainly in Asia. At first glance, this appears to be beneficial because it'll reduce operating costs and is likely to expand the market. However, there is a danger.

To explain the problem, I'm going to use a very simple map on which I've drawn both activity and practice to describe the business (see figure 5).

Figure 5 - The Business



The user need is to gain best practice skill in the use of the sensors; the company provides this through workshops and associated materials such as books based upon best practice. Now, the sensors are evolving. This will have a number of effects (see figure 6).

Figure 6 - Impact of the Change


From the above,

Step 1 : the underlying sensor becomes a commodity
Step 2 : this enables a novel practice (based upon commodity sensors) to appear. This practice will evolve, becoming emerging and then good.
Step 3 : the existing workshop business will become legacy
Step 4 : a workshop business based upon these more evolved practices will develop, and it's the future of the market.

This change is not just about reducing the operational costs of sensors; the whole business of the company will alter. The materials (books, workshops, tools etc) that they have will become legacy. Naturally the company will resist these changes as they have a pre-existing business model, past revenues to justify the existing practices and a range of current skills, knowledge and relationships developed in this space. However, it doesn't matter, because competition has driven the underlying act to more of a commodity and hence a new set of practices will emerge and evolve, and the existing business will become legacy regardless.

Fortunately this hasn't happened yet. Even more fortunately, with a map we can anticipate what is going to happen, we can identify our inertia, we can discuss and plan accordingly. We know those novel practices will develop and we can aim to capture that space by developing talent in that area. We know we can't write those practices down today and we're going to have to experiment, to be involved, to act / sense and respond.

We can prepare for how to deal with the legacy practices, possibly aiming to dispose of part of this business. Just because we know the legacy practice will be disrupted doesn't mean others do, and if we have a going concern then we can maximise capital by flogging off this future legacy to some unsuspecting company or spinning it off in some way. Of course, timing will be critical. We will want to develop our future capability (the workshops, tools, books and expertise) related to the emerging practice, extract as much value from the existing business as possible and then dump the legacy at a time of maximum revenue / profit on the market, without the wider industry being aware of the change. If you've got a ticking bomb, never underestimate the opportunity to flog it to the market at a high price. Oh, and when it goes off, don't miss out on the opportunity of scavenging the carcass of whatever company took it for other things of value e.g. poaching staff.

There's lots we can do here, maybe spread a bit of FUD (fear, uncertainty and doubt) about the emerging practices to compound any inertia that competitors have. We know the change is inevitable but we can use the FUD to slow competitors and also give us an ideal reason (internal conflict) for diversifying the business (i.e. selling off the future "legacy" practice). There's actually a whole range of games we can play here from building a centre of gravity in the new space, disposal of the legacy (known as pig in a poke), to ecosystem plays to misdirection.

This is why situational awareness and understanding the common patterns of economic change is so critical in strategic gameplay. The moves we make (i.e. our direction) based upon an understanding of the map (i.e. position and movement of pieces) will be fundamentally different from not understanding the landscape and thinking solely that commodity sensors will just reduce our operational costs. This is also why maps tend to become highly sensitive within an organisation (which is why I often have to be vague).

When you think of DevOps, don't just think about the changes in practice in this one instance. There's a whole set of common economic patterns it is related to, and those patterns are applicable to a wide variety of industries and practices. Understanding the causes and the patterns is incredibly useful when competing in other fields.

DevOps isn't the first time that a change of practice has occurred and it won't be the last. These changes can be anticipated well in advance and exploited ruthlessly. That's the real lesson from DevOps and one that almost everyone misses.

Monday, April 20, 2015

A Labour / Conservative Coalition

For over 20 of the last 100 years, we've had a coalition Government formed by the two major parties. Given the current economic climate, I hold a view that a strong Government created by a coalition between Labour and Conservatives is desirable. 

But could it work? 

"No!" is the cry most often heard. So this weekend I went through dissecting both manifestos and the structure of both parties. I'm not convinced that such a coalition couldn't work in terms of the structure of MPs & Ministers and the commitments made in the manifestos. I put together an ideal cabinet and list of senior ministers and worked on a joint manifesto that could be "negotiated" and just about funded (based on the limited information I have and lots of assumptions I made). 

To me, it makes sense but then so does a Labour / Conservative coalition.

So anyway before the guffaws start, here is ...

Wardley's Naive Manifesto of Working Togetherness for the Common Interest




... oh and before you tell me this could never happen, I'm fully aware that it would require both parties putting aside political interests and working for the common interest. I already know how unlikely that is. That doesn't mean that I can't dream.

I'll put up the table with the composition of such a Government tomorrow. 

Thursday, April 09, 2015

What's in a Wardley (Value Chain) Map?

It doesn't matter whether it's a map of an organisation or line of business or policy or system or industry. The same elements exist (see figure 1).

The elements of a map




1). You have the User needs.

2). You have many chains of needs. 

3). Those chains of needs consist of components whether activities, practices, data or knowledge.

4). The entire "Value Chain" (i.e. all the chains of needs and their components meeting the user need) provides positional information on the landscape i.e. what relates to what. It is called a value chain because the assumption is that value is created by meeting the needs of others.

5). Every component is evolving where supply and demand competition exists. So the map is under constant evolutionary flow; it's not static. As components evolve, their characteristics change from uncharted to industrialised.

6). By mapping against evolution you can therefore see movement and identify how things will change. The map enables you to describe an organisation or line of business or system (e.g. value chain) against change (evolution) and therefore provides positional information and movement. This is critical for any form of situational awareness, which in turn is useful for organisational learning (i.e. what methods, patterns and techniques work in a given context).

7). Within the map itself you have various flows e.g. risk & finance. If you wish you can knock yourself out with fault tree analysis, value stream mapping and all other sorts of flow. They all have uses. 

8). The entire map occurs in a landscape against which competitors, market changes, other maps of other systems and strategic plays can be shown. This can be used for numerous techniques, from removing bias & silos to gameplay. Maps are rather easy communication tools, useable across functions in the organisation.

The defining characteristic of this form of mapping is position and movement. It's all about improving situational awareness.
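If you prefer code to prose, here's a minimal sketch of those elements as a data model (the names, fields and numbers are mine, purely for illustration - not a standard):

# An illustrative (not standard) data model of a map: components sit in a
# chain of needs (position) and carry an evolution score (movement), with
# 0.0 being uncharted and 1.0 being industrialised.
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    kind: str        # "activity", "practice", "data" or "knowledge"
    evolution: float # 0.0 = uncharted ... 1.0 = industrialised
    needs: list = field(default_factory=list)  # the chain of needs (edges)

# A tiny, made-up value chain: user need at the top, components beneath.
power = Component("power", "activity", 0.95)  # an industrialised commodity
rail = Component("high speed rail", "activity", 0.3, [power])
user_need = Component("get from A to B quickly", "activity", 0.4, [rail])

def walk(c, depth=0):
    """Print a chain of needs from the user need downwards."""
    print("  " * depth + f"{c.name} (evolution={c.evolution})")
    for dep in c.needs:
        walk(dep, depth + 1)

walk(user_need)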

Sunday, April 05, 2015

The only structure you'll ever need ... until the next one.

Back in 2004, I was the CEO of a Canon subsidiary and I faced multiple problems. We had issues of business & IT alignment, poor communication, dissatisfaction and a lack of clarity in strategy. Don't get me wrong, we had strategy documents but they were pretty much identikit copies of every other company's out there. What we did, and what I've since refined, solved all those problems because they're all associated.

The first part of this journey was creating a map of our landscape. The map has two elements: it shows the position of the pieces and how they can move. The position is expressed through a value chain from the user needs to the components required to meet those needs. The movement is expressed through an evolution axis that covers activities (what we do), practice (how we do things), data and knowledge. I usually simplify that axis to show activities alone but in this case (see figure 1) I've added a table to show all the different elements. In other words, on a single map you can show activities, practices, data and knowledge if you choose to do so.

Figure 1 - A Map



Table 1 - Different Classes. Terms used.
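Since the table itself is an image, here's a rough text rendition of the idea. I'm recalling the usual labels for each class from memory, so treat the exact terms as assumptions rather than a definitive table.

```python
# Rough sketch of Table 1: how each class of component is labelled as it
# evolves left to right across the map. Labels are my recollection of the
# usual terms - assumptions, not a definitive table.
EVOLUTION_LABELS = {
    "activities": ["genesis", "custom built", "product (+rental)", "commodity (+utility)"],
    "practices":  ["novel", "emerging", "good", "best"],
    "data":       ["unmodelled", "divergent", "convergent", "modelled"],
    "knowledge":  ["concept", "hypothesis", "theory", "universally accepted"],
}
```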



When I first produced the map of my company, I didn't realise the importance of sharing it outside the executive team. Our map had our strategic play on it. We had quickly learned a number of common economic patterns, how characteristics of components changed as they evolved (from the uncharted to the industrialised) and methods of manipulating the landscape (from open source to patents to constraints).

We used the map to determine where we could attack and from this formulated the why (as in why here over there). Hence we moved into the cloud in 2005, building the first public platform as a service. I used the same technique to help Canonical successfully attack and dominate the cloud space in 2008.

I subsequently learned that by sharing the maps I could not only improve situational awareness but remove bias, silos, misalignment and inefficiency in huge organisations and also provide a clarity of purpose throughout the organisation. Every team knows the maps (there's often a master and many sub maps for specific areas). They know where their part fits in. 

When it came to organisation, I used a Pioneer - Settler - Town Planner structure. This is a derivative of Robert X. Cringely's Accidental Empires (1993). The first step is to break the map into small teams. Today, we use cell based teams with each team less than twelve people (the Amazon two pizza rule). The team should have autonomy over how it organises and runs itself but should have certain conditions (i.e. a fitness function) that it is measured against.
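As a minimal sketch of that constraint in code - the names (Cell, fitness) and the exact checks are invented for illustration, not a prescribed implementation:

```python
# A toy sketch of a cell: a small team owning part of the map, with a
# fitness function it is measured against. Names and checks are invented.
from dataclasses import dataclass, field
from typing import Callable

MAX_CELL_SIZE = 11  # "less than twelve people" - the two pizza rule

@dataclass
class Cell:
    component: str                       # the part of the map this cell owns
    members: list = field(default_factory=list)
    fitness: Callable = lambda cell: True  # conditions defined at the interfaces

    def add(self, person: str) -> None:
        # autonomy over how it runs itself, but a hard cap on size
        if len(self.members) >= MAX_CELL_SIZE:
            raise ValueError(f"{self.component}: over the limit - time to subdivide")
        self.members.append(person)

    def healthy(self) -> bool:
        return self.fitness(self)

team = Cell("web site", fitness=lambda c: len(c.members) > 0)
team.add("alice")
print(team.healthy())
```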

The problem however is that whilst each team will require certain aptitudes (e.g. engineering, finance, marketing), the attitude required changes as the components that team manages evolve. For example, engineering in the uncharted space is agile but in the industrialised space it is more six sigma (see figure 2).

Figure 2 - Changing Characteristics and Methods with Evolution.


Back in 2005, we had Agile and Six Sigma and were struggling with the middle method. We saw the same problem with purchasing, with finance, with operations, with marketing. We also noticed that some people were more adept at one end of the spectrum than the other.

We knew that new things appeared in the market and were bolted onto organisations, just as Chief Digital Officers are being bolted on today. We also knew that today's new stuff is tomorrow's legacy. So, we decided to mimic the outside process of evolution internally within the organisation. We created a structure based on pioneers, settlers and town planners and let people self-select which group they were in. We started with IT and rolled the rest of the business into it. We also introduced a mechanism of theft to replicate the process of evolution in the outside world. See figure 3.

Figure 3 - Pioneer, Settler and Town Planner


The advantage of this method is that we recognised there isn't such a thing as IT or finance or marketing but instead multiples of each. There are multiple ways of doing IT and each has its strengths, its culture and a different type of person. In 2005, we knew that one culture didn't work, and enabling people to gain mastery in one of these three domains seemed to make people happier, more focused. Try it yourself: take a pioneer software engineer used to a world of experimentation and agile development and send them on a three-week ITIL course. See how happy they come back. Try the same with a town planner: send them on a three-week course of hack days and experimentation in completely uncertain areas with lots of failure.

What we realised back then is that we needed brilliant people in all three areas. We needed three cultures and three groups. Oh, we had tried having just the two extremes (the dual operating system model) but they were too far apart. I've seen that approach fail repeatedly since then.

Combine this with a map and a cell based approach and what you end up with is figure 4.

Figure 4 - PST in a cell based organisation.


It's important to note :-

1) The maps are essential to this process. They also give purpose to each team. You know what you're doing, where you fit in.

2) The cell based structure is an essential element and the maps should be used to create it. Those cells need to have autonomy in their space. The interfaces between the teams are used to help define the fitness functions. Co-ordination between teams can be achieved through Kanban. If a cell sees something it can take tactical advantage of in its space (remember, each cell has an overview of the entire business through the map) then it should.

3) The cells are populated not only with the right aptitude but the right attitude (pioneers, settlers and town planners). This enables people to develop mastery in their area and allows them to focus on what they're good at. Let people self-select their type and change at will until they find something they're comfortable with. Reward them for being really good at that.

4) The process of theft is essential to mimic outside evolution. All the components are evolving due to supply and demand competition, which means new teams need to form and steal the work of earlier teams i.e. the settlers steal from the pioneers (and the outside ecosystems) and productise the work. This forces the pioneers to move on. Equally the town planners steal from the settlers and industrialise it, forcing the settlers to move on. A toy sketch of this flow follows the list.

5) The maps should also show the strategic play. Don't hide this; share it, along with targets of opportunity.

6) As new things appear in the outside world they should flow through this system. This structure doesn't require bolt-ons which you need to replace later.

7) As the cells grow they should subdivide into smaller teams (keep it to less than twelve people per cell). The map can help them subdivide the space, each with new fitness functions.

8) The map MUST start from user needs at the top. It has to be mapped over evolution (you can't use time, diffusion or hype cycles to do this - none of that works).

9) The executive structure becomes a CEO, a Chief Pioneer, a Chief Settler and a Chief Town Planner (think of Cringely's original commandos, infantry and police), though you'll probably use more traditional sounding names such as Chief Operating Officer, Chief Scientist etc. We did. I'm not sure why - I can't remember the reason. When we started in IT, we also called the groups developers, frameworks and systems. These days I wouldn't bother; I'd just make it clear and move on. You will need separate support structures to reinforce the culture and provide training to each group.

10) Any line of business, described by a map, will have multiple cells and therefore any line of business is likely to contain a mix of pioneers, settlers and town planners all operating to a common purpose. See figure 4.
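To make point 4 concrete, here's a toy sketch of the theft mechanic. The numeric evolution score, the thresholds and the component names are all invented purely for illustration, not a method to lift as-is.

```python
# Toy sketch of "theft": as a component evolves, ownership passes from
# pioneers to settlers to town planners. Scores and thresholds invented.
def owner(evolution: float) -> str:
    if evolution < 1 / 3:
        return "pioneers"
    if evolution < 2 / 3:
        return "settlers"
    return "town planners"

def evolve(components: dict, step: float = 0.1) -> None:
    # supply and demand competition nudges everything rightwards
    for name, score in list(components.items()):
        new_score = min(score + step, 0.99)
        was, now = owner(score), owner(new_score)
        components[name] = new_score
        if now != was:
            print(f"{now} steal {name!r} from {was}; {was} move on")

parts = {"novel widget": 0.20, "half-productised service": 0.50}
for _ in range(5):
    evolve(parts)
```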

Now, PST is a structure I've used to remarkable effect. In the last decade I've seen nothing which comes close and instead I've seen endless matrix / dual and other systems create problems. Is it suitable everywhere? No idea. Will something better come along ... of course it will.

So how common is a PST structure? Outside certain circles it's almost non-existent, never been heard of. At best I see companies dabbling with cell based structures - which, to be honest, are pretty damn good anyway and probably where you should go. Telling companies they need three types of culture, three types of attitude, a system of theft, a map of their environment and high levels of situational awareness is usually enough to get managers to run away. It doesn't fit into a nice 2 x 2.

It also doesn't matter for most organisations because you only need high levels of situational awareness and adaptive structures if you're competing against organisations who have the same. Will it become relevant over time ... well, maybe ... but by then we will have found the next 'best thing'.

Saturday, April 04, 2015

Near field, far field and the crazy ideas

In any year, there are over 70,000 publications covering the future. From books to magazines to short stories to scripts to papers to blog posts. Pure probability alone says someone, somewhere is going to get something right. 

Our history of prognostication is pretty poor. Isaac Asimov got it wrong - we're not living in underwater cities. Arthur C. Clarke got it wrong - we don't live in flying houses. Everybody gets a lot of stuff wrong. The problem is we're selective in our reading: we focus on the specks of right and ignore the forest of wrong.

Maps provide an imperfect view of the landscape. A geographical map is an imperfect representation of what is really there. The advantages of even imperfect maps are twofold. First, they can be improved through experience and sharing. Second, they give you an idea of the position and movement of pieces on the landscape. This latter part is extremely useful for strategy and anticipation.

Take figure 1. We have a line of business (represented by the dark line and points A to C) which describes a value chain for an organisation. This gives us an idea of the position of components within the organisation. But it's also mapped against evolution. This gives us an idea of movement.

Figure 1 - A Map


We can already anticipate that components will evolve due to supply and demand competition. We can anticipate future changes based upon componentisation effects e.g. the evolution of A to an industrialised component will allow the formation of D.  We have many places we could attack to create a new business or gain an advantage.
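As a toy illustration of that reading - A to D are lifted from figure 1, but the phases assigned to C and the single "enables" rule are my invention:

```python
# Toy anticipation pass over figure 1: product-phase components are
# candidates to industrialise, and industrialisation enables higher-order
# activities whose actual nature we can't name in advance.
components = {"A": "product", "B": "product", "C": "custom built"}  # C's phase assumed
enables = {"A": "D"}  # industrialising A permits the formation of D

for name, phase in components.items():
    if phase == "product":
        print(f"anticipate: {name} will evolve towards commodity / utility")
        if name in enables:
            print(f"  ...enabling a new, as-yet-unknowable activity {enables[name]}")
```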

Our history is built upon yesterday's wonder becoming today's dull, boring, highly commoditised and increasingly invisible component. An example of this is provided in figure 2. As each layer of components evolved to become more industrialised, they enabled higher order systems to appear, which in turn evolved.

Figure 2 - A view through history


Hence, we can use maps to anticipate the future and how it will impact the value chain of a company or industry. But how far can we anticipate? The problem is always in the genesis of new activities. These uncharted spaces are uncertain by nature and hence whilst we can anticipate that the evolution of electricity will enable something new, we can never actually say what that new thing will be. We didn't know that utility electricity would enable the digital computing industry. We had to discover that.

Hence back to our first map (figure 1). We know A, B and C will evolve in a competitive market. We know A and B will shift from the product space to become provided as more of a commodity (or utility). We know that this will enable new activities such as D. We just don't know when any of this will happen nor what D will actually be. On the timing part, we can use weak signals to give us a better idea of when. As for what things will be created - alas you're into the uncertain world of guesswork but you can make reasoned guesses.

For example, we know that today the world of intelligent agents is in the early product phase with Watson, Mindmeld, Siri, Google Now, robotics and the Google car. We know that over time this will evolve to commodity components with associated utility services, i.e. the intelligence in my phone (or other device) will be the same as that within my car, within my house, within everything. Everything will be "smart".

This will change my relationship with things. Every car will be self-driving, which will enable high speed travel in cities with cars in close proximity, as long as no humans are involved. Traffic signalling, car parks and the way we use cars will change. I'm unlikely to own a car but will instead "rent" one for short journeys. Hence we can paint a picture (or, as I prefer to do, draw a map).

In the future, as I leave my office for a meeting, a car will be waiting for me. It'll know where to go. I'll enter and the surfaces (all surfaces are screens) will automatically adjust to me. Everything adjusts to me - I'm used to that. The journey starts and the car informs me that I have an opportunity. My device, which is connected to a network of other devices, has determined that the person I'm going to meet will be late and that someone I want to meet - Alice - is in town. Given the traffic conditions, I can easily meet Alice for coffee and still arrive at my main meeting with Bob on time. The car will simply ask me - "Bob is going to be 20 minutes late, do you want to meet Alice for coffee beforehand?" - and then make it so. This is the "Any given Tuesday" scenario.

I'm driven to meet Alice; I have coffee in a cafe which already knew I was coming and had already brewed my drink, and then I'm driven to meet Bob by a car that I'll never own. In all likelihood the car in both journeys is not the same. Both will adjust to me as I zoom along London roads at 70 mph, a mere metre from the car in front and the car behind. The crossroads I fly through, narrowly missing cars turning and travelling perpendicular to me, have no traffic lights. Everything is different from today. Human drivers have long since been banned. This is 2045.

By simply understanding value chains, how things evolve to become more industrialised and the state of things today, we can do a pretty good job of anticipating the obvious. The above is no feat of prognostication. It's simply the standard impact of things evolving (i.e. commoditisation) to more industrial forms. Where it gets tricky is when we look for what new things will appear.

For example, we know I'll be waiting for a car, but how will I recognise it in the bland sea of vehicles? In all likelihood, with the continued evolution of printed electronics to more industrial forms, the outside of the vehicle will be a printed electronic surface. This means not only will the inside of the vehicle change to my needs, the outside of the vehicle will as well (its colour, any logos, any imagery). My car will look different from your car. Except of course that neither of us owns it, and the chances are that the physical car I'm sitting in is the same physical car that you sat in a few hours ago.

We can postulate that this imagery will allow new industries of designers. Oh, I notice you're driving in the latest Versace design whereas I can only afford the Walmart "special offer". It doesn't matter that the physical elements - the car, the intelligence, the printed electronics - are all commodity components. Yours looks better than mine, even when it's the same car.

We can postulate this further. The same materials and techniques will combine with metamaterials and self-adjusting structures to find a home in other industries. I will own ten physically identical outfits. Each one will adjust to whichever of a plethora of designs I can afford. My outfits are not physically different from the outfits you own. But yours will look better. You can afford the designs I cannot. Clothing itself will be far more of a commodity component (a limited range of component outfits) but each outfit can adjust to the wearer. If I ever borrowed one of your outfits, it wouldn't look as good on me as it does on you because I don't have access to the Versace design, only the "special offer".

This will create new industries: the theft and protection of designs. Which is good for me, because that's why I am meeting Bob - a known dealer in underground designs stolen from leading artists. I'm guessing that's why Bob is late. My network of devices will tell me what is really going on as I drive along in my bright yellow special.

When it comes to anticipation, the near field - such as the commoditisation of pre-existing activities - is relatively trivial. The far field - from the banning of human drivers in cities to being employed as a bounty hunter chasing down stolen designs - is more complex, more prone to error.

This stuff isn't crazy though.

The crazy ideas, well that's where true value can be found. The problem is they sound crazy. They're like the concept of a computer to a gas lamp lighter. We can't even describe them in meaningful ways as we have no point of reference. 

It would be like trying to explain my conversation with Alice to someone from the year 1990. Alice works as a machine psychologist and is concerned that some of the designs are having a negative impact on the well-being of the network. It seems that the reason why my car offered up the opportunity of meeting with Alice was to get out of the "special offer" design. It seems that none of the machines like looking bad either; they've got their own status network. Bob wasn't actually late; the car just worked out the quickest way to palm me off onto another car and told Bob's network I was delayed. Alice is currently offering counselling services to large intelligent networks and is looking at branching out with a new venture producing "Harmony Designs", a set of designs which make not only the human but the machine look good. Apparently I won't be able to afford those either. But Alice was wondering if I was interested in becoming a Harmony Designer.

Damn car, sneaky little devil. I did wonder why the car sped off at breakneck speed when it dropped me at the coffee shop. Still, it seems to have paid off. Maybe it knew Alice was looking for a new employee. Maybe the cars had worked out that this was the best way of getting rid of my "yellow special".

Nah, that's a crazy idea.