Saturday, December 03, 2016

Building a business from a great idea, some future Monday

[Rough draft - the more up to date version is on Medium]

It's Monday, it's the year 2025 and I've woken up with a great new idea. [Actually, it's a pretty lousy idea but hey, I just spent five minutes on the scenario so let's just assume it's great.]

I'm going to create a recommendation engine for stock picking based upon the mood of the internet. I quickly scribble down the user needs "make profitable trades" and "know what to buy" and write a basic map whilst grabbing breakfast. I have my map, I know the basic components that I need - the recommendation engine, trade feed etc. I start work at 8.30 am.



I know Amazon provides one of the components as a service (the lambda platform) and several others can be found as AWS lambda services in the marketplace. The company I work for also provides a stock portfolio service. I mark up what I think we can use and what we need to build - the recommendation engine and the mood system.


It's 9.20 am, I send the map off to our spend control group. They act like an intelligence gathering organisation, collecting everyone's maps, comparing them and giving feedback. They build a profile diagram by finding common elements between the maps.

I get a reply by 10 am with some details. They send me the profile diagram below. It seems some other team in the company has built a recommendation product. I'm the only person thinking about a mood system. In general, I'm roughly where everyone else is. However, 16 different teams are using trade feeds, everyone else is using some well developed lambda service and apparently everyone else is using a utility service for a trading engine. I click the details, it's Amazon.


They've also sent my map back, slightly modified. Ok, well at least this is not like the bad old days of 2015 where my company had thousands of duplicated systems and endless rebuilding of stuff that already existed.


It's 10.15 am. I start thinking about some metrics. The trade feed system is going to be providing trades to the recommendation system. Each one will need a call to the mood system, the risk system and so on. I start marking out where all the metrics are.


It's 10.45am. I flip a switch and the map is converted into a business model. The same components, the same links, the same metrics. I start running a few scenarios, checking that I've made a truly variable cost business model. It's 11.30 am, I send the map and the model to finance. They come back with some helpful comments [in this case, it would be ... and how do we make money? but then again the scenario took me five minutes].
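
A crude sketch of the sort of scenario run I mean (Python; the component names come from my map, the per-call costs are invented purely for illustration) - checking that total cost scales with the number of trades rather than containing some fixed lump:

    # Toy check that the model is truly variable cost: per-call pricing means
    # the daily cost should scale linearly with trade volume (invented figures).
    per_call = {"trade feed": 0.00002, "mood": 0.00010,
                "risk": 0.00005, "recommendation": 0.00008}

    def daily_cost(trades):
        return trades * sum(per_call.values())

    for trades in (1_000, 100_000, 10_000_000):
        print(f"{trades:>10} trades -> ${daily_cost(trades):,.2f} per day")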


It's 12.00 pm. I send the maps and the improved model to the executive group. 15 minutes later I get the go ahead and a small budget of $20k. I know from the spend control profile that some other cells are already building this stuff. I give them a call, tell them what I'm up to. They already know, spend control told them.

I know from the spend control profile that there is a group building a recommendation engine. I send them the map and model and outline my idea of adding a mood system to recommendation. We have a quick call and they're up for it. We agree a metric of value for charging - everyone uses worth based development these days. Most of the stuff is already built and provided as services. I just need a cell of pioneers to build the mood system, whatever that will be.

I update my map with the organisation structure and upload it, along with the build map and financial model, to the company's job portal. I wait.


Our company operates in cells, using pioneers, settlers and town planners. We live in a constantly changing environment. Watch the movie! I love it.


Because of this, we always have pools of people training and looking for their next cell to join. It's 1pm and no-one has responded. I'm getting worried. I'm looking at the other exciting projects on the jobs board. Out of the blue, by 1.30 pm I've nabbed two pioneers willing to give this a go. They sign up.

We're off to the races! Of course, HR is constantly monitoring the flow of components through the maps, the cells being formed, whether we need more pioneers, settlers and town planners. This goes on in the background. They're checking what we build vs what we buy and whether we have the right balance of attitudes. Long gone are those old days where dullards would try to convince us that a company could have one culture. Long gone are those days when we weren't looking for the right skills (aptitudes) and the right attitudes. HR is on a bit of a recruitment drive at the moment, as we've been lacking settlers, especially in finance and engineering.


We start cracking away with the project. We build the mood system, add it to the recommendation engine and start watching whether consumers use it. We start monitoring flow in the system, where's the money going, are there bottlenecks, how are we doing on those metrics?


Of course, we're not the only ones monitoring. Part of the spend control group looks after strategy and they are already looking at the maps for new opportunities. One of the things about our stock portfolio system and recommendation engine is that other companies build on top of them. They can measure consumption of the service to identify future trends. But they're also watching how the mood system is going, maybe we should provide it as a service to others?


They notice the mood system is picking up. They decide we should push its evolution towards more of a utility. It'll need development and in this case, they decide an open approach is worthwhile. We've only just got going with our system and I've already noticed a new project on the company job board to turn the mood system into an open source project.


It's 5pm. I'm in a good mood. The mood cell is up and running, it's even growing with an open source effort. The changes to the recommendation engine are working. I have a relaxing evening and get a good night's sleep.

It's Tuesday, it's the year 2025 and I've woken up with a great new idea.

---

The scenario was put together very quickly and is only an illustration designed to explain one thing. If you use a map then there is no reason why operations, build, strategy, finance, HR and other groups can't happily work together without miscommunication, misalignment, duplication and bias. I've used all of the above diagrams, in one form or another, across multiple groups in a business over the last decade.

There is currently no integrated tool for doing this but I strongly suspect that our future development, operational, HR and financial tools will be combined, as above, through some form of mapping.

The curious thing about Article 50

I'm well aware that legal English is slightly different to common use but Article 50 has something very curious within it and I'd be very grateful if someone with solid international law experience could help clear this up.

The problem ...

Article 50


Under s1, we have the right to withdraw. That's all very dandy. The process of withdrawal is set out in s2 to s5 assuming we "notify" of our intention to withdraw. But here's the rub. The article says "shall notify" and "shall" can be interpreted in many ways e.g. must, will or may.

So in May 2017, what happens if we interpret it as "may"? In that case we could just leave, on the day, stop any funding and that would be it. Now given the UK is a huge contributor to the EU that's going to kick up a bit of a fuss but then it's our choice if we wish to go through s2 to s5 as under this interpretation they are optional. It's for us to decide what is in our best interest.

Of course, some will say that "shall" means "must". Ok, so let us assume we decide to interpret it as "may" and the EU decides to take us to court and the court concludes that "shall" means "must".

Well, then we turn to s2. Under this section we now "must notify" and we have a legal obligation to do so. However, look a bit further along and using the same interpretation then you'll find "the Union must negotiate and conclude an agreement with that state". Hang on, the Union has a legal obligation to conclude an agreement? That's a bit one sided, isn't it? We could just sit there saying "non" to everything until they give the UK everything it wants (have your cake and eat it!)

It seems there's no obligation on the state, there's also no obligation for the state to agree to any extension but there's every obligation on the Union to conclude an agreement or be in breach. To which someone would say "well, you just let two years lapse". That still doesn't get rid of the obligation on the Union to conclude an agreement.

To which someone could point out there's no timeframe. There's no timeframe for notifying either. We could leave and notify in, say, a thousand years' time. It feels to me like Article 50 was written on the back of a bus in a bit of a rush.

However, the problem is that if the UK decides to interpret "shall" as "may", leaves in March 2017, cuts funding to the EU (and the UK is one of the largest contributors in terms of the delta between what is paid and what is received back) and seeks trade arrangements, then in order to force an interpretation of "must" the EU has to accept a legal obligation to conclude an agreement. This seems like a sticky wicket for the EU whichever way it goes.

Maybe some kindly passing international lawyer can clear this up.

Wednesday, November 30, 2016

Amazon is eating the software (which is eating the world)

Continuing from my post on the fuss about serverless, I thought I'd riff off Marc Andreessen's famous statement and explain one possible future scenario where all your software belongs to Amazon. There are counter plays to what I'm going to discuss but these would make the post too long and, to be honest, I'm quite content to watch the executives of software giants flap around like headless chickens whilst their industry disappears. It won't happen overnight, this process will take about 10-15 years but by the time people realise it's happening (if it does) then it's too late. It's a type of economic change known as punctuated equilibrium but ... that's getting too technical, let us keep it simple.

I'm going to use a map to explain what is going to happen. I've quickly cooked one up for a trading system, based upon nothing much at all. It's however a starting point. Now, I'm going to assume we're going to build this trading system in AWS Lambda which means all the software components of the map (trading engine, stock portfolio, risk system, recommendation engine and mood system) are built from functions (which may call many other functions) in Lambda. For why you might want to think about doing this, go read the post on the fuss about serverless.


Ok, our customer in the above map has a desire to make profitable trades (I told you I cooked it up in five minutes and of course, you can make a better map). Making profitable trades requires us to be able to trade and to know what trades are going to be profitable (you wish!)

Now, the secret to our success, the system which differentiates us from everyone else is our recommendation engine. It takes a feed of trades, and uses a magical mood system to determine what's worthwhile and profiles this with our risk system. Before you go "mood system, sounds like gibberish" then let me remind you - this is an example.

In any case, back in 2005 when we had Zimki (the earliest serverless, functional billing environment), I did actually build a mood system in a day or so. It scraped information from Flickr and other sources to generate a mood for the world. It was daft, part of an evil plan I had that involved an animatronic monkey and concept art .... let's not go there.

So, we've got our hypothetical trading system to which I've also added some metrics. I'm now going to turn this map into a flow and add the metrics. From below, the trade feed creates the list of trades and is governed by the number (#) of trades. The trade feed is a Lambda function and so there is a cost to it.  Each trade is run through the risk, mood and finally recommendation system - each creating their own functional costs. The recommendation system provides a list of recommended trades (#recommended) which impacts the trading engine and the stock portfolio system.


Yes, this is a very basic setup. You can argue with the map / flow diagram as much as you wish. Certainly in most banks almost every component is treated as something relatively novel, as if no other bank manages risk, trades or makes recommendations. In fact, from experience they usually have huge numbers of custom built systems all doing the same thing i.e. a single bank can often have a few hundred custom built risk management systems. But let us pretend we're working for some relatively sane bank.

You can see from the above we have a cost for each of the systems such as trade feed = #trades x average cost of the trade feed lambda function. Most banks have no idea what individual functions within their organisation cost, they have no clear way to calculate this but let's ignore that (along with the duplication and bias i.e. custom building what's a commodity). We're sane remember!
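
As a rough sketch of that calculation (Python; the trade counts and per-call costs are invented, and in practice you'd pull both from your flow metrics and billing data):

    # Invented flow metrics and per-call costs for the trading system map.
    trades = 50_000              # number of trades from the trade feed today
    recommended_ratio = 0.05     # fraction of trades the recommendation engine passes on

    cost_per_call = {            # average cost of each lambda function ($, assumed)
        "trade feed": 0.00002,
        "risk system": 0.00005,
        "mood system": 0.00010,
        "recommendation engine": 0.00008,
        "trading engine": 0.00020,
        "stock portfolio": 0.00004,
    }

    calls = {
        "trade feed": trades,
        "risk system": trades,
        "mood system": trades,
        "recommendation engine": trades,
        # only recommended trades flow on to the trading engine and stock portfolio
        "trading engine": int(trades * recommended_ratio),
        "stock portfolio": int(trades * recommended_ratio),
    }

    for component, n in calls.items():
        print(f"{component:<22} {n:>7} calls  ${n * cost_per_call[component]:>8.2f}")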

Now let us suppose that AWS launch some form of Lambda market place i.e. you can create a lambda function, add it to the API gateway and sell it through the market place. PS I think you'll find they've just done that - Amazon API gateway integrates with API marketplace and Lambda integrates with API gateway.  I haven't had a chance to play but it'll become clear pretty soon.

So, you're thinking about building the above trading system and you notice that someone else is providing an API which provides a risk system (or maybe components of it). Hmmm, I could use that instead of writing it. Cue gnashing of teeth.

You'll probably get a memo from security about the dangers of using third party code they can't check and extolling the benefits of open source. The memo will probably come as a pdf sent via office 365 mail without a trace of irony. What they mean is they don't trust the source. Roll back to 2006 and the various "I wouldn't trust it with production" that applied to AWS EC2. The fact is, trusted sources will appear over time. For startups, you'll be more adventurous which is also why you'll probably end up owning those other companies.

The chances are that huge amounts of your trading system (if broken down and you spent more than five minutes on it) could end up being provided as lambda functions from third parties. I've drawn this in the map. Along with stitching it all together, you will probably build something that is genuinely different e.g. the mood system.


Of course, some of your development team won't be happy with building the mood system and combining component services from third parties (despite all the talk about micro services). They will argue they can do a better job of making a trading engine. The beauty of functional billing is you can say - "prove it!". You have the costs per function call. By the way, if they can do a better job then you probably want to be selling it on the marketplace and making sure you're paying them enough that they don't leave.

In practice people get away with the old line of "we can do a better job" because no-one can actually measure it. Most don't have a cost per function, or they think their function is free because it's running on their own hardware (p.s. that hardware isn't really free, neither is the power, the building cost etc).

Any Amazon marketplace around such functions will be a two factor market (consumers and providers) and will have strong network effects. The more companies start consuming the functions, the more providers will want to build functions and the more consumers this will attract. Pretty soon, rather than working for a company where you're writing your thirtieth consumer authentication service (to go along with the other 29 scattered throughout the place), copying and pasting the same code or at least what you think is the same code, you'll just be using a service off the marketplace. That marketplace is your service repository.

If you were under the impression that companies used to waste lots of hardware with servers sitting around doing almost nothing (loads of 10% etc) before cloud, just wait until you lift the lid on software development. Ask any software engineer and they'll find you examples of huge amounts of duplication in a single organisation. By huge, be prepared for 100+ duplicates of the same thing counting as a "good" day in any company of decent size. Oh, and before anyone in business starts bashing software engineers ... don't get me started on the utter lack of strategy, miserable understanding of the landscape, leadership based upon gut feel and meme copying the latest trend in the HBR (Harvard Business Review) that goes on.

The future of software development will be lots of lambda functions consumed from a marketplace, stitched together with some new capability. Waste will be reduced, bias (i.e. custom building something which is already well crafted) will start to disappear and we get all that other good "financial development" stuff the last post covered. Hurrah! 

We've barely started yet. This gets a whole lot more interesting.

To explain why, I have to introduce you an old favourite (a decade old ecosystem model) known as innovate - leverage - commoditise. The model is fairly simple, you start off with something novel and new (which is why you need pioneers), as it evolves then you leverage any pattern that is discovered to produce a useful product or library routine (for this you need a different culture, a group of settlers) and eventually the thing will evolve becoming more industrialised (which requires a focus on commoditisation and a different culture known as town planning).

However, genesis begets evolution and evolution begets genesis. Your more evolved components enable rapid development of new things on top of them. The more evolved the subsystem, the faster the speed of developing new things. I've shown this in the map below. 


This is one of about thirty common economic patterns, so if someone from business is taunting you as a software engineer just ask them to name more than five and politely remind them that IT and the business are not different things. Anyway, you can play this game within a company using three different cultures (known as attitudes) and mimic evolution. It's extremely useful not only for encouraging development of the new but also for encouraging efficiency, whilst not creating two warring factions and a host of other problems. However, it has a serious limitation: your company only has a limited number of people.

What you want to do, is to get everyone else in the world acting as your pioneers. This is actually very simple, you provide the industrialised components as public APIs.  This is best explained in a more circular form, using the trading system.


Your town planners provide a utility coding platform. A whole bunch of other people and companies outside your organisation (i.e. your ecosystem) start using this to build all sorts of things. You provide a marketplace that enables some of them to sell risk system / trading engines to others. Within this entire ecosystem, there will also be people building genuinely new and novel stuff.

Now, everything consumes your platform and so you also get realtime consumption information from every angle. As I've mentioned above, you've got a two factor market with all those nice network effects causing the ecosystem to grow rapidly. The bigger the ecosystem then the more economies of scale you get, the more new stuff being built (i.e. others pioneering) and the more consumption data you get from its use.

The trick is, you use the consumption data to find interesting patterns (i.e. your own settlers leverage all the consumption data to find what consumers really want) and you use this to build new industrialised components. These components make the entire system even more attractive.

By leveraging consumption data you're giving the ecosystem what it wants, you've got increasing efficiencies of scale and your entire ecosystem is also acting as your free research and development department. The more industrialised components you provide, the higher up the stack you go (EC2, S3, Lambda etc) and the more people you attract. A double whammy of two factor market and ILC - it's a killer!

So when I look at my trading system, as time goes on not only will more and more of the components be provided by the AWS marketplace but, if AWS is playing an ILC game, many will become industrialised components provided by AWS itself. The marketplace will just be future potential AWS components and on top of this, all the novel exciting stuff (which is directly giving early warning to AWS through consumption data) is just future marketplace components. I've shown an example of this in the map below.


The benefits to consumers i.e. those trying to build stuff will be overwhelming. Amazon will continue to accelerate in efficiency, customer focus and apparent innovation despite the occasional gnashing of teeth as they chew up bits of the software industry. Have no doubt, you can use this model to chew up the entire software industry (or the duplicated mess of bias which calls itself a software industry) and push people to providing either components sold through the marketplace or building actually novel stuff.

Now most executives especially in the software industry will react just as they did with cloud in 2006/07 by trotting out the usual layers of inertia to this idea. It'll never happen! This is not how software works! It's a relationship business! Security! Prior investment! Our business model is successful!

There are ways to counter this play but ... oh, this is going to be such fun.

Thursday, November 24, 2016

The map is not the territory

As the saying goes all models are wrong, some are merely useful. A map is simply an imperfect representation of the territory. This is actually essential for usefulness. A perfect map of France would be a 1:1 scale map at which point it is the size of France and in effect useless. All maps are approximations.

There are a number of discrete characteristics that are essential to any map. These are

1) visual. It’s not a verbal story.

2) context specific. It is a map of a specific landscape, it’s not a general map that applies to everything i.e. France is not the same as Spain.

3) position. You can see the position of relevant components (or features) on the map. This requires two things:- first, that you have components. Second, you have some form of anchor. Position is relative to something else and in the case of a geographical map then the anchor is the compass i.e. this hill (a component) is north of that feature. In the case of a game like Chess then the anchor is the chess board itself and a piece (a component) could be at position C1 or A2 etc.

4) movement. With a map you can see where components are moving (assuming they are capable of moving) and where they could move to i.e. the constraint of possibilities. Hence, I can see my infantry troops moving across the map and understand the barriers which force them to change direction i.e. troops walking off a cliff is not a good idea.

In business, you can use a Wardley map (these are provided as creative commons) to describe the landscape. It’s visual, it is context specific (i.e. this business or that industry), it has position of components (on a value chain) relative to an anchor (the user need) and lastly you can see movement. I’ve provided an example below. It also has some advanced mapping characteristics e.g. flow, type and climatic patterns.

A map


Now, most companies use “maps” that aren’t maps i.e. they lack one of the basic characteristics e.g. business process maps, value stream maps, customer journey maps, mind maps … there’s a long list of things called maps which really aren’t. This doesn't mean they are not useful; they are, except for the purpose of effectively learning about the territory. These characteristics of a map are essential to learning whether it’s the rules of the game (climatic patterns), doctrine (universally useful approaches) or context specific gameplay.

But what if my map is wrong! 

First, all maps are wrong, they are all approximations. What you mean to say is "What if my map is badly wrong?" 

Well, a map that is badly wrong can be quite dangerous. There’s a long history here of dangerous maps and poor situational awareness, books like Topographical Intelligence in the American Civil War are a worthy read. But there’s also plenty of examples of armies charging into a battle with no map and no understanding of the territory and the disastrous results that ensue - Ball’s Bluff, Little Big Horn.

The difference here is that even a wrong map provides you with an opportunity to learn. Without maps, you can never learn the territory, the rules of the game, what context specific play works and what is universal. You can’t even effectively communicate with others over the territory.

It’s true that maps are not the territory but if I’m going to lead a significant force against an opponent then I’d rather have a map of what we do know about the territory (even if parts of it says “here be dragons” or “we don’t know what’s in this bit”) than to charge in blindly as if everything is unknown. 

Wednesday, November 23, 2016

Why the fuss about serverless?

[The more edited version I've posted to Medium]

To explain this, I’m going to have to recap on some old work with a particular focus on co-evolution. 

Co-evolution

Let us take a hike back through time to the 80s/90s. Back in those days, computers were very much a product and the applications we built used architectural practices that were based upon the characteristics of a product, in particular mean time to recovery (MTTR).

When a computer failed, we had to replace or fix it and this would take time. The MTTR was high and architectural practices had emerged to cope with this. We built machines using N+1 (i.e. redundant components such as multiple power supplies). We ran disaster recovery tests to try and ensure our resilience worked. We cared a lot about capacity planning and scaling of single machines (scale up). We cared an awful lot about things that could introduce errors and we had change control procedures designed to prevent this. We usually built test environments to try things out before we were tempted to alter the all important production environment.

But these practices didn’t just magically appear overnight, they evolved through trial and error. They started as novel practices, then more dominant but divergent forms emerged until we finally started to get some form of consensus. The techniques converged and good practice was born.  Ultimately these were refined and best architectural practice developed. In such confident days, you'd be mocked for not having done proper capacity planning as this was an expected norm.

Our applications needed architectural practices that were based upon (needed) compute which was provided as a product. The architectural norms that became “best practice” were N+1, scale up, disaster recovery, change control and testing environments and these were ultimately derived from the high MTTR of a product. I’ve shown this evolution of practice in the map below. Normally with maps I just use the description of evolution for activities, it's exactly the same with practice but with slightly different terms e.g. novel, emerging, good and best rather than genesis, custom, product and commodity.

Map - Evolution of Architectural Practice


The thing is, compute evolved. As an activity then compute had started back in the 1940s in that uncharted space (the genesis of the act) where everything is uncertain. We then had custom built examples (divergent forms) and then products (convergence around certain characteristics with some differentiation between them). However, compute by the early 2000s had started to transform and become more commodity like with differentiation becoming far more constrained, the activity itself becoming far more defined. In this world a server was really about processor speed, memory, hard disk size, power consumption and how many you could cram in a rack. In this world we built banks of compute and created virtual machines as we needed them. Then we got public utility forms with the arrival of AWS EC2 in 2006.

The more industrialised forms of any activity have different characteristics to early evolving versions. With computing infrastructure then utility forms had similar processing, memory and storage capabilities but they had very low MTTR. When a virtual server went bang, we didn’t bother to try and fix it, we didn’t order another, we just called an API and within minutes or seconds we had a new one. Long gone were the days that we lovingly named our servers, these were cattle not pets.

This change of characteristics enabled the emergence of a new set of architectural principles based upon a low MTTR. We no longer cared about N+1 and resilience of single machines, as we could recreate them quickly if failure was discovered. We instead designed for failure. We solved scaling by distributing the workload, calling up more machines as we needed them - we had moved from scale up to scale out. We even reserved that knowing chortle for those who did "capacity planning" in this world of abundance. 
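
To make "cattle not pets" concrete, here's a minimal sketch of the design-for-failure attitude (Python with boto3 against EC2; the AMI id and instance type are placeholders, and in practice you'd let an auto scaling group do this for you): if an instance looks unhealthy, don't nurse it, throw it away and call the API for another.

    # Minimal "cattle not pets" sketch (assumes AWS credentials are configured;
    # the AMI id and instance type below are placeholders).
    import boto3

    ec2 = boto3.client("ec2")

    def replace_if_unhealthy(instance_id, ami="ami-0123456789abcdef0", itype="t2.micro"):
        status = ec2.describe_instance_status(InstanceIds=[instance_id],
                                              IncludeAllInstances=True)
        checks = status["InstanceStatuses"]
        healthy = bool(checks) and checks[0]["InstanceStatus"]["Status"] == "ok"
        if healthy:
            return instance_id
        # Don't fix the pet - terminate it and call up a fresh instance.
        ec2.terminate_instances(InstanceIds=[instance_id])
        new = ec2.run_instances(ImageId=ami, InstanceType=itype, MinCount=1, MaxCount=1)
        return new["Instances"][0]["InstanceId"]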

Map - Emergence of a new practice


We started testing failure by the constant introduction of error - we created various forms of chaos monkeys or masters of disasters that introduced random failure into our environments. One off disaster recovery tests were for the weak, we constantly adapted to failure. With a much more flexible environment, we learned to roll back changes more quickly, we became more confident in our approaches and started to use continuous deployment. We frowned at those that held on to the sacred production and less hallowed testing environments. We started to mock them.

These novel practices - scale out, design for failure, chaos engines and continuous deployment among others - were derived from an increasingly low MTTR environment and such practices were simply accelerated by utility compute environments. Our applications were built with this in mind. The novel practices spread becoming emergent (different forms of the same principles) and have slowly started to converge with a consensus around good practice. We even gave it a name, DevOps. It is still evolving and it will in turn become best architectural practice. 

What happened is known as co-evolution i.e. a practice co-evolves with the activity itself. This is perfectly normal and happens throughout history. Though steel making itself industrialised, we can still produce swords (if we wish) but we have in most part lost the early practice of forging swords. One set of practices has been replaced with another.

I’ve shown the current state of co-evolution in compute in the map below. The former best architectural practice we now call "legacy" whilst the good (and still evolving) architectural practice is called "devops".

Map - Co-evolution of DevOps


This transformation of practice is also associated with inertia i.e. we become used to the “old” and trusted best practice (which is based upon one set of characteristics) and the “new” practice (based upon a more evolved underlying activity) is less certain, requires learning and investment. Hence we often have inertia to the underlying change due to governance. This was one of the principal causes of inertia to cloud computing.

Furthermore, any applications we had which were based upon the “old” best practice lacked the benefits of this new more evolved world. These benefits of industrialisation always include efficiency, speed of agility and speed of development in building new things. Our existing applications became our legacy to our past way of doing things. They needed re-architecting but that involves cost and so we try to magic up ways of having the new world but just like the past. We want all the benefits of volume operations and commodity components but using customised hardware designed just for us! It doesn’t work, the Red Queen eventually forces us to adapt. We often fight it for too long though.

This sort of co-evolution and the inevitable dominance of a more evolved practice is highly predictable. We can use it to anticipate new forms of organisations that emerge as well as anticipate the changes in practice before they hit us. It’s how, back in Canonical in 2008, we knew we had to focus on the emerging DevOps world and to make sure everyone (or as many as possible) that was building in that space was working on Ubuntu. We exploited this change for our own benefit. As one CIO recently told me, one day everyone was talking about RedHat and the next it was all Cloud plus Ubuntu. That didn't happen by accident.


Complicating the picture a bit more

Of course, the map itself doesn’t show you the whole picture because I've deliberately simplified it to explain co-evolution. Between the application and the architectural practice we used for the computing infrastructure layer is another layer - the platform.

Now the platform itself is evolving. At some point in the past there was the genesis of the first platforms. These then evolved to various divergent but still uncommon custom built forms. Then we had convergence to more product forms. We had things like the LAMP stack (Linux, Apache, MySQL and Perl or Python - pick your poison).

Along with architectural practice around computing infrastructure, there was also architectural practices around the platform. These were based upon the characteristics of the platform itself. From coding standards (i.e. nomenclature) to testing suites to performance testing to object orientated design within monolithic program structures. The key characteristic of the platform was how it provided a common environment to code in and abstracted away many of the underpinnings. But it did so at a cost, that same shared platform. 

A program is nothing more than a high level function which often calls many other functions. However, in general we encoded these functions all together in some monolithic structure. We might separate out a few layers in some form of n-layer design - a web layer, a back end, a storage system - but each of these layers tended to have relatively large programs. To cope with load, we often replicated the monoliths across several physical machines.

Within these large programs we would break the code into smaller functions for manageability, but we would rarely separate those functions onto different platform stacks because of the overhead of all those stacks. You wouldn't want a machine sitting there with an entire platform stack just to run one function which was rarely called. It was a waste! In the map below I've added the platform and the best practice above the platform layer.

Map - Evolution of Architectural Practice (platform)


In 2005, the company I ran was already using utility like infrastructure. We had evolved early DevOps practices - distributed systems, continuous deployment, design for failure - and this was just the norm for us. However, we had also produced a utility coding platform, which happened to allow developers to write entire applications, front and back end in a single language - JavaScript. 

As a developer you just wrote code, you were abstracted away from the platform itself, you certainly had no concept of servers. That every function you wrote within your program could be running in a different platform stack was something you didn’t need to know. From a developer point of view you just wrote and ran your program and it called other functions. However, this environment (known as Zimki) enabled some remarkable new capabilities from distribution of functions  to billing by function. The change of platform from product to utility created new characteristics that enabled new architectural practices to emerge at this level. This is co-evolution. This is normal.

These new practices I've nicknamed FinDev for the time being. The "old" best architectural practices, well, that's legacy. I've drawn a map to show this change.

Map - Co-Evolution of Architectural Practice (platform)


The more mundane of these architectural changes is that it encourages componentisation, the breaking down of complex systems into more discrete and re-used coding components provided as services to others. In Zimki, every function could be exposed as a web service through a simple “publish” parameter added to the function. Today, we use the term micro services to describe this separation of functions and provision as web services. We’re moving away from the monolith program containing all the functions to a world of separated and discrete functions. A utility platform just enables this and abstracts the whole underlying process from the developer.
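
Zimki is long gone, so as a rough modern stand-in (an AWS Lambda handler in Python behind an API Gateway proxy integration; the payload shape and logic are my own illustration), exposing a single function as a web service looks something like this - the function is the unit of deployment, scaling and billing:

    # Illustrative Lambda handler exposed as a web service via API Gateway
    # (proxy integration). The payload shape and "business logic" are invented.
    import json

    def handler(event, context):
        body = json.loads(event.get("body") or "{}")
        trades = body.get("trades", [])
        # the whole "micro service": recommend the larger trades
        recommended = [t for t in trades if t.get("size", 0) > 1000]
        return {
            "statusCode": 200,
            "body": json.dumps({"recommended": recommended}),
        }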

The next mundane point is it encourages far greater levels of re-use. One of the problems with the old object orientated world was there was no effective communication mechanism to expose what had been built. You’d often find duplication of objects and functions within a single company let alone between companies. Again, exposing as web services encourages this to change. That assumes someone has the sense to build a discovery mechanism such as a service register.

Another, again rather trivial point is it abstracts the developer further away from the issues of underlying infrastructure. It’s not really “serverless” but more “I don’t care what a server is”. As with any process of industrialisation (a shift from product to commodity and utility forms), the benefits are not only efficiency in the underlying components but acceleration in the speed at which I can develop new things. As with any other industrialisation there will be endless rounds of inertia caused by past practice. Expect lots of gnashing of teeth over the benefits of customising your infrastructure to your platform and ... just roll the clock back to infrastructure as a service in 2007 and you'll hear the same arguments in a slightly different context.

Anyway, back to Old Street (where the company was) and the days of 2005. Using Zimki, I built a small trading platform in a day or so because I was able to re-use so many functions created by others. I didn’t have to worry about building a platform and the concept of a server, capacity planning and all that "yak shaving" was far from my mind. The efficiency, speed of agility and speed of development are just a given. However, these changes are not really the exciting parts. The killer, the gotcha is the billing by the function. 

Billing by function fundamentally changes how you do monitoring. When I provided a service to the world, users of my program could follow very different paths through it. These we call flows. Depending upon their flow through our system, some functions can be called more frequently than others. Billing by the function not only enables me to see what is being used but also to quickly identify costly areas of my program. I would often find that one function was causing the majority of the cost because of the way I had coded it. My way of retrieving trades in my program was literally killing me with cost. I could see it, I could quickly direct investment into improving that one costly function and reduce the overall cost. Monitoring by cost of function changes the way we work - well, it changed me and I’m pretty sure this will impact all of you.
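
As a sketch of what monitoring by cost of function looks like (Python; the function names, invocation counts and durations are invented, and the price is a Lambda-style charge per GB-second rather than real billing data):

    # Invented per-function usage; price is a Lambda-style charge per GB-second.
    PRICE_PER_GB_SECOND = 0.0000166667

    functions = [
        # name, invocations, avg duration (seconds), memory (GB)
        ("get_trades", 1_000_000, 2.100, 0.512),   # my badly coded trade retrieval
        ("score_mood", 1_000_000, 0.120, 0.256),
        ("recommend",    200_000, 0.300, 0.512),
    ]

    costs = {name: calls * duration * memory * PRICE_PER_GB_SECOND
             for name, calls, duration, memory in functions}

    for name, cost in sorted(costs.items(), key=lambda kv: -kv[1]):
        print(f"{name:<12} ${cost:,.2f}")
    # get_trades dominates the bill, so that's where the refactoring investment goes.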

However, this pales into a shadow compared to the next change. This we will call worth based development and to explain it, I need to give you an example and we need to go further back in time.

Worth based development

In 2003, the company that I ran built and operated small sized systems for others. There were no big systems, these were more of the £100k — £2M scale covering a few million users. Our clients usually wanted to write a detailed specification of exactly what they needed to ensure we delivered. That doesn’t sound too bad but even at this small scale then some of the components in these projects would be in the uncharted space requiring exploration and experimentation and hence no-one knew exactly what was wanted. Unfortunately, back then, I didn’t have the language to explain this. Hence we built and operated the systems and inevitably we had some tension over change control and arguments over what was in or out of a particular contract.

During one of these discussions, I pointed out to the client that we were sitting around a table arguing over what was in or out of a piece of paper but not one of us was talking about what the users of the system needed. The contract wasn’t really the customer here; the client’s end users were. We needed to change this discussion and focus on the end user. I suggested that we should create a metric of value based upon the end user, something we could both work towards. The idea fell on deaf ears as the client was pre-occupied with the contract but at least the seed was planted. It wasn’t long after this that another project provided an opportunity to test this idea. The client gave me a specification and asked how much it would cost to build a system to do this. I replied — “How does free sound?”

They were a bit shocked but then I added “However, we will have to be paid to operate the system. We can determine a measure of value or worth and I’ll get paid on that”. There was a bit of um and ah but eventually we agreed to try out a method of worth based development.

In this case, the goal of the system was to provide leads for an expensive range of large format printers (LFPs). The client wanted more leads. Their potential end users wanted a way of finding out more about these printers along with a way of testing them. I would build something which would marry the two different sets of needs. But rather than the client paying up front and taking all the risk, I would build it for free and take a fee on every new lead created. We (as in the client and my company) were no longer focused on what was in or out of a contract but on a single task of creating more leads. We both had an incentive for this. I also had a new incentive for cost effectiveness because the more efficient I made the system, the more profit I retained.

With a worth based approach, I have a strong incentive to: -
  • reduce the operational cost of the project because the cheaper it is then the more profit I make.
  • provide reliability because if the system went down, I wasn’t making any money.
  • ensure the system maximises the value metric because the more it did, the more money I made.

So, let us map this out 

Map - the system


The map begins with our client who has a need for more leads which hopefully leads to other companies buying their product. The conversion from lead to actually purchasing a printer is beyond the scope of this project as that was within the client’s sales organisation. We’re focused solely on generating leads. The other type of user in this map is the consumer who hopefully will buy one of these expensive printers. They have different needs, they want to find out about the right sort of printer for their commercial operations and to test it before buying something they will use. In this project, we’re aiming to provide an online mechanism for the consumer to find out about the printer (a microsite) along with a method to test it (the testing application).

The test is a high resolution image that the potential customer uploads and which is then printed out using the printer of their choice. Their poster (this is large format) would then be distributed to the potential consumer along with a standard graphical poster (showing the full capabilities) and relevant marketing brochures, and a sales call would be arranged. Each of the components on the map can expand into more detail if we wish.

From the map, we hope to have visitors to our microsite which will extol the virtue of owning a large format printer and this hopefully persuades some of these visitors to go and test it out. The act of turning a visitor into an actual lead requires the user to test a printer. So we have multiple conversion rates e.g. from microsite to testing application and from visitor to lead. At the start these will be unknown but we can guess.

Normally, operating a microsite requires all those hard to calculate costs of how much compute resource I’m using. Originally, the platform space was a source of headaches due to my inability to provide a variable operational cost for application use. This was 2003 and I had to worry about capacity planning and all that other "yak shaving". However, let us revisit this in a modern setting. The platform has evolved towards more of a utility service especially with systems like AWS Lambda. In such a utility platform world, your application is simply a function running on the platform and I’m charged for use. The operational cost of my microsite is basically the number of visitors x the average cost of the microsite function. Remember, an application consists of many functions and users can navigate around it which means some “wandering” users turn out to be more expensive than others. But we can cope with that by taking an average for our microsite.

The same will apply to my "test the printer" (testing) application but in this case the users will include converted visitors from the microsite along with those who directly visit. Every use of the testing application (a function) will incur a cost. But as with the microsite, this is a variable. Of course, the actual functional cost of the testing application could be wildly different from the microsite depending upon what the applications did and how well the code was written but at least we would have a granular price for every call.

When you look at a map, there can be many forms of flow within it, whether financial or otherwise. It could be flows of users or revenue or flows of risk. For example, if the utility platform dies due to some catastrophic event then it’ll impact my microsite and my testing application which will impact the consumer needs and stop any lead generation. This would incur a financial penalty for me in terms of lost revenue. Equally, a user has many paths they could travel, for example they could go to the microsite and never bother to go to the testing application thereby incurring cost but no revenue. Nevertheless, I can take these flows and create a business model from them.
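
A sketch of that business model (Python; every conversion rate, fee and functional cost below is invented purely for illustration):

    # Worth based development model - all figures invented for illustration.
    visitors            = 20_000    # microsite visitors this month
    microsite_to_test   = 0.10      # conversion: visitor -> tests a printer
    test_to_lead        = 0.50      # conversion: test -> qualified lead
    fee_per_lead        = 40.00     # what I'm paid per lead (the worth metric)

    cost_microsite_call = 0.0004    # avg functional cost of a microsite visit ($)
    cost_test_call      = 0.0100    # avg functional cost of running one test ($)
    cost_fulfilment     = 15.00     # printing and posting the poster, per test ($)

    tests = visitors * microsite_to_test
    leads = tests * test_to_lead

    revenue = leads * fee_per_lead
    costs = (visitors * cost_microsite_call
             + tests * (cost_test_call + cost_fulfilment))

    print(f"leads: {leads:.0f}  revenue: ${revenue:,.0f}  "
          f"costs: ${costs:,.0f}  margin: ${revenue - costs:,.0f}")
    # Every cost term is variable - no users means no cost - which is the point.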

Map - the business model



This is like manna from heaven for someone trying to build a business. Certainly I have the investment in developing the code but with the application being a variable operational cost I can make a money printing machine which grows with users. It also changes my focus on investment - do I want to invest in increasing marketing for more users, or the conversion rate, or maybe the testing application is so badly written (or a function within it) that investing in coding improvement will bring me better returns? Suddenly, the whole way I build a business and invest is changed.

The co-evolution of practice around platform from componentisation to code sharing to financial monitoring to increases in agility and efficiency is a pretty big deal as we enter this serverless world. But the new business models around worth based development and the collision of finance and development will literally knock your socks off.  Which is why the moniker "FinDev". Beyond the initial investment in coding, I can create an almost variable cost business model and redirect investment to maximise returns in ways that most of you have never experienced. I know, I’ve been there.

These emerging practices will spread despite the inertia. The path of adoption will be a punctuated equilibrium as with all points of industrialisation. This means the growth is exponential and you'll barely notice it until it gets to a few percent of the market, and then in the next few years it will take over. On average the timescale for this process is 10-15 years, so expect to see the whole world being overtaken by serverless by 2025. These "FinDev" architectural practices will rapidly become good and then best but what their exact form will be, we don’t know yet. We're not near the stage of reaching consensus and they still have to evolve and refine.

But serverless will fundamentally change how we build business around technology and how you code. Your future looks more like this.

Map - future of development


You thought devops was big but it’s chicken feed compared to this. This is where the action will be, it'll be with you quicker than you realise and yes, you'll have inertia. Now is not the time for building a DevOps team and heading toward IaaS, you've missed that boat. You should be catching this wave as fast as you can.

Now, a couple of final words.

Containers - they are important but ultimately invisible subsystems and this is not where you should be focused. 

Cloud Foundry -  it's really important they move up the stack and create that marketplace otherwise AWS Lambda et al will own this space. 

DevOps - a lot of what people are trying to build in this space will itself become invisible and if you're building internally then possibly legacy. It's below the line of where you need to be. 

One final note, for those using a pioneer - settler - town planner structure. If you're providing a platform then your town planners should be taking over this space from the settlers (from platform through devops to infrastructure). Unless you've got scale, you should be planning to push most of this outside of the organisation and focus the organisation around the platform.

Your pioneers should be all over the new practices around utility platform. They should be building apps around this space, working out how to combine finance and development into new models, how to build service repositories etc. Experiments in this space should be going wild. 

Your settlers, along with helping the town planners take over any existing efforts from IaaS to PaaS, now need to start hunting through all the novel and emerging practices, both internally and externally, that will be developing around utility platform efforts. They need to be looking for re-occurring patterns and what might have legs and be useful. They need to be looking for those spaces with potential, finding the MVP for your new line of products. Miss this boat and you'll be kicking yourself come 2025.

Map - PST


P.S. For everyone's sake, someone please come up with a better name than serverless. The original Zimki service was described as FaaS (Framework as a Service) back in 2006. Unfortunately a bunch of hapless consultants morphed the terminology into PaaS (Platform as a Service) which in many areas has become next to meaningless. This has now morphed into FaaS (Function as a Service). It's all the same thing, unless you're a consultant or vendor trying to flog your new term as having meaning. It's all about utility platforms where you just code, where billing is as granular as possible (e.g. down to the function) and you don't give two hoots about "yak shaving" (pointless tasks like capacity planning or racking servers etc).

Monday, November 21, 2016

How to master strategy as simply as I can ...

Understand that strategy is a continuous cycle. You don't have all the information you need, you don't know all the patterns and there are many aspects of life that are uncertain ... fortunately not all is. Start with a direction (i.e. a why of purpose, as in "I wish to win this game of chess") but be prepared to adapt as the game unfolds (i.e. the why of movement, as in "should I move this chess piece or that one?").

Your first step on the journey is to understand this strategy cycle.

Step 1 - The cycle


Your next step is to observe the game as it is i.e. the landscape. This is essential for you to be able to learn about the game, to communicate with others and to anticipate change. To observe the landscape you must have a map of this context e.g. in chess it is the chess board and pieces, in warfare it's often a geographical map and troop movement. Any map must have the basic characteristics of :-
  • being visual
  • context specific (i.e. to the game at hand including the pieces involved)
  • position of pieces relative to some anchor (in geographical maps this is the compass, in chess it is the board itself)
  • movement (i.e. how things can change, the constraint of possibilities)
In business, extremely few companies have maps. Most have things they call maps (e.g. stories, business process diagrams, strategy plans) which turn out not to be maps as they lack the basic characteristics. A simple way of mapping a business is to start with user need, understand the value chain and map it over evolution.

Step 2 - Landscape


Once you have a map, then you can start to learn the next part of the strategy cycle i.e. climatic patterns. These are things that affect all players and can be considered rules of the game. The more you play, the more rules you'll discover. I've added a basic list, to get you started in business.

Step 3 - Learn Climatic Patterns


Even with a few basic patterns you can apply these to your map to start to learn how things could change. There will be more patterns out there but again, you'll need to keep playing the game to learn them. With a map, you visibly communicate in a common language those things you expect to change. This also enables others to challenge your assumptions, a key part of learning.

Step 4 - Anticipate


Now you have an idea of your landscape and how it can change, you'll want to start doing stuff about it. However, there are two classes of choices - universal and context specific. Universal choices are those which are beneficial to all, regardless of the context. To help you on your way I've provided a basic set which we call 'doctrine'. As with patterns, the more you play the game then the more universal forms of doctrine you'll discover.

Step 5 - Learn Doctrine


Of course, knowing about doctrine is not enough - you'll want to apply it. When it comes to doctrine then there are three basic cases:-
  • the map solves doctrine for you (e.g. having a common language)
  • you can use many maps to apply doctrine (e.g. use of multiple maps of different lines of business to reduce duplication and bias)
  • you can apply doctrine directly to a map (e.g. cell based structures, cultural forms such as pioneer - settler - town planner)

Step 6 - Apply Doctrine


The other class of choice is context specific. You will learn that there exist many approaches that you can deploy in order to influence the map. These approaches depend upon the map and the position of pieces within it i.e. they are not universal and you have to learn when to use them. I've provided a basic list. As with climatic patterns and doctrine, the more you play the game the more context specific patterns you will discover.

Step 7 - Learn Context Specific Play

With your understanding of the landscape, an ability to anticipate change based upon climatic patterns and a knowledge of context specific play then you can manipulate the map. You use the map to determine where you could attack and then use gameplay (e.g. an open source approach) to determine why you should attack this or that point over another.

Step 8 - Apply gameplay


You then decide to act. You loop around the cycle and repeat this whole exercise. As you go, you will learn more about the environment, patterns, doctrine and gameplay becoming better at the game.

Step 9 - Loop


A few things to remember

1. When companies tell you they have maps, they don't except in the rarest of cases. Most companies rely on things which are not maps (e.g. stories, customer journeys, business process diagrams, value stream maps) and fail to learn about the landscape. They will often use different forms of diagrams to communicate between groups, causing endless miscommunication, misalignment and duplication issues. The maps above have been used from nation states to individual systems and everything in between (they are also all creative commons, share alike).

2. The map is constantly changing. These are living documents. With practice it should take a few hours to map a business from scratch and these have to adapt as you discover more. This is relatively simple if they become embedded as a means of communication. 

3. Most companies aren't playing chess when it comes to strategy (despite what you read). At best, most are simply meme copying others or running on gut feel and the highest paid person's opinion. 

4. Maps are a means of learning about the environment and communicating this. It's an iterative process and it will take you years to become good at it. In fact, I've been using these maps for over a decade and I'm still learning.

5. All models are wrong, some are merely useful. 

6. Without a means of mapping the landscape (i.e. the terrain), you can never effectively learn the terrain. Do note, when someone says the map is not the terrain, that's all well and dandy except that most companies do not have any form of map and are often reduced to telling stories (a bit like how Vikings navigated).

7. The components in the maps above represent points of capital. In the ones I've shown, I've mapped activities; however, you can also map practices, data, knowledge and other forms of capital. 

8. "How to master strategy" ... well, I'm still learning. I'm sure someone will produce a better map at some point however for now, all I can say is that strategy seems to be a journey of constant learning. If anyone does actually become a master then I'd be pleased to read about how they did it.

Wednesday, September 07, 2016

Keeping the wolves at bay

Chapter 8

To keep funding my research, I took a few more paid gigs which basically meant becoming a gun for hire.  Cloud computing was colliding with the technology scene and there was lots of confusion about.  This meant a constant stream of conferences - including some that actually paid - along with plenty of opportunity for piecemeal work.  This wild west also resulted in some fairly shady practices and exploitation.  I tried not to cause harm and instead gave time to community events like Cloud Camp London.  Hang on, don’t you either decide to do harm or not?  Where’s the ambiguity?  My problem was simplicity.

Making it simple

One of the issues with mapping is that people found it complex.  This is not really that surprising because you’re exposing a world that people are unfamiliar with.  It takes time and practice.  However, confusion over cloud computing had also created plenty of opportunity for a new way of thinking.  Alas, piling complexity onto a confused person doesn’t help them, hence I looked at ways to simplify, to make it more palatable and more familiar.  I started spot painting.

Spot painting companies

To help people understand maps, I produced mini-cheat sheets (see figure 73) for each of the stages of evolution and then rather than produce a map, I used to paint on whatever existing diagrams they had.  I’ll use the following figures from a Butler Group conference in 2008 to explain.  I’ve taken the liberty of updating the older terms of innovation & bespoke to use the modern terms of genesis and custom built.

Figure 73 — A cheat sheet for commodity


The cheat sheet had a number of features.  It had the evolution scale (point 1) to provide you an idea of what we were talking about (e.g. commodity) and how it fitted into evolution.  It had a basic list of primary characteristics (point 2) and a benefit curve (point 3).  This benefit curve was a reflection of the changing differential benefit of an activity to an organisation e.g. research was an overall cost, commodity provided little difference and the differential value of products declined as they became more widespread.  I then used these mini cheat sheets to annotate existing business process diagrams – see figure 74.

Figure 74 — Annotated diagram



These diagrams have certain advantages and disadvantages over maps.  First, they are more familiar and unthreatening i.e. you can take an existing process diagram and colour it.  They certainly (in combination with the cheat sheets) help you question how you’re building something.  But, as they have no anchor, there is no position relative to a user, which means people don’t tend to focus on the user.  Also, movement cannot be clearly seen but has to be implied through the changing of colours.  These diagrams enabled me to introduce the concepts of evolution but, without position and movement, they were unhelpful for learning economic patterns and forms of gameplay.  However, the simplicity made them moderately popular with a few clients.

Taking it too far

Unfortunately, I didn't stop there.  The next diagram I’m a bit loath to show.  I wasn’t trying to cause harm and I hope it hasn’t.  In order to teach mapping, I simplified the evolution curve and focused on the extremes – see figure 75 from the Butler Group ADLM conference 2008.

Figure 75 — Polar opposite IT


The idea conveyed was one of IT consisting of polar opposite extremes, which is perfectly reasonable (Salamon & Storey, Innovation Paradox, 2002).  These extremes of the uncharted (chaotic) and industrialised (linear) domains exist.  But there’s also the transition in the middle which has different characteristics.  The graph certainly helped introduce concepts of evolution, such as why one size doesn’t fit all, and it proved quite popular due to its simplicity.  The danger occurs if you take it too literally and start organising in this manner.  In my defence, I did publish articles during the time that emphasised that you needed to deal with the transition, though it’s fair to say I could have been much clearer, often reducing entire concepts to single lines (see Exhibit 1).

Exhibit 1 : Lack of clarity crimes committed by the Author.
Butler Group Review, Dec 2007, The Sum of all Fears

While the use of utility services removes obstacles to new IT innovations, it may create an organisational challenge for IT departments. Innovations are dynamic problems and require dynamic methodologies such as agile development and a more worth-focused VC-like approach to financing. By contrast, the use of utility services requires a focus on cost, standards, and static methodologies. Unless you intend to stop innovating and give up on the long-term source of profit for any organisation, then the IT department must find a way to manage both of these extremes. As commoditisation is ongoing, you’ll also need to continuously deal with the transition between these two extremes. To make matters worse, as “X as a service” grows, barriers to participation in many industries will reduce, causing greater competition and accelerating the rate of new innovations into the marketplace. 

Overall, the pattern suggests reductions in non-strategic costs, more competition for information businesses (including competition from consumers), a faster rate of release in the marketplace, and increasing pressure on IT as it deals with the conflicting demands of two polar opposites in focus. 

The problem with simple

If I believed these simple versions to be unhelpful, then why did I use them?  It’s a question of balance and a trade-off.  The problem is Ashby’s law of requisite variety.  Basically, the law states that in a stable system the number of states in its control mechanism must be greater than or equal to the number of states in the system being controlled, i.e. the controlling mechanism must represent the complexity of what is being controlled.  Organisations are very complex things and whilst mapping provides you with a window onto this, you need a management capability able to cope with the complexity. 

There is unfortunately another solution to Ashby’s Law.  Rather than cope with complexity, you pretend that what is being managed is simple.  We tend to like things such as 2x2 diagrams not because they represent reality but because they obscure it and hence are simple to understand.  We trade off our ability to learn and to understand the environment for an illusion of simplicity and easy manageability.  This is why we use one size fits all or apply KPIs (key performance indicators) across an entire organisation even when they are not appropriate.  When it comes to the KISS principle, do remember that keeping it simple can make us act stupidly. 

Eventually, I was faced with a choice.  Do I keep it simple, thereby making it more accessible, and just accept the flaws, or do I take a slower path and try to push organisations towards a higher understanding of position and movement?  This opened the door to another compromise.  I could also do the heavy lifting for others!  I could just give them the result!  However, this would make them dependent upon me, the normal consultant path.   My purpose was to free people from the shackles of consultants and not to chain them up even more.  This compromise was out of the question.  I’d like to think that I stood my ground here but with almost no-one mapping, bills mounting and clients taking an interest in the simplified concepts, it’s fair to say that I was starting to wobble.

Finding my mojo

My salvation was a piece of paid work that I’m particularly fond of. It concerned the issue of efficiency versus effectiveness and to have any hope of explaining it, we need to introduce three concepts – worth based development, pricing granularity and flow.

Worth based development

In 2003, the company that I ran built and operated small-sized systems for others.  There were no big systems; these were more of the £100k - £2M scale, covering a few million users.  Our clients usually wanted to write a detailed specification of exactly what they needed to ensure we delivered.  That doesn’t sound too bad but even at this small scale some of the components in these projects would be in the uncharted space and hence no-one knew exactly what was wanted.  Unfortunately, back then, I didn’t have the language to explain this.  Hence we built and operated the systems and inevitably we had some tension over change control and arguments over what was in or out of a particular contract. 

During one of these discussions, I pointed out to the client that we were sitting around a table arguing over what was in or out of a piece of paper but not one of us was talking about what the users of the system needed.  The contract wasn’t really the customer here; the client’s end users were.  We needed to change this discussion and focus on the end user.  I suggested that we should create a metric of value based upon the end user, something we could both work towards.  The idea fell on deaf ears as the client was preoccupied with the contract but at least the seed was planted.  It wasn’t long after this that another project provided an opportunity to test this idea.  The client gave me a specification and asked how much it would cost to build a system to do this.  I replied – “How does free sound?” 

They were a bit shocked but then I added “However, we will have to determine a measure of value or worth and I’ll get paid on that”. There was a bit of um and ah but eventually we agreed to try out this method of worth based development.  In this case, the goal of the system was to provide leads for an expensive range of large format printers (LFPs).  The client wanted more leads.  Their potential end users wanted a way of finding out more on these printers along with a way of testing them.  I could build something which would marry the two different sets of needs. But rather than the client paying up front and taking all the risk, I would build it for free and take a fee on every new lead created.  

We (as in the client and my company) were no longer focused on what was in or out of a contract but on a single task of creating more leads.  We both had an incentive for this.  I also had a new incentive for cost effectiveness because the more efficient I made the system, the more profit I retained. We agreed and so I built and operated a system which enabled people to upload an image, test it on a large format printer and get delivery of their print plus information on the kit’s performance, plus a sales call. The system soared.  

In three months we had generated more leads than the client normally had in a year and this was accelerating.  It was stunning.  The client’s revenue was rocketing but so was my revenue as the system was based upon a metric of leads.  The more success they had, the more success I had.  It was a win-win situation.  Alas, this actually created two problems and one headache.

The problems were caused by the client being unprepared for this level of interest and internal budgeting systems that weren’t designed to cope with such variable success.  What has budgeting got to do with this?  Well, the client’s success was more leads which translated into more revenue. This was good from a budgeting point of view.  But the more success the client had, the more my fee increased as it was also based on leads.  This was bad from a budgeting point of view.  The system became so successful that it exceeded an internal budget figure the client had set for costs and this caused an internal conflict with demands to switch off the system until new budget was allocated (a very lengthy process).  Switch off a revenue generating system because it’s doing better than expected and has passed some arbitrary budget figure?  This is what happens when an inflexible one size fits all approach hits reality. 

Before you go “this is daft”, actually it’s not.  Over time companies tend to build up a body of work and processes – the corporate corpus – designed to stop past failure.  It’s all done with reasonable intentions: the desire to spend money effectively and the desire to know resources are being well used.  That mass of good intentions is often the cause of many problems when you try to change the system.  That corpus can become a zombie, killing off innovation whenever it is found.  I had attempted to change the system by introducing a worth based approach and I should have known that this would cause tensions with the corpus.  I learned that lesson quickly.

Today, these worth based techniques are normally called “outcome” based or something of that ilk.  I’ve used them many times over the last decade; in fact, I prefer them.  Whilst they tend to solve the issue of an excessive focus on contracts, they have invariably hit other roadblocks such as a client not being able to describe the value or purpose of the system, or conflict and politics within internal processes.  You need to be aware of this and to mitigate it. 

Those were the problems - lack of preparation, the corporate corpus - but the headache that worth based approaches caused was always mine.  There was some financial risk associated with these projects and some investment needed.  I had to be concerned with not only the development but also the operations.  This included lots of capital intensive investment along with costs that either weren’t truly variable or could only be guesstimated at.  To minimise the risk we shared data centres and other common components, but in a large heterogeneous application environment this just complicates allocation of costs.  How much a user visiting our application would cost us in terms of compute, power and data centre usage was an incredibly tough question. 

In my risk models, we also had no clear way of determining operational costs as the system scaled.  We had to make lots of estimates on stepwise changes and on how much compute resource would be used by an application we hadn’t yet built.  The financial model was more akin to art than any form of science.  Some of that uncertainty certainly ended up as “padding” in the metric, e.g. the price per lead that I would charge.  Other areas had better cost models.  In the LFP example above, distribution systems and even printing were more variable (i.e. price per print or price per package) because we had experience of running an online photo and printing service.  This brings me to the next topic of pricing granularity.

Pricing granularity

With a worth based approach, I have a strong incentive to: -
  • reduce the operational cost of the project, because the cheaper it is, the more profit I make.
  • provide reliability, because if the system went down, I wasn’t making any money.
  • ensure the system maximises the value metric, which in the LFP case was "generating leads".

But I also had questions on where to invest.  In the case of LFP, it was doing very well and so I decided to invest an additional $100K.  But where do I best put the money?  Improving the site reliability?  Reducing the operational cost of the application through better code?  Maximising the number of users through marketing?  Improving conversion of users to leads?  Which choice brings me the better return?  This is particularly tough to answer if you can’t effectively determine the operational cost of an application beyond hand waving or if other data is also guessed at.

One of the huge benefits of Zimki (our platform as a service play) was not only its serverless nature and how you could simply write code through an online IDE but also that its pricing granularity was down to the individual function.  Any application is nothing more than a high level function that calls other functions.  If I developed a function in Zimki, then whenever that function was called I could see exactly how much it had cost me.  I was charged for the network, storage and compute resources used by that function.  This was quite a revelation.  It changed behaviour significantly because suddenly, in the sea of code that is my application, I could find the individual functions that disproportionately cost me more.  I’ll talk more about this change of practice in the next chapter but for now, just being aware of it is enough.

So, for a developer on Zimki, I had price granularity down to running a single function.  As far as I know this was unparalleled in the world of IT at the time, and we didn’t see the likes of it again until AWS Lambda.  Now, obviously I was also the provider of Zimki and behind the scenes sat a complex array of basket-of-goods concepts and all manner of financial instruments needed to provide those cost figures.  But this was abstracted from the developer.  All they saw was a cost every time their function ran, no matter how much it scaled.  There was no capital investment and this turned the operational cost of an application into a manageable variable.
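
As a rough illustration (and only an illustration – this is not Zimki’s actual API, and the function names and figures below are invented), here is the sort of view that per-function pricing gives you: one billing record per call, aggregated so that the disproportionately expensive functions stand out from the sea of code.

```python
from collections import defaultdict

# Hypothetical billing records, one per function call:
# (function name, network cost, storage cost, compute cost) in pounds.
billing_records = [
    ("render_homepage", 0.00002, 0.00001, 0.00005),
    ("resize_image",    0.00010, 0.00004, 0.00120),
    ("render_homepage", 0.00002, 0.00001, 0.00005),
    ("resize_image",    0.00011, 0.00004, 0.00150),
]

totals = defaultdict(float)   # total cost per function
calls = defaultdict(int)      # number of calls per function

for name, network, storage, compute in billing_records:
    totals[name] += network + storage + compute
    calls[name] += 1

# Rank functions by total cost so the expensive ones stand out.
for name in sorted(totals, key=totals.get, reverse=True):
    print(f"{name}: {calls[name]} calls, "
          f"£{totals[name]:.5f} total, £{totals[name] / calls[name]:.5f} per call")
```

The point is simply that once every function call carries its own price, the operational cost of an application becomes the sum of its function costs – a variable you can see and act upon.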

Flow

What we’re now going to do is combine the ideas of worth based (outcome) development and pricing granularity to introduce an idea known as flow.  In order to do this, we’re also going to have to use scope and the idea that a map can have multiple users, as introduced in chapter 7.  After this, I’ll show you how flow was used to question efficiency vs effectiveness and why those simplified maps (e.g. the spot diagrams) are ok but not ideal.

Revisiting LFP

To begin with, we’re going to revisit the LFP project but with a map and the knowledge of what a utility platform can bring.  In figure 76, I’ve created a map of the worth based LFP project.  Back when we were working on the project, I hadn’t developed the mapping concept fully and so this is post-event analysis.  I won’t mark up points on this map; hopefully you’ve enough experience now to start reading them.

Figure 76 — Map of the worth based project


The map begins with our client who has a need for more leads and ultimately companies buying their product.  The conversion from lead to actually purchasing a printer is beyond the scope of this project as that was within the client’s sales organisation; we’re focused solely on generating leads.  The other type of user in this map is the consumer who will hopefully buy one of these expensive printers.  They have different needs: they want to find out about the right sort of printer for their commercial operations and to test it before buying something they will use.  At that time, this was all done through onsite or showroom visits or glitzy brochures. We aimed to provide an online mechanism for the consumer to find out about the printer (a microsite) and to test it (the testing application).

The test would be a high resolution image that the potential customer would upload, which was then printed out on the printer of their choice.  Their poster (this was large format printing) would be sent to the potential consumer along with a standard graphical poster (showing the full capabilities) and relevant marketing brochures, and a sales call arranged.  Each of the components on the map can expand into more detail if we wish, e.g. the platform needs compute which needs a data centre, but this map is good enough for our purpose.  The platform space was the source of my headaches due to my inability to provide a variable operational cost for an application.  But the platform space was evolving towards more of a utility service – in fact, I was the person causing this.

So, let us look at the map again but move further into the future in which a utility platform has emerged. I’m going to add some financial indicators onto this map. See figure 77.

Figure 77 — Finance of the worth based project


From the map, we hope to have visitors to our microsite, which will extol the virtues of owning a large format printer and hopefully persuade some of these visitors to go and test one out. The act of turning a visitor into an actual lead requires the user to test a printer.  So we have multiple conversion rates, e.g. from microsite to testing application and from visitor to lead.  At the start these will be unknown. We can guess.

Normally, operating a microsite requires all those hard-to-calculate costs but in a utility platform world, your application is simply a function running on the platform and I’m charged for use.  The operational cost of my microsite is basically the number of visitors x the average cost of the microsite function.  Remember, an application consists of many functions and users can navigate around it, which means some “wandering” users turn out to be more expensive than others.  But we can cope with that by taking an average for our microsite.

The same will apply to my testing application but in this case there will be direct visitors plus converted visitors from the microsite i.e. those we’ve persuaded of the benefits of LFP and hence encouraged to go and test out a printer.  Every use of the testing application (a function) will incur a cost.  The two function costs (microsite and testing application) could be wildly different depending upon what the applications did and how well the code was written but at least we had a granular price for every call.

I could now say
  • We have a number of visitors [V1] to the microsite
  • Each call to the microsite costs on average C1
  • The total cost of the microsite would be V1 x C1
  • Of the visitors V1 then a percentage (the conversion rate R1) would visit the testing application.
  • Each call to the testing application costs on average C2
  • The total cost of the testing application would be (V1 x R1) x C2
  • Of the (V1 x R1) visitors to the testing application then a percentage would try a printer (the conversion rate R2)
  • Those visitors who tried a printer (V1 x R1 x R2) are leads
  • Each lead incurs a distribution cost (C3) for the brochure and print which also incurs a printing cost (C4)
  • The total cost of distribution and printing would be (V1 x R1 x R2) x (C3 + C4)
  • Each lead would generate a revenue of P1 (the agreed price)
  • The total revenue generated would be P1 x  (V1 x R1 x R2)
  • The total cost of generating that revenue would be
    (V1 x C1)
    + (V1 x R1) x C2
    + (V1 x R1 x R2) x (C3 + C4)
  • Operating Profit =
    P1 x (V1 x R1 x R2) minus the total cost of generating that revenue
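
To make the arithmetic concrete, here is the skeleton of the above as a few lines of Python.  The visitor numbers, conversion rates and per-call costs are illustrative placeholders rather than the real project figures, but the equations are exactly those listed above and the same function can be used to run simple what-if scenarios.

```python
# A sketch of the worth based (LFP) model above. All figures are placeholders.

def operating_profit(V1, R1, R2, C1, C2, C3, C4, P1):
    """Implements the equations listed above."""
    microsite_cost = V1 * C1              # V1 visitors x C1 per microsite call
    testing_cost = (V1 * R1) * C2         # converted visitors x C2 per testing call
    leads = V1 * R1 * R2                  # visitors who went on to try a printer
    fulfilment_cost = leads * (C3 + C4)   # distribution plus printing per lead
    revenue = leads * P1                  # agreed price per lead
    total_cost = microsite_cost + testing_cost + fulfilment_cost
    return revenue - total_cost

# A baseline guess at the unknowns.
base = dict(V1=50_000, R1=0.10, R2=0.20, C1=0.002, C2=0.01, C3=15.0, C4=5.0, P1=60.0)
print("baseline profit:", operating_profit(**base))

# Running scenarios: where is the next investment best directed?
scenarios = {
    "20% more visitors (marketing)":      {**base, "V1": base["V1"] * 1.2},
    "20% better microsite conversion":    {**base, "R1": base["R1"] * 1.2},
    "20% cheaper testing function":       {**base, "C2": base["C2"] * 0.8},
}
for label, scenario in scenarios.items():
    print(label, "->", operating_profit(**scenario))
```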

This is like manna from heaven for someone building a business model.  Certainly I had investment in developing the code but with the application being a variable operational cost, I can make a money printing machine if I set the price (P1) right.  No big shocks and no capital investment step.  In fact, any investment can be directed to making that equation more profitable – increasing the conversion rates, reducing the cost of each application function call, getting more visitors etc.  Of course, this wasn’t the only path.  The visitor might not come to the microsite but instead go directly to the testing application.  There were a number of potential flows through the map.

When you look at a map, there can be many forms of flow within it, whether financial or otherwise.  It could be flows of revenue to the provider or flows of risk.  For example, if the utility platform dies due to some catastrophic event then it’ll impact my microsite and my testing application, which will impact the consumer needs and stop any lead generation, incurring a financial penalty for me in terms of lost revenue.  Whereas, if I run out of brochures, this impacts distribution and I have a choice on whether to send out the prints now or delay until the brochures are available.  In figure 78, I’ve given an example of a flow within a map from potential consumer through their need to microsite to testing application to distribution.

Figure 78 — Flow of the worth based project


It’s important to note that the interfaces between components in a map represent flows of capital.  Such capital can be physical, financial, information, knowledge, risk, time or social.  It could be anything which we can trade.  Often people talk about the marvellous “free” web services that they’re interacting with which provide storage for photos or online blogs or a “free” encyclopaedia.  These are rarely free.  You’re trading something whether it’s information for the service or social capital (e.g. loyalty to a scheme) or even just your time (e.g. to create new entries, to edit the content).  That activity that someone else provides that meets your needs has a price, even if you don’t visibly notice it.  

By using the concept of flow, it is relatively simple to build a financial model for an entire system. I’ve created the skeleton of such a model for the map above in figure 79.

Figure 79 — Building a financial model


Now back when we built LFP in 2004, there wasn’t a utility platform, I didn’t have maps and I didn’t have the concept of flow.  Instead, my CFO and I had a mass of spreadsheets trying to calculate what the above does and cope with all the stepwise investments and capital costs needed.  What was a nightmare then is now child’s play.

Whenever you’re building something novel, the game is to favour operational expense over capital as much as possible in order to reduce risk, whether from the system not being used or from it growing rapidly.  With any worth based system, you want to tie costs as closely as possible to the path of revenue generation when you’re gambling on an uncertain outcome.  However, there will always be some investment, e.g. writing the application, marketing the microsite.  This sort of modelling can help you identify which options you should consider for the future.
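
As a toy illustration of that point (all the numbers are invented), compare a capital model, where servers must be bought in stepwise units ahead of demand, with a utility model, where cost follows usage.  The thing to watch is the capital committed before the outcome is known: heavy losses if the system flops, and sudden step changes in funding if it booms.

```python
# Toy comparison (invented figures) of capital vs operational expense under
# an uncertain outcome: capital is committed whether or not the system is used,
# whereas a variable cost follows the path of revenue generation.
import math

PRICE_PER_LEAD = 60.0

def capex_case(leads, leads_per_server=1_000, server_cost=5_000, min_servers=5):
    # Stepwise investment: servers bought in whole units ahead of demand.
    servers = max(min_servers, math.ceil(leads / leads_per_server))
    capital_committed = servers * server_cost
    profit = leads * PRICE_PER_LEAD - capital_committed
    return capital_committed, profit

def opex_case(leads, cost_per_lead=25.0):
    # Utility model: no up-front commitment, cost scales with actual use.
    return 0, leads * (PRICE_PER_LEAD - cost_per_lead)

for scenario, leads in [("flop", 100), ("expected", 1_000), ("boom", 10_000)]:
    cap, cap_profit = capex_case(leads)
    _, op_profit = opex_case(leads)
    print(f"{scenario:>8}: capital committed {cap:>7}, "
          f"capex profit {cap_profit:>9.0f}, opex profit {op_profit:>9.0f}")
```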

The rage today is all about “DevOps” in the technology world, a moniker combining development and operations.  This shift towards utility platforms is starting to occur in earnest and over the coming years the world of finance and engineering will go through an exciting but disruptive marriage.  I do hope someone comes up with a better phrase than “DevFin” or “DevOps 2.0” or “NoOps” though.


Efficiency vs effectiveness

So there I was in 2008 with an understanding of the importance of maps and of the flow of capital within them.  This helped me explain efficiency versus effectiveness in one of my client’s projects that I was quite proud of.  There is unfortunately a problem.  I can’t explain it to you.

Hopefully, you’re discovering that maps are quite a powerful strategic tool.  The information they contain can be very sensitive.  Even in Government projects, the maps are rarely shared outside of Government itself.  I’m certainly not going to break the trust of a private client by exposing their dirty laundry.  This is why many of the maps that I use in this book are slightly distorted and don’t identify the original owner unless I was the one running the show.  I don’t mind you knowing all the mistakes and failings that I’ve made.  If you’re uncomfortable with that and you need the reassurance of “big company X does this with maps, here’s the map” then I suggest you stop reading or find someone else to help you.  Hopefully, you’ve got enough ideas from what I’ve written to justify your time invested so far.  

The next section covers a hypothetical that blends a story related to a modern company, reset into a technology context, to help tell a past story.  Yes, maps are part of storytelling or, as J.R.R. Tolkien said of writing The Lord of the Rings, “I wisely started with a map”.

Our story begins, as many do, with a challenge.  The company was expanding and needed to increase its compute resources.  It had created a process flow diagram for this (figure 80) which ran from a request for more compute through to the actions needed to meet that demand.  The process however had a bottleneck.  Once servers were delivered at “goods in” they needed to be modified before being racked.  This was time consuming and sometimes prone to failure.  They were focused on improving the efficiency of the process flow as it was important for their future and revenue generation.  A proposal was on the table to invest in robotics to automate the modification process.  Whilst the proposal was expensive, the benefits were considerably greater considering the future revenue (of a not insignificant scale) that was at risk.

Figure 80 — The process flow




I want you to consider the above for a moment and decide whether a proposal to invest in improving the efficiency of an inefficient process makes sense, particularly when the benefits of the proposal vastly outweigh the costs and your future revenue stream is at risk.

I had met the company in 2008, talked about the concept of evolution and introduced the “spot” diagram.  We agreed to take a look at the proposal.  I’ve taken those same first steps (see figure 81) and “spotted” the process.  Whilst the ordering and goods in processes were quite industrialised, the modify part of the process was very custom.  Have a look at the figure and see if you notice anything interesting or odd before continuing with this story.

Figure 81 — Spot diagram of process.


What was interesting to note was that the racks were considered custom.  On investigation, it turned out the company had custom built racks.  It had always used custom built racks; it had a friendly company that even made them for it and this was just part of its corporate corpus, a body from a long gone past that still haunted the place.  Even in 2008, racks were standardised. 

The modifications were needed because the standard servers that they bought fitted standard racks. They didn’t fit the custom built racks that had been so lovingly built.  Hence additional plates needed to be added, holes drilled etc.  Let us be clear: on the table was a proposal to invest in robotics in order to customise standard servers to fit into custom built racks which they were also paying to have built.  Does the proposal still make sense?  Is it a good investment?  Are there alternatives?

Before you shout “use standard racks”, let us map this space out starting from the user need of more compute.  This actually breaks down into two needs: the ordering of a server and the racking of the server once it has been delivered.  Of course, racking (i.e. mounting, adding power and cabling) needs the server to be modified.  Both of these chains are connected at the point of goods in – see figure 82.

Figure 82 — Mapping the proposal


Now the question is whether we should just use standard racks.  This obviously moves racks towards commodity (which is where they should be) and the modification part disappears, though we still have mounting, cabling and power.  It seems a lot better (see figure 83).  

Figure 83 — Using standard racks


However, you still have a problem, which is the legacy estate.  Are you going to migrate all the racks?  What about the sunk costs?  How are we going to maintain our existing systems?  There will be a long list of reasons to counter the proposed change.  Before you go “this is daft”, remember the budget example and the corporate corpus.  Don’t expect to change a system without some resistance.

In this case, despite resistance, we should go a step further.  Computing was becoming a commodity provided by utility services.  We can simplify this whole flow by just adopting utility services.  We don’t need to think about robotic investment or even converting to using standard racks (itself a cost which might be prohibitive).  This entire chunk of the value chain should go, along with any additional costs it might be hiding (see figure 84). 

Figure 84 — Hidden costs and removing parts of the value chain


These hidden costs can be vast.  Today, when someone provides me with a proposal for building a private cloud, then the first question I ask them is what percentage of the cost is power?  The number of times I’ve been told “that’s another budget” is eye opening.  Power is a major factor in the cost of building such a system.  However, that’s another story for later and I’m digressing.

The issue above is that we started with a proposal to invest in robotics based upon improving the efficiency of an existing process.  It sounded reasonable on the surface but if they had taken that route, they would have invested more in maintaining a highly ineffective process.  In all likelihood, it would have exacerbated the problem later because the corporate corpus would have expanded to include this.  If some future person had said “we should get rid of these custom racks” then the response would have been “but we’ve always done this and we’ve invested millions in robotics”.  

The “efficient” thing to do might have been to invest in robotics but the “effective” thing to do was to get rid of this entire part of the value chain.  It’s a bit like the utility platform area: I can either invest in making my infrastructure and platform components more efficient through automation or I can just get rid of that entire part of the value chain.  Often the “efficient” thing to do is not the “effective” thing.  You should be very careful of process efficiency and “improvement”.  You should also be aware of the corporate corpus.

The company in question was a manufacturing company, the problem had nothing to do with computing and yes, they were about to spend many millions making a highly ineffective process more efficient.  They didn’t; they are alive and doing well.  I also kept the wolves at bay.  That’s what I call a “win-win”, except obviously for the vendors who lost out.

Before we move on

In the last two chapters, we’ve been sneaking around the strategy cycle again covering mainly purpose and then landscape.  You should be familiar enough with the strategy cycle that I can represent it in a slightly different form just to reinforce the different types of Why (purpose and movement) and the connections between the parts in this book – see figure 85.  In the next section we will focus on climate including common economic patterns and anticipation.  We will keep on looping around this, sometimes diving into interconnections as we go.  Anyway, this will be the last time that I’ll mention that.

Figure 85 — The strategy cycle



We should recap on some of the ideas from this chapter.

Landscape

  • Be careful of simplicity.  There’s a balancing act here caused by Ashby’s Law.  Be aware that you’re often trading your ability to learn for easier management.  In some cases, you can simplify so far that it becomes harmful e.g. one size fits all, group wide KPIs.
  • The map contains flows of capital which are represented by the interfaces.  There are usually multiple flows in a single map.  Such capital can be physical, financial, information, risk, time or social.  It could be anything which we trade.
  • Maps are a means of storytelling.  Despite my dour attitude to storytelling (especially the hand waving kind of verbal garbage often found in strategy), maps are a form of visual storytelling.

Doctrine

  • Focus on the outcome, not the contract.  Worth (outcome) based tools can be useful here but be warned, they can also expose flaws in the understanding of value and become stymied by internal procedures e.g. budgeting processes and inability to cope with variable charging.
  • Use appropriate tools.  When using maps, if I’m looking at financial flows then I’ll often dive into financial modelling when considering multiple investment paths e.g. focus on increasing visitors through marketing or the conversion rate from a microsite.  Equally, if I’ve identified multiple “wheres” that I can attack, then I’ll often dive into business model canvas to compare them.  Don’t be afraid to use multiple tools.  Maps are simply a guide and learning tool.
  • Optimise flow.  Often when you examine flows then you’ll find bottlenecks, inefficiencies and profitless flows.  There will be things that you’re doing that you just don’t need to be doing. Be very careful here to consider not only efficiency but effectiveness.  Try to avoid investing in making an ineffective process more efficient when you need to be questioning why you’re doing something and uncovering hidden costs.  Also, don’t assume that an “obvious” change will be welcomed.  Beware the corporate corpus.
  • When it comes to managing flow, granularity is your friend.  Be prepared though: most companies don’t have anywhere near the level of granularity that you’ll need and you may even encounter politics when trying to find it.

Gameplay
  • Trading.  Maps are a form of knowledge capital and they tend to have value.  Don’t expect people to just share them with you.  You’ll need to trade or create your own.
----

Next Chapter in Series [to be published soon]
GitBook link [to be published soon]