Wednesday, November 30, 2016

Amazon is eating the software (which is eating the world)

Continuing from my post on the fuss about serverless, I thought I'd riff off Marc Andreessen's famous statement and explain one possible future scenario where all your software belongs to Amazon. There are counter plays to what I'm going to discuss but these would make the post too long and being honest, I'm quite content to watch the executives of software giants flap around like headless chickens whilst their industry disappears. It won't happen overnight, this process will take about 10-15 years but by the time people realise it's happening (if it does) then it's too late. It's a type of economic change known as punctuated equilibrium but ... that's getting too technical, let us keep it simple.

I'm going to use a map to explain what is going to happen. I've quickly cooked one up for a trading system, based upon nothing much at all. It's however a starting point. Now, I'm going to assume we're going to build this trading system in AWS Lambda which means all the software components of the map (trading engine, stock portfolio, risk system, recommendation engine and mood system) are built from functions (which may call many other functions) in Lambda. For why you might want to think about doing this, go read the post on the fuss about serverless.

Ok, our customer in the above map has a desire to make profitable trades (I told you I cooked it up in five minutes and of course, you can make a better map). Making profitable trades requires us to be able to trade and to know what trades are going to be profitable (you wish!)

Now, the secret to our success, the system which differentiates us from everyone else is our recommendation engine. It takes a feed of trades, and uses a magical mood system to determine what's worthwhile and profiles this with our risk system. Before you go "mood system, sounds like gibberish" then let me remind you - this is an example.

In any case, back in 2005 when we had Zimki (the earliest serverless, functional billing environment), I did actually build a mood system in a day or so. It scraped information from Flickr and other sources to generate a mood for the world. It was daft, part of an evil plan I had that involved an animatronic monkey and concept art ... let's not go there.

So, we've got our hypothetical trading system to which I've also added some metrics. I'm now going to turn this map into a flow and add the metrics. From below, the trade feed creates the list of trades and is governed by the number (#) of trades. The trade feed is a Lambda function and so there is a cost to it.  Each trade is run through the risk, mood and finally recommendation system - each creating their own functional costs. The recommendation system provides a list of recommended trades (#recommended) which impacts the trading engine and the stock portfolio system.

Yes, this is a very basic setup. You can argue with the map / flow diagram as much as you wish. Certainly in most banks then almost every component is treated as something relatively novel as if no other bank manages risks, trading, makes recommendations etc. In fact, from experience they usually have huge numbers of custom built systems all doing the same thing i.e. a single bank can often have a few hundred custom built risk management systems. But let us pretend we're working for some relatively sane bank.

You can see from the above we have a cost for each of the systems such as trade feed = #trades x average cost of the trade feed lambda function. Most banks have no idea what individual functions within their organisation cost, they have no clear way to calculate this but let's ignore that (along with the duplication and bias i.e. custom building what's a commodity). We're sane remember!
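As a quick illustrative sketch of that calculation (the component names come from the map but every volume and price below is an invented figure, not a real AWS rate), the per-component cost falls straight out of the flow metrics:

```python
# Illustrative cost model for the trading system flow.
# All invocation counts and per-call costs are made-up numbers.

components = {
    # name: (invocations per day, average cost per invocation in $)
    "trade_feed":     (100_000, 0.000002),
    "risk_system":    (100_000, 0.000010),
    "mood_system":    (100_000, 0.000005),
    "recommendation": (100_000, 0.000008),
}

def daily_cost(invocations, unit_cost):
    """Cost of a component = number of calls x average cost per call."""
    return invocations * unit_cost

costs = {name: daily_cost(n, c) for name, (n, c) in components.items()}
total = sum(costs.values())

for name, cost in sorted(costs.items(), key=lambda kv: -kv[1]):
    print(f"{name:15s} ${cost:.2f}/day")
print(f"{'total':15s} ${total:.2f}/day")
```

The point isn't the arithmetic, it's that functional billing makes this arithmetic possible at all: each component's cost is directly observable rather than buried in a shared hardware budget.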

Now let us suppose that AWS launch some form of Lambda market place i.e. you can create a lambda function, add it to the API gateway and sell it through the market place. PS I think you'll find they've just done that - Amazon API gateway integrates with API marketplace and Lambda integrates with API gateway.  I haven't had a chance to play but it'll become clear pretty soon.

So, you're thinking about building the above trading system and you notice that someone else is providing an API which provides a risk system (or maybe components of it). Hmmm, I could use that instead of writing it. Cue gnashing of teeth.

You'll probably get a memo from security about the dangers of using third party code they can't check and extolling the benefits of open source. The memo will probably come as a pdf sent via office 365 mail without a trace of irony. What they mean is they don't trust the source. Roll back to 2006 and the various "I wouldn't trust it with production" that applied to AWS EC2. The fact is, trusted sources will appear over time. For startups, you'll be more adventurous which is also why you'll probably end up owning those other companies.

The chances are that huge amounts of your trading system (if broken down and you spent more than five minutes on it) could end up being provided as lambda functions from third parties. I've drawn this in the map. Along with stitching it all together, you will probably build something that is genuinely different e.g. the mood system.

Of course, some of your development team won't be happy with building the mood system and combining component services from third parties (despite all the talk about micro services). They will argue they can do a better job of making a trading engine. The beauty of functional billing is you can say - "prove it!". You have the costs per function call. By the way, if they can do a better job then you probably want to be selling it on the marketplace and making sure you're paying them enough that they don't leave.

In practice, people get away with the old line of "we can do a better job" because no-one can actually measure it. Most don't have a cost per function, or they think that their function is free because it's running on their own hardware (p.s. that hardware isn't really free, and neither is the power, the building cost etc).
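To make that "free" fallacy concrete, here's a back-of-the-envelope sketch where every single figure is an assumption for illustration (server price, lifetime, power, call volume), not a measurement from anywhere:

```python
# Back-of-the-envelope: the amortised cost per call of a function that
# "runs for free" on your own hardware. All figures are illustrative
# assumptions, not real measurements.

server_cost = 5_000.0               # purchase price over its lifetime ($)
lifetime_years = 3
power_and_space_per_year = 1_200.0  # electricity, cooling, rack space ($)
calls_per_year = 10_000_000         # how often this function is invoked
utilisation_share = 0.10            # fraction of the server this function uses

# Yearly cost of owning the box, then the share attributable to this function
yearly_cost = server_cost / lifetime_years + power_and_space_per_year
cost_per_call = (yearly_cost * utilisation_share) / calls_per_year

print("apparent cost per call: $0.00")
print(f"actual cost per call:   ${cost_per_call:.8f}")
```

The number itself is tiny, which is exactly why nobody bothers to compute it; the argument in the post is that once billing is per function, the comparison stops being hypothetical.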

Any Amazon marketplace around such functions will be a two factor market (consumers and providers) and will have strong network effects. The more companies start consuming the functions, the more providers will want to build functions and the more consumers this will attract. Pretty soon, rather than working for a company where you're writing your thirtieth consumer authentication service (to go along with the other 29 scattered throughout the place) and copying and pasting the same code, or at least what you think is the same code, in this future you'll just be using a service off the marketplace. That marketplace is your service repository.

If you were under the impression that companies used to waste lots of hardware with servers sitting around doing almost nothing (loads of 10% etc) before cloud, just wait until you lift the lid on software development. Ask any software engineer and they'll find you examples of huge amounts of duplication in a single organisation. By huge, be prepared for 100+ duplicated versions of the same thing counting as a "good" day in any company of decent size. Oh, and before anyone in business starts bashing software engineers ... don't get me started on the utter lack of strategy, miserable understanding of the landscape, leadership based upon gut feel and meme copying the latest trend in the HBR (Harvard Business Review) that goes on.

The future of software development will be lots of lambda functions consumed from a marketplace, stitched together with some new capability. Waste will be reduced, bias (i.e. custom building something which is already well crafted) will start to disappear and we get all that other good "financial development" stuff the last post covered. Hurrah! 

We've barely started yet. This gets a whole lot more interesting.

To explain why, I have to introduce you to an old favourite (a decade old ecosystem model) known as innovate - leverage - commoditise. The model is fairly simple: you start off with something novel and new (which is why you need pioneers); as it evolves, you leverage any pattern that is discovered to produce a useful product or library routine (for this you need a different culture, a group of settlers); and eventually the thing evolves further, becoming more industrialised (which requires a focus on commoditisation and a different culture known as town planning).

However, genesis begets evolution and evolution begets genesis. Your more evolved components enable rapid development of new things on top of them. The more evolved the subsystem, the faster the speed of developing new things. I've shown this in the map below. 

This is one of about thirty common economic patterns, so if someone from business is taunting you as a software engineer just ask them to name more than five and politely remind them that IT and the business are not different things. Anyway, you can play this game within a company using three different cultures (known as attitudes) and mimic evolution. It's extremely useful for not only encouraging development of the new but encouraging efficiency whilst not creating two warring factions and a host of other problems. However, it has a serious limitation which is your company only has a limited number of people.

What you want to do, is to get everyone else in the world acting as your pioneers. This is actually very simple, you provide the industrialised components as public APIs.  This is best explained in a more circular form, using the trading system.

Your town planners provide a utility coding platform. A whole bunch of other people and companies outside your organisation (i.e. your ecosystem) start using this to build all sorts of things. You provide a marketplace that enables some of them to sell risk system / trading engines to others. Within this entire ecosystem, there will also be people building genuinely new and novel stuff.

Now, everything consumes your platform and so you also get realtime consumption information from every angle. As I've mentioned above, you've got a two factor market with all those nice network effects causing the ecosystem to grow rapidly. The bigger the ecosystem then the more economies of scale you get, the more new stuff being built (i.e. others pioneering) and the more consumption data you get from its use.

The trick is, you use the consumption data to find interesting patterns (i.e. your own settlers leverage all the consumption data to find what consumers really want) and you use this to build new industrialised components. These components make the entire system even more attractive.

By leveraging consumption data you're giving the ecosystem what it wants, you've got increasing efficiencies of scale and your entire ecosystem is also acting as your free research and development department. The more industrialised components you provide, the higher up the stack you go (EC2, S3, Lambda etc) and the more people you attract. A double whammy of two factor market and ILC - it's a killer!

So when I look at my trading system, as time goes on not only will more and more of the components be provided via the AWS marketplace but, if AWS is playing an ILC game, many will become industrialised components provided by AWS itself. The marketplace is just a source of future potential AWS components and, on top of this, all the novel exciting stuff (which is directly giving early warning to AWS through consumption data) is just a source of future marketplace components. I've shown an example of this in the map below.

The benefits to consumers i.e. those trying to build stuff will be overwhelming. Amazon will continue to accelerate in efficiency, customer focus and apparent innovation despite the occasional gnashing of teeth as they chew up bits of the software industry. Have no doubt, you can use this model to chew up the entire software industry (or the duplicated mess of bias which calls itself a software industry) and push people to providing either components sold through the marketplace or building actually novel stuff.

Now most executives especially in the software industry will react just as they did with cloud in 2006/07 by trotting out the usual layers of inertia to this idea. It'll never happen! This is not how software works! It's a relationship business! Security! Prior investment! Our business model is successful!

There are ways to counter this play but ... oh, this is going to be such fun.

Thursday, November 24, 2016

The map is not the territory

As the saying goes all models are wrong, some are merely useful. A map is simply an imperfect representation of the territory. This is actually essential for usefulness. A perfect map of France would be a 1:1 scale map at which point it is the size of France and in effect useless. All maps are approximations.

There are a number of discrete characteristics that are essential to any map. These are

1) visual. It’s not a verbal story.

2) context specific. It is a map of specific landscape, it’s not a general map that applies to everything i.e. France is not the same as Spain.

3) position. You can see the position of relevant components (or features) on the map. This requires two things:- first, that you have components. Second, you have some form of anchor. Position is relative to something else and in the case of a geographical map then the anchor is the compass i.e. this hill (a component) is north of that feature. In the case of a game like Chess then the anchor is the chess board itself and a piece (a component) could be at position C1 or A2 etc.

4) movement. With a map you can see where components are moving (assuming they are capable of moving) and where they could move to i.e. the constraint of possibilities. Hence, I can see my infantry troops moving across the map and understand the barriers which force them to change direction i.e. troops walking off a cliff is not a good idea.

In business, you can use a Wardley map (these are provided as creative commons) to describe the landscape. It’s visual, it is context specific (i.e. this business or that industry), it has position of components (on a value chain) relative to an anchor (the user need) and lastly you can see movement. I’ve provided an example below. It also has some advanced mapping characteristics e.g. flow, type and climatic patterns.

A map

Now, most companies use “maps” that aren’t maps i.e. they lack one of the basic characteristics e.g. business process maps, value stream maps, customer journey maps, mind maps … there’s a long list of things called maps which really aren’t. This doesn't mean they are not useful; they are, except from the point of view of effectively learning about the territory. These characteristics of a map are essential to learning, whether it’s the rules of the game (climatic patterns), doctrine (universally useful approaches) or context specific gameplay.

But what if my map is wrong! 

First, all maps are wrong, they are all approximations. What you mean to say is "What if my map is badly wrong?" 

Well, a map that is badly wrong can be quite dangerous. There’s a long history here of dangerous maps and poor situational awareness, books like Topographical Intelligence in the American Civil War are a worthy read. But there’s also plenty of examples of armies charging into a battle with no map and no understanding of the territory and the disastrous results that ensue - Ball’s Bluff, Little Big Horn.

The difference here is that even a wrong map provides you with an opportunity to learn. Without maps, you can never learn the territory, the rules of the game, what context specific play works and what is universal. You can’t even effectively communicate with others over the territory.

It’s true that maps are not the territory but if I’m going to lead a significant force against an opponent then I’d rather have a map of what we do know about the territory (even if parts of it says “here be dragons” or “we don’t know what’s in this bit”) than to charge in blindly as if everything is unknown. 

Wednesday, November 23, 2016

Why the fuss about serverless?

[The more edited version I've posted to Medium]

To explain this, I’m going to have to recap on some old work with a particular focus on co-evolution. 


Let us take a hike back through time to the 80s/90s. Back in those days, computers were very much a product and the applications we built used architectural practices that were based upon the characteristics of a product, in particular mean time to recovery (MTTR).

When a computer failed, we had to replace or fix it and this would take time. The MTTR was high and architectural practices had emerged to cope with this. We built machines using N+1 (i.e. redundant components such as multiple power supplies). We ran disaster recovery tests to try and ensure our resilience worked. We cared a lot about capacity planning and scaling of single machines (scale up). We cared an awful lot about things that could introduce errors and we had change control procedures designed to prevent this. We usually built test environments to try things out before we were tempted to alter the all important production environment.

But these practices didn’t just magically appear overnight, they evolved through trial and error. They started as novel practices, then more dominant but divergent forms emerged until we finally started to get some form of consensus. The techniques converged and good practice was born.  Ultimately these were refined and best architectural practice developed. In such confident days, you'd be mocked for not having done proper capacity planning as this was an expected norm.

Our applications needed architectural practices that were based upon (needed) compute which was provided as a product. The architectural norms that became “best practice” were N+1, scale up, disaster recovery, change control and testing environments and these were ultimately derived from the high MTTR of a product. I’ve shown this evolution of practice in the map below. Normally with maps I just use the description of evolution for activities, it's exactly the same with practice but with slightly different terms e.g. novel, emerging, good and best rather than genesis, custom, product and commodity.

Map - Evolution of Architectural Practice

The thing is, compute evolved. As an activity then compute had started back in the 1940s in that uncharted space (the genesis of the act) where everything is uncertain. We then had custom built examples (divergent forms) and then products (convergence around certain characteristics with some differentiation between them). However, compute by the early 2000s had started to transform and become more commodity like with differentiation becoming far more constrained, the activity itself becoming far more defined. In this world a server was really about processor speed, memory, hard disk size, power consumption and how many you could cram in a rack. In this world we built banks of compute and created virtual machines as we needed them. Then we got public utility forms with the arrival of AWS EC2 in 2006.

The more industrialised forms of any activity have different characteristics to early evolving versions. With computing infrastructure then utility forms had similar processing, memory and storage capabilities but they had very low MTTR. When a virtual server went bang, we didn’t bother to try and fix it, we didn’t order another, we just called an API and within minutes or seconds we had a new one. Long gone were the days that we lovingly named our servers, these were cattle not pets.

This change of characteristics enabled the emergence of a new set of architectural principles based upon a low MTTR. We no longer cared about N+1 and resilience of single machines, as we could recreate them quickly if failure was discovered. We instead designed for failure. We solved scaling by distributing the workload, calling up more machines as we needed them - we had moved from scale up to scale out. We even reserved that knowing chortle for those who did "capacity planning" in this world of abundance. 

Map - Emergence of a new practice

We started testing failure by the constant introduction of error - we created various forms of chaos monkeys or masters of disasters that introduced random failure into our environments. One off disaster recovery tests were for the weak, we constantly adapted to failure. With a much more flexible environment, we learned to roll back changes more quickly, we became more confident in our approaches and started to use continuous deployment. We frowned at those that held on to the sacred production and less hallowed testing environments. We started to mock them.

These novel practices - scale out, design for failure, chaos engines and continuous deployment among others - were derived from an increasingly low MTTR environment and such practices were simply accelerated by utility compute environments. Our applications were built with this in mind. The novel practices spread becoming emergent (different forms of the same principles) and have slowly started to converge with a consensus around good practice. We even gave it a name, DevOps. It is still evolving and it will in turn become best architectural practice. 

What happened is known as co-evolution i.e. a practice co-evolves with the activity itself. This is perfectly normal and happens throughout history. Though steel making itself industrialised, we can still produce swords (if we wish) but we have in most part lost the early practice of forging swords. One set of practices has been replaced with another.

I’ve shown the current state of co-evolution in compute in the map below. The former best architectural practice we now call "legacy" whilst the good (and still evolving) architectural practice is called "devops".

Map - Co-evolution of DevOps

This transformation of practice is also associated with inertia i.e. we become used to the “old” and trusted best practice (which is based upon one set of characteristics) and the “new” practice (based upon a more evolved underlying activity) is less certain, requires learning and investment. Hence we often have inertia to the underlying change due to governance. This was one of the principal causes of inertia to cloud computing.

Furthermore, any applications we had which were based upon the “old” best practice lacked the benefits of this new, more evolved world. These benefits of industrialisation always include efficiency, speed of agility and speed of development in building new things. Our existing applications became our legacy to our past way of doing things. They needed re-architecting but that involves cost and so we try to magic up ways of having the new world but just like the past. We want all the benefits of volume operations and commodity components but using customised hardware designed just for us! It doesn’t work; the Red Queen eventually forces us to adapt. We often fight it for too long though.

This sort of co-evolution and the inevitable dominance of a more evolved practice is highly predictable. We can use it to anticipate the new forms of organisation that emerge as well as anticipate the changes in practice before they hit us. It’s how, back in Canonical in 2008, we knew we had to focus on the emerging DevOps world and to make sure everyone (or as many as possible) that was building in that space was working on Ubuntu. We exploited this change for our own benefit. As one CIO recently told me, one day everyone was talking about Red Hat and the next it was all Cloud plus Ubuntu. That didn't happen by accident.

Complicating the picture a bit more

Of course, the map itself doesn’t show you the whole picture because I've deliberately simplified it to explain co-evolution. Between the application and the computing infrastructure (with its architectural practice) is another layer - the platform.

Now the platform itself is evolving. At some point in the past there was the genesis of the first platforms. These then evolved to various divergent but still uncommon custom built forms. Then we had convergence to more product forms. We had things like the LAMP stack (Linux, Apache, MySQL and Perl or Python - pick your poison).

Along with architectural practice around computing infrastructure, there was also architectural practices around the platform. These were based upon the characteristics of the platform itself. From coding standards (i.e. nomenclature) to testing suites to performance testing to object orientated design within monolithic program structures. The key characteristic of the platform was how it provided a common environment to code in and abstracted away many of the underpinnings. But it did so at a cost, that same shared platform. 

A program is nothing more than a high level function which often calls many other functions. However, in general we encoded these functions altogether as some monolithic structure. We might separate out a few layers in some form of n-layer design - a web layer, a back end, a storage system - but each of these layers tended to have relatively large programs. To cope with load, we often replicated the monoliths across several physical machines.

Within these large programs we would break things into smaller functions for manageability but we would less frequently separate those functions onto different platform stacks because of the overhead of all those stacks. You wouldn't want a machine sitting there with an entire platform stack just to run one function which was rarely called. It was a waste! In the map below I've added the platform and the best practice above the platform layer.

Map - Evolution of Architectural Practice (platform)

In 2005, the company I ran was already using utility like infrastructure. We had evolved early DevOps practices - distributed systems, continuous deployment, design for failure - and this was just the norm for us. However, we had also produced a utility coding platform, which happened to allow developers to write entire applications, front and back end in a single language - JavaScript. 

As a developer you just wrote code, you were abstracted away from the platform itself, you certainly had no concept of servers. That every function you wrote within your program could be running in a different platform stack was something you didn’t need to know. From a developer point of view you just wrote and ran your program and it called other functions. However, this environment (known as Zimki) enabled some remarkable new capabilities from distribution of functions to billing by function. The change of platform from product to utility created new characteristics that enabled new architectural practices to emerge at this level. This is co-evolution. This is normal.

These new practices I've nicknamed FinDev for now. The "old" best architectural practices, well, that's legacy. I've drawn a map to show this change.

Map - Co-Evolution of Architectural Practice (platform)

The more mundane of these architectural changes is it encourages componentisation, the breaking down of complex systems into more discrete and re-used coding components provided as services to others. In Zimki, every function could be exposed as a web service through a simple “publish” parameter added to the function. Today, we use the term micro services to describe this separation of functions and provision as web services. We’re moving away from the monolith program containing all the functions to a world of separated and discrete functions. A utility platform just enables this and abstracts the whole underlying process from the developer.
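Zimki itself is long gone, so here's a purely hypothetical sketch of the idea in Python (the `publish` decorator, the register and the dispatcher are all my invention for illustration, not Zimki's actual API): mark a function as published and it becomes callable as a discrete web service.

```python
# Hypothetical sketch of exposing individual functions as web services,
# in the spirit of Zimki's "publish" parameter. Not a real framework.

PUBLISHED = {}  # a toy service register: function name -> callable

def publish(fn):
    """Mark a function as an externally callable service."""
    PUBLISHED[fn.__name__] = fn
    return fn

@publish
def assess_risk(trade_value):
    # A stand-in for a real risk calculation.
    return "high" if trade_value > 1_000_000 else "low"

def handle_request(function_name, *args):
    """A minimal dispatcher, standing in for an API gateway."""
    if function_name not in PUBLISHED:
        raise KeyError(f"no such published function: {function_name}")
    return PUBLISHED[function_name](*args)
```

The register is the interesting part: it is a crude version of the discovery mechanism mentioned below, the thing that lets other teams find and re-use `assess_risk` instead of writing their own.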

The next mundane point is it encourages far greater levels of re-use. One of the problems with the old object orientated world was there was no effective communication mechanism to expose what had been built. You’d often find duplication of objects and functions within a single company let alone between companies. Again, exposing as web services encourages this to change. That assumes someone has the sense to build a discovery mechanism such as a service register.

Another, again rather trivial point is it abstracts the developer further away from the issues of underlying infrastructure. It’s not really “serverless” but more “I don’t care what a server is”. As with any process of industrialisation (a shift from product to commodity and utility forms), the benefits are not only efficiency in the underlying components but acceleration in the speed at which I can develop new things. As with any other industrialisation there will be endless rounds of inertia caused by past practice. Expect lots of gnashing of teeth over the benefits of customising your infrastructure to your platform and ... just roll the clock back to infrastructure as a service in 2007 and you'll hear the same arguments in a slightly different context.

Anyway, back to Old Street (where the company was) and the days of 2005. Using Zimki, I built a small trading platform in a day or so because I was able to re-use so many functions created by others. I didn’t have to worry about building a platform and the concept of a server, capacity planning and all that "yak shaving" was far from my mind. The efficiency, speed of agility and speed of development are just a given. However, these changes are not really the exciting parts. The killer, the gotcha is the billing by the function. 

Billing by function fundamentally changes how you do monitoring. When I provided a service to the world, users of my program could follow very different paths through it. These we call flows. Depending upon their flow in our system then some functions can be called more frequently. Billing by the function not only enables me to see what is being used but also to quickly identify costly areas of my program. I would often find that one function was causing the majority of the cost because of the way I had coded it. My way of retrieving trades in my program was literally killing me with cost. I could see it, I could quickly direct investment into improving that one costly function and reduce the overall cost. Monitoring by cost of function changes the way we work - well, it changed me and I’m pretty sure this will impact all of you.
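A minimal sketch of what that kind of monitoring looks like (the billing records below are invented; real per-function billing data would come from your platform's metering):

```python
# Aggregate per-function billing records to find the costly hotspot.
# The records are invented for illustration.
from collections import defaultdict

billing_records = [
    # (function name, cost of one invocation in $)
    ("get_trades", 0.004), ("get_trades", 0.005), ("get_trades", 0.004),
    ("render_page", 0.0002), ("save_portfolio", 0.0004),
    ("render_page", 0.0002), ("get_trades", 0.005),
]

totals = defaultdict(float)
for name, cost in billing_records:
    totals[name] += cost

costliest, cost = max(totals.items(), key=lambda kv: kv[1])
print(f"direct investment at: {costliest} (${cost:.4f})")
```

This is the whole trick of monitoring by cost of function: one sort of one table tells you exactly which piece of code to spend engineering effort on.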

However, this pales into a shadow compared to the next change. This we will call worth based development and to explain it, I need to give you an example and we need to go further back in time.

Worth based development

In 2003, the company that I ran built and operated small sized systems for others. There were no big systems, these were more of the £100k — £2M scale covering a few million users. Our clients usually wanted to write a detailed specification of exactly what they needed to ensure we delivered. That doesn’t sound too bad but even at this small scale then some of the components in these projects would be in the uncharted space requiring exploration and experimentation and hence no-one knew exactly what was wanted. Unfortunately, back then, I didn’t have the language to explain this. Hence we built and operated the systems and inevitably we had some tension over change control and arguments over what was in or out of a particular contract.

During one of these discussions, I pointed out to the client that we were sitting around a table arguing over what was in or out of a piece of paper but not one of us was talking about what the users of the system needed. The contract wasn’t really the customer here; the client’s end users were. We needed to change this discussion and focus on the end user. I suggested that we should create a metric of value based upon the end user, something we could both work towards. The idea fell on deaf ears as the client was pre-occupied with the contract but at least the seed was planted. It wasn’t long after this that another project provided an opportunity to test this idea. The client gave me a specification and asked how much would it cost to build a system to do this? I replied — “How does free sound?”

They were a bit shocked but then I added “However, we will have to be paid to operate the system. We can determine a measure of value or worth and I’ll get paid on that”. There was a bit of um and ah but eventually we agreed to try out a method of worth based development.

In this case, the goal of the system was to provide leads for an expensive range of large format printers (LFPs). The client wanted more leads. Their potential end users wanted a way of finding out more on these printers along with a way of testing them. I would build something which would marry the two different sets of needs. But rather than the client paying up front and taking all the risk, I would build it for free and take a fee on every new lead created. We (as in the client and my company) were no longer focused on what was in or out of a contract but on a single task of creating more leads. We both had an incentive for this. I also had a new incentive for cost effectiveness because the more efficient I made the system, the more profit I retained.

With a worth based approach, I have a strong incentive to:
  • reduce the operational cost of the project, because the cheaper it is, the more profit I make.
  • provide reliability, because if the system goes down, I'm not making any money.
  • ensure the system maximises the value metric, because the more it does, the more money I make.

So, let us map this out 

Map - the system

The map begins with our client who has a need for more leads which hopefully leads to other companies buying their product. The conversion from lead to actually purchasing a printer is beyond the scope of this project as that was within the client’s sales organisation. We’re focused solely on generating leads. The other type of user in this map is the consumer who hopefully will buy one of these expensive printers. They have different needs, they want to find out about the right sort of printer for their commercial operations and to test it before buying something they will use. In this project, we’re aiming to provide an online mechanism for the consumer to find out about the printer (a microsite) along with a method to test it (the testing application).

The test is a high resolution image that the potential customer uploads and which is then printed out using the printer of their choice. Their poster (this is large format) would then be distributed to the potential consumer along with a standard graphical poster (showing the full capabilities), relevant marketing brochures and a sales call arranged. Each of the components on the map can expand into more detail if we wish. 

From the map, we hope to have visitors to our microsite which will extol the virtue of owning a large format printer and this hopefully persuades some of these visitors to go and test it out. The act of turning a visitor into an actual lead requires the user to test a printer. So we have multiple conversion rates e.g. from microsite to testing application and from visitor to lead. At the start these will be unknown but we can guess.

Normally, operating a microsite requires all those hard to calculate costs of how much compute resource I'm using. Originally, the platform space was a source of headaches due to my inability to provide a variable operational cost for application use. This was 2003 and I had to worry about capacity planning and all that other "yak shaving". However, let us revisit this in a modern setting. The platform has evolved towards more of a utility service, especially with systems like AWS Lambda. In such a utility platform world, your application is simply a function running on the platform and I'm charged for use. The operational cost of my microsite is basically the number of visitors × the average cost of the microsite function. Remember, an application consists of many functions and users can navigate around it, which means some "wandering" users turn out to be more expensive than others. But we can cope with that by taking an average for our microsite.
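That cost model is simple enough to write down directly. A minimal sketch, assuming a figure of 10,000 visitors and an average per-visit cost of $0.0004 (both invented for illustration):

```python
def microsite_cost(visitors, avg_cost_per_visit):
    """Variable operational cost: visitors x average functional cost per visit.
    No capacity planning, no idle servers - cost only accrues with use."""
    return visitors * avg_cost_per_visit

# Assumed figures: 10,000 visitors at an average of $0.0004 per visit
cost = microsite_cost(10_000, 0.0004)
```

The "wandering user" problem is absorbed into `avg_cost_per_visit`: some visits call more functions than others, but the average gives a workable per-visitor price.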

The same will apply to my "test the printer" (testing) application but in this case the users will include converted visitors from the microsite along with those who directly visit. Every use of the testing application (a function) will incur a cost. But as with the microsite, this is a variable. Of course, the actual functional cost of the testing application could be wildly different from the microsite depending upon what the applications did and how well the code was written but at least we would have a granular price for every call.

When you look at a map, there can be many forms of flow within it - flows of users, revenue or risk. For example, if the utility platform dies due to some catastrophic event then it'll impact my microsite and my testing application, which will impact the consumer needs and stop any lead generation. This would incur a financial penalty for me in terms of lost revenue. Equally, a user has many paths they could travel; for example, they could go to the microsite and never bother to go to the testing application, thereby incurring cost but no revenue. Nevertheless, I can take these flows and create a business model from them.

Map - the business model

This is like manna from heaven for someone trying to build a business. Certainly I have the investment in developing the code but with the application being a variable operational cost, I can make a money printing machine which grows with users. It also changes my focus on investment - do I want to invest in increasing marketing for more users, or the conversion rate, or maybe the testing application is so badly written (or a function within it) that investing in coding improvement will bring me better returns? Suddenly, the whole way I build a business and invest is changed.
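To make those investment trade-offs concrete, here is a toy worth-based model. Every number in it is an assumption picked for illustration (conversion rates, per-visit costs, fee per lead), not from the actual project:

```python
def monthly_profit(visitors, site_cost, test_rate, test_cost,
                   lead_rate, fee_per_lead):
    """Toy worth-based model: revenue from leads minus variable platform cost."""
    testers = visitors * test_rate    # visitors converted to the testing app
    leads = testers * lead_rate       # testers who become actual leads
    revenue = leads * fee_per_lead    # I get paid per lead, not per contract
    cost = visitors * site_cost + testers * test_cost
    return revenue - cost

# Assumed: 10k visitors, $0.0004/visit, 5% try the printer at $0.002/test,
# half of testers become leads, $40 fee per lead
base = monthly_profit(10_000, 0.0004, 0.05, 0.002, 0.5, 40.0)
```

With a model like this you can compare options directly: rerun it with more visitors (marketing spend), a higher `test_rate` (conversion work) or a lower `test_cost` (code improvement) and see which lever returns the most per pound invested.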

The co-evolution of practice around the platform, from componentisation to code sharing to financial monitoring to increases in agility and efficiency, is a pretty big deal as we enter this serverless world. But the new business models around worth based development and the collision of finance and development will literally knock your socks off. Hence the moniker "FinDev". Beyond the initial investment in coding, I can create an almost entirely variable cost business model and redirect investment to maximise returns in ways that most of you have never experienced. I know, I've been there.

These emerging practices will spread despite the inertia. The path of adoption will be a punctuated equilibrium, as with all points of industrialisation. This means the growth is exponential: you'll barely notice it until it gets to a few percent of the market and then, in the next few years, it will take over. On average the timescale for this process is 10-15 years, so expect to see the whole world being overtaken by serverless by 2025. These "FinDev" architectural practices will rapidly become good and then best but what their exact form will be, we don't know yet. We're nowhere near reaching consensus and they still have to evolve and refine.

But serverless will fundamentally change how we build business around technology and how you code. Your future looks more like this.

Map - future of development

You thought devops was big but it's chicken feed compared to this. This is where the action will be; it'll be with you quicker than you realise and yes, you'll have inertia. Now is not the time to be building a DevOps team and heading toward IaaS; you've missed that boat. You should be catching this wave as fast as you can.

Now, a couple of final words.

Containers - they are important but ultimately invisible subsystems and this is not where you should be focused. 

Cloud Foundry -  it's really important they move up the stack and create that marketplace otherwise AWS Lambda et al will own this space. 

DevOps - a lot of what people are trying to build in this space will itself become invisible and if you're building internally then possibly legacy. It's below the line of where you need to be. 

One final note, for those using a pioneer - settler - town planner structure. If you're providing a platform then your town planners should be taking over this space from the settlers (from platform through devops to infrastructure). Unless you've got scale, you should be planning to push most of this outside of the organisation and focusing the organisation around the platform.

Your pioneers should be all over the new practices around utility platform. They should be building apps around this space, working out how to combine finance and development into new models, how to build service repositories etc. Experiments in this space should be going wild. 

Your settlers, along with helping the town planners take over any existing efforts from IaaS to PaaS, now need to start hunting through all the novel and emerging practices, both internal and external, that will develop around utility platform efforts. They need to be looking for re-occurring patterns and what might have some legs and be useful. They need to be looking for those spaces with potential, finding the MVP for your new line of products. Miss this boat and you'll be kicking yourself come 2025.

Map - PST

P.S. For everyone's sake, someone please come up with a better name than serverless. The original Zimki service was described as FaaS (Framework as a Service) back in 2006. Unfortunately a bunch of hapless consultants morphed the terminology into PaaS (Platform as a Service), which in many areas has become next to meaningless. This has now morphed into FaaS (Function as a Service). It's all the same thing, unless you're a consultant or vendor trying to flog your new term as having meaning. It's all about utility platforms where you just code, where billing is as granular as possible (e.g. down to the function) and you don't give two hoots about "yak shaving" (pointless tasks like capacity planning or racking servers etc).

Monday, November 21, 2016

How to master strategy as simply as I can ...

Understand that strategy is a continuous cycle. You don't have all the information you need, you don't know all the patterns and there are many aspects of life that are uncertain ... fortunately not all is. Start with a direction (i.e. a why of purpose, as in "I wish to win this game of chess") but be prepared to adapt as the game unfolds (i.e. the why of movement, as in "should I move this chess piece or that one?").

Your first step on the journey is to understand this strategy cycle.

Step 1 - The cycle

Your next step is to observe the game as it is, i.e. the landscape. This is essential for you to be able to learn about the game, to communicate with others and to anticipate change. To observe the landscape you must have a map of this context, e.g. in chess it is the chess board and pieces, in warfare it's often a geographical map and troop movement. Any map must have the basic characteristics of:
  • being visual
  • context specific (i.e. to the game at hand including the pieces involved)
  • position of pieces relative to some anchor (in geographical maps this is the compass, in chess it is the board itself)
  • movement (i.e. how things can change, the constraint of possibilities)
In business, extremely few companies have maps. Most have things they call maps (e.g. stories, business process diagrams, strategy plans) which turn out not to be maps as they lack the basic characteristics. A simple way of mapping a business is to start with user need, understand the value chain and map it over evolution.

Step 2 - Landscape

Once you have a map, then you can start to learn the next part of the strategy cycle, i.e. climatic patterns. These are things that affect all players and can be considered the rules of the game. The more you play, the more rules you'll discover. I've added a basic list to get you started in business.

Step 3 - Learn Climatic Patterns

Even with a few basic patterns you can apply these to your map to start to learn how things could change. There will be more patterns out there but again, you'll need to keep playing the game to learn them. With a map, you visibly communicate in a common language those things you expect to change. This also enables others to challenge your assumptions, a key part of learning.

Step 4 - Anticipate

Now you have an idea of your landscape and how it can change, you'll want to start doing stuff about it. However, there are two classes of choices - universal and context specific. Universal choices are those which are beneficial to all, regardless of the context. To help you on your way I've provided a basic set which we call 'doctrine'. As with patterns, the more you play the game then the more universal forms of doctrine you'll discover.

Step 5 - Learn Doctrine

Of course, knowing about doctrine is not enough - you'll want to apply it. When it comes to doctrine then there are three basic cases:-
  • the map solves doctrine for you (e.g. having a common language)
  • you can use many maps to apply doctrine (e.g. use of multiple maps of different lines of business to reduce duplication and bias)
  • you can apply doctrine directly to a map (e.g. cell based structures, cultural forms such as pioneer - settler - town planner)

Step 6 - Apply Doctrine

The other class of choice is context specific. You will learn that there exist many approaches you can deploy in order to influence the map. These approaches depend upon the map and the position of pieces within it, i.e. they are not universal and you have to learn when to use them. I've provided a basic list. As with climatic patterns and doctrine, the more you play the game, the more context specific patterns you will discover.

Step 7 - Learn Context Specific Play

With your understanding of the landscape, an ability to anticipate change based upon climatic patterns and a knowledge of context specific play, you can manipulate the map. You use the map to determine where you could attack and then use gameplay (e.g. an open source approach) to determine why you should attack this or that point over another.

Step 8 - Apply gameplay

You then decide to act. You loop around the cycle and repeat this whole exercise. As you go, you will learn more about the environment, patterns, doctrine and gameplay, becoming better at the game.

Step 9 - Loop

A few things to remember

1. When companies tell you they have maps, they don't, except in the rarest of cases. Most companies rely on things which are not maps (e.g. stories, customer journeys, business process diagrams, value stream maps) and fail to learn about the landscape. They will often use different forms of diagrams to communicate between groups, causing endless miscommunication, alignment and duplication issues. The maps above have been used from nation states to individual systems and everything in between (they are also all creative commons, share alike).

2. The map is constantly changing. These are living documents. With practice it should take a few hours to map a business from scratch and these have to adapt as you discover more. This is relatively simple if they become embedded as a means of communication. 

3. Most companies aren't playing chess when it comes to strategy (despite what you read). At best, most are simply meme copying others or running on gut feel and highest paid person's opinion. 

4. Maps are a means of learning about the environment and communicating this. It's an iterative process and it will take you years to become good at it. In fact, I've been using these maps for over a decade and I'm still learning.

5. All models are wrong, some are merely useful. 

6. Without a means of mapping the landscape (i.e. the terrain) then you can never effectively learn the terrain. Do note, when someone says the map is not the terrain, that's all well and dandy except that most companies do not have any form of map but are often reduced to telling stories (a bit like how Vikings navigated).

7. The components in the maps above represent points of capital. In the ones I've shown, I've mapped activities; however, you can map activities, practices, data, knowledge and other forms of capital.

8. "How to master strategy" ... well, I'm still learning. I'm sure someone will produce a better map at some point however for now, all I can say is that strategy seems to be a journey of constant learning. If anyone does actually become a master then I'd be pleased to read about how they did it.