tl;dr Caveat Emptor
Figure 1 – HS2
Figure 1 provides a map of an IT system related to the HS2 project. The map is created by taking a value chain of components required to meet some user need and then plotting those components against how evolved they are. This is not a permanent view but a snapshot in time, because the many components that make up the system are evolving due to competition (both supply and demand side).
The components themselves include activities, practices and data and as each component evolves it moves from one defined set of characteristics (known as uncharted) to another (known as industrialised). Since the characteristics of the component change with evolution then how you treat a component depends upon how evolved it is.
On Situational Awareness
Situations or events occur in a context i.e. there are surrounding facts pertaining to the actual event. Being aware that an event has context is known as contextual awareness. Being aware of what that context was is known as situational awareness.
Knowing that the state of evolution impacts characteristics and hence changes how a component should be treated is known as contextual awareness i.e. we know that the context, the state of evolution, has an influence. Knowing that a component like Infrastructure is in a commodity stage of evolution is situational awareness i.e. we know what the context is and how it should be treated.
Contextual and Situational awareness are not the same.
On Composability
Each component is normally part of one or more value chains. The value chain described by the map is therefore composed of many components and we describe it as being composable.
However each value chain is normally a component of one or more other value chains i.e. the output of one (e.g. brick manufacture) is normally a part of another (e.g. housebuilding). Hence the entire value chain may in fact be part of a larger composable system.
Furthermore the components of the map may themselves represent their own value chains. Hence when you look at the map in figure 1, the component ‘Web Site’ is in fact likely to be an entire value chain consisting of many components (from content and data to web farm).
On Componentisation
From figure 1, at the top of the value chain is the user need that we are attempting to provide. At the bottom of the value chain are the myriad sub-components that are consumed to enable this. There is a link between evolution and value chain. It is known as componentisation (from Herbert Simon's Theory of Hierarchy).
As components evolve to provide ever more mature and standard components then they enable the rapid development of higher order systems i.e. standard nuts and bolts enabled machines, standard building materials enabled housebuilding, standard electricity provision enabled consumer goods.
Syntax and Semantics within a Composable System
The components of a system need to interact (i.e. communicate) with each other. There are two important forms of this interaction – semantic and syntactic.
Take a simple tap. The tap has certain physical properties such as size and weight and an interface for use i.e. an angular force. We apply a clockwise force to the tap and it turns (we hope). The syntax refers to the interfaces i.e. the method by which we communicate. In this case the message between one component (such as ourselves or some controlling machine) and the tap is through the application of angular force.
Semantics refers to the understanding of meaning. For example, I apply a clockwise force because I wish the tap to turn off. My meaning is ‘turn off’, the method of communication is ‘clockwise force’. Whether the tap actually turns off or on depends upon how it’s designed and the screw thread. So it’s quite possible that I might mean ‘turn off’, I apply a clockwise force to convey this ‘message’ but the tap ‘understands’ it to mean turn on.
In the above the syntax might be understood but the meaning is not. Hence when talking about a system we often refer to the level of syntactic and semantic interoperability between components.
In computing terms, syntactic interoperability refers to such things as parameter passing mechanisms and timing assumptions and relates to the ability of one component to communicate with another. Semantic interoperability refers to the issue of common understanding or meaning of what is communicated between the different components e.g. when you pass data to an API that the receiving system understands the data in the same way.
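As a hedged sketch of that semantic gap (the endpoint names, field and thresholds below are invented for illustration): two endpoints parse exactly the same payload, so syntactic interoperability holds, but only one shares the sender's understanding of the units.

```python
import json

# The sender means degrees Celsius.
message = json.dumps({"patient": "A", "temperature": 39.2})

def celsius_endpoint(raw):
    data = json.loads(raw)                  # same parsing: syntax is compatible
    return data["temperature"] >= 38.0      # fever threshold in Celsius

def fahrenheit_endpoint(raw):
    data = json.loads(raw)                  # parses fine too...
    return data["temperature"] >= 100.4     # ...but reads the number as Fahrenheit

print(celsius_endpoint(message))     # True  - correct: 39.2 C is a fever
print(fahrenheit_endpoint(message))  # False - semantic mismatch: 39.2 F is not
```

Both calls succeed; only one draws the right conclusion. That is syntactic interoperability without semantic interoperability.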
On Substitution
Whilst composability is the ability to assemble components into various combinations of systems, substitution is the ability to substitute one of those components for another.
Take any Meccano set. You have a mass of different components that can be assembled through an instruction set (a booklet on how to build) into various forms with components which have interfaces (e.g. application of angular force) and which operate as expected (e.g. clockwise force means tighten nut).
If you look at the set you also have many duplicate items i.e. many identical nuts and bolts with the same apparent properties. However, the instruction set doesn't tell you which one of the identical pieces to use at a specific point and instead you can use any of the 'identical' pieces. This is a concept known as substitution.
I quote 'identical' because the components aren't actually the same; they are just different instances (substantiations) of the same thing i.e. you're not replacing a nut or bolt with the very same nut or bolt but with a different one which is hopefully identical. Substitution simply refers to changing one substantiation of a component in a system for another substantiation of the same component. Syntax and semantics are again important here.
For example, I can substitute one substantiation of a component (i.e. tap which is tightened by force) for a syntactically compatible substantiation of the same component (i.e. a tap which is tightened by force) but whose semantics are different and hence it operates in a different way (i.e. use of anti-clockwise force to open rather than clockwise). When you substitute a component for one that is not syntactically and semantically compatible then this often requires a change to the overall system. Such a change means work.
For example, suppose you have a chemical plant and you change one component for a syntactically but not semantically compatible version. Well, at the point your control system might want to cut off flow to a reaction chamber, you might get a nasty surprise and hence you're going to have to spend a bit of time adapting the entire system to this changed component.
The greater the degree of syntactic and semantic compatibility that exists between different versions of the component the less work is needed to change a system. Since work normally involves time and resources then the sensible answer is not to redesign a plant but to use a component which is compatible (i.e. an identical replacement).
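The tap substitution above can be sketched in code (class and function names are invented): two substantiations share an interface, so they are syntactically compatible, but one inverts the meaning of the message, so swapping it in silently changes the system's behaviour.

```python
class StandardTap:
    """Clockwise force means 'close'."""
    def __init__(self):
        self.flowing = True
    def apply_force(self, direction):               # shared interface (syntax)
        self.flowing = (direction != "clockwise")   # clockwise -> shut off

class ReverseThreadTap:
    """Same interface, opposite semantics: clockwise means 'open'."""
    def __init__(self):
        self.flowing = True
    def apply_force(self, direction):               # same interface...
        self.flowing = (direction == "clockwise")   # ...opposite meaning

def shut_off(tap):
    # The 'controller' assumes clockwise means off.
    tap.apply_force("clockwise")
    return tap.flowing

print(shut_off(StandardTap()))       # False - flow stopped, as intended
print(shut_off(ReverseThreadTap()))  # True  - still flowing: the nasty surprise
```

The controller compiles and runs against both taps; only the semantically compatible substitution avoids rework on the surrounding system.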
The same effects are important in computing. For example, let us assume I'm using Amazon EC2 and suppose I decide to change one m1.large machine instance for another. If both instances are not syntactically and semantically compatible then this will require a change to the overall system including management systems, orchestration etc.
Fortunately with Amazon EC2 both machine instances are syntactically and semantically compatible from the point of view of the user. This is even true across regions. Hence a machine instance in one region uses the same API and operates in the same way as one in another region, as opposed to there being different EC2 APIs for different regions.
Of course, if they weren't syntactically and semantically compatible then the work needed could be alleviated by the introduction of a translation system i.e. an abstraction of the actual interfaces and provision of a common interface which translates to the various incompatible forms. This is always inefficient compared to compatibility between the different substantiations of the same component.
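That translation-system idea can be sketched as an adapter (the provider classes and method names below are invented, not any real vendor's API): a common interface absorbs the syntactic differences between incompatible substantiations, at the cost of an extra layer on every call.

```python
class ProviderA:
    def boot_instance(self, size):       # one vendor's native interface
        return f"A:{size}"

class ProviderB:
    def create_vm(self, flavour):        # an incompatible native interface
        return f"B:{flavour}"

class CommonCompute:
    """Common interface that translates to whichever provider sits beneath."""
    def __init__(self, provider):
        self.provider = provider

    def launch(self, size):
        # The translation layer hides the incompatible interfaces from the
        # user, but adds indirection that native compatibility would avoid.
        if isinstance(self.provider, ProviderA):
            return self.provider.boot_instance(size)
        return self.provider.create_vm(size)

print(CommonCompute(ProviderA()).launch("large"))  # A:large
print(CommonCompute(ProviderB()).launch("large"))  # B:large
```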
I emphasise 'from the point of view of the user' because it is entirely possible that the systems Amazon runs in different regions aren't actually syntactically and semantically compatible and the Amazon EC2 API is acting as a translation layer to different underlying interfaces. From the point of view of a user you won't know unless Amazon tells you.
Unfortunately, whilst we have high degrees of compatibility within and between different regions of AWS, you have varying degrees between AWS and other cloud providers. Hence you’re always faced with an issue of work in substitution or use of a translation layer unless you stick with the one provider.
So why would you consider using another provider? Well, substitution is also important in business terms for various reasons such as pricing competition, balancing buyer and supplier power and second sourcing options. However, substitution is equally important in economic terms.
If we consider componentisation and its ability to enable us to rapidly develop higher order systems then that depends upon three factors :-
1) higher order system being composed of components
2) interactions between the components
3) substitution of components for different compatible instances
Without the ability to replace a component with a different substantiation of the same component, in Meccano terms there would only be one instance of every type of nut and bolt i.e. every nut and bolt would be different. It would be the equivalent of having only one instance of a brick and hence every brick being different. Under such circumstances it would be impossible to have common architectural plans. You could not rebuild any model that I built as you'd have to use different components. This would incur severe costs in terms of work in building any higher order system.
We even have a name for highly interoperable components that can be substituted for compatible versions - we call these commodities. It's the provision of commodity components like bricks, electricity, nuts and bolts that has enabled rapid higher order system development and created the wealth of architectural building, consumer electronics and mechanical devices that we experience today.
It should be noted that large degrees of variation in compatibility of underlying subsystems can have seriously negative effects on our ability to build. We call this sprawl. However, one nut and bolt doesn’t fit all purposes (i.e. we require specific properties for medical components) and there are also issues with systemic failure (i.e. if all rice was of the same type then the entire crop could be eliminated by a single type of pathogen).
Hence for reasons of stability and agility, systems normally tend towards a limited range of types of a component, with each type having defined semantic and syntactic interoperability with other components of the system. Each type of component also has syntactic and semantic compatibility between multiple substantiations of the same component, ideally through multiple sources.
Hence with nuts and bolts we have a range of standard types with defined properties. Each standard type is produced in volume with ‘identical’ nuts and bolts produced by multiple providers.
It's important to understand that these commodities represent a limitation of choice i.e. there is a range of types for bricks or nuts and bolts. It's that limitation of choice which enables our agility in building higher order systems.
On Degeneracy
There is another very important term we also need to consider and this (in biology) is known as degeneracy. It's the ability of one component to take on the role of another component within a system. In engineering terms, it's the ability to redeploy a system for another purpose such as turning your refrigerator into a heating device. Degeneracy is very important for adaptability to changing circumstances i.e. turning a wagon train into a defensible but makeshift fort.
On Context
If we look at figure 1 again, we can see it contains many components interacting with each other and higher order systems built from lower order subsystems. However, the overall characteristics of the components change as they evolve.
In the genesis state, a component is relatively unique. Our models of understanding are only just developing (i.e. it's a time of exploration). These components are not ideal for building higher order systems but they can consume lower order components. They generally show low levels of syntactic and semantic interoperability with other components and there is little or no substitution.
As the component evolves (due to competition) and we start to see custom-built examples then our model of understanding of what this component is matures. In this stage we normally see early forms of interoperability with other components along with attempts at building higher order systems with it. For example, the early custom-built generators (such as those of Hippolyte Pixii) were used to conduct all sorts of experiments in creating higher order systems like lighting.
As the component continues to evolve then we start to see the first products. Syntactic and semantic interoperability with other components starts to improve. Increasingly the product is used as a component of something else, for example Siemens generators being used to power machinery. Our models of understanding of what the component is become reasonably mature and even common understanding appears with expected norms of behaviour. We see early examples of substitution but syntactic and semantic compatibility between different substantiations of the same component from different providers is rare. However, the importance of communication with other systems often leads to standards for communication between components. Hence whilst products can often be used and communicated to in the same way, substitution of one for another is often complex.
As the component continues to evolve it eventually becomes more of a commodity and suitable for utility provision. Whilst our understanding of the component is very mature and expected norms commonplace, there is a period of transition during this change as we move from a product mentality (one of feature differentiation) to a commodity mentality (one of operational efficiency). In this stage, syntactic and semantic interoperability with other components is well established. Syntactic and semantic compatibility between different substantiations of the same component from different providers develops strongly over time. This provision of standard forms of the component with standard interfaces enables a rapid acceleration of building higher order systems.
The connection between these is provided in figure 2
Figure 2 - Evolution, Syntax and Semantics.
The Importance of Evolution
It’s important to understand the process of componentisation and how evolution enables this to happen in order to make sense of the change that occurs around us in business. If you have any doubts about evolution then I suggest you pick a commodity item and go to your local library and look up its history. You’ll find that the types of publications around the item have changed over time from ‘wonder’ through ‘building’ through ‘operation and maintenance’ through to ‘use’. I’ve expanded the certainty axis from a standard evolution graph (figure 3) into figure 4 in order to give you some pointers.
Figure 3 – Evolution
Figure 4 – Evolution and Type of Publication
There are those who would have you believe that evolution doesn’t exist and that the progress from genesis to commodity doesn’t happen or it's governed by some magic that only they know the secrets of. Contrary to such mysticism, evolution is not a belief but instead a model of a surprisingly simple, repeatable and discoverable process driven by competition. You can discover it for yourself by taking that trip to the library and simply looking at how things have changed e.g. from early abundant guides on 'How to Build a Radio Set' to later dominance by use such as radio listings like the Radio Times.
You live in a world where yesterday's rare wonders become today's invisible, commonplace and commodity subsystems.
Evolution also has impacts. It creates cycles of change, it causes inertia, it drives things to a more standardised form, it can be manipulated and its course accelerated through open means. If you don’t understand how things evolve then it’s practically impossible to gain strong situational awareness in business. The lack of this has demonstrable negative impacts.
A Case in Point
I wanted to specifically outline the importance of limitation of choice in the above because there is a current vogue for arguments over ‘PaaS is dead’, ‘App Containers are PaaS’ and ‘App Containers are the future of PaaS and all other PaaS is old hat’.
Most of these arguments appear to be based upon a flawed understanding of evolution, componentisation and the importance of limitation. So, I want to first look at an example of what a PaaS should be – such as Heroku, Google App Engine or Cloud Foundry.
If you examine a system like Cloud Foundry then its focus is on the developer rapidly creating higher order systems. This is achieved through a limitation of choice in underlying components i.e. specific buildpacks, defined services for common activities etc. The focus of the developer is pushed towards writing code, building data and consuming services.
Of course, with any new system that is built then the code and data can be packaged into a product and ultimately, if Cloud Foundry observes the ecosystem carefully then new services can be determined from this and provided to all.
Now, containerisation per se (examples being Docker, Warden, LXC) is a reasonable approach to isolation of compatibility issues in underlying infrastructure systems. PaaS environments like Cloud Foundry use containerisation under the covers. I’ve summarised this in figure 5 and it’s an example of what I would describe as a strong PaaS play combining componentisation, limitation and consideration of how things evolve.
Figure 5 – PaaS and Limitation
However, an alternative view is the idea of App Containerisation. Unfortunately, containerisation is often conflated with shipping containers and how that industry changed through their use. However, shipping changed not because of the introduction of containers (there were a plethora of different shaped cargo containers in the past) but through the limitation of choice and the introduction of highly standardised containers.
The problem with the idea of App Containers is the application, the framework, the configuration is all contained within it and rather than limiting choice it allows for a wide range of permutations (see figure 6). This is often promoted as flexibility but such flexibility is the antithesis of componentisation and any desired agility in creating higher order systems.
Figure 6 – App Containers
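The multiplicative effect behind that sprawl can be sketched with a back-of-envelope calculation (the counts below are invented; the point is the multiplication, not the specific numbers):

```python
# Each axis of 'flexibility' multiplies the number of distinct
# container permutations an organisation must then operate.
runtimes   = 6    # e.g. language runtimes (illustrative count)
frameworks = 8    # frameworks per runtime (illustrative count)
configs    = 10   # plausible configuration variants (illustrative count)

permutations = runtimes * frameworks * configs
print(permutations)   # 480 distinct permutations to build, patch and support

# A PaaS that limits choice to, say, six supported buildpacks
# collapses this to six.
```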
Admittedly App Containers are better than what exists in some firms today (i.e. building everything yourself where everything is flexible) but they are far weaker in comparison to a PaaS that limits choice for common activities. Now, there are ways of solving the problems of App Containers by harvesting the ecosystem to identify common App Containers and weeding out all counter examples but that’s a highly skilled and difficult game. Given that App Containers are often touted in Private PaaS environments then it is also unlikely to occur in many firms.
In all probability, as in the past when every major company seemed to have their own home-grown Linux distro, we’re likely to see major companies have their own permutations of application, development framework and configuration for every single activity even when it’s common. The sprawl will become horrendous and that sprawl will have negative consequences and cost.
The Pig and the Brick House
The best example of this was a thought model used by Herbert Simon to explain componentisation and it's based upon the story of the three pigs and the big bad wolf. Imagine you're a pig and you've decided to build a brick house.
Now unfortunately you've only time to do twelve things before the big bad wolf appears. Those things could include making a brick or cementing two things together. Anything which isn't completed and stable is blown down. The house has four walls, each wall is ten bricks high and ten bricks long.
Now, unfortunately you need to start by building bricks from raw ingredients but fortunately each brick is a complete unit. So in the first turn before the wolf turns up you can build twelve bricks. You'll need 4 (walls) x 10 x 10 (bricks) = 400 bricks. Hence it'll take 34 turns (and visits of the Wolf) to get enough bricks; in fact in the last turn you'll have 8 moves left for other stuff.
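A quick check of the story's arithmetic:

```python
import math

bricks_needed = 4 * 10 * 10        # 4 walls, each 10 bricks high and 10 long
actions_per_turn = 12              # moves allowed before each wolf visit

turns = math.ceil(bricks_needed / actions_per_turn)
spare = turns * actions_per_turn - bricks_needed

print(bricks_needed, turns, spare)  # 400 bricks, 34 turns, 8 actions left over
```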
Now, you have a problem. If you try to build the whole house in one go then whatever you put together gets blown down by the wolf when it visits. You've got 400 bricks and you can't turn that into a house in one turn! As an alternative you can first cement ten bricks together into a line. Each line is a stable component which will resist the power of the wolf. Whatever you do with the other bricks doesn't matter but at least after the next visit you have a stable line of bricks.
After ten more visits of the wolf, you have ten stable lines and so you can use the next visit to create a stable wall by cementing all ten stable lines together (one on top of the other). Repeating this process, after the 78th visit of the wolf you have 4 walls. Before the 79th visit you can cement your four walls together to create your stable component of a house and then be safe from the Wolf forever more.
The point of this thought experiment is to show that building through stable subsystems is essential for development of higher order systems. In practice it has an exponential effect.
An even better scenario for the above would be a BricksRUs service where you can just buy-in pre-built bricks, lines and walls. Of course, those pre-built components will limit your choice - a line is 10 bricks, a wall is 10 lines and a brick is a specified shape and size etc. But use of it will accelerate your rate of development. I could even have my house built in one turn, before the Wolf even gets there.
The key thing to understand is there exists a trade-off between flexibility of the lower order and agility in building higher order systems. The limitation of choice is essential for rapid development of more complex systems.
In the case of systems like Cloud Foundry then they are deliberately limiting choice by provision of defined subsystems e.g. services to be consumed, development environment to build in etc. This is a really good thing to do and is analogous to the BricksRUs example. In the case of App Containers there is no limitation nor enforcement of such and there is only flexibility - you can make any brick you want, any shape, any material etc. This is a really bad idea as the permutations here are vast and this is what causes sprawl and limits agility.
I cannot emphasise enough how important an understanding of how things evolve, how things change with evolution, the benefits of componentisation and the necessity of limitation of choice are to navigating a safe path through the turmoil of today.
PaaS has a bright future when we’re talking about Heroku, GAE, Azure, Cloud Foundry and equivalent systems. I'm also heavily in favour of industrialising apps and any common component where possible, something which CSC's Dan Hushon has recently talked about. Also, I'm very positive about the future of underlying tools and components like Docker.
Unfortunately there’s a lot of stuff out there trying to pass itself off as PaaS and a lot of misunderstanding of componentisation. Whilst components like Docker are extremely useful (and deserve to spread), there are those trying to portray it as a key defining characteristic of a PaaS.
Forget it, Docker will become a highly useful but also invisible component of PaaS and the success of PaaS will depend upon the limitation of choice and certainly not the exposure of underlying systems like Docker to end users. It’s extremely easy to take a path that will lead you down a route of sprawl. There are some very exceptional edge cases where you will need such flexibility but these are niches. I'm afraid some businesses however will probably get suckered into these dead ends.
Hence the message of today is ... Caveat Emptor.