Thursday, January 30, 2014

Looking for a Sci-Fi novel – recommendation please

I note with interest that Malta is in the process of selling EU passports to the wealthy. The scheme allows those with capital (and their dependents) to simply buy the right to EU citizenship. Of course, there are those who will argue that this will benefit inward investment and who will raise questions about those who oppose it. I take the view that investment is more than financial capital and that we should instead introduce a scheme to provide passports to those who have something positive to contribute rather than wads of cash.

What interests me, however, is that there exists a rather nasty and hypothetical scenario for the future, and it certainly seems a good subject for a Sci-Fi novel.

The scenario would be as follows: -

First, a government might oppose this purchasing of passports, but under the auspices of the TPP a company selling passports would be able to sue for loss of profits if the scheme were denied.

Second, the act turns citizenship into a commodity that can be bought … and therefore also sold. It’s the latter that is the key to this scenario.

Third, bitcoin is growing and becoming more stable. Unfortunately, with bitcoin come long-term issues of taxation. This could mean that in practice income tax ends up voluntary to an extent, and the only secure means of raising taxation will be through land and citizenship taxes.

Land tax (the mansion tax could serve as a first example of this, though it would have to expand from there) would lead to centralisation of land control in private hands and extensive lobbying efforts to limit it. Hence, a case could be made that we’re more likely to see a citizenship tax over time.

This itself creates a problem. Suppose I’m fabulously wealthy in bitcoins. There are many ways to obfuscate that wealth and proving the identity of an address holder can be made very difficult. Hence, it’s relatively easy for me to claim I’m poor when I’m not. So, how do you determine who is actually poor (and can’t pay a citizenship tax) against those who are wealthy and hide it? The answer is you can’t.

You can certainly claim I live in a nice house, drive a nice car etc but I could just as easily create charitable vehicles to provide these things. In a world of well-managed bitcoin addresses, with people specialising in wealth obfuscation, I can make it extremely difficult for any Government to determine any income or wealth I own. However, I’ll happily pay just for the right of citizenship.

But what about the poor person who can’t pay? You can’t know whether they’re wealthy and just hiding it or genuinely unable to pay. The cost of investigation will be significant and, alas, you can’t simply ignore this because everyone else would quickly learn that they could dodge the citizenship tax.

Fortunately there’s a solution. If someone can’t pay their citizenship tax then you can simply sell their passport (citizenship being a commodity) to someone who can pay the citizenship tax and buy the person a cheaper passport (i.e. citizenship) in some other cheaper part of the world. 

It’ll become a monetarist’s laissez-faire dream, enabling the transportation of poor people to poor countries and turning parts of the world into a haven for the wealthy. Of course, there’d be exceptions for military service - you need some measure of force to keep the divide - and naturally we’d have to conveniently forget that poor people make poor financial decisions not because of incompetence but because they’re poor and because of the stress that being poor creates.

Endless arguments could be created to justify such a dystopian nightmare, the reinforcement of such a social divide and the inevitable repression and dearth of social mobility.  Rather than have 500,000 people in the UK relying on food banks and unable to afford citizenship tax, you could sell their citizenship for 1M Euro (say about 50 bitcoins in the near future), buy them citizenship somewhere else for 100,000 Euros (say 5 bitcoins in the future), give them 30 bitcoins (about 600,000 Euros in the future) as a ‘thank you and on yer bike’ and pocket the remaining 15 bitcoins (say 300,000 Euros). 
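For anyone checking the arithmetic, here is a trivial back-of-the-envelope sketch; the figures and the implied rate of 20,000 Euros per bitcoin are the illustrative ones above, not a forecast.

```python
# Back-of-the-envelope arithmetic for the trade described above.
# The 20,000 Euro per bitcoin rate is implied by the figures in the
# paragraph and is purely illustrative, not a forecast.
EUR_PER_BTC = 20000

sale_price_btc  = 50   # sell the citizenship (~1,000,000 Euros)
replacement_btc = 5    # buy a cheaper citizenship elsewhere (~100,000 Euros)
thank_you_btc   = 30   # the 'thank you and on yer bike' payment (~600,000 Euros)

profit_btc = sale_price_btc - replacement_btc - thank_you_btc
print(profit_btc, profit_btc * EUR_PER_BTC)
# -> 15 bitcoins pocketed, i.e. 300,000 Euros per transaction
```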

It’ll turn citizens into a commodity to be traded, and a nice little earner at that. Which is why someone, somewhere must surely have written a Sci-Fi novel on this?

Now, I don’t think this is likely to happen – my view is the idea is ludicrous. I’d respond with about the same level of derision as if you had told me twenty years ago that the NHS could be privatised.

Still, I’d love to read a novel on this. Any recommendations?

Monday, January 27, 2014

The evolution of evolution

Back at EuroFoo 2004, I gave two talks - one on 3D printing and the other on Commoditisation of IT. In the latter talk, I discussed a generalised pattern for how things in IT evolve from 'innovation' to 'commodity'.

By 2006, I had been using this pattern in various forms at around 40 conferences worldwide. The problem with the pattern was that, though I had numerous examples of it, I had none of the mechanics underlying it. The terms I used for the pattern (and also in mapping) are shown in figure 1.

Figure 1 - The general pattern of evolution, 2004-2006


By mid 2007, by collecting and aggregating over 4,000 data points, I had managed to determine some of the mechanics of the pattern, which covers activities, practices and data. This wasn't something that could be correlated over time; instead it required a comparison of ubiquity vs certainty (see figure 2).
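To give a flavour of what that comparison looks like, here is a minimal sketch in Python; the component names, values and phase boundaries are invented for illustration only and are not the original data set.

```python
# A toy sketch of classifying components by ubiquity vs certainty rather
# than by time. Names, values and phase boundaries are invented for
# illustration; the original work aggregated ~4,000 data points.
components = {
    # name: (ubiquity 0..1, certainty 0..1)
    "novel analytics concept": (0.05, 0.10),
    "custom-built CRM":        (0.30, 0.45),
    "CRM product":             (0.65, 0.75),
    "compute":                 (0.95, 0.95),
}

def phase(ubiquity, certainty):
    """Crude placement along the evolution axis."""
    score = (ubiquity + certainty) / 2
    if score < 0.25:
        return "genesis"
    if score < 0.50:
        return "custom built"
    if score < 0.75:
        return "product (+rental)"
    return "commodity (+utility)"

for name, (u, c) in components.items():
    print(f"{name:24s} -> {phase(u, c)}")
```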

Figure 2 - Evolution, the link between ubiquity and certainty, 2007

By late 2007, I had refined the pattern and determined the driving forces of supply and demand competition (see figure 3) though I did use both figure 2 and figure 3 throughout 2008.

Figure 3 - Forces behind evolution, 2007


Whilst the actual model itself hasn't changed since 2007, the modern version I use today is a cleaner visual representation with the term 'innovation' dropped in favour of genesis in 2011, see figure 4.

Evolution is a cornerstone behind the concept of mapping which is actually where the real value can be found. Naturally mapping had to change (in the sense of the terms used) as evolution did.

Figure 4 - The evolution graph - today


However, for me, it's interesting to note how my concept of evolution has itself evolved over time to a more stable form.

The use of mapping has enabled me to discover many different patterns of strategic play, answer common management issues and explore economic cycles. It has had real world effects since I first introduced the technique (in its earlier form) in 2005. A modern map, using the axis of evolution is provided in figure 5.

Figure 5 - A Map.


I allow myself a small bit of pride in both mapping and evolution because they are both genuine pieces of original work which did not exist beforehand and they both have use. Naturally, I know full well that at some point they will be superseded.

One thing to note is the link between evolution and diffusion. Evolution is derived from a series of diffusing and ever maturing instances of an act caused by competition ... that of course should be the subject of another post.

Saturday, January 25, 2014

Should I be ‘outside-in’ or ‘inside-out’?

One of the areas of focus for LEF at this moment is the issue of outside-in business models, where an organisation places emphasis on co-creation with others and the use of ecosystems. This often leads to the question: should a company be ‘outside-in’ or ‘inside-out’?

The answer, as with other binary questions from 'should I be agile or ITIL' to 'should I be push or pull marketing', is always both. It's never an absolute but a question of relative balance.

Our tendency with companies in the recent past has been more ‘inside-out’; the current vogue represents a shift to a more balanced form and is part of the normal process of organisational evolution.

In our 2011 study on 'Learning from web 2.0', we tested a model of how new organisational forms emerge via the interplay of evolution with existing activities, co-evolution of practice and inertia to change.  As with past examples - the rise of the American system, Fordism and web 2.0 - the current industrialisation of a range of IT activities from product to utility (née cloud) has not only caused co-evolution of practice (and the emergence of devops) but has also given rise to a new organisational form with different practices, activities and focus. A list of these characteristics is provided in figure 1.

Figure 1 – Traditional versus Next Generation


This 'next generation' of companies use ecosystems extensively, are driven by big data (as opposed to simply using it), have cell-based structures (e.g. like Amazon's two pizza model), use open approaches as a weapon and demonstrate high levels of situational awareness and strategic gameplay. They also show a shift in project management practices from the extremes (e.g. agile or six sigma) to a more balanced approach of using mixed methodologies.

The different characteristics of 'next generation' are not independent but highly connected. For example, the use of mixed methodologies requires an ability to break down (deconstruct) large complex environments into components and this in turn requires and enables high levels of situational awareness (see mapping). An example of this is provided in figure 2 and the deconstruction of various aspects of HS2 IT.

Figure 2 – Mapping of HS2 IT


This high level of situational awareness is essential for new forms of gameplay such as the use of an open approach as a competitive weapon in order to change a market. It was through mapping the competitive landscape, understanding the basics of economic change and a deep understanding of gameplay that Ubuntu was able to successfully steal a march on the future against Red Hat (see figure 3).

Figure 3 – Gameplay of Ubuntu


It's the highly strategic gameplay of Canonical which is likely to be behind the recent acqui-hire of CentOS by Red Hat. Don't get me wrong, this is a sensible (but somewhat desperate) move by Red Hat to gain mindshare in the cloud given Ubuntu's near total dominance (usually estimated at 65%+ of the market) and Red Hat's almost complete absence (usually estimated at between 3% and 5%). This is a world apart from the past dominance of Red Hat on the server.

Deconstruction is also a necessary step in the creation of cell-based structures. The successful use of ecosystems requires not only deconstruction but also extensive use of big data. Furthermore the use of cloud not only requires but also has enabled the co-evolution of practice (devops) and the rise of big data. All of these things are tightly interconnected. Before you think there is a sudden appearance of these changes, it's worth noting that many of these changes have been diffusing over the last decade. In several cases we have overstretched and retreated to a more balanced view.

For example, agile development was all the rage in 2002 but by 2005 many software companies had started to learn the lesson that it wasn't suitable for all classes of problems and a more balanced / mixed approach started. Today, using mixed approaches in a single organisation is becoming essential for competition as the one-size-fits-all approach (whether agile vs six sigma or in-house vs outsource) incurs unacceptable costs. Throughout the industry, more balanced approaches of breaking down complex systems into components and using appropriate methods are growing, whether it's mapping (see figure 4) or USAF's implementation of FIST. The binary approach is becoming mixed.

Figure 4 – Mapping and use of methods.


Hence when I talk of 'outside-in' it's not that all will become 'outside-in' but instead that there is a rebalancing of the 'inside-out' approach that has become commonplace.

Of all the 'outside-in' changes probably the most interesting for me is the use of ecosystems. Alas, the term itself is as misused, and has as many different meanings, as 'innovation'. In this post I'll focus on one specific model known as ILC (innovate-leverage-commoditise) because it simultaneously embodies what I consider to be the best of both 'outside-in' and 'inside-out' approaches.

The starting point of the model is that a supplier creates a highly industrialised component for other companies to use e.g. utility electricity provision or a compute utility (such as Amazon's EC2). The ecosystem comprises those companies that consume the component, hence we often call it a component ecosystem.

The purpose of the model is not just volume operations from use of the component but to encourage other companies to ‘innovate’ in creating the novel and new e.g. building big data systems on Amazon EC2.  Those novel activities are highly uncertain but potentially of huge future reward (they are uncharted), and provision of the subsystem as a utility simply reduces the cost of production and hence encourages development. As these novel activities start to evolve through multiple waves of diffusion of ever improving systems, the supplier can spot successful growth through consumption of the underlying component. This allows the supplier to leverage the entire ecosystem to identify future change - an exercise that at scale requires big data systems. Once identified, the supplier can commoditise the activity into a new component and in effect harvest part of the ecosystem to provide a benefit to all.
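As a rough illustration of the 'leverage' step only, here is a sketch in Python; the activities, consumption figures and growth threshold are entirely made up and this is not a description of any actual supplier's system.

```python
# A rough sketch of the 'leverage' step of ILC: scan consumption of the
# underlying component across the ecosystem and flag activities built on it
# whose usage is growing unusually fast as candidates for commoditisation.
# All data, names and the threshold are invented for illustration.
monthly_consumption = {
    # activity built on the component: units consumed per month
    "novel big data service": [10, 25, 70, 180, 450],
    "niche analytics tool":   [5, 6, 6, 7, 7],
    "internal batch jobs":    [100, 102, 101, 103, 104],
}

GROWTH_THRESHOLD = 3.0  # arbitrary: flag anything growing more than 3x over the window

def harvest_candidates(consumption, threshold=GROWTH_THRESHOLD):
    candidates = []
    for activity, series in consumption.items():
        growth = series[-1] / max(series[0], 1)
        if growth >= threshold:
            candidates.append((activity, growth))
    return sorted(candidates, key=lambda item: -item[1])

print(harvest_candidates(monthly_consumption))
# -> [('novel big data service', 45.0)]
```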

An example of this is provided in figure 5. Of course, we don't know if Amazon is deliberately running such a model and harvesting its ecosystem, we can only observe that the results are similar.

Figure 5 – ILC model


Each time a component is harvested the supplier has to balance accusations of 'eating the ecosystem' against the overall benefit that the component will provide to the entire ecosystem. If you 'harvest' too much then members of your ecosystem might flee elsewhere.  Once harvested, the new component can then be used to repeat the ILC cycle with members of the ecosystem building novel and new components on top of it (whether activities, practices or data).

The success of the ILC model depends upon numerous factors including the scope of the components built (i.e. specific to an industry or widespread), ability to identify and harvest successful change (leveraging the ecosystem, a big data problem), speed of harvesting and ability to manage the ecosystem (e.g. to ensure more benefit is provided than harvested). 

If implemented successfully then the supplier benefits are huge as their apparent rate of successful innovation (the innovation in fact being done by others), customer focus (leveraging the ecosystem to identify successful and useful change) and efficiency (economies of scale) now all depend upon the size of the ecosystem rather than the physical size of the company.  Furthermore all these effects can increase simultaneously. 

Now whilst we don't know if Amazon is running such a model, the signs of ecosystem harvesting and continual increases in innovation, customer focus and efficiency are all there. This is what potentially makes Amazon such a dangerous beast. The power of Amazon is in its ecosystem and its exploitation of such.

So which bits are 'outside-in' and which bits are 'inside-out'?

Encouraging the ecosystem to innovate (and hence build potential futures) and leveraging the ecosystem to spot future success are all examples of 'outside-in' approach. The decision to harvest the ecosystem by commoditising a component is however 'inside-out'.  The latter is an act of active management and enforcement of the will of the supplier on the ecosystem. The former are acts of nurturing and sensing where the ecosystem tells the supplier what is important.

In some respects, the supplier is acting as the gardener of the ecosystem. Nurturing a crop, sensing what works, harvesting when suitable and ensuring the garden doesn't get out of control. The key difference here is the crop can move and hence the gardener has to act as the benevolent dictator of the garden ensuring the benefits of staying in the garden outweigh the threat of being harvested. A similar dilemma has faced many open source projects and the various commercial interests involved. The existence of a benevolent dictator (whether a person, organisation or company) is often a pre-requisite for success.

Learning these techniques is all part of playing the game of chess that is business and competition. Mapping itself is all about situational awareness and is inherently 'outside-in' starting with a focus on user needs to the components that a supplier has to use to meet those needs. However, the act of altering a map through strategic gameplay such as a decision to use an open approach for one component, to outsource, to insource, to erect barriers to entry or to demolish others barriers to entry is inherently 'inside-out'.  So, should I be 'outside-in' or 'inside-out' … alas, the answer is never simple and binary. Over time you'll need to become both. 

However, it's worth noting that the route to greater situational awareness is always with an 'outside-in' focus.  There is more data, more information and more awareness outside the organisation than inside and if you fail to exploit this then don't assume your competitors will return the favour.  For the time being, since many companies are dominated by 'inside-out' thinking then a strong dose of rebalancing is needed.  Hence if you ask me whether a company should have more of an 'outside-in' focus today then in almost all cases I'd strongly agree.

It is essential to understand that at this moment in time the 'centre of gravity' for the firm is shifting from 'inside-out' to 'outside-in'. It's not an absolute but a movement away from the 'inside-out' dominated approach of the past. This is why I take a great interest in our work at the LEF on ‘outside-in’ and the work of @dmoschella.

Thursday, January 16, 2014

Mapping the competitive landscape

A business is a living thing.  It consists of a network of people, a mass of different activities, practices and data along with reserves of capital including financial, physical, human and social.  It consumes, it produces, it grows and it dies.  We often attempt to understand a business through the use of box and wire diagrams whether business process maps or IT systems diagrams or … choose your poison. However, these diagrams whilst great at making connections between things do little to help us determine user needs, how things change and how we should manage them.

To make matters worse, the box and wire diagrams often have implicit information embedded in the terms and acronyms we use.  This makes them unintelligible to the uninitiated. Try asking someone from the business to decipher a diagram full of labels such as ERPM, BIM, PIMS and task management modules.  Or ask your long suffering IT folk to make sense of Ship Call Reports, Energy Balance, GAP adjustments or Peaking Inventory Projections.  Alas, within most organisations you have disparate groups with their own worldview based upon box and wire diagrams with their own terminology.  Is it any wonder that discussions between those groups are rife with alignment and translation issues?

To emphasise this point, I’m going to use a simple box and wire diagram (figure 1) for an unspecified process from which I have removed the terms - only some of you would have known what they meant anyway.  I’ll ask a few basic questions of the reader.

Figure 1 – A box and wire diagram of an unknown process



Questions 
  1. Which components in the above diagram identify user needs and whatever is produced?
  2. Component E was outsourced to a third party resulting in a high cost overrun and poor delivery. 
    • Can you give an explanation as to why?
    • Would you recommend the outsourcing of component F?
  3. It has been recommended that A, B, C and F use a cloud provider. Do you agree?
  4. It has been recommended we adopt Six Sigma throughout the project. Do you agree?
I’ll now convert the above diagram into a map.  This is simply done by first arranging the items in order of a value chain, from the visible user needs to the invisible components consumed, and then mapping each component by how evolved it is.  See Figure 2.

Figure 2 – A Map view of an unknown process



From the above it becomes far easier to answer the above questions.

Answers
  1. Components B and C are high up the value chain and likely to represent the actual user need and what is produced from the process.
  2. On outsourcing
    • Component E is at an early state of evolution and hence is unlikely to be suitable for outsourcing as no economies of scale or standardisation exists. 
    • Component F being more of a commodity is highly suitable for outsourcing.
  3. Components A, C and F, being more of a commodity, are likely to be suitable for provision by a cloud provider. Component B is not.
  4. The process consists of multiple components at different stages of evolution and hence a one size fits all method is inappropriate. In this case: -
    • Components B & E are likely to be suitable for Agile, In-House development.
    • Components A, C and F are likely to be suitable for highly structured methods such as Six Sigma and use of outsourcing.
    • Component D is suitable for using an off the shelf product.
The diagrams (the box and wire versus the map) are of the same process with no labels but the mapping view provides far more situational awareness and enables conversations on what matters and how things should be treated. Of course, adding labels adds further context as more data is always better.
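To make the idea concrete, here is a minimal sketch of a map reduced to data; the evolution scores and cut-off points are invented for illustration and the real judgement is rarely this mechanical.

```python
# A map reduced to data: each component gets a position in the value chain
# (visibility to the user) and a stage of evolution, and the evolution stage
# alone drives the same rough calls as the answers above. Scores and
# cut-offs are illustrative only.
components = {
    # name: (visibility 0..1, evolution 0..1)
    "A": (0.30, 0.90),
    "B": (0.95, 0.15),
    "C": (0.90, 0.85),
    "D": (0.45, 0.60),
    "E": (0.50, 0.20),
    "F": (0.10, 0.90),
}

def treatment(evolution):
    if evolution < 0.35:
        return "novel: build in-house, agile methods"
    if evolution < 0.70:
        return "transitional: buy/configure an off-the-shelf product"
    return "commodity: outsource or consume as utility, six sigma-style ops"

for name, (visibility, evolution) in sorted(components.items()):
    print(f"{name}: {treatment(evolution)}")
```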

A case in point is when you add data on competitors' value chains. For example figure 3 is a comparison between one company (let us say you, represented by vertical blue lines) and competitors (represented by horizontal red ones).

Figure 3 - Box and Wire comparison of competitors


Now at best, from the diagram above you can see that you're doing something called component B which the competitors are not. Maybe it's a differential, maybe it's a wasted activity? It is difficult to tell.

Now if you map both and compare (see figure 4) then a different story emerges.

Figure 4 - Mapping comparison of competitors



From the above, it is clear that component B is something novel and likely to be a potential differential. But equally there are differences in components D, E and F.  In component D you seem to be using a product whereas competitors are custom-building systems - this is likely to be an area of efficiency for you. But in components E and F you seem to be behind the game and likely to be creating a source of inefficiency.

With a mapping view, I can :-
  1. Determine what is likely to be suitable for outsourcing or insourcing
  2. Determine what should be built with agile, or bought off the shelf or driven by a six sigma approach
  3. Identify areas of efficiency, inefficiency and differential and what is suitable for cloud.
The above however is simply standard housekeeping stuff and the real power of mapping isn’t alignment and management methods but in understanding economic patterns and strategic gameplay. Mapping provides a mechanism for common understanding in an organisation and learning of such patterns.

I've been using mapping for over eight years (since the first maps of Fotango) with exceptional success and during that time I've discovered and tested many different economic and strategic patterns. At its heart, mapping simply provides a view of the chess board - you still have to learn how to play the game and unfortunately I can't condense eight years of experience into a single blog post.

However, I do run one day workshops for the members of LEF (a private research group of large companies) and occasionally I talk about these methods in public.  An hour-long video covering some of the topics can be found here and, for those willing to take a deeper dive, this uncompleted (but useful) series of posts will help.  You can always come and find me at OSCON (the greatest conference in the world), which is the one place I almost always attend, or alternatively you can search through this blog.

To give you an idea of the scope of mapping, the workshops cover such themes as -
... beyond this, it boils down to experience and developing lots of it. However, that's the point of this post, mapping is not a discrete thing but a journey just like learning to play a game of chess. Of course,  as with chess it always helps to start by looking at the board first! Also, as with chess then I'm afraid you have to learn to play the game and you can't farm this off to someone else.

Strategic gameplay is complex and it doesn't fit into a nice 2x2 diagram. There is a great wealth of patterns to be learnt, understood and applied if you can embrace the complexity of what is competition or as I like to call it, organisational warfare.  As with any competitive engagement, maps and situational awareness are critical to the outcome.

Once you start to get to grips with this, how to manage an environment, how to exploit it, how to deal with uncertainty and the arsenal of tools you can deploy then you can learn to play a good game.

Oh, and before you say 'it's an IT thing' and that you work in business ... well, I developed the technique when I was CEO of a Canon subsidiary to cope with competition and the world of change that I was experiencing.  I happen to use examples of IT companies because they generally seem to play a better game. These days I use the techniques in a wide variety of industries and governments, from giants to start-ups.

As a rule of thumb, many organisations I meet have two apparent problems which threaten their future livelihood. The first problem is called 'IT' and it often suffers from a range of difficulties from single size methods to excessive reliance on outsourcing to alignment issues. The second and usually more fatal problem is called 'The Business' and it often suffers from a range of difficulties from inertia to poor situational awareness to lack of strategic gameplay to inability to identify value. Alas, there seems to exist a tendency to always point the finger at 'IT'.

In reality the problem is the division itself, an artificial construct of how we organise by type. In the past, I've found the way of solving this problem was to get rid of the construct but that's another post for another day.

Wednesday, January 15, 2014

One to remember

In my mind, without doubt, I've just witnessed the most enlightened tweet from Neelie Kroes, the EC commissioner for Digital Agenda.
This is exactly what Neelie Kroes should be doing,  reducing barriers to competition, promoting Net Neutrality in the EU and using this as a differential with the US.

Whilst we're at it, since the US is still debating over copyright on APIs then there are a number of other things that can be done.

1) Reinforce the concept that APIs are principles and not expressions and hence are not subject to copyright.

2)  Remove any vestiges of software patents (often as disguised business process patents) throughout the EU and encourage reverse engineering due to the fast and sequential rate of innovation in the industry. Copyright protection is more than enough in this field.

The problem of privacy.

The problem of privacy can be usefully summed up with the line

perceived value of usefulness >  perceived value of any immediate privacy / control concerns

Hence, for example, when we have events such as Snowden and revelations of secret laws, our general view starts to drift towards privacy / control concerns and we start to question whether the usefulness of services such as Google exceeds them. Prior to Snowden, usefulness generally outstripped any privacy / control concerns for the majority.

The real problem of course here is marketing and perception.  Whether we like it or not, marketing is very effective at promoting perceived value of usefulness and diminishing other concerns when it comes to selling stuff.

Post Snowden, marketing will continuously tip that balance in favour of selling more products 'tailored to us' and collecting more of our information.  Our concerns will diminish - they're doing it for our benefit and our needs, aren't they?


The system which basically intrudes into highly personal details of our lives is promoted as creating values of safety, security, comfort i.e. 'Mother knows best' etc.  You could do a mock-up of the entire page with malicious hackers having taken control of your information, which would make people run a mile.  But marketing won't do this; it's not their job to provide a balanced view, it's their job to promote a specific view and even to create user needs where none existed before.

The problem you're fighting with privacy is therefore Marketing, our own perceived desires (whether real or fabricated for us) and the desire of companies to sell products and gain more access to our lives. Snowden was a fortuitous event that exposed some of the excesses of what is happening but it will be forgotten and ultimately buried. There will be niches, people with longer term memories who are not easily swayed by the bright lights of marketing - but they're niches.

If you want to fight the erosion of privacy and create an environment where users are under more control in a more egalitarian society then you either need to create commercial interests in line with this or robust legislation.  You need to be willing to use marketing to diminish the perceived value of usefulness and raise the perceived value of privacy / control concerns. 

This means getting dirty. 

You'd have to oversteer by promoting web giants as being 'Big Brother' or 'friends of criminals' and at the same time rebrand the 'Department of NO' (i.e. security) as being 'Mother', 'keeping you safe' etc.  To some extent this is already done but the balance of power is very much with the selling side of consumer services and most users are easily swayed to accept the loss of privacy in return for baubles of use.

I'm not saying you shouldn't be concerned about the loss of privacy and control - you should.  But in all likelihood privacy will be a notion of the past.  You're in an uphill struggle to prevent what is already evidently happening. Privacy and control concerns are being given away freely through choice in favour of more usefulness even when many of those useful needs are simply figments of marketing's imagination and didn't exist before marketing persuaded us we needed them.

You could take a 'pragmatic' approach of accepting the change and looking at how to minimise any risks but remember the competitive pressures push in one direction, so don't examine today's situation but work from the basis of tomorrow's.  In other words, start by considering a world where all user information, even what we would consider today as your 'private' details are not only a matter of public knowledge and discussion but accepted as such - every place you've been, every conversation you had, everything you ever looked at and every detail of your life.

Imagine a world where offers that are publicly promoted to you are influenced by the fact that six years ago you spent 2 seconds looking at a picture in some shop window (rather than the average of 1.5 seconds) and your heart rate increased slightly. Every minute detail of your life will be catalogued and referred to ... apparently, for your benefit.

Now ask yourself what needs to be put in place so that users retain some element of control?

That said, I must now get back to spying on my family with my 'mother' system - it's for their own good you know, the brochure tells me so ...

Friday, January 10, 2014

Cloud Recap + five and a bit years

tl;dr when faced with a system which claims to be PaaS then ask yourself - can I just write code, build data and consume services? If you need to do anything else other than this then it's not PaaS.

Five and a bit years ago I wrote a post called 'Cloud Recap', a summary of the previous years and the state of play in 2008. In this post, I'd like to revisit that earlier topic and also a concern I raised in 2009 on how 'Platform is being used at all layers of the stack, so we hear of cloud platforms for building IaaS and many other mangled uses of the concepts'

In the early days ('05-'07), back when we had systems like BungeeLabs, Zimki etc, we had a simpler division of the stack, though it was somewhat misguided by the tendency to create too many layers. The key was always the division of responsibility i.e. with different layers of the stack, part of the solution became someone else's problem. Of course that created new problems related to second sourcing options and the buyer / supplier relationship, and hence portability was always going to be an issue.

Since then the terms have changed, hardware is now infrastructure and frameworks are now platforms. The latter change is unfortunate because Framework as a Service clearly spelt out that you would be coding (and adding data, consuming services) in a framework whereas Platform as a Service could end up meaning a multitude of things - a coding platform, a deployment platform etc. This is what has happened and hence my concern back in '09.

Now to explain the difference and where I consider there to be a problem and divergence from the original path, I'm going to explain today's layers of the stack in terms of the old view (which I happen to think is cleaner). 

I'll first take a purist view. So, in reverse order, from figure 1

Figure 1 - An overview of the stack, old and new




The lowest layer of the stack is related to the provision of virtual hardware either through virtual machines or containers such as LXC. Along with the evolution of the activity (computing evolving from product to utility) we expected to see a co-evolution of practice (as in architectural practice). We've seen this with the shift from scale-up to scale-out, N+1 to design for failure, disaster recovery to chaos engines etc. This has given rise to the 'DevOps' movement and hence a host of tools has developed around configuration management, auto scaling, auto deployment, policies etc, from Chef to Juju. This has unsurprisingly extended to concepts such as an application store with entire images, configuration and policy information bundled together. This lowest layer of the stack is known as IaaS.

The second layer of the stack is related to simply the use of code, data and services. The underlying components including the management layers are de-coupled from the perspective of the user. The user is only concerned with writing of code, data and consumption of services. These sorts of concepts are encapsulated in systems like GAE, Heroku and Cloud Foundry. Obviously behind the scenes is a wealth of configuration management, auto-scaling, use of VMs / containers etc. This layer of the stack is known as PaaS.

The third layer of the stack (application) is related to provision of entire applications and application services e.g. salesforce. This remains relatively intact.

From a purist point of view, each of these layers would eventually be built on the one below, and each higher layer would provide increasing speed of creation of higher order systems and agility at the cost of decreasing flexibility. Hence you would also get application stores of SaaS applications built on PaaS, along with PaaS built on IaaS.

Flexibility vs speed of higher order systems creation is the inevitable trade-off that componentisation creates (a good examination of this is provided by Herbert Simon).  Now key to this and the ideas of componentisation is the minimisation of lower orders to good enough standards. So, for example in the IaaS space you would expect to see a limited range of virtual machine types (as you do with AWS). In the platform space, you would expect to see a limited range of coding environments (e.g. buildpacks).

However, there has been a divergence from this path, in particular with the idea of Application Containers. In these environments, applications are described as discrete virtual machines or containers along with configuration and policy information. Those application deployment environments and related app stores which manage these images are now unfortunately called PaaS.

Now, there's nothing wrong with application stores containing VM images or Containers or Cartridges for an application or an application component, offering autoscaling and configuration management with one click install and there have been many efforts to provide effective management of this (e.g. early CohesiveFT, JuJu, OpenShift Cartridges etc). But to encourage development of higher order systems on a plethora of different base components is foolhardy and likely to create a sprawling mess in the future.  Now, I'm not having a pop at Docker here because I happen to like Linux containers but I'm far from convinced that describing containers and the management of them as some form of development platform is wise.

You might have specific containers for specific languages, and your code and data will be bound to them - which is all good.  But it's the limitation of choice which is key for effective componentisation.  Creating a platform solely on the idea of configuring containers might give you freedom and flexibility, but it would be the equivalent of enabling a multitude of Perl compilers for different purposes. There is a significant long term cost in terms of management, portability and ultimately agility as more effort becomes focused on using the right container rather than simply coding. This is why I happen to like buildpacks and, for the record, a limited number of buildpacks as the basis for a platform.
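To illustrate why a small, fixed set matters, here is a hand-wavy sketch of buildpack detection; the detection rules are grossly simplified and real buildpacks do far more than this.

```python
# A simplified sketch of buildpack detection: the platform inspects the
# pushed code and picks one environment from a deliberately small, fixed
# set - the developer never chooses or configures a container. Real
# buildpacks (Heroku, Cloud Foundry) do considerably more than this.
import os

BUILDPACKS = [
    ("ruby",   ["Gemfile"]),
    ("node",   ["package.json"]),
    ("python", ["requirements.txt", "setup.py"]),
    ("java",   ["pom.xml"]),
]

def detect_buildpack(app_dir):
    for name, markers in BUILDPACKS:
        if any(os.path.exists(os.path.join(app_dir, m)) for m in markers):
            return name
    raise RuntimeError("no suitable buildpack - and that restriction is the point")

try:
    print(detect_buildpack("./my-app"))
except RuntimeError as err:
    print(err)
```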

In the past, there was a very VERY hard line between what used to be called Frameworks and the underlying Hardware components. In a framework, I would write code, build data and consume services and that's all. If my framework had to ask me what environment I should use then it's not a framework. This is how the original Platform as a Service environments appeared to be designed, unfortunately the term has now morphed to include all sorts of things.

I raise this because of the Gartner MQ series on Enterprise Platforms which contains many that I would describe as true 'platforms' but also others which are best described as deployment, configuration, autoscaling and image / container management environments. The latter have a role to play, certainly for single-click deployment of applications, but if you intend to use these as development platforms then I would caution you to think carefully about sprawl before you end up having to build another system to cope with a mass of different base components.

Flexibility has a cost, it is not your friend. Such choice won't benefit development any more than being able to choose from a million different types of bricks would help housebuilding.

PaaS is all about writing code, building data and consuming services ... nothing else. It's the embodiment of what is known as 'NoOps' (a fairly awful and misleading term, but then as an industry we're good at this - cloud etc).  This doesn't mean that there are literally no ops but instead that ops is hidden behind the interface.  In a PaaS world, the developer doesn't care about VMs, containers, auto scaling and configuration between devices any more than they care about what hardware is used. The developer shouldn't even normally care about buildpacks, just as long as one which will run their code exists on the platform. All the developer cares about is their code.
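As a concrete illustration of 'just write code', this is roughly everything a developer touched on a 2014-era platform such as Google App Engine (a minimal Python handler using its webapp2 framework; the platform handled everything beneath it).

```python
# Roughly all a developer wrote for a 2014-era PaaS (Google App Engine's
# Python runtime with webapp2): application code only. VMs, containers,
# autoscaling and configuration all sit behind the platform's interface.
import webapp2

class MainPage(webapp2.RequestHandler):
    def get(self):
        # Consume platform services (datastore, task queues etc.) as needed;
        # there is nothing to provision or configure at this level.
        self.response.write('Hello, world')

app = webapp2.WSGIApplication([('/', MainPage)], debug=True)
```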

Naturally there is lots of misunderstanding and inertia around these concepts, that's normal but NoOps is going to happen whether people like it or not. Get used to it. Of course, that won't stop vendors marketing endless configuration, autoscaling and deployment systems as PaaS. Certainly, all these sub components are part of PaaS but they should be invisible.

So when faced with a system which claims to be PaaS then ask yourself - can I just write code, build data and consume services? If you need to do anything else other than this then it's not PaaS.

In the future, will I be able to sue my local burger bar for irresponsibly selling me a burger?

Location services are not new but the technology around this subject has certainly developed and improved over the last decade. In particular I take an interest in iBeacon. Now, beyond the rather mundane - from virtual post-it notes, development of the attention economy, in-place transactions and enhancements to intelligent software agents, to nefarious uses and what is called MALT (micromapping, advertising, location, transaction) - there are a number of interesting possibilities.

For example, if my wearable happens to know how much I've been exercising, my heart rate and my cholesterol intake, and transfers that information to the local burger bar as I walk in, then not only can they change the menu and grass me up to my local health insurer for an unhealthy lifestyle, but will they also have a legal responsibility or duty of care to make sure they deliver healthy food? Will I be able to sue for failure to do this?

Will this encourage the formation of a black market of burger bars, off the grid with dampening technology preventing anyone snooping on my bad habits? Oh, this should be fun. The combination of wearable tech and transactional location based services has endless interesting possibilities.

Thursday, January 09, 2014

On mapping - some help needed.

I spend a great deal of time teaching people how to map the commercial landscape and exploit this to their advantage. An example map is provided in figure 1.

Figure 1 - Map from a large engineering company.


The process of mapping I've explained elsewhere but in principle - you start from user needs, then you describe the value chain that meets those needs in terms of components, and then you determine how evolved each of those components is. Maps are incredibly useful for determining how you should manage something, finding areas of inefficiency and strategic gameplay, but that's not the point of this post (for more details on mapping, the LEF has a video etc).

The axis at the bottom of a map is evolution and it describes a common pathway for how any activity, practice or data evolves from an uncharted and highly uncertain space to a more industrialised form. The path of evolution is described in figure 2.

Figure 2 - Evolution


Now, evolution is not the same as diffusion (a time based examination of adoption) but is instead derived from supply and demand side competition. It cannot be measured over time due to an uncertainty barrier known as the 'future' and the impossibility of predicting individual actors' actions.

However, there is a link between diffusion and evolution. For example, some actor will create a new activity, others will copy with custom built examples that diffuse, others will introduce products with diffusion of continually improving versions in the market until some actor introduces a commodity variety that will then diffuse. In each case adoption will be through a larger market and each variant (improved product etc) will increase certainty (i.e. maturity, feature completeness) of the act. 
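A toy sketch of that relationship follows; all parameters are invented purely for illustration.

```python
# A toy sketch of how a single line of evolution is composed of many
# diffusion curves: each successive variant (custom-built, product,
# commodity) diffuses through a larger market on its own S-curve and is
# more 'certain' (mature, feature complete) than the last. All parameters
# are invented for illustration.
import math

def adoption(t, midpoint, rate, market_size):
    """Logistic diffusion curve for one variant of the activity."""
    return market_size / (1 + math.exp(-rate * (t - midpoint)))

variants = [
    # (label, midpoint, rate, market size, certainty)
    ("custom-built", 10, 0.8, 100,     0.3),
    ("product",      25, 0.6, 10000,   0.6),
    ("commodity",    45, 0.5, 1000000, 0.9),
]

for t in range(0, 61, 10):
    row = ", ".join(f"{label}={adoption(t, mid, rate, size):,.0f}"
                    for label, mid, rate, size, _ in variants)
    print(f"t={t:2d}: {row}")
```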

A single line of evolution can consist of hundreds if not thousands of diffused examples of that activity and that evolution can take considerable time. For example if you consider the evolution of electricity, the nut and bolt and computing then you have a common pathway but over significantly different time scales :-

Electricity 
The history of electrical power generation can be traced from its genesis with the Parthian battery (around 200AD) to custom-built examples of generators such as the Hippolyte Pixii (1832) to the first products such as Siemens Generators (1866) to Westinghouse’s utility provision of AC electricity (1886) and the subsequent standardisation of electricity provision from the introduction of the first standard plugs and sockets to standards for transmission and the formation of national grids (UK National Grid, 1926).

Nut and Bolt
The genesis of the humble screw can be traced back to Archytas of Tarentum (400 BC). The principle was later refined by Archimedes and also used to construct devices to raise water.  Over the next two thousand years most screws (and any associated bolts) were cut by hand; however, demand for screw threads and fasteners created increasing pressure for a more industrialised process.  J and W Wyatt had patented such a concept in 1760 and Jesse Ramsden in 1770 introduced the first form of screw-cutting lathe.  However, without a practical means of achieving industrialisation and with no standards, the industry continued primarily as it was.  Maudslay then introduced the first industrially practical screw-cutting lathe in 1800, which combined elements such as the slide rest, change gears and lead-screw to achieve the effect.  However, whilst screws and bolts could be manufactured with interchangeable components, the lack of any standards thwarted general interchangeability. In 1841, Joseph Whitworth collected a large number of samples from British manufacturers and proposed a set of standards including the angle of thread and threads per inch. The proposals became standard practice in 1860 and a highly standardised and industrialised sector developed that we recognise today.

Computing
The history of modern computing infrastructure can be traced from its genesis with the Z3 computer (1943) to custom built examples such as LEO or Lyons Electronic Office (1949) to the first products such as IBM 650 (1953) to rental services such as Tymshare (1964) to commodity provision of computing infrastructure and more recently utility provision with Amazon EC2 (2006).

The vast differences in timescale are in fact related to the evolution of the means of communication but that is a different story to this post.

I first described the pattern of evolution above at a talk at EuroFoo in 2004. I then collected a set of examples in 2005. However, it wasn't until 2007 that I was able to collect around 4,096 data points to propose that evolution was something more than just a nice idea and could actually be described as a weak hypothesis.

The pattern of evolution is based upon the properties of the component in question and the type of publications that surround it.  Table 1 provides a description of this and the terms I use for different classes of components such as activities, practices and data. For completeness I've also added the old terms I used to use.

Table 1 - Properties of Evolution


Now, it should be noted that when mapping an environment the components not only evolve due to competition but can also co-evolve. Most common examples of this are practices with activities. For example, best practice in a product world is not the same as best practice in a utility world.

The use of this pattern of evolution has been essential for me not only in mapping and strategic gameplay but in determining the fundamental patterns of economic change, how organisations evolve, mechanisms for exploiting ecosystems, how to use open as a competitive weapon and the cycles of our economy. However, again this is not the purpose of the post.

So what is the point of this post? 

In table 1, under the class of data I have two labels - un-modelled for the uncharted and modelled for the industrialised. I have not been able to determine suitable labels for the transitional phases. You can see the properties above along with examples for the practice and activity classes.

Hence, I'm looking for suggestions for the two labels.

Tuesday, January 07, 2014

Getting the property market moving

A lunchtime noodle ...

There are over 220,000 long-term vacant properties in the UK along with apparently over 500,000 plots of land with planning permission that remain unfinished. They may have been started - the proverbial dig a hole, pour some concrete and get building control to sign it off as a major start - but they remain incomplete.

On the one hand we have a housing shortage, low levels of social house building, low levels of construction output, large amounts of land not being completed and still relatively high (though improving) unemployment in those industries whilst on the other hand we have concerns over house price inflation and high profits for building companies. Something isn't quite right here.

We need to fix this but we don't want to burden taxpayers given the time of austerity. Ideally, we want to turn it into a money spinner. So, here's a very rough suggestion.

1) Introduce legislation preventing pension schemes from investing in property, construction or land based funds. Whenever there is a problem, pension schemes often seem to be a likely dumping ground and so we want to protect them.

2) Introduce legislation to redefine any land with planning permission that is uncompleted in three years or any land which could have planning permission but not applied for in six years or any residential property which has less than six months main occupancy in a three year period as derelict. The period of time should not take into consideration whether the land has changed ownership through private sale or public auction.

3) Any land which is defined as derelict should be put up for compulsory government auction with no minimum price, the proceeds of which should go to the original owner minus a 40% administration fee of the selling price, payable to the Government for running the auction. Yes, the Government should run the auction and take a hefty cut and yes, the Government should also be able to bid. Any land which is unsold is valued at £0 and immediately purchased by the Government. Any land which is sold through the Government auction should have its planning permission reset to the date of sale and the clock should start ticking again.

4) Create a building fund from direct investment, unsold properties and from proceeds of the government auction for the purchasing of land for future social housing. 

5) Create a government owned national building company ideally through nationalisation of an existing building company after pressure has been applied to reduce its value i.e. encouraging short selling, any other dirty tricks you can think of etc. Allow the national building company free access to the building fund, invest in its growth through hiring and training and compete on the open market with an emphasis on social housing.

6) Allow and encourage local authorities to build up social housing through the national building company.

7) Make a declaration that the Government will consider privatising the national building company once four million new houses including at least one million social houses have been built or brought back into circulation in the marketplace.

8) Introduce a land value tax as an additional means of raising revenue.

Now, I'm certainly not saying you should run the scheme exactly as I've written it above, but what I want to point out is that the Government really needs to become a bit of a pirate here. The market is clearly not working as it should and we shouldn't be soft touches about dealing with the problem. In a time of austerity the Government needs to be a ruthless operator. I'm also not convinced by the compulsory purchase ideas that abound and I'm far more in favour of compulsory auctions with the Government taking a hefty cut in administration fees. But mostly I'm convinced by giving the building industry a kick up the backside, and simply threatening to do this is probably enough if it gets the results.

... whilst we're at it, can we please finally change the House of Lords to a House of Representatives selected at random from the electoral register to serve for a period of five to ten years.

Saturday, January 04, 2014

How to spot a benevolent dictator

There are many different types of ecosystems (consumer, provider, co-creation, two factor market etc) and the dynamics of any ecosystem are never static (unless it is dead). There are also numerous forms that a specific ecosystem can take, for example a free-for-all, a collective prisoner's dilemma and a co-operative collective.

In the open source world, the most successful ecosystems tend to share a common key factor in the role of the benevolent dictator. So, I thought I'd write down a few lines on how you spot the benevolent dictator.

Who are they? A benevolent dictatorship can be provided by any form of entity. It might be a single person, a small clique, a company or even an organisation set up to govern the project. However, these entities have common characteristics.

First, they lead. They set the vision, the purpose and the technical direction. This can be achieved by an individual or a committee or even by use of community dynamics. For example, the technical direction could be determined by setting up incubation projects with a decision that those successful community projects will become part of the core. Such incubation projects are a relatively effective means of identifying user needs. The key point to note is that the benevolent dictator determines the means by which this technical direction is set.

Second, they set boundaries.  This can be through technical policies or ways of operating and governance including structures, hierarchies, appointment of roles, elections and order of succession etc. The benevolent dictator exercises control over these.

Third, they are willing and have the ability to say 'no'. For example, if the technical direction is chosen by an individual acting as the benevolent dictator then that individual must be able to refuse other alternatives. Hence, even if community dynamics are used then a 'successful' incubation project can still be rejected / cancelled by the benevolent dictator because the project is considered harmful to the overall ecosystem e.g. the development approach might be considered incompatible. Absolute authority derives from the ability to say 'no' and to exclude all other choices.

Lastly, they demonstrate benevolence. This doesn't mean that exploitation of the ecosystem doesn't occur but instead that the long term overall health of the ecosystem is prioritised over shorter term commercial concerns and self interest. This is particularly difficult when a company acts as the benevolent dictator for a project which represents change to an existing business model of that company i.e. it has inertia to the change due to past business success. In such circumstances the company may tend to push the project in a direction that supports its existing business models rather than focusing on end user needs, and this is likely to endanger the long term health of the project.

If a project doesn't have an entity that shows those characteristics of leadership, setting boundaries, willingness plus ability to say 'no' and prioritisation of the long term overall health of the ecosystem over short term commercial concerns and self interest then it doesn't have a benevolent dictator.

It might have a dictator that isn't benevolent (which is not normally healthy long term as this tends to result in forking of the ecosystem) or it might have a benevolent committee which can't dictate (again not normally healthy long term as this too tends to create a collective prisoner's dilemma or a free-for-all).

In my experience, I've yet to see a successful, purposeful and healthy ecosystem which doesn't have at the heart of it a recognisable benevolent dictator.

---- additional notes

1. For reference, privately I prefer to describe the benevolent dictator as the gardener of the ecosystem because the acts of shaping, nurturing, maintaining and harvesting are analogous. I also find that groups don't like being described as the benevolent dictator despite their characteristics pointing to them having that role. However, since benevolent dictator is in common use then I stick to that term to avoid confusion.

2. Absolute authority is derived from the ability to exclude choices. For example, suppose you have the right to say 'yes' but not 'no'. Then the right is simply one of blessing certain approaches, nothing can be excluded and at best you aim to seek a consensus. In such cases, anyone can do anything just some actions are blessed.  However, let us take the other extreme and assume you have the right to say 'no' but not 'yes'. Under such conditions you can exclude all other choices / actions bar the one you determine is the correct approach. The ability to say 'no' and to exclude is the bedrock of absolute authority and is essential to any form of dictatorship. Without it, you are forced to rely on creating a general consensus with the knowledge that anyone can do anything regardless and you are powerless to prevent it.

3. Political systems are the interplay of government, legislature and economic systems. Most of the systems we describe as meritocratic or democratic are in essence republics where members elect representatives to rule, mandate and enforce. The ability to do so depends upon the ability of the representatives to say 'no' to the members. If the representatives cannot dictate then you are likely to form a free for all with just the hope of an emerging consensus but no means of enforcement. Election does not also guarantee that the representatives will be benevolent, you merely hope they will be which is why it's important that matters of governance should be transparent. If you're lucky then your representatives will act as a benevolent dictatorship for the period of their rule. If you're unlucky then they'll either fail to be benevolent or fail to prevent a free for all or some other disadvantageous ecosystem forming.

4. There is always the hope that a continual consensus will form that no-one will break, no enforcement is required and no-one will ever have to say 'no' to anything. I've yet to see this successfully work in practice at scale. I would love to find a successful example where this is the case.