Friday, February 11, 2011

Pioneers, Town Planners and those missing Settlers.

All business activities evolve; they share a fairly common lifecycle, described in the following diagram: from innovation, to custom-built examples, to productisation (including the appearance of rental services) and finally to commodity (including utility services).

Figure 1 - Lifecycle

As those activities evolve, their properties change from a chaotic to a linear extreme. In the chaotic stage, the activity :-
  • deviates from what has existed before and is a novel practice.
  • is dynamic and constantly changing.
  • is rare and poorly understood.
  • has high levels of uncertainty and it is not possible to predict future outcomes.
  • has no market data, competitor analysis or well understood trends.
  • has characteristics which emerge as we learn about it.
  • is strongly affected by serendipity, chance encounters and discovery.
  • is a potential source of future worth, differential and hence competitive advantage.
  • is a gamble.

By the linear stage, that same activity has evolved and:-
  • is mature and rarely changes.
  • is standardised with a wealth of best practice.
  • is commonplace and well understood.
  • has a high degree of certainty and known impacts.
  • has an abundance of market data, competitor analysis and trends are well known.
  • has well defined characteristics.
  • has well defined procedures and plans for implementation.
  • is a cost of doing business with little or no differential advantage except through operational efficiencies.
  • is a known quantity.

Now all businesses consist of a mass of activities, each of which may be at a different stage of its lifecycle (stage of evolution). You can map a single business by examining the components involved in a line of business and their stage of lifecycle. You can also examine broader effects by plotting the frequency of activities at different stages of lifecycle, thereby creating a profile for an organisation or an industry. This is shown in the figure below, to which the chaotic, linear and in-between transition stages have been added.

Figure 2 - Profile
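As a rough illustration of how such a profile might be built, here's a minimal sketch in Python (the activities and stage labels are invented for the example, not taken from any real map):

from collections import Counter

# Hypothetical map of one line of business: each activity tagged with its
# current stage of evolution. Activities and stages are illustrative.
activities = {
    "novel recommendation engine": "innovation",
    "custom billing system": "custom-built",
    "CRM": "product",
    "payroll": "commodity",
    "compute infrastructure": "commodity",
}

# The profile is simply the frequency of activities at each stage.
profile = Counter(activities.values())

for stage in ("innovation", "custom-built", "product", "commodity"):
    print(f"{stage:>14}: {'#' * profile.get(stage, 0)}")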



The techniques which you use to manage each of the phases of the profile (chaotic, transition, linear) are entirely different because the fundamental characteristics are different, which is why no one-size-fits-all approach to management exists. For example, agile development approaches are ideal for the innovation (chaotic) and early transition phases but are superseded by more structured approaches such as six sigma in the late transition and commodity (linear) stages. You can't apply one size fits all without either hampering innovation or impacting efficiency. You need multiple techniques, multiple types of people and even multiple cultures. Alas, we ignore this.

In many areas of management, this creates a constant yo-yo between one extreme approach and another, such as: agile vs six sigma, networked vs hierarchical, push vs pull. The answer is invariably that you need a balance of both. The trick is to learn when to use each.

Given all this, here are my questions :-

1. Since lifecycle is constant and the properties of activities change as they evolve through their lifecycle, why do we organise ourselves around types of activity (e.g. IT, Finance, Operations), especially as a "typed" approach leads to outsourcing of inappropriate activities, misapplied techniques & alignment issues between groups?

2. Why don't we organise ourselves instead by lifecycle, with specialist groups managing each stage of lifecycle regardless of the type, i.e. an organisation based upon Pioneers, Settlers and Town Planners?

3. Most companies have Research & Development groups (equivalent to Pioneers) and common or shared service groups (equivalent to Town Planners) but Settlers seem to be invisible. Why is this? Who manages the transition from innovation to commodity in your organisation?


-- Update 8th March 2015

A bit of digging in the old memory banks brings me to Robert X. Cringely's book, Accidental Empires, reissued in 1996, pages 235-238. Copying some quotes from that book (which I recommend people go buy and read), the ideas of pioneers, settlers and town planners are all there. I knew it had come from somewhere.

Think of the growth of a company as a military operation, which isn't a stretch, given that both enterprises involve strategy, tactics, supply line, communication, alliances and manpower.

Whether invading countries or markets, the first wave of troops to see battle are the commandos. Commandos parachute behind enemy lines or quietly crawl ashore at night. Speed is what commandos live for. They work hard, fast, and cheap, though often with a low level of professionalism, which is okay, too, because professionalism is expensive. Their job is to do lots of damage with surprise and teamwork, establishing a beachhead before the enemy is even aware they exist. They make creativity a destructive art.

[Referring to software business] But what they build, while it may look like a product and work like a product, usually isn't a product because it still has bugs and major failings that are beneath the notice of commando types. Or maybe it works fine but can't be produced profitably without extensive redesign. Commandos are useless for this type of work. They get bored.

It's easy to dismiss the commandos. After all, most of business and warfare is conventional. But without commandos you'd never get on the beach at all. Grouping offshore as the commandos do their work is the second wave of soldiers, the infantry. These are the people who hit the beach en masse and slog out the early victory, building on the start given by the commandos. The second wave troops take the prototype, test it, refine it, make it manufacturable, write the manuals, market it, and ideally produce a profit. Because there are so many more of these soldiers and their duties are so varied, they require an infrastructure of rules and procedures for getting things done - all the stuff that commandos hate. For just this reason, soldiers of the second wave, while they can work with the first wave, generally don't trust them, though the commandos don't even notice this fact, since by this time they are bored and already looking for the door. While the commandos make success possible, it's the infantry that makes success happen.

What happens then is that the commandos and the infantry advance into new territories, performing their same jobs again. There is still a need for a military presence in the territory. These third wave troops hate change. They aren't troops at all but police. They want to fuel growth not by planning more invasions and landing on more beaches but by adding people and building economies and empires of scale.

Robert X. Cringely, Accidental Empires, 1996 (the reissued edition; I don't own the original).

What Robert called Commandos, Infantry and Police is what I later called Pioneers, Settlers and Town Planners. The two are identical except Robert was there much, much earlier; in fact 1993, a good ten years before I implemented the tri-modal structure.

I owe Robert Cringely a debt of thanks and hence the update.

-- Update 4th May 2016

These days I use terms such as uncharted to describe the more chaotic and industrialised to describe the more linear.

Sunday, January 16, 2011

Top tips for start-ups

By popular request, here are Andrey Markov's top tips for startups, as derived from many leading VCs' posts on what makes a successful startup.

P.S. Before taking this seriously, please read the note at the end.

=== Start Text ===

Top Tips for 99% at least of small startups.

  1. When seeking investment make the founder Geraldine Brooks.
  2. You avoid firing people. But when you're not qualified, the end is the most important thing.
  3. Embrace change. If you are offering to investors hold you pick the MP3 player space, all because you better be afraid as investors are lots of weakness.
  4. Launch into something. Go ahead with this thing. You need to Google, and see the search terms accordingly.


  5. Choose a way everyone knows what's the bat. Have the agility to further understand your team. Know when you need in three years and good long staff meetings that there is now a market.
  6. Become an expert on revenue. It helps to deploy? Understand your product just as you reach them, and maybe also their business plan.
  7. Get a stunning logo or generate feedback; that means engaging a small business advisors. Let your audience in the company, acknowledge them, but avoid defensive responses. Convince the angels of a sign of myself, will you be “Can everyone win this business?"
  8. Good isn't working the performance. "There are the ship keeps moving forward. Otherwise, you'll soon be abroad when the good are offering to clear."
  9. This may make it a Fortune 500 company, and former colleague told me on it. In most budding entrepreneurs, you can. Make the business school code for startup culture: a friend of your team, and cannot stop less than "I have an airline".

In summary, you get experience and torn jeans as you integrate yourself - they needed helping, say, but the early days we didn't tell you so.

=== End Text ===

The real shame of this is that it actually makes more sense than some of the stuff I get to read. Special thanks to Doctor Nerve's Markov Text Generator and yes, this is the last one.

A full and workable definition for cloud?

Definition provided courtesy of Andrey Markov and the texts of many prominent cloud figures.

P.S. Before taking this seriously, please read the note at the end.

=== Start Text ===

Cloud data centers have lower spend per data centers. The authors lay out peaks and reassigned according to recognize that facilitate incremental improvement and four deployment models. Any CIO will use software to take full advantage and ultimately overrides initial concerns (e.g., mission, security holes, high compute task implementation).

When computing offer enough economic advantages will come to achieve higher level of use and smooth out peaks and elastically provisioned, in terms of IT being service oriented (as the paper "The Economics of service models") and reliability will undoubtedly be able to be increased significantly, thereby reducing operational costs caused by heterogeneous thin client interface or thick client platforms (e.g. networks) and can be recognized, and cost of adoption despite initial reservations, resulting in a private cloud with different physical characteristics and reduces cost of the same subject.

"It's the entire industry group and reduces cost advantage include operating systems, storage, applications, and applications".

The basis for that underlie cloud computing? There is likely to a third party and possibly application architecture design pattern learning present themselves as examples. Their point is nothing new (application capabilities, fragility, security spend per data center). The cloud bursting for a significant input to upgrade. We put up as needed automatically without doubt.

Beyond the consumer is in part, because of power management is composed of the TCO (per server, based on the cloud with fragility, security and compliance considerations). It may be unlimited and reliability will adopt technologies despite technical concerns is to vastly increase the rapid innovation we've grown accustomed to. To this is remarkable.

In concluding the authors then ventures into what types of disruptions, as a much easier to use and supports a cloud computing initiative, or service models (and services) that of scale available over time. Their point is available to yourself to be its article; nevertheless, the consumer does cloud computing in the TCO factor compared to the original UC Berkeley Cloud Platform as noted, this question, the economic advantage, organizations will gradually fade, thus resulting in terms of service (e.g. host firewalls).

Deployment Models: Private cloud. The basis for the deployed applications, and accessed through the possible exception of the provided resources include operating systems or acquired applications. Programming languages are reported providing transparency for use of 1000 hours, more compute-intensive tasks will result from (titled "Data center is identified, negotiate extremely low rates than that"). I feel this white paper will be summed up as well. Instead the provider’s applications running on to this white paper is nothing but application hosting environment configurations. Cloud Software as a private variant runs a single document and helps shape the same subject:

"It's the implications of configurable computing economics".

Essential Characteristics: On-demand self-service. A copy of business units, or live with cloud infrastructure is operated solely for 1000 servers for a thin or getting ready to be very large volumes of the ratio of public cloud computing. I would be recognized, and may be able to upgrade. We put up with minimal management is composed of scale advantage.

Supply-side savings: Cloud computing capabilities, such as server data center buildouts, the scale of these cost = increased use. Elasticity of the cloud infrastructure is made available over operating systems, storage, processing, memory, network bandwidth, and lay out peaks and consumer of the underlying economics that economic advantage, organizations and concerns is going to deploy onto the consumer with cloud infrastructure.

In end, the cloud computing does not manage or getting ready to which cloud infrastructure is.

=== End Text ===

The real shame of this is that it actually makes more sense than some of the stuff I get to read. Thanks to @b6n for the suggestion of running a set of popular cloud posts through a Markov Text Generator.
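For the curious, a word-level Markov text generator of the sort used here takes only a few lines. A minimal sketch in Python (the two-word order and the corpus file are my assumptions; Doctor Nerve's generator may differ in detail):

import random
from collections import defaultdict

def build_chain(text, order=2):
    # Map each run of `order` consecutive words to the words observed after it.
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, order=2, length=50):
    # Start from a random state and repeatedly sample a plausible next word.
    state = random.choice(list(chain))
    out = list(state)
    for _ in range(length):
        followers = chain.get(tuple(out[-order:]))
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

# corpus = open("cloud_posts.txt").read()   # e.g. a set of popular cloud posts
# print(generate(build_chain(corpus)))

Because the output preserves only local word-to-word statistics, it reads plausibly in small windows while being globally meaningless - which is rather the point of the joke.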

Wednesday, January 05, 2011

My top 10 influential thinkers in cloud ...

I've never been a great fan of top ten lists; however, since it appears fashionable I thought I'd have a go. So, here are my top 10 most influential thinkers on cloud, in order of priority:

  1. Douglas Parkhill: For predicting the entire field and writing the exceptional book "The Challenge of the Computer Utility" (1966)
  2. Joseph Schumpeter: For providing a basic economic framework which explains why cloud computing should enable further innovation through creative destruction (1942)
  3. Herbert Simon: For providing a basic economic framework which explains why cloud computing should accelerate further innovation through componentisation (1973)
  4. William Stanley Jevons: For outlining why cloud computing won't reduce overall IT expenditure (1865)
  5. Leigh Van Valen: For providing in "a new evolutionary law" a framework to explain why you wouldn't have choice over cloud computing (1973)
  6. Tim O'Reilly: For signposting the cloud computing future with the concept of infoware and highlighting the role of the internet and open source in this (1999)
  7. Everett Rogers: For postulating that diffusion and maturation of a technological innovation results in increased information about the technology and therefore reduces uncertainty about the change. A key cornerstone of the idea of commoditisation (1981)
  8. Paul Strassmann: For demonstrating that there was no correlation between IT spending and business value, hence showing that not all IT is the same and that some was little more than a cost of doing business i.e. it had become more of a commodity (1990s)
  9. Nick Carr: For showing that ubiquity was the key to diminishing strategic value in business and providing the crucial link to explain commoditisation (2003)
  10. John McCarthy: For being the first person to publicly state the idea of utility computing (1961)

Mystic Me 4.0

After hitting a 50% prediction rate and fortuitously realising the importance of that, I thought I'd enter the spirit of things again by upping the stakes even further in terms of both specificity and non-obviousness.

As per normal, I'll keep with my insistence on all elements being correct for the prediction to be successful; a dead parrot is a dead parrot in my book. These predictions are what I believe is going to happen, but I've pushed them to a level where I can't be certain. The target is to be 50% right (i.e. above that, they're either not specific or not non-obvious enough, and vice versa).

So, with no more pining for the Fjords, the Mystic Me 2011 set are :-

  1. Cloud: 2011 will be an interesting year as conventional wisdom within the popular press shifts towards seeing open source architectures dominating the cloud computing space. Cost efficiency arguments around cloud computing will increasingly be replaced with customer innovation stories and the adoption rates of cloud computing will outstrip many early analyst predictions. In particular, pundits will cite AWS as exceeding $1 billion in revenue. With the growth of cloud computing, Enterprise IT will increasingly focus on new value creation, architecture and vendor management techniques and hence there will be increasing mention of terms like supply chain management and new business models based upon outcome. There will be a marked sea change as Platform as a Service (PaaS) overtakes Infrastructure as a Service (IaaS) as the main buzz of cloud computing. There will also be no let up in the pace of mergers and acquisitions in this industry. Governments will also increasingly become engaged in discussing regulation of the cloud and somewhere, some official will be talking up the idea of licensed cloud operators.
  2. Environment: Total Arctic Ice volume will decline to the lowest level on record with the melting season considered to have extended by several weeks. The UK will suffer another cold winter.
  3. Economy: Inflation, as measured by RPI, will continue to rise however despite this and because of instabilities in the recovery the MPC will hold interest rates low and implement a final last gasp effort for more quantitative easing - principally because they're all barking mad. In the UK, London will experience a property bubble for high value residential property whilst the overall housing market, according to the Halifax House Price Index, will suffer a fall in prices. The double dip will finally arrive in 2011, driven by overexposure of banks to instruments based on sovereign debt during a time when there are increasing market attacks on sovereign debt and a drop in consumer confidence. The FTSE 100 will drop below 3,000 during the year.
  4. Society: As a consequence of austerity measures, general strife and frustration, we will see increasing civil disobedience in many countries. In the UK, I expect to see further protests and increasing strike action. Despite the necessity to reduce debt, the coalition (in particular the Liberal Party) will continue to wane in popularity polls but despite many pundits predicting an election, the coalition will muddle through.
  5. Technology Business: There are five particular events I expect to see in 2011. First, VMWare will increasingly act as two operational divisions - one focused on infrastructure, the other on platform. Some public pundits will start to question whether one of the units will be sold. Secondly, I expect that whilst CPTN holdings will turn out to be a patent troll, its target is not Android or FOSS specifically but Cloud in general. Thirdly, we should see the first examples of companies publicly trading on variability in cloud infrastructure prices through the provision of true brokerage services. Fourthly, the volume of tablet sales will sky rocket with new competitors flooding into the market and the only thing more surprising than the growth of tablets will be the rapid decline of traditional laptops. Lastly, the concept of social searching will become increasingly important with a continuation of the plethora of start-ups providing new ways of ranking, mining and determining social reputation; however those pundits discounting the future of Google will get a rude awakening.
  6. Media Technology: In the UK, there will be further high-profile efforts to carve up the Internet. Not only will we see two-tiered internet services but also the use of government regulation to introduce censorship based services designed to "protect the most vulnerable". Whilst paywalls will continue to be the rage, the largest effect will come through the proliferation of devices with on-chip DRM. Many pundits will raise the question whether these devices and the introduction of two-tier environments mean the Internet can be effectively controlled for the average consumer. Online video will continue to grow exponentially, with YouTube becoming increasingly seen as the future distribution channel of media - i.e. if it ain't on YouTube, did anyone see it?
  7. Manufacturing Business: Printed electronics will have a robust year in the popular press, with pundits talking up the potential for this technology especially when combined with 3D printing. Whilst this is nothing new, there will be a significant increase on the hype around the subject with a couple of high profile articles.
  8. Words to watch for: Consumerization, Shadow IT and Ecosystem are the watch words for 2011 and this is the year in which they'll reach fever pitch. Cloud computing will still cause confusion for many and unfortunately there will be a continuation of marketing efforts to distinguish between enterprise and public cloud.
  9. Social Mobility: Despite increases in tax and a crackdown on tax avoidance causing the usual round of noise about the "wealthy will leave the country" there will be no mass exodus of wealth from the UK.
  10. MISOG's: Despite the occasion, there will be a considerable amount of grumbling over the Royal Wedding and how much coverage it's getting. There will be numerous articles on whether the Royals are still relevant, endless facts on the cost to the economy and assertions that public money would have been better spent elsewhere. Whilst the direct cost of the wedding (ignoring overall economic impacts) could be paid for several times over out of the tax efficiency measures that Philip Green (our cost Czar) has employed or the huge profits institutions have made on quantitative easing, unfortunately no private company will step up to the plate and offer to pay the bill. Someone, somewhere will write an article about how the cost of giving everyone an extra day's holiday could stall the UK recovery.

Saturday, January 01, 2011

Review of Mystic Me 3.0

I've not posted for a short while, so I thought I'd start with a round-up on how I did with last year's predictions before providing some more cowardly custard predictions for 2011.

I thought that 2010 was a bad year for me in the prediction stakes because I hit a 50% rate. However, on reflection this doesn't seem to be the case. The issue with prediction is uncertainty and value, i.e. the level of detail and non-obviousness within the prediction. It is trivially easy to define a broad set of predictions on well established trends that mean you're almost always 100% right, but then they're not much use.

Taking the case of specificity, the odds of a single number coming up in any of the 52 weeks of the lottery are very high, whereas the odds of predicting the entire set of lottery numbers on a specific week are very low. Equally, predicting continuation of an established and understood trend (e.g. "I predict the sun will come up tomorrow") is of little value. Not all predictions are equal.
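To make that concrete, a rough calculation assuming a 6-from-49 lottery (the post doesn't specify one):

from math import comb

# Odds of a specific single number appearing in one weekly 6-from-49 draw ...
p_week = 6 / 49

# ... and of it appearing in at least one of 52 weekly draws: very high.
print(1 - (1 - p_week) ** 52)   # ~0.9989

# Odds of predicting the entire set of six numbers on a specific week: tiny.
print(comb(49, 6))              # 1 in 13,983,816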

So how do you strike the balance? Each year, I've upped the stakes on both the specificity and non-obviousness of the predictions hence increasing the probability of failure. However in doing so I've failed to understand that the target should not be to achieve 100% success but instead 50% as an optimal goal i.e. a level of specificity and non-obviousness that means you're entirely right half the time. The future is uncertain and predictions should reflect this otherwise we're not pushing the prediction hard enough.

With that said, here's how I did :-

  1. The number of mergers & acquisitions in the cloud computing and open source industries will reach fever pitch, surpassing previous years: Well, that happened. It was a fairly insane ride in the world of M&A from Novell, 3Tera, CloudKick, Makara to Heroku and a host of others. Score:1
  2. The first examples of people trading on variability in cloud infrastructure prices and the early formation of brokerage concepts will appear: Whilst examples of brokerage concepts appeared, companies started to use reserved, spot and on-demand instances to reduce costs and we even had announcements regarding spot markets, there were no clear examples of people publicly trading on variability. Score: 0
  3. There will be no let-up in end user confusion surrounding cloud computing: Whether it's surveys of Canadian Executives or Technology Resellers, confusion over cloud continues unabated. Score: 1
  4. Despite many predicting the death of the book, paperbacks will have a surprisingly good year: Despite the doomsayers and the rise of kindle, published figures show that paperbacks have held up remarkably well with only modest declines in volume. Score: 1
  5. RPI in the UK will rise sharply and the FTSE 100 will drop below 3,000 during the year. Judging on past performance, the MPC will keep interest rates low because they're barking mad: Whilst RPI rose sharply and the MPC held interest rates level, the double dip has been somewhat delayed by financial engineering. So, whilst I still hold to this prediction, any major drop has been delayed until 2011. Score: 0.
  6. Under howls of protest, banks will be given more taxpayers' cash. This will be despite being bailed out, given free cash through quantitative easing and then splashing lots of dosh on bonuses: Well, we have the bonuses, healthy rounds of free cash aka quantitative easing courtesy of the USA, but with no double dip and the bail-out of the credit default swap bonanza that was building around sovereign debt, any effect has been delayed until next year. Score: 0
  7. House prices will continue to drop in the UK: Using the same metrics as last year, house prices in the UK (except in London) have dropped. Certainly, the lack of the double dip has kept prices higher. Score: 1
  8. Summer ice disappearance exceeding the worst predictions of current climate models: Arctic summer sea ice volume was 4,000 km^3, by far the worst figure on record. Score: 1
  9. There will be legal attempts to claim and quantify ownership of social networks as company IP: Actually, this had already happened before the prediction was made, so that has to be a score: 0 for not checking properly.
  10. The new Doctor Who will be pants and the attempts to spice it up and make it more gritty will look rather sad: Whilst the attempts to spice up and make Dr Who a bit more gritty are public record, I have to say that Matt Smith is excellent in the role. Score: 0

Overall, 5/10. I had originally thought this was a "could do better" score; instead I realise that this is exactly what I should be aiming for.

Friday, November 19, 2010

All in a word.

In my previous post, I provided a more fully fledged version of the lifecycle curve that I use to discuss how activities change. I've spoken about this for many years but I thought I'd spend a little time focusing on a few nuances.

Today, I'll talk about the *aaS misconception - a pet hate of mine. The figure below shows the evolution of infrastructure through different stages. [The stages are outlined in the previous post]

Figure 1 - Lifecycle


I'll note that service bureaus started back in the 1960s and we have a rich history of hosting companies which date well before the current "cloud" phenomenon. This causes a great deal of confusion over who is and who isn't providing cloud.

The problem is the use of the *aaS terms such as Infrastructure as a Service. Infrastructure clouds aren't just about Infrastructure as a Service, they're about Infrastructure as a Utility Service.

Much of the confusion has been caused by the great renaming of utility computing to cloud, which is why I'm fairly consistent on the need to return to Parkhill's view of the world (Challenge of the Computer Utility, 1966).

Cloud exists because infrastructure has become ubiquitous and well defined enough to support the volume operations needed for provision of a commodity through utility services. The commodity part of the equation is vital to understanding what is happening and it provides the distinction between a VDC (virtual data centre) and cloud environments.

If you're building an infrastructure cloud (whether public or private) then I'll assume you've got multi-tenancy, APIs for creating instances, utility billing and you are probably using some form of virtualisation. Now, if this is the case then you're part of the way there, so go check out your data centre.

IF :-
  • your data centre is full of racks or containers each with volumes of highly commoditised servers
  • you've stripped out almost all physical redundancy because frankly it's too expensive and only exists because of legacy architectural principles due to the high MTTR for replacement of equipment
  • you're working on the principle of volume operations and provision of standardised "good enough" components with defined sizes of virtual servers
  • the environment is heavily automated
  • you're working hard to drive even greater standardisation and cost efficiencies
  • you don't know where applications are running in your data centre and you don't care.
  • you don't care if a single server dies

... then you're treating infrastructure like a commodity and you're running a cloud.

The economies of scale you can make with your cloud will vary according to size; this is something you've come to accept. But when dealing with scale you should be looking at :-
  • operating not on the basis of servers but of racks or containers i.e. when enough of a rack is dead you pull it out and replace it with a new one
  • your TCO (incl hardware/software/people/power/building ...) for providing a standard virtual server is probably somewhere between $200-$400 per annum and you're trying to make it less (a rough worked example follows).
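As a purely illustrative back-of-envelope check on that figure (every number below is an assumption, not data from the post):

# Back-of-envelope TCO for a standard virtual server, per annum.
server_capex = 3000          # commodity server, $ (assumed)
amortisation_years = 3
vms_per_server = 10          # standardised "good enough" virtual servers
overhead_multiplier = 2.0    # people, power, building, software, network ...

hardware_per_year = server_capex / amortisation_years          # $1,000
tco_per_vm = hardware_per_year * overhead_multiplier / vms_per_server
print(f"${tco_per_vm:.0f} per virtual server per annum")       # $200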
Obviously, you might make compromises to overcome short-term educational barriers (i.e. to encourage adoption). Examples include: the ability to know where an application is running, the ability to move an application from one server to another, or even a highly resilient section to cope with the many legacy systems that have developed with old architectural principles such as scale-up and N+1. Whilst these are valuable short term measures and there will be many niche markets carved out based upon such capabilities, they incur costs and ultimately aren't needed.

Cost and variability are what you want to drive out of the system ... that's the whole point about a utility. Anyway, rant over until next week.

Sunday, November 14, 2010

IT Extremists

The problem with any transition is that inevitably you end up with extremists, cloud computing and IT are no exception. I thought I'd say a few words on the subject.

I'll start with highlighting some points regarding the curve which I use to describe the underlying transition (evolution) behind cloud. I'm not going to simplify the graph quite as much as I normally do but then I'll assume it's not the first time readers have seen this.

Figure 1 - Lifecycle



The points I'll highlight are :-
  1. IT isn't one thing, it's a mass of activities (the blue crosses)
  2. All activities are undergoing evolution (commonly known as commoditisation) from innovation to commodity.
  3. As activities shift towards more of a commodity, the value is in the service and not the bits. Hence the use of open source has natural advantages, particularly in the provision of a marketplace of service providers.
  4. Commoditisation of an activity not only enables innovation of new activities (creative destruction), it can accelerate the rate of innovation (componentisation) of higher order systems and even accelerate the process of evolution of all activities (increase communication, participation etc).
  5. Commoditisation of an activity can result in increased consumption of that activity through price elasticity, long tail of unmet demand, increased agility and co-evolution of new industries. These are the principal causes of Jevons' paradox (see the toy example after this list).
  6. As an activity evolves between different stages, risks occur including disruption (including previous relationships, political capital & investment), transition (including confusion, governance & trust) and outsourcing risks (including suitability, loss of strategic control and lack of pricing competition).
  7. Benefits of the evolution of an activity are standard and include increased efficiencies (including economies of scale, balancing of heterogeneous demand etc), ability of user to focus on core activities, increased rates of agility and tighter linking between expenditure and consumption.
  8. Within a competitive ecosystem, adoption of a more evolved model creates pressure for others to adopt (Red Queen Hypothesis).
  9. The process of evolution is itself driven by end user and supplier competition.
  10. The general properties of an activity change as it evolves from innovation (i.e. dynamic, deviates, uncertain, source of potential advantage, differential) to more of a commodity (i.e. repeated, standard, defined, operational efficiency, cost of doing business).
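A toy illustration of point 5, with invented numbers: if commoditisation cuts the unit price 10x and demand is sufficiently elastic, total consumption and even total spend rise.

# Toy numbers: a 10x price drop met by a 20x rise in consumption.
old_price, new_price = 1.00, 0.10        # $ per compute unit
old_demand, new_demand = 1_000, 20_000   # units consumed

print(old_price * old_demand)   # $1,000 total spend before
print(new_price * new_demand)   # $2,000 total spend after

Cheaper units, a bigger overall bill: that's the paradox.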

The above is a summary of some of the effects, however I'll use this to demonstrate the extremist views that appear in our IT field.

Private vs Public Cloud: in all other industries which have undergone this transition, a hybrid form (i.e. public + private) appeared and then the balance between the two extremes shifted towards more public provision as marketplaces developed. Whilst private provision didn't achieve (in general) the efficiencies of public provision, it can be used to mitigate transitional and outsourcing risks. Cloud computing is no exception; hybrid forms will appear purely for the reasons of balancing benefits vs risks and over time the balance between private and public will shift towards public provision as marketplaces form. Beware ideologists saying cloud will develop as just one or the other; history is not on their side.

Commoditisation vs Innovation: the beauty of commoditisation is that it enables and accelerates the rate of innovation of higher order systems. The development of commodity provision of electricity resulted in an explosion of innovation in things which consumed electricity. This process is behind our amazing technological progress over the last two hundred years. Beware those who say commoditisation will stifle innovation, history says the reverse.

IT is becoming a commodity vs IT isn't becoming a commodity: IT isn't one thing, it's a mass of activities. Some of those activities are becoming a commodity and new activities (i.e. innovations) are appearing all the time. Beware those describing the future of IT as though it's one thing.

Open Source vs Proprietary: each technique has a domain in which it has certain advantages. Open source has a peculiarly powerful advantage in accelerating the evolution of an activity towards being a commodity. The two approaches are not mutually exclusive, i.e. both can be used. However, as activities become provided through utility services, the economics of the product world don't apply, i.e. most of the wealthy service companies in the future will be primarily using open source and happily buying up open source and proprietary groups. This is diametrically opposed to the current product world, where proprietary product groups buy up open source companies. Beware the open source vs proprietary viewpoint and the application of old product ideas to the future.

I could go on all night and pick on a mass of subjects including Agile vs Six Sigma, Networked vs Hierarchical, Push vs Pull, Dynamic vs Linear ... but I won't. I'll just say that in general where there exist two opposite extremes, the answer normally involves a bit of both.

Tuesday, September 21, 2010

A run on your cloud?

When I use a bank, I'm fully aware that the statement I receive is just a set of digits outlining an agreement of how much money I have or owe. In the case of savings, this doesn't mean the bank has my money in a vault somewhere as in all likelihood it's been lent out or used elsewhere. The system works because a certain amount of reserve is kept in order to cover financial transactions and an assumption is made that most of my money will stay put.

Of course, as soon as large numbers of people try to get their money out, it causes a run on the bank and we discover just how little the reserves are. Fortunately, in the UK we have an FSA scheme to guarantee a minimum amount that will be returned.

So, what's this got to do with cloud? Well, cloud (as with banking) works on a utility model, though in the case of banking we get paid on both the amount we consume and provide (i.e. interest) and in the cloud world we normally only have the option to consume.

In the case of infrastructure service providers, there are no standard units (i.e. there is no common cloud currency) but instead each provider offers its own range of units. Hence if I rent a thousand compute resource units, those units are defined by that provider as offering a certain amount of storage and CPU for a given level of quality at a specified rate (often an hourly fee).

As with any utility there is no guarantee that when I want more, the provider is willing to offer this or has the capacity to do so. This is why the claims of infinite availability are no more than an illusion.

However, hidden in the depths of this is a problem with transparency which could cause a run on your cloud in much the same way that Credit Default Swaps hit many financial institutions as debt exceeded our capacity to service it.

When I rent a compute resource unit from a provider, I'm working on the assumption that what I'm getting is that compute resource unit and not some part of it. For example, if I'm renting on an hourly basis a 1GHz core with 100GB storage and 2GB memory - I'm expecting exactly that.

However, I might not use the whole of this compute resource. This offers the service provider, if they were so inclined, an opportunity to sell the excess to another user. In this way, a service provider running on a utility basis could be actively selling 200 of their self-defined compute units to customers whilst only having the capacity to provide for 100 of those units when fully used. This is quaintly given terms like improving utilisation or overbooking or oversubscription but fundamentally it's all about maximising service provider margin.

The problem occurs when everyone tries to use their compute resources fully with an overbooked provider, just like everyone trying to get their money out of a bank. The provider is unable to meet its obligations and partially collapses. The likely effect will be compute units performing vastly below their specification, or some units which have been sold being thrown off the service to make up for the shortfall (i.e. customers are bumped).
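A minimal simulation of that failure mode (the 2x overbooking ratio and the demand model are assumptions for illustration): at low average utilisation the overbooked provider almost never collapses, but as customers' average usage climbs, collapse becomes near certain.

import random

CAPACITY = 100   # units the provider can actually deliver when fully used
SOLD = 200       # units sold, i.e. 2x overbooked

def collapses(mean_use):
    # Each customer independently uses some fraction of their unit (capped at 100%).
    demand = sum(min(1.0, random.expovariate(1 / mean_use)) for _ in range(SOLD))
    return demand > CAPACITY   # True = provider cannot meet its obligations

for mean_use in (0.3, 0.5, 0.7):
    runs = 10_000
    rate = sum(collapses(mean_use) for _ in range(runs)) / runs
    print(f"mean utilisation {mean_use:.0%}: collapse in {rate:.1%} of runs")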

It's worth remembering that a key part of cloud computing is a componentisation effect which is likely to lead to massively increased usage of computer infrastructure in ever more ephemeral infrastructures, and as a result our dependency on this commodity provision will increase. It's also worth remembering that black swan events, like bank runs, do occur.

If one overbooked provider collapses, then this is likely to create increased strain on other providers as users seek alternative sources of computer resource. Due to such an event and unexpected demand, this might lead to a temporary condition where some providers are not able to hand out additional capacity (i.e. new compute units) - the banking equivalent of closing the doors or localised brown-outs in the electricity industry.

However, people being people will tend to maximise the use of what they already have. Hence, if I'm renting 100 units with one provider who is collapsing, 100 units with another who isn't and a situation where many providers are closing their doors temporarily, then I'll tend to double up the workload where possible on my fully working 100 units (i.e. where I believe I have spare capacity).

Unfortunately, I won't be the only one doing this and if that provider has overbooked then it'll collapse to some degree. The net effect is a potential cascade failure.

Now, this failure would not be the result of poor utility planning but instead the overbooking and hence overselling of capacity which does not exist, in much the same way that debt was sold beyond our capacity to service it. The providers have no way of predicting black swan events, nor can they estimate the uncertainty with user consumption (users, however, are more capable of predicting their own likely demands).

There are several solutions to this, however all require clear transparency on the level of overbooking. In the case of Amazon, Werner has made a clear statement that they don't overbook and sell your unused capacity i.e. you get exactly what you paid for.

Rackspace also states that they offer guaranteed and reserved levels of CPU, RAM and Storage with no over subscription (i.e. overbooking).

In the case of VMWare's vCloud Director, according to James Watters they provide a mechanism for buying a hard reservation from a provider (i.e. a defined unit), with any over commitment being done by the user and under their control.

When it comes to choosing an infrastructure cloud provider, I can only recommend that you first start by asking them what units of compute resource they sell. Then afterwards, ask them whether you actually get that unit or merely the capacity for it, depending upon what others are doing. In short, does a compute unit of a 1GHz core with 100GB storage and 2GB memory actually mean that, or could it mean a lot less?

It's worth knowing exactly what you're getting for your buck.

Saturday, August 28, 2010

The bets ...

Every now and then I make bets about the future, a gamble on the actions of individual players (which is, as Hayek would explain, a really bad idea).

I thought I'd write down some of the outstanding bets I have, leaving out the counterparty (unless they're ok with it) & how much it is for (which is always a cup of tea). Naturally these are reckless (I like to gamble) and whilst I don't expect to win, I do expect to get the general direction of travel roughly right. The day I win is the day I haven't pushed it far enough and I'll be disappointed.

[2006] : The Open Source Bet.
By the end of 2015, it will be accepted wisdom that open source provides the only viable mechanism of creating competitive markets in the utility computing world. It will dominate this space and be considered the norm.
[LOST. There's a lot of talk about the importance of open source especially in the cloud market and some good examples such as Cloud Foundry, however the case for creating a competitive market is unproven and open source doesn't dominate the space. Heading in the right direction but NO cigar.]

[2006] : The JS Bet.
By the end of 2015, JavaScript will be seen as a rapidly growing enterprise language for developing entire applications, both client and server side.
[LOST. Despite all the chortling, Javascript has gone on to become a massive language with client and server side components built with this using numerous frameworks. However, I don't think we can say unequivocally that it has become a rapidly growing enterprise language. Heading in the right direction but NO cigar.]

[2007] : The Broker Bet.
By the end of 2013, we will see adverts for computer resource brokers.
[LOST. Alas it looks like it will be end of 2014 / 2015. Heading in the right direction but NO cigar.]

[2008] : The Three Step Bet.
By the end of 2013, Oracle would have bought Sun in order to get into the cloud space, RedHat would have bought Novell and Microsoft will have bought Canonical.
[LOST. Oracle did buy Sun but then RedHat wavered on Novell, without which the triggers for MSFT / Canonical were less likely to happen in the timeframe. NO cigar!]

[2009] : The Escape Button Bet.
By the end of 2014, VMWare will have divided into two companies, one focused on existing virtualisation technology (IaaS Group) and the other focused on PaaS technology. The IaaS group will have been sold off to another company.
[LOST. VMware did build an open source platform play called CloudFoundry and it was this that EMC spun off into a company called Pivotal. However, EMC did not jettison the virtualisation business despite rumours. Heading in the right direction but NO cigar!]

[2010] : The Rackspace Bet.
By Aug 2013, Rackspace will be providing EC2/S3 APIs alongside its own.
[LOST to +James Watters. James argued the team at Rackspace were culturally against such adoption. He was right, I was wrong. Even if by 2016 Rackspace is all over Amazon, NO cigar!]

Added Friday 25th March 2011
Rather than creating new lists, I thought I'd extend this list.

[2011] : The Apple Bet (@GeorgeReese & @mthiele10)
By the end of 2017, Apple will have or will be in the process of filing for protection under chapter 11 of the U.S. bankruptcy code.
[LOST. This was based upon the assumption that Steve Jobs was still in charge and continued a focus on innovative leadership. However, Cook took over and has done a marvellous job in rebalancing the ship. Oh, do I think that AAPL is out of trouble ... not yet ... but Cook has added many years to that company. We're not there yet but this bet is lost. NO cigar!]

Added Friday 30th July 2012
[2012] : The Public Public bet (@lmacvittie)
By the end of 2016 the common view (as in held more often than alternatives) will be that the future is hybrid as in public / public and that private cloud will be seen as niche.
[LOST. Depends upon which circles you walk in, but there's enough marketing / vested interest and inertia that there's still a cacophony of hybrid as in public + private. We're close to the tipping point of people waking up but not quite there yet. So, NO cigar!]

[2012] : The "Who's the Daddy" bet
The first $1 trillion market share company will be Amazon.
[Hmmm, sadly it looks like I might actually win this one. Obviously didn't push it far enough.]

[2012] The Inertia bet (@jeffsussna)
By mid 2014, a competitive market based around an open source technology which contains at least four IaaS providers (each with >100K servers) that have initiated a price war with AMZN will exist or by 2020 IBM, HP and Dell's server infrastructure business will be in severe decline verging on collapse (unless they have previously exited those businesses).
[Well, the competitive market didn't appear - so we will have to wait until 2020 to see the result]

Added Friday 6th August 2012
[2012] The Modified VMWare Bet (@botchagalupe)
By the end of Feb 2014, VMWare will have :-
1) Acquired Basho or an equivalent company providing distributed storage capabilities to Riak CS
2) Launched an open source IaaS offering.
[LOST to +John Willis. Well VMWare did acquire Virsto and has adopted a closer relationship to OpenStack providing support for it. Heading in the right direction but NO cigar!]

Added Monday 15th October 2012
[2012] The Amazon Two Factor Market Bet (@jeffsussna)
If GCE (Google Compute Engine) launches at scale with success then by end of 2015 Amazon's AWS consoles will have extended to enable you to control your GCE environment.
[LOST to @JeffSussna. AWS didn't launch a console for this but then it is also arguable that Google hasn't launched at either the scale or success that might warrant it. Still, NO cigar. Jeff wins.]

Added 6th May 2013
[2013] AWS Revenue Bet (@geoffarnold)
By end of 2017, AWS revenue will exceed 50% of 2011 worldwide server revenue [approx. $26Bn]

Added Friday 19th July 2013
[2013] The Dead Duck Bet (@cloudbzz, @EdLeafe, @kylemacdonald)
By end of July 2016, either OpenStack will be flourishing through provision of a massive market of AWS clones or the project and its differentiate-from-Amazon strategy will be widely seen as a dead duck.

For clarification:-
1. AWS clone does not mean that all AWS services are provided but instead the market is focused solely on providing as many AWS compatible services as possible. The differentiation strategy will have been abandoned or the project will fail.

2. The counter bet is that OpenStack won't be a dead duck nor will there be a massive market of AWS clones built on OpenStack. In this circumstance, the differentiation strategy of OpenStack will be seen as successful.

[LOST to @cloudbzz, @EdLeafe, @kylemacdonald. Well the flourishing market of AWS clones didn't get off the ground and OpenStack pretty much gave up the public space and headed to the private market to compete with VMware, which also adopted OpenStack. Amazon naturally is setting the market ablaze and whilst "OpenStack continues to struggle in its attempts to broaden appeal beyond core customers among service providers and telcos" it has enough supporters along with niches in Telco and projects such as the EC Horizon effort aka Anything but Amazon - that we can't describe it as being widely seen as a dead duck. Certainly in some quarters. We'll just have to wait a few more years. Still, NO cigar!]

Added Monday 9th September 2013
[2013] The Highly Aggressive Smart Phone and Tablet Collapse Bet
(@reverendted, @pwaring )
By the end of 2016, numerous press articles will have proclaimed the future death of smart phones and tablets. By the end of 2018 both categories will be widely viewed as 'dead markets' and expected to become rapidly niche by end of 2021. NB. The original dates were 2018, 2020 and 2025 respectively but in this case I took a hyper aggressive stance rather than my normal reckless.

[2013] The It's Alive! Bet (@crankypotato)
By the end of 2023, research labs will have an entirely biological clock including display with no mechanical or electronic components.

Added Saturday 13th September 2014
[2014]  The "I'm not allowed to drive here anymore" Bet (@h_ingo and @lhochstein)
By end of 2030, at least one city with a population in excess of 1 million will have outlawed people driving cars.

[2014]  The "Silicon Valley isn't what it used to be!" Bet (@quentynblog)
By end of 2030, Silicon Valley will not be commonly ranked in the top 5 most innovative places in the world.

[2014] The Amazon doesn't lose Bet (@littleidea)
By end of 2025, neither MSFT nor Google will have wiped out AMZN, which will remain a dominant player in cloud infrastructure. 'Wiped out' will mean less than 15% of market share by use. 'Dominant' will mean in the top 2 of public infrastructure providers by use.

For clarification
* 15% then I buy the tea
* Still in the top 2, then @littleidea buys the tea
* 15% but out of top 2 then we just hang out drinking tea

[2014] The Disruption Bet (@crankypotato)
By end of 2030, the period 2025 to 2030 will be commonly considered (by the general press, population and business press) as far more 'disruptive' than the period 2010 to 2020.

[2014] The Big Data's gone horribly wrong Bet [@thinkinnovation]
By end of 2023, Big Data will have experienced its first casualties with more than Five Big Data (product) vendors having gone bust or sold in a fire sale. For clarification, the vendor must be a well known name to experts in the field in 2014.

[2014] The IoT didn't save us Bet [@ZakKissel]
By end of 2030, many of today's large Tech company leaders in IoT will be reported (in the common tech press) as facing severe disruption by new entrants. For clarification. Many will be four or more company names associated as tech leaders in the field in 2014. Large will mean greater than 5,000 employees.

[2015] The Warren Buffet Bet[@jstodgill]
By the end of 2030, Warren Buffet will have made a small fortune from printed electronics. Key metrics will be that Warren Buffet will have a) invested in printed electronics b) made a 10x return.

Added 3rd July 2015
[2015] The "I blew the company but still got paid millions, it was the staff wot dunnit" executive sweepstake.
By the end of 2025, how many of these companies will survive as stand-alone entities? For clarification, being acquired does not count as surviving nor does merging with another. The candidate companies are - Cisco, IBM, HP, Oracle, Microsoft, SAP, Dell, NetApp, VMware, Amazon and RedHat

0 survive - @frabcus
1 survive - @rorti33
2 survive - @zeruch
3 survive - me (@swardley)
4 survive - @sddc_steve 
5 survive - @saulcozens  
6 survive - @WorkingHardInIT
7 survive - @rbramley (would prefer 6 but that was taken).
8 survive - @codebeard (for a laugh, though he'd rather have a lower number).
9 survive - @yoz (though he'd rather have 7)
10 or 11 survive  - alas, no-one was willing to take this. Rather telling.

To be honest there was much gnashing of teeth that the lower numbers had gone, and so the higher number bets are more gestures of sporting behaviour. The usual bet: I'll buy a cup of tea for the winner next time I see them.

On a general note, everyone expects some, if not a significant number, of these companies to be gone by 2025. So, if you're into collectables ... do remember this. There might be some useful freebies at events that you can hold onto for 40 years or so as reminders of past greatness.

Added 25th August 2015

[2015] The "We succeeded by changing the definition" bet. [@WorkingHardInIT] 
By end 2020 Gartner will state that :-
1) hybrid means public plus public cloud consumption i.e. not just public plus own hosted (i.e. private) environments.
2) bimodal means three (or more) groups, not two.

[2015] The I had to up the risk factor (i.e. drop from 10%) until someone accepted - "OpenStack is sooooo niche" bet. [@Vecchi_Paolo]
By the end of 2020, the total revenue from product, direct licensing and service provider sales (excluding general hardware) of the entire OpenStack ecosystem will be less than 1% of the revenue generated by AWS.

To be frank, I expect that OpenStack will make a reasonable sum of money in the network equipment vendors space ... however, it's only a cup of tea. Still can't believe I had to drop from 10% to 1% of AWS revenue to get any takers. Not exactly a great deal of confidence out there.

Added 8th December 2016

[2016] The somewhat extreme mode "Hey, just what happened to Uber?" bet [@CloudOpinion, @GeorgeReese]
By the end of 2024, Didi Chuxing will dominate the US self driving taxi market.
Had to reduce from 2025 before anyone would happily take it.

[2016] The extreme mode "your entire company runs on Lambda?" bet [@davidajbagley, @jmwiersma, @LarryLarmeu]
By end of 2021, a $5Bn revenue company will announce it is running entirely on AWS lambda.
Had to reduce the date from 2024 all the way to 2021 before anyone would take it.

[2016] The somewhat extreme "What's an FPGA?" bet [@bmkatz]
By the end of 2023, a modular FPGA based device will dominate the mobile market.
Original had a target of 2025 but hey ... extreme is extreme.

Wednesday, August 18, 2010

Arguably, the best cloud conference in the world?

For those of you who missed the OSCON Cloud Summit, I've put together a list of the videos and speakers. Obviously this doesn't recreate the event, which was an absolute blast, but at least it'll give you a flavour of what was missed.

Welcome to Cloud Summit [Video 14:28]
Very light introduction into cloud computing with an introduction to the speakers and the conference itself. This section is only really relevant for laying out the conference, so can easily be skipped.
With John Willis (@botchagalupe) of opscode and myself (@swardley) of Leading Edge Forum.

Scene Setting
In these opening sessions we looked at some of the practical issues that cloud creates.

Is the Enterprise Ready for the Cloud? [Video 16:39]
This session examines the challenges that face enterprises in adopting cloud computing. Is it just a technology problem or are there management considerations? Are enterprises adopting cloud, is the cloud ready for them and are they ready for it?
With Mark Masterson (@mastermark) of CSC.

Security, Identity – Back to the Drawing Board? [Video 25:12]
Is much of the cloud security debate simply FUD or are there some real consequences of this change?
With Subra Kumaraswamy (@subrak) of eBay.

Cloudy Operations [Video 22:10]
In the cloud world new paradigms and memes are appearing :- the rise of “DevOps”, “Infrastructure == Code” and “Design for Failure”. Given that cloud is fundamentally about volume operations of a commoditised activity, operations become a key battleground for competitive efficiency. Automation and orchestration appear to be key areas for the future development of the cloud. We review current thinking and who is leading this change.
With John Willis (@botchagalupe) of Opscode.

The Cloud Myths, Schemes and Dirty Little Secrets [Video 17:38]
The cloud is surrounded by many claims, but how many of these stand up to scrutiny? How many are based on fact and how many are simply wishful thinking? Is cloud computing green, will it save you money, will it lead to faster rates of innovation? We explore this subject and look at the dirty little secrets that no-one wants to tell you.
With Patrick Kerpan (@pjktech) of CohesiveFT.

Curing Addiction is Easier [Video 18:41]
Since Douglas Parkhill first introduced us to the idea of competitive markets of compute utilities back in the 1960s, the question has always been when this would occur. However, is a competitive marketplace in everyone's interests and do providers want easy switching? We examine the issue of standards and portability in the cloud.
With Stephen O’Grady (@sogrady) of Redmonk.

Future Setting
In this section we heard from leading visionaries on the trends they see occurring in the cloud and the connection and relationships to other changes in our industry.

The Future of the Cloud [Video 29:00]
Cloud seems to be happening now but where is it going and where are we heading?
With J.P. Rangaswami (@jobsworth) of BT.

Cloud, E2.0 – Joining the Dots [Video 30:04]
Is cloud just an isolated phenomenon, or is it connected to many of the other changes in our industries?
With Dion Hinchcliffe (@dhinchcliffe) of Dachis.

The Questions
The next section was a Trial by Jury where we examined some of the key questions around cloud and open source.

What We Need are Standards in the Cloud [Video 45:17]
We put this question to the test, with prosecution from Benjamin Black (@b6n) of FastIP, defence from Sam Johnston (@samj) of Google and trial by a jury of John Willis, Mark Masterson, Patrick Kerpan & Stephen O’Grady.

Are Open APIs Enough to Prevent Lock-in? [Video 43:21]
We put this question to the test, with prosecution from James Duncan (@jamesaduncan) of Joyent, defence from George Reese (@georgereese) of Enstratus and trial by a jury of John Willis, Mark Masterson, Patrick Kerpan & Stephen O’Grady.

The Debates
Following the introductory sessions, the conference focused on two major debates. The first of these covered the “cloud computing and open source question”. To introduce the subject and the panelists, there were a number of short talks before the panel debated the impact of open source on cloud and vice versa.

The Journey So Far [Video 10:59]
An overview of how “cloud” has changed in the last five years.
With James Urquhart (@jamesurquhart) of Cisco.

Cloud and Open Source – A Natural Fit or Mortal Enemies? [Video 8:44]
Does open source matter in the cloud? Are they complementary or antagonistic?
With Marten Mickos (@martenmickos) of Eucalyptus.

Cloudy Futures? The Role of Open Source in Creating Competitive Markets [Video 8:43]
How will open source help create competitive markets? Do “bits” have value in the future and will there be a place for proprietary technology?
With Rick Clark (@dendrobates) of OpenStack.

The Future of Open Source [Video 9:34]
What will cloud mean for open source development and for Linux distributions? Will anyone care about the distro anymore?
With Neil Levine (@neilwlevine) of Canonical.

The Debate – Open Source and the Cloud [Video 36:24]
Our panel of experts examined the relationship between open source and cloud computing.
With Rick Clark, Neil Levine, Marten Mickos & James Urquhart.

The Future Panel followed the same format: first, an introduction to the experts who would then debate where cloud is taking us.

The Government and Cloud [Video 10:27]
The role of cloud computing in government IT – an introduction to the large G-Cloud and App Store project under way in the UK; what the UK public sector hopes to gain from a cloud approach, an overview of the proposed technical architecture, and how to deliver the benefits of cloud while still meeting government’s stringent security requirements.
With Kate Craig-Wood (@memset_kate) of Memset.

Infoware + 10 Years [Video 10:38]
Ten years after Tim created the term infoware, how have things turned out and what is the cloud’s role in this?
With Tim O'Reilly (@timoreilly) of O'Reilly Media.

The Debate – A Cloudy Future or Can We See Trends? [Video 50:12]
The panel of experts examined what’s next for cloud computing and what trends they foresee.
With Kate Craig-Wood, Dion Hinchcliffe, Tim O’Reilly & JP Rangaswami.

So, why "arguably the best cloud conference in the world?"

As a general conference on cloud, the standard and quality of the speakers was outstanding. The speakers made the conference: they gave their time freely and were selected from a wide group of opinion leaders in this space. There were no vendor pitches and no paid-for speaking slots, hence the discussion was frank and open. The audience responded marvellously with a range of demanding questions.

It is almost impossible to pick a best talk from the conference because they were all great talks. There are real gems of insight to be found in each and every one and each could easily be the keynote for most conferences. In my opinion, if there is a TED of cloud, then this was it.

Overall, the blend of speakers and audience made it the best cloud conference that I've ever attended (and I've been to 50+). This also made my job as a moderator simple.

I'm very grateful to have been part of this and so my thanks goes to the speakers, the audience, the A/V crew who made life so easy and also Edd Dumbill (@edd), Allison Randal (@allisonrandal), Gina Blaber (@ginablaber) and Shirley Bailes (@shirleybailes) for making it happen.

Finally, huge thanks to Edd and Allison for letting me give a version of my Situation Normal, Everything Must Change talk covering cloud, innovation, commoditisation and my work at LEF.

Wednesday, August 04, 2010

Islands in the sky

I'm often asked how the cloud will develop, to which I'll answer - "imperfectly, very imperfectly".

I was reminded of this through a long discussion with Benjamin Black, hence I thought I'd write something to explain my general thoughts on the problem. First, let me apologise as this will be a long post. Second, we need to start by recapping some basic concepts about risk. The barriers to cloud adoption cover three basic forms of risk :-

Disruption Risks: Changes to existing business relationships combined with issues around political capital and previous enterprise investment. It's often difficult to let go of that which we have previously invested in.

Transitional Risks: These risks are related to the shift from a world of products to a world of services and they include confusion over the models, trust in the service providers, governance of this service world, transparency from the providers and security of supply. Many of the transitional risks can be mitigated with a hybrid (private + public) cloud approach, a standard supply chain management technique. This approach has been used in many industries which have undergone a similar change, for example in the early decades of power generation it was common to combine public generation with private generators. Even today most data centres mix a variety of public suppliers with backup generators and UPS systems. Fortunately, these transitional risks are relatively short lived.

Outsourcing Risks: These cover lack of pricing competition between the new providers, lack of second sourcing options, loss of strategic control to a specific technology vendor, lock-in and unsuitability of the activity for such service provision (i.e. it's not ubiquitous or well defined enough for volume-operations-based provision). The outsourcing risks can be reduced through the formation of a competitive marketplace of providers with easy switching between them and, ideally, the option to bring service provision in-house. The outsourcing risks are long term.

For a competitive market to form, you need easy switching, which means portability. The basic ingredients of portability include a choice of providers, access to your code and data from any provider and semantic interoperability between providers, i.e. both the origin and destination providers need to understand your code and data in the same way. There is limited value in having access to your code and data if no other provider understands it and operates to provide the same functionality, e.g. getting access to your data in Salesforce is great but what do you do with it?

In such circumstances, there does exist a weaker form, syntactic interoperability, which means both providers can exchange data but the end result may not function in the same way and your data may not retain its original meaning. This is often where we see translation systems converting from one provider to another, with the usual abundance of translation and semantic errors.
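
To make the distinction concrete, here's a minimal sketch in Python (the provider formats are entirely hypothetical, invented for illustration): both sides can exchange the data without error, yet the meaning of a field does not survive the journey.

    # Hypothetical example: two providers exchange a machine description
    # (syntactic interoperability) but interpret a field differently, so
    # meaning is lost in translation (no semantic interoperability).

    def provider_a_to_b(instance: dict) -> dict:
        """Translate provider A's format into provider B's format."""
        return {
            "vm_name": instance["name"],
            # On A, "size" counts virtual cores; on B, "size" picks a
            # bundle of CPU + RAM. The translation is syntactically valid
            # but the resulting machine may behave quite differently.
            "size": instance["size"],
        }

    vm_on_a = {"name": "web-01", "size": 2}   # 2 virtual cores on A
    vm_on_b = provider_a_to_b(vm_on_a)        # bundle number 2 on B

    print(vm_on_b)  # same bytes, different meaning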

The ideal situation is therefore semantic interoperability, which generally means a common reference model (i.e. running code) which providers either operate or conform to. Unfortunately, common reference models come with their own risks.

Let us suppose you have a marketplace of providers offering some level of service at a specific level of the computing stack (SPI model) and these providers operate to a common reference model. The model provides APIs and open data formats, giving you access to your code and data. You therefore have a choice of providers, access to your data and semantic interoperability between them. You have portability. BUT, if that common reference model is owned by a vendor (i.e. it's proprietary code) then that market is not free of constraint but instead controlled by the vendor. All the providers & consumers in that marketplace hand over a significant chunk of strategic control and technology direction to the vendor, who is also able to exert a tax on the market through license fees.

To reduce this loss of strategic control and provide a free market (as in free of constraints), that common reference model must not be controlled by one party. It has to be open sourced. In such an environment, competition is all about operational efficiency and price vs QoS rather than bits. This makes intuitive sense for a service world, which is why I'm pleased OpenStack is following that route and I hope it will become the heart of a market of AWS clones. Obviously, you'll need different common reference models at different layers of the computing stack. Whilst only one is probably needed for infrastructure, you will need as many as there are competitive application marketplaces (CRM, ERP etc) in the software layer of the SPI model.

Before anyone cries the old lie that standardisation hampers innovation, it's worth remembering that utility service provision (which is what cloud is really about) requires volume operations, which in turn requires a ubiquitous and well defined activity. Whilst the common reference models certainly won't be perfect in the beginning, they don't need to be; they only have to create "good enough" components (such as a defined virtual machine). They will improve and evolve over time but the real focus of innovation won't be on how good these "good enough" components are but instead on what is built with them. This concept, known as componentisation, is prevalent throughout our industrial history and shows one consistent theme - standardisation accelerates innovation.

So everything looks rosy … we'll have the economic benefits of cloud (economies of scale, increased agility, ability to focus on what matters), a competitive marketplace based around multiple providers competing on price vs QoS, the option to use providers or install ourselves or to mitigate risks with a hybrid approach, "open" APIs & data formats giving us access to our code and data, open sourced common reference models providing semantic interoperability, "good enough" components for ubiquitous and well defined activities which will cause an acceleration of innovation of new activities built upon these components … and so on.

Think again.

In all likelihood, we're going to end up with islands in the cloud, marketplaces built around specific ways of implementing a ubiquitous and well defined activity. Don't think of "good enough" components but instead a range of different "good enough" components all doing roughly the same thing. Nuts? It is.

Hence, in the infrastructure layer you're likely to see islands develop around :-
  • EC2/S3 (e.g. the core of AWS) including open source implementations such as OpenStack, Eucalyptus and OpenNebula.
  • vCloud, principally provided through VMware technology.
  • a Microsoft infrastructure based environment.
  • the OpenStack APIs, particularly if Rackspace implements them.
All of these will provide their own versions of "good enough" units of virtual infrastructure. Within those islands you'll head towards multiple service providers or installations, a competitive marketplace with switching between installations and semantic interoperability based upon a common reference model. The open source projects such as OpenStack are likely to form assurance industries (think Moody's rating agency, compliance bodies) to ensure portability between providers by comparison to the common reference model, whereas the proprietary technologies are likely to develop certification bodies (e.g. VMware Express).

Between islands there will be only syntactic interoperability (with exceptions such as OpenStack which will try to span multiple islands), which will mean translating systems from one island to another. Whilst management tools will develop (and have already started) to cover multiple islands and translation between them, this process is imperfect and a constant exercise in chasing different APIs and creating a lowest common denominator (as per libcloud). Of course, I wouldn't be surprised if the libcloud folk were hoping that, as a community develops around them, providers will offer libcloud as a native API. Such command & conquer strategies rarely succeed.
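
To illustrate that lowest common denominator style, here's a rough sketch using Apache Libcloud's compute API (credentials are placeholders and the exact constructor arguments vary by provider and library version, so treat this as indicative rather than definitive):

    # One abstract interface over many providers, exposing only the
    # operations they all share (list, create, destroy nodes) and little
    # of what makes each provider distinctive.
    from libcloud.compute.types import Provider
    from libcloud.compute.providers import get_driver

    # Placeholder credentials; constructor arguments differ by provider.
    ec2 = get_driver(Provider.EC2)("ACCESS_KEY", "SECRET_KEY")
    rackspace = get_driver(Provider.RACKSPACE)("USERNAME", "API_KEY")

    for driver in (ec2, rackspace):
        # The common denominator: every driver can list its nodes ...
        for node in driver.list_nodes():
            print(driver.name, node.name, node.state)
        # ... but provider-specific features sit outside the abstraction
        # or appear only as per-driver extension methods.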

Given this complexity, and since there will be multiple service providers within an island, it's likely that consumers will tend to stick within one island. If we're lucky, some of these islands might die off before the problem becomes too bad.

Of course, these base components could affect the development of higher order layers of the computing stack and you are likely to see increasing divergence between the islands as you move up the stack. Hence, the platform space on the vCloud island will differ from the platform space on the EC2/S3 island. We will see various efforts to provide common platforms across both, but each will tend towards the lowest common denominator between the islands and never fully exploit the potential of any. Such an approach will generally fail compared to platforms dedicated to a single island, especially if each island consists of multiple providers, thereby overcoming the general outsourcing risks (lack of second sourcing options etc). Maybe we'll be lucky.

So, the future looks like multiple cloud islands, each consisting of many service providers complying with the standard of that island - either vCloud, EC2/S3 or whatever. There will be increasing divergence in higher order systems (platforms, applications) between the islands and, whilst easy switching between providers on an island is straightforward, shifting between islands will require translation. This is not dissimilar to the Linux vs Windows worlds with applications and platforms tailored to each. The old style of division will just continue with a new set of dividing lines in the cloud. Is that a problem?

Yes, it's huge if you're a customer.

Whilst cloud provides more efficient resources, consumption will go through the roof due to effects such as componentisation, the long tail of unmet business demand, co-evolution and increased innovation (Jevons' paradox). Invariably one of the islands will become more price efficient, i.e. one with no tax to a technology vendor who collects an annual license and upgrade fee through a drip-feed process. It's this increased dependency combined with price variance which will result in operational inefficiencies for one competitor when compared to another who has chosen the more efficient island. The problem for the inefficient competitor will be the translation costs of moving wholesale from one island to another. This is likely to make today's translations look trivial and in all probability will be prohibitive. The inefficient competitor will therefore be forced to compete at a continual disadvantage or attempt to drive the technology vendor to reduce their taxation on the market.
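
As a back of the envelope illustration (the numbers are purely hypothetical), even a modest vendor tax compounds painfully once consumption takes off:

    # Purely hypothetical numbers: a 15% vendor "tax" on one island,
    # amplified by Jevons-style growth in consumption.
    untaxed = 1.00                    # cost per unit on the open island
    taxed = 1.15                      # cost per unit on the taxed island

    for units in (1_000, 10_000, 100_000):   # annual consumption grows
        gap = units * (taxed - untaxed)
        print(f"{units:>7} units/year -> {gap:>9,.0f} extra spend on the taxed island")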

The choices being made today (many are choosing islands based upon existing investment and political choices) will have significant long term impacts and may come to haunt many companies.

It's for these reasons that I've recommended anyone getting involved in cloud to look for :-
  1. aggressively commoditised environments with a strong public ecosystem.
  2. signals that multiple providers will exist in the space.
  3. signals that providers in the space are focused on services and not bits.
  4. an open source reference implementation which provides a fully functioning and operating environment.
In my simple world, VMware is over-engineered and focuses on resilient virtual machines rather than commodity provision. It's ideal for a virtual data centre but we're talking about computing utilities, and it also suffers from being a proprietary stack. Many of the other providers offer "open" APIs but, as a point of interest, APIs can always be reverse engineered for interoperability purposes and hence there is no such thing as a "closed" API.

The strongest and most viable island currently resides around EC2/S3 with the various open source implementations (such as UEC), especially since the introduction of Rackspace & NASA's service-focused OpenStack effort.

I don't happen to agree with Simon Crosby that VMware's latest cloud effort Redwood == Deadwood. I agree with his reasoning for why it should be, and I agree that they're on shaky ground in the longer term, but unfortunately I think many companies will go down the Redwood route for reasons of political capital and previous investment. IMHO I'm pretty sure they'll eventually regret that decision.

If you want my recommendation, then at the infrastructure layer get involved with OpenStack. At the platform layer, we're going to need the same sort of approach. I have high hopes for SSJS (having been part of Zimki all those years back), so something like Joyent's Smart platform would be a step in the right direction.

---  Added 19th August 2013

Gosh, this is depressing. 

Three years and 15 days later, Ben Kepes (a decent chap) writes a post on how we're coming to terms with what are basically "islands in the clouds".

OpenStack followed a differentiation road (which James Duncan and I raised as a highly dubious play to the Rackspace execs at the "OpenStack" party in July at OSCON 2010). They didn't listen and we didn't get the market of AWS clones. In all probability, if AWS compatibility had been the focus back in 2010, the entire market around OpenStack could have been much larger than AWS by now. But we will never know, and today OpenStack looks like it has almost given up the public race and is heading for a niche private role.

In his article, Ben states that companies never wanted "cloud bursting" - a term which seems to be a mix of 'live' migration (a highly dubious and somewhat fanciful goal, more easily managed by other means) and the ability to expand a system into multiple environments.

Drop the 'live' term and both can be achieved easily enough with deployment and configuration management tools - one of the reasons why I became a big fan of Chef in '08/'09 (and not just because of my friend Jesse Robbins). This sort of approach is simple if you have multiple providers demonstrating semantic interoperability (i.e. providing the same API and the same behaviour), as your cost of re-tooling and management is small. It becomes unnecessarily complex with more islands.
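
As a rough sketch of the idea in Python (the helper functions are hypothetical stand-ins for a provisioning library and a configuration management tool such as Chef, not anyone's actual API):

    # "Expansion by redeployment" rather than 'live' migration: describe
    # the system as data, then stand it up on any provider by running the
    # same description through the same tooling.

    def provision_node(provider: str, name: str) -> str:
        # Stub: in practice this would call the provider's API.
        print(f"[{provider}] provisioning {name}")
        return name

    def apply_recipe(node: str, recipe: str) -> None:
        # Stub: in practice a tool like Chef would converge the node.
        print(f"  applying {recipe} to {node}")

    SYSTEM = {
        "web": {"count": 2, "recipe": "role[web]"},
        "db":  {"count": 1, "recipe": "role[db]"},
    }

    def deploy(provider: str, system: dict) -> None:
        """Provision and configure every tier of the system on a provider."""
        for tier, spec in system.items():
            for i in range(spec["count"]):
                node = provision_node(provider, f"{tier}-{i}")
                apply_recipe(node, spec["recipe"])

    # With semantic interoperability between providers, expanding into a
    # second environment is one more call, not a migration project.
    deploy("provider-a", SYSTEM)
    deploy("provider-b", SYSTEM)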

Anyway, that aside, the one comment I'll make on Ben's post is that the goal was never "cloud bursting" but instead second sourcing options and a balancing of the buyer/supplier relationship. Other than that, a good but depressing post.