Saturday, October 20, 2012

Something for the future

A list of companies in two groups. I'm putting this here in order to return to the list in 2020 and explain why I had the companies listed as such.

It's a prediction test (useful for me, hence I'm putting it somewhere public) but probably not useful for anyone else.

Group 1
  1. Amazon
  2. Google
  3. Samsung
  4. BP
  5. Baidu
  6. China Telecom
  7. EMC
  8. Lloyds Bank
  9. ARM
  10. Netflix
  11. eBay
  12. Yahoo
  13. Intel
  14. Facebook
  15. BAE Systems
  16. Lenovo
  17. Salesforce
  18. Time Warner
  19. Huawei
  20. Canonical
  21. Citrix
  22. Fastly
  23. Bromium
  24. Opscode
  25. Juniper Networks

Group 2
  1. DuPont
  2. GSK
  3. Walmart
  4. Microsoft
  5. Berkshire Hathaway
  6. Goldman Sachs
  7. Barclays Bank
  8. Red Hat
  9. Walt Disney
  10. IBM
  11. Cable and Wireless
  12. Canon
  13. SAP
  14. HP
  15. Couchbase
  16. Oracle
  17. Cisco
  18. Puppet Labs
  19. Apple
  20. Rackspace
  21. Twitter
  22. Dell
  23. Nokia
  24. Zynga
  25. PistonCloud

Friday, October 19, 2012

On Open Source, Standards, Clouds, Strategy and Open Stack


On Standards and Open Source

The issue of standards, and in particular open standards, is a hotly debated topic. The recent UK Government consultation on open standards was embroiled in much of the politics on the subject, with even a media exposé of the chair of a consultation meeting as a member of a paid lobbying group. The rumours and accusations of ballot stuffing at ISO meetings with regards to Microsoft's OOXML adoption as an open standard are also fairly widespread. The subject matter is also littered with confusing terms, such as FRAND (fair, reasonable and non-discriminatory) licensing being promoted as an "open" standard despite being IP-encumbered by definition.

In general, the principle of standards is about interoperability. In practice, it appears to be more of a battleground for control of a developing market. Standards themselves can also create new barriers to entry into a market due to onerous costs of implementation. There are also 17 different definitions of what an "open standard" is, varying from international bodies to governments. Of these, the OSI definition is probably the most respected. Here, open standards are defined as those which have no intentional secrets, are not restricted by patents or other technology, have no dependency on the execution of a license agreement, are freely and publicly available, and are royalty free.

When I talk about the evolution of systems (whether activities, practices or data), I mean the evolution of things that meet specific needs, such as providing a website or reporting the carbon dioxide emissions of a country. Standards, however, refer to generic needs that apply across many stages of evolution – the need for things to work together and the need to be able to switch between solutions. How these generic needs are implemented changes as the underlying system evolves.

For example, in the world of software products then standards can provide interoperability between products and hence the potential to replace one product with another. The standard is normally articulated as a principle of working (i.e. how data is transferred and interpreted) in a document and the expression of this standard (i.e. the code itself) is left up to the product vendor.

This process is often imperfect, as one vendor's interpretation and expression of a principle might not match another's. However, since switching between solutions is not usually time critical for the end user, some level of imperfection is acceptable, i.e. you own the product and if you decide to switch you can migrate at your own pace. It should be noted, though, that the process of migration is often a fraught one due to these imperfections.
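To make the interpretation problem concrete, here is a toy sketch (the "standard" and both vendors are entirely invented for illustration): a documented principle says records carry a date field as a string, and two vendors express that principle differently.

```python
from datetime import date

# A toy 'standard': the document says records carry a date as "N/N/YYYY".
# Two vendors read the same principle and express it differently in code.

def vendor_a_parse(s):
    """Vendor A interprets the field as day/month/year."""
    d, m, y = map(int, s.split("/"))
    return date(y, m, d)

def vendor_b_parse(s):
    """Vendor B interprets the field as month/day/year."""
    m, d, y = map(int, s.split("/"))
    return date(y, m, d)

record = "01/02/2012"           # exported by vendor A, imported by vendor B
print(vendor_a_parse(record))   # 2012-02-01
print(vendor_b_parse(record))   # 2012-01-02, same bytes, different meaning
```

Both implementations "conform" to the documented principle, yet migrating data between them silently corrupts it, which is exactly why a shared expression (running code) rather than a shared document matters once switching becomes time critical.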

In the world of utility services, the switching time can be critical and immediate such as the termination of a service. Here imperfections in migration are extremely undesirable and effective switching requires semantic interoperability between the services. In other words, any code and data is understood in the same way between the providers. In practice this can only be achieved if both providers are running the same system or strictly conforming to a reference model (an expression of code) rather than their interpretation of a documented principle.

Switching is an essential part of any competitive market and with utility services then the principle in a standards document is not enough and a reference model (i.e. running code, the expression) is required. If that market is to be unconstrained (i.e. free) then that expression needs to be open itself.

Hence open standards in the product world can simply be open documented principles but in the utility world they require open source reference models (i.e. running code). This simple fact is counter to how standards have been used in the past battles between products and hence the confusion, debates and arguments over the terms are unsurprising.

On Clouds, Open Source and Standards

So, we come to the cloud, which is simply the evolution of a range of IT activities from a product and product rental model to one of commodity and utility services. If we are going to see competitive free markets in this space, then open source reference models are essential. But the value to the end user is the market, not whether one system is more open than another.

An example of this is in the IaaS space. If you ask some of the big users of AWS what they want, they often reply with multiple AWS clones and rarely with another IaaS with another API. To create a marketplace of AWS clones you're going to need to start with an open source system that provides the EC2/S3/EBS APIs with multiple implementations (i.e. providers of such).

I first raised this issue at Web 2.0 in 2006 and by 2007 the whole debate over "open APIs" was kicking off; it has raged ever since. The argument today goes that EC2/S3/EBS are not "open". However, APIs are simply principles; they cannot be owned, only your expression of them can be. This has been re-affirmed in several court cases over the last few years.

This debate has already started to switch to "But the process of creating EC2/S3/EBS isn't open" ... well, neither is the process for the development of Android, and the end user doesn't care. As Benjamin Black once said, "Solve user problems or become irrelevant".

The user problem is a competitive free market of multiple utility providers offering the de facto standard (which, in case you haven't realised, is EC2/S3/EBS) and not a plethora of APIs basically doing the same thing, with super APIs attempting to manage this mess under various cries of being more "open".

Matt Asay does a good job of hitting the point home with his post "Whose cloud is the open sourciest ... who cares?". Let us be crystal clear here: Matt isn't against open source in the cloud; he (like most of us) understands its essential importance. However, Matt also understands that the focus needs to be on the user need for competitive markets.

Open source is absolutely essential for creating a competitive free market, but the focus should be on solving the users' need, i.e. creating the market.

The focus should not be on differentiation of stuff that doesn't matter, whether because of some belief that you can out-innovate the biggest ecosystem in a utility space or because you view it as an on-ramp to your own public services. The focus shouldn't be on protecting existing industries (something which concerns me with the EU communication on Cloud). To reiterate, for the umpteenth time over umpteen years: the focus should be on adapting to this new world and using open source to create a utility market which meets users' needs.

Mark Shuttleworth nailed this point many years ago in "Innovation and Open Stack: Lessons from HTTP".


On Strategic Play and Open Stack

So when it comes to playing this game, especially because of the powerful effects that ecosystems can create, the smart strategic play is to build an open source equivalent to EC2/S3/EBS (i.e. all the core AWS features) and create a marketplace of multiple providers. The goal is to co-opt. Sure, you can differentiate later when your ecosystem is bigger, but at this moment in time, co-opt and build a bigger ecosystem through a competitive market.

But how do we know we can match the APIs? Well, the beauty of competitive markets is that they allow for assurance services and exchanges, i.e. there is value in the test scripts which ensure that an API implementation is faithful. So, build the test scripts, use those to build your open source IaaS and allow people to create a Moody's-style rating agency business as your market forms.
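A minimal sketch of what the heart of such a conformance test script might look like. The field names and response shapes below are illustrative inventions, not the real EC2 schema: the idea is simply to check that a candidate provider's response is semantically equal to a reference response, ignoring fields that legitimately differ between providers.

```python
# Sketch of an API-fidelity check: compare a candidate provider's response
# against a reference response, ignoring fields that legitimately differ
# between providers (request ids, resource ids, timestamps).

IGNORED_FIELDS = {"requestId", "instanceId", "timestamp"}  # illustrative

def normalise(doc):
    """Strip provider-specific noise so only semantic content remains."""
    if isinstance(doc, dict):
        return {k: normalise(v) for k, v in sorted(doc.items())
                if k not in IGNORED_FIELDS}
    if isinstance(doc, list):
        return [normalise(item) for item in doc]
    return doc

def conforms(reference, candidate):
    """True if the candidate response is semantically equal to the reference."""
    return normalise(reference) == normalise(candidate)

# Canned 'DescribeInstances'-style responses (shape is a made-up example).
reference = {"requestId": "aaa-111", "instances": [
    {"instanceId": "i-123", "state": "running", "type": "m1.small"}]}
candidate = {"requestId": "bbb-222", "instances": [
    {"instanceId": "i-999", "state": "running", "type": "m1.small"}]}

print(conforms(reference, candidate))  # semantic match despite differing ids
```

A real suite would replay a large library of canned requests against live endpoints, but the principle is the same: the test scripts, not the API document, become the tradable definition of "faithful", which is what an assurance or rating business would sell.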

Whilst CloudStack and Eucalyptus have made steps in this direction, Open Stack (and more likely the Rackspace people involved) seems more reluctant. "Anything but Amazon" seems to be the rallying cry, and the adoption of EC2/S3/EBS in Open Stack appears to have been something the community forced upon it.

Differentiation on APIs etc. is a pointless hangover from the product world. Adoption of the APIs and differentiation on price vs quality of service is the way forward. Being an AWS clone and creating a market around one isn't "giving in"; it's competing on what users rather than vendors want.

I would like to see Open Stack succeed; there are many talented people (and friends) involved. But I have my doubts because I feel it has wasted an opportunity. It has been very poorly led in my view, despite its success in marketing.

The battle in IaaS is going to heat up between AWS and GCE in the next year or so. Open Stack during that time needs to create a large competing market and an equivalent ecosystem, which will only be hampered if it doesn't co-opt AWS. Hence it has probably twelve months or so to get the technology rock solid, achieve multiple large scale implementations (by large I mean a minimum $500 million capital spend for each installation), overcome the potential collective prisoner's dilemma issue (of everyone differentiating rather than contributing to the core) and form the competitive market.

If it fails, then my view is the good ship Open Stack will become the Mary Celeste 2.0 and that will be a terrible waste of effort.

In my view, Open Stack needs a strong benevolent dictator (like a Mark Shuttleworth for Ubuntu): an individual who is willing to do what's in the interest of the end user and community and ride roughshod over others where necessary. Of course, the essential part is benevolence, and it's easy to fall foul here. With Open Stack, my advice is to focus on engineering quality and build the best open source AWS equivalent there is. Anything else (especially when the words differentiate and innovate are used to describe it) should be unceremoniously dumped off a high cliff for the time being. Along with it should go the every-API, every-hypervisor and "be all things to everyone" concepts: focus, focus and more focus on one thing, e.g. the best AWS clone. At a push, you could make an argument for AWS and GCE as your native APIs ... but I'd advise against it.

I'm glad we have an Open Stack Foundation, as the chances of that benevolent dictator emerging have grown. I'm hoping someone like Lew Tucker will step up to the plate.

However, where would I place my bet? In the current three horse race between Open Stack, CloudStack and Eucalyptus, the latter two have been playing a better game in my view, though this race is far from over. If forced to choose one, well, that's splitting hairs between the likes of Eucalyptus and CloudStack, but I'd probably bet on CloudStack. They have a good focus, they're part of the ASF, they have a well funded backer and they have numerous production deployments at scale. It is, however, too early to tell ... the next 12 months are critical.

Monday, October 15, 2012

At last ... a great definition for cloud computing

I'm not a fan of the term 'cloud computing' nor the umpteen number of definitions. I don't like NIST's mechanistic definition of 'cloud computing' which misses the nuances and so I prefer to stick with 'computer utilities' (as described by Parkhill in his 1966 book).

A definition of 'cloud computing' has to consider the economic changes due to the evolution of computing infrastructure (a technology activity) to more of a utility but at the same time it has to be mindful of niches and the different organisational and security requirements (resulting in various forms of hybrid environments) during the transition. Many of these won't last (it is a transition after all) but they need to be considered.

Somehow, in all of this time, I've missed this wonderfully simple definition of 'cloud computing' by Ramnath K. Chellappa in 1997 at INFORMS

All I can say is that this definition is almost perfect in simplicity and at the same time incredibly sophisticated in nuance (and vagueness). It also happens to be the first known definition of 'cloud computing' (being from 1997) and, as far as I'm concerned, it has been downhill ever since.

Tuesday, October 09, 2012

Some trivia questions on cloud computing

 ... just for amusement.

Questions

1. In which year was the idea that the future of computing would be provided by public, private and mixed utilities, much like electricity, first published in a book?

2. Which came first, a utility based IaaS or a utility based PaaS?

3. Was Amazon EC2 built on selling Amazon's spare capacity?

4. When did Amazon start working on the concept of EC2?

5. In which year was the idea of future utility markets, federated grids and the role of open source in cloud computing first publicly presented?


Answers

1. 1966, Douglas Parkhill, The Challenge of the Computer Utility.

2. Utility based IaaS. The first utility based PaaS (Zimki) was publicly launched at D.Construct, 14 days after the launch of the best known utility based IaaS, EC2, on the 25th August 2006.

3. No. The myth that Amazon EC2 was built to sell Amazon's spare capacity is one of those unquenchable and totally untrue rumours.

4. 2003, though the implementation of the idea started in 2004. A good bit of background on this can be found on Benjamin Black's post.

5. Whoot, I'd like to claim that it was me in 2006, with an earlier version of the talk I repeated at OSCON in 2007, but that would be completely untrue (see http://blip.tv/swardley/commoditisation-of-it-419213).

The reality is that these ideas were fairly common by 2007 and I don't know when it actually started. Some of the federation ideas certainly date back to the 1960s, and many of the concepts above were described in this 2003 paper on Xenoservers by Ian Pratt et al.

There are many earlier publications, normally around the notion of markets of common service providers (CSPs). You can also bet your bottom dollar that many academics were looking into this issue between 1995 and 2005.

So I'm afraid this was a trick question and the answer is ... no idea but earlier than people normally think.

Comments

The point I want to get across is that the concepts of cloud computing are much older than many realise, that there are still many myths (such as the Amazon spare capacity story) and that we're in our 7th year of commercial applications. Normally, these changes take 8-12 years to become widespread and mainstream, hence expect this over the next year or so. If you're just getting up to speed with cloud computing then, to be honest, you're perilously close to being a laggard.

Friday, October 05, 2012

Don't try to out-innovate Amazon

Amazon is famous for its two factor market and platform plays. The modus operandi of Amazon in general is :-

1. Find an activity which is so well understood and ubiquitous that it is suitable for provision as a commodity (ideally a utility service). Examples would be an online marketplace, infrastructure, certain data sets etc.

2. Provide a platform to exploit this.

3. Enable an ecosystem to build on the platform. This should be either an ecosystem of providers and consumers (two factor market) or consumers of an API (e.g. developers building higher order systems).

4. Mine that ecosystem for new information on trends (i.e. things diffusing in the market or through consumption of the API).

5. Commoditise those new trends, either through copying or acquisition, in order to provide new component services which both feed the ecosystem and encourage it to innovate more.

This model is far from new. The basics are: get others to Innovate, Leverage the ecosystem to spot trends and Commoditise to component services – ILC for short. It enables the company to appear highly innovative, highly customer focused and highly efficient all at the same time, and the ability to do all three increases as the ecosystem grows.

So, when you come up against Amazon in your industry, here are two simple rules: a don't and a do.

Don't try to out-innovate Amazon : You're not actually fighting Amazon over innovation; you're fighting the entire ecosystem that has built upon its platform and is doing much of the innovation (in terms of new activities). It's worth remembering that some of Amazon's ecosystems can contain hundreds of thousands of developers and companies. If you're going to try this alone then you'll need an enormous R&D group to compete on these terms. If you haven't got this then the reality is you'll just get spanked. This is despite Amazon, if my sources are correct, not having a traditional R&D group. It wouldn't surprise me if, every time Amazon hears a company say "We're going to out-innovate Amazon", they cross them off their list of competitors to watch and mark them "RIP". The only time it's really worth fighting on these terms is when you have an equivalent size of ecosystem (or you're pretty confident of quickly getting one) combined with the ability to exploit it. In which case you're not really trying to out-innovate Amazon; you're focused on getting your ecosystem to out-innovate their ecosystem.

Do try to co-opt and out-commoditise Amazon : Critical to this game is building a bigger ecosystem, and one way is to exploit the main weakness of Amazon being a single provider. So, try to build a competing market of equivalent providers, enabling customers to easily switch between its members. Co-opt Amazon's ecosystem as much as possible. Provide the systems as open source and don't fall into the trap of all the members trying to differentiate (the collective prisoner's dilemma issue). Once your ecosystem is big enough, you can use it to out-innovate Amazon and its ecosystem.

Thursday, October 04, 2012

Spoiler Alert ... 3D printing

Anyone who knows me knows I've spent well over a decade keeping tabs on, and occasionally being actively involved in, 3D printing and printed electronics. My real interest, as it always has been, is the wider economic cycle of change, of which 3D printing will be part in the next cycle.

To cut a long story short, 3D printing is about commoditisation of the means of manufacture in the same way that the Internet was about commoditisation of the means of mass communication and Cloud (a term I despise) is about commoditisation of a range of IT activities.

Anyway, I spent rather a dull afternoon in the company of a respected "future"-ologist who clearly has no idea what he is talking about. So, I thought I'd be a miserable old fart and tell everyone what 3D printing is actually going to do.

This is a SPOILER for the future.

1. Commoditisation of the manufacturing process will result in an explosion of new activities (higher order systems) as the means of manufacture becomes ubiquitous. This will create a time of wonder and new industries (see how electricity led to radio, Hollywood, consumer electronics, computing etc.) but what those new things are and what new industries form, well, we don't know yet because they're uncertain (this is the one bit for which you still have to wait and see). However, this won't stop endless pontification on the subject and the writing of books about the massive transformation underway and "Does Manufacturing Matter?"

2. The manufacturing industry will shift from a state of "peace" where relative competition exists and sustaining change tends to exceed disruptive change to a state of "war". It will become a fight for survival, where many past giants who have created inertia to change due to their past success fail to adapt and subsequently collapse. During this "war" disruptive change will exceed sustaining change and new entrants (not encumbered by past business models) will lead the charge. Among the new entrants will be our future giants.

3. Executives of former giants that are collapsing will once again start blaming culture (i.e. others) for their failure (a lack of vision and action).

4. We will see a flood of capital from the past manufacturing industries to these new industries. Some economist somewhere will write a book on Schumpeterian economics and 3D printing.

5. As the activity of manufacturing changes (from custom built factories to ubiquitous 3D printers) then practices in manufacturing will change. Practices often co-evolve with activities (e.g. architectural practices have shifted with computing infrastructure moving from product to utility). Someone will come up with a catchy name for this, like "DevOps" with cloud today.

6. The new practices will result in new forms of organisation, as per the electrical age (Fordism), the Internet age (Web 2.0), cloud (next generation) and every other age. Some management guru will write a book on these new organisational structures, probably calling them the "new Fords" or another equally lame term.

7. At the height of this change, someone will write a book about how commoditisation of the manufacturing process (though they won't call it that, as we will have created some daft term like "cloud") will lead to mass unemployment. This book will probably overlook every other book which has been written about the same phenomenon (Hawkins and the electrical age) and how each time the author failed to anticipate the new activities (higher order systems) and related industries that would form, e.g. the electrical age leading to radio, Hollywood etc. The author will of course fail to anticipate them because the activities are uncertain (see point 1).

8. In a desperate attempt to save their bacon, past industries will promote the importance of physical DRM to prevent people stealing copyrighted ideas or making dangerous items. Security and the threat of people being able to manufacture things like guns will be used to explain why this change is dangerous for us all and must be stopped. Some lobbyist groups will persuade some Government somewhere that progress is bad.

9. Along with disruption of past giants many secondary industries will discover that their industries will be disrupted due to rapid reduction in barriers to entry. See Newspapers and the "Internet is good for us as it'll reduce our distribution and printing costs ... wait ... bloggers ... oh, no ... stop the Internet" moment. Someone will write a book on how 3D printing is killing our culture / industry / society etc.

10. Because of the reduced barriers to entry and the rise in competitors that were once former consumers (e.g. Amazon vs Infrastructure Providers, Bloggers vs Newspapers), someone will write a book on "Democratizing the means of manufacturing" or something like that. I wish they wouldn't.

11. The trickle of adoption of these new manufacturing techniques and practices will become a flood as the combined forces of efficiency (through commoditisation of the activity), increased agility in building higher order systems (componentisation) and future sources of worth (Schumpeter) kick in. This will take everyone by "surprise", especially the analysts who expected the change to occur slowly, in a linear fashion (which it never does).

12. With the new techniques and practices there will be a growth in concepts of Agile Manufacturing followed by endless arguments over Agile vs Six Sigma equivalents. Eventually people will realise that one size doesn't fit all and both approaches have their place.

13. As commoditisation of the manufacturing process spreads and the explosion in higher order systems and new industries accelerates, there will be a corresponding explosion in un-modelled data. This is data which we don't know how to model yet but which will eventually be modelled as we understand it more. Annoyingly, people will call it unstructured and will probably create a term like "Big Data 2.0", ignoring the fact that the current Big Data revolution is about the fifth one we've been through in the last 200 years.

14. As hardware becomes more malleable, like software, some bright spark will realise that the function of a device consists of digital and physical elements, both of which can be described by code, and hence a new language will form. In this new language you will describe the function of the thing you want and a compiler will work out which bits should be code and which bits should be printed. They probably won't call it "SpimeScript" but they will come up with an equally daft name.

15. The open meme (e.g. open source, open hardware, open data, open APIs) will happily continue its symbiotic relationship with the Internet and grow rapidly in this space. Former product design companies will get fairly grumpy about this and there will be endless patent arguments. The patent system will become hopelessly outmatched for this world and will become as harmful for manufacturing as it has been in the software industry.

16. Google, Amazon and the like will swoop in with their normal two factor market and platforms plays to grab the developing ecosystems in this space. Many manufacturing, architectural and construction companies will find themselves now competing with the likes of Amazon backed by huge ecosystems of companies selling designs for direct printing (from that new watch to your new home). The results won't be pretty as usual.

17. Some Government official (probably pushed on by lobbyists from past industries) will start to talk about the needs for certification of designers, architects and the like on the grounds of "consumer benefit". Some standards body will jump on this as a potential new revenue stream. It will all go badly wrong.

18 ... oh, it's late. You get the gist. Just go back and replay every other industrial cycle of change.