IT is currently undergoing a transformation as parts of the industry shift from a product to a service based economy. This transformation is a normal consequence of the commoditisation of an activity: from a novel innovation, through bespoke systems and products, to commodity-like services (e.g. utility services). The clearest evidence of this transformation can be seen in the abundance of service related messages in current IT trends, whether in web services, software as a service, service oriented architecture or the mashing up of services.
It should be noted that not all of IT is becoming commoditised as standard services, only those parts which are ubiquitous within the industry and, as a consequence, are well defined or near feature complete.
The benefits of a shift to services for consumers are as follows :-
- Acceleration in business innovation due to rapid creation of new higher order systems through componentisation (i.e. it's faster to build a house with planks than to start with cutting down your own trees).
- Economies of scale.
- Conversion of significant Capex to utility based Opex with a stronger link between cost and usage.
- Ability to focus resources on core activities.
- Minimisation of capacity planning effects.
Due to these benefits and the Red Queen effect (i.e. the need for businesses to remain competitive with each other), we are likely to see a large scale movement towards the services world once the barriers to adoption are removed.
At this moment the main adoption barriers are :-
- Concerns over reliability and security of providers (in many cases this is not grounded in fact).
- Concerns over legal issues (data transfer over geographic boundaries).
- Concerns over lock-in to providers and the lack of second sourcing options.
Understanding the Cloud.
The service world can be broken into a computing stack created from a number of discrete components (as per Herbert Simon's componentisation work on the theory of hierarchy). The main components of interest are:-
The Application layer : for example a specific application (such as CRM) or application data services (such as those provided by AMEE, Google Maps etc).
The Framework layer : for example, the development framework or platform, the messaging systems, any file storage service, the database and all the components used in providing an environment for the application to exist within.
The Hardware layer : for example, the operating system, the virtual machine and the bare metal. All the components used in providing an operating computing environment for the framework layer to exist within.
Each of the service layers can be built upon each other; for example an application which is provided by one company as a service can be built upon a framework which is provided by another company as a service and so on.
The organisation of these layers into stable components provided as services creates stable subsystems for the higher layers of the stack and this is what accelerates business evolution and innovation. For example mashing up a stable CRM service with Cartographic and CO2 emissions data provided by other services to create an application for calculating customer CO2 emissions is a much faster operation than having to build the application by creating the operating system first. This effect is known as componentisation.
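The CO2 mashup described above can be sketched in a few lines of code. The three service functions below are local stubs standing in for remote APIs; every name, site and figure is invented for illustration, the point being only that the application is composed from stable components rather than built from the operating system up.

```python
# Illustrative sketch of componentisation: composing three hypothetical
# services (CRM, cartography, emissions data) into a higher-order
# application. All data below is invented for illustration.

def crm_customers():
    """Stub for a CRM service: returns customer records."""
    return [{"name": "Acme Ltd", "site": "London"},
            {"name": "Globex", "site": "Leeds"}]

def distance_km(origin, destination):
    """Stub for a cartographic service: delivery distance between sites."""
    distances = {("Depot", "London"): 30.0, ("Depot", "Leeds"): 320.0}
    return distances[(origin, destination)]

def co2_per_km():
    """Stub for an emissions-data service: kg of CO2 per delivery km."""
    return 0.25

def customer_emissions(depot="Depot"):
    """The 'mashup': per-customer delivery emissions in kg of CO2."""
    factor = co2_per_km()
    return {c["name"]: distance_km(depot, c["site"]) * factor
            for c in crm_customers()}

print(customer_emissions())  # → {'Acme Ltd': 7.5, 'Globex': 80.0}
```

Swapping any stub for a real provider leaves `customer_emissions` untouched, which is the essence of building on stable subsystems.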
This effect can occur throughout the organisation i.e. if you consider that your company has inputs and outputs and, between them, a value chain of components and systems which convert those inputs to outputs, then this entire value chain can evolve. In other words, every part of your business is commoditising and hence enabling new higher order systems to appear ... nuts and bolts enable machines, electricity enables radio etc. These cloud services will enable numerous new things to appear which in turn will commoditise.
Any service (at whatever layer of the computing stack) can be provided either by a single company or through multiple providers. In the latter case, where there is freedom to switch between one provider and another, the providers form an ecosystem known as a competitive utility computing market. The word utility in this case is used by comparison with existing utilities such as electricity, telephone and gas.
The ability to switch between providers overcomes the largest concerns over using such service providers: the lack of second sourcing options and the fear of vendor lock-in (and the subsequent weaknesses in strategic control and lack of pricing competition). Where such switching between providers can occur, the user's own systems may also act as a provider. This allows for the creation of hybrid arrangements with the use of external services as a top-up for internal systems.
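The hybrid "top-up" arrangement amounts to a simple placement policy: serve demand from internal capacity first and send only the overflow to an external provider. The sketch below illustrates this with arbitrary units; the function name and the policy itself are assumptions for illustration, not a description of any particular product.

```python
# A minimal sketch of a hybrid top-up policy: internal capacity is
# consumed first, and only the overflow goes to an external utility
# provider. Units and the policy are illustrative assumptions.

def place_load(demand_units, internal_capacity):
    """Split demand between internal systems and an external provider."""
    internal = min(demand_units, internal_capacity)
    external = demand_units - internal
    return {"internal": internal, "external": external}

print(place_load(80, 100))   # fits entirely on internal systems
print(place_load(140, 100))  # 40 units spill to the external provider
```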
Whilst higher layers of the computing stack may be built upon lower layers which are provided as services, the lowest layer (hardware) is concerned with the bundling of bare metal (from CPU, I/O to memory) and its provision as an operating environment.
Separate machines can be bundled together to act as a "single machine" through cluster and grid technologies and single machines can be made to act as though they were many separate machines through the use of virtualisation technologies. Furthermore an application may cluster together or balance its load across several virtual machines. The use of cluster, grid and virtualisation strategies is one of balancing total available computing resources (no matter what the source) to match the demand for computing resources.
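The balancing act described above — matching demand for computing resources against total available resources — can be illustrated with a toy first-fit placement of virtual machines onto physical hosts. This is a deliberately naive sketch (real schedulers consider CPU, memory, I/O and migration costs); sizes are arbitrary units and the algorithm choice is mine, not drawn from any specific platform.

```python
# Toy first-fit placement of VMs onto hosts: each VM goes to the first
# host with enough spare capacity; None means demand exceeds supply.
# Capacities are in arbitrary units, purely for illustration.

def first_fit(vm_sizes, host_capacity, host_count):
    """Return, for each VM, the index of the host it was placed on."""
    free = [host_capacity] * host_count
    placement = []
    for size in vm_sizes:
        for i, room in enumerate(free):
            if size <= room:
                free[i] -= size
                placement.append(i)
                break
        else:
            placement.append(None)  # no host has room for this VM
    return placement

print(first_fit([4, 3, 5, 2], host_capacity=8, host_count=2))  # → [0, 0, 1, 1]
```

The same mechanism read in reverse is clustering: several hosts presenting their combined free capacity as one pool.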
For the sake of clarity (or not), the reader should be aware that as with operating systems there is more than one type of virtual machine format, with each format controlled by a specific hypervisor or virtual machine monitor. These monitors may run either on bare metal or within an operating system and hence it is theoretically possible to run multiple layers of virtual machine to operating system to virtual machine and so on. The most well known hypervisors are Xen, VMware, KVM and Hyper-V.
This combination of concepts from computing stack (from hardware to software) provided as a service, competitive utility computing markets, hybrid systems, virtualisation, clustering and grid technology is commonly grouped together under the term “cloud computing”.
Competitive markets vs monopoly.
Eventually, either through market competition or government regulation, it is likely that we will end up with competitive utility computing markets. However, during the early stages of transition from a product to a service based economy we are more likely to have monopoly like environments. Such environments will occur because there will be a lack of interoperability and portability between providers at any particular layer of the stack. For example, in these early stages we are more likely to see heterogeneous providers of hardware services offering different APIs, VMs and ancillary services rather than homogeneous providers offering the same environment with easy switching between them. That said, there are a number of efforts such as vCloud and OVF which directly attack this problem.
OVF deals with the packaging of a virtual appliance (a pre-configured software stack comprising one or more virtual machines) in a hypervisor neutral format, in order that the virtual appliance can be installed on several different hypervisor environments. OVF does not, at the time of writing, deal with the portability of a running instance between virtual platforms. Within the vCloud initiative, however, vMotion does deal with the portability of a running virtual machine from one physical server to another, though this movement is confined to the VMware hypervisor.
I am not aware of any system which will currently move a running virtual machine from one hypervisor environment to another, however please feel free to correct me. (I have not yet checked out Citrix's Kensho environment)
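To make the OVF point concrete: the descriptor at the heart of the package is plain XML, which is what makes it readable by any platform regardless of hypervisor. The envelope below is a hand-written, heavily trimmed illustration (real descriptors carry disk, network and hardware sections), not a complete or validated OVF document.

```python
# Sketch: reading the virtual systems out of a minimal, hand-written
# OVF-style descriptor. The descriptor here is illustrative only and
# omits most sections a real OVF envelope would contain.
import xml.etree.ElementTree as ET

OVF_NS = "http://schemas.dmtf.org/ovf/envelope/1"

descriptor = """\
<Envelope xmlns="http://schemas.dmtf.org/ovf/envelope/1"
          xmlns:ovf="http://schemas.dmtf.org/ovf/envelope/1">
  <References>
    <File ovf:id="disk1" ovf:href="appliance-disk1.vmdk"/>
  </References>
  <VirtualSystem ovf:id="web-vm">
    <Info>A single pre-configured virtual machine</Info>
  </VirtualSystem>
</Envelope>
"""

root = ET.fromstring(descriptor)
systems = [vs.get(f"{{{OVF_NS}}}id")
           for vs in root.findall(f"{{{OVF_NS}}}VirtualSystem")]
print(systems)  # → ['web-vm']
```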
In the “cloud” world, portability and interoperability in the longer term will almost certainly require open sourced standards i.e. operational open sourced code which acts as the standard. There are several reasons why open sourced standards are more likely to work than open standards (where the specification alone is open), these include :-
- Open sourced standards provide a means for potential consumers to test a service by implementing it operationally within their own environment.
- Open sourced standards also provide the fastest means of creating a de facto standard by reducing any barriers to adoption.
- Open standards rarely do anything to prevent additional information or features from being implemented outside the scope of the standard. In the service world, any feature differentiation which results in new information is a source of lock-in and is therefore not of benefit to the consumer.
- If the cloud technology is not open sourced, or an open source reference model does not exist, then this will create a dependency for the service provider on a technology vendor. Whilst closed source technology can potentially solve consumer issues regarding portability and interoperability, the dependency it creates can cause significant longer term issues for any provider.
The usual counter argument to the creation of open sourced standards is that they inhibit innovation (in this case, product innovation and feature differentiation). However, given that the activity in question is ubiquitous and of declining strategic value, feature differentiation provides little or no advantage to the consumer and can only be seen as an advantage to the provider in ensuring lock-in.
The distinguishing feature between competitive markets and monopoly in the "cloud" world is therefore the use of open source and any consumer should carefully consider second-sourcing issues when choosing a cloud provider.
For a summary of my earlier thoughts on this matter, see OSCON 2007 presentation on Commoditisation of IT (2014 : the original was on blip.tv, this is a copy which unfortunately seems out of sync but it'll do) :-