Sunday, February 24, 2008

Market forces .... part III ..... SaaS

At the Web 2.0 Summit in 2006, I was concerned that a lack of portability between SaaS (software as a service) providers would prove a stumbling block to adoption. During 2007, from E-Tech to IGNITE to OSCON to FOWA to Web 2.0 Expo, I emphasised this point and argued that if you want wide-scale adoption of SaaS, you need a competitive utility computing market. Such a market requires an ecosystem of providers with portability between them.

This stuff is old hat; however, I thought I'd do one last impression of a stuck record, just in case it proves useful to someone new to the field.

First, I'm going to define some terms including software as a service. Then I'm going to go through the benefits of such services and the main reasons given for not adopting them. Lastly I'll explain why open standards are not enough and why we need competitive utility markets. Let's start with some definitions:

Software as a Service is "a software application delivery model where a software vendor develops a web-native software application and hosts and operates the application for use by its customers over the Internet". Such applications include CRM (such as Salesforce), development and deployment frameworks (such as Ning) and even operating environments (such as Amazon EC2).

SaaS can be built upon SaaS. For example, an application (such as ERP, or Enterprise Resource Planning) can be built upon a development framework (such as Bungee Labs), which can in turn be built upon an operating environment (such as Amazon EC2 + S3). I personally find it useful to consider this as a stack of software, framework and hardware. However, given the explosion of XaaS (X as a Service) terms, I now agree that the term software as a service is sufficient, as the stack is applicable whatever the delivery model.

Utility computing is "the packaging of computing resources, such as computation and storage, as a metered service similar to a physical public utility. This system has the advantage of a low or no initial cost to acquire hardware; instead, computational resources are essentially rented". In the narrowest sense (ignoring the overall comparison to the utility industry), it is a billing and provisioning model.

Software as a Service can be provided on a utility computing basis. For example an operating environment can be delivered and operated online (software as a service) and charged on the basis of consumption of CPU, bandwidth and storage (utility). This is exactly what happens with Amazon EC2 & S3.
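As a toy illustration of this billing model, a provider's metering reduces to multiplying each consumed resource by a unit rate. The rates and figures below are invented for the example; they are not Amazon's actual prices.

```python
# Toy utility-computing bill: charge only for metered consumption.
# Rates are invented for illustration; real providers publish their own.
RATES = {
    "cpu_hours": 0.10,     # currency units per instance-hour
    "storage_gb": 0.15,    # per GB-month stored
    "bandwidth_gb": 0.18,  # per GB transferred
}

def monthly_bill(usage):
    """Sum each metered resource multiplied by its unit rate."""
    return sum(RATES[resource] * amount for resource, amount in usage.items())

# A small site: one instance running all month, modest storage and traffic.
usage = {"cpu_hours": 720, "storage_gb": 50, "bandwidth_gb": 100}
print(round(monthly_bill(usage), 2))  # 97.5
```

The point of the model is that the bill is zero when consumption is zero - there is no capex term in the equation at all.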

So now we have some basic terms, let's look at the reasons why people should adopt these services.

Benefits of Software as a Service
Software as a Service works by providing a standard service to many consumers in a multi-tenanted architecture. The principal idea is that management, operation, monitoring, security, network, capex and other costs are shared more efficiently among many consumers. This allows for an overall reduction in the cost of service for any particular quality of service. As the service already exists, it allows for faster deployment and reduces the amount of commonly repeated and tedious tasks (aka "yak shaving") normally involved. Finally, for this to be practical, the service must be well defined and commonly used; hence the services offered are likely to cover cost of doing business (CODB) and other non-strategic activities. The provision of services in these areas will also encourage further standardisation and hence result in further cost reductions.

The overall benefits are:-

  1. Reduction in cost and/or improvement in quality of service, due to economies of scale.
  2. Faster deployment compared to self build models.
  3. Reduction in Yak-Shaving.
  4. Shifts non-strategic activities to a third party provider.
  5. Encourages standardisation for CODB-like activities.

Benefits of Utility computing.
With utility computing, resources such as bandwidth, storage and computer operations are paid for on a per-use basis. There is no need for initial capex to acquire hardware, nor for the costs associated with setup. There is no need to over-provision capacity for demand spikes, as the provider balances supply and demand across many customers. As Chaki Ng stated: "it is more efficient to have multiple network services share a common infrastructure that can absorb failures and bursts in client demand than it is to have every service over provision resources to accommodate peak requirements".
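The efficiency argument above can be made concrete with simple arithmetic: in a self-build world each service must buy capacity for its own peak, whereas a shared provider only needs capacity for the peak of the combined load, which is usually smaller because bursts rarely coincide. A sketch, with invented demand figures:

```python
# Hourly demand (in server-equivalents) for three services whose
# bursts happen at different times of day. Figures are invented.
demand = {
    "web_shop":  [2, 2, 3, 9, 3, 2],   # lunchtime spike
    "reporting": [8, 2, 1, 1, 1, 2],   # overnight batch run
    "mail":      [2, 3, 3, 3, 3, 8],   # evening peak
}

# Self-build: each service provisions for its own peak.
self_build = sum(max(hours) for hours in demand.values())

# Shared provider: provision for the peak of the aggregate load.
aggregate = [sum(hours) for hours in zip(*demand.values())]
shared = max(aggregate)

print(self_build, shared)  # 25 13 - shared capacity is roughly half
```

The more uncorrelated the consumers' demand, the bigger the gap between the two numbers, which is exactly why aggregating many customers pays.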

There are also benefits for the consumer in terms of reducing the financial risk and complexity involved in any new business venture. Furthermore, there are none of the delays normally involved in acquiring and installing physical hardware, air conditioning, racking and power.

The overall benefits are therefore:-

  1. Reduction in business risk in terms of capital outlay and planning.
  2. Reduction in Yak-Shaving.
  3. More efficient energy and resource usage.
  4. Minimal delays in provisioning.

Overall, using Software as a Service on a utility computing basis should mean:-

  1. Reduction in cost and/or improvement in quality of service, due to economies of scale.
  2. Faster deployment.
  3. Reduction in business risk in terms of capital outlay and planning.
  4. Reduction in Yak-Shaving.
  5. Shifts non-strategic activities to a third party provider.
  6. Encourages standardisation for CODB-like activities.
  7. More efficient energy and resource usage.

Why on earth would anyone say no? Well these are the main reasons:-

Excuse No.1: We have concerns over availability.
"I can only access the service if the internet and the provider are available"
"I need access to the service when there is no internet connection"

I completely understand this concern when you are talking about desktop applications such as word processors, spreadsheets or maybe a presentation machine for a roadshow. However, most employees don't have the company's accounting system or CRM package on their desktop, and mail services are fairly pointless without the internet. A huge number of systems are remotely accessed from the desktop, and the real question here is whether your internal systems are more reliable than the internet and the provider. Well, the internet is almost certainly more reliable than your own corporate network, simply because there are more routable nodes. As for the provider, that brings us on to:-

Excuse No.2: We have concerns over the reliability of the vendor.
"We'd never use a vendor, if something goes wrong, my guys will fix it ... I trust them"

Whilst this argument has some validity in the short term, in the long term it is utter nonsense. If we ALL operated on this basis, then there would be no banks, no railways, no airlines, no supply chains, no power supply, no commodities and no change through commoditisation. Instead, every company would be trying to do everything on the grounds that "our guys do it best". It's simple-minded, protectionist drivel. Reliability in the software as a service world will increase over time, especially as competitive markets form and as third-party assurance services and computing resource brokerages emerge.

Excuse No.3: We can't use a standard system.
"Our systems are tailored to us and our way of doing business."
"Our systems are a source of competitive advantage."

The majority of activities and processes that organisations undertake are common within their industry or the market as a whole. Few activities are a genuine innovation or a source of competitive advantage. For your average company, electronic book-keeping is not a source of competitive advantage, nor are health and safety forms, holiday request services, payroll payment systems and so the list goes on. Even where such a system is a competitive advantage, the provisioning of resources for it is a commonly repeated problem. If builders built houses the way software engineers build systems, then every house would have its own power station, sewage works and brick factory. More often than not, upon investigation "can't" turns out to be "won't". This brings me on to one of my favourite excuses:-

Excuse No.4: It's not worth it.
"The amount of money we spend on IT is small compared to the value of the data it holds. It's just not worth us considering using a vendor, what if they lost our data or someone else got hold of it?"

Ask yourself: does your company use banks, or does it keep all its money in its own guarded safe? Even banks use banks. Any investigation shows that we've been using third-party providers in many industries, such as manufacturing, for a considerable length of time. Such outsourcing is not a new phenomenon. Which leaves "it's not worth it". I find this alarming, as it is akin to saying "cost is not important in our business" or "why spend less when you can spend more?". By not accepting the same or better quality for a commodity service from a provider at a lower cost, you're actively putting yourself at a cost disadvantage (no matter how small). As a shareholder, it annoys me when I hear management talk in such a manner. This leads me to the final reason.

Excuse No.5: We have concerns over lock-in.
"We'd be tied into a particular vendor"

This is a real and genuine concern: without portability between vendors there is no competitive pressure to keep prices and quality keen, and lock-in and vendor dependency become genuine issues.

Now for me, portability is the key to adoption of these services. Taking the example of banking, we have portability between banks, as we are able to move our account and hence our money from one bank to another. When we transfer our balance, our money means the same thing at one bank as it does at another - a pound still means a pound. However, our statements don't transfer; this additional "meta data" on activity stays with the original bank. This isn't really a problem, as we use secondary systems - accounting packages - to collate such information.

Imagine that the accounting package was from a Software as a Service provider and we decided to move to another provider. If we wanted to move, then we'd need our data to move with us, and to be interpreted by the new provider in the same way as the old. We don't want our data to change meaning when we move providers, in much the same way that I don't want my £100 to become $100 because I switched banks. However, since what we are moving here is data rather than currency, we'd want ALL our data, including any "meta data", to move.

In order to have such "true portability" and to avoid the necessity for further systems we would need :-

  1. A choice of providers of the same service.
  2. Portability of all data (including any meta data) from one provider to another.
  3. Interpretation of all data (including any meta data) to be identical in the new provider.
  4. The switching from one provider to another to be a useable process.
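To see what points 2 and 3 mean in practice, imagine an export format that both providers agree on: the primary data plus the "meta data" that usually gets left behind, surviving the move losslessly. A hypothetical sketch - the field names and format below are invented for illustration; pinning down exactly this sort of thing is what an open standard would have to do:

```python
import json

# A hypothetical portable export: the primary data (transactions) plus
# the "meta data" (categories, notes) that usually stays behind.
# Field names are invented for illustration.
export = {
    "format_version": "1.0",
    "transactions": [
        {"date": "2008-02-01", "amount_pence": 12500, "payee": "Acme Ltd"},
    ],
    "meta": {
        "categories": {"Acme Ltd": "office supplies"},
        "notes": {"2008-02-01": "quarterly stationery order"},
    },
}

# Requirement 2: everything, meta data included, survives serialisation...
wire = json.dumps(export)

# Requirement 3: ...and the new provider interprets it identically.
imported = json.loads(wire)
assert imported == export  # lossless round-trip
```

The serialisation is the easy part; the hard part is requirement 3, agreeing that "category" or "note" means the same thing to both providers - which is why an open sourced reference implementation helps.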

Now, I agree that open standards are necessary to solve these problems but I don't believe they are sufficient. Rishab Ghosh's 2005 paper describes the economic effect of an open standard as when:

"a natural monopoly arises (de facto) or a monopoly is defined and agreed upon (de jure) in a technology, but the monopoly in the technology is accompanied by full competition in the market for products and services based on the technology, with no a priori advantage based on the ownership of the rights for the rights holder. This occurs when access to the technology is available to all (potential) economic actors on equal terms providing no advantages for the rights holders. In particular, rights to the standard are made available to economic actors other than the rights holders under terms that allow all potential competitors using all potential business models to compete with the same degree of access to the technology as the rights holders themselves."

The final sentence of that definition is the crucial part, because open sourced implementations of open standard services provide a means for other providers to access and rapidly implement a compliant service. A provider can do so without losing any strategic control of their business, and on equal terms to any other. Furthermore, with an open sourced system, any meta data issues become more transparent, and any consumer can implement their own in-house version in order to ease adoption fears.

Open source is an essential part of the portability conundrum. It is only when you have such portability that we are likely to see the development of competitive utility computing markets and adoption fears truly overcome.

We're not concerned about using a bank, as long as we believe we can move our money from one to another and to ourselves. The movement of our accounts is also what keeps the banks competitive with one another and provides some element of market regulation. Without such portability, if you were locked into a bank and couldn't transfer your money in a useable form to either yourself or another bank, then you'd be more inclined to hide it under your own bed.

Software as a Service and utility computing when combined together have a bright and rosy future. By co-operating on open sourced implementations of services with portability in mind, SaaS providers could create a competitive ecosystem where they could enjoy a slice of a very large pie. Unfortunately, it seems likely that many companies will try to control and lock-in their customers and open standards alone will not create the portability desired. Eventually the whole marketplace could well require Government intervention.

I've long held views on the commoditisation of IT and how we are heading towards a future where a mix of small and large "computing power plants" feed computing resources into a grid (or a few grids) and multiple brokerages sell those resources, bundled as services, to customers. Before we get there, I expect there will be many more outages like the one that hit Amazon's S3 service, as well as a lot of arguments and dubious practices.

All I can suggest for now is "caveat emptor". For me, Strassmann and Carr put these issues firmly on the map. If you want to learn more about this field, I'd recommend reading Carr's blog and books, as well as following James Urquhart and Rich Miller.

I've also included links to videos of my talks at OSCON and Web 2.0 Expo in 2007, just in case you wanted to know more.