Sunday, March 30, 2008

Can't see the wood for the trees ....

Service Orientated Architecture (SOA) is an architectural style of providing processes as services. It does not limit the use of :-

  • verbs - what you do. For example get, put, add, delete, send, fire ...
  • nouns - what you do it to. For example employees, people, camels ...

Representational State Transfer (REST) is an architectural style which defines a uniform interface and hence limits the use of:-
  • verbs - you can only use post, get, put and delete.
REST is often described as more about nouns because it limits the number of verbs, whereas SOA in principle is equally about verbs and nouns. I say in principle because you can have many different types of SOA, such as Resource Orientated Architecture (ROA), which uses a REST approach (see figure 1).
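A hypothetical illustration of the verb point (all names invented): a SOA interface can use any verb it likes, while REST's uniform interface forces every operation to be expressed as one of four verbs applied to a noun (a resource).

```python
# SOA-style operations bake the verb into the method name; REST keeps a
# fixed verb set and moves the variety into the nouns. Illustrative only.

REST_VERBS = {"GET", "POST", "PUT", "DELETE"}

soa_operations = {
    "hireEmployee":    ("POST",   "/employees"),
    "getEmployee":     ("GET",    "/employees/{id}"),
    "promoteEmployee": ("PUT",    "/employees/{id}"),
    "fireEmployee":    ("DELETE", "/employees/{id}"),
}

# Every SOA-style verb maps onto one of REST's four, plus a noun.
for name, (verb, noun) in soa_operations.items():
    assert verb in REST_VERBS
    print(f"{name:>16} -> {verb} {noun}")
```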

Figure 1 - The SOA vs ROA argument in Full.
(click on image for larger size)

My favourite SOA is in fact Simon Orientated Architecture, which has one verb, DO, and one noun, STUFF. It is both more verb-like than traditional SOA having only one noun, and even more noun-like than REST having only one verb.

Unlike arguments based upon the relative use of grammar, I also find it has some sort of point.

Saturday, March 29, 2008

Say it with pictures ....

Essential IT knowledge in Venn diagram form:

Figure 1 - Cost vs Benefit of customisation.
(click on image for larger size)

Figure 2 - Strategic value and software products.
(click on image for larger size)

Figure 3 - Innovation and software products.
(click on image for larger size)

Figure 4 - What is web 2.0?
(click on image for larger size)

Figure 5 - The relationships between SaaS, utility computing and virtualisation.
(click on image for larger size)

Figure 6 - What is SaaS?
(click on image for larger size)

Figure 7 - The SaaS vs Open Source argument in Full.
(click on image for larger size)

Figure 8 - The SOA vs Mashups argument in Full.
(click on image for larger size)

Figure 9 - The SOA vs ROA argument in Full.
(click on image for larger size)

Figure 10 - The Davenport vs Carr vs McAfee argument in Full
(click on image for larger size)

Figure 11 - Everything you need to know about portability.
(click on image for larger size)

Thursday, March 27, 2008

Move over Bellatrix ....

In the software business, there are several unforgivable curses which deserve a one way ticket to Azkaban. These include "What backup?", "What version control?" and "What testing environment?". Fortunately, the days when you might have heard such words are long gone as we all have learnt the folly of the dim distant past.

However, we are always living with our future curses, i.e. the naive and foolhardy things we do today which we will laugh at tomorrow. So, I've decided to pick out a couple of candidates for future curses.

What model?
The purpose of business process modelling (BPM) is not just to provide a view of what we do as an organisation but also to enable an architecture to be built to support our current and future activities. Obviously we cannot model innovations with any certainty; they are constantly changing. However, commonly repeated activities can be modelled and subsequently used to create a supporting infrastructure. I was under the misguided impression that companies embarked on a Service Orientated Architecture (SOA) only after first having examined what they do through BPM. Apparently this is not the case: some companies start building without actually knowing what it is they do. There is always a balance to be struck between the two morals of "an imperfect plan executed today is better than a perfect plan executed tomorrow" and "proper planning prevents poor performance". I'm not sure that this novel approach achieves it.

What lifecycle?
Whilst BPM will give you an overview of what you actually do and help in the design of an architecture, it doesn't actually tell you how you should manage an activity or process. As I've mentioned before, I tend to "colour-in" my models to identify activities at different stages of their life-cycle. This provides me with information on how to deal with an activity, for example :-

  • whether an activity is ripe for outsourcing or SaaS (assuming a suitable external ecosystem of providers exists).
  • how I should manage a particular activity (for example with more agile or more defined processes, e.g. Scrum or PRINCE2).
  • how I should measure it.

I'm fully aware that this runs contrary to our desire for simple measures; however, even a simple measure such as ROI (return on investment) is only valid for particular stages of the life cycle. For innovations, you need to work on a worth based mechanism, whilst cost is your only ally with CODB (cost of doing business) activities.

Arthur C. Clarke said that "any sufficiently advanced technology is indistinguishable from magic". We shouldn't forget that we are all just underage magicians and tomorrow will look back at much that we thought was "magic" and just cringe.


I couldn't resist but add a few more unforgivable curses.

We only release once per month
Commodity-like activities should be released or updated as little as possible; once per year is probably once too often. Innovations need a completely different timescale; once a week is possibly too slow. One size fits all turns out to mean one size fits no one.

Anyone feeling cold?

I was recently asked, "where is IT heading?"

Apparently, there seems to be a view that commoditisation will lead to the end of IT. Though it will certainly lead to changes and a shake-out in some of the practices and personnel, the idea that the end is nigh for IT is greatly exaggerated. Outside the support and training function, I would argue that IT is more likely to fragment into three different types of role.

One role, which I'll call the Pioneer, will focus on helping the business create new mashups, widgets and services to exploit the interactions between a common framework of services and information in the outside world. The expertise of such a role will be in experimental modelling and agile processes & novel business development. In my diagram (see figure 1), the pioneer's domain is in the innovation (think novel, first time) and the custom built sections of the activity graph.

Figure 1 - Stages of an Activity life-cycle.
(click on image for larger size)

To gain an understanding of what this Enterprise Mash-up world will look like, I'd suggest following the writings of Dion Hinchcliffe.

A second role, which I'll call the Coloniser, is probably the most demanding. Their focus will be on bridging the gap between the new innovations of the pioneers and the common framework of services upon which the organisation depends. The coloniser's job is to find, investigate and make the call on whether a new activity (an internal mash-up or process or an external one) should make the journey to be included in the framework, to start to turn those custom made things into products. Their world is in the minutiae of tactical decisions from open source to standards plays. Their focus is to gain as much advantage as possible from an activity that is becoming more common, whilst avoiding the dreaded cost of migrating to an alternative standard. They need to constantly decide which horse to back and to understand the future potential consequences in a world of tactical play and misdirection. In my diagram, their domain is the product (transitional) section of the activity graph.

The final role, which I'll call the Town Planner, will be focused on managing componentisation, outsourcing and providing the framework for all common processes and services that a business uses. The expertise of such role will include service orientated architecture (SOA), enterprise architecture (EA), business process modelling (BPM), six sigma, volume operations, software as a service (SaaS), contract negotiations, risk & security management, compliance and all the activities we associate with a well ordered world. In my diagram, the town planner's domain is in the commodity section of the activity graph.

So, will commoditisation lead to the end of IT? No, IT is only starting to get interesting. However it will cause a big shake-up and the army of half-competents who hide behind the obscurity of today's "complex" systems will find themselves butt bare naked in a world where a cold wind of change blows.

If you make your living around telling business that ERP / CRM or any of such ilk are a source of competitive advantage, you're wearing the Emperor's new clothes and you'd better find some real ones.

--- 4th August 2014

It's over six years later and ... we're slowly starting to see the signs of change in organisations. I estimate it'll take another 10-15 years after the first major move which probably puts this around 2030. The most recent set of organisation changes mainly involved cell based structures. This more adaptive structure is still a long way off.

The original image link was broken, thanks for pointing that out. I used the same image from another blog post to replace. If you find other broken links, do tell me. I keep all my old presentations and can go back to source.

--- 30th June 2015

Slowly we're starting to see more public discussion on the spread of cell based structures and also structures that combine not only aptitude but attitude, e.g. the three party system of the hybrid dynamic model. We've probably got a good decade to go before this starts becoming more noticeable. Alas, we've also got a resurgence of the dual operating system model, which to be honest I thought was dead and buried long ago. A dangerous space to be, re-invoking those images of the Eloi and the Morlocks. Been there, done that, bought the flares, wish I hadn't, not doing it again.

Wednesday, March 26, 2008

For Hire!

Though my book is progressing, it is going to take far longer than I originally anticipated and budgeted for. Whilst the writing is fairly straightforward, there is an awful lot of basic research that needs to be completed.

I also seem to have started building a fledgling consultancy career along the way. This has not been a purposeful act, it's just something that has emerged from my public speaking.

Well, I now face a crossroads between building a consultancy or working for another company. Ideally, I'm looking for opportunities which will enable me to fund my research, writing and speaking.

Any assistance or useful advice on these matters would be welcome.

Monday, March 24, 2008

A car is a car is a car is a ....

For all practical purposes, a Resource Oriented Architecture (ROA) is a Service Oriented Architecture (SOA) that uses RESTful web services.

The arguments for a difference between ROA and SOA depend upon :-

  • a misreading of SOAs as more than the decomposition of processes to re-useable software services.
  • an assumption that only ROA uses RESTful web services (this is clearly untrue as many SOAs use RESTful web services).
  • an assumption that SOA only uses SOAP / WS-* (this is clearly untrue).
  • an assumption that there is a fundamental difference between an RPC (Remote Procedure Call) and a RESTful web service (RESTful services are instead a specific subclass of RPC).
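The last point in the list can be sketched as follows; the transport is omitted and every name is invented for illustration.

```python
# A RESTful call is still a remote procedure call: the constraint is
# that the procedure comes from a small fixed verb set and the
# arguments centre on a resource identifier.

def rpc(procedure, **arguments):
    """A generic remote procedure call: a named procedure plus arguments."""
    return (procedure, arguments)

# Arbitrary-verb RPC - any procedure name you fancy:
renamed = rpc("renameCamel", camel_id=7, name="Alice")

# RESTful call - the same mechanism, restricted to a uniform interface:
updated = rpc("PUT", resource="/camels/7", body={"name": "Alice"})

assert renamed[0] == "renameCamel" and updated[0] == "PUT"
```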

Whilst I can happily debate the merits of SOA+SOAP vs SOA+REST, I find the repackaging of SOA+REST as ROA both confusing and unhelpful as it does nothing to highlight the differences or similarities.

Renaming an architectural style because of implementation details leads to confusion. Imagine if different names were used for object oriented design (OOD) depending upon which software language was being used. This has all the hallmarks of vendor-led marketing rather than an attempt to move the debate forward in any real sense. I find this disappointing.

My background is in SOA+SOAP and SOA+REST environments and I don't expect REST to be the end of the story. I do not look forward to future debates of the ROA vs SOA vs XOA kind, especially when at the heart of each is the idea of "decomposition of processes as re-useable software services" + implementation detail.

For me, SOAP vs REST is a debate whereas SOA vs SOA is a pointless head banging exercise.

Whilst my local garage might try and convince me that "a BMW is not a car, it's a driving experience", my view is it's a make of car.

Wednesday, March 19, 2008

I'll huff and I'll puff and ... ohh I like how you've painted it.

Whenever I'm mapping out the activities (for example business processes) of an organisation, I try to use colour codes for the different lifecycle stages of an activity. I find this helps me when visualising what the organisation needs to do and how it needs to change.

It's just something I do. I've provided four images to show the colour codes I use.

Stage 1 - Innovation, First Instance
(click on image for larger size)

Stage 2 - First Movers, Bespoke examples
(click on image for larger size)

Stage 3 - Transition, Products
(click on image for larger size)

Stage 4 - Commodity.
(click on image for larger size)

It's worth remembering that any activity has an effect on the organisation. In the case of a created product the overall effect is on cashflow; however, you could equally have a process whose effect is on the efficiency of operations and cost reductions.

Generally, any innovation should be built upon many commodity-like or transitional activities. If it's not, you need to ask yourself why not.

Componentisation of systems into activities and use of commodity components where possible is a massive accelerator for innovation. It is the reason why I advocate using Service Orientated Architectures (SOA), Software as a Service (SaaS) and Enterprise Architecture (EA) frameworks like Zachman.

The speed at which a complex system evolves is much faster if it is broken into smaller stable components and hence organised into one or more layers of stable subsystems (for more on this read up on Howard Pattee).

If you:-
  1. take a system of k elements
  2. group every s number of k elements into a new component, l
  3. group every s number of l components into new component, m
  4. keep on repeating this grouping until you can't group any more
Then, with each component being considered stable, the rate of evolution of the system will be proportional to the log to base s of k.
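The grouping steps above can be sketched in code; this is a minimal simulation, with the parameters chosen purely for illustration.

```python
import math

# Repeatedly bundle every s elements into a new stable component
# until a single component remains, counting the grouping layers.
def grouping_layers(k, s):
    count, layers = k, 0
    while count > 1:
        count = math.ceil(count / s)
        layers += 1
    return layers

# The number of layers - the depth of the stable hierarchy - grows as
# the log to base s of k:
assert grouping_layers(100_000, 10) == 5
assert grouping_layers(100_000, 10) == round(math.log(100_000, 10))
```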

To show this in action, consider the three little piggies building a house. Let's say each house requires 100,000 bricks and whilst the big bad wolf can blow down an unfinished item, any stable component is too strong to be blown apart. Our three little piggies will follow different strategies:-
  • Piggy 1 : Build the house in one go with each brick being a single component.
  • Piggy 2: Build stable components, each component containing 10 sub-components. i.e. 10 bricks = a line. 10 lines = a section of wall etc.
  • Piggy 3: Build stable components, each component containing 100 sub-components.
OK, let's say on average you can put together 1,000 components before the big bad wolf returns. Then :-
  • Piggy 1 : will never be completed.
  • Piggy 2 : will be completed by the 12th visit of the wolf.
  • Piggy 3 : will be completed by the 2nd visit of the wolf.
In general: build in blocks, use small stable components.

[NB: For simplicity of explaining the analogy, I've taken the initial act of combining 100 or 10 or 1 brick(s) into one component as creating one component. If you instead treat each brick as a component, then the times are Pig 1: Never, Pig 2: 112 visits, Pig 3: 102 visits.]
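The piggy arithmetic can be checked with a short simulation, using the same simplification as the note above (each act of bundling sub-components into one stable component counts as a single assembly step).

```python
import math

# Count the assembly steps needed to build the house from stable
# components of a given group size.
def assemblies(bricks, group_size):
    total, parts = 0, bricks
    while parts > 1:
        parts = math.ceil(parts / group_size)
        total += parts
    return total

WOLF_RATE = 1_000  # components assembled between wolf visits

# Piggy 1 never finishes: the house is a single act of assembling
# 100,000 unstable pieces, and the wolf arrives after 1,000.
# Piggies 2 and 3 finish on the visits given in the text:
assert math.ceil(assemblies(100_000, 10) / WOLF_RATE) == 12
assert math.ceil(assemblies(100_000, 100) / WOLF_RATE) == 2
```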

It sounds obvious, but knowing the lifecycle stage of an activity along with componentising systems which can be componentised is a necessary step to increasing innovation. In other words: if someone has already built a hammer, use it and don't rebuild it.

Equally essential is to use different methodologies at different stages of an activity's lifecycle (it's not agile vs six sigma or networked vs hierarchical or innovation vs efficiency - it's a mix of each, all the time). In other words: get used to living with change.

Using such an approach you can balance the innovation paradox between the order needed to survive and the disorder needed to create a future.

In summary: build in blocks, use a hammer, expect the plan to change and don't forget to add a splash of colour.

Monday, March 17, 2008

All quiet ....

There is so much to talk about, but alas I have a looming deadline. So instead, I'll just post an interesting graph of three different activities (in this case products).

The axes are:-
  • Ubiquity : or how common an activity is, calculated from market penetration statistics and household / business surveys.
  • Certainty : or how well an activity is understood, calculated from a ranking system that includes a relative total number of technical references cited in the British Library.

Ubiquity vs Certainty for TVs, Telephones and VCRs.
(click on image for larger size)

Now, I haven't highlighted which data set belongs to which product because it doesn't matter. They are all following the same approximation despite occurring over vastly different time ranges.

Is this Everett Rogers' S-Curve? Not quite: I don't use time as an axis but certainty (so it does have a non-simple relationship with time).

Thursday, March 13, 2008

Far from a simple life ...

Yesterday, I spoke and was on the panel at the Butler Group strategy briefing on Enterprise Architecture.

I know that subjects such as Enterprise Architecture, Zachman Frameworks, TOGAF and business modelling are not exactly everyone's cup of tea. I often suspect that this is because of misconceptions about the art. I find the subjects fascinating, illuminating and an essential craft for any business.

The strategy briefing was excellent, there was a wonderful audience and a diverse range of good speakers. Mike Thompson and Mark Blowers were excellent hosts and Ian Charters, of IBM, provided a particularly outstanding talk.

I gave a 40 minute talk on innovation and commoditisation. It was a standard introduction into the subject, but I did get to sneak in a tiny little hint of cybernetics. (I've provided a link to a video of my talk, unfortunately I don't have the audio from the conference itself).

Managing a complex world - from innovation to commoditisation

Sunday, March 09, 2008

Enterprise 2.0 Summit at Cebit

The Enterprise 2.0 Summit at CeBIT was the first conference that I've moderated. Wow, was I nervous.

I thoroughly enjoyed it, the speakers were truly fantastic, the audience wonderful and the organisers Bjoern Negelmann and Kongress Media had done a fantastic job.

I was also given the opportunity to give the opening and closing talks. So I've made a video of each (I've just re-recorded them as I don't have audio from the conference itself.).

Opening Talk

Closing Talk

Thursday, March 06, 2008

Timeo Danaos et dona ferentes

Last year at OSCON, I talked about the need for competitive utility computing markets. Without these markets, many of the benefits of SaaS ("software as a service") would be limited and adoption would be hampered because of fears over the risks.

To create such a market, you need "true portability" which I would define as comprising:-

  1. A number of providers of the same service.
  2. Portability of all data (including any meta-data) from one provider to another.
  3. Interpretation of all data (including any meta data) to be identical in the new provider.
  4. The switching from one provider to another to be a useable process.

To achieve all of this without those service providers surrendering strategic control of their business to a software vendor, you would need all the core technology to be open sourced.

This utility SaaS world would benefit business consumers and those whose expertise is in large scale service provision. However, it is an unpalatable world for any former provider of commonly used proprietary technology.

Palatable or not, commoditisation is fairly inevitable. For any ubiquitous technology to survive this change, the software vendor would have to do what should be impossible. The vendor would have to:-

  1. Persuade a number of companies to use the vendor's proprietary technology to create a competitive utility computing market.
  2. Persuade a huge number of customers to give up on internal systems and to lock themselves into this market and hence the vendor's technology.

The only way this could happen is if the vendor:-

  1. is a financial powerhouse with a long history of stability.
  2. has existing relationships with potential providers and customers based upon proprietary technology.
  3. is capable of creating its own initial large scale SaaS services during the early stages.
  4. can license a version to its customers to host in their own data centres whilst a market establishes.
  5. has strong relationships with a large number of ISPs and ISVs to become potential providers for this "market".
  6. has strong relationships with a large number of customers in order to generate the interest to attract the providers.
  7. has a powerful marketing machine which can provide compelling sales stories such as:-
    • cost efficiency through the use of the technology.
    • opportunity to create revenue by selling capacity to the "market".
    • opportunity to purchase capacity from the "market".
    • opportunity to reduce capex and operating costs by use of the "market".
  8. has a proven range of tools such as CRM. [buying a company such as would probably help]
  9. offers seamless integration with existing development tools and the desktop environment.
  10. enables collaborative opportunities between companies in the "market". [How about social network based searching and advertising, or free market reports based upon company data]
  11. provides an enterprise environment with ISO standards.
  12. shows positive support for portability issues and a good neighbour-like image. [being pro web standards and pro data portability would help]

In its final and most abstracted form, this "market" would simply be consumers and producers exchanging utility computing resources through a "cloud" or a "mesh" based upon proprietary technology. Whilst seemingly based upon "open standards" and freedom, network effects can easily be used to create lock-in. If you were a complete pirate, you could even use it to undermine the current form of the Internet.

If such an approach was possible, then the proprietary vendor would smile as they escape from the ravages of commoditisation whilst service providers and business consumers would find themselves dependent upon the vendor.

It would be a nightmare scenario, an interoperability disaster which will harm consumers, governments, the community, the public and even businesses.

Be wary of Geeks bearing gifts.

Talking of gifts, I'm glad to note that Microsoft are adopting web standards, that's nice of them. They also appear to be much more open source friendly these days, helping out with popular development environments.

Mr. Edwards seems less than convinced by these "good" intentions and is rather concerned about something called "fixed/flow".

I've always thought that, in business, it's a good idea to fix things when possible ....

.... or at least try.

Monday, March 03, 2008

There and back again ...

I'm off to Enterprise 2.0 Summit at Cebit.

I'm looking forward to catching up with Jenny and Euan as well as many others (including Dion Hinchcliffe - excellent!)

The schedule and program looks fantastic. Bjoern and his team have put together some outstanding speakers and I get to introduce them all!

There will be ducks ....

Sunday, March 02, 2008

You need to fight for freedom ...

I've just been asked "When do you think that competitive utility computing markets will appear?"

The answer to that question is: when business consumers start fighting for it.

It's not in the interest of most providers to see their products become a commodity. So I'd expect the major software providers to pay lip service to portability. Whilst they may adopt open standards, they know full well that this is far from the portability needed for a truly competitive utility computing market.

Software providers will want to create different computing grids based upon branding & reputation. They won't want true portability between them.

As Gary Edwards said in reference to the latest OOXML vs ODF spat:

"The interoperability we expect from an open standard is instead limited to application specific document exchange. And soon enough we come to realize that nothings changed, we have artificially limited application choices, and it's pretty much back to 1995 where everyone in your circle has to be running the same application if you intend on exchanging documents. Meaning, the vendors win again."

Whilst I'm supportive of the aspirations of the data portability group, like Dennis Howlett, I'm not convinced by the group. In my case, the issue for me is their focus on open standards and the damage this might do in diluting the open source message.

If you want true portability, you're going to have to fight for it and you're going to need open source.

SaaS, Utility and Clouds

Picked up this post about how cloud and utility computing are different. Thanks to James Urquhart for the link.

From the comments, I saw that SaaS has got dragged into the conversation. So, for the sake of clarity, I thought I'd expand on my post, which included some of the definitions that I often use.

Software as a Service is "a software application delivery model where a software vendor develops a web-native software application and hosts and operates the application for use by its customers over the Internet". It's a delivery model, as much as bespoke and stand-alone products are.

Utility computing covers "the packaging of computing resources, such as computation and storage, as a metered service similar to a physical public utility. This system has the advantage of a low or no initial cost to acquire hardware; instead, computational resources are essentially rented". Essentially it is a billing and commodity provisioning model, though the field of study does also cover the concept of large scale utility providers (hence there is a natural overlap with volume operations and SaaS).

Cloud computing covers the concept of computing resources being provided by a mesh of external devices and grids as opposed to a single identifiable device. It's a fairly ephemeral term, just like clouds. When considering the infrastructure for provision of computing resources, I happen to prefer the concepts of distinct computing grids, whether those are provided by specific companies or through P2P infrastructure.

Virtualisation is simply a "technique for hiding the physical characteristics of computing resources from the way in which other systems, applications, or end users interact with those resources". Such techniques are often used to create computing grids.

So "software as a service" can be provided on a "utility computing" basis and the underlying infrastructure can be provided by a "computing grid" which can be built using "virtualisation". It doesn't have to be that way, but it can.

The "software as service" provided could be an application, a development framework (like Bungeelabs) or an operating environment (often called 'hardware as a service', like Amazon's EC2).

Furthermore, an application can be built upon a framework which can be built upon an operating environment. So you can have a "stack" of "software as a service".

It is worth noting that not all services in the stack need to be provided in the same way.
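The stack idea can be made concrete with a small, entirely hypothetical example (the entries are invented for illustration): each layer is a "software as a service", yet delivery and billing can differ layer by layer.

```python
# A hypothetical SaaS stack: application on framework on operating
# environment, each provided and billed in its own way.
stack = [
    {"layer": "application",           "billing": "utility"},
    {"layer": "development framework", "billing": "subscription"},
    {"layer": "operating environment", "billing": "capex (self-hosted)"},
]

# Not every service in the stack is provided the same way:
assert len({entry["billing"] for entry in stack}) > 1
```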

A Competitive Utility Computing Market is "an ecosystem of utility 'software as a service' providers where there is portability between one service provider and another". In the case of an application, such as CRM, there would be several SaaS providers of that application with portability between them. Such a market is analogous to the electricity market. It should be noted that portability, in this case, is not simply a matter of switching services and open standards, as consumers are not infrastructurally neutral and have a data relationship with their supplier.

It is also worth noting that each level of the stack, from application and framework to operating environment, could be provided by a computing grid or a competitive utility computing market.

Those are the terms that I find useful. These concepts were behind my talk at OSCON and the Zimki product (a JavaScript development framework provided on a utility basis as SaaS through a computing grid which included all the components of portability necessary for creating a competitive utility computing market).

James Urquhart is promising to write more on this subject, so if you haven't already read his blog .... keep an eye out for his posting.

One last thing,

  • "computing grids" (an architectural method of providing infrastructure)
  • "virtualisation" (a technique for abstracting the physical location of computing resources)
  • "software as a service" (a delivery mechanism)
  • "utility computing" (a billing and provisioning mechanism)

... are different things.

As for "clouds", they are wispy formations of water vapour and some other ephemeral gathering of computing resources which sort of maybe, somehow, isn't quite ... nah ... slipped through my fingers again.

Nick Carr wrote a book on the subject, ask him.

[Update: James has posted a workable definition of cloud computing. The cloud gets a little less 'wispy']

Listen to us ... no, don't listen to us ...

I was asked recently:

"Would the spread of digital fabrication lead to a personalisation revolution?"

"Do you think people will be bothered to print stuff themselves?"

Yes I do. If you don't know anything about fabrication technologies, here is a video of the last talk I gave on the subject. It's from 2006, so it's a bit old.

So, why will people be bothered? Well, for the last fifteen years, most companies have been using choice (along with branding, price strategy and patents) as a way of slowing down the commoditisation of their products. The bewildering array of offerings suited to you is not for your benefit. It prevents you from making comparisons between almost identical products.

A mobile phone is a mobile phone, except of course in the ad-world. Here, it is a personal communications device which says something about you and your lifestyle. It even has a price plan suited to your needs. Your mobile phone is your phone and it is unlike any other mobile phone.

After fifteen years of this, we have got very used to the idea of personalisation. As digital fabrication starts to spread, I suspect most people will want to personalise and print their own products, rather than buy someone else's.

I can hear the advertisers trying to convince us that we want to buy XYZ's product rather than one we've designed and built ourselves. Too late, we're already far too brainwashed.