Wednesday, March 19, 2008

I'll huff and I'll puff and ... ohh I like how you've painted it.

Whenever I'm mapping out the activities (for example business processes) of an organisation, I try to use colour codes for the different lifecycle stages of an activity. I find this helps me when visualising what the organisation needs to do and how it needs to change.

It's just something I do. I've provided four images to show the colour codes I use.

Stage 1 - Innovation, First Instance

Stage 2 - First Movers, Bespoke examples

Stage 3 - Transition, Products

Stage 4 - Commodity

It's worth remembering that any activity has an effect on the organisation. In the case of a created product, the effect is ultimately on cashflow; however, you could equally have a process whose effect is on operational efficiency and cost reduction.

Generally, any innovation should be built upon many commodity-like or transitional activities. If it isn't, you need to ask yourself: why not?

Componentisation of systems into activities, and the use of commodity components where possible, is a massive accelerator for innovation. It is the reason why I advocate using Service Orientated Architectures (SOA), Software as a Service (SaaS) and Enterprise Architecture (EA) frameworks like Zachman.

The speed at which a complex system evolves is much faster if it is broken into smaller stable components and hence organised into one or more layers of stable subsystems (for more on this read up on Howard Pattee).

If you:-
  1. take a system of k elements
  2. group the elements, s at a time, into new components, l
  3. group the l components, s at a time, into new components, m
  4. keep repeating this grouping until you can't group any more
Then, with each component being considered stable, the rate of evolution of the system will be proportional to the log to base s of k.
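The grouping above can be sketched in a few lines (a toy model of my own, not from any particular source) to show that the number of grouping rounds, and hence the depth of the layered system, goes roughly as the log to base s of k:

```python
import math

def grouping_rounds(k: int, s: int) -> int:
    """Rounds of grouping needed to reduce k elements to a single
    top-level component, grouping s at a time: roughly log base s of k."""
    rounds = 0
    while k > 1:
        k = math.ceil(k / s)
        rounds += 1
    return rounds

# Larger groups mean fewer, shallower layers of stable subsystems.
print(grouping_rounds(100_000, 10))   # 5  (log base 10 of 100,000)
print(grouping_rounds(100_000, 100))  # 3  (ceiling of log base 100 of 100,000)
```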

To show this in action, consider the three little piggies building a house. Let's say each house requires 100,000 bricks and whilst the big bad wolf can blow down an unfinished item, any stable component is too strong to be blown apart. Our three little piggies will follow different strategies:-
  • Piggy 1 : Build the house in one go with each brick being a single component.
  • Piggy 2: Build stable components, each component containing 10 sub-components. i.e. 10 bricks = a line. 10 lines = a section of wall etc.
  • Piggy 3: Build stable components, each component containing 100 sub-components.
OK, let's say on average you can put together 1,000 components before the big bad wolf returns. Then :-
  • Piggy 1 : will never be completed.
  • Piggy 2 : will be completed by the 12th visit of the wolf.
  • Piggy 3 : will be completed by the 2nd visit of the wolf.
In general: build in blocks, use small stable components.

[NB: For simplicity of explaining the analogy, I've taken the initial act of combining 100 or 10 or 1 brick(s) into one component as creating one component. If you instead treat each brick as a component, then the times are Pig 1: Never, Pig 2: 112 visits, Pig 3: 102 visits.]
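The arithmetic behind the wolf's visits can be checked with a short sketch (my own toy model of the analogy, covering both ways of counting operations):

```python
import math

def wolf_visits(bricks: int, group_size: int,
                ops_per_visit: int = 1_000,
                per_brick: bool = False) -> int:
    """Visit of the wolf on which the house is finished.

    per_brick=False uses the post's simplification: forming a
    component counts as one operation. per_brick=True instead counts
    placing each sub-component as an operation. Assumes every single
    component is small enough to finish between visits - which is why
    Piggy 1, whose only "component" needs 100,000 contiguous
    operations, never finishes at all.
    """
    ops, n = 0, bricks
    while n > 1:
        groups = math.ceil(n / group_size)
        ops += n if per_brick else groups
        n = groups
    return math.ceil(ops / ops_per_visit)

print(wolf_visits(100_000, 10))                   # Piggy 2 -> 12
print(wolf_visits(100_000, 100))                  # Piggy 3 -> 2
print(wolf_visits(100_000, 10, per_brick=True))   # Piggy 2 -> 112
print(wolf_visits(100_000, 100, per_brick=True))  # Piggy 3 -> 102
```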

It sounds obvious, but knowing the lifecycle stage of an activity, along with componentising those systems which can be componentised, is a necessary step to increasing innovation. In other words: if someone has already built a hammer, use it and don't rebuild it.

Equally essential is to use different methodologies at different stages of an activity's lifecycle (it's not agile vs six sigma, or networked vs hierarchical, or innovation vs efficiency - it's a mix of each, all the time). In other words: get used to living with change.

Using such an approach you can balance the innovation paradox between the order needed to survive and the disorder needed to create a future.

In summary: build in blocks, use a hammer, expect the plan to change and don't forget to add a splash of colour.


RHM said...

The componentization approach makes sense, but only with the addition of context with which you began the post.

If the approach is applied to, for example, an innovative, never-before-constructed edifice, the calculation of how many visits by the BBW it takes is subject to how many times the developer has had to discard and rebuild a component from scratch, or break it down into its self-standing elements and reassemble them in a different configuration.

When constructing something with a known, vetted plan, one approaches the optimization you've presented.

swardley said...

I agree.

I use the analogy of BBW to show why componentisation has an effect, and hence why building an innovative service using common components for common activities (such as infrastructure etc) is faster than building the entire service from scratch.

Mashups are a clear example of this, where services are combined into new innovative forms without the need to rebuild those services.

We're used to these concepts at a low level - for example, no-one thinks about reinventing the silicon chip for a new web site - however many quite happily reinvent infrastructural components again and again.

The key is to distinguish between the different lifecycle stages of an activity, in order to create common services and processes which new innovations can build upon.

Common or commodity like services by their very nature are well defined and can be planned.

Innovations are by necessity deviations from what has gone before, however this doesn't mean we have to re-invent everything.

Context is key. Good comment.

Anonymous said...

'this doesn't mean we have to reinvent everything'

I've noticed you hold up SOA as a key part of your sustainable architecture on a number of occasions. But then you go on to talk about mash-ups as a good example. Highly scaled mash-ups are generally developed using a Resource Oriented rather than Service Oriented architecture. I'm no programmer, but I'm hearing real 'thought leader' software architects saying that SOA isn't working and ROA is the future of development for web software. Perhaps this point is a bit particular - but terminology is important.

On a more general theme of commoditisation (and this may be a whole other day's work!) - you deal with lock-in, proprietary...ness! etc., yet you say commoditisation is inevitable, but that it must first become open-sourced and run by a powerhouse. I've heard this argument once or twice before, but have never really heard anyone argue when or why these powerhouses might go down this route. Any opinions?

luvin your work ducks and all.

swardley said...

Hi Anon,

The important issue with SOA is componentisation.

SOA is merely an architectural style for packaging processes as services, and it is loosely coupled to the mechanism of calling, whether SOAP, REST or some other form of RPC.

There is a huge overlap between the worlds of web 2.0 and SOA; Dion Hinchcliffe's ZDNet articles cover this topic and are worth a read.


The power of mashups is in their re-use of services. The important thing to remember is not the mechanism of RPC but the decomposition of software into components. It is worth noting that there is a wide range of different types of mashups (client or server side, code or data etc) and mashup platforms. Again, the key issue is componentisation.
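As a toy illustration (my own sketch - the service functions below are stand-ins for real remote APIs, which a real mashup would call over HTTP), a server-side mashup just composes existing services rather than rebuilding them:

```python
# Stand-ins for existing remote services; a real mashup would fetch
# these over HTTP and parse the JSON or XML responses.
def geocode_service(address: str) -> tuple[float, float]:
    # Hypothetical geocoding service: address -> (latitude, longitude).
    return {"10 Downing St": (51.5034, -0.1276)}[address]

def weather_service(lat: float, lon: float) -> str:
    # Hypothetical weather service: coordinates -> forecast.
    return "rain" if lat > 50 else "sun"

def weather_for_address(address: str) -> str:
    # The "innovation" is only the composition: two commodity
    # services are combined into a new one; nothing is rebuilt.
    lat, lon = geocode_service(address)
    return f"{address}: {weather_service(lat, lon)}"

print(weather_for_address("10 Downing St"))  # 10 Downing St: rain
```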

Though there is a slight architectural distinction between SOA+REST and ROA, it is splitting hairs to a painful degree. In general, I prefer to use the term SOA.

I'm afraid the only arguments I've ever come across between ROA / SOA have been based upon one group insisting that only ROA can be built with REST and everyone else disagreeing. I'd be really grateful if you could point me to these "thought leaders" because if I'm missing something more significant, it would be useful to know.

re "yet you say commoditisation is inevitable", to be accurate what I say is that there is a constant pressure towards commoditisation. There are a number of reasons why something won't become a commodity, for example physical constraints.

re "but it must first become open-sourced and run by a powerhouse", What I advocate is the use of open source to create competitive utility computing markets with portability between service providers. Open source is essential in order for providers not to hand over strategic control of their business to a 3rd party. There is a way for a software powerhouse to create the illusion of a free market using proprietary technology and open standards as the basis of a large computing cloud. This I do not advocate or encourage.

re "why these powerhouses might go down this route", I doubt they would willingly do so. The main powerhouse who might significantly gain from such an approach is IBM.

I hope this is useful.

Anonymous said...

Thanks for the steer to the Hinchcliffe article. That clears up a lot. Hinchcliffe himself explains the point I was trying to make much better:

One of the biggest arguments against traditional SOA is that there are literally thousands of software platforms and environments that presently exist in the world. And if they don't speak your unique flavor of SOA (SOAP and WS-*), interoperability with them won't (and doesn't) happen.

With WOA, anyone that can speak HTTP - the fundamental protocol of the Web - and anyone that can process XML, which is to say just about every tool and platform that exists today, can interoperate and work together simply, safely, and easily, and build applications on top of one another's services. Importantly, mashups are a key outcome of the trend towards WOA, and most mashups are based on REST or REST-like services.

I'm on shaky ground here as I'm informed by 2nd (or 3rd) hand information rather than experience, so perhaps I should just accept this and move on! But it seems to me that if there is a significant shift in thinking on the way SOA should work (in effect a redefinition of SOA), it should be made clearer - inexperienced people (like myself) will continue to advise WS-* based SOA when I need to be aware of an alternative. It sounds to me like people are redefining SOA so that they don't have to say they were wrong. Perhaps this is best for progress! ;-)

Thanks for the rest of the feedback too. Every day's a school day. I'd love to hear your take on how to address the "I doubt they would willingly do so" comment you made, although perhaps the world is not ready to hear that!

swardley said...

Hi Anon,

My experience has been from SOA+SOAP to SOA+REST environments and we've always called it SOA.

SOA is simply about the decomposition of processes to re-useable software services, as I said it's an architectural style.

My own preference is to separate the architectural style from the implementation detail. I would be loath to see Object Oriented Design (OOD) suddenly called something else, like component or class or entity orientated design, because of a new implementation of the same architectural principles in a new language. Confusion would be the order of the day.

I'm a fairly simple person, so I believe clarity is best served by highlighting the difference; hence I find SOA+SOAP vs SOA+REST a far more useful distinction than SOA vs ROA.

There has been a significant move towards SOA+REST and whilst SOA is an architectural principle it can be implemented in many ways.
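To make that separation concrete, here is a toy sketch of my own (all names hypothetical): the service contract is the architecture, while SOAP or REST is merely the binding behind it.

```python
from abc import ABC, abstractmethod

class QuoteService(ABC):
    """The architectural contract: a re-usable service. How it is
    called (SOAP, REST, ...) is an implementation detail."""
    @abstractmethod
    def get_price(self, symbol: str) -> float: ...

class SoapQuoteService(QuoteService):
    # Stand-in for a SOAP/WS-* binding; a real one would build an
    # XML envelope and POST it to an endpoint.
    def get_price(self, symbol: str) -> float:
        return {"ACME": 42.0}.get(symbol, 0.0)

class RestQuoteService(QuoteService):
    # Stand-in for a RESTful binding; a real one would GET a
    # resource URL and parse the response.
    def get_price(self, symbol: str) -> float:
        return {"ACME": 42.0}.get(symbol, 0.0)

def report(service: QuoteService, symbol: str) -> str:
    # Consumers depend only on the contract, not the transport, so
    # SOA+SOAP and SOA+REST are interchangeable here.
    return f"{symbol}: {service.get_price(symbol):.2f}"

print(report(SoapQuoteService(), "ACME"))  # ACME: 42.00
print(report(RestQuoteService(), "ACME"))  # ACME: 42.00
```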

The evolution of SOA is far from over.

I hope that helps.

P.S. On a personal note, I suspect that much of this ROA vs SOA debate is being driven by people who have a vested interest in repackaging a term for reasons of market / thought leadership etc. It unfortunately will do nothing but create further confusion within the industry.

swardley said...

Hi Anon,

Thanks for the excellent questions by the way, I've made a post on the ROA vs SOA subject as I feel this blurring of architecture and implementation will just cause more confusion.

OK, on the "how to address the 'I doubt they would willingly do so' comment you made" - this is a very interesting subject. It's a big complex field, so let's just focus on the provision of computing resources as a service (for example storage or computing environments such as EC2) or "computing clouds" as they are often called.

At the moment we seem to be heading towards a mix of two possible scenarios.

1. We end up with a number of different computing "clouds" which promise a form of portability that doesn't truly materialise.

2. A major software vendor releases a proprietary computing grid service which allows multiple vendors to sell computing resources into a "cloud" which is then used by consumers. This creates the illusion of a market, and the technology itself could use open standards, further enhancing this illusion. However, this is a particularly insidious form of lock-in.

Whilst these scenarios are good for those vendors, they are less than ideal for industry and business consumers.

How do you address this? Well there are four basic ways I can see, however I'd be very interested in other views.

1. One of the big players might for tactical reasons embrace the creation of competitive utility computing markets and release a number of open sourced products into this field. I say might because it is unlikely. I was hoping that IBM was going to enter this space with its cloud computing effort.

2. A start-up using open source to promote their technology and gain adoption from a number of ISPs. The purpose of using open source here is to enable a number of parties to come together to form a competitive ecosystem and to allow interested customers to first trial the systems internally.

3. Government intervention. Never rule this possibility out.

4. Industry and Business consumers form a consortium to force the creation of competitive utility computing markets. Possible, but unlikely at this moment.

At the moment, I do have some real concerns over where we are heading with SaaS, but I also understand its role and necessity.

Portability and second sourcing are critical issues.

I hope this is useful.

Anonymous said...

Easy to ask the questions when you know there's going to be an interesting answer. I just line 'em up, you knock 'em down!

Great to see you put up a post on ROA. The response you got from Aurélien is the kind of stuff I'd been hearing. Again, I'm trying to interpret something I don't fully understand, but the way I boil down the situation is that ROA people think you need to talk about something separate to SOA, as SOA includes so many approaches that they do not believe to be truly loosely coupled. The RESTful approach (if I'm not mistaken) they believe is (or almost is!). And so they believe it should be disassociated from the failings of the other.
I'll be very interested to see if that conversation continues on the ROA post.

Really interested in your 4 potential ways of addressing the lack of a competitive utility grid.
My gut reaction on each
1. Possibly won't ever happen, but certainly won't happen until there's some clear revenue model and data centres big enough to funnel and handle charging for transactions (if I'm reading how it would work right)
2. Is it just a feeling - or does the promise of open source not just always peter out? When successful it gets re-appropriated; when mildly promising, it gets too complicated to manage.
3. Too complex for them to manage. - though a Revolution sounds like fun. (a global body perhaps - too political)
4. Interesting - All it would take is one industry area to be successful and it might just catch on. On the other hand, it would be limited to 'general purpose' industry apps.

Doesn't sound promising, as you say - but getting ideas like those into the ether is the only way of finding out. It may have consequences for one's credibility though!

swardley said...

I do understand Aurélien's arguments; I'm just not persuaded that the difference is significant enough to warrant a new term.

As for "believe it should be disassociated from the failings of the other", I'm sure there are many who would agree and also many who would argue that SOA+SOAP/WS-* has not been a failure.

Personally, I don't think there is a need to rename an architecture because of perceived implementation failures and neither do I believe it is healthy to deny the failures or successes of certain implementations.

As I see no evidence to suggest that RESTful web services are the end of the story, I have a tendency to separate architectural principles from the software implementation and the protocols used for calling services.

I'm not intending to "knock em down", it's just I don't agree that a new term is beneficial or necessary. In an industry which loves to create new terms (I'm guilty of this too), they often create more confusion rather than clarity.

Regarding the four possible routes:-

The first would be a very specific tactical play.

The second does not have to peter out, any more than Apache has petered out. This is the most attractive route for a disruptive play by a start-up in this market.

The third, well most Governments already regulate utility like services. The global aspect certainly makes this more complex, however regulation of a market is prudent not revolutionary. Even Adam Smith argued for market regulation.

The last point - it may well come down to this.

As for "getting ideas like those into the ether is the only way of finding out. It may have consequences for one's credibility though", well, it would be a sad world where discussion and expression were frowned upon.

Most people worry too much about such stuff.

Good chatting, we will see how it develops.