Friday, May 30, 2008

If you want the truth, ask a philosopher ...

As with everyone else, my engram of what a journalist is has developed from the stories, the tales and the myths that I've been exposed to. Mike Arrington's latest piece on Blaine Cook has helped reinforce my negative perception.

Whilst Mike has previously made several comments about Twitter, this article appears to be a direct attack on the former Lead Architect, implying that he was "shown the door so that Twitter could get down to building something that could scale". The article then continues by praising how "Twitter's doing an excellent job" and almost threatens Blaine to shut up, as any "more comments like this, and Twitter may fire back".

Where's the balance, where's the other side of the story? An unkind soul could be forgiven for coming to the conclusion that Mike has either become the company evangelist or he is on a fishing trip.

As for Blaine, welcome to Britain. You'll find many friends here.

You are too old to be creative ....

In the last few weeks, on three separate occasions, I've come face to face with ageism and the concept that "only the young are creative".

The combination of starting a new career and closing in on my 40th birthday has made me more aware of how widespread and blatant ageism is. There is an often repeated meme that the older are more conservative, uncreative and less dynamic than the young. This meme is everywhere from advertising to marketing and even to job adverts. We are even told that the old just want to be seen as "Young at Heart".

Whilst it is true that the "average" 15 year old will have more creative ideas in their lifetime than the "average" 65 year old, this is simply because they have longer to live. Using the same logic you could argue that the young are more environmentally unfriendly than the old as they will travel, pollute and waste more. If you want to cut down environmental pollution from vehicles then raise the minimum driving age to 50.

Unsurprisingly the old have less time to live, on average, than the young. However, what matters here is how the rate of creativity changes with age and not how long you have got left to live.

Now in his book, "Age and Achievement", Lehman argued that the rate of creativity goes into rapid decline after the age of 45-50 years. According to Lehman the peak of creative productivity varies with the task, but in general your most creative years are in your 30s.

This is bad news for someone like myself. I feel doomed, I'm over the hill and I felt I was only just getting started - gasp!

Fortunately, in 2006, Harry R. Moody's book "Aging" pointed out that Lehman's treatment of longevity was rather creative and suffers from a fundamental flaw which creates the distortion.

If you're going to look at the changing rate of creation for people over time, then simply following a group of scientists and counting the papers they publish will always show a decline i.e. your sample group of 500 scientists might publish 500 papers in total at age 25 but fewer than 250 papers at age 65. You might argue that the reason for this is a decline in creativity, but that ignores the simple fact that people tend to die as they get older whilst others retire. So it's a good idea to adjust the results for the number of your sample group who are still living and active in the field.
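The arithmetic of this correction can be sketched in a few lines. This is a toy illustration with invented numbers, not data from Lehman or Moody: raw cohort output halves with age, yet the output per surviving, active researcher does not decline at all.

```python
# Hypothetical cohort of 500 researchers followed over their careers.
# "papers" is total papers published by the cohort at that age;
# "active" is how many of the 500 are still alive and working.
cohorts = {
    25: {"papers": 500, "active": 500},  # everyone still publishing
    45: {"papers": 450, "active": 375},
    65: {"papers": 240, "active": 200},  # many retired or deceased
}

for age in sorted(cohorts):
    c = cohorts[age]
    # The fair measure: papers per *surviving, active* researcher,
    # not the raw cohort total.
    per_capita = c["papers"] / c["active"]
    print(f"age {age}: {c['papers']} papers raw, "
          f"{per_capita:.2f} per active researcher")
```

The raw count falls from 500 papers to 240, which looks like a collapse in creativity; the per-capita rate actually rises from 1.0 to 1.2. That gap is exactly the distortion Moody identifies in Lehman's treatment.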

Further reading reveals more intelligently balanced investigations, including :-
  • W. Dennis' 1966 study on "creative productivity between the ages of 20 and 80 years" shows that creators in their 60s and 70s will often generate new ideas at a rate exceeding that of the same creators in their 20s.
  • Simonton's 1988 study of "Age and outstanding achievement" shows that the average rate of output of a creator in their 70s is roughly 50% of the maximum peak found in their 30s and 40s.
So whilst I might be halfway through my creative peak, it looks as though it will take another 30 years for me to collapse back into the uncreative pit of my 20s. Whoot!

These are, of course, just average statistics and say nothing about the individual. Creativity never stops and as Moody points out, the chemist Chevreul took up the study of Gerontology in his 90s and published his first paper at 102.

Looks like old dogs can learn new tricks after all.

N.B. Before someone says that the software industry is a hotbed of young creative talent, the link between youth and software creativity is highly contentious. Our industry would appear to have not only what Tim Berners-Lee called a 'stupid' male geek culture but also one that idolises youth.

N.B. As a final note, I would NOT be surprised if the mere existence of ageism affects performance. The following study purports to show that "being put in a low-power role may impair a person’s basic cognitive functioning and thus, their ability to get ahead". Hence a lack of social mobility in society may well be self-reinforcing, and I am curious as to whether ageism similarly reinforces itself by impairing an affected person's performance.

-- Added 20th August 2013

Tuesday, May 27, 2008

Data Portability, Radioactive AGPL and all that jazz.

I thought I'd cover some old ground for hopefully one last time - portability.

In the software as a service and cloud computing world, portability depends upon having access to ALL of the necessary data and an alternative provider. There are, however, three main issues to be considered with any alternative provider. First, such a provider has to exist; second, it has to interpret ALL the code and data you provide from the original service in the same way (interoperability); and last, the process of switching has to be usable. Without all of this, portability is practically worthless to the vast majority of people.

This ideal of portability will only be practically achieved when multiple providers comply with a standard that is an open sourced implementation of the application or service to be provided. If we want competitive markets, avoidance of monopoly, portability between providers, minimisation of adoption fears, second sourcing options and limitation of risks, then open source is the only answer.

Open standards (as in "open" APIs - a questionable term - and open formats) might be extremely helpful in providing access to data but they don't provide portability. As history teaches us, it's a delusion to believe they will. In the world of SaaS (Software as a Service), open standards could actually limit portability by enabling companies to describe themselves as being open without actually being so.

If you want portability in this future world of software services, then you need the service to be open sourced. This is what needs to be the standard.

However this isn't a problem, because we're talking about a world where competition is based upon service rather than product. Such a world is based around thriving ecosystems of providers with portability between them. If you wish to build such an ecosystem, then you will need to minimise the barriers to participation, ensure portability between providers and, at the same time, maximise competition and the freedom for providers to make service improvements. Fortunately we have the perfect software license designed to do just that: it's called GPLv3.

GPLv3 allows for providers to take an open sourced environment, provide it as a service and modify it without releasing the improvements back to the community, hence allowing maximum competition or freedom for improvement. However the modified version cannot be released as a product which should minimise branching. By combining GPLv3 with trademarks and a compliance / assurance authority, it is also possible to ensure that this maximised freedom does not interfere with portability. You can achieve similar effects with more permissive licenses (Apache) by use of a compliance / assurance authority but the license, whilst good enough, is not ideal.

GPLv3 is the ideal license when mixed with a trademarked compliance / assurance authority for such services because it contains the "SaaS loophole" and hence allows for :-
  • Competition through operational efficiency on service (hence encouraging competition in the market)
  • Limited differentiation on function through the compliance authority (hence maintaining portability)
  • Limited alternative products (hence minimising forking to different standards)
If you want to create a competitive utility computing market of providers, you can't get much better.

An alternative to this approach is to use AGPL. It is often cited as fixing the "SaaS Loophole" as though that were something beneficial, but in reality it has little or no relevance in this world. The license misguidedly enforces the release of all improvements back to the community, which is likely to stop potential providers from ever adopting and providing a service under this license. AGPL severely limits competition in return for little or no benefit. In my view it's a hangover from a product centric world.

It seems I'm not alone in this viewpoint. Ted and I both agree that the "SaaS Loophole" is a good thing. Ted asks a question of whether you could use trademarks and reputation in place of IP protectionism. In the software as a service world, the short answer is no. Trademarks and reputation could be used to ensure portability between providers but not IP protection of the code itself.

Is there a need for another license other than AGPL or GPLv3? Personally, I believe the GPLv3 is the only license we need in this world. What we need more of today is a service mindset rather than a product one. I suspect that leap will be too far for many companies.

-- additional note [27th Feb 2013]

It's five years on and I've changed my view ever so slightly on the above. The use of permissive licenses such as Apache can make it less problematic for some companies to adopt and hence it can be seen as more reasonable or pragmatic. It does run a greater risk of forking of the codebase and community with alternative proprietary versions.

Forking of the codebase is no bad thing when the changes are contributed back, as it allows for experimentation. Forking of the community is what you want to avoid. In many cases a project which contains many contributing forks appears less likely to fragment as a community, and so this is beneficial. Hence the balance between less and more permissive licenses rests on whether the license helps maintain or destabilise the community.

In practice I've seen permissive licenses run the risk of a collective prisoner's dilemma, with different companies attempting to differentiate to the disadvantage of the whole. At the same time I've seen companies more willing to get involved with permissive licenses. Hence I tend to side with GPLv3 as the ideal license but accept the compromise of a more permissive license with a strong compliance / assurance group.

However, whilst I accept it, the reasoning above still stands for me. The dual nature of GPLv3 being permissive with services but restrictive on products was why I originally commented to Eben on the benefits of keeping the SaaS loophole. That hasn't changed.

Saturday, May 24, 2008

The Red Queen Hypothesis ... Part II

Organisations contain a mass of different activities and a network of people performing those activities.

If you take away both the activities and the people, you are left with what an organisation really is, which is nothing (bar reserves of capital). Organisations only exist in the interaction between people and activities. However, people come and go and, as previously mentioned, activities are in a constant state of flux. Hence all organisations are continuously exposed to change.

No organisation can ignore such changes for long as they are not islands but instead live in a competitive environment. If an activity becomes more of a commodity and the organisation fails to respond, the result is a competitive disadvantage. Organisations must therefore continuously respond and adapt to these changes, in people and activities, in order to retain their competitive position against others.

This is the business equivalent of the Red Queen Hypothesis from Genetics. It should be remembered that there are two very different and powerful forces of change in any competitive environment:-
  1. Adaptation: the need to constantly respond to changes in people and existing activities.

  2. Creative destruction: the constant destruction of the old ways of doing things by the creation of the new.
The general rule of thumb is:-

"You need to adapt in order to survive today but you also need to innovate in order to survive tomorrow."

Friday, May 23, 2008

The Red Queen Hypothesis ... Part I ... Activities

The Red Queen Hypothesis is used in Genetics to describe why systems need to constantly adapt in order to remain competitive. Formally, it is stated thus:-

"For an evolutionary system, continuing development is needed just in order to maintain its fitness relative to the systems it is co-evolving with."
(Leigh Van Valen, 1973, via Wikipedia)

I want to describe this effect in terms of business, however to do so we need to first look at how business activities change. Let us start by examining the use of CRM.

The concept of CRM (customer relationship management) systems was an innovation back in the 1980s. However as everyone sought to exploit this new concept, CRM became far more ubiquitous and well defined. The activity has undergone a metamorphosis from innovation to leading edge to product to even utility services. This is not an unusual event, as there is always a constant pressure towards commoditisation of any activity as everyone tries to take advantage of any innovation (see figure 1).

Figure 1 - The metamorphosis of CRM.



In figure 2, I've mapped this transition on a graph of ubiquity (how common something is) vs certainty (how well known or defined something is).

Figure 2 - A graphical representation of the transition of CRM.


The transition of an activity from an innovation to something ubiquitous and well defined (or more commodity-like) is fairly standard. Most activities (whether processes, sub process or the results thereof) are in a continuous state of transition.

Organisations consist of a mass of activities, and those activities all exist somewhere on that graph. The activities are all connected and you can even map this out. However, for the time being, I've provided a representation of an organisation in graph form in figure 3.

Figure 3 - Activities in an organisation

Whilst these activities are at different stages of their lifecycle, they are all undergoing a metamorphosis from innovation to commodity. This transition is independent of the organisation itself, as an activity becomes common when others adopt it.

All organisational activities are therefore in a constant state of flux.
Now, I'll use this concept in the next section to explain the Red Queen Hypothesis and its application to business.

Wednesday, May 21, 2008

Management speak ...

I'm often faced with some fairly strange ideas about business, management and economics. I collectively call these Brentisms, after the David Brent School of Management Theory.

I've listed a few of my favourites, with some glib Brent-like counter statements.

  1. Brentism: "We only focus on core activities."
     Counter: If you only focus on core activities then the one thing that isn't core is a future.
  2. Brentism: "If it can't be measured, then it can't be managed."
     Counter: Just because we can't measure the future doesn't mean we should give up.
  3. Brentism: "We manage by ROI."
     Counter: Whilst a hammer might be good for banging in nails, it's not suitable for every job.
  4. Brentism: "Prince 2 is the right methodology."
     Counter: You might be the best hammer expert in the world but we need a hole drilled.
  5. Brentism: "We should outsource IT."
     Counter: Certain things are suitable for outsourcing; the company's future isn't one of them.
  6. Brentism: "Our people are an important asset."
     Counter: Try building a future without people.
  7. Brentism: "The customer is always right."
     Counter: If you do what your customers want, all of the time, then you will end up with no customers.
  8. Brentism: "We are an innovative company and we reward success."
     Counter: You shouldn't reward people on how well they can predict the future but instead how well they try to make it happen.


Notes:

1. As per Schumpeterian economics, creative destruction is the continual process by which the old ways of doing things are destroyed and replaced by innovative activities. These innovations may not be sustaining but may instead be disruptive (as per the work of Christensen). Historically, firms that are unable to transition to the new (non-core) value networks created by such disruptive innovations have a high failure rate.

2. Unlike the incremental improvements of an existing product, the implementation of an entirely new concept is a highly uncertain activity. With these uncertain innovations, there are no market studies, no established value networks and no way to accurately predict what is going to happen. There is nothing to measure against. Any organisation embarking on such a venture must be ready to adapt to any emergent opportunities.

3. Utility services need to be managed on price and quality of service whereas highly uncertain activities, such as innovations, often need to be managed on worth. For those activities in-between such extremes you need to use ROI. The key is to use the right methods for the right stages of an activity's life-cycle.

4. The actual methodology here is not important, it could be Prince, Six Sigma, XP or any number of others. What needs to be considered are the organisational activities those methodologies are applied to. All activities start as highly uncertain innovations becoming more ubiquitous and defined with time. For example the act of installing a telephone system is a far more well known and defined activity today (having been repeated millions of times) than when the telephone first appeared. A defined and certain activity is more effectively managed with static methodology designed to reduce variation. An undefined and uncertain activity is more effectively managed with a dynamic methodology designed to adapt to change. Since any organisation contains a mix of innovative and common or commodity-like activities, it is important to apply the right methodology to the right sort of activity.

5. An organisational function such as IT is simply a grouping of similar organisational activities. IT deals with IT, marketing deals with marketing and so on. Within such a function there is a range of commodity-like and innovative activities. The key advantage of outsourcing lies in the benefits that can be obtained through economies of scale. This can only occur for commodity-like activities which are well defined and ubiquitous in use. The outsourcing of innovation will in effect hand over control of future potential sources of profit to a third party provider, and it is unlikely to be cost effective unless :-

  1. The organisation, for whatever reasons, has a fluctuating demand for innovation.
  2. The innovation provider or market can undercut the cost of research. For example it could be parasitical on some other establishment, such as Universities.
  3. The innovation relates to operational improvements to something supplied as a service.

Whilst external collaboration and innovation markets are useful tools, there are many practical, economic and strategic reasons for keeping control of innovation within the organisation. When it comes to outsourcing, you should therefore be looking to outsource those common and commodity-like activities only. For these reasons you shouldn't outsource the function of IT but instead you should outsource those activities of IT that are common and commodity-like.

6. Many organisations in the communications industry (for example, newspaper, music and broadcasting) have undergone significant changes with the onset of the internet and the digitisation of content. Most of these changes relate to the commoditisation of the means of mass communication. For example, at one point in time, these industries depended upon huge physical installations such as printing presses. These installations were expensive and the industries were described as being physical capital intensive. If you wanted to be a journalist, musician or any of the other roles in these industries, you needed to go and work for one of these big players. The internet and the digitisation of content have effectively removed the need for physical capital items like printing presses. Anyone today can set up as an online musician, journalist or broadcaster. The big players have lost a powerful mechanism for controlling their staff and talent, as they no longer control access to the means of mass communication. Without such a method of control, these organisations must look to other means of attracting and managing staff and talent, whether through financial, human (working with experts) or social capital (a beneficial network, reputation). In such circumstances, people aren't an important asset, they are your only asset. This effect is likely to become more pronounced with the looming commoditisation of the manufacturing process through 3D printing.

7. Many large companies fail not because they are badly managed and don't listen to their customers, but precisely because they do. Rather than repeat the work of Christensen, I would recommend you read it.

8. At best this statement is pointless as everyone wants to succeed, at worst it is counterproductive. Excluding the plethora of trivial product improvements or features, innovation is about implementing an idea for the first time. It is a highly uncertain activity and more often than not it fails. Innovation is independent of success or failure and it can result in either. By only rewarding success, you will discourage failure and this will discourage experimentation and innovation.

Tuesday, May 20, 2008

A useful lesson

Most of my work and research deals with complex (as in non-linear) approaches to management and how this can be effectively used. However, it is dangerous to assume that someone would willingly exchange a simple but ineffective tool for a more effective yet complex one.

More often than not, people prefer a simple and easy life. This was quite neatly (and candidly) explained to me a week ago.

"I understand we could manage things better but it is not important as everyone else is in the same boat. We only need to get better at this when everyone else does. Organisations might be nothing more than people and activities but they are managed by people. As a manager, I ask myself, how does this help me and how does this make my life easier. My staff are no different and that's the issue you need to be looking at. The company is not the important factor here and you need to forget about alignment, effectiveness and all that. If you want to sell these ideas then you need to look at how this can benefit the individual and help them achieve their personal goals and targets."

If you want to sell a concept or an idea to a company, it is important to remember that companies don't buy stuff, people do.

Make sure you focus on their needs first.

Saturday, May 17, 2008

XTech talk ...

By extracting the audio from Ian Forrester's recording and mixing it with the original slides, I've put together a video of my talk from XTech.

Overall the talk was so-so.

It needs a bit more sparkle and some of the concepts didn't come across as clearly as I had hoped. There are possibly too many interwoven ideas and a few of the graphics are looking a bit tired as well.

The main topics are:-

  • An introduction to commoditisation, creative destruction and competitive advantage.
  • An introduction to innovation.
  • An introduction to the underlying processes.
  • Why nothing in management is simple.
  • How this impacts IT.
  • Why open is essential for a service world.

Why open matters
from innovation to commoditisation

(approx. 43 mins)

Friday, May 16, 2008

Inspiration ....

Thanks to Euan Semple for spotting this post by Tom Peters on the best link ever.

This is truly wonderful, the people are incredible and so full of passion.

Thursday, May 15, 2008

Reputation, SaaS and Marketplaces ...

Open sourced standards provide a mechanism for portability and are hence a necessary part of any utility computing marketplace. A first step towards this portable world is Google's Open SDK, however to explore this theme more I'd like to take a look at Bungee Labs' service.

Bungee Labs provides a framework for its customers to develop and release applications into a computing "cloud" that is managed by Bungee Labs. It is provided as a service rather than as a product. The customer benefits by not having to worry about infrastructure and also from only paying for what they use. The customer's only concerns revolve around their data, the applications they've written and how locked-in to this environment they are. The latter is a fairly major issue to many potential consumers, who may be reluctant to use such services without choice in providers and second sourcing options.

Let's now hypothesize that Bungee Labs open sources their environment under GPLv3. Now rather than giving away their competitive advantage, this would instead be a bold yet risky move (all innovations are) to capture a much larger market.

As soon as they open source, either the move will be ignored or other providers will decide to offer the same service based upon this new would-be open sourced standard. We would then have the makings of a marketplace with potential portability between providers, a much more attractive option to any potential consumer. Open source is also the fastest way to create an emerging standard, by enabling adoption by others.

An important and not-to-be-forgotten side effect is that this also allows businesses to become familiar with the service running on their own machines, until such time as they are comfortable shifting into the cloud or, in this case, into the market.

By using an open source approach, a company would hope to gain a small piece of a larger pie based upon the open sourced standard which it has expertise in. If the approach succeeds then the company would in effect trade ownership of a closed locked-in environment for influence in a much bigger marketplace.

Q. How can you influence a marketplace based upon open source?
Q. How do you stop someone branching off in a marketplace based upon an open sourced standard?

With GPLv3 code you can't stop them doing this. However, licensing is the wrong way to attempt to control such a market; it's a hangover from too much product focus. What you need instead are the two powerful mechanisms of reputation and trademarks. By establishing a trademark and offering trademarked images to all service providers who comply with the GPLv3 version of the open sourced standard through a testing or compatibility service (e.g. remote testing of compliance with a basic set of primitives which define the service), a company could in one swift move:-
  1. Ensure compliance information is given to the end user (trademark)
  2. Create a method of enforcing compliance (testing suite and enforcement of trademark)
  3. Create an emerging utility computing market based upon portability and open source.
  4. Establish themselves as a compliance authority (along with all the lucrative service and other revenue streams available)
  5. Encourage operational competition of providers in the market as opposed to functional differentiation (SaaS loophole)
The trademark needs to be recognised and trusted, hence reputation is key to this. It is critical that trademarks are enforced.
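The compliance mechanism above can be sketched in code. This is a minimal, hypothetical illustration, not Zimki's or anyone's actual test suite, and every name and primitive in it is invented: a provider earns the trademarked image only if every primitive defined by the open sourced standard behaves exactly as the reference implementation does.

```python
# The "standard" is the behaviour of the reference implementation's
# primitives (hypothetical ones here, standing in for a real service).
reference = {
    "store": lambda key, value: {"key": key, "stored": True},
    "fetch": lambda key: {"key": key, "value": None},
}

def is_compliant(provider, test_cases):
    """Run every test case against both implementations; any
    divergence in behaviour means no trademark."""
    for primitive, args in test_cases:
        if primitive not in provider:
            return False  # missing feature breaks portability
        if provider[primitive](*args) != reference[primitive](*args):
            return False  # different interpretation breaks portability
    return True

test_cases = [("store", ("a", 1)), ("fetch", ("a",))]

faithful = dict(reference)  # implements every primitive identically
divergent = dict(reference, fetch=lambda key: {"key": key})  # drops a field

print(is_compliant(faithful, test_cases))   # True
print(is_compliant(divergent, test_cases))  # False
```

Note that the check says nothing about how efficiently a provider runs the service; it only pins down functional behaviour. That is the whole point of the mechanism: operational competition stays open while functional differentiation, which would break portability, is squeezed out.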

This idea was essential to Zimki and the formation of competitive markets of providers in this space: an open source system which allows for maximum competition (through operational improvements which don't have to be returned to the core, because of the SaaS loophole) but ensures interoperability by the provision of trademarked images ("Zimki Compatible") for providers complying with a testing service.

By open sourcing a platform (such as BungeeLabs'), combining this with a channel program to encourage other competitors to setup and establishing a trademark system for reputable providers, you could start to not only create but influence a marketplace in the Software as a Service world. This marketplace would offer portability and choice for the consumer, a compelling argument against the predominantly locked-in world of SaaS today.

Once we start getting into the nitty gritty of portability in the SaaS world and providers start to overcome their product mentality of the past, then I expect that the establishment of such trademarks and compliance authorities will become a major battleground in the next few years. Being that authority in a world which is heading towards computer exchanges and brokerages is of significant value.

Digital cameras? It'll never happen ....

It might seem as though the computing industry is drowning in a sea of aaS, so I thought I would simply explain what the big deal is.

The fuss is simply about the progression of the computer industry from a product based economy to a service one.

Whilst a product based economy is built upon buying and maintaining products, a service based economy depends upon you renting services that are managed by someone else, such as with Amazon EC2. Whilst there are risks associated with being locked-in to one particular provider, the benefits include :

  • more efficient resource and energy usage.
  • more rapid release of innovations on the web through componentisation.
  • lower cost & capital expenditure for business.

The risks can be overcome with portability between providers, which in turn will lead to competitive utility computing markets. Naturally, there is more variability in applications than in frameworks and more in frameworks than in hardware. So whilst we should see a couple of Hardware "as a Service" marketplaces, there will be hundreds of different application markets.

None of these ideas are new, they've been common fare for many years. The transition to a more service based economy, the use of open sourced standards, the creation of such computing marketplaces and the control of such markets through reputation and trademarks are commonly discussed subjects. None of the mechanics of any of this is the big deal.

In this new world competition is not going to be based upon product but on service and all the current mantra about product being a source of competitive advantage will be flushed away. Many of the big players will be too wedded to their existing value networks to react to this disruptive innovation.

The initial stages of this new service world have been about proprietary systems, which have proved their value to startups, SMEs, skunk works and some larger companies. The next stage requires portability between providers. During this time you would expect the product focused companies to entrench in niche, high margin areas like finance and major corporations, as well as attempting to exploit new geographical markets.

The question has always been: which of the big companies, from IBM to Microsoft to Oracle to SAP, is so tied to a product focus that they won't see the change until it is too late? The change isn't about simply providing "our product" as a service, it's about an entire shift to a service based economy. It's not about "product" at all.

So who is going to end up the next Polaroid? That's the big deal.

Wednesday, May 14, 2008

Portability, Accessibility, Openness and the Market.

It would appear that our future is one of both utility computing providers and marketplaces with consumers switching between one provider and another. This is what a shift to a service from a product based economy means.

A cornerstone of such a marketplace is portability. This means access to ALL of your data, with multiple providers offering services which interpret ALL of your data in exactly the same way. There can be no additional or missing data or features, and compliance with a standard becomes paramount.

However, providers are not going to hand over strategic control of their business to a third party, so whatever standard there is will have to be open.

Well, there are two types of open:-

Open standard: usually a formal document that establishes uniform engineering or technical criteria, methods, processes and practices which is publicly available and has various rights to use associated with it. [source:Wikipedia]

Open Sourced Standard: a completely free/open source reference implementation of a service. The open source implementation is considered to be the standard.

For example, OpenID is an open standard, whereas Google's App Engine SDK is close to being an open sourced standard (being a free/open source reference implementation of Google's AppEngine service). I say close, because the SDK doesn't actually operate in the same way as GAE.

Whilst open standards will provide accessibility to data, this is not the same as portability. You do not necessarily have ALL the data and there is no guarantee that it will be interpreted in the same manner (interoperability) by another service. With an open sourced standard, there should be no such issues. If there are, they are transparent and can be quickly solved.

This doesn't mean that every provider just implements the open sourced standard and nothing more. There is plenty of room for competition at the level of service provision, but then that's the point: the world of services is not the same as the world of products.

Competition in the world of products is based upon feature set. In the world of services, competition is based upon price and quality of service with the product itself being standard.

When I switch electricity provider, it's not because they have a better type of electricity but because they offer a better service. Whilst switching electricity providers depends upon portability, in that case standards are enough because those standards define our entire interface and relationship with the provider. In the case of utility computing, it is the software product that we use as a service which defines our entire interface and relationship with the provider.

If you want portability, then that product has to be open source or completely compliant with an open sourced standard.

The alternative is monopoly and then finally regulation.

XTech Review

Last week, it was my pleasure to give the first keynote at XTech 2008 in Dublin. Giving the opening talk at any conference is always a delight because you can then relax and enjoy the rest of the sessions.

The conference was exceptional and I thought I'd take this opportunity to thank Edd Dumbill and Expectnation for organising this excellent event. I thoroughly enjoyed it.

There were so many fabulous sessions that it seems unfair to highlight any in particular. However, I'll mention just a few:-

AMEE - The world's energy meter by Gavin Starks.
An inspirational talk on AMEE (the Avoiding Mass Extinctions Engine). Gavin and his team are focused on providing the world with a canonical source of information regarding energy consumption and carbon data. The work is essential, exciting and fascinating, and the talk was delivered with real passion.

Data portability for whom? Some psychology behind the tech by Gavin Bell.
At any good conference there is always one talk which stands out a mile from the rest; this was it. Gavin gave a breathtakingly good talk on the psychology of why non-geeks need data portability, asking the question:

"How can we ensure that we include their needs and expectations along side the buzzword tick list?"

Within our community it is all too easy to become wrapped up in the current technology and assume that everyone else is just in the process of catching up with the alpha geeks. This is an arrogant and dangerous assumption and Gavin did a great job of dissecting the issues. Awesome.

JavaScript: The Good Parts by Douglas Crockford.
One of the best technical talks that I have seen in a long time. Who'd have thought that (value == null) only works because of two errors that cancel each other out?
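The behaviour Crockford was referring to can be sketched in a few lines (my own illustration, not taken from the talk): JavaScript's loose equality operator treats null and undefined as equal to each other, and to nothing else, so (value == null) happens to be a tidy "null or undefined" test even though it rests on coercion rules that are otherwise best avoided.

```javascript
// (value == null) is true only for null and undefined.
function isMissing(value) {
  return value == null;
}

console.log(isMissing(null));      // true
console.log(isMissing(undefined)); // true
console.log(isMissing(0));         // false - 0 is NOT coerced to null
console.log(isMissing(""));        // false
console.log(isMissing(false));     // false

// Strict equality (===) does no coercion, so the same test
// needs two explicit checks:
function isMissingStrict(value) {
  return value === null || value === undefined;
}

console.log(isMissingStrict(undefined)); // true
```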

Here Be Dragons: Knowing Where the World Ends by Leigh Dodds.
A wonderful exploration of the open data landscape and the issues related to exploring, finding and relating canonical sources of information.

Why you should have a Website by Steven Pemberton.
An illuminating talk on the issues around lock-in and data loss in web 2.0. Steven's key argument was that users should have their own web sites and that aggregators should collect this data together. Whilst this directly relates to the issue of portability, I happen to agree with Gavin Bell that whatever solution emerges will need to be usable by non-geeks.

On a final note, there is always a presentation which I really shouldn't have missed. Judging from the video, it was truly spectacular.

Bare-naked Flash: Dispelling myths and building bridges by Aral Balkan
True showmanship at its best including the RickRoll of Jeremy Keith. Brilliant.

Tuesday, May 06, 2008

XTech

I've been extremely busy for the last two weeks, hence the lack of any posts. I'm now heading off to XTech.