Chapter 5
In chapters one to four I've covered the basics of mapping, common economic patterns and doctrine. However, these Wardley maps of business don’t tell you what to do any more than a geographical map tells an Admiral. The maps are simply a guide and you have to decide what move you’re going to make, where you’re going to attack and how you navigate your ship through the choppy waters of commercial competition. In other words, you have to apply thought, decide to act and then act. In this chapter we’re going to cover my journey through this part of the strategy cycle – see figure 42.
Figure 42 – The play and a decision to act
Identifying opportunity
There exist two different forms of why in business – the why of purpose (i.e. win the game) and the why of movement (i.e. move this piece rather than that one). The why of movement is what I'm going to concentrate on here but in order to examine it we must first determine where we can attack.
In the past, I had sat in many meetings where options were presented to me and my executive team and then we made a choice based upon financial arguments and concepts of core. We had never used a landscape to help determine where we could attack. This was a first for us and very much a learning exercise. I've taken that earliest map from 2005 and highlighted on it the four areas that we considered had potential. There were many others but for the sake of introduction, I thought I'd keep it simple. These four wheres are shown in figure 43.
Figure 43 – Wheres
Where 1 – we had an existing online photo service that was in decline but which we could concentrate on. There existed many other competitors in this space, many of which were either well financed (e.g. Ofoto) or ahead of us in terms of offering (e.g. Flickr). There were also unmet needs that we had found. As a company we had acquired many capabilities and skills, not necessarily in the online photo business, as the group developed many different types of systems. We also had an internal conflict with our parent company's online photo service, which we built and operated. Whilst our photo service was open to the public, the parent company's service was focused on its camera owners and we had to play a careful game here as our own service was sometimes considered a competitor.
Where 2 – we had anticipated that a coding platform would become a utility. We had ample skills in developing coding platforms but most importantly, we had also learned what not to do through various painful all-encompassing "Death Star" projects. There would be inertia to this change among product vendors and that would benefit us in our land grab. To complicate matters, many existing product customers would also have inertia, hence we would have to focus on startups, though this required marketing to reach them. There was also a potential trade-off here as any platform would ultimately be built on some form of utility infrastructure similar to our own Borg system (a private utility compute environment providing virtual machines on-demand based on Xen) and this would reduce our capital investment. Our company had mandates from the parent to remain profitable each and every month and to keep headcount fixed. I had no room to expand and any investment made would have to come out of existing monthly profit despite the reserves built up in the bank. A platform play offered the potential to reduce the cost and increase the speed of development of our other revenue generating projects, hence freeing up more valuable time, until a point where the platform itself was self-sustaining.
Where 3 – we had anticipated that a utility infrastructure would appear. We had experience of doing this but we lacked any significant investment capability. I was also mindful that in some circles of the parent company we were considered a development shop on the end of a demand pipeline and they were heavily engaged with an external hosting company. This might cause conflict and unfortunately I had painted us into this corner with my previous efforts to simply "survive". If we made the move then in essence many of these problems were no different from the platform space, except the agility benefits of a platform were considered to be higher. The biggest potential challenge to us would come not from existing product vendors (e.g. server manufacturers) or rental vendors (e.g. hosting companies) but from the likes of Google entering the space. This we expected to happen in the near future and we certainly lacked the financial muscle to compete if it did. It seemed more prudent to prepare to exploit any future move they made. However, that said, it was an attractive option and worth considering. One fly in the ointment was the concerns that had been raised on issues of security and misuse of our systems by various members of my own team. It seemed we would have our own inertia to combat due to our own past success with using products (i.e. servers) and despite the existence of Borg.
Where 4 – we could instead build something novel and new based upon any utility environments (either infrastructure or platform) that appeared. We understood that using utility systems would reduce our cost of investment, i.e. the gamble, in the space. However, any novel thing would still be a gamble and we'd be up against many other companies. Fortunately, we were very adept at agile development and we had many crazy ideas, generated by the regular hack days we ran, that we could pursue. It might be a gamble in the dark but not one we should dismiss out of hand.
Looking at the map, we had four clear “wheres” we could attack. We could discuss the map, the pros and cons of each move in a manner which wasn’t just “does this have an ROI and is it core?” Instead we were using the landscape to help us anticipate opportunity and points of attack. I suddenly felt our strategy was becoming more meaningful than just gut feel and copying memes from others. We were thinking about position and movement. I was starting to feel a bit like that wise SVP I had met in the lift in the Arts hotel in Barcelona when he was testing that junior (i.e. me) all those years ago. It felt good but I wanted more. How do I decide?
The dangers of past success
The problems around making a choice usually stem from past success and the comfort it brings. We had an existing photo service along with other lines of business which generated a decent revenue. Would it not be better for me to just continue doing what we were doing? I’d be taking a risk changing the course we were on. However, I had recently watched another company fail to manage change and was acutely aware of the dangers of not taking a risk. That company was Kodak.
Being an online photo service, I had a ringside seat to the fundamental shift happening in the image market between 2000 and 2005. The photo had been seen as something with value to customers due to the costs in time and money of producing it - the visit to the photo lab, the cost of processing and the wait for it to be delivered via the post. Film was at the centre of this and the only thing more annoying than waiting for it to be processed was not having enough film to take that next shot on holiday. Many times in the past, I had to make choices over which picture I took due to a limited number of shots left. However, the image and the film were really just components to delivering my overall need, which was sharing my experiences. The image was also evolving from analog film to a new digital world in which I could take pictures and delete the ones I didn't like. I might have a limit in terms of memory card but I could always download to a computer and share with others. There was no film processing required.
I've created a map for that changing landscape in figure 44 and as I go through more of my experience with the Kodak story I'll make references to that map. The old world was one of analog film (Point 1). Sharing a moment was about sitting on the sofa with friends and family and passing the photo album. The film itself needed some mechanism of fulfilment, such as the photo lab. However, the camera industry was rapidly becoming commodity with good enough disposable cameras. The analog world of images was also changing to one which was more digital (Point 2). Digital still cameras (DSC) were becoming more common and I could share an image by simply emailing it to others. Kodak had led the charge into this brave new world with early research in the mid 1970s but somehow it also seemed to be losing ground to others such as Sony and Canon.
Figure 44 – Kodak
The growth of digital images and the spread of the internet had enabled the formation of online photo services (Point 3). These provided simple ways of printing out your images along with easier means for sharing with others. There was a very noticeable shift occurring from printing to sharing. You could create social networks to share images about hobbies or instead share with a close circle of friends. One of the early pioneers in this space was Ofoto, which had been acquired by Kodak in 2001. The messaging of Kodak had also changed around that time; it became more about sharing experiences and moments.
However, Kodak wasn't the only competitor in the space and unlike many others, Kodak seemed to have a problem in that it made significant revenue from film processing. Whilst it had a strong position in digital still cameras and online photo services, it didn't seem to be maximising this. Others were quickly catching up and overtaking. I can only assume that its past success with film had created inertia (Point 4) to the change within the organisation. It seemed to an outside observer that Kodak was in conflict with itself. The first signs of this were apparent in the late 90s with the release of the Advantix camera system, a curious blend of digital technology and film for processing. There were also conflicting signals coming out of Kodak despite its messaging: whilst one part of the organisation seemed to be pushing digital, another part seemed to be resisting.
In 2003, Kodak had introduced the Easyshare printer dock 6000 that enabled consumers to produce Kodak photo prints at home from digital images. When I first heard of this, it felt as though Kodak had finally overcome its inertia through a compromise between the fulfilment and the digital business (Point 5). The future was one of a self-contained Kodak system from digital still camera to online service to photo printer. But there was a problem here. Already, on our online site we had witnessed the rapid growth of images taken with mobile phones (Point 6). Though camera phones were still uncommon, they seemed to herald a future where people would take pictures with their phones and share online. There was no mass market future for print, only a niche compared to an enormous market of shared digital images. It seemed as though Kodak had overcome its inertia through a compromise which meant investing in exactly where the future market wasn't going to be. By early 2005, from our perspective, the future of the entire industry from fulfilment to photo printers to cameras to film to digital still cameras (Point 7) was starting to look grim. For us, the future of pictures looked more like figure 45 and printed photos were barely worth mentioning unless you intended to specialise in a profitable niche.
Figure 45 – A future picture
In any choice I was going to make, I had to be careful of inertia and past success. Simply standing where we were might be the comfortable option but it didn't mean we would have a rosy future. Our fraught issues around our parent's photo service could grow if we embraced a camera phone future as this would put us in direct conflict with its core DSC business. However, Kodak was a clear example of what could go wrong if you didn't move fast enough into the future, allowed inertia to slow you down or compromised by placing the bets in the wrong place. But maybe there was another future we could find. How far into the future should we peek?
The near, the far and the crazy
Back in the late 90s, I had taken a deep interest in 3D printing. It was the main reason why I had originally joined the near bankrupt online photo service: I envisaged a future where images of physical things would be shared and I wanted to learn about the space of sharing images. When we were acquired by one of the world's largest printer manufacturers, I was overjoyed. I assumed that they too would share my passion. I gave numerous presentations on the topic, both externally and internally within the parent company, and to my disappointment it was always the external crowd that got more excited. In 2004, I gave a presentation at Euro Foo on the future of 3D printers. The subject was a pretty hot topic at the time and one member of the audience whom I was fortunate enough to meet was Bre Pettis, who was demonstrating his felt-tip pen printer, the DrawBot. Why fortunate? Bre founded MakerBot and subsequently rocked the world of 3D printing.
Whilst 3D printing was a passion, I also had an interest in printed electronics, especially the work of Sirringhaus and Kate Stone. I started to use these concepts to describe a future world in which manufacturing itself would change. The basics are provided in figure 46 and we will go through each step of this map.
Figure 46 – The near, the far and the crazy
First let us start with the user need for some device (Point 1). I’ll leave it as generic because I want to cover manufacturing itself and not the specific use of one device over another. Our device would have physical elements including electronics along with any software that would interact with it. The physical and electronic elements are commonly described through some form of computer aided design (CAD) diagram which provides instructions on what to build and this is combined with our software which is simply our code (Point 2).
The physical form would normally be manufactured by a factory which generally combined common machinery with significant custom processes. However, this was starting to change with concepts such as digital factories and even 3D printers, which were becoming less magical and more common (Point 3). This promised a future world of highly industrialised factories without extensive re-tooling for each product run. Also, since those first inkjet-printed transistors of Sirringhaus in 2001, a new field of plastic and printed electronics was rapidly growing (Point 4). Electronics manufacture was on the path to becoming industrialised and I would just print the electronics I needed rather than combine a mix of commodity and non-commodity components on my own circuit board created on some assembly line that changed with every product run.
For me, the interesting aspect of this was the combination of both physical and electronic forms. In 2005, I had become aware of several university-led efforts to create hybrid objects, including junction boxes where both the physical form and electrical components were printed (Point 5). This too would become industrialised, leading to a world in which I printed my entire device rather than using factories which assembled. Now, along with the potential for creating novel materials and components, this also had the opportunity to fundamentally change the concept of design.
The function of a device is a combination of its physical form, its electronics and any software that interacts with this. As hybrid printers industrialise, this function is described by purely digital means – the CAD (an instruction set) which is then printed and the code (an instruction set) which is run. When we wish to change the function of a device then we need to change one of those two instruction sets along with considering the interaction between the two. Normally, we try to make changes in software because it is less costly but as hardware becomes more malleable then that equation changes. More than this, we are now in a position to simply describe the function of the device that we want and allow a compiler to determine how that should be instantiated in the instruction sets. My desire to add a sun dial to my phone could be achieved through software or electronic or physical means or a combination of all – a compiler could work out that decision tree for me. This opens up the possibility of an entirely new form of programming language that compiles down to physical, electronic and coding forms and where designers concentrate on describing the function of the thing and even object inheritance in the physical world. I called this theoretical programming language SpimeScript (Point 6) in honour of the marvellous book by Bruce Sterling, Shaping Things. This topic was the central theme of a talk I gave at Euro OSCON in 2006.
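To make the idea a little less abstract, here is a minimal sketch in JavaScript of what "describe the function, let the compiler choose the instantiation" might look like. SpimeScript was never built, so everything here – the device description, the compile function, the target media – is a hypothetical invention for illustration only.

```javascript
// Hypothetical illustration only: SpimeScript was never implemented.
// The idea: a designer declares what a device should do and a compiler
// decides whether each function is realised in software, printed
// electronics or physical form.

// A declarative description of the desired function.
const sundial = {
  purpose: "tell the time from the sun's position",
  constraints: { power: "none preferred", cost: "minimal" },
};

// A toy "compiler" that picks an instantiation. A real one would
// search a decision tree across all three media and their interactions.
function compile(device) {
  const media = ["software", "printed-electronics", "physical-form"];
  // Pretend logic: a zero-power preference favours a physical form.
  const chosen =
    device.constraints.power === "none preferred" ? "physical-form" : "software";
  return { purpose: device.purpose, instantiation: chosen, consideredMedia: media };
}

console.log(compile(sundial));
// => { purpose: "tell the time...", instantiation: "physical-form", ... }
```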
However, I had previously raised these discussions within the parent company and had become aware that whilst we might be able to anticipate far future changes, those anticipations were increasingly built on layers of uncertainty and were increasingly unfamiliar and uncomfortable to others. The further we went, the crazier the ideas sounded and the more concerned people became. This itself creates a problem if you intend to motivate a team towards a goal. Hence, if I was going to choose a course of action, it needed to push the boundary but not so far that it seemed like science fiction.
I was starting to feel uncomfortable with:
Where 1 - focus on the online photo service, for reasons of inertia and conflict.
Where 4 - build something novel and new based upon future industrialised services, for being too far reaching.
The question now became: given our choices, could we influence the market in any way to benefit us? Could that help us decide why here over there?
Learning context specific gameplay
Context specific play: Accelerators, decelerators and constraints
I understood that everything evolved due to competition and had plenty of evidence to show past examples, from electricity to nuts and bolts. The question was: could I somehow influence this? By coincidence, from the very early days of 2001 we had not only been users of open source but also contributors to it. We supported the Perl language and many other open source projects. I had purposefully used these as fertile hunting grounds to recruit my amazing team during 2002-2005. But I had also observed how open source efforts, through collaboration with others, had produced stunning technology that surpassed proprietary efforts in many fields. In many cases, open source technology was becoming the de facto standard and even the commodity in a field. It seemed that the very act of open sourcing, if a strong enough community could be created, would drive a once magical wonder towards becoming a commodity. Open source seemed to accelerate competition for whatever activity it was applied to.
I had also witnessed how counter forces existed, such as fear, uncertainty and doubt. This was often applied by vendors to open source projects to dissuade others by reinforcing any inertia they had to change. Open source projects were invariably accused of being not secure, open to hackers (as though that's some form of insult), of dubious pedigree and of being a risk. However, to us, and the millions of users who consumed our services, they were an essential piece of the jigsaw puzzle. By chance, the various battles around open source had increased my awareness of intellectual property. I became more acutely conscious of how patents were regularly used for ring-fencing to prevent a competitor developing a product. This was the antithesis of competition and it was stifling. I started to form an opinion that certain actions would accelerate competition and drive a component towards a commodity whilst others could be used to slow its evolution. The landscape could be manipulated.
At the same time, I had noticed that as certain activities became more industrialised and therefore more widespread, it often became difficult to find people with the right skills or there were shortages of underlying components. The evolution of a component could therefore be constrained by a component it depended upon. I've summarised these points in figure 47 by applying them to our first map.
Figure 47 – Accelerators, decelerators and constraints
Point 1 – the evolution of a component can be accelerated by an open approach, whether open source or open data.
Point 2 – the evolution of a component can be slowed down through the use of fear, uncertainty and doubt when crossing an inertia barrier or through the use of patents to ring-fence a technology.
Point 3 – the evolution of a component can be affected by constraints in underlying components e.g. converting compute to a utility would potentially cause a rapid increase in demand (due to new uncharted components that are built upon it or the long tail of unmet business needs) but this requires building data centres. Whilst the provision of virtual machines could be rapid, the building of data centres is not.
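To see why this constraint matters, here is a toy model of Point 3 with entirely invented numbers: demand that compounds monthly against capacity that arrives in yearly lumps.

```javascript
// Toy model of the constraint in Point 3: utility demand can grow far
// faster than data centre capacity can be built. All numbers invented.
const monthlyDemandGrowth = 1.3;    // 30% growth per month (elastic demand)
const capacityPerDataCentre = 1000; // VM slots added per completed build
const buildTimeMonths = 12;         // lead time to build a data centre

let demand = 100;    // VMs requested
let capacity = 1000; // VMs available
for (let month = 1; month <= 24; month++) {
  demand *= monthlyDemandGrowth;
  if (month % buildTimeMonths === 0) capacity += capacityPerDataCentre;
  const unmet = Math.max(0, Math.round(demand - capacity));
  console.log(`month ${month}: demand ${Math.round(demand)}, capacity ${capacity}, unmet ${unmet}`);
}
// Demand compounds monthly while capacity arrives in yearly lumps, so a
// single supplier can be overwhelmed — the basis of the price-war
// counter play discussed later in this chapter.
```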
I started to explore the map further, looking for other ways we could exploit it.
Context specific play: Innovate, Leverage and Commoditise
I have frequently been told that it is better to be a fast follower than a first mover. But is that true? Using the map told me a slightly more complex story. Certainly when exploring an uncharted space, there was lots of uncertainty and huge costs of R&D. It certainly seemed better to let others incur that risk and then somehow acquire that capability. But researchers and companies were constantly creating new things and so there was also a cost of discovering that new successful thing in all the noise. We wouldn't be the only company trying to play that game and any acquisition cost would reflect this. If we wanted to play that game, then somehow we needed to be able to identify future success more effectively than others.
By comparison, when taking a product to a utility, the component was already quite well known. It was defined, there was an existing market but yes, there would be inertia. I realised there was a connection between the two and that we were sitting on the answer. Our pioneer – settler – town planner structure had enabled us to cope with evolution and connect the two extremes. The settlers' role was simply to identify future successful patterns and learn about them by refining a product or library component. In 2005, we actually referred to our settlers as the framework team and their success came from understanding the patterns within what the pioneers - our development team - had built. The pioneers were our gamblers.
However, what if our pioneers weren't us but instead other companies? Could our settlers discover successful patterns in all that noise? The problem of course was: where would we look? Like any product vendor we could perform some marketing survey to find out how people were using our components but this seemed slow and cumbersome. Fortunately, our online photo service gave us the answer.
For many years we had exposed parts of the photo service through URL requests and APIs to others. It wasn't much of a leap to realise that if we monitored consumption of our APIs then we could use this to identify in real-time which other companies were being successful, without resorting to slow and expensive marketing surveys. This led to the innovate – leverage – commoditise (ILC) model. Originally, I called this innovate – transition – commoditise and I owe Mark Thompson a thank you for persuading me to change transition to something more meaningful. The ILC model is described in figure 48 and we will go through its operation.
Figure 48 – ILC
Take an existing product that is relatively well defined and commonplace and turn it into an industrialised utility (Point A1 to A2). This utility should be exposed as an easy to use API. Then encourage and enable other companies to innovate by building on top of your utility (Point B1). You can do this by increasing their agility and reducing their cost of failure, both of which a utility will provide. These companies building on top of your utility are your pioneers.
The more companies you have building on top of your utility (i.e. the larger your ecosystem), the more things your "outside" pioneers will be building and the wider the scope of new innovations. Your "outside" ecosystem is in fact your future sensing engine. By monitoring meta data such as the consumption of your utility services, you can determine what is becoming successful. It's important to note that you don't need to examine the data of those companies but purely the meta data, hence you can balance security concerns with future sensing. You should use this meta data to identify new patterns that are suitable for provision as industrialised components (B1 to B2). Once you've identified a future pattern, you should industrialise it to a discrete component service (B3) provided as a utility and exposed through an API. You're now providing multiple components (A2, B3) in an ever growing platform for others to build upon (C1). You then repeat this virtuous circle.
Obviously, companies in any space that you’ve just industrialised (B2 to B3) might grumble – “they’ve eaten our business model” – so, you’ll have to carefully balance acquisition with implementation. On the upside, the more discrete components you provide in your platform then the more attractive it becomes to others. You'll need to manage this ecosystem as a gardener encouraging new crops to grow and being careful not to harvest too much. Do note, this creates an ever expanding platform in the sense of a loose gathering of discrete component services (e.g. storage, compute, database) which is distinct from a coding platform (i.e. a framework in which you write code).
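As a minimal sketch of the sensing side of ILC, suppose every API call is logged purely as meta data – which consumer called which endpoint, and when – with payloads never inspected. Ranking endpoints by how many distinct consumers adopt them over time is one crude way to surface candidates for industrialisation. The records and names below are invented for illustration and describe no real system.

```javascript
// Illustrative sketch: spotting industrialisation candidates from
// API consumption meta data alone (no customer data is inspected).

// Each record is pure meta data: who called what, and when.
const calls = [
  { consumer: "acme", endpoint: "/image/resize", month: "2006-04" },
  { consumer: "acme", endpoint: "/image/resize", month: "2006-05" },
  { consumer: "bloop", endpoint: "/image/resize", month: "2006-05" },
  { consumer: "bloop", endpoint: "/tag/extract", month: "2006-05" },
  // ... in practice, millions of records per month
];

// Count distinct consumers per endpoint per month.
function usageByEndpoint(records) {
  const usage = new Map();
  for (const { consumer, endpoint, month } of records) {
    const key = `${endpoint}|${month}`;
    if (!usage.has(key)) usage.set(key, new Set());
    usage.get(key).add(consumer);
  }
  return usage;
}

// An endpoint being adopted by more and more distinct consumers is a
// candidate pattern to industrialise into a component service.
for (const [key, consumers] of usageByEndpoint(calls)) {
  console.log(key, "distinct consumers:", consumers.size);
}
```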
There is some subtle beauty in the ILC model. If we take our ecosystem to be the companies building on top of our discrete component services, then the larger the ecosystem is:
• the greater the economies of scale in our underlying components
• the more meta data exists to identify future patterns
• the broader the scope of innovative components built on top and hence the wider the future environment that we can scan
This translates to an increasing appearance of being highly efficient, as we industrialise components to commodity forms with economies of scale, but also highly customer focused, due to leveraging meta data to find the patterns others want. Finally, others will come to view us as highly innovative through the innovation of others. All of these desirable qualities will increase with the size of the ecosystem, as long as we mine the meta data and act as an effective gardener.
Being constantly the first mover to industrialise a component provides a huge benefit in enabling us to effectively be a fast follower to future success and wealth generation. The larger the ecosystem we build, the more powerful the benefits become. This model stood in stark contrast to what I had been told – that you should be a fast follower and that you could be only one of innovative, efficient or customer focused. Looking at the map, I knew that with a bit of sleight of hand I could build the impression that I was achieving all three by being a first mover to industrialise and a fast follower to the uncharted. I normally represent this particular form of ecosystem model (there are many different forms) with a set of concentric circles that describe the process – see figure 49.
Figure 49 – Circular view of ILC
Using context specific gameplay: the play
It was at this point, with some context specific gameplay in hand, that I started to run through a few scenarios with James, my XO and my Chief Scientist in our boardroom. Our plan started to coalesce and was enhanced by various experiments that the company had conducted. Not least of these was the head of my frameworks team walking in to tell me that they had just demonstrated we could develop entire applications (front end and back end) in JavaScript.
At the same time as refining our play, I had encouraged the group to develop component services under the moniker of LibApi, as in liberation API, i.e. our freedom from endlessly repeated tasks and our existing business model. To say I was rapturous about this experiment would be to understate my pure delight. This fortuitous event helped cement the plan, which is summarised in figure 50. I'll break it down and go through each point in detail.
Figure 50 – The Plan
Point 1 – the focus of the company would be on providing a coding platform as a utility service alongside an expanding range of industrialised component services for common tasks such as billing, messaging, an object store (a key-object store API), email etc. All components would be exposed through APIs and the service would provide the ability to develop entire applications in a single language – JavaScript. The choice of JavaScript was because of its common use, the security of the JS engine and the removal of translation errors with both the front and back end code built in the same language. The entire environment would be charged on the basis of JavaScript operations, network usage and storage. There would be no concept of a physical or virtual machine. A hedged sketch of what this meant for a developer follows these points.
Point 2 – to accelerate the development of the platform, the entire service would be open sourced. This would also enable other companies to set up competing services but this was planned for and desirable.
Point 3 – the goal was not to create one Zimki service (the name given to our platform) but instead a competitive marketplace of providers. We were aiming to grab a small but very lucrative piece of a large pie. We would seed the market with our own utility service and then open source the technology. To prevent companies from creating different product versions, the entire system needed to be open sourced under a license which enabled competition on an operational level but minimised feature differentiation of a product set – GPL seemed to fit the bill. Since our development process used test driven development and the entire platform was exposed through APIs, we were already creating a testing suite. This testing suite would be used to distinguish between community platform providers and certified Zimki providers through a trademarked image. By creating this marketplace, we could overcome one source of inertia (reliance on a single provider) whilst enabling companies to try their own platform in-house first, and develop new opportunities for ourselves from an application store to market reporting to switching services to brokerage capability.
Point 4 – we needed to build an ecosystem to allow us to identify future services we should create, and hence we had to create an ILC model. Obviously we could only directly observe the consumption data for those who built on our service, but what about other Zimki providers? By providing common services such as GUBE (generic utility billing engine) along with an application store, a component library (a CPAN equivalent) and ultimately some form of brokerage capability, we intended to create multiple sources of meta data. We had a lot of discussion here over whether we could go it alone but I felt we didn't have the brand name. We needed to create that marketplace and the potential was huge. I had estimated that the entire utility computing market would be worth $200bn a decade later in 2016. Our longer term prize was to be the market enabler and ultimately build some form of financial exchange. We would require outside help to make this happen given our constraints.
Point 5 – we needed to make it easy, quick and cheap for people to build entire applications on our platform. We had to ruthlessly cut away all the yak shaving (pointless, unpleasant and repeated tasks) that was involved in developing. When one of the development team built an entirely new form of wiki with client side preview and went from idea to launching live on the web in under an hour, I knew we had something with potential. Pre-shaved Yaks became the catch-phrase to describe the service.
Point 6 – we anticipated that someone would provide a utility infrastructure service. We needed to exploit this by building on top of them. We had become pretty handy at building worth based services (i.e. ones where we charged a percentage of the value they created) over the years and I knew we could balance our charging for the platform against any variable operational cost caused by a utility infrastructure provider. It would also have the advantage of cutting them off from any meta data other than that our platform was growing. If I played the game well enough then maybe that would be an exit play for us through acquisition. If we were truly going to be successful then I would need to break the anchor of the parent company at some point in the future.
Point 7 – we knew there would be a constraint in building utility services and that compute demand was elastic. This gave options for counter play, such as creating a price war to force up the demand beyond the ability of one supplier to provide. But in order to play one off against another we needed to give competitors a route into the market. Fortunately, we had our Borg system and, though we had talked with one large, well-known hardware provider (who had been resistant to the idea of utility compute), we could open source (Point 8) this space to encourage that market to form. I had counter plays I could use if needed.
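To give a flavour of Point 1, here is a hedged sketch of the shape of a Zimki-style application: a single ordinary JavaScript function acting as both application logic and an exposed, metered web service. This is not the real Zimki API – Zimki predates Node.js, which merely stands in for the platform here – and the metering is deliberately naive.

```javascript
// Hedged sketch (not the real Zimki API): the shape of the idea was
// that one JavaScript function could be both application logic and a
// billable, exposed web service. Plain Node.js stands in for the
// platform; Zimki itself predates Node and worked differently.
const http = require("http");

// Application logic: one ordinary JavaScript function...
function greet(name) {
  return { message: `Hello, ${name}` };
}

// ...exposed as a web service, with every invocation metered so the
// platform can charge on operations rather than machines.
let operations = 0;

http
  .createServer((req, res) => {
    operations += 1; // unit of billing: an operation, not a VM
    const name = new URL(req.url, "http://localhost").searchParams.get("name");
    res.setHeader("Content-Type", "application/json");
    res.end(JSON.stringify(greet(name || "world")));
  })
  .listen(3000);
```

The point of the sketch is the billing unit: operations, network and storage rather than physical or virtual machines, which is what removed the yak shaving of capacity planning.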
The option looked good based upon our capabilities. It was within the realm of possibility and mindful of the constraints we had. This seemed to provide the best path forward. It would mean refocusing the company, removing services like our online photo site and putting other revenue services into some form of minimal state until the platform business grew enough that we could dispose of them. I was ready to pull the trigger but there was one last thing I needed.
Impacts on purpose
The decision to act can impact the very purpose of your company – the strategy cycle is iterative and it's a cycle. In this case our purpose was going from a "creative solutions group", a meaningless juxtaposition of words, to a "provider of utility platforms". Nevertheless, if we were going to win this battle then I needed to bring everyone onboard and create a crusade. Our crusade became "pre-shaved Yaks". We intended to rid the world of the endless tasks which got in the way of coding. We would build that world where you just switched on your computer, opened up a browser and started coding. Everything from worrying about capacity planning, configuring packages to installing machines would be gone. Every function you wrote could be exposed as a web service. Libraries of routines written by others could be added with ease through a shared commons and you could write entire applications in hours, not days or weeks or months. This was our purpose. It was my purpose. And it felt good.
What happened next?
We built it. On the 18th Feb 2006 we had the platform, core API services, the billing system, the portal and three basic applications for others to copy. We launched in March 2006, a full two years before Google appeared on the scene with AppEngine. By the 18th April 2006, we had 30 customers, 7 applications and a monthly rate of 600K API calls. By the 19th June 2006, we had 150 customers, 10 applications and a run rate of 2.8M API calls. We were growing!
On August 25, 2006 it wasn’t Google but Amazon that launched with EC2. I was rapturous once again. Amazon was a big player and we immediately set about moving our platform onto EC2. Every time we presented at events our booths tended to be flooded with interest. The company had embraced the new direction (there were still a few stragglers) but there was a growing buzz. We still had a mountain to climb but we had announced the open sourcing, secured a top billing at OSCON in 2007 and the pumps were primed. But Houston, we had a problem.
What went wrong?
The problem was me. I had massively underestimated the intentions of the parent company. I should have known better, given that I had spent over three years (2002–2005) trying to persuade the parent company that 3D printing would have a big future, along with my more recent attempts to persuade it that mobile phones would dominate the camera market. The parent company had become pre-occupied with SED televisions and focusing on its core market (cameras and printers). Despite the potential that I saw, we were becoming less core to them and they had already begun removing R&D efforts in a focus on efficiency. They had brought in an outside consultancy to look at our platform and concluded that utility computing wasn't the future and the potential for cloud computing (as it became known) was unrealistic. The parent company's future involved outsourcing our lines of business to a systems integrator (SI) and, as I was told, "the whole vision of Zimki was way beyond their scope".
I had several problems here. First, they wouldn't invest in our service because apparently a decision had been made higher up within the parent company on what was core. What they were concerned with was the smooth movement of our lines of business to the SI. That supported their core aims and their needs. When I raised the idea of external investment, the problem became that they couldn't keep a stake in something which they said was not core. When I raised the idea of a management buy-out, they would always go to the unrealistic $200bn market figure I had predicted for 2016. Surely I would be willing to pay a hefty sum for a fledgling startup in a fledgling market, with that future market taken as a given? No venture capital firm would take such an outrageous one-sided gamble. In any case, I was told the discussion could always be left until after the core revenue services were transferred to the SI. This was just shorthand for "go away".
The nail in the coffin was when I was told by one of the board that the members had decided to postpone the open sourcing of our platform and that they wanted me to immediately sign contracts cancelling our revenue generating services at an unspecified date to be filled in later. As the person who normally chaired the board meeting, I was annoyed at being blindsided, at the choice and at myself. Somehow, in my zeal to create a future focused on user needs and a meaningful direction, I had forgotten to gain the political capital I needed to pull it off. I might have created a strong purpose and built a company capable of achieving it but I had messed up big time with the board. It wasn't their fault; they were focusing on what was core to the parent company. The members were all senior executives of the parent company and it should have been obvious that they were bound to take this position. I realised that I had never truly involved them in our journey and had become pre-occupied with building a future for others. I had not even fully explained our maps to them, relying instead on stories, because I still hadn't realised how useful maps really were. In my mind, maps were nothing more than my way of explaining strategy because I hadn't yet found that magic tome that every other executive learnt at business school. This was a powerful group of users — my board and the parent company — that had needs that I had not considered. Talk about a rookie mistake. I had finally been rumbled as that imposter CEO.
There was no coming back from this; they were adamant in their position and had all the power to enforce it. I was about to go on stage at OSCON (O'Reilly open source conference) in 2007 and rather than my carefully crafted message, I had to somehow announce the non-open sourcing of our platform and the non-creation of a future competitive utility market. I was expected to break a promise I had made to our customers and I was pretty clear that postpone was a quaint way of saying "never". I couldn't agree with the direction they had chosen and we were at loggerheads. My position was untenable and I resigned.
The company's services were quickly placed on the path to being outsourced to the SI and the employees were put through a redundancy program, which all started a few days after I resigned. The platform was disbanded and closed by the end of the year. The concepts however weren't lost, as a few of these ideas made their way through James Duncan into ReasonablySmart (acquired by Joyent) and through another good friend of mine, James Watters, into Cloud Foundry. I note with a wry smile that Pivotal and its platform play is now valued at over $2.5bn and serverless is a rapidly growing concept in 2016. As for SED televisions? Well, some you win, some you lose. As for the consultancy, any frustration I might have is misdirected because I was the one who failed here. It was my job to lead the company and that didn't just mean those who worked for me but also the board.
In these first chapters, I’ve hopefully shown you how to understand the landscape you’re competing in, anticipate the future, learn to apply doctrine, develop context specific gameplay, build the future and then finally blow it by ignoring one set of users. Would Zimki have realised its potential and become a huge success? We will never know but it had a chance. This was my first run through the strategy cycle and at least I felt as though I had a vague idea as to what I was doing rather than that naïve youth of “seems fine to me”. I was still far from the exalted position of that confident SVP that I had met all those years ago and I was determined to get better next time. Fortunately for me, there was a next time. But that’s another part of the story.
I have to reiterate that every time I've gone around the cycle, I've got better at playing the game. As we travel along the same path I'll be adding in more economic patterns, more doctrine and more context specific gameplay, along with deep diving on some of the parts I've glossed over or that were merely general concepts in those early days. This first section of five chapters describes my Beginning and ends in July 2007. The next three sections each cover a following part of my journey: My wilderness, then Ubuntu to Better for Less and finally my LEF days. The latter will show you just how advanced mapping has become. But as with all journeys, let us stick to the path with no short cutting. Every step is valuable; every landscape is an opportunity to learn from.
Before we start on our next part of the journey, I want to clean up some terms and provide some basic tips for mapping.