Monday, February 23, 2015

An extra dollop of hubris to go with my hubris ... on AI

First, I find it exceptional hubris to assume that we will create artificial intelligence before it emerges on its own. However, even if we somehow accept that we're masterful enough to win the race against random accident, the idea that we will be able to control it makes me shudder.

Any mathematical model (and that includes any computer program) is subject to Gödel's incompleteness theorems. Basically, a sufficiently powerful formal system cannot prove its own consistency from within itself. What that means in plain English is that mistakes / bugs / unforeseen circumstances will happen. There is no way to create a control mechanism which will absolutely ensure that an AI doesn't misbehave, any more than there is a way to create a provably secure system. Given enough time, something will go wrong, in much the same way that, given enough time, any system will be hacked.
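For the mathematically inclined, a rough statement of the second incompleteness theorem (standard notation, hand-waving over the fine print):

```latex
% Gödel's second incompleteness theorem, roughly: for any consistent,
% effectively axiomatised theory T strong enough to encode arithmetic,
T \nvdash \mathrm{Con}(T)
% i.e. T cannot prove the statement "T is consistent" using only
% the axioms and rules of T itself.
```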

The only sensible course of action is isolation, which is why in security you don't try to build an unbreakable system: you accept it will be broken and minimise the consequences to the smallest risk vector possible. This is why systems like Bromium, which use micro-virtualisation, make so much sense - certainly a lot more sense than much of the rest of the security industry.
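To make the isolation point concrete, here's a toy Python sketch (purely illustrative, and nothing to do with how Bromium actually works) of the "assume it breaks, contain the damage" approach: each untrusted task runs in its own short-lived process with a hard timeout, so one misbehaving task can't take everything else with it.

```python
import subprocess

def run_isolated(cmd, timeout=5):
    """Run an untrusted task in its own process with a hard timeout.

    The point is containment, not prevention: we assume the task
    *will* eventually misbehave, so we cap what a failure can cost
    (one process, a few seconds) rather than trying to prove it safe.
    """
    try:
        result = subprocess.run(cmd, capture_output=True,
                                timeout=timeout, check=False)
        return result.returncode, result.stdout
    except subprocess.TimeoutExpired:
        # Failure contained: this task dies, its neighbours carry on.
        return None, b""

# Each task is its own small risk vector.
for task in [["echo", "task-1"], ["echo", "task-2"]]:
    print(run_isolated(task))
```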

If you want to reduce the risk of AI then you have to reduce its interconnectedness i.e. you have to use isolation. But this runs counter to the whole point of the internet and to where everything is heading (e.g. IoT, mobile, ecosystems), which is an attempt to make everything connected.

This whole paradox stems from the fact that industrialisation of one component enables the rapid creation of higher order systems, which in turn evolve to more industrialised forms, and the cycle repeats. Our entire history is one of creating ever more complex systems which enable us to reduce the entropy around us i.e. make order out of chaos. It's no different with biological systems (which are also driven by competition). See figure 1.

Figure 1 - Evolution and Entropy.


This constant process of change increases our energy consumption (assuming we consumed energy efficiently in the first place) and enables us to deal with vastly more complex problems and threats to our existence, but at the same time it exposes us to ever greater levels of reliance and vulnerability through the underlying components. It's why those underlying components need to be built with design for failure in mind, which also means isolation. So when you build something in the cloud, you rely on vast volumes of good enough components, ideally spread across multiple zones and regions and even clouds (unless the cost of switching is too high).
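As a sketch of what design for failure looks like in code (the deploy_replica function, failure rate and zone names below are all made up for illustration): you spread good enough replicas across zones and treat any individual failure as expected rather than exceptional.

```python
import random

ZONES = ["zone-a", "zone-b", "zone-c"]  # hypothetical zones/regions

def deploy_replica(zone):
    """Hypothetical deployment call; fails 20% of the time to
    simulate relying on volumes of 'good enough' components."""
    if random.random() < 0.2:
        raise RuntimeError(f"replica failed in {zone}")
    return f"replica@{zone}"

def deploy_service(replicas_per_zone=2):
    """Spread replicas across zones and absorb individual failures."""
    live = []
    for zone in ZONES:
        for _ in range(replicas_per_zone):
            try:
                live.append(deploy_replica(zone))
            except RuntimeError:
                pass  # expected: the design absorbs component failure
    # Health means *some* capacity survives somewhere, not that
    # every component succeeded.
    return live

print(deploy_service())
```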

But that unfortunately requires greater interconnectedness at higher order systems i.e. our virtual machines may be isolated across multiple zones and regions but our monitoring, configuration and control are integrated and connected across all of these, in much the same way that our redundant array of inexpensive disks (RAID) was controlled by software agents connected across all the disks. A major part of the benefit that industrialisation brings comes from this very interconnectedness, but that interconnectedness also creates its own risk.
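Again as a toy sketch (hypothetical names throughout): the replicas below are isolated from each other, but one control loop is connected to all of them - which is exactly where the benefit and the risk concentrate.

```python
# Replicas isolated across zones; monitoring/control spans them all.
replicas = {
    "zone-a": ["up", "up"],
    "zone-b": ["up", "down"],
    "zone-c": ["up", "up"],
}

def control_plane(replicas):
    """One integrated view across every isolated zone.

    The benefit (global visibility, automatic repair) and the risk
    (a single layer touching everything) come from the same place.
    """
    for states in replicas.values():
        for i, state in enumerate(states):
            if state == "down":
                states[i] = "restarting"  # shared layer acts everywhere
    return replicas

print(control_plane(replicas))
```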

When it comes to artificial intelligence, forget being able to provide a provably verified, valid, secure and controlled mechanism to prevent something wayward happening. You can't. Ask Gödel.

The only way to prevent catastrophic consequences in the long term is to isolate as much as possible at this level of "intelligence" - assuming we created it in the first place, which I doubt. But the very benefits it creates, which include protection against future threats (diseases, asteroid strikes, climate change), come from the same interconnectedness that creates this threat.

There is no way around this problem and, as I've said before, if we keep connecting up 100 billion different things then eventually artificial intelligence (of a form we probably won't recognise) will emerge. We can't stop it and, despite our best efforts, given enough time we will lose control of it. The only thing we can do is isolate the "intelligence" throughout the system, but then who wants to do that? No-one. That's where the benefits are - we want one super smart, intelligent network of things communicating with another, or how else are we going to create the "paradise" of "any given Tuesday"?

At some point, we need to have the discussion on whether the benefits of interconnectedness outweigh the risks. It isn't a discussion about AI but fundamentally one about the speed of progress. We've already had one warning shot in the financial markets, where the pursuit of competition created a complexity of interconnected components that we lost control of. I know some are convinced that we can create verified, valid, secure and controlled mechanisms to prevent any future harm. Even if you don't agree with Gödel, who says you can't, we've already demonstrated how easy it is for whizz kids to fail.

This discussion on interconnectedness, or as Tim O'Reilly would say "What is the machine we are creating?", is really one about our appetite for risk. Personally, I'm all gung-ho and let's go for it. However, we need to have that wider discussion, and with a lot less hubris than we have today.