For me, 2016 will be an interesting year in cloud. This is fundamentally because I'll get to test whether a specific model that I built in 2005 (and wrote a business case on - the Zimki project) bears any semblance to reality. There were numerous 'forecasts' from the model across multiple industries extending from 2015 to 2025. I'll mention two of these because I've talked publicly about both, and this is timely for me as I'm working on a piece on predicting the predictable for the LEF.
The first 'prediction' was that by 2020, utility computing (in all forms) would exceed $1 trillion p.a. and that 'on premise' utility computing (what we call 'private cloud') would represent around 5% of the total. I've said this publicly several times and I've no reason to change my view despite the long range (15 years).
@geoffarnold : I'll stick with what I wrote '05. By 2020 utility compute market is almost $1T and private (on prem) cloud < 5% & declining.— swardley (@swardley) May 9, 2013
The second 'prediction' was that at some point during 2015 to 2017, the decline of private cloud would start to kick in. Private cloud was always a 'transitional' model in my view. I've talked publicly about this part of the model since 2008, the year in which I narrowed the forecast down to the latter part of 2016. More recently, I've said that I expect this transition to turn into a bit of a bloodbath in that space. Again, this was a long-range forecast and, yes, I've seen no reason to change my view.
The problem with long-range forecasts of this type is that there appears to be a very specific uncertainty principle between the predictability of what and the predictability of when, i.e. we can often accurately assign probabilities to what is going to happen but not to when, or vice versa. There are 'ways' to cheat this with weak signals, but alas I have only ever had the most wobbly of evidence to support these weak signals.
Over the years, I've tried varying weak signals, experimenting with componentisation of forecasts, changing time ranges, altering the risk of forecasts and other techniques. Bit by bit, I've been collecting data on veracity, on failures and so on. It's still relatively early days, and it'll take me at least a further decade before I'll know how supportable the weak signals are (if at all) and start feeling comfortable talking about them. There are some particular aspects of the work that I'm highly 'uncomfortable' with.
Hence 2016 will be an interesting year for me, part of the journey. By my reckoning, I'm about halfway there - though of course, that's itself a prediction which could well be suspect.