
Why 2018 is going to be the year (Alpha) Zero.

Benjamin Kott

January 03, 2019

Much has been said about 2018 already, and what an amazing and at the same time dramatic year it has been. Personally, I will never forget *that* simultaneous landing of the two Falcon Heavy side boosters (followed by watching Starman circle the planet live on YouTube for a few hours).

Falcon Heavy Side Boosters Landing

While in a decade or two we might actually send people to Mars, there is something much bigger unfolding right in front of our eyes, right here and now.

I believe that one day we will look back at 2018 as the year in which AI really came of age. Yes, it is already implemented in hundreds of different applications, from image recognition in surveillance systems to self-driving cars and personal voice assistants. But that is all "old-school" stuff now compared to what natural reinforcement learning (you heard it here first!), as recently demonstrated by DeepMind with AlphaZero, can do.

It all started with chess, and this quote from Demis Hassabis (DeepMind co-founder) pretty much sums it up:

AlphaZero could start in the morning, playing completely randomly. Then by tea ['lunch time' for the non-Brits] it would be superhuman level. By dinner it would be the strongest chess entity there has ever been.

It took AlphaZero only about 8 hours to defeat the strongest chess software in existence, though admittedly it was running on servers packed with Google TPUs (the games were actually played in December 2017, but DeepMind only recently published the complete write-up). It took not much longer to beat AlphaGo, its "older sibling" algorithm that famously defeated Go champion Lee Sedol in 2016.

Here's the good news:

This "learning from experience" approach, starting with no rules (hence the Zero), is tremendously powerful. It could readily be applied to managing building operations, energy systems, transportation and many more aspects of our everyday lives. Granted, you can't really run thousands of "games" - or simulations - on the electricity grid or in a live power plant to determine the optimal operating approach. But with digital twin technology and massively increased IoT-based sensing that delivers instant and incremental feedback, we can go a very long way.
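To make the "start with no rules, learn from simulated experience" idea concrete, here is a minimal sketch using tabular Q-learning, the simplest form of reinforcement learning. The toy environment (a six-cell corridor standing in for a "digital twin"), the parameter values and all names are illustrative assumptions on my part, not anything from DeepMind's work:

```python
import random

# Toy "digital twin": a 1-D corridor of 6 cells. The agent starts at cell 0
# and earns a reward only when it reaches the goal at cell 5.
N_STATES = 6
GOAL = N_STATES - 1
ACTIONS = [-1, +1]  # step left, step right

def step(state, action):
    """Advance the simulated environment by one move."""
    nxt = min(max(state + action, 0), GOAL)
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning: zero prior knowledge, learning purely from experience."""
    rng = random.Random(seed)
    q = [[0.0 for _ in ACTIONS] for _ in range(N_STATES)]  # the "Zero" part
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # Epsilon-greedy: mostly exploit what we know, sometimes explore.
            if rng.random() < epsilon:
                a = rng.randrange(len(ACTIONS))
            else:
                a = max(range(len(ACTIONS)), key=lambda i: q[state][i])
            nxt, reward, done = step(state, ACTIONS[a])
            # Update toward the reward plus the discounted best future value.
            q[state][a] += alpha * (reward + gamma * max(q[nxt]) - q[state][a])
            state = nxt
    return q

q = train()
# After training, the greedy policy points "right" in every non-goal cell.
policy = ["right" if q[s][1] >= q[s][0] else "left" for s in range(GOAL)]
print(policy)
```

The agent plays thousands of cheap "games" against the simulation instead of the real asset, which is exactly why the digital twin matters: exploration (the epsilon-greedy randomness above) is harmless in a model but would be unacceptable on a live power plant.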

There are many obvious benefits to letting these types of zero-knowledge reinforcement learning algorithms support, and maybe even govern, parts of our lives, especially the ones we as humans aren't best placed to fully control and optimise. Even "just" at the commercial building level - which is our domain of expertise here at Fabriq - the complexity of multiple systems working together (and sometimes against each other) often surpasses what humans can reasonably manage with linear approaches. That is why we have so many rules and safety margins in place: to stay within certain limits and make sure the asset doesn't fall over. The optimisation opportunity here is immense; energy, carbon and cost savings of 30-40% or more are possible with a fully integrated, highly optimised approach. And that's just the beginning - the potential impact on occupier productivity and wellbeing is at least 10x higher in financial terms.

Or take climate change, for instance. It is now abundantly clear that we do not stand a chance of reining in global warming and keeping temperatures within the targeted 1.5C limit by the end of the century. We are much more likely looking at a 3-4C increase, and probably more once knock-on effects are factored in. It is not a lack of innovation in doing things in a more environmentally friendly way; virtually all of the technology required to stay within the limits already exists today. The driving factor here is human nature. As long as individuals - and that's pretty much all of us - continue to behave in ways that maximise their personal outcomes (in terms of comfort, wellbeing, finances, power etc.), we will not be able to implement the required changes fast enough, as there will always be a critical mass of people who are worse off in the short term and resist the change.

But with AI, we could have a fighting chance of avoiding runaway climate change - a challenge that is, at its core, incredibly multi-faceted and staggeringly complex. It is conceivable that only a sophisticated AI could optimise all of the interconnected systems (from agriculture to energy, real estate, transportation and many more) in such a way that we achieve the targets without a significant negative short-term impact on personal "payoffs".

Data has a better idea

Now for the flip side ...

For all their achievements, these highly advanced and widely applicable AI approaches also demonstrate the potential dangers of ceding control and letting machine-based algorithms take over. This won't happen overnight, but it appears inevitable that we will see an ever increasing application of them in our daily lives, well beyond smartphones and smart devices. Which brings up many questions:

  • At what point do we no longer understand the risks involved, given that we already don't understand how to run e.g. individual buildings efficiently?
  • Where is the point of no return? By that I don't mean a Skynet-like situation in which the AI takes over the world and we can't pull the plug, but a stage at which so many processes and systems rely on it that we are ill-prepared to step back and do things "the old way" if needed.
  • Who or what will be the "pawns" in the "game of life"? (An unexpected characteristic of AlphaZero's approach to playing chess was how readily it sacrificed pawns in order to position itself for long-term success)

One final point: who is to say that the AI will do things in a way that is "appropriate" for the prosperity of human (and other) beings? And above all, what does that even mean?

In other words, what exactly is the fully interconnected and highly pervasive AI of the future going to optimise for? Or is the optimisation goal itself something that it will also (have to) learn of its own accord? That would make for some "interesting" scenarios, to say the least.

Lots of questions, and not many answers. In fact, pretty much none at all at this point.

All I wish for the new year...

The one thing I would wish for in 2019 and beyond is that for every dollar the industry spends on advancing AI, another dollar be spent on understanding the potential implications of AI beyond solving the immediate problem at hand (winning at chess, optimising traffic etc.), and on researching scenarios for the long-term impact of many thousands or even millions of individual AI systems eventually working hand in hand - or against each other - to run our lives, one day in a future that is nearer than most of us think.

Onwards! Because there is no going back, ever.
