Artificial Intelligence (AI) is a lot of things. It's a game changer for business, it can enable humans to work smarter and faster than ever before, and it could potentially have a significant impact on economies and the labor market.
But at the root of it all – the function that gives AI its value – is the ability to make predictions. Calculating the likelihood of a particular outcome – more quickly and accurately than has ever been possible – is the fundamental advance AI brings to the table.
To start with, it’s worth defining what we mean when we talk about AI. In recent years the leaps in technology which have been generating the biggest buzz are around machine learning and deep learning. These are specific implementations of technology which can be used to give machines the ability to learn, without human input, by merely being fed data.
This means they can become steadily better at routine tasks – such as examining image data from cameras and working out what is shown, or reading through thousands of pages of documents and identifying the pieces of information relevant to the task at hand.
How this will affect the role of humans is a hot topic and the question is very much up in the air. Some predict that the near-future will see us becoming used to working alongside “smart” machines, hugely boosting our productivity. Others say the arrival of these machines will make us redundant when it comes to many forms of labor, leading to widespread unemployment and eventually civil unrest.
In their latest book, Prediction Machines: The Simple Economics of Artificial Intelligence, authors Ajay Agrawal, Joshua Gans and Avi Goldfarb seek to demonstrate how prediction is fundamental to the changes that AI makes possible. They explain that understanding this concept – and preparing our reaction to it – could determine which of those two possible futures is likely to come about.
Key to this, they argue, will be whether human AI “managers” can learn to differentiate between tasks involving prediction, and those where a more human touch is still essential.
When I met with Joshua Gans – professor of strategic management and holder of the Jeffrey S. Skoll Chair of Technical Innovation and Entrepreneurship at the University of Toronto – he gave me some insight into how economists are tackling the issues raised by AI.
"As economists studying innovation and technological change, a conventional frame for trying to understand and forecast the impact of new technology would be to think about what the technology really reduces the cost of," he tells me.
"And really it's an advance in statistical methods – a very big advance – and really not about intelligence at all, in a way a lot of people would understand the term 'intelligence.' It's about one aspect of intelligence, which is prediction.
“When I look up at the sky and see there are grey clouds, I take that information and predict that it’s going to rain. When I’m going to catch a ball, I predict the physics of where it’s going to end up. I have to do a lot of other things to catch the ball, but one of the things I do is make that prediction.”
In business, we have to make these predictions many, many times each day. Will we make a higher profit by selling large volumes cheaply, or small volumes at a high price? Who is the best team member to take on a job? Where will we get the best "bang for our buck" out of our marketing budget?
Traditionally these predictions relied heavily on "gut instinct" – what our intuition or experience tells us is the likely outcome. They are data-driven too, of course – our instincts are informed by what we've learned – but there's only so much time that can be spent reading reports and books.
That generally isn't a constraint for a computer – which, if given the right algorithms, can automatically ingest vast amounts of data and use it to make predictions more quickly and accurately than we could ever hope to ourselves.
“Sometimes we [humans] avoid making decisions because we can’t make a prediction – we may have a ‘rule of thumb’ or something like that,” Gans explains.
“So what’s going to happen is that these prediction machines are going to make predictions better and faster and cheaper, and when you do that, two things happen. The first is that we will do a lot more predicting. And the second is that we will think of new ways of doing things for problems where the missing bit was prediction.”
Self-driving cars are an obvious example. The idea isn't new, but humans had struggled for decades to make them a reality, because there was no way to enable a machine to make the accurate predictions it would need to navigate safely. This changed with the arrival of machine learning and deep learning.
“People weren’t formulating it as a prediction problem and, once we got the tools, lo and behold, they started to make improvements,” Gans says.
So what does this actually all mean, for us as humans?
“Well, firstly, as heavy users of predictions, it’s good news for us,” he says. “Predictions are something we like, and we’re getting them faster and cheaper, so that’s good.”
As an example, he asks me to think about a school bus driver.
“Ok, so we can replace a human driver with an automated vehicle – great! So we throw the driver off the bus and get a robot to go and pick up the children. But then you immediately think – wait a second – a whole load of unsupervised children on a bus sounds like a stupid idea.”
As tempting a solution as it sounds, human rights organizations probably wouldn’t look too kindly on the idea of also giving robots the ability to discipline unruly children during transit.
A more socially acceptable solution could be to replace the drivers with human supervisors or, more productively, educators.
“Then we could start the lessons as soon as the kids get on the bus,” says Gans. “Or we could have the school assembly on the bus. It frees up time – we just have to be imaginative.”
The fact is, no one right now knows what impact AI will have on society in 20 years’ time, let alone 50 or 100 years.
Advances which genuinely can make humans redundant on a large scale are likely to take some time to come to fruition.
“I know people talk about the concept of the ‘singularity’ and that it’s all going to happen overnight. But I don’t know if it’s actually going to occur that way,” Gans tells me.
"It's likely to be slowly, slowly … and I feel that slow-moving problems are the ones we work out how to deal with. That would be the source of my confidence."
Gans’ new book ‘Prediction Machines: The Simple Economics of Artificial Intelligence’ is now available from Harvard Business Review Press.
This article originally appeared in Forbes