Canadian AI Policy & Ethics
Nov 22, 2018 ● MIT Technology Review
One of the fathers of AI is worried about its future

Yoshua Bengio wants to stop talk of an AI arms race

Are there ways to foster more collaboration between countries?

We could make it easier for people from developing countries to come here. It is a big problem right now. In Europe or the US or Canada it is very difficult for an African researcher to get a visa. It's a lottery, and very often officials will use any excuse to refuse access. This is totally unfair. It is already hard for them to do research with few resources, and if on top of that they can't access the community, I think that's really unfair. As a way to counter some of that, we are going to have the ICLR conference [a major AI conference] in 2020 in Africa.

Inclusivity has to be more than a word we say to look good. The potential for AI to be useful in the developing world is even greater. They need to improve technology even more than we do, and they have different needs.

Are you worried about just a few AI companies, in the West and perhaps China, dominating the field of AI?

Yes, and it's another reason why we need more democracy in AI research. AI research by itself will tend to lead to concentrations of power, money, and researchers. The best students want to go to the best companies, which have much more money and much more data. And this is not healthy. Even in a democracy, it's dangerous to have too much power concentrated in a few hands.

There has been a lot of controversy over military uses of AI. Where do you stand on that?

I stand very firmly against it.

Even non-lethal uses of AI?

Well, I don’t want to prevent that. I think we need to make it immoral to have killer robots. We need to change the culture, and that includes changing laws and treaties. That can go a long way.

Of course, you’ll never completely prevent it, and people say, “Some rogue country will develop these things.” My answer is that one, we want to make them feel guilty for doing it, and two, there’s nothing to stop us from building defensive technology. There’s a big difference between defensive weapons that will kill off drones, and offensive weapons that are targeting humans. Both can use AI.

Shouldn’t AI experts work with the military to ensure this happens? 

If they had the right moral values, fine. But I don't completely trust military organizations, because they tend to put duty before morality. I wish it were different.

What are you most excited about in terms of new AI research? 

I think we need to consider the hard challenges of AI and not be satisfied with short-term, incremental advances. I’m not saying I want to forget deep learning. On the contrary, I want to build on it. But we need to be able to extend it to do things like reasoning, learning causality, and exploring the world in order to learn and acquire information.

If we really want to approach human-level AI, it’s another ball game. We need long-term investments, and I think academia is the best place to carry that torch. 

You mention causality—in other words, grasping not just patterns in data but why something happens. Why is that important, and why is it so hard?

If you have a good causal model of the world you are dealing with, you can generalize even in unfamiliar situations. That’s crucial. We humans are able to project ourselves into situations that are very different from our day-to-day experience. Machines are not, because they don’t have these causal models.

We can hand-craft them, but that's not enough. We need machines that can discover causal models. To some extent it's never going to be perfect. We don't have a perfect causal model of reality; that's why we make a lot of mistakes. But we are much better at doing this than other animals.

Right now, we don’t really have good algorithms for this, but I think if enough people work at it and consider it important, we will make advances.
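To make the point about causal models and generalization concrete, here is a minimal illustrative sketch (not from the interview; the variables and numbers are hypothetical). A model that leans on a spurious correlation in its training data breaks when that correlation shifts, while a model restricted to the causal feature keeps generalizing.

```python
# Illustrative sketch: causal vs. spurious features under distribution shift.
# All names and numbers are hypothetical; this is not Bengio's method.
import numpy as np

rng = np.random.default_rng(0)

def sample(n, spurious_strength):
    """y is caused by x_causal; x_spurious merely agrees with y with a
    probability that differs between environments."""
    x_causal = rng.normal(size=n)
    y = (x_causal + 0.1 * rng.normal(size=n) > 0).astype(float)
    agrees = rng.random(n) < spurious_strength
    x_spurious = np.where(agrees, y, 1 - y) + 0.05 * rng.normal(size=n)
    return np.column_stack([x_causal, x_spurious]), y

# Training environment: the spurious feature matches the label 95% of the time.
X_train, y_train = sample(10_000, spurious_strength=0.95)
# Test environment: that relationship is reversed (an "intervention").
X_test, y_test = sample(10_000, spurious_strength=0.10)

def fit_logreg(X, y, steps=2000, lr=0.1):
    """Plain logistic regression trained by gradient descent."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1 / (1 + np.exp(-(X @ w + b)))
        w -= lr * (X.T @ (p - y)) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

def accuracy(w, b, X, y):
    return np.mean(((X @ w + b) > 0) == y)

w_all, b_all = fit_logreg(X_train, y_train)               # uses both features
w_causal, b_causal = fit_logreg(X_train[:, :1], y_train)  # causal feature only

print("correlational model, train accuracy:     ", accuracy(w_all, b_all, X_train, y_train))
print("correlational model, shifted test accuracy:", accuracy(w_all, b_all, X_test, y_test))
print("causal-feature model, shifted test accuracy:",
      accuracy(w_causal, b_causal, X_test[:, :1], y_test))
```

The correlational model scores well on its training environment but collapses when the spurious relationship is reversed, whereas the model that only uses the causal feature generalizes to the unfamiliar situation, which is the kind of robustness discussed above.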


This article originally appeared in MIT Technology Review.

Article by: MIT Technology Review