Dec 07, 2018 ● Sam Shead
Q&A: 10 Minutes With AI Pioneer Yoshua Bengio

A brief interview with one of the fathers of AI

Yoshua Bengio flinches when I say I've seen him referred to as one of the "fathers of AI" in a recent article. The 54-year-old Canadian computer scientist is best known for his early work on deep learning in the 1980s and 1990s, but today he also has his own AI startup accelerator in Montreal, Element AI, which he cofounded in October 2016.

Bengio also teaches at the University of Montreal, heads Mila (the Montreal Institute for Learning Algorithms), and is co-director of the Learning in Machines & Brains programme at the Canadian Institute for Advanced Research (CIFAR).

What follows is a lightly edited transcript from a recent interview.

Sam Shead: I've heard you referred to as one of the fathers of AI…?

Yoshua Bengio: Whatever, I don’t like these things.

SS: Can you describe what's happened in AI in the last few years that's different to what happened in AI in the 80s or 90s?

YB: Well, many things have happened. We are able to do things now that we were dreaming of 10, 20, 30 years ago. What's interesting is that it's building on many of the core ideas that were already in our hands around the 1990s.

What has changed? Well, we now have access to much more data and that's important because intelligence is about knowledge and knowledge is acquired from data in the case of modern AI, with machine learning. Computing power to be able to process all that data and train much larger models [has also changed]. And changes in the models themselves, which allow them to learn more abstract stuff and to be deeper, that’s why we call it deep learning. 

And also changes in the way they work so that, in addition to being able to work on fixed-size vectors (sets of information), they can work on any kind of data structure: graphs, lists, sequences.

SS: I know you won't like this question, but where do you think Canada sits in the global AI race and which countries do you see at the forefront of AI development?

YB: Well, Canada clearly is a scientific leader in AI. Of course, Geoff [Hinton, another 'father of AI', who now works at Google] and I being here, and thanks to the CIFAR programme, which was started in Canada and has initiated a lot of that progress.

We’re starting now in Canada to build the economic development around AI in several cities: Montreal, Toronto, Edmonton, Vancouver, Waterloo. There’s quite a lot of energy there. But of course, the size of Canada is nothing like the US or China. Even the Europeans are starting to invest heavily in AI. So we'll see how a small country can continue to have the kind of leadership role we have scientifically, and also in the industrial world.

I think it's really important for Canada to be part of that leadership, in part because we bring not just the science but also humanist values, which I think are really important because AI is going to change society and we want it to change for good.

SS: Big tech companies are putting hundreds of millions, if not billions, into AI. Do you think any one company has an edge over the others, or do you think it's a level playing field?

YB: Obviously, some companies have bigger bets and larger groups, but overall I see many groups that are contributing. I think it's better that way; it's better that we have multiple groups. And it's not just industry. You have to realise that a lot of the development that is happening is also in academia, with collaborations between industry and academia and between academics.

One thing I don't like about the reporting on AI is that journalists seem to think the progress is happening in companies, and that's not true. They are part of it, but a lot of the progress is continuing to happen in academia.

SS: So is it an issue when Silicon Valley firms come along and take people out of academia?

YB: It is, it is. And we have to be careful how this is done. I think it's better to have those professors continuing to be part of academia and having part-time roles than having them completely snatched away by industry, for example. One reason is that a really important resource industry needs is talent, and there's not enough of it. So we need to invest a lot in training the next generation, and we need to make sure we don't lose the professors who can train that next generation.

SS: Do you think Facebook missed out on DeepMind? There were talks…

YB: I know [about the talks], yes. I don't know, this is a business decision, not just a research thing. I don't have any comment on that.

SS: What do you think we need to be teaching students when it comes to AI?

YB: I think ethics should be part of the training, and not just for the grad students that I'm working with. It should be taught at undergrad to everyone, not just people doing technology. Philosophy should be taught in primary school, because we need the next generation to understand how to think by themselves, how to reason, how to step away from their immediate fears and immediate desires so that society over the long run can move in the right direction.

SS: How can we program machines with a set of ethics if we as humans can't all agree on them?

YB: We don't need to agree on everything. Machine learning is quite able to work with ambiguity and multiple voices. We work with probabilities, so you can have multiple avenues. What is important is to know which actions many humans would find unacceptable. It's OK if others find them acceptable, but if machines could know that a substantial number of humans disagree with an action, they might want to call upon a human to make the decision, for example.

SS: So it seems like we've had quite a lot of advances over the last few years. Some people are clearly concerned AI is going to become a big threat to society in the future, people like Elon Musk and Nick Bostrom and so on. What do you make of their views?

YB: I think there's a confusion here, between different kinds of fears and concerns. I have concerns, but they're very different from what these people have been expressing. My concerns are more about the misuse of AI. I don't have much short-term or real concern about their fear of superintelligence taking over humanity. We're building those machines; I think we can do it responsibly.

SS: When you say the misuse of AI, what do you mean?

YB: I think of things like killer robots. I think of things like fake news being generated by computers. I think of things like political advertisement completely changing society in ways that are bad for society. There are many issues. We had the Montreal Declaration on the Responsible Development of AI to try to help us see through these issues, and many other organisations around the world have been doing this kind of thinking.

In general, I think there's a lot of consensus about what the issues are that are more of a social nature. I see the discussion moving in that direction, rather than towards these very primal fears that some alien is going to invade us. Those are just science-fiction fears.

SS: Do you think the big tech companies are doing enough?

YB: We have to be careful it doesn’t become some kind of ethics washing. Nice words, but no action.

SS: Do you know anything about Google's AI ethics board?

YB: I don't want to make any specific judgements. I think many people in those companies are sincerely trying to steer things in the right direction, but at the end of the day it's the actual decisions that companies make that matter. Examples include the recent discussions around Google and military use of its technology, or Microsoft selling technology for policing. These are decisions that need to be taken not just at the level of a few executives but in ways that involve society.


This article originally appeared in Forbes.

