Canadian AI Policy & Ethics
Apr 09, 2019 ● Stephan Jou
Ten Things We Can Learn From Canada About Responsible AI

AI development should not sacrifice the greater good in the name of innovation

In recent years, artificial intelligence (AI) has gone from a “hot” trend to a virtually ubiquitous technology. I work in the cybersecurity space and, believe me, nearly every vendor today claims to have some sort of AI functionality in its product. But let’s be honest: the rapid proliferation of AI has caused some buzzword fatigue among buyers. Equally important, it has created an AI race between vendors that hasn’t always prioritised the greater good.

Innovation in AI is breathtaking. There’s no shortage of new applications for AI, and where there’s an opportunity, there is a creator chasing it. The increasingly apparent reality of AI development, however, is that creators often struggle to strike a balance between innovation and ethics. Some AI technologies are already creeping up to, or even past, ethical boundaries (e.g. social media algorithms using your personal data to choose the “right” content for you). To get a better understanding of this tension, let’s turn to a country that is leading the conversation around responsible AI: Canada.

The United States and China may be ahead in the AI innovation game when you look at the numbers, but Canada has quickly become a leading voice in what I believe is the most important conversation around AI: creating artificial intelligence systems (AIS) with the greater societal welfare in mind. In late 2018, l’Université de Montréal published the Montréal Declaration for Responsible AI, a document aimed at providing a foundation for the ethical and sustainable development of AIS. In addition, Canada hosted two G7 meetings in 2018 dedicated to AI, including the G7 Multi-stakeholder Conference on AI in Montréal at the end of last year, which focused specifically on the ethical, responsible and moral use and implementation of AI. I was invited to attend this meeting, one of 150 attendees from the public and private sectors, and was struck by how much we as a society can learn about AI... and it has nothing to do with math!

The Montréal Declaration comprises 10 principles for creators and users, detailing fundamental values such as guaranteeing that AI technologies do not interfere with privacy, erode bonds between humans, or exclude any portion of society. The Declaration has become a gold standard on this topic; it truly is an invaluable tool for creators and implementers alike. It is a detailed document, but I have summarised the core principles below:

  • “Well-being”—AIS creation and use should foster the well-being of people, as well as the rest of our living world.
  • “Respect for Autonomy”—AIS should not diminish our autonomy. In fact, it should give us greater control over our lives.
  • “Protection of Privacy”—AIS cannot intrude on our privacy, particularly with respect to what personal data is acquired and stored.
  • “Solidarity”—AIS should not erode trust between people; it should foster it.
  • “Democratic Participation”—AIS should be transparent and subject to debate. AI explainability is a key piece here!
  • “Equity”—AIS should increase equity and justice within our society.
  • “Diversity Inclusion”—Social and cultural diversity should not be hindered by the use of AIS.
  • “Prudence”—Anticipate and mitigate adverse consequences as much as possible.
  • “Responsibility”—Humans should be the ultimate decision makers. AIS should not lessen that accountability.
  • “Sustainability”—Protect our environment. As with any technology, AIS must be designed with respect to the longevity of our planet.

Canadian Prime Minister Justin Trudeau closed the G7 Conference with an important message: “If Canada is to become a world leader in AI, we must also play a lead role in addressing some of the ethical concerns we will face in this area.” His point is a good one that perhaps more countries and creators should consider. Innovation is one thing, but sustainable innovation can’t happen apart from societal good. What’s the use of creating a powerful technology if it isn’t built to last?

As an AI creator, I have empathy for these challenges. When you are racing against competitors to deliver the latest and greatest technology, it is easy to dive straight into the technical details and forget the societal impact. But innovation and responsibility do not have to be mutually exclusive. Great technology can’t be irresponsible, and responsible technology starts at the beginning. Further, in my experience, being open and transparent about your AI, a key theme across the ten principles of responsible AI development, also happens to be a competitive edge. In other words, it turns out that responsible AI is also effective and competitive AI.

If you have not done so already, I highly encourage you to read through the Montréal Declaration’s 10 principles and, if you are a creator willing to make the commitment, sign the Declaration in support of responsible AI.


This article originally appeared in IT Portal.
