Dec 12, 2018 ● Kathryn Hume, Daniel Moore, Michael Zerbs
The AI Revolution Needs A Rulebook. Here’s A Beginning

The promise of AI comes with risks. It’s up to us to make sure it’s done right

Artificial-intelligence technologies will transform life and work. It’s up to us to make sure they do it right.

The data-driven algorithmic systems we refer to as AI are already on the job. They make increasingly accurate suggestions for products “you may also like.” They are automating cognitive tasks previously reserved for humans, such as diagnosing cancer or uncovering evidence inside masses of documents.

But the promise of AI comes with risks, too.

Systems based on human data can also include human failings and prejudices. Algorithms powered by that data are not objective oracles, but mathematical tools that may pick up, refract and amplify the biases that exist in society.

Developers and users of AI are working to define ethical standards to guard against unintended consequences. As consensus builds on what responsible AI will look like, it is also becoming clear that this effort can pay off in other ways. The push for ethical AI requires executive leadership to improve communication between technical, business and control functions, and to clarify the organization’s social and ethical values.

How did we get here? Because of how AI systems work. Unlike computer programs that follow the logical path set by a developer, these systems take data and outcomes defined by the builder, and then find, on their own, the best way to achieve those outcomes. That puts great weight on two factors: carefully defining the desired outcome so that it aligns with our values, and cleansing the data of inherent biases.
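
To make that concrete, here is a deliberately simplified sketch in Python, using the open-source scikit-learn library. The data and column names are invented for illustration, not drawn from any real system: the developer defines the outcome to predict and supplies historical records, and the algorithm finds its own patterns, with any bias baked into the data coming along for the ride.

# A toy illustration: the builder supplies data and a desired outcome;
# the learning algorithm works out on its own how to predict that outcome.
# All data below is invented for illustration.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical historical hiring records. "hired" is the outcome the
# model is asked to reproduce; the other columns describe applicants.
records = pd.DataFrame({
    "years_experience": [2, 7, 4, 9, 3, 8],
    "referral":         [0, 1, 0, 1, 0, 1],
    "hired":            [0, 1, 0, 1, 0, 1],  # past human decisions
})

features = records[["years_experience", "referral"]]
outcome = records["hired"]

# No decision rule is written by hand. The model infers whatever patterns
# best reproduce past outcomes; if those past decisions were biased, it
# will faithfully learn the bias as well.
model = LogisticRegression().fit(features, outcome)

# Score a new applicant using the learned patterns.
new_applicant = pd.DataFrame({"years_experience": [5], "referral": [0]})
print(model.predict(new_applicant))

In this toy example, the “referral” column perfectly tracks past hiring decisions; in a real system with thousands of variables, such proxies for bias are far harder to spot, which is why cleansing the data matters so much.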

This is more difficult than it sounds. First, there may be a staggeringly large amount of data involved, since the power of AI lies in working with a quantity of data and variables beyond the capabilities of a human brain. Second, much of the data is generated by human behaviour, and humans can behave with bias. Even subtle and unconscious bias can produce data that steers systems in directions their designers would never choose.

The devil is in the data, and even respected pioneers in AI can be caught out. Recently, Amazon made headlines for a machine-learning-powered recruitment tool that had to be scrapped when it was found to be biased against hiring women – a bias introduced not by the AI itself, but by inherent bias in the data used to train it.

As the conversation builds around responsible AI systems, Scotiabank and Kingston’s Smith School of Business co-hosted a conference on Ethics and AI in early November, where leading thinkers from the research and commercial worlds of AI found an emerging consensus.

Smith is home to the only graduate program in North America to focus on the management of AI in organizations, and its scholars are exploring how ethical enterprises should make the best use of these powerful new tools. Recent research by Stephanie Kelley, Yuri Levin and David Saunders, presented at the conference, suggests rapid technical advances in AI have driven significant innovation, at the risk of outpacing existing ethical guidelines and rules.

The researchers highlight ways financial-services organizations can use principles of fairness, accountability and transparency to guide them through the ethical implications of AI.

Canadian banks are ideally placed to lead by example. Scotiabank, for example, has drafted objectives for interactive AI systems, starting with the point that those systems must be truly useful: They need to improve outcomes for customers and society as well as the bank. They should be monitored for unacceptable outcomes and held accountable for any mistakes, misuse or unfair results. Safety and the protection of data privacy are also paramount. And, as the technology develops, objectives should adapt without losing sight of core values.

Integrate.ai has developed a framework for Responsible AI that helps executive leadership work with technology teams to put ethics into practice in AI systems using consumer data. Along with partners such as Scotiabank, the startup aims to hasten the adoption of AI across Canada by strengthening the trust Canadian consumers have in the businesses that shape their everyday lives.

Done right, AI systems can not only avoid negative outcomes but also help organizations move in a positive direction. One example is a proprietary job-applicant screening tool developed by Scotiabank that helps recruiters overcome unconscious barriers and hire the best talent.

Focusing on fairness means doing the right thing. It is also good for business. Discovering that a minority population is not well represented in a data set may also point to an underserved market with a need for unique products and services. Similarly, AI can unlock better outcomes for customers with more sophisticated evaluations of creditworthiness and financing needs. But customers know best what is right for them: Ethical AI must put them first.

To achieve that end, organizations must bring together, in common purpose, people with diverse experiences and a range of specialized knowledge, including mathematics, data science, risk, social sciences, ethics and law. And while practitioners may develop the systems, executive leadership is responsible for articulating the values that guide them.

As the researchers at Smith point out, the stability of the financial system and the strength of AI innovation in Canada make it an ideal location for AI ethics research – and action. As Canadians, we are proud that our researchers and innovators can compete with any in the world for discoveries and applications in the field. We’re also proud of Canadian values of social equity and inclusion. Responsible co-operation by government, academic researchers, businesses and individuals could vault Canada into global leadership in this vital, fast-developing field.

The rewards from AI will be enormous. But they should not come at the expense of our values.


This article originally appeared in The Globe and Mail 
