Canadian AI Policy & Ethics
Nov 30, 2018 ● Andrée Gagnon
Canada Needs Comprehensive AI Policy Framework

Canada should leverage its strengths to become an AI governance leader

Artificial intelligence has long captured people’s imaginations, inspiring curiosity, fear, hope and awe. When Star Trek’s android character Data debuted on the screen, he embodied the science fiction depiction of AI that has dominated public consciousness: a machine that looks human, that can be taught to act human, yet has infinitely more intelligence and analytical speed than people can possibly achieve.

While writers and filmmakers romanticized AI and stretched the imagination with utopian as well as dystopian outcomes, real science started yielding genuine AI advances that have entered our lives without fanfare. In areas ranging from public infrastructure to research, health care, insurance, financial services and public safety, our lives have come to depend on countless computations of data: decisions made without our conscious knowledge that help bring us order, stability and security.

But despite our growing reliance on AI, we are still operating in a pre-AI legal landscape. Our current laws, regulations and policies lack the nuance and sophistication needed to govern these new realities. To fully embrace the opportunities that artificial intelligence represents, and to adequately prepare us for its challenges, we need a comprehensive evaluation of the laws and practices that govern this space.

AI in the world today

What does AI look like in the real world? Often it is helpful – even life-saving – and poses minimal public risk.

Smart cities are using sophisticated algorithms to analyze data and improve the movement of people, including controlling traffic lights and reassigning road lanes.

AI-empowered oncology is identifying cancer tumours more quickly and with a lower margin of error than conventional medicine. This is already saving lives.

Instantaneous translation already offers high-quality text translation applications, and it will soon provide high-quality voice interfaces that allow people who speak different languages to talk to each other seamlessly.

We are seeing AI-empowered prosthetic limbs, including hands that can grasp and catch.

We have Canadian-invented prosthetic eyes that enable the visually impaired to interface more easily with the visual world. These are not just mechanical innovations; they rely on high-speed data processing of surrounding inputs to give them their accuracy and effectiveness.

Other applications of artificial intelligence will have similar human benefits but require more research in order to optimize their effectiveness and to fully evaluate the legal and ethical considerations.

Facial recognition technology has perhaps captured the most attention recently, and it has a wide array of applications. It can tell a person who is blind the name of the individual who has just walked into a room. It can also, however, contribute to a surveillance state. Governments can use facial recognition to monitor political activities, potentially creating a chilling effect around participation in political events, and this would ultimately undermine our core freedoms of assembly and expression.

Key policy questions around AI

Canada’s Copyright Act, Privacy Act and Personal Information Protection and Electronic Documents Act, along with the Office of the Privacy Commissioner – all of these laws and institutions were designed in (and for) a simpler world that did not have to grapple with the complexities, and the potential, of machine learning and cloud computing. By continuing to use a 20th-century framework to govern 21st-century technology, we are setting ourselves up for misunderstanding and distrust of technology. As a result, we could fail to accommodate and promote life-enhancing technological advancement, and we might not be able to protect ourselves against the cyber threats that travel with information technology.

Many questions require thorough reconsideration before we unleash the power of AI technology more broadly. How do we modernize the definition of consent for the use of our personal data? Can we efficiently obtain consent for secondary data use? How do we modernize privacy laws to balance personal privacy with the greater public good? How do we treat anonymized and pseudonymized data? Should they be subject to the same consent thresholds? How do we strengthen AI systems to guard against reproduction of unintended algorithmic bias? How do we explain to citizens how an AI system makes a recommendation?
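The distinction between anonymized and pseudonymized data is easy to blur, so a brief illustration may help. The sketch below (with entirely hypothetical field names and values) shows one common way the two are distinguished in practice: pseudonymization swaps a direct identifier for a stable token, which keeps records linkable, while anonymization drops or generalizes identifying fields altogether.

```python
# A minimal sketch with hypothetical field names and values, illustrating the
# difference the consent question turns on: pseudonymized records stay linkable
# through a stable token, while anonymized records drop or generalize the
# identifying fields.
import hashlib

record = {"name": "Jane Doe", "postal_code": "K1A 0A6", "diagnosis": "asthma"}

def pseudonymize(rec, salt="illustrative-salt"):
    # Replace the direct identifier with a stable token: the same person always
    # maps to the same token, so records can still be linked across data sets.
    token = hashlib.sha256((salt + rec["name"]).encode("utf-8")).hexdigest()[:12]
    return {"person_token": token,
            "postal_code": rec["postal_code"],
            "diagnosis": rec["diagnosis"]}

def anonymize(rec):
    # Drop the direct identifier entirely and generalize the quasi-identifier
    # (full postal code reduced to its first three characters).
    return {"region": rec["postal_code"][:3], "diagnosis": rec["diagnosis"]}

print(pseudonymize(record))
print(anonymize(record))
```

Because the pseudonymized record can still be tied back to a person, many would argue it warrants a higher consent threshold than the anonymized one.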

In The Future Computed: Artificial Intelligence and its Role in Society, Microsoft identifies six fundamental principles that must underpin a new framework: fairness, reliability, privacy and security, inclusiveness, transparency and accountability. When AI systems make or inform decisions (about, for example, medical treatment), they must be consistent and non-discriminatory. AI systems must be able to withstand unanticipated situations. They must be kept up to date on privacy protection and security regulations and laws. They have to be built to ward off bias, and their users must understand the capabilities and limitations of the system.

Canada is a world leader in AI technology development. We also have a strong tradition of civil liberties. We should leverage these strengths to become a leader in AI governance as well. But in order to take up this leadership position, we will have to work through these policy challenges systematically and with a sense of urgency, as other regimes that may not share our values are already forging ahead with the deployment of AI technologies. Up until now, government involvement has mostly been limited to funding and catalyzing research and innovation. This pace and level of involvement will not place Canada in a leadership position on AI governance.

For the federal government, the next logical steps are a retroactive audit of existing governance structures (primarily legislation and regulations), followed by a proactive government-led evaluation of sensitive uses of AI technology – such as facial recognition. This will help determine where fixes are necessary, be they legislative amendments, regulatory changes, federal guidelines or policy changes.

The retroactive piece involves re-examining the existing statutes and institutions – mentioned above – and undertaking a comprehensive review to ensure they adequately address the legal and ethical questions raised by current and emerging technology. An ideal vehicle for such a review would be a special parliamentary committee which would take input from legal, scientific and academic experts and stakeholders. Such an enterprise should concurrently review laws and regulations governing artificial intelligence, to ensure not only the applicability of each, but also to assess interconnectedness and identify developing gaps.

The inferior alternative to this comprehensive approach would be to update individual laws as they come up for statutory review. While this would allow for updating, such a staggered or ad hoc approach would not permit a whole-systems analysis. Different statutes would be modernized at different times (over many years), and gaps would likely persist.

The second piece of this venture is to cast forward and develop policy recommendations based on sensitive uses or implementations of AI technology rather than on AI technology in general. These AI functions – or deployments of AI technology – would include algorithmic sorting of risk in immigration and corrections systems, facial recognition technology, and other new or potential applications that create the risk of material harm to individuals.

Despite significant promise of efficiencies and accuracy improvements, each of these applications must be assessed for incremental benefit, legal implications, economic opportunities, human rights impact, privacy and other civil liberties implications, ethics, propensity for perpetuation of bias, and potential social impact. Such a scrub would be executed by an independent expert panel specializing in each of these respective applications. The experts would have comprehensive knowledge of the policy and regulatory environment in other jurisdictions. Their work could yield an analysis that thoroughly weighs all considerations, and guide policymakers in governing the new technology’s deployment.

Each of these expert panels ought to include technical experts, legal and constitutional analysts, and public policy ethicists. They would ultimately provide advice to Parliament on what types of new laws and regulations may be needed, and what practices may need to be strengthened (or loosened) to maximize public benefit and mitigate individual risk.

Organizations such as the OECD and the International Organization for Standardization (ISO) have been tackling the challenges around artificial intelligence. Canada could take a leadership role within these and other forums to develop policies and international standards and build a foundation for consistent definitions, terminology and solutions that will address the trustworthiness of AI. The emerging Canadian policy framework should also include industry associations sharing and promulgating best practices in the development and deployment of human-centred AI.

We should also continue the government’s financial support of this emerging technology. This can be done by developing shared public data sets and environments for testing and training AI. This would enable broad experimentation with artificial intelligence, and also help identify problems in algorithmic functioning and expedite solutions for such things as error or bias.
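As a rough illustration of what such a testing environment could check, the sketch below (using entirely made-up predictions and group labels) compares a model's error rate across two groups; a persistent gap would be one signal of the kind of algorithmic bias a shared test bed could surface for human review.

```python
# A minimal sketch with fabricated example data: compare a model's error rate
# across two hypothetical groups, as a shared test environment might do when
# screening an AI system for unintended bias before deployment.
from collections import defaultdict

# Each tuple: (group label, model prediction, actual outcome)
results = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 0), ("group_a", 1, 1),
    ("group_b", 0, 1), ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 0),
]

tallies = defaultdict(lambda: [0, 0])  # group -> [errors, total]
for group, predicted, actual in results:
    tallies[group][0] += int(predicted != actual)
    tallies[group][1] += 1

for group, (errors, total) in sorted(tallies.items()):
    print(f"{group}: error rate {errors / total:.0%} ({errors}/{total})")
# A large, persistent gap between groups would be flagged for investigation.
```

This is only one crude check among many a shared environment could run, but it shows how open test data would let problems be found, and fixes verified, before a system is deployed against the public.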

Artificial intelligence is already being adopted to make decisions that improve many people’s lives every day, and it has the potential to help everybody on the planet achieve more. But to realize this potential, people need to trust this technology. To build that trust, governments and policy-makers must be willing to embrace this opportunity to bring our policy and legislative framework in line with current technology.

The longer policy-makers wait, the more daunting the task will become. If there are concerns about how a technology will be deployed more broadly across society, it is incumbent upon the government to create a policy framework for legal review and to determine if regulation is necessary to protect civil society. Only then will policy-makers and the law of the land be able to demystify this technology and unleash its full potential for bettering humanity, while also better protecting us from potential abuse and excess.


This article originally appeared in Policy Options 
