Jun 06, 2018 ● Harvard Business Review
Google’s AI Assistant Is a Reminder that Privacy and Security Are Not the Same

We want the benefits of AI assistants without significant privacy and security risks

Earlier this month, Google unveiled remarkable new capabilities for its automated assistant, built on the company's growing expertise in artificial intelligence (AI).

Perhaps the most dramatic demonstration, and, judging by the deluge of commentary, the most troubling, was Google's AI making phone calls that imitate a human. If you haven't seen the demo, here's a link. While we're not there yet, in short order you'll be able to instruct an AI to use an old technology (voice calls) to make appointments and handle other interactions on your behalf, whether it is talking to other humans or, if the receiver prefers, to other AIs. Suffice it to say, there's value in that.

So, what’s the concern? An AI that sounds human compromises both privacy and security. Although they’re often bundled together, privacy and security are different.

Privacy includes the right to be left alone. AI callers threaten that right because of their potential to intrude. Privacy concerns also arise when information is used out of context (for instance, for gossip, price discrimination, or targeted advertising). AI callers that sound human may violate privacy because they can fool people into believing the context of the call is person-to-person when it is actually person-to-machine. They may obtain information from you and then use it in ways you don't anticipate. This could happen if you're talking to the AI yourself or, in the future, if your AI is talking on your behalf.

However, when people talk about privacy concerns, they're often really concerned about security. The issue isn't targeted advertising or gossip; it's theft and safety. Security is the state of being free from danger. Security concerns arise when information is extracted and then used illegally. The most obvious example is identity theft. Imagine an AI caller that can impersonate your voice. You may want that as part of a service you control, but someone could also replicate the capability to fool others into believing they are talking to you.

The problem is that improving security may not help privacy. For instance, increased surveillance might improve security against bad actors, but at the expense of privacy.

Conversely, improving privacy may not help security. Privacy rules that restrict the flow of information may make it harder for police to know what bad actors are doing, leaving you less secure.

For AI assistants, we can make impersonation illegal and add new layers of identification checks so that identity theft is difficult. This helps security, but not privacy. We'll still get spammed by AI callers, and information we tell an AI could still be used to target advertising.

Many commentators called for Google (and others) to be required to disclose when a machine, rather than a person, is calling. Google responded that it will provide that identification. However, someone set on stealing your identity will not politely follow these rules.

Even if the government mandates that an AI assistant identify itself, the mandate will only protect you if bad actors comply as well. After all, if your voice can be imitated, so can a voice announcement. Self-identification helps privacy, but not security.

It is time to consider protocols for AI communication across domains in order to improve both privacy and security. Social networks have their own internal ways of authenticating who is messaging whom, but we do not have good ways to verify identity across networks. We still live in an analog world, and voice recognition, a classic analog authenticator, is no longer good enough.
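What might verifiable identification look like in practice? One well-understood building block is a challenge-response handshake built on digital signatures: the recipient issues a fresh random challenge, and the caller signs it with a key whose public half is registered somewhere the recipient trusts. Below is a minimal sketch in Python using the cryptography library; the trusted registry, the key-distribution step, and the overall call flow are assumptions for illustration, not an existing telephony standard.

```python
# A minimal sketch of cryptographic caller verification. Assumes a
# hypothetical setup where each AI assistant's operator publishes a
# public key in a registry the recipient trusts.
import os

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The caller's operator generates a keypair; the public half would be
# distributed out of band (e.g., through the assumed trusted registry).
caller_private_key = Ed25519PrivateKey.generate()
caller_public_key = caller_private_key.public_key()

# At call time, the recipient issues a fresh random challenge so that
# a recorded response cannot be replayed in a later call.
challenge = os.urandom(32)

# The caller proves its identity by signing the challenge.
signature = caller_private_key.sign(challenge)

# The recipient checks the signature against the registered public key.
try:
    caller_public_key.verify(signature, challenge)
    print("Caller verified: signature matches the registered identity.")
except InvalidSignature:
    print("Verification failed: treat the caller as unidentified.")
```

Unlike voice recognition, a handshake like this cannot be defeated by replaying recorded audio: each call uses a fresh challenge, so impersonation would require stealing the caller's private key rather than imitating a voice.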

The goal is clear. We want the benefits of AI assistants without significant privacy and security risks. We need AI callers that can identify themselves in a verifiable manner, and we need protocols for how AI calls should be handled. Voice recognition no longer works for verification; it's time for a digital-first solution.
