Jul 11, 2018 ● Vauhini Vara
Can This Startup Break Big Tech’s Hold on A.I.?

Element AI has the pedigree to push the boundaries of what is possible with the technology

IN THE MODERN FIELD OF ARTIFICIAL INTELLIGENCE, all roads seem to lead to three researchers with ties to Canadian universities. The first, Geoffrey Hinton, a 70-year-old Brit who teaches at the University of Toronto, pioneered the subfield called deep learning that has become synonymous with A.I. The second, a 57-year-old Frenchman named Yann LeCun, worked in Hinton’s lab in the 1980s and now teaches at New York University. The third, 54-year-old Yoshua Bengio, was born in Paris, raised in Montreal, and now teaches at the University of Montreal. The three men are close friends and collaborators, so much so that people in the A.I. community call them the Canadian Mafia.

In 2013, though, Google recruited Hinton, and Facebook hired LeCun. Both men kept their academic positions and continued teaching, but Bengio, who had built one of the world’s best A.I. programs at the University of Montreal, came to be seen as the last academic purist standing. Bengio is not a natural industrialist. He has a humble, almost apologetic, manner, with the slightly stooped bearing of a man who spends a great deal of time in front of computer screens. While he advised several companies and was forever being asked to join one, Bengio insisted on pursuing passion projects, not the ones likeliest to turn a profit. “You must realize how big his heart is and how well-placed his values are,” his friend Alexandre Le Bouthillier, a cofounder of an A.I. startup called Imagia, tells me. “Some people on the tech side forget about the human side. Yoshua does not. He really wants this scientific breakthrough to help society.” Michael Mozer, an A.I. professor at the University of Colorado at Boulder, is more blunt: “Yoshua hasn’t sold out.”

Not selling out, however, had become a lonesome endeavor. Big tech companies—Amazon, Facebook, Google, and Microsoft, among others—were vacuuming up innovative startups and draining universities of their best minds in a bid to secure top A.I. talent. Pedro Domingos, an A.I. professor at the University of Washington, says he asks academic contacts each year if they know students seeking postdoc positions; he tells me the last time he asked Bengio, “he said, ‘I can’t even hold on to them before they graduate.’ ” Bengio, fed up by this state of affairs, wanted to stop the brain drain. He had become convinced that his best bet for accomplishing this was to use one of Big Tech’s own tools: the blunt force of capitalism.

On a warm September afternoon in 2015, Bengio and four of his closest colleagues met at Le Bouthillier’s Montreal home. The gathering was technically a strategy meeting for a technology-transfer company Bengio had cofounded years earlier. But Bengio, harboring serious anxieties about the future of his field, also saw an opportunity to raise some questions he had been dwelling on: Was it possible to create a business that would help a broader ecosystem of startups and universities, rather than hurt it—and maybe even be good for society at large? And if so, could that business compete in a Big Tech–dominated world?

Bengio especially wanted to hear from his friend Jean-François Gagné, an energetic serial entrepreneur more than 15 years his junior. Gagné had earlier sold a startup he cofounded to a company now known as JDA Software; after three years working there, Gagné left and became an entrepreneur-in-residence at the Canadian venture capital firm Real Ventures. Bengio was keen on getting involved in Gagné’s next project, provided it aligned with his own goals. Gagné, as it happened, had also been wrestling with how to survive in a Big Tech–dominated world. At the end of the three-hour meeting, as the sun began to set, he told Bengio and the others, “Okay, I’m going to flesh out a business plan.”

That winter, Gagné and a colleague, Nicolas Chapados, visited Bengio at his small University of Montreal office. Surrounded by Bengio’s professorial paraphernalia—textbooks, stacks of papers, a whiteboard covered in cat-scratch equations—Gagné announced that with Real Ventures’ blessing he had come up with a plan. He proposed cofounding a startup that would build A.I. technologies for startups and other under-resourced organizations that couldn’t afford to build their own and might be attracted to a non–Big Tech vendor. The startup’s key selling point would be one of the most talented workforces on earth: It would pay researchers from Bengio’s lab, among other top universities, to work for the company several hours a month yet keep their academic positions. That way, the business would get top talent at a bargain, the universities would keep their researchers, and Main Street customers would stand a chance of competing with their richer rivals. Everyone would win, except maybe Big Tech.

GOOGLE CEO Sundar Pichai declared earlier this year, “A.I. is one of the most important things humanity is working on. It is more profound than, I dunno, electricity or fire.” Google and the other companies that together constitute the Big Tech threat that occupies Bengio have positioned themselves as forces to democratize A.I., by making it affordable for consumers and businesses of all sizes, and using it to better the world. “A.I. is going to make sweeping changes to the world,” Fei-Fei Li, the chief scientist for Google Cloud, tells me. “It should be a force that makes work, and life, and society, better.”

When Bengio and Gagné began their discussions, the largest tech companies hadn’t yet been embroiled in the high-profile A.I. ethics messes—about controversial sales of A.I. for military and predictive policing, as well as the slipping of racial and other biases into products—that would soon consume them. But even then, it was clear to insiders that Big Tech companies were deploying A.I. to compound their considerable power and wealth. Understanding this required knowing that A.I. is different from other software. First of all, there are relatively few A.I. experts in the world, which means they can command salaries well into the six figures; that makes building a large team of A.I. experts too expensive for all but the wealthiest companies. Second, A.I. often requires more computing power than traditional software, which can be expensive, and good data, which can be difficult to get, unless you happen to be a tech giant with nearly limitless access to both.

“There’s something about the way A.I. is done these days … that increases the concentration of expertise and wealth and power in the hands of just a few companies,” Bengio says. Better resources attract better researchers, which leads to better innovations, which brings in more revenue, which buys more resources. “It sort of feeds itself,” he adds.

Bengio’s earliest encounters with A.I. anticipated the rise of Big Tech. Growing up in Montreal in the 1970s, he was especially taken with science fiction books like Philip K. Dick’s novel Do Androids Dream of Electric Sheep?—in which sentient robots created by a megacorporation have gone rogue. In college, Bengio majored in computer engineering; he was in graduate school at McGill University when he came across a paper by Geoff Hinton and was lightning-struck, finding echoes of the sci-fi stories he had loved so much as a child. “I was like, ‘Oh my God. This is the thing I want to do,’ ” he recalls.

In time, Bengio, along with Hinton and LeCun, would become an important figure in a field known as deep learning, involving computer models called neural networks. But their research was littered with false starts and confounded ambitions. Deep learning was alluring in theory, but no one could make it work well in practice. “For years, at the machine-learning conferences, neural networks were out of favor, and Yoshua would be there cranking away on his neural net,” recalls Mozer, the University of Colorado professor, “and I’d be like, ‘Poor Yoshua, he’s so out of it.’ ”

In the late 2000s it dawned on researchers why deep learning hadn’t worked well. Training neural networks at a high level required more computing power than had been available. Further, neural networks need good digital information in order to learn, and before the rise of the consumer Internet there hadn’t been enough of it for them to learn from. By the late 2000s, all that had changed, and soon large tech companies were applying the techniques of Bengio and his colleagues to achieve commercial milestones: translating languages, understanding speech, recognizing faces.

By that time, Bengio’s brother Samy, also an A.I. researcher, was working at Google. Bengio was tempted to follow his brother and colleagues to Silicon Valley, but instead, in October 2016, he, Gagné, Chapados, and Real Ventures launched their own startup: Element AI. “Yoshua had no material ownership in any A.I. platform, despite being hounded over the last five years to do so, other than Element AI,” says Matt Ocko, a managing partner at DCVC, which invested in the company. “He had voted with his reputation.”

To win customers, Element relied on the star power of its researchers, the reputational glitz of its funding, and a promise of more personalized service than Big Tech could provide. But its executives also worked another angle: In an age in which Google was competing to sell A.I. to the military, Facebook had played host to rogue actors who influence elections, and Amazon was gobbling up the global economy, Element could position itself as a less predaceous, more ethical A.I. outfit.

This spring, I visited Element’s headquarters in Montreal’s Plateau District. The headcount had expanded dramatically, to 300, and judging from the colorful Post-it notes columned on the walls, so had the workload. In one meeting, a dozen Elementals, as employees call themselves, watched a demo of a product in development, in which a worker could enter questions on a Google-like screen—“What’s our hiring forecast?”—and get up-to-date answers. The answers would be based not just on existing information but also on the A.I.’s predictions about the future based on its understanding of business goals. As is typical at fast-growing startups, the employees I met seemed simultaneously energized and utterly exhausted.

A persistent challenge for Element is the dearth of good data. The simplest way to train A.I. models is to feed them lots of well-labeled examples—thousands of cat images, or translated texts. Big Tech has access to so much consumer-oriented data that it’s all but impossible for anyone else to compete at building large-scale consumer products. But businesses, governments, and other institutions own huge amounts of private information. Even if a corporation uses Google for email, or Amazon for cloud computing, it doesn’t typically let those vendors access its internal databases about equipment malfunctions, or sales trends, or processing times. That’s where Element sees an opening. If it can access several companies’ databases of, say, product images, it can then—with customers’ permission—use all of that information to build a better product-recommendation engine. Big Tech companies are also selling A.I. products and services to businesses—IBM is squarely focused on it—but no one has cornered the market. Element’s bet is that if it can embed itself in these organizations, it can secure a corporate data advantage similar to the one Big Tech has in consumer products.

Not that it has gotten anywhere close to that point. Element has signed up some prominent Canadian firms, including the Port of Montreal and Radio Canada, and counts more than 10 of the world’s 1,000 biggest companies as customers, but executives wouldn’t give a total customer count or name any non-Canadian clients. Products, too, are still in early stages of development. During the demo of the question-answering product, the project manager, François Maillet, who is not a native English speaker, requested information about “how many time” employees had spent on a certain product. The A.I. was stumped, until Maillet revised the question to ask “how much time” had been spent. Maillet acknowledges the product has a long way to go. But he says Element wants it to become so intelligent that it can answer the deepest strategic questions. The example he offers—“What should we be doing?”—seemed to go beyond the strategic. It sounded quite nearly prayerful.

LOOK NO FURTHER than Google’s employee revolt over its decision to provide A.I. to the Pentagon as evidence that tech companies’ stances on military use of A.I. have become an ethical litmus test. Bengio and his cofounders vowed early on to never build A.I. for offensive military purposes. But earlier this year, the Korea Advanced Institute of Science and Technology, a research university, announced it would partner with the defense unit of the South Korean conglomerate Hanwha, a major Element investor, to build military systems. Despite Element’s ties with Hanwha, Bengio signed an open letter boycotting the Korean institute until it promised not to “develop autonomous weapons lacking meaningful human control.” Gagné, more discreetly, wrote to Hanwha emphasizing that Element wouldn’t partner with companies building autonomous weapons. Soon Gagné and the scientists received assurances: The university and Hanwha wouldn’t be doing so.

Autonomous weapons are neither the only ethical challenge facing A.I. nor the most serious one. Kate Crawford, a New York University professor who studies the societal implications of A.I., has written that all the “hand-wringing” over A.I. as a future existential threat distracts from existing problems, as “sexism, racism, and other forms of discrimination are being built into the machine-learning algorithms.” Since A.I. models are trained on the data that engineers feed them, any biases in the data will poison a given model.

Tay, an A.I. chatbot deployed to Twitter by Microsoft to learn how humans talk, soon started spewing racist comments, like “Hitler was right.” Microsoft apologized, took Tay off-line, and said it is working to address data bias. Google’s A.I.-powered feature that uses selfies to help users find their doppelgängers in art matched African-Americans with stereotypical depictions of slaves and Asian-Americans with slant-eyed geishas, perhaps because of an overreliance on Western art. I am an Indian-American woman, and when I used the app, Google delivered me a portrait of a copper-faced, beleaguered-looking Native American chief. I also felt beleaguered, so Google got that part right. (A spokesman apologized and said Google is “committed to reducing unfair bias” in A.I.)

Problems like these result from bias in the world at large, but it doesn’t help that the field of A.I. is believed to be even less diverse than the broader computer science community, which is dominated by white and Asian men. “The homogeneity of the field is driving all of these issues that are huge,” says Timnit Gebru, a researcher who has worked for Microsoft and others and is an Ethiopian-American woman. “They’re in this bubble, and they think they’re so liberal and enlightened, but they’re not able to see that they’re contributing to the problem.”

Women make up 33% of Element’s workforce, 35% of its leadership, and 23% of technical roles—higher percentages than at many big tech companies. Its employees come from more than 25 countries: I met one researcher from Senegal who had joined in part because he couldn’t get a visa to stay in the U.S. after studying there on a Fulbright. But the company doesn’t break down its workforce by race, and during my visit, it appeared predominantly white and Asian, especially in the upper ranks. Anne Martel, the senior vice president of operations, is the only woman among Element’s seven top executives, and Omar Dhalla, the senior vice president of industry solutions, is the only person of color. Of the 24 academic fellows affiliated with Element, just three are female. Of 100 students listed on the website of Bengio’s lab, MILA, seven are women. (Bengio said the website is out of date and he doesn’t know the current gender breakdown.) Gebru is close with Bengio but does not exempt him from her criticisms. “I tell him that he’s signing letters against autonomous weapons and wants to stay independent, but he’s supplying the world with a mostly white or Asian group of males to create A.I.,” she said. “How can you think about world hunger without fixing your issue in your lab?”

Bengio said he is “ashamed” about the situation and trying to address it, partly by widening recruitment and earmarking funding for students from underrepresented groups. Element, meanwhile, has hired a new vice president for people, Anne Mezei, who set diversity and inclusion as a top priority. To address possible ethical problems with its products, Element is hiring ethicists as fellows, to work alongside developers. It has also opened an AI for Good lab, in a London office directed by Julien Cornebise, a former researcher at Google DeepMind, where researchers are working, for free or at cost, with nonprofits, government organizations, and others on A.I. projects with social benefit.

Still, ethical challenges persist. In early research, Element is basing some products on its own data; the question-answering tool, for example, is being trained partly on shared internal documents. Martel, the operations executive, tells me that because Element executives aren’t sure from an ethics standpoint how they might use A.I. for facial recognition, they plan to experiment with it on their own employees by installing video cameras that will, with employees’ permission, capture their faces to train the A.I. Executives will poll employees on their feelings about this, to refine their understanding of the ethical dimensions. “We want to figure it out through eating our own dog food,” Martel says. That means, of course, that any facial-recognition model will be based, at least at first, on faces that are not representative of the broader population. Martel says executives are aware of the issue: “We’re really concerned about not having the right level of representativeness, and we’re looking into solutions for that.”

Even the question that Element’s product aims to answer for executives—What should we be doing?—is loaded with ethical quandaries. One could hardly fault a business-oriented A.I. for recommending whatever course of action maximizes profit. But how should it make those decisions? What social costs are tolerable? Who decides? As Bengio has acknowledged, as more organizations deploy A.I., millions of humans are likely to lose their jobs, though new ones will be created. Though Bengio and Gagné originally planned to pitch their services to small organizations, they have since pivoted to target the 2,000 largest companies in the world; Element’s need for large data sets turned out to be prohibitive for small organizations. In particular, they are targeting finance and supply-chain companies—the biggest of which aren’t exactly defenseless underdogs. Gagné says that as the technology improves, Element expects to sell it to smaller organizations as well. But until that happens, its plan to give an A.I. advantage to the world’s biggest companies would seem better-equipped to enrich powerful incumbent corporations than to spread A.I.’s benefits among the masses.

Bengio believes the job of scientists is to keep pursuing A.I. discoveries. Governments should more aggressively regulate the field, he says, while distributing wealth more equally and investing in education and the social safety net, to mitigate A.I.’s inevitable negative effects. Of course, these positions assume governments have their citizens’ best interests in mind. Meanwhile, the U.S. government is cutting taxes for the rich, and the Chinese government, one of the world’s biggest funders of A.I. research, is using deep learning to monitor citizens. “I do think Yoshua believes that A.I. can be ethical, and that his can be the ethical A.I. company,” says Domingos, the University of Washington professor. “But to put it bluntly, Yoshua is a little naive. A lot of technologists are a little naive. They have this utopian view.”

Bengio rejects the characterization. “As scientists, I believe that we have a responsibility to engage with both civil society and governments,” he says, “in order to influence minds and hearts in the direction we believe in.”

ONE COLD, BRIGHT MORNING this spring, Element’s staff gathered for an off-site training in collaborative software design, in a high-ceilinged church that had been converted into an event space. The attendees, working in groups at round tables, had been assigned to invent a game to teach the fundamentals of A.I. I sat with some half-dozen employees, who had decided on a game about an A.I. named Sophia the Robot who had gone rogue and would need to be fought and captured, using, naturally, A.I. techniques. Mezei, the new VP for people, happened to be at this table. “I like the fact that it’s Sophia, because we need more women,” she interjected. “But I don’t like fighting.” There were murmurs of assent all around. An executive assistant suggested, “Maybe the goal is changing Sophia’s mindset so it’s about helping the world.” This was a more palatable version of the game, one better aligned with Element’s self-image. One employee told me, “At the office, we’re not allowed to talk about Skynet”—the antagonistic A.I. system from the Terminator franchise. Anyone who slips up has to put a dollar into a special jar. A colleague added, in a tone of great cheer, “We’re supposed to be positive and optimistic.”

Later I visited Bengio’s lab at the University of Montreal, a warren of carceral, fluorescent-lit rooms filled with computer monitors and piled-up textbooks. In one room, some dozen young men were working on their A.I. models, exchanging math jokes, and contemplating their career paths. Overheard: “Microsoft has all these nice perks—you get cheaper airline tickets, cheap hotels.” “I go to Element AI once a week, and I get this computer.” “He’s a sellout.” “You can scream, ‘Sellout!’ in other fields, but not deep learning.” “Why not?” “Because in deep learning, everyone’s a sellout.” Bengio’s sellout-free vision, it seemed, had not quite been realized.

Still, perhaps more than any other academic, Bengio has influence over A.I.’s future, by virtue of training the next generation of researchers. (One of his sons has become an A.I. researcher too; the other is a musician.) One afternoon I went to see Bengio in his office, a small, sparse room whose main features were a whiteboard across which someone had scrawled the phrase “Baby A.I.,” and a bookcase featuring such titles as The Cerebral Cortex of the Rat. Despite being an Element cofounder, Bengio acknowledged that he hadn’t been spending a lot of time at the offices; he had been preoccupied with frontiers in A.I. research that are far from commercial application.

While tech companies have been focused on making A.I. better at what it does—recognizing patterns and drawing conclusions from them—Bengio wants to leapfrog those basics and start building machines that are more deeply inspired by human intelligence. He hesitated to describe what that might look like. But one can imagine a future in which machines wouldn’t just move products around a warehouse but navigate the real world. They wouldn’t just respond to commands but understand, and empathize with, humans. They wouldn’t just identify images; they’d create art. To that end, Bengio has been studying how the human brain operates. As one of his postdocs told me, brains “are proof that intelligent systems are possible.” One of Bengio’s pet projects is a game in which players teach a virtual child—the “Baby A.I.” from his whiteboard—about how the world operates by talking to the pretend infant, pointing, and so on: “We can use inspiration from how babies learn and how parents interact with their babies.” It seems far-fetched until you remember that Bengio’s once-outlandish notions now underpin some of Big Tech’s most mainstream technologies.

While Bengio believes human-like A.I. is possible, he evinces impatience with the far-reaching ethical worries popularized by people like Elon Musk, premised on A.I.s outsmarting humans. Bengio is more interested in the ethical choices of the humans building and using A.I. “One of the greatest dangers is that people either deal with A.I. in an irresponsible way or maliciously—I mean for their personal gain,” he once told an interviewer. Other scientists share Bengio’s feelings, and yet, as A.I. research continues apace, it remains funded by the world’s most powerful governments, corporations, and investors. Bengio’s university lab is largely funded by Big Tech.

At one point, during a discussion of the biggest tech companies, Bengio told me, “We want Element AI to become as large as one of these giants.” When I questioned whether he would then be perpetuating the same sort of concentration of wealth and power that he has decried, he replied, “The idea isn’t just to create one company and be the richest in the world. It’s to change the world, to change the way that business is done, to make it not as concentrated, to make it more democratic.” As much as I admired his position and believed in his intentions, his words didn’t sound much different from the corporate slogans once chosen by Big Tech. Don’t be evil. Make the world more open and connected. Creating an ethical business is less about founders’ intentions than about how, over time, business owners measure societal good against profit. What should we be doing? If computers are still struggling to answer that question, they should take some solace in knowing that we humans are not much better.


This article originally appeared in Fortune 
