FOR ALL THEIR differences, big tech companies agree on where we’re heading: into a future dominated by smart machines. Google, Amazon, Facebook, and Apple all say that every aspect of our lives will soon be transformed by artificial intelligence and machine learning, through innovations such as self-driving cars and facial recognition. Yet the people whose work underpins that vision don’t much resemble the society their inventions are supposed to transform. WIRED worked with Montreal startup Element AI to estimate the diversity of leading machine learning researchers, and found that only 12 percent were women.
That estimate came from tallying the numbers of men and women who had contributed work at three top machine learning conferences in 2017. It suggests the group supposedly charting society’s future is even less inclusive than the broader tech industry, which has its own well-known diversity problems.
At Google, 21 percent of technical roles are filled by women, according to company figures released in June. When WIRED reviewed Google’s AI research pages earlier this month, they listed 641 people working on “machine intelligence,” of whom only 10 percent were women. Facebook said last month that 22 percent of its technical workers are women. Pages for the company’s AI research group listed 115 people earlier this month, of whom 15 percent were women.
A Google spokesperson told WIRED that the company’s research page lists only people who have authored research papers, not everyone who implements or researches AI technology, but declined to provide more information. Facebook also declined to provide details on the diversity of its AI teams. Joelle Pineau, who leads the Montreal branch of Facebook’s AI lab, said counting the research team’s publicly listed staff was “reasonable,” but that the group is small relative to everyone at Facebook involved in AI, and growing and changing through hiring.
Pineau is part of a faction in AI research trying to improve the field’s diversity—motivated in part by fears that failing to do so increases the chance AI systems have harmful effects on the world. “We have more of a scientific responsibility to act than other fields because we’re developing technology that affects a large proportion of the population,” Pineau says.
Companies and governments are betting on AI because of its potential to let computers make decisions and take action in the world, in areas such as health care and policing. Facebook is counting on machine learning to help it fight fake news in places with very different demographics from its AI research lab, such as Myanmar, where rumors on the company's platform led to violence. Anima Anandkumar, a professor at the California Institute of Technology who previously worked on AI at Amazon, says the risk that AI systems will cause harm to certain groups is higher when research teams are homogenous. “Diverse teams are more likely to flag problems that could have negative social consequences before a product has been launched,” she says. Research has also shown diverse teams are more productive.
Corporate and academic AI teams have already—inadvertently—released data and systems biased against people poorly represented among the high priests of AI. Last year, researchers at the universities of Virginia and Washington showed that two large image collections used in machine learning research, including one backed by Microsoft and Facebook, teach algorithms a skewed view of gender. Images of people shopping and washing are mostly linked to women, for example.
Anandkumar and others also say that the AI community needs better representation of ethnic minorities. In February, researchers from MIT and Microsoft found that facial analysis services that IBM and Microsoft offered to businesses were less accurate for darker skin tones. The companies’ algorithms were near perfect at identifying the gender of men with lighter skin, but frequently erred when presented with photos of women with dark skin. IBM and Microsoft both say they have improved their services. The original, flawed versions were on the market for more than a year.
The scarcity of women among machine learning researchers is hardly surprising. The wider field of computer science is well documented as being dominated by men. Government figures show that the proportion of women awarded bachelor's degrees in computing in the US has slid significantly over the past thirty years, the opposite of the trend in physical and biological sciences.
Little demographic data has been gathered on the people advancing machine learning. WIRED approached Element about doing that after the company published figures on the global AI talent pool. The company compiled a list of the names and affiliations of everyone who had papers or other work accepted at three top academic machine learning conferences—NIPS, ICLR, and ICML—in 2017. The once obscure events now feature corporate parties and armies of recruiters and researchers from industry. Element’s list comprised 3,825 names, of which 17 percent were affiliated with industry. The company counted men and women by asking workers on a crowdsourcing service to research people on the list online. Each name was sent to three workers independently, for consistency. WIRED checked a sample of the data, and excluded six entries that came back incomplete.
The picture that emerged is only an estimate. Rachel Thomas, a professor at the University of San Francisco and cofounder of AI education provider Fast.ai, says it can still be useful. Figures on AI’s diversity problem might help motivate attempts to address it, she says. “I think it’s a fairly accurate picture of who big companies working on AI think are appropriate people to hire,” Thomas says.
AI’s lack of diversity and efforts to address it have won more attention in recent years. Thomas, Anandkumar, and Pineau have all been involved with Women in ML, or WiML, a workshop that runs alongside NIPS, currently the hottest conference in AI. The side-event provides a venue for women to present their work, and in 2017 boasted corporate sponsorship from Google, Facebook, Amazon, and Apple. Similarly, boldface tech brands sponsored a new workshop that ran alongside NIPS last year called Black In AI, which hosted technical research talks, and discussion on how to improve the field’s diversity. Fast.ai’s courses are designed to offer an alternative to the conventional grad school track into AI, and the company offers diversity scholarships.
Despite the growth of such programs, few people in AI expect the proportion of women or ethnic minorities in their field to grow very swiftly.
Diversity campaigns at companies such as Google have failed to significantly shift the predominance of white and Asian men in their technical workforces. Negar Rostamzadeh, a research scientist at Element, says AI has its own version of a problem well documented in tech companies, whereby women are more likely than men to leave the field and less likely to gain promotions. “Working to have good representation of women and minorities is positive, but we also want them to be able to advance,” Rostamzadeh says.
Women in AI research also say the field can be unwelcoming and even hostile to women.
Anandkumar and Thomas say they learned long before completing their PhDs that it’s not unusual for men in computer science or math research to subject women to inappropriate remarks or harassment. Two long-standing computer science professors at Carnegie Mellon University resigned this week, citing “sexist management.” In February, Anandkumar made online posts with the #metoo tag, describing verbal harassment by an unnamed coworker in AI.
Events at NIPS in recent years illustrate the challenge of making the field more welcoming to women—and how the new money flowing into AI can sometimes make it worse.
In 2015, the founders of a Canadian startup called Deeplearni.ng brought t-shirts to the conference with the slogan “My NIPS are NP-hard,” an anatomical math joke some men and women found inappropriate. (The conference’s full name is Neural Information Processing Systems.) Stephen Piron, founder of the startup, now called Dessa, says making the shirt “was a meat-headed move” he regrets, and that his company values inclusion.
At last year’s event, Anandkumar and some other attendees complained that a party hosted by Intel, which also sponsored the Women in ML event, featured female acrobats descending from the ceiling and created an unwelcoming atmosphere for women. An Intel spokesman said the company welcomes feedback on how it can better create environments where everyone feels included. The conference’s official closing party generated similar complaints, triggering investigations into the behavior of two prominent researchers.
One was University of Minnesota professor Brad Carlin, who performed at the NIPS closing party in a band called the Imposteriors made up of statistics professors. Carlin, who plays keys, made a joke about sexual harassment during the show. Tweets complaining about his remark spurred data scientist Kristian Lum to write a blog post alleging that a person involved in the incident—later confirmed to be Carlin—and another, unnamed, researcher had touched her inappropriately, on separate occasions. Carlin later retired after a University of Minnesota investigation found he had breached sexual harassment policy on multiple occasions. Bloomberg reported the second man was Steven Scott, Google’s director of statistics research. A company spokesperson confirmed Scott left the company after an internal investigation into his behavior.
The organizers of NIPS are now working on a more detailed code of conduct for the event, which takes place in Montreal this December. Last week they sent out a survey soliciting opinions on alternatives to the current name that wouldn’t have the same “distasteful connotations.” Candidates include CLIPS, NALS, and ICOLS.
Pineau of Facebook doesn’t have a preference, but is in favor of changing the name. “I have searched for the conference and ended up on some really unpleasant websites,” she says. She also cautions that renaming NIPS shouldn’t distract from AI’s larger, and less easily fixed problems. “I worry a little bit that people will think we’ve done a grand gesture and momentum on other things will slow down,” she says.
This article originally appeared in WIRED