While artificial intelligence (AI) has the potential to solve an incredible spectrum of problems and challenges in our lives, our work and our world, there is a widening disconnect between the people who are introducing and deploying AI-based solutions and those who set policies for when and how these solutions are used.
Much has been written about one consequence of this disconnect—algorithmic bias in AI systems, in which machine learning algorithms trained on data that reflects historical discrimination replicate and even magnify it. But there is another pressing issue: the many missed opportunities to use AI for the broader good.
Just as AI systems susceptible to bias are a problem, so too is inadequate focus on contributions that improve the lives of marginalized communities, such as Black and brown individuals, economically vulnerable populations and many other groups whose interests are underserved in society. If the teams that set research directions, write algorithms or deploy them are made up of individuals with similar backgrounds and experiences, then we will end up with research that benefits a similarly narrow and already privileged subset of society.
I see this gap in action every day. I was born and raised in Ethiopia, a country with a beautiful culture and history, but also a country that continues to face challenges that are distinct from those in the United States. So I look out for AI research focused on helping the people of Ethiopia and the people of Africa more generally. More often than not, however, I find a conspicuous lack of such research.
At many of the AI conferences that I attend, few research papers focus on problems that specifically affect people in Africa. Research communities that do, such as Information Communication Technology for Development (ICT4D), are still not fully integrated with the AI community as a whole. None of this will change without adequate representation of diverse viewpoints on research teams; only then will we be able to produce research that serves the diverse needs of communities throughout the world.
On AI Teams, Representation Matters
For many reasons, it matters who is conducting AI research. The researcher sets the question, decides which datasets to use, how to conduct analyses and how to present results. At each of these points, the researcher makes subjective decisions that can determine which communities benefit from the research. Greater diversity within research communities can act as a check on bias throughout this process, and diverse teams can think more creatively about the problems to which we can apply techniques and insights from AI.
When I was an intern with the Computational Social Science Group at Microsoft Research, I had the opportunity to work as part of one such team and to address an issue that had been underexplored by the AI community. I was aware of the health data gap faced by many African nations. Unlike in the United States, where the Centers for Disease Control has statistics on just about every disease you can imagine, in Africa even basic statistics, such as HIV/AIDS mortality rates, are frequently missing. When such datasets are available, they are often noisy or prone to errors.
To help address this gap, my collaborators at Microsoft Research and Stony Brook University and I set out to understand and measure the health information needs of people using Bing search queries from all 54 nations in Africa. We uncovered diverse information needs related to HIV/AIDS, malaria and tuberculosis, ranging from searches about symptoms, to beliefs in natural cures and remedies, to concerns about HIV/AIDS stigma. Speaking to missed opportunities, ours was the first research project to use a large Web-based data source to address health issues affecting all 54 African nations. Clearly, there are opportunities for further work in this area, and we should not miss them.
Fortunately, the last few years have seen the increased presence and inclusion of Black and African researchers in the field of AI through initiatives such as Black in AI, which I co-founded and co-organize with Timnit Gebru and others. I am therefore optimistic that we will continue to bridge gaps between AI research and underserved and marginalized communities. By ensuring representation of all communities and treating lived experiences as expertise, we can enable and empower our research communities to come up with more diverse ways to identify and approach problems for the benefit of all.
An Interdisciplinary Approach To Diversity
AI research must also acknowledge that the problems we would like to solve are not purely technical, but rather interact with a complex world full of structural challenges and inequalities. It is therefore crucial that AI researchers collaborate closely with individuals who possess diverse training and domain expertise.
People in interdisciplinary communities are thinking critically about the impact of AI and its place in society. At Cornell University, where I’m a Ph.D. candidate in computer science, I am also a member of the MacArthur Foundation–funded AI, Policy, and Practice initiative as well as the Center for the Study of Inequality, which brings together researchers in the social and computational sciences as well as law to understand the causes and consequences of social and economic inequality.
This diverse training deeply informs my research. Inspired by this, Kira Goldner and I co-founded the Mechanism Design for Social Good (MD4SG) group, which we co-organize with Irene Lo. MD4SG is an interdisciplinary and multi-institutional research group that aims to use algorithmic, mechanism design and AI techniques to improve access to opportunity. We focus on communities of individuals for whom opportunities have historically been limited and explore problems such as how to allocate low-income housing resources, improve agriculture systems in the developing world and increase civic participation.
In both private enterprise and the public sector, research must be reflective of the society we’re serving. We need to have more conversations across business and academia about the impact that diversity can have in AI, how to avoid causing unintended harm, and how algorithms can help us promote equality and justice within society.
This article originally appeared in Forbes