Oct 23, 2018 ● New York Times
Five Artificial Intelligence Insiders in Their Own Words

Five artificial intelligence insiders discuss the technology’s promise, its risks and its impact on humanity

YVES BÉHAR

C.E.O. and founder, fuseproject

While artificial intelligence touches so much of what we do today, the current thinking behind A.I. is too limited. To reach A.I.’s potential, we need to think beyond what it does for search, social media, security, and shopping – and beyond making things merely “smarter.”

Instead, we should imagine how A.I. can be both smart and compassionate, a combination that can solve the most important human problems. Good design can lead the way by infusing an approach centered around the user and the real needs that A.I. can address.

We should be thinking about A.I. in new contexts – the newborn of the overworked parent, the cancer patient who needs round-the-clock attention, and the child with learning and behavioral difficulties. A.I. holds great promise for them in combination with design that is empathic and follows a few principles.

First, good design should help without intruding. A.I. must free our attention rather than take it away, and it must enhance human connection and abilities rather than replace humans. Great design can create A.I. interfaces that fit discreetly and seamlessly into users’ lives, solving problems without creating distraction.

Second, good design can bring A.I.’s benefits to those who otherwise might be left out. That so much of A.I. is currently directed at the affluent contradicts the notion that good design can serve everyone, regardless of age, condition or economic background. We should take the “A.I. for all” approach, and it should follow a human need. Designers and researchers should work together in the early stages of design to identify those needs, develop A.I. that responds compassionately to human demands, and use design to ensure cost-effective, accessible A.I.-enabled products and services.

Third, A.I. should never create emotional dependence. We see this danger in social media, where A.I. traps users in echo chambers and emotional deserts. Thoughtful design can create A.I. experiences that evolve with the user to continuously serve their needs. The combination of good A.I. and good design can ultimately create products that help people live healthier, longer and happier lives.

This is not a purely utopian vision. It is achievable. Developers must recognize that while they can do important, valuable work for a range of commercial products and consumers, the most meaningful A.I. will touch those with greater needs and less access. The result will mean well-designed products and experiences that tackle real needs, with the power to improve, not complicate, human lives.

LILA IBRAHIM

Chief operating officer, DeepMind

Artificial intelligence offers new hope for addressing challenges that seem intractable today, from poverty to climate change to disease. As a tool, A.I. could help us build a future characterized by better health, limitless scientific discovery, shared prosperity and the fulfillment of human potential. At the same time there’s a growing awareness that innovation can have unintended consequences and valid concern that not enough is being done to anticipate and address them.

Yet for A.I. optimists, this increasing attention to risks shouldn’t be cause for discouragement or exasperation. Rather, it’s an important catalyst for thinking about the kind of world we want to live in — a question technologists and broader society must answer together.

Throughout history few, if any, societal transformations have been preceded by so much scrutiny and speculation about all the ways they could go wrong. A sense of urgency about the risks of A.I., from unintended outcomes to unintentional bias, is appropriate. It’s also helpful. Despite impressive breakthroughs, A.I. systems are still relatively nascent. Our ambition should be not only to realize their potential, but to do so safely.

Alongside vital public and political conversations, there’s plenty of technical work to be done too. Already, some of the world’s brightest technological minds are channeling their talent into developing A.I. in line with society’s highest ethical values.

For example, as more people and institutions use A.I. systems in everyday life, interpretability — whether an A.I. system can explain how it reaches a decision — is critical. It’s one of the major open challenges for the field, and one that’s energizing researchers across the world.

A recent research collaboration between DeepMind and London’s Moorfields Eye Hospital demonstrated a system that not only recommended the correct referral decision for over 50 eye diseases with 94 percent accuracy but also, crucially, presented a visual map that showed doctors how its conclusions were reached.

Meanwhile, researchers at Berkeley have been studying how humans can make sense of robot behavior and, in turn, developing ways for robots to signal their intentions. A team at OpenAI has developed approaches for interpretable communication between humans and computers. Others at DeepMind have been working on A.I. “theory of mind” — the ability to model what drives other systems’ actions and behaviors, including beliefs and intentions.

These are just early examples of progress. Much more must be done, including deeper collaborations between scientists, ethicists, sociologists and others. Optimism should never give way to complacency. But the fact that more energy and funding is being invested into this kind of fundamental research is a positive sign. After all, these challenges are as complex and as important as the many other impressive achievements in the A.I. field to date, and meeting them should be just as prestigious.

Awareness of risks can also be a call to action. It’s precisely because A.I. technology has been the subject of so many hopes and fears that we have an unprecedented chance to shape it for the common good.

NILS GILMAN

Vice president for programs at the Berggruen Institute

We stand on the cusp of a revolution, the engineers tell us. New gene-editing techniques, especially in combination with artificial intelligence technologies, promise unprecedented new capacities to manipulate biological nature — including human nature itself. The potential could hardly be greater: whole categories of disease conquered, radically personalized medicine, and drastically extended mental and physical prowess.

However, at the Berggruen Institute we believe that strictly engineering conceptions of these new technologies are not enough to grasp the significance of these potential changes.

So profound is the potential impact of these technologies that they challenge the very definition of what it is to be human. Seen in this light, the development and deployment of these technologies represent a practical experiment in the philosophical, today conducted primarily by engineers and scientists. But we need a broader conversation.

For millenniums Western philosophy took for granted the absolute distinction between the living and the nonliving, between nature and artifice, between non-sentient and sentient beings. We presumed that we — we humans — were the only thinking things in a world of mere things, subjects in a world of objects. We believed that human nature, whatever it may be, was fundamentally stable.

But now the A.I. engineers are designing machines they say will think, sense, feel, cogitate, and reflect, and even have a sense of self. Bioengineers are contending that bacteria, plants, animals, and even humans can be radically remade and modified. This means that the traditional distinctions between man and machine, as between humans and nature — distinctions that have underpinned Western philosophy, religion, and even political institutions — no longer hold. In sum, A.I. and gene editing promise (or is it threaten?) to redefine what counts as human and what it means to be human, philosophically as well as poetically and politically.

The questions posed by these experiments are the most profound possible. Will we use these technologies to better ourselves or to divide or even destroy humanity? These technologies should allow us to live longer and healthier lives, but will we deploy them in ways that also allow us to live more harmoniously with each other? Will these technologies encourage the play of our better angels or exacerbate our all-too-human tendencies toward greed, jealousy, and social hierarchy? Who should be included in conversations about how these technologies will be developed? Who will have decision rights over how these technologies are distributed and deployed? Just a few people? Just a few countries?

To address these questions, the Berggruen Institute is building transnational networks of philosophers, technologists, policymakers and artists who are thinking about how A.I. and gene editing are transfiguring what it means to be human. We seek to develop tools for navigating the most fundamental questions: not just about what sort of world we can build, but what sort of world we should build — and also avoid building. If A.I. and biotechnology deliver even half of what the visionaries believe is in store, then we can no longer defer the question of what sort of human beings we want to be, both as individuals and as a collective.

STEPHANIE DINKINS

Artist & associate professor of art, Stony Brook University; fellow, Data & Society Research Institute; Soros Equality Fellow; 2018 Resident Artist, Eyebeam

My journey into the world of artificial intelligence began when I befriended Bina48 — an advanced social robot that is black and female, like me. The videotaped results of our meetings form an ongoing project called “Conversations with Bina48.” Our interactions raised many questions about the algorithmically negotiated world now being constructed. They also pushed my art practice into focused thought and advocacy around A.I. as it relates to black people — and other non-dominant cultures — in a world already governed by systems that often offer us both too little and overly focused attention.

Because A.I. is no single thing, it’s difficult to speak to its overarching promise, but questions abound. What happens when an insular subset of society encodes governing systems intended for use by the majority of the planet? What happens when those writing the rules — in this case, we will call it code — might not know, care about, or deliberately consider the needs, desires, or traditions of the people their work impacts? What happens if the code making decisions is disproportionately informed by biased data, systemic injustice, and misdeeds committed to preserve wealth “for the good of the people”?

I am reminded that the authors of the Declaration of Independence, a small group of white men said to be acting on behalf of the nation, did not extend rights and privileges to folks like me — mainly black people and women. Laws and code operate similarly to protect the rights of those who write them. I worry that A.I. development — which is reliant on the privileges of whiteness, men and money — cannot produce an A.I.-mediated world of trust and compassion that serves the global majority in an equitable, inclusive, and accountable manner. People of color, in particular, can’t afford to consume A.I. as mere algorithmic systems. Those creating A.I. must realize that systems that work for the betterment of people who are not at the table are good. And systems that collaborate with and hire those missing from the table are even better.

A.I. is already quietly reshaping systems of trust, industry, government, justice, medicine and, indeed, personhood. Ultimately, we must consider whether A.I. will magnify and perpetuate existing injustice, or whether we will enter a new era of computationally augmented humans working amicably beside self-driven A.I. partners. The answer, of course, depends on our willingness to dislodge the stubborn civil rights transgressions and prejudices that divide us. After all, A.I. — and its related technologies — carry the foibles of their makers.

A.I. presents the challenge of reckoning with our skewed histories, while working to counterbalance our biases, and genuinely recognizing ourselves in each other. This is an opportunity to expand — rather than further homogenize — what it means to be human through and alongside A.I. technologies. This implies changes in many systems: education, government, labor, and protest, to name a few. All are opportunities if we, the people, demand them and our leaders are brave enough to take them on.

ANDRUS ANSIP

European Commission vice president for the digital single market

In health care today, algorithms can beat all but the most qualified dermatologists in recognizing skin cancer. A recent study found that dermatologists could identify 86.6 percent of skin cancers, while a machine using artificial intelligence (A.I.) detected 95 percent.

In Denmark, when people call 112 — Europe’s emergency number — an A.I.-driven computer analyzes the voice and background noise to check whether the caller has had a heart attack.

A.I. is one of the most promising technologies of our times.

It is an area where several European Union countries have decided to invest, whether by funding research, formulating a national strategy or including A.I. in a wider digital agenda. We encourage all our countries to do this, urgently, so that the E.U. develops and promotes A.I. in a coordinated way.

This should be a Pan-European project, not a series of national initiatives that may or may not overlap. It is the best way for Europe to avoid a splintered A.I. environment that hinders our collective progress, and to capitalize on our strong scientific and commercial position in this fast-evolving sector.

That can only happen if all E.U. countries work and pull together.

Today, many breakthroughs in A.I. come from European labs.

We have a good talent and industrial base — as a world region, Europe is home to the highest number of service robot manufacturers. We have world-class expertise in techniques requiring less data — “small data” — to train algorithms.

The European Commission has long recognized the importance and potential of A.I. and robotics, along with the need for much higher investment and an appropriate environment that adequately addresses the many ethical, legal and social issues involved.

This can only be achieved by starting from common European values — such as diversity, nondiscrimination and the right to privacy — to develop A.I. in a responsible way. After all, people will not use a technology that they do not trust. We set out Europe’s way forward on A.I. in a dedicated strategy earlier this year.

Our aim is to be ahead of technological development, to maximize the impact of investment in supercomputers and research, and to encourage more A.I. use by the private and public sectors. We are consulting widely — including with non-E.U. countries — to design ethical guidelines for A.I. technologies.

The project to build a Digital Single Market based on common Pan-European rules creates the right environment and conditions for the development and take-up of A.I. in Europe — particularly concerning data, the lifeblood of A.I.

It frees up data flows, improves overall access to data and encourages more use of open data.

It aims to improve connectivity, protect privacy and strengthen cybersecurity.

A.I. has the potential to benefit society as a whole: for people going about their everyday lives and for business.

Europe is stepping up efforts to be at the forefront of the exciting possibilities that A.I. technologies offer.

Correction: October 19, 2018

Because of an editing error, an earlier version of this article misstated the name of Yves Béhar’s company. It is fuseproject, not fuselabs.


This article originally appeared in The New York Times.
