
How should autonomous vehicles be programmed?

Massive global survey reveals ethics preferences and regional differences.

Image caption: Ethical questions involving autonomous vehicles are the focus of a new global survey conducted by MIT researchers.

A massive new survey developed by MIT researchers reveals some distinct global preferences concerning the ethics of autonomous vehicles, as well as some regional variations in those preferences.

The survey has global reach and a unique scale, with over 2 million online participants from over 200 countries weighing in on versions of a classic ethical conundrum, the “Trolley Problem.” The problem involves scenarios in which an accident involving a vehicle is imminent, and the vehicle must opt for one of two potentially fatal options. In the case of driverless cars, that might mean swerving toward a couple of people, rather than a large group of bystanders.

“The study is basically trying to understand the kinds of moral decisions that driverless cars might have to resort to,” says Edmond Awad, a postdoc at the MIT Media Lab and lead author of a new paper outlining the results of the project. “We don’t know yet how they should do that.”

Still, Awad adds, “We found that there are three elements that people seem to approve of the most.”

Indeed, the most emphatic global preferences in the survey are for sparing the lives of humans over the lives of other animals; sparing the lives of many people rather than a few; and preserving the lives of the young, rather than older people.

“The main preferences were to some degree universally agreed upon,” Awad notes. “But the degree to which they agree with this or not varies among different groups or countries.” For instance, the researchers found a less pronounced tendency to favor younger people, rather than the elderly, in what they defined as an “eastern” cluster of countries, including many in Asia.

The paper, “The Moral Machine Experiment,” is being published today in Nature.

The authors are Awad; Sohan Dsouza, a doctoral student in the Media Lab; Richard Kim, a research assistant in the Media Lab; Jonathan Schulz, a postdoc at Harvard University; Joseph Henrich, a professor at Harvard; Azim Shariff, an associate professor at the University of British Columbia; Jean-François Bonnefon, a professor at the Toulouse School of Economics; and Iyad Rahwan, an associate professor of media arts and sciences at the Media Lab, and a faculty affiliate in the MIT Institute for Data, Systems, and Society.

Awad is a postdoc in the MIT Media Lab’s Scalable Cooperation group, which is led by Rahwan.

To conduct the survey, the researchers designed what they call “Moral Machine,” a multilingual online game in which participants could state their preferences concerning a series of dilemmas that autonomous vehicles might face. For instance: If it comes right down to it, should autonomous vehicles spare the lives of law-abiding bystanders, or, alternately, law-breaking pedestrians who might be jaywalking? (Most people in the survey opted for the former.)

All told, “Moral Machine” compiled nearly 40 million individual decisions from respondents in 233 countries; the survey collected 100 or more responses from 130 countries. The researchers analyzed the data as a whole, while also breaking participants into subgroups defined by age, education, gender, income, and political and religious views. There were 491,921 respondents who offered demographic data.

The scholars did not find marked differences in moral preferences based on these demographic characteristics, but they did find larger “clusters” of moral preferences based on cultural and geographic affiliations. They defined “western,” “eastern,” and “southern” clusters of countries, and found some more pronounced variations along these lines. For instance: Respondents in southern countries had a relatively stronger tendency to favor sparing young people rather than the elderly, especially compared to the eastern cluster.
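
The kind of grouping described here can be illustrated with a short sketch. The Python snippet below is a minimal, hypothetical example of clustering per-country preference scores with standard hierarchical clustering; the country names, preference dimensions, and numbers are invented for illustration and do not represent the study's actual data or exact method.

    # Hypothetical sketch: grouping countries by moral-preference profiles.
    # All data below is invented for illustration; the real study drew on
    # millions of responses across many scenario dimensions.
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    countries = ["CountryA", "CountryB", "CountryC", "CountryD"]

    # Each row: scores in [0, 1] for (sparing the young, sparing more
    # lives, sparing humans over animals).
    preferences = np.array([
        [0.80, 0.90, 0.95],
        [0.75, 0.88, 0.93],
        [0.55, 0.85, 0.90],
        [0.52, 0.83, 0.91],
    ])

    # Agglomerative (hierarchical) clustering on the preference vectors.
    tree = linkage(preferences, method="ward")

    # Cut the tree into two clusters and report the grouping.
    labels = fcluster(tree, t=2, criterion="maxclust")
    for country, label in zip(countries, labels):
        print(country, "-> cluster", label)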

Awad suggests that acknowledging these types of preferences should be a basic part of informing public-sphere discussion of these issues. For instance, since there is a moderate preference in all regions for sparing law-abiding bystanders rather than jaywalkers, knowing these preferences could, in theory, inform the way software is written to control autonomous vehicles.
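
As a purely illustrative sketch of what “informing the software” could mean, the short Python example below breaks a tie between two unavoidable outcomes using survey-style preference weights. The weights, fields, and scoring rule are hypothetical and are not drawn from the study or from any real vehicle system.

    # Hypothetical sketch: scoring two unavoidable outcomes with
    # survey-derived preference weights. All values are invented.
    from dataclasses import dataclass

    # Illustrative weights, loosely echoing the survey's broad findings:
    # favor humans over animals, and penalize harming lawful pedestrians.
    WEIGHTS = {"human": 1.0, "animal": 0.2, "lawful_bonus": 0.3}

    @dataclass
    class Outcome:
        humans: int   # number of humans harmed if this option is chosen
        animals: int  # number of animals harmed
        lawful: bool  # whether the pedestrians were crossing legally

    def harm_score(o: Outcome) -> float:
        """Lower is better: weighted harm for one candidate outcome."""
        score = o.humans * WEIGHTS["human"] + o.animals * WEIGHTS["animal"]
        if o.lawful:
            # Harming law-abiding pedestrians is penalized more heavily.
            score += o.humans * WEIGHTS["lawful_bonus"]
        return score

    # Choose the option with the lower weighted harm.
    option_a = Outcome(humans=2, animals=0, lawful=True)
    option_b = Outcome(humans=3, animals=0, lawful=False)
    print("Chosen outcome:", min((option_a, option_b), key=harm_score))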

“The question is whether these differences in preferences will matter in terms of people’s adoption of the new technology when [vehicles] employ a specific rule,” he says.

Rahwan, for his part, notes that “public interest in the platform surpassed our wildest expectations,” allowing the researchers to conduct a survey that raised awareness about automation and ethics while also yielding specific public-opinion information.

“On the one hand, we wanted to provide a simple way for the public to engage in an important societal discussion,” Rahwan says. “On the other hand, we wanted to collect data to identify which factors people think are important for autonomous cars to use in resolving ethical tradeoffs.”

Beyond the results of the survey, Awad suggests, seeking public input about an issue of innovation and public safety should continue to become a larger part of the dialogue surrounding autonomous vehicles.

“What we have tried to do in this project, and what I would hope becomes more common, is to create public engagement in these sorts of decisions,” Awad says.

Press Mentions

The New Yorker

New Yorker contributor Caroline Lester writes about the Moral Machine, an online platform developed by MIT researchers to crowdsource public opinion on the ethical issues posed by autonomous vehicles. 

Fast Company

Katharine Schwab of Fast Company writes about the Media Lab’s Moral Machine project, which surveyed people about their feelings on the ethical dilemmas posed by driverless vehicles. Because the results vary based on region and economic inequality, the researchers believe “self-driving car makers and politicians will need to take all of these variations into account when formulating decision-making systems and building regulations,” Schwab notes.

BBC News

BBC News reporter Chris Fox writes that MIT researchers surveyed people about how an autonomous vehicle should operate when presented with different ethical dilemmas. Fox explains that the researchers hope their findings will “spark a ‘global conversation’ about the moral decisions self-driving vehicles will have to make.”

The Economist

MIT researchers conducted a global survey to determine how people felt about the ethical dilemmas presented by autonomous vehicles, The Economist reports. Prof. Iyad Rahwan explains that he and his colleagues thought it was important to survey people from around the world as “nobody was really investigating what regular people thought about this topic.”

National Public Radio (NPR)

MIT researchers created an online game to determine how people around the world think autonomous vehicles should handle moral dilemmas, reports Laurel Wamsley for NPR. “Before we allow our cars to make ethical decisions, we need to have a global conversation to express our preferences to the companies that will design moral algorithms,” the researchers explain, “and to the policymakers that will regulate them.”

Fortune

Lucas Laursen writes for Fortune that a global survey created by MIT researchers uncovered different regional attitudes about how autonomous vehicles should handle unavoidable collisions. Global carmakers, Laursen writes, “will need to use the findings at the very least to adapt how they sell their increasingly autonomous cars, if not how the cars actually operate.”

PBS NewsHour

MIT researchers used an online platform known as the “Moral Machine” to gauge how humans respond to ethical decisions made by artificial intelligence, reports Jamie Leventhal for PBS NewsHour. According to postdoc Edmond Awad, two goals of the platform were to foster discussion and “quantitatively [measure] people’s cultural preferences.”

The Guardian

A new study from Media Lab researchers highlights the result of an online survey that asked volunteers how a self-driving vehicle should respond to a variety of potential accidents. “Moral responses to unavoidable damage vary greatly around the world in a way that poses a big challenge for companies planning to build driverless cars,” writes Alex Hern in The Guardian.

The Washington Post

Carolyn Johnson writes for The Washington Post about a new MIT study “that asked people how a self-driving car should respond when faced with a variety of extreme trade-offs.” According to Prof. Iyad Rahwan, “regulating AI will be different from traditional products, because the machines will have autonomy and the ability to adapt,” explains Johnson.

Motherboard

Using an online platform known as the “Moral Machine,” researchers at the Media Lab have surveyed more than two million people from 233 countries about how an autonomous vehicle should respond in a crash. “The Moral Machine game is similar to the infamous trolley problem,” writes Tracey Lindeman for Motherboard, “but calibrated for the autonomous car.”

Popular Mechanics

Popular Mechanics reporter Dave Grossman writes that MIT researchers surveyed more than 2 million people to gauge people’s opinions on the ethics of autonomous vehicles. Grossman explains that the researchers believe their findings demonstrate how “people across the globe are eager to participate in the debate around self-driving cars and want to see algorithms that reflect their personal beliefs.”

The Verge

A new paper by MIT researchers details the results of a survey on an online platform they developed, which asked respondents to make ethical decisions about fictional self-driving car crashes. “Millions of users from 233 countries and territories took the quiz, making 40 million ethical decisions in total,” writes James Vincent of The Verge.

Wired

The results of the Media Lab’s “Moral Machine” survey provide a glimpse into how people will respond to the ethical dilemmas surrounding autonomous vehicle accidents. “The point here, the researchers say, is to initiate a conversation about ethics in technology, and to guide those who will eventually make the big decisions about AV morality,” writes Wired’s Aarian Marshall.
