It’s the near future. A self-driving car is zipping its passengers down a country road when, out of nowhere, a handful of pedestrians step into its path. There’s no easy way out: Either the car plows through them or it swerves into a tree, killing those riding inside. What would you rather it do?
If you’re like the people surveyed for “The Social Dilemma of Autonomous Vehicles,” a recent study in the journal Science on the ethical programming of self-driving cars, you’d probably like the car to spare the pedestrians—unless you happened to be riding inside. This natural urge for self-preservation creates a social dilemma that could delay the adoption of the emerging transportation technology and, the authors of the study wrote, perhaps needlessly doom hundreds of thousands of people to preventable traffic deaths.
"Most people want to live in a world where cars will minimize casualties,” said co-author Iyad Rahwan, an associate professor at the Massachusetts Institute of Technology, in a statement. “But everybody wants their own car to protect them at all costs.”
The researchers first polled 1,928 internet users about how they rated the morality of an autonomous car’s response to various hypothetical crashes. A pattern became clear: The more pedestrians that would be spared, the more participants felt it was ethical for the car to sacrifice a passenger—even when they imagined that person was a family member.
Things got more complicated, though, when participants were asked whether the government should require driverless cars to minimize pedestrian deaths at the expense of passengers, and whether they would buy a car programmed to do so. People liked the idea of autonomous cars that would kill one pedestrian to save 10 others. They also liked the idea of other motorists owning cars that would sacrifice passengers to protect pedestrians. But they were less likely to want to own such a car themselves or to support the government enforcing this kind of sacrifice. Overall, respondents were only about a third as likely to buy a car designed to let its occupants die to spare pedestrians as one with no such programming.
Proponents of driverless vehicles argue that the technology will save many lives if widely adopted. Crashes killed nearly 33,000 people in the United States and 1.25 million worldwide in 2013, and human error caused almost all of them. Cars that move by algorithm, on the other hand, can communicate directly with one another and don’t fall asleep, get distracted by text messages or drink too much. Their widespread use may also help cut greenhouse gases, which indirectly cause all sorts of health problems.
If the technology turns out to be as safe as advertised, the risk of dying behind the wheel will almost certainly drop from where it is today—even in a car willing to take its passengers out for the greater good. But some collisions will happen, and some people will still die.
The trouble is that society must decide in advance who will live and who will die—occupants or pedestrians. When it comes down to it, nobody wants to be the one to plow into a tree to save the people on foot. That uncomfortable moral dilemma could stall government policy and keep driverless cars off the road.
“This is a challenge that should be on the minds of carmakers and regulators alike,” the authors wrote, adding that hangups about regulation “may paradoxically increase casualties by postponing the adoption of a safer technology.”