The Real Moral Dilemma of Self-Driving Cars

It has nothing to do with the ‘trolley problem’

Will Oremus
Published in OneZero
Nov 7, 2019 · 6 min read


An Uber self-driving car sits on display ahead of an Uber products launch event in San Francisco, California, on September 26, 2019. Photo: Philip Pacheco/Getty Images

The advent of self-driving cars revived the decades-old philosophical conundrum known as the “trolley problem.” The basic setup is this: A vehicle is hurtling toward a group of five pedestrians, and the only way to save them is to swerve and run over a single pedestrian instead.

For philosophers and psychologists, it’s a pure thought experiment — a tool to tease out and scrutinize our moral intuitions. Most people will never face such a stark choice, and even if they did, studies suggest their reaction in the moment would have little to do with their views on utilitarianism or moral agency. But self-driving cars have given the problem a foothold in the real world. Autonomous vehicles can be programmed with policies on such matters, and while any given car may never face a split-second tradeoff between greater and lesser harms, some surely will. It matters how their creators evaluate these situations.

Solving “the trolley problem” for self-driving cars has gotten a lot of attention, but it may actually be a distraction from a far more pressing moral dilemma. A report on a self-driving car crash, released this week by U.S. safety investigators, suggests that the real choice at this stage of self-driving car development is not between one innocent victim and other innocent victims; it’s between caution and competitive advantage, a point the Guardian’s Alex Hern has also made.

The report examines a March 2018 accident in which a self-driving Uber, with an inattentive test driver behind the wheel, hit and killed a woman crossing a street in Tempe, Arizona. An investigation by the National Transportation Safety Board (NTSB) found that part of the problem was that Uber’s systems simply didn’t account for the possibility of pedestrians jaywalking. So even though the car’s sensors detected the woman walking into the road long before the crash, the software struggled to identify her as human and failed to predict that she would keep walking.

That seems like an egregious oversight, and one that should have been addressed in…

Will Oremus
Senior Writer, OneZero, at Medium