
The Real Moral Dilemma of Self-Driving Cars

It has nothing to do with the ‘trolley problem’

Will Oremus
OneZero
6 min read · Nov 7, 2019


An Uber self-driving car sits on display ahead of an Uber products launch event in San Francisco, California, on September 26, 2019. Photo: Philip Pacheco/Getty Images

The advent of self-driving cars revived the decades-old philosophical conundrum known as the “trolley problem.” The basic setup is this: A vehicle is hurtling toward a group of five pedestrians, and the only way to save them is to swerve and run over a single pedestrian instead.

For philosophers and psychologists, it’s a pure thought experiment — a tool to tease out and scrutinize our moral intuitions. Most people will never face such a stark choice, and even if they did, studies suggest their reaction in the moment would have little to do with their views on utilitarianism or moral agency. Self-driving cars have given the problem a foothold in the real world. Autonomous vehicles can be programmed with policies on such matters, and while any given car may never face a split-second tradeoff between greater and lesser harms, some surely will. It actually matters how their creators evaluate these situations.

Solving “the trolley problem” for self-driving cars has gotten a lot of attention. But it may actually be a distraction from a far more pressing moral dilemma. U.S. safety investigators released a report on a self-driving car crash this week that suggests the real choice at this stage of self-driving car development is not between one…
