How Should Self-Driving Cars Choose Who Not to Kill?

A popular MIT quiz asked ordinary people to make ethical judgments for machines

Credit: Scalable Cooperation/MIT Media Lab

Medium: What’s the difference between the way humans and machines make moral decisions?

What did The Moral Machine tell you about how we want machines to act?

How much attention should self-driving car manufacturers pay to your results?

In an earlier study, you found that people thought an autonomous vehicle should protect the greater number of people, even if that meant sacrificing its passengers. But they also said they wouldn’t buy an autonomous car programmed to act this way. What does this tell us?

Should the U.S. government follow Germany’s example and issue ethical guidelines for self-driving cars?

Are people’s values closely aligned enough to achieve universal guidelines?

An engineer working on Google’s self-driving car project said your results were not very significant, because the real answer in these scenarios would almost always be “slam on the brakes.” Are The Moral Machine’s scenarios actually relevant to real life?

