How New A.I. Is Making the Law’s Definition of Hacking Obsolete

Using adversarial machine learning, researchers can trick machines — potentially with fatal consequences. But the legal system hasn’t caught up.

Ryan Calo
OneZero
6 min read · Aug 21, 2019



Imagine you’re cruising in your new Tesla, autopilot engaged. Suddenly you feel yourself veer into the other lane, and you grab the wheel just in time to avoid an oncoming car. When you pull over, pulse still racing, and look over the scene, it all seems normal. But upon closer inspection, you notice a series of translucent stickers leading away from the dotted lane divider. And to your Tesla, these stickers represent a non-existent bend in the road that could have killed you.

In April this year, a research team at the Chinese tech giant Tencent showed that a Tesla Model S in autopilot mode could be tricked into following a bend in the road that didn’t exist simply by adding stickers to the road in a particular pattern. Earlier research in the U.S. had shown that small changes to a stop sign could cause a driverless car to mistakenly perceive it as a speed limit sign. Another study found that by playing tones indecipherable to a person, a malicious attacker could cause an Amazon Echo to order unwanted items.
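To give a flavor of how such attacks work under the hood, here is a minimal, illustrative sketch of the best-known recipe for fooling a neural network: the “fast gradient sign method” of Goodfellow et al. (2015). To be clear, this is not the Tencent team’s technique (their stickers required considerably more engineering), and the PyTorch model and random stand-in image below are assumptions made purely so the example is self-contained. The core idea is the same, though: compute how the model’s error changes with respect to the input pixels, then nudge every pixel a little in the worst direction.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Stand-in target model (randomly initialized so this sketch runs offline;
# a real attack would target a trained network).
model = models.resnet18().eval()

# Stand-in input: random noise in place of a real photo of a road scene.
x = torch.rand(1, 3, 224, 224, requires_grad=True)

# The model's current prediction -- the label the attacker wants to change.
label = model(x).argmax(dim=1)

# Key step: take the gradient of the loss with respect to the *input
# pixels*, not the model's weights.
loss = F.cross_entropy(model(x), label)
loss.backward()

# Nudge every pixel slightly in the direction that most increases the loss.
epsilon = 0.05  # small enough that a person would barely notice the change
x_adv = (x + epsilon * x.grad.sign()).detach()

print("before:", label.item(), "after:", model(x_adv).argmax(dim=1).item())
```

Physical-world attacks like the lane-divider stickers follow the same logic, except the perturbation is confined to a small patch an attacker can actually print and place, rather than spread invisibly across every pixel.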



Written by Ryan Calo

Law Professor, University of Washington
