Asimov’s Three Laws Helped Shape A.I. and Robotics. We Need Four More.

A leading expert in the emerging field of A.I. law argues it’s high time to update the three laws of robotics

Eric Allen Been
OneZero


Image: Hiroshi Watanabe/Getty Images

Isaac Asimov’s three laws of robotics are probably the most famous and influential science-fictional lines of tech policy ever written. The renowned writer speculated that as machines took on greater autonomy and a greater role in human life, we would need strict regulations to ensure they could not put us in harm’s way. Those proposed laws hark back to 1942, when the first of Asimov’s Robot stories were published. Now, with A.I., software automation, and factory robotics ascendant, the dangers posed by machines and their makers are even more complex and urgent.

In Frank Pasquale’s provocative and well-wrought book, New Laws of Robotics: Defending Human Expertise in the Age of AI, the Brooklyn Law School professor proposes adding four new principles to Asimov’s original three, which, for those unfamiliar, are as follows:

  1. A robot must not harm a human or, through inaction, allow a human to come to harm.
  2. A robot must obey orders given to it by humans, unless an order would violate the first law.
  3. A robot must protect its own existence, as long as doing so does not conflict with the first or second law.

