General Intelligence

The U.S. Military Is Worried That Its A.I. Might Be Too Gullible

Plus, a program that can recognize you by your keystroke

Dave Gershgorn
OneZero · Apr 10, 2020 · 3 min read


Photo illustration, source: John Moore/Getty Images

Welcome to General Intelligence, OneZero’s weekly dive into the A.I. news and research that matters. You can receive General Intelligence in your email inbox every Friday by visiting OneZero’s homepage and clicking “Follow.”

Artificial intelligence is already inextricably linked to some of our most critical systems. Automated stock trading runs Wall Street, algorithmic risk assessments are baked into criminal justice and the foster care system, and police around the world have recently gotten very into facial recognition.

But automated systems like these are fallible. On Thursday, DARPA announced that it was partnering with Intel to reinforce the military’s A.I. systems. The project is designing models that are less susceptible to deliberately crafted inputs that fool them, known as “adversarial attacks.”
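To make the threat concrete, here is a minimal sketch of one well-known adversarial attack, the fast gradient sign method (FGSM). It assumes PyTorch and an off-the-shelf pretrained torchvision classifier; the model choice and epsilon value are illustrative assumptions, and this shows the general technique only, not the specific systems DARPA and Intel are working on.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Illustrative sketch of FGSM, a classic adversarial attack.
# Not the DARPA/Intel work; model and epsilon are assumptions for the demo.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

def fgsm_attack(image, label, epsilon=0.03):
    # Compute the loss gradient with respect to the input pixels...
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # ...then nudge every pixel a small step in the direction that
    # increases the loss. The change is imperceptible to a human but
    # can flip the classifier's answer.
    return (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

# Demo input: a random tensor stands in for a real dog photo here.
x = torch.rand(1, 3, 224, 224)   # one 224x224 RGB image
y = torch.tensor([207])          # ImageNet class 207: "golden retriever"
x_adv = fgsm_attack(x, y)
print(model(x).argmax().item(), model(x_adv).argmax().item())
```

On real photographs, even a perturbation this small is often enough to change the predicted label, which is exactly the kind of fragility the DARPA and Intel project aims to engineer out.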

Deep learning algorithms exist to find patterns in incredibly complex datasets. The algorithms reduce the complexity of those patterns over and over again, until the immensely simplified results can be matched against examples the algorithms have already seen. For instance, an algorithm built to detect pictures of dogs identifies key…


Written by Dave Gershgorn

Senior Writer at OneZero covering surveillance, facial recognition, DIY tech, and artificial intelligence. Previously: Qz, PopSci, and NYTimes.