General Intelligence
The U.S. Military Is Worried That Its A.I. Might Be Too Gullible
Plus, a program that can recognize you by your keystroke
Welcome to General Intelligence, OneZero’s weekly dive into the A.I. news and research that matters. You can receive General Intelligence in your email inbox every Friday by visiting OneZero’s homepage and clicking “Follow.”
Artificial intelligence is already inextricably linked to some of our most critical systems. Automated stock trading runs Wall Street, algorithmic risk assessments are baked into criminal justice and the foster care system, and police around the world have recently embraced facial recognition.
But automated systems like these are fallible. On Thursday, DARPA announced that it was partnering with Intel to harden the military's A.I. systems. The project aims to build models that are less susceptible to deliberate trickery, otherwise known as "adversarial attacks."
Deep learning algorithms exist to find patterns in incredibly complex datasets. The algorithms reduce the complexity of those patterns over and over again, until the immensely simplified results can be matched against examples the algorithms have already seen. For instance, an algorithm built to detect pictures of dogs identifies key…
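That pattern matching is exactly what adversarial attacks exploit: a carefully chosen, tiny perturbation of the input can flip which pattern the model thinks it sees. Here is a minimal sketch of the idea, using a toy linear classifier and the "fast gradient sign" trick. The weights, input values, and epsilon below are all illustrative, not drawn from any real system.

```python
import numpy as np

# Hypothetical trained weights of a toy linear classifier.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    """Return class 1 if the linear score is positive, else class 0."""
    return int(w @ x + b > 0)

# A "clean" input the model confidently classifies as class 1.
x = np.array([0.3, -0.2, 0.4])

# Fast-gradient-sign attack: nudge each feature by epsilon in the
# direction that lowers the score. For a linear model, the gradient
# of the score with respect to x is simply w.
epsilon = 0.5
x_adv = x - epsilon * np.sign(w)

print(predict(x), predict(x_adv))  # the tiny perturbation flips the label
```

The perturbation is small and structured, which is why such attacks can be hard for humans to notice even as they reliably fool the model.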