New Coalition Calls to End ‘Racist’ A.I. Research Claiming to Match Faces to Criminal Behavior

‘Criminality cannot be predicted. Full stop.’

Dave Gershgorn
OneZero


Photo: NurPhoto/Getty Images

More than 500 experts on artificial intelligence, technology, and sociology have signed a letter to a major academic publisher asking it to halt the publication of any research that uses machine learning to predict whether someone might commit a crime.

The letter was written in response to a May 5 press release from Harrisburg University in Pennsylvania, which stated that two professors and a PhD student had created software that could predict whether someone was likely to commit a crime based on nothing more than a picture of their face. The publisher, Springer Nature, was slated to include that paper in an upcoming book series.

Harrisburg’s press release claimed that the new software is 80% accurate and completely unbiased, meaning it supposedly contained no statistical predisposition to label someone as more likely to be a criminal based on their race or gender. The release also pitched the software as a tool for law enforcement.

The university, which did not immediately respond to a request for comment, took down its press release on May 6 and said it would post an update after the research was published. Springer Nature told…


Dave Gershgorn is a Senior Writer at OneZero covering surveillance, facial recognition, DIY tech, and artificial intelligence. Previously: Quartz, Popular Science, and The New York Times.