New Coalition Calls to End ‘Racist’ A.I. Research Claiming to Match Faces to Criminal Behavior

‘Criminality cannot be predicted. Full stop.’


More than 500 experts on artificial intelligence, technology, and sociology have signed a letter addressed to a major academic publisher, asking it to halt the publication of any research that uses machine learning to predict whether someone might commit a crime.

The letter was written in response to a May 5 press release from Harrisburg University in Pennsylvania, which stated that two professors and a PhD student had created software that could predict whether someone was likely to commit a crime based on nothing more than a picture of their face. The publisher, Springer Nature, was slated to include that paper in an upcoming book series.

Harrisburg’s press release claimed that the new software was 80% accurate and completely unbiased, meaning it supposedly contained no statistical predisposition to predict that someone was more likely to be a criminal based on their race or gender. It also pitched the software as a tool for law enforcement.

The university, which did not immediately respond to a request for comment, took down its press release on May 6 and said it would post an update after the research was published. Springer Nature told organizers on Monday that the paper was eventually rejected during the review process.

But the experts looking to stop this research, who have named their group the Coalition for Critical Technology, say the paper’s goal of predicting criminality was only the latest in a series of similarly unscientific efforts that continue to be laundered through mainstream academia. While the science behind these studies has been debunked, papers attempting to draw a correlation between someone’s face and their behavior continue to be published.

The coalition’s broader aim is to send a message to publishers that the very idea of using machine learning to predict criminality is not scientifically sound and should not be peer reviewed or published in the future.

“Black women and Black scholars and practitioners have led the critique on this work over the last 50 years,” Theodora Dryer, a research scientist at NYU who helped organize the letter, tells OneZero. “They have shown time and time and time again that prediction of criminality is intrinsically racist and reproductive of structures of power that exclude them.”

More than half of the letter’s text consists of footnotes and citations, a message from its signatories that there are decades of scientific precedent undermining the very premise that criminality can be discerned from a person’s physical attributes.

While the recent work of demonstrating how modern facial recognition is biased has been pioneered by women of color, including Joy Buolamwini, Timnit Gebru, and Deborah Raji (all of whom have signed the letter), Dryer referenced a 1975 talk by Toni Morrison in which the author addressed the weaponization of data against Black people.

“Studies designed to confirm old prejudices and create new ones are really on the increase. Of the several areas of ignorance, those concerning Black people and their relationship to this country are still, at least to me, the most shocking,” Morrison said then.

Part of the reason technologists and sociologists agree that any algorithm meant to predict a person’s likelihood of committing a crime will be racist is that the data comes from an already racist criminal justice system and set of laws.

Laws that target people of color and police departments that overpolice minority populations create data, like crime statistics or mug shots, and that data is then used to build the algorithms.

“Criminality cannot be predicted. Full stop. Criminality is a social construct and not something in the world that we can empirically measure or capture visually or otherwise,” said Sonja Solomun, research director at McGill University’s Centre for Media, Technology, and Democracy and an organizer of the letter.

Similar research has largely been derided as a 21st-century mutation of phrenology, a debunked 19th-century belief that the shape of a person’s head can give insight into their character.

In 2016, researchers from Shanghai Jiao Tong University published a paper alleging that their algorithm could predict, without bias, who would become a criminal by analyzing their face. They released a few images of “criminals” and “non-criminals.” Soon after publication, Google and Princeton researchers thoroughly refuted the paper’s claims. They noticed that the people in the “non-criminal” images were all wearing visible collared shirts, while the people in the “criminal” images wore T-shirts or darker clothes. A machine learning algorithm has no concept of anything outside the examples it is given, and if white collared shirts appear in the images of “non-criminals” and not in the images of “criminals,” it will equate wearing a white collared shirt with not being a criminal.
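To see how that failure mode plays out, here is a minimal, purely illustrative sketch (not the study’s actual code) in which a classifier is trained on hypothetical data where an irrelevant attribute, a collared-shirt flag, perfectly tracks the labels. The model learns the clothing, not anything about faces:

```python
# Illustrative sketch only -- not the Shanghai Jiao Tong study's code.
# A classifier trained on data where an irrelevant attribute ("collared shirt")
# perfectly tracks the label will simply learn that attribute.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Hypothetical features: column 0 is a collared-shirt flag, columns 1-4 are
# random noise standing in for facial measurements.
collared_shirt = rng.integers(0, 2, size=n)
face_features = rng.normal(size=(n, 4))
X = np.column_stack([collared_shirt, face_features])

# Labels constructed so that every "non-criminal" (label 1) wears a collared
# shirt -- the confound baked into the dataset, not anything about the faces.
y = collared_shirt

model = LogisticRegression().fit(X, y)
print("accuracy on the confounded data:", model.score(X, y))  # ~1.0
print("learned coefficients:", model.coef_[0])
# The first (shirt) coefficient dwarfs the rest: the model has learned
# clothing, not "criminality."
```

Accuracy on data like this looks nearly perfect, which is exactly why a headline figure such as “80% accurate” says little about what a model has actually learned.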

Another paper in 2018 tried to make the case that sexuality could be perceived by a facial recognition algorithm. Stanford researcher Michal Kosinski built an algorithm that tried to determine whether a person was gay or straight. “When I first read the outraged summaries of it, I felt outraged,” Jeremy Howard, founder of A.I. education startup Fast.ai, told Quartz. “And then I thought I should read the paper, so then I started reading the paper and remained outraged.”

The same Google and Princeton researchers who debunked the 2016 criminality paper also critiqued Kosinski’s work.

“Like computers or the internal combustion engine, A.I. is a general-purpose technology that can be used to automate a great many tasks, including ones that should not be undertaken in the first place,” they wrote.

Coalition for Critical Technology organizers say that the questions to ask about this kind of research are less about the data it uses or the algorithms researchers build, and more about the power structures that the research strengthens.

“How might the publication of this work and its potential uptake legitimize, incentivize, monetize, or otherwise enable discriminatory outcomes and real-world harm?” the organizers wrote. “These questions aren’t abstract.”

