Clearview AI’s Surveillance Dystopia Isn’t New for People of Color

Facial recognition technology is already biased against marginalized groups

Sarah Emerson
OneZero


[Image: CCTV camera. Photo: Andia/Getty Images]

Many Americans are waking up to a potential surveillance “dystopia” built from billions of images they personally uploaded to the internet. The tiny company responsible, Clearview AI, claims to have scraped 3 billion photos from services like Facebook and YouTube to construct a sprawling facial recognition database used by law enforcement agencies across the country, according to a recent New York Times report by Kashmir Hill. The piece rightfully stoked fears about mass surveillance, but marginalized communities have been living with these concerns for years.

“We need a law to save us from dystopia,” the Times’ Charlie Warzel wrote after the piece was published. But these dystopian circumstances already exist for people of color, who are far likelier to be subjected to facial recognition tools and who have spoken for decades about the harm surveillance technology causes their communities. Not only are Asian, Black, and Indigenous people frequently misidentified by these systems, whose algorithms may encode the biases of their creators; they are also significantly overrepresented in law enforcement databases due to racial profiling and over-policing.
