Clearview AI’s Surveillance Dystopia Isn’t New for People of Color

Facial recognition technology is already biased against marginalized groups

Many Americans are waking up to a potential surveillance “dystopia” created from billions of images they personally uploaded to the internet. The tiny company responsible, Clearview AI, claims to have scraped 3 billion photos from services like Facebook and YouTube to construct a sprawling facial recognition database used by law enforcement agencies across the country, according to a recent New York Times report by Kashmir Hill. The piece rightfully stoked fears about mass surveillance — but marginalized communities have been living with these concerns for years.

“We need a law to save us from dystopia,” the Times’ Charlie Warzel wrote after the piece was published. But these dystopic circumstances already exist for people of color. For decades they have spoken of the harm caused by surveillance technology to their communities, which are far likelier to be subjected to facial recognition tools. Not only are Asian, Black, and Indigenous people frequently misidentified by these systems — because the algorithms that power them may contain the biases of their creators — they’re also significantly overrepresented in law enforcement databases due to racial profiling and over-policing.

What is notable about Clearview AI is not so much its particular technology as the way it collects data: communities not historically oppressed by the surveillance state may now feel its impact.

“Surveillance tools like face recognition, including the recently disclosed Clearview program, aggravate existing racial disparities in the criminal justice system,” said Adam Schwartz, senior staff attorney at the Electronic Frontier Foundation, a digital rights nonprofit and advocacy group.

Clearview AI’s clientele reportedly includes federal and local law enforcement such as sheriff’s departments and the Federal Bureau of Investigation, as well as a handful of private companies. The company claims to have perfected an algorithm capable of identifying someone from even a low-quality photo.

Unlike most facial recognition tools used by law enforcement, Clearview AI scrapes a vast amount of data from the open web, gathering photos of individuals across racial, socioeconomic, and geographic spectrums who are not usually targeted by law enforcement dragnets. The full demographics of its database aren’t publicly known, and the truth of its claims has been questioned in reporting by BuzzFeed News, but lawmakers and privacy watchdogs are nevertheless concerned about its potential to sweep in virtually everyone.

“Widespread use of your technology could facilitate dangerous behavior and could effectively destroy individuals’ ability to go about their daily lives anonymously,” wrote Sen. Edward Markey to Clearview AI founder Hoan Ton-That last week.

Law enforcement has previously used various facial recognition tools, such as Amazon’s Rekognition, that rely on mug shots (including those of minors) and driver’s license photos to suggest a match — and most, if not all, of these technologies are inherently flawed. “Darker-skinned people are arrested more, appear in criminal databases more, and thus are matched more,” Sidney Fussell wrote of Rekognition in Gizmodo.

With Clearview AI, police can run suspect photos against a much larger database of people who may never have been arrested or don’t have a driver’s license. What’s more, photos uploaded to the database are then stored on the company’s servers, according to the Times. Just as mug shot databases reflect police bias, people disproportionately targeted by law enforcement will be overrepresented in Clearview AI’s system as police continue to use the software. Right now, the only way to request that your photo be deleted is to submit your “name, a headshot, and a photo of a government-issued ID” in an email to Clearview AI.

We’ve witnessed how data collected in the name of law enforcement can be used nefariously: predatory companies like Mugshots.com and Busted Newspaper scraped millions of mug shots from law enforcement websites — public records in most states — optimized their visibility on search engines like Google, and extorted millions of dollars from people by demanding fees for their removal. These websites still exist, and Google has since down-ranked them in search, but state laws intended to penalize their business model have produced middling results. Lawyers for mug shot websites have also tried to argue that they’re protected under the First Amendment. (One way that Clearview AI differs is that automated scraping violates the terms of service of companies like Facebook and Twitter. Twitter, at least, has sent the company a cease-and-desist letter, though it’s unclear what impact this will ultimately have on the technology.)

There’s no easy fix for this problem, because data-scraping technology has applications far beyond Clearview AI’s facial recognition database, as Wired’s Louise Matsakis recently explained. Consider the way journalists and researchers have scraped website data to track white supremacist groups, for example. These complexities have played out by way of the Computer Fraud and Abuse Act (CFAA), a federal “anti-hacking” law, originally drafted to prohibit unauthorized access to a computer or network, that has been arbitrarily invoked to punish individuals such as the late internet activist Aaron Swartz. In the wake of Clearview AI, civil liberties groups want a more holistic response to intrusive technologies, one that satisfies people’s right to privacy, addresses issues of consent, and controls the use of biometric data. More than 40 organizations have signed a letter calling for a moratorium on facial recognition technology in the United States, noting its potential to be “used by authoritarian governments to control minority populations.”

The consequence of Clearview AI is that law enforcement systems may become even more stacked against people of color as its software is used in tandem with racially biased databases. And while the revelation that anyone with a photo online could end up in its database may ultimately crystallize public concern around facial recognition technology, it’s important to recognize that these fears aren’t new — it’s just that now they affect potentially everyone.

“This is one of the reasons,” Schwartz said of the technology’s racial bias, “[and] privacy is another, that we want a ban on government use of facial recognition and a requirement that businesses get informed opt-in consent before processing someone’s biometrics.”
