You Can Fight Facial Recognition

How to opt out of tech that invades your privacy

Too often, facial recognition feels like a mysterious, society-pervading technology that is too complex for individuals to understand or combat. We read about scary new applications of the tech and its increasingly concerning role in determining who gets a job, who gets a loan, or even who gets arrested. And because facial recognition is so often deployed by governments and massive corporations, it’s easy for individuals to feel powerless in the face of these technologies.

But we’re not powerless. New laws, new tech, and new collective action movements are giving consumers the tools we need to fight back against indiscriminate or harmful uses of facial recognition technologies. After I wrote a story last month investigating Covid-19 temperature tablets that integrate with facial recognition databases, I received a reader question that addressed this head-on. Tech professional Michele Piper reached out via Twitter to ask, “Are there things we, as individuals, can do to protect ourselves other than knowing this company?”

Here are a variety of specific ways that you — as a normal person — can protect yourself, your family, and society at large from abusive uses of facial recognition.

Opt out

Until recently, facial recognition companies often operated in the shadows, building massive databases of the faces of innocent people without their knowledge or consent. As of early 2020, facial recognition company Clearview AI had built a database of 3.8 billion photos — including nearly every American — and secretly sold it to thousands of police agencies, retailers, and wealthy individuals.

New laws, though, are increasingly exposing the activities of companies like Clearview. The California Consumer Privacy Act (CCPA) took effect in the Golden State in early 2020, with enforcement beginning that July. The law gives Californians broad powers to see the data that large companies have gathered about them and to opt out of data gathering by asking companies to delete their data. I used the CCPA to see the profile Clearview had built about me, and the law also gives me the right to have that profile deleted if I want.

California voters apparently like these new powers — in November, a ballot measure in the state expanded the CCPA and created a new agency to enforce the law. Combined with privacy legislation in Europe (primarily the General Data Protection Regulation, or GDPR) and new laws in other U.S. states that specifically target facial recognition (notably a major biometric law in Illinois), consumers have more tools than ever to see what facial recognition data has been gathered about them and to opt out of data gathering if they choose.

If you live in California or another jurisdiction with robust privacy laws on the books, you can reach out directly to companies like Clearview to control the data they gather, even if the company you’re contacting isn’t based in California. The CCPA requires companies to establish at least two ways that consumers can reach out to file requests for their data. These are usually listed in the company’s privacy policy, which you can often find in the footer of the company’s website.

To file a request, locate the company’s privacy contact information. Often, companies use a web form or a dedicated email address (such as privacy@company.com) to field CCPA requests. Expect to provide documentation showing that you’re a resident of California or another protected jurisdiction. Some companies require you to submit a copy of your driver’s license to verify that you’re eligible, while others use a third-party identity verification service, like the kind you may have used to apply for a credit card or auto lease.
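To make this concrete, here’s a rough template for a request email. This is a hypothetical example, not legal language, so adapt the details to your own situation and to the rights available in your jurisdiction:

Subject: CCPA Request to Know and Request to Delete

To whom it may concern: I am a California resident exercising my rights under the California Consumer Privacy Act. Please disclose the categories and specific pieces of personal information your company has collected about me, the sources of that information, and any third parties with whom it has been shared or sold. I also request that you delete the personal information you hold about me. I am prepared to verify my identity and residency through your standard process.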

It can feel creepy to hand over even more personal information to a company like Clearview in order to access the information it already has. One piece of good news is that the CCPA has specific provisions preventing companies from retaliating against you for filing a request. If you feel that a company has retaliated against you (or isn’t honoring your valid request), you can file a consumer complaint with the state attorney general. Fines under the CCPA can be hefty (and GDPR fines have already exceeded $300 million), so most companies would rather comply than risk an avoidable penalty.

Once you’ve obtained access to your data, you have several options for what to do next. You can do nothing and simply make a note of what data companies are holding about you. You can also request that the data be deleted or bar companies from selling it to third parties. Europe, in particular, protects the right of deletion aggressively, with a “right to be forgotten” built into the provisions of the GDPR.

What if you don’t live in California, Europe, or another covered jurisdiction? In some cases, you may still be able to access and control your data. Confirming eligibility is challenging and time-consuming for large companies, which may receive thousands of CCPA requests each year. Rather than confirm each requestor’s eligibility (and risk fines if they mistakenly deny an eligible request), many companies have extended CCPA rights to all consumers, regardless of their location.

In some cases, companies have also responded to the CCPA by opening up far more customer data by default rather than taking the time and effort to field individual requests. As a result of the CCPA and a landmark $550 million settlement over misuse of facial recognition data under Illinois’ Biometric Information Privacy Act (BIPA), Facebook now allows all users to opt out of facial recognition services without needing to file a formal request. You can do this by visiting the Face Recognition tab in Facebook’s settings interface. You can also opt out of facial recognition in products from Google, Apple, and others.

You may be able to opt out of facial recognition in other settings, too. In airports, for example, you can often request an alternative identity check instead of facial recognition boarding systems, although the process reportedly isn’t easy. If your employer uses facial recognition for time tracking or building access, you may be able to opt out of those systems as well. The same goes for college campuses, many of which have chosen to abandon plans for facial recognition systems after student and faculty backlash.

Opting out can protect you from abusive uses of facial recognition. But beyond protecting yourself, it sends companies and organizations the message that they can’t use facial recognition indiscriminately without exposing themselves to the risk of millions of dollars in fines. That risk also increases the cost of creating massive face databases by adding layers of regulatory and legal compliance to the process.

Again, these extra steps may discourage all but the biggest companies (or tiny startups and researchers who are exempt from the CCPA) from building databases in the first place. By opting out for yourself — and exercising your rights under laws like the CCPA — you’re also establishing a framework that makes the indiscriminate use of facial recognition much harder for companies going forward.

Deny raw data

In order to build a massive face database of consumers, companies first need access to huge numbers of publicly available photos. Companies like Clearview AI scrape billions of images from public sources: websites, services like Meetup, newspapers, and even social media sites like Facebook and Twitter. While the practice has been challenged, mass scraping of public images likely continues.

What does that mean for consumers? If you post a photo online on a public-facing website, you should assume that it will be ingested by facial recognition companies and added to their databases along with any personal information they can glean about you from the page on which the photo appeared. The same goes for photos that you post of your kids, friends, co-workers — even potentially your pets.

To prevent this, you need to take active steps to deny companies access to photos of your face. Consider updating your privacy settings on social networks like Facebook and Instagram so that your photos aren’t displayed publicly. After my investigation into Clearview AI revealed that the company had taken several photos from my public Facebook profile, I changed my privacy settings to make my account private. You can do the same thing, limiting access to people you’ve already friended on the platform.

In making these changes, consider your own preferences but also the preferences of other people you photograph — or who photograph you. I rarely post photos of my kids online because I want them to eventually make their own decisions about which photos to share and also to decide on their own comfort level with being included in face databases (minors are often protected by laws like the CCPA, but that doesn’t guarantee that large companies are following the laws).

If other people take photos of you or your kids, ask them to run their posts by you before sharing the photos publicly. Follow the same practice with others, too, getting consent before posting their photo publicly and especially before posting photos of their kids. Remember that once a photo has been posted publicly and ingested by a company like Clearview AI, taking it down (or even knowing where it ended up) is extremely difficult. It’s better to stop photos from getting out in the first place if you’re concerned about how they’ll be used.

What if you appear in a group shot at an event or in a crowd scene in a larger photograph? These kinds of public photos are likely less problematic than photos posted to social media sites because they’re rarely associated with other identifying information about you. If you happen to appear in a group shot at a charity 5K, for example, it’s probably not a problem. Very little other information would likely be attached to your face, and it would be hard to determine much about you beyond the fact that you were present at the event. These kinds of photos likely enjoy broader First Amendment protections, too.

But if a company like Clearview finds a photo of you on your Facebook page, they can associate that face record with any other information you share publicly on your page, such as your location, occupation, family members, and the like. Those are the photos that are potentially more damaging because they reveal more about who you are. They’re also likely more legally problematic because they tend to be taken in situations where you have an expectation of privacy, like inside your own home. Work to avoid sharing photos that connect your face with other personal data or that show you in otherwise private spaces.

Denying raw data, though, isn’t about becoming a digital ghost. Especially during Covid-19, it’s essential for many of us to interact online. You may need to have a public presence for work, too. There are still ways to do so without revealing your face. Generated Media, a generative A.I. company, offers a service where consumers can upload a photo of their face and replace it with a similar-looking artificial doppelganger. For certain uses (such as online dating), these replacement faces can give a suggestion of what you look like without revealing your actual face. Another option is to use a Bitmoji or a similar illustration as your profile photo on public-facing social media networks.

You might also want to create closed-off, private spaces where you can share photos with trusted friends, family members, or colleagues without revealing them publicly. A simple WhatsApp group chat can be a great place to share photos from a special event without posting them to a public network like Facebook, and WhatsApp encrypts messages end to end. Several services — such as Family Album — let you share photos of your kids with selected family members privately. Check these services’ privacy policies, though, to ensure they’re not selling your photos to third parties without your knowledge.

Even if photos you post online aren’t directly linked to your identity, companies may still slurp them up in order to train facial recognition systems. According to NBC News, for example, companies ingested millions of Creative Commons photos from Flickr in order to train facial recognition platforms and other A.I. systems. If you’ve ever uploaded a photo to Flickr, you can enter your own Flickr handle into a tool NBC provides and see if your photos were used. If they were, you can reach out and request their removal.

Laws like the CCPA can help you wrest back control over your data once it’s already public. But again, a better solution — and an easier one for consumers to implement — is to limit the photo data that becomes public in the first place. In many cases, a simple update to your Facebook privacy settings can remove hundreds or thousands of face photos from public circulation. Consider updating these settings for your own social profiles today — both to protect you and to protect others you photograph.

Resist the wow factor

In his classic book 1984, George Orwell imagined a future where governments created a panopticon of ubiquitous surveillance. What Orwell didn’t anticipate is that consumers wouldn’t need governments in order to build a panopticon — we’re perfectly happy to do it for ourselves.

Consumer gadgets and services increasingly use facial recognition, often in order to make products more interactive and fun to use. The Mozilla Foundation points out, though, that the same “datasets and algorithms” used for “mundane applications” like “face filters designed to age or gender-swap profile pictures or digital billboards designed to track age, gender, and race” are often “used to implement systems that have been used to label Black people as ‘gorillas,’ decide whether to hire for a job, profile Uyghur Muslims in China, predicting ‘first impressions,’ for police profiling, or even predicting who looks like a criminal.”

Logging into your phone using your face — or using a filter that lets you vomit rainbows — feels innocuous and fun. But in reality, your use of these systems may provide training data to companies, which can then use that data for oppression or other harms. And using powerful facial recognition technologies for “fun” applications also serves to normalize them, making them seem less insidious than they actually are.

Privacy expert and Harvard Kennedy School Shorenstein Center research fellow Chris Gilliard agrees. He advises consumers to “reject all forms of [facial recognition], even seemingly ‘cool’ or innocuous ones, as it furthers normalization and entrenchment.”

You probably don’t need to use that rainbow vomit filter, and many tech tools work perfectly fine with their facial recognition features disabled. To discourage the normalization of facial recognition tech, resist the “wow” factor of gadgets that use the technologies, and switch these features off on your own devices whenever you can.

Speak up

Gilliard also told OneZero, though, that “the notion of individualized actions as a response to systemic and institutionalized harms is a trap: We need to act collectively.” While individuals can take actions to protect themselves from certain uses of facial recognition — and to limit the amount of face data they make available publicly — other uses of the technologies can only be stopped by legislative and judicial actions.

Many government agencies, for example, are not subject to consumer-oriented privacy laws like the CCPA. They can often gather face data indiscriminately using existing surveillance camera networks and use that data for policing and other purposes without consumers having any recourse or any ability to opt out. This data can also be combined with increasingly sophisticated image processing algorithms to potentially track people based on their attendance at a protest or similar sensitive event.

Preventing these uses of facial recognition requires developing laws that specifically ban problematic uses of the technologies. More than 10 American cities (including San Francisco and Portland) have already banned police use of facial recognition systems, often citing pervasive issues of racial bias, and many more cities and states are likely to follow. State biometric laws like the one in Illinois are already having a large impact, and other states are reportedly considering similar laws.

Alex Marthews, chair of Restore the Fourth, an organization that works to combat government surveillance, told me in a tweet that consumers can support facial recognition bans and restrictions by getting involved with organizations like his or by supporting organizations such as the American Civil Liberties Union (ACLU), which has filed several major lawsuits to restrict facial recognition. Other organizations involved in crafting these laws include the Electronic Frontier Foundation (EFF) and Amnesty International. More than 40 such groups recently sent an open letter to President Joe Biden urging a federal ban on certain facial recognition technologies.

If you feel that facial recognition should be banned (or like me, you feel that it should be subject to Fourth Amendment restrictions), consider supporting one or more of these groups with a donation or by volunteering. You can also write directly to your legislators to ask for facial recognition bans or restrictions consistent with your own beliefs.

Another option for collective action is to vote with your wallet. For many large companies, facial recognition is a tiny sliver of their overall business and one that they’re not especially keen to protect. Amazon, for example, announced a moratorium on police use of Rekognition, its facial recognition platform, in 2020. While it didn’t give a specific reason, it’s likely that Amazon execs saw mounting public opposition to facial recognition and didn’t want to risk a consumer backlash against the company’s massively profitable e-commerce business over sketchy uses of its likely barely profitable facial recognition service.

Other big companies including IBM and Microsoft have joined Amazon in either pausing development of the technologies or stopping development altogether. Again, these moves are likely the result of backlash from the companies’ customers and, in many cases, their own employees. Collective action against abusive uses of facial recognition can happen both at the government level and with the companies and service providers we choose to patronize or for whom we work.

Facial recognition technologies may still seem opaque or scary to many consumers. But as individuals, we’re no longer powerless in the struggle to ensure these powerful technologies are used responsibly.

If you live in a jurisdiction with strong privacy laws, use them aggressively and often in order to protect your own rights and to opt out of any data gathering that you find excessive. You’ll help to keep yourself safe, and you’ll make violating consumers’ privacy more expensive (and thus less attractive) to large companies.

Even if you’re not in a covered jurisdiction, you can take steps to limit the face data you make available online and to avoid needlessly associating your face data with other personal information. Through positive consent, you can also ensure that you’re not inadvertently jeopardizing the privacy of your children, colleagues, or family members by posting their face data on the public internet without their buy-in. And you can avoid normalizing facial recognition by switching off face-driven features on your phone and other tech gadgets.

And finally, you can decide where you stand on issues of facial recognition and mass surveillance more generally and then speak out through advocacy and collective action. Donate to an organization that is working to protect consumer privacy or volunteer your time to help the cause.

Facial recognition may be worrisome and scary. But if we join together (while taking steps to keep ourselves safe as individuals), we can dictate the terms under which these powerful technologies are used — and avoid abusive or oppressive uses.
