The Public Is Being Misled by Pandemic Technology That Won’t Keep Them Safe

Technology like thermal imaging is little more than security theater

Metro passengers are scanned by thermographic body-temperature cameras as part of efforts to identify people possibly infected with the coronavirus, in Panama City, on April 21, 2020. Photo: Luis Acosta/AFP/Getty Images

This op-ed was co-authored by Evan Selinger, professor of philosophy at the Rochester Institute of Technology, and Brenda Leong, senior counsel and director of A.I. and ethics at the Future of Privacy Forum.

The lockdown on commercial industry and personal activity in response to the global Covid-19 pandemic has been in place for almost two months in many parts of the U.S. Due to financial desperation and frustration with isolation, nonessential businesses are starting to reopen and more people are going out in public despite ongoing health concerns.

Seeking to frame this economically driven agenda with a veneer of public health responsibility, governments and businesses are implementing a variety of precautions, including using thermal imaging cameras to detect elevated skin temperatures. Unfortunately, the use of this technology, like some of the others in the pandemic response kit, is “security theater,” to use a term coined by the security and privacy expert Bruce Schneier. It’s a dangerous, possibly life-threatening mirage that looks like strong leadership but, in fact, shimmers over empty promises that inspire false confidence about personal health and safety.

Schneier has been warning us for years of this kind of facade, calling out familiar examples, from offices stationing a “uniformed guard-for-hire” to check visitors’ ID cards to airports banning liquids and using full-body scanners to search for explosive material that, it turns out, they are not great at detecting anyway. So much magical thinking pervades airport security that Schneier has bluntly declared, “The two things that have made flying safer since 9/11 are reinforcing the cockpit doors and persuading passengers that they need to fight back. Everything beyond that isn’t worth it.”

Security theater is linked to the ideology of solutionism and bolstered by the common human tendency to want to show strength in the face of danger. An example in recent years is local school boards turning to facial recognition technology to protect students from gun violence. Boosters claim these systems can prevent the next Parkland shooting. Schools like Lockport in New York spent approximately $1.4 million for “software and servers.” But facial recognition technology won’t solve this problem; worse, it introduces new risks in educational settings, from minorities being subjected to heightened disciplinary action to the scholastic environment increasingly resembling a prison, along with due process issues, a host of privacy concerns, and an array of technical and economic challenges.

Consequently, privacy groups, including the Future of Privacy Forum (an organization we’re both affiliated with), have called for a moratorium on using “facial recognition systems for security purposes at public school facilities.” And the New York State Education Department decided it’s not currently approving project applications that include facial recognition technology under its Smart Schools Bond Act, legislation that finances improvements in “educational technology and infrastructure to improve learning and opportunity for students throughout the State.” While parents and local leaders want to protect their community’s children, sometimes doing the wrong thing is worse than doing nothing.

As we seek to meet the difficult social and economic challenges of this pandemic, we need to vigilantly guard against the latest form of security charade, what law professor Peter Swire calls “public health theater.” Whether it’s government officials flaunting responses that lack scientific validity, like covering city streets with disinfectants, or selecting ineffective technological options like poorly designed contact tracing programs, or even airports adding hand-sanitizer dispensers, it’s dangerous to prime people’s expectations by relying on protective measures that can’t deliver the goods. Thermal imaging cameras are quickly becoming one of the most widely deployed props for perpetuating the illusion that hazardous conditions are under control. Allowing or even condoning this trend is morally objectionable for businesses and governments alike.

Thermal imaging is a poor pandemic response tool

Fevers are a possible symptom of Covid-19, and so thermal screening — checking to see if an individual is running a temperature — is one of the recommended steps in locations where individuals must congregate for work or commerce. In grocery stores and other settings, workers are required to have their temperature taken upon arrival. Unlike a thermometer, a thermal scanning camera measures a person’s skin temperature, which does not directly correlate to core body temperature, and these systems weren’t designed for medical use.

Now, however, thermal imaging cameras are being sold as valuable screening machines that have the potential to identify people who may be infected with the coronavirus. Traditional thermometers that measure core body temperature are accurate tools for detecting fevers, but companies and governments around the world are spending heavily in a “gold rush” to procure thermal cameras because of their perceived advantages:

  • They can scan multiple people at once.
  • They can scan people from a distance while they’re moving.
  • They don’t require the inconvenience and stress of individuals lining up for individual checks.
  • They can be inconspicuously installed to scan people without them even noticing.
  • They can be paired with other surveillance technologies, including pandemic drones and facial recognition systems.

These features are so tempting that thermal cameras are being installed at an increasing pace. They’re used in airports and other public transportation centers to screen travelers, increasingly used by companies to screen employees and by businesses to screen customers, and even used in health care facilities to screen patients. Despite their prevalence, thermal cameras have many fatal limitations when used to screen for the coronavirus.

  • They are not intended for medical purposes.
  • Their accuracy can be reduced by their distance from the people being inspected.
  • They are “an imprecise method for scanning crowds” now put into a context where precision is critical.
  • They will create false positives, leaving people stigmatized, harassed, unfairly quarantined, and denied rightful opportunities to work, travel, shop, or seek medical help.
  • They will create false negatives, which, perhaps most significantly for public health purposes, “could miss many of the up to one-quarter or more people infected with the virus who do not exhibit symptoms,” as the New York Times recently put it. Thus they will abjectly fail at the core task of slowing or preventing the further spread of the virus.
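The combined effect of false positives and false negatives is easiest to see with a back-of-the-envelope base-rate calculation. The figures below (infection prevalence, fever rate among the infected, scanner false-positive rate) are purely illustrative assumptions, not measured values:

```python
# Illustrative base-rate sketch: why thermal scanning misleads at scale.
# Every rate below is a hypothetical assumption chosen for illustration.

prevalence = 0.01            # assume 1% of people screened are infected
feverish_if_infected = 0.5   # assume half of infected people show a fever at scan time
false_positive_rate = 0.10   # assume 10% of healthy people get flagged (sun, exertion, etc.)

crowd = 10_000
infected = crowd * prevalence                               # 100 infected people
true_positives = infected * feverish_if_infected            # 50 flagged correctly
missed = infected - true_positives                          # 50 infected walk through unflagged
false_positives = (crowd - infected) * false_positive_rate  # 990 healthy people flagged

flagged = true_positives + false_positives
precision = true_positives / flagged

print(f"people flagged: {flagged:.0f}, of whom actually infected: {precision:.1%}")
print(f"infected people missed entirely: {missed:.0f}")
```

Under these assumed numbers, fewer than 1 in 20 flagged people would actually be infected, while half of the infected walk through unflagged — both failure modes described in the list above, operating at the same time.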

Scientists and health care professionals generally agree that thermal imaging systems are an inappropriate and ineffective strategy for identifying individual cases of Covid-19 and preventing its spread. Beyond the wasted costs, installing thermal scanning as part of a “back to work” process might create a false sense of security that convinces some to prematurely return to their jobs and emboldens others to relax more effective strategies, such as social distancing and responsible contact tracing efforts.

Thermal imaging is security theater

Since thermal scanning is no more useful for pandemic management than the disavowed combo of hydroxychloroquine and azithromycin is for treating the virus once contracted, it’s hard to understand why the technology is being so broadly adopted around the world and why official U.S. agencies are turning a blind eye to the glaring deficiencies. The Centers for Disease Control and Prevention (CDC) includes thermal screening in its recommendations for back-to-work practices as various industries seek to reopen in the post-lockdown period, without insisting upon medically reliable techniques. The Equal Employment Opportunity Commission (EEOC) allows employers, during a pandemic, to collect personal medical information that’s otherwise prohibited, including temperature, without limiting the process to medically valid practices.

Because the government isn’t carefully limiting the permission to gather personal health data on employees, customers, travelers, and people in other public venues, it’s effectively endorsing unreliable surveillance systems and giving intelligent people reason to believe those measures increase public health and safety. This is the very definition of security theater — the implicit and false message that thanks to the steps taken, it’s safe to reengage with some pre-lockdown activities without risk of contracting or further spreading the virus.

Beyond the health risks, there are privacy harms. For individuals, temperature taking is yet another officially authorized collection of sensitive personally identifiable information (PII). For society, the ubiquitous and sometimes hidden presence of thermal scanning cameras contributes to surveillance creep. It’s another step along the path of normalizing continuous monitoring of health data more broadly and bolstering dangerous surveillance tools, like facial recognition. Law enforcement agencies in China, Dubai, and Italy are already using “smart” surveillance helmets equipped with thermal imaging, facial recognition, and license plate scanning capabilities. As Jay Stanley, a senior policy analyst at the American Civil Liberties Union and author of “Temperature Screening and Civil Liberties During An Epidemic,” told us:

Lurking in the wings behind temperature screenings are other metabolic readings that can increasingly be read remotely, including breathing rate and heart rate. We’ve already seen claims that a ‘Covid-detecting drone’ can make such readings. We may also see calls for Covid screening to include other measurements such as blood oxygenation. These kinds of readings can reveal a lot of private information about the state of a person’s health.

The prime pandemic policy rule

A single rule should guide all of the data-driven responses to Covid-19. For any effective technology solution, if the privacy and civil liberties challenges can be identified and successfully mitigated, the enterprise is worth doing — even if the impact on public health is only marginal. Right now, every protective measure, from responsible contact tracing to public mask-wearing, follows this prime pandemic rule, providing at least some added value as a tool to combat this deadly threat, with privacy challenges that can be reasonably addressed. Thermal imaging is a clear exception. It’s fraught with privacy and civil liberties problems. It’s ineffective in the role for which it is touted. And most importantly, it’s likely to backfire and worsen public health. For all of these reasons, thermal imaging systems shouldn’t play a part in the strategies to confront the spread of Covid-19 or make reopening the economy safer.

The challenges of confronting an almost unprecedented threat like the pandemic are daunting. Digital contact tracing measures are less effective than hoped and hard to implement well. There are not enough tests to go around, and a vaccine remains a distant goal. The net effect is an economic crisis that government and business leaders are desperate to alleviate. But allowing people to rely on ineffective safeguards is misleading at best, and at worst, threatens economic recovery and lives.
