Black-Box Algorithms Shouldn’t Decide Who Gets a Vaccine

A failure at Stanford teaches us the limits of medical algorithms

Liz O'Sullivan
OneZero
5 min read · Jan 29, 2021

Close-up of Moderna vaccine. Photo: William Campbell/Getty Images

This story was co-authored by Dr. Rumman Chowdhury, CEO of Parity, an enterprise algorithmic audit platform company.

“The algorithm did it” has become a popular defense for powerful entities that turn to math to make complex moral choices. It’s an excuse that harks back to a time when the public was content to treat computer code as somehow objective. But the past few years have demonstrated conclusively that technology is not neutral: it reflects the values of those who design it, and it carries all the usual shortcomings and oversights that humans suffer in our daily lives.

Right before the holidays, protests erupted at Stanford Hospital when the first 5,000 doses of Covid-19 vaccine arrived. A proprietary algorithm had been developed to distribute the supply among hospital staff, and although the hospital allegedly sought input from “ethicists” and “infectious disease experts” to allocate the vaccine fairly, the algorithm qualified only seven out of 1,300 patient-facing Covid-19 doctors. This prompted outrage from those who were excluded and confusion among administrative staff. In a familiar move, Stanford swiftly blamed “the complex algorithm” for the shortfall, and in doing so sought to absolve itself of responsibility for the ethical failure.

An algorithm is a representation of dozens of human decisions that solidify a moral calculus into policy. These policies form the basis for automations that allow for greater speed and scale of deployment. In the case of Stanford’s vaccine algorithm, the designers prioritized age and the percentage of Covid-19 tests in each department that had come back positive. The “algorithm” in dispute here isn’t an A.I. or machine learning algorithm, as many assumed, but a medical algorithm: a simpler set of calculations, though one that is more bureaucratically complex.

As it turns out, Stanford’s set of predetermined rules neglected to account for factors that disproportionately affected junior staff. Residents generally aren’t tied to a single department, which meant they were denied a bonus point in the algorithm’s calculations for not “belonging” to a location that was hit hard by the virus. Even the seemingly simple decision to prioritize older workers raises an ethical challenge, since age and location in this case serve as proxies for career status. As a result, the junior staff who are truly the first line of defense fell to the bottom of the list. It is also worth noting whom these rules didn’t consider at all: janitorial staff and other workers who are directly exposed to the virus. These workers, after all, are tasked with changing dirty sheets, cleaning bedpans, and handling virus-exposed medical equipment.
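To make concrete how a rule set like this can go wrong, here is a minimal, hypothetical sketch of a point-based scorer of the kind described above. The weights, thresholds, and field names are our own illustrative assumptions, not Stanford’s actual formula; what the sketch reproduces is the failure described above, in which a rotating resident with no home department never receives the department-exposure bonus, no matter how much patient contact they have.

```python
# Hypothetical illustration only: the point values, positivity rates, and fields
# below are assumptions, not Stanford's actual formula. The sketch shows how a
# rule-based "medical algorithm" can quietly deprioritize the most exposed staff.

from dataclasses import dataclass
from typing import Optional


@dataclass
class Worker:
    name: str
    age: int
    department: Optional[str]  # rotating residents often have no home department


# Assumed per-department Covid-19 test positivity rates (illustrative numbers).
DEPT_POSITIVITY = {"ICU": 0.18, "Emergency": 0.15, "Dermatology": 0.02}


def priority_score(w: Worker) -> float:
    score = 0.0
    # Older workers earn more points -- but age also correlates with seniority,
    # so this quietly acts as a proxy for career status.
    score += w.age / 10
    # Bonus for "belonging" to a hard-hit department. Residents with
    # department=None never receive this bonus, despite frontline exposure.
    if w.department is not None:
        score += 10 * DEPT_POSITIVITY.get(w.department, 0.0)
    # Note what the rules never ask about at all: direct patient contact,
    # cleaning duties, or handling of contaminated equipment.
    return score


staff = [
    Worker("senior attending, mostly remote", age=62, department="Dermatology"),
    Worker("rotating resident, Covid wards daily", age=29, department=None),
]
for w in sorted(staff, key=priority_score, reverse=True):
    print(f"{priority_score(w):5.2f}  {w.name}")
# Output: the mostly remote senior attending outranks the frontline resident.
```

Nothing in this sketch is mathematically sophisticated; the harm comes entirely from which inputs the designers chose and which they left out.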

How did this happen? The decision-makers, not the algorithm, failed to appreciate the nuances of the situation. Pointing to a “complex algorithm” is a dangerous and disingenuous practice when the real fault was poor design. This kind of moral outsourcing ignores the significant human input required to design any kind of algorithm, even the more mathematically complicated A.I. and machine learning kind.

When algorithmic interventions in health care fail, they often do so disproportionately, leaving the most at-risk communities to suffer. Algorithms that govern kidney allocation or attempt to proactively identify health risks have disadvantaged poor and Black communities in favor of the rich and the white. With Covid-19, algorithms tasked with allocating relief funding have been found to favor white recipients at an alarming rate. It has never been clearer to those of us in the field of algorithmic fairness that health care models can deepen the existing inequality in our society. And the Stanford episode shows just how much more work we have to do.

Stanford administrators failed to identify the features of their model that were tied to career status and prestige. They failed to seek input from the hospital administrators and staff who would be most affected by their decision-making. Notably, they failed even to acknowledge the hospital cleaning staff and other workers who face just as much risk of Covid-19 exposure; these individuals were excluded entirely from the first round. Administrators made these decisions quietly, behind closed doors, and when called out on the obvious unfairness of the outcome, they chose to absolve themselves of responsibility by blaming the mathematical function they created.

This incident is but one of many policy failures attributed to an opaque algorithm. Last summer, thousands of U.K. students protested an unfair grading algorithm that cost some of them their college admissions, with no recourse to fight back. That algorithm disproportionately disadvantaged lower-income students, and the protests, which sounded much like the Stanford staff’s rally, ultimately led the U.K. to abandon algorithmic grading. Here in the United States, Stanford represents merely the first in what promises to be a long chain of ethically complex issues in Covid-19 aid, where the stakes are as high as life or death. In the few cases where algorithms have been used in the United States for government intervention, they have resulted in disproportionate and discriminatory outcomes. In the absence of federal guidelines for vaccine distribution, hospitals across the country have been largely on their own and may increasingly turn to algorithms for help in the fight against Covid-19.

In contrast, Israel’s vaccine rollout has been relatively smooth, and the technology it has deployed provides a quick and easy way to reach patients based on predetermined rules set by the state. It is a good example of how technology can speed up a national vaccine rollout; there, technology functions as an enabler rather than a decision-maker. When it comes to decisions that amount to life or death, opaque technology can enable those decisions, but it should never make them.

With Covid-19 especially, we must work to build public trust in the vaccine. In the few examples of Covid-19 algorithms we’ve seen so far, the practice of seeking input from staff and those most affected has largely been ignored.

We can’t afford any more mistakes, or we will all face the consequences of failing to achieve herd immunity around the world. Outsourcing the moral responsibility for a mistake like this one to a black-box algorithm will only heighten public distrust of those in power who decide how to prioritize this life-saving aid. Wherever algorithms are involved in the fight against Covid-19, we must all fight to ensure that the ethical decision-making behind them is open and transparent, so that failures like this one can be prevented before they begin.

Written by Liz O'Sullivan

AI Activist, Pragmatic Pacifist, & Lover of Tinfoil Hats. VP Arthur AI (arthur.ai); Technology Director STOP (stopspying.org); Member ICRAC (.net). Views mine.