Algorithms meant to help patients in need of extra medical care are more likely to recommend relatively healthy white patients over sicker black patients, according to new research set to be published in the journal Science.
While the researchers studied one specific algorithm in use at Brigham and Women’s Hospital in Boston, they say their audit found that all algorithms of this kind being sold to hospitals function the same way. It’s a problem that affects up to 200 million patients being sorted through this system, their paper claims.
Sendhil Mullainathan, co-author of the paper and professor at the University of Chicago, says the research is intended to empower “customers” — hospitals in this case — to vet the mechanisms behind the software they’re buying.
“We’re going through this phase where customers buying crucial products aren’t being informed about what’s in them,” Mullainathan says. “It’s like when I buy a car—I don’t literally know what’s happening under the hood.”
Here’s how the algorithm works: When a patient is enrolled in a hospital’s electronic health record system, the risk algorithm assigns that patient a score based on the information available, such as whether the person has a chronic illness, their age, billing from previous health care visits, and specific biomarkers like blood pressure.
The algorithm is meant to reduce costs across the system, operating under the assumption that those who have spent more on their health care in the past would benefit from more proactive care, reducing the drain on emergency care and other overburdened services. Patients sorted into these programs receive special nursing attention and quicker access to primary care physicians, according to the researchers.
If a patient is designated as high-risk of requiring extra care, meaning the algorithm places their score at or above the 97th percentile, they are sorted automatically into the extra care program.
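The mechanism described above, a score driven largely by prior spending with automatic enrollment at the 97th percentile, can be sketched roughly as follows. This is only an illustration of the percentile-threshold logic: the function names are hypothetical, and the real vendor algorithm combines many more inputs than past costs.

```python
def risk_scores(past_costs):
    """Rank patients by prior health care spending (the proxy the article
    describes) and convert each rank to a 0-100 percentile score."""
    n = len(past_costs)
    # Indices of patients ordered from lowest to highest past spending.
    order = sorted(range(n), key=lambda i: past_costs[i])
    ranks = [0] * n
    for rank, i in enumerate(order):
        ranks[i] = rank
    return [100.0 * r / (n - 1) for r in ranks]

def auto_enroll(past_costs, threshold=97.0):
    """Flag patients whose percentile score meets the cut-off for
    automatic enrollment in the extra care program."""
    return [score >= threshold for score in risk_scores(past_costs)]
```

Because the score is a pure function of past spending, any group that historically spent less on care, for whatever reason, is systematically pushed below the enrollment threshold regardless of actual health need.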
But the algorithm sorts patients according to what they had previously paid in health care fees, meaning…