Tech by VICE

‘Significant Racial Bias’ Found in National Healthcare Algorithm Affecting Millions of People

A series of studies argue that by focusing on costs as a proxy for health, risk algorithms are ignoring the racial inequalities in healthcare access.

by Edward Ongweso Jr
Oct 25 2019, 12:00pm


Significant racial bias has been uncovered in algorithms yet again—this time in a nationally deployed healthcare system that insurers use to dole out healthcare to millions of people annually, according to a new study published in Science.

The risk-prediction tool was found to consistently underestimate the health needs of Black patients. Sicker Black patients were assigned the same risk scores as healthier white patients, diverting healthcare resources away from the Black patients who needed them.

In the study, researchers Ziad Obermeyer, Brian Powers, Christine Vogeli, and Sendhil Mullainathan said the bias was so significant that fixing the problem “would increase the percentage of Black patients receiving additional help from 17.7 to 46.5%.”

At its core, the risk-prediction tool works by inputting a person’s insurance information and using their healthcare costs as a proxy for their overall health. The system uses that data to calculate a risk score, which can then determine who needs how much care. Healthcare systems then use "high risk care management programs" to give extra care to high-risk patients.
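The mechanism described above can be sketched in a few lines of code. This is an illustrative toy, not the vendor's actual model: it assumes prior spending is used directly as the risk score, and that patients above a percentile cutoff are flagged for extra care.

```python
# Illustrative sketch of a cost-as-proxy risk tool (hypothetical, not
# the vendor's model): each patient's risk score is their prior
# healthcare spending, and the top spenders are flagged for a
# high-risk care management program.

def risk_scores(prior_costs):
    """Use prior spending directly as the risk score (cost as proxy)."""
    return list(prior_costs)

def flag_high_risk(scores, cutoff_fraction=0.5):
    """Flag patients whose score falls in the top fraction of scores."""
    ranked = sorted(scores, reverse=True)
    n_flagged = max(1, int(len(scores) * cutoff_fraction))
    threshold = ranked[n_flagged - 1]
    return [s >= threshold for s in scores]

# Two equally sick patients can generate very different costs if one
# faces structural barriers to care; the cost-based score then
# underestimates that patient's need.
costs = [12000, 4000, 9000, 3000]
flags = flag_high_risk(risk_scores(costs), cutoff_fraction=0.5)
print(flags)  # [True, False, True, False] — the top spenders, regardless of health
```

The point of the sketch is that health never enters the calculation: whoever spends less, for whatever reason, looks lower-risk.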

The problem is that insurance and healthcare systems are riddled with deep structural inequalities between Black and white patients. Black patients have long faced barriers to accessing health care, including the racial wealth gap, transportation barriers, and the mistrust of doctors that followed the revelations of the Tuskegee syphilis experiment. So, even though the algorithm doesn’t take race into consideration directly, bias sneaks in when cost is used as a proxy for health.

In an article accompanying the study, sociologist Ruha Benjamin, author of Race After Technology, points out that while cost is "a seemingly benign choice of label" it ends up yielding "potentially life-threatening results."

“The design of different kinds of systems, whether we’re talking about legal systems or computer systems, can create and reinforce hierarchies precisely because the people who create them are not thinking about how social norms and structures shape their work. Indifference to social reality is, perhaps, more dangerous than outright bigotry,” Benjamin said in an email.

The point is, she wrote in her article, that "if individuals and institutions valued Black people more, they would not ‘cost less’, and thus this tool might work similarly for all.”

So, while racial bias can be reduced by changing the variables used to allocate healthcare, the underlying racial inequalities still need to be rooted out altogether.

When the researchers trained on alternative labels, measuring health through “active chronic conditions” or predicting “avoidable costs” such as emergency visits, the results were dramatic: "predicting health leads to the highest fraction of Black patients identified as high risk, predicting cost leads to the lowest, and predicting avoidable costs is somewhere in the middle,” the authors write.

The study also suggests that this problem would persist no matter the type of healthcare system, public or private. Obermeyer, one of the authors involved in analyzing the algorithm’s racial bias, told Motherboard that while “this [studied] population was entirely insured—either good commercial insurance or Medicare” the problems would “crop up anywhere” due to structural inequality and differential treatment faced by people of color across the globe.

In an email to Motherboard, Obermeyer said that cooperating with the manufacturer to apply the alternative labels proposed by the researchers led to an 84 percent reduction in bias—reflecting what the numbers would look like if Black people did not “cost less” in the eyes of a broken system.

Tagged:
healthcare
Insurance
black box
racial bias
algorithmic discrimination