Doctors Are Very Worried About Medical AI That Predicts Race

Researchers warn that systems that predict patients’ race from X-rays and CT scans could fuel medical discrimination.
Janus Rose
New York, US
X-rays hanging up in a medical office. Getty Images.

Researchers are trying to puzzle out one of the most disturbing recent findings in the field of machine learning: why AI systems can accurately predict a patient’s self-reported race solely from medical images like CT scans and X-rays.

A new study published in the medical journal Lancet Digital Health analyzed multiple machine learning models, using both public and private datasets of patient medical images including mammograms, chest X-rays, and CT scans. But while the researchers were able to confirm the models’ ability to predict race with high accuracy, they were stumped as to what exactly enables the systems to consistently guess correctly.

The study used multiple datasets of medical images to try to isolate which features of the images were responsible for the systems’ high accuracy. But like a previous study of the phenomenon, the researchers found that even factors like disease and physical build were not strong predictors of race—in other words, the algorithmic systems don’t seem to be using any particular aspect of the images to make their determinations.

“Although an aggregation of these and other features could be partially responsible for the ability of AI models to detect racial identity in medical images, we could not identify any specific image based covariates that could explain the high recognition performance presented here,” the researchers wrote.

This is extremely worrying given that medical algorithms have been shown to produce different outcomes for patients of different races. Several previous studies have shown that Black and female patients are less likely to receive an accurate diagnosis from automated systems that analyze medical images. This reflects the long-standing problem of medical racism, which predates the usage of machine learning and has been shown to disproportionately affect Black patients seeking care.

Naturally, the researchers are worried that this will result in algorithmic systems that can accurately predict race and then deliver disparate results based on that determination.

“The combination of reported disparities and the findings of this study suggest that the strong capacity of models to recognise race in medical images could lead to patient harm,” the study’s authors write. “In other words, AI models can not only predict the patients' race from their medical images, but appear to make use of this capability to produce different health outcomes for members of different racial groups.”

AI ethicists have repeatedly warned about the dangers of automated systems that determine sensitive information like race and gender based on physical appearance, describing them as part of an algorithmically-mediated resurgence of racist pseudoscience. The paper’s authors similarly warn that implementing these systems will have dire effects on marginalized populations, especially those that have historically faced medical discrimination.

“To conclude, our study showed that medical AI systems can easily learn to recognise self-reported racial identity from medical images, and that this capability is extremely difficult to isolate,” the authors wrote. “We strongly recommend that all developers, regulators, and users who are involved in medical image analysis consider the use of deep learning models with extreme caution as such information could be misused to perpetuate or even worsen the well documented racial disparities that exist in medical practice.”