Bias In Maternal AI Could Hurt Expectant Black Mothers

Black women experience worse maternal mortality rates than their white counterparts. Will machine learning make it worse or better?
Image: US Air Force/Staff Sgt. Josie Walck

Giving birth in the United States can pose a fatal risk. This stark reality became evident to me immediately after I was discharged from a seven-week stay at a public hospital in Ohio, 34 weeks into my pregnancy. There, a team of doctors had monitored my fetus, whose prognosis was uncertain, constantly warning me of the potential for my daughter to be stillborn and the risk of my own death. Weeks earlier, doctors had labeled me a “high-risk” mother: a 30-something African-American with fibroids.

Given the alarming disparity in maternal morbidity and mortality rates between Black and white women in the US, my daughter and I beat the odds. The World Health Organization estimates that about 60,000 expectant mothers in the US suffer birth complications, and close to 1,200 women die every year as a result; these numbers have more than doubled in recent years. It’s even worse for Black women: Between 2011 and 2014, Black women died at three times the rate of white women per 100,000 live births.

Subpar obstetric training and systemic bias are just two of the causes of this disparity. A 2017 ProPublica report revealed that Black women are more likely to experience risky childbirths regardless of education or class. New mother and tennis champion Serena Williams has become a prominent face of maternal morbidity after a blood clot scare during which her concerns were dismissed. And Black women are less likely to establish a rapport with their doctors, to be empowered to make necessary medical decisions, or to receive empathy from medical staff.

A staggering number of these deaths are preventable through better public health strategies, training, and access to care. But the world of tech may also play a role here. There is growing interest in using artificial intelligence in healthcare, including childbirth, where algorithms could help improve health outcomes for both mothers and their babies. But if the racial and gender bias within the technology is not addressed, it could perpetuate the cycle.

Machine learning, a form of AI, is already used to build self-driving cars and to create facial recognition photo tags on Facebook. In the healthcare industry, artificial intelligence is usually discussed in relation to big data: algorithms are trained on datasets of patient records to predict risks in patients and manage health outcomes. Philips, PeriGen, and EarlySense are just a few developers of obstetric decision support systems attempting to use artificial intelligence in perinatal and maternal care.

There are various methods of applying machine learning to obstetrics. Hewlett Packard created a monitoring system using artificial intelligence to predetermine questions physicians can ask high-risk patients based on patients’ recorded medical history. More modern forms of artificial intelligence use pattern recognition techniques to predict hypertension, preeclampsia and severe maternal morbidity in expectant mothers. One of PeriGen’s AI products, the Early Warning System, continuously monitors vital signs and other health indicators to identify possible threats to the mom or child. EarlySense, meanwhile, monitors a woman’s ovulation cycle and claims to help predict fertility dates.
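
To make the pattern-recognition approach concrete, here is a minimal sketch of the kind of model such systems are built around: a classifier trained on routine vitals that outputs a risk score for a condition like preeclampsia. The features, thresholds, and synthetic data below are illustrative assumptions, not any vendor's actual method.

```python
# Illustrative sketch only: a risk classifier trained on synthetic vitals.
# Feature choices and thresholds are hypothetical, not from any real system.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical cohort: systolic BP, diastolic BP, proteinuria, maternal age.
X = rng.normal(loc=[120, 80, 0.2, 30], scale=[15, 10, 0.3, 6], size=(1000, 4))
# Hypothetical labels: risk rises with blood pressure and proteinuria.
y = ((X[:, 0] > 140) | (X[:, 2] > 0.5)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Estimated probability of a high-risk pregnancy for a new patient.
patient = [[150, 95, 0.6, 34]]
print(f"Estimated risk: {model.predict_proba(patient)[0, 1]:.2f}")
```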

“One of the promising uses of machine learning in healthcare is the development of algorithms that can process patient data from doctor's notes to test results and pick up on patterns that may indicate the onset of dangerous conditions that can lead to maternal morbidity, such as infection and sepsis,” Serena Yeung, an incoming professor and researcher at the Stanford Artificial Intelligence Laboratory, told me in an email.
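
As a toy version of the notes-to-risk pipeline Yeung describes, the sketch below turns short clinical notes into bag-of-words features and flags possible sepsis. The notes and labels are invented for illustration; real systems rely on far richer language models and much larger corpora.

```python
# Toy illustration: flagging sepsis warning signs in (synthetic) doctor's notes.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical note snippets with labels (1 = sepsis warning signs present).
notes = [
    "fever 39.2, tachycardia, elevated lactate, suspected infection",
    "patient resting comfortably, vitals stable, no complaints",
    "hypotension unresponsive to fluids, rising white cell count",
    "routine postpartum check, incision healing well",
]
labels = [1, 0, 1, 0]

clf = make_pipeline(TfidfVectorizer(), MultinomialNB())
clf.fit(notes, labels)

new_note = "spiking fever overnight, heart rate 130, lactate trending up"
print(clf.predict([new_note]))  # [1] -> flag the chart for clinician review
```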

National endeavors have tried to address the morbidity and mortality crisis for Black mothers using data and machine learning. New York City health officials, in conjunction with the de Blasio administration, announced a campaign in July aimed at reducing racial disparities in maternal morbidity. The $12.8 million initiative will include implicit bias training for healthcare providers, in addition to data tracking techniques intended to reduce complications in expectant mothers of color.

But as objective as algorithms may seem on the surface, they are also subject to bias. Tech companies like Google have been criticized for racist AI; in one image labeling incident, Google Photos identified Black faces as gorillas. In 2016, Microsoft apologized after it inadvertently created a racist AI chatbot, and FaceApp similarly grappled with its own racist photo filters.

Allyson Morman, an anesthesiologist at Sinai Hospital in Baltimore, Maryland, told me in a phone call that biased machine learning techniques are a result of human error. “[T]he unfortunate historical perceptions of either women being melodramatic/malingering or of African-Americans as being ungrateful, undereducated, and/or pain seeking, now both apply. Their symptoms become overlooked or outright ignored altogether and the quality of their medical care suffers as a result.”

Bias is steeped in a history of structural and institutional racism in maternal health. In the early 20th century, federally funded programs supported eugenics and the forced or coerced sterilization of “undesirable” communities, including Black women and girls. By 1974, nearly 7,600 individuals had been forcibly sterilized in North Carolina alone, and earlier this year, following a 15-year legal battle, the 213 victims received the last of their settlement payments.

Black women like Keisa Carroll have inherited this burden. Carroll, a Black activist based in Ohio, said she knew early in her pregnancy that her first child had holoprosencephaly, a life-threatening disorder. She believes that had she been a white woman, instead of a Black teenager at the time, she would have received better patient care when she lost her child in 2005.

“They did not numb me. They did not care. They had no compassion and my baby had a gash on her head,” she said. “There was no human element.”

To combat algorithmic and real-life bias, Timnit Gebru, a postdoctoral researcher at Microsoft Research, argues that machine learning models should be trained on datasets that include sufficient data on Black women who experience maternal morbidity. She says the foundation of US healthcare is based on the belief that African-Americans were “inherently uninsurable due to their low life expectancy.” In fact, uninsured expectant mothers are three times as likely to experience maternal death as women with insurance.

“We know that African-American women experience [bias] most probably due to poorer care and bias in the system,” Gebru told me in an email. “However, people could just determine that this higher morbidity is inherent to their race without thinking of the factors that causes it.”
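
One concrete form Gebru's recommendation can take is a representation audit before training: count how many positive cases each group contributes, and reweight if one group is underrepresented. The column names and counts below are hypothetical.

```python
# Hypothetical representation audit; column names are illustrative.
import pandas as pd

df = pd.DataFrame({
    "race": ["Black", "white", "white", "Black", "white", "white"],
    "morbidity": [1, 0, 1, 0, 0, 1],
})

# Morbidity cases per group: a lopsided table signals the model may
# learn its patterns mainly from the majority group.
print(df.groupby("race")["morbidity"].agg(["count", "sum"]))

# One common mitigation: inverse-frequency weights so each group
# contributes comparably during training (e.g., passed as sample_weight).
weights = 1.0 / df.groupby("race")["race"].transform("count")
df["weight"] = weights * len(df) / weights.sum()
print(df)
```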

In an attempt to develop a new approach to predicting preterm births, one of the leading birth complications, researchers at Columbia University and UNC Charlotte used a dataset from the National Institute of Child Health and Human Development’s Maternal-Fetal Medicine Units Network to train a supervised learning model called a support vector machine.
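
The study's actual feature set and results are in the paper, but the basic supervised setup can be sketched in a few lines: encode each pregnancy as a feature vector, train a support vector machine, and evaluate on held-out cases. The synthetic data below stands in for the real cohort.

```python
# Minimal SVM sketch; synthetic data stands in for the NICHD-MFMU cohort.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Rows are pregnancies; columns are encoded variables (age, marital
# status, race, clinical history, ...) with a preterm/term label.
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Scaling matters for SVMs; an RBF kernel can capture nonlinear
# interactions a hand-picked linear model would miss.
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
svm.fit(X_train, y_train)
print(f"Held-out accuracy: {svm.score(X_test, y_test):.2f}")
```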

"Human discretion should not be taken out of the equation"

The researchers found that the support vector machine proved more effective than “hand-picked” linear models. As for racial bias: while the algorithm includes socio-demographic variables such as marital status and race in its calculations, the findings suggested that race was not as significant a variable as others. Of course, such outcomes could also be due to a lack of representation in the training dataset itself.
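
One way to probe that caveat is to evaluate the trained model separately for each group: comparable error rates across groups support the reading that race carries little signal, while a gap points back at the data. The labels and predictions below are invented for illustration.

```python
# Disaggregated evaluation on hypothetical held-out data.
import numpy as np
from sklearn.metrics import accuracy_score

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])   # hypothetical outcomes
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])   # hypothetical model output
group = np.array(["Black", "Black", "Black", "white",
                  "white", "white", "white", "white"])

for g in np.unique(group):
    mask = group == g
    acc = accuracy_score(y_true[mask], y_pred[mask])
    print(f"{g}: accuracy {acc:.2f} on {mask.sum()} cases")
```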

It is also important that those who are developing the algorithms include the insights of medical professionals and researchers who are experts on the issue of bias in healthcare, especially among African-American mothers. “Human discretion should not be taken out of the equation,” Gebru said. “It is very dangerous to blindly apply the results of these types of algorithms. It’s important to involve those who care, and have studied this particular issue in the development of these algorithms on the technical side as well.”

Morman encourages the continued use of algorithms, noting that “machine learning should guide decisions, not determine decisions. They allow us to provide the best care for our patients in the most efficient way. As long as it is executed well, it could be [a] great asset.”

Omotayo O. Banjo is Associate Professor in the Department of Communication at the University of Cincinnati, USA. She focuses on representation and audience responses to racial and cultural media.

This piece is part of a series of stories produced in partnership with The Plug.