Across the US, the government’s early attempts to distribute the COVID-19 vaccine have been, at best, underwhelming.
In Ohio, nearly 900 doses of Moderna’s COVID-19 vaccine went to waste because a provider stored them at the wrong temperature. In New York City, clinics were forced to hold onto vaccine doses for weeks instead of administering them to high-risk patients, or even throw them out, because they couldn’t find anyone who met the state’s stringent criteria for the first phase of inoculation. Florida, meanwhile, partnered with the event ticket platform Eventbrite to manage vaccine appointments, resulting in a confusing, biased, and scam-prone distribution system.
Distributing the limited number of COVID-19 vaccines to the people who most need them is a daunting logistical challenge, and in many cases, the bureaucratic, human-run systems charged with overseeing it have fallen short. In some states, broken and confusing scheduling systems have driven frustrated residents to write code and build their own volunteer websites for booking vaccine appointments.
It’s the kind of problem, some tech firms say, that is ripe for artificially intelligent overseers—and it may be the very thing needed to jump-start a lucrative new era of automated health care decision-making and delivery.
“I think it did, in many ways, take the COVID-19 pandemic to put that rocket fuel behind AI” in health care, Dr. John Showalter, the chief product officer for Jvion, a clinical AI company that’s been operating since 2011, told Motherboard. “I feel like we’re right on that precipice. 2021 is going to have a lot of reports out about how clinical AI helped with COVID-19. By 2025, people are going to be like ‘clinical AI, yawn’” because it’s so ubiquitous.
There is a spectrum of use cases for AI in vaccine distribution. Health systems are already using AI chatbots from companies like Hyro and Praktice AI to field calls from the large number of patients inquiring about whether they’re eligible for the vaccines, and to schedule appointments and follow-ups.
Big tech companies like Google and Microsoft have developed vaccine management systems that incorporate AI at various levels, including planning trucking routes and maintaining dose temperatures. But the potential applications that are generating the most excitement, and skepticism, are AI tools designed to automate or influence decisions about where vaccines should go in the country—and who should get them.
In the early months of the pandemic, California asked companies to propose technological solutions to problems like COVID-19 test shortages. Aible, an AI startup based in the state, offered—for free—to create algorithmic models that would identify who to prioritize in testing in order to save lives and reduce the pandemic’s economic impact. State officials didn’t respond to the offer, Aible CEO Arijit Sengupta told Motherboard, but he said the company is now talking with one of the major vaccine makers about creating a similar system to guide vaccine supply chains and prioritization.
“Matching up the demand and supply is something that’s not happening right now,” he said. “It would be better than what we are doing today—it would not be perfect, nothing is ever perfect—but what is good about a system like this is it learns on a daily basis and it adjusts.”
Most states are currently in the first phases of vaccination, where almost all available doses are reserved for health care workers, people over 65, and some individuals with chronic conditions. But once those populations are inoculated, states will have to make difficult decisions about where to send doses and who to prioritize next.
Jvion has created models that map which areas of the country and which population groups are most likely to experience severe effects or die if exposed to a wave of COVID-19 and, separately, which areas are the highest priorities for vaccine distribution based on the CDC’s prioritization recommendations and other health and socio-economic factors. The company has sent analyses of millions of patients to its customers, which include health systems, to guide their vaccine outreach.
Take Perry County, Pennsylvania. One of Jvion’s models, which incorporates the CDC’s guidelines, labels the county a very low priority for vaccination. But its other model, measuring community vulnerability to COVID-19, considers Perry County at the highest level of risk for severe, community-wide morbidity.
The reasons for the disparity aren’t necessarily intuitive. They draw on data that health systems likely aren’t considering and algorithmic pattern matching that is, by definition, unhuman. Why is Perry County at such high risk for COVID-19 morbidity? According to Jvion’s models, the fact that a county has “low commercial retail availability” and “low commercial/industrial job density” are influential risk indicators. Factors that influence the calculated risk level in other counties include the rate at which residents commute more than 60 minutes to work and the prevalence of environmental health hazards.
Jvion’s models are trained on a database of 36 million “independent lives,” Dr. Showalter said. That includes people’s medical claim records, socio-economic information about where they live from agencies like the Environmental Protection Agency and the U.S. Department of Agriculture, and credit scoring data from companies like Experian and TransUnion. When the company built the models, there was nowhere near enough information available about actual COVID-19 cases, so Jvion instead trained them primarily on data about the health care trajectories of people with respiratory conditions and illnesses like influenza.
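Jvion’s actual models are proprietary, but the basic shape of this kind of community risk scoring can be sketched in a few lines. The feature names, values, and weights below are entirely hypothetical, loosely echoing the indicators mentioned above (retail availability, job density, long commutes, environmental hazards); a real model would learn its weights from the training data described in the article.

```python
import math

# Hypothetical county-level features, scaled 0-1. These are illustrative
# stand-ins, not Jvion's real inputs or Perry County's real data.
COUNTY_FEATURES = {
    "Perry County, PA": {
        "retail_availability": 0.2,   # low commercial retail availability
        "job_density": 0.15,          # low commercial/industrial job density
        "long_commute_rate": 0.4,     # share commuting > 60 minutes to work
        "env_hazard_index": 0.6,      # prevalence of environmental hazards
    },
}

# Hypothetical learned weights; a negative weight means higher values of
# that feature are associated with lower morbidity risk.
WEIGHTS = {
    "retail_availability": -1.5,
    "job_density": -1.2,
    "long_commute_rate": 2.0,
    "env_hazard_index": 1.8,
}

def morbidity_risk_score(features: dict) -> float:
    """Weighted sum squashed to a 0-1 score, as a logistic model would do."""
    z = sum(WEIGHTS[name] * value for name, value in features.items())
    return 1 / (1 + math.exp(-z))

for county, feats in COUNTY_FEATURES.items():
    print(county, round(morbidity_risk_score(feats), 3))
```

The point of the sketch is that a low-population rural county can score as high risk even though no single input looks alarming on its own: the model aggregates many weak socio-economic signals into one number.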
Jvion’s models are not currently being used as automatic decision-making systems—determining who gets prioritized for vaccines without human oversight. But AI is increasingly being used to inform a variety of triaging problems in health care, and experts say a regulatory action plan for medical AI that the Food and Drug Administration (FDA) published in January is a long-awaited signal that the agency is preparing to open the doors to a new array of medical AI tools.
Currently, in order for AI systems to be eligible for FDA approval as medical devices, they must be static tools—the algorithmic model that comes out of the box doesn’t change. But one of the strengths of this kind of technology is that models can constantly be trained on new data, tweaked, and improved over time. The agency’s new action plan calls for it to develop guidelines for allowing, and regulating, evolving systems.
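The regulatory distinction between a “locked” model and one that keeps learning can be illustrated with a deliberately simple stand-in for a clinical model. Everything below is synthetic and hypothetical—a running-mean threshold classifier, not any real medical device—but it shows the behavior the FDA has to grapple with: the same input can be classified differently after the model absorbs new data in the field.

```python
class AdaptiveThresholdModel:
    """Flags a reading as abnormal if it exceeds a learned baseline mean
    by a fixed margin. The baseline updates with every new batch of data,
    so the model's behavior drifts after deployment."""

    def __init__(self, margin: float = 10.0):
        self.margin = margin
        self.mean = 0.0
        self.n = 0

    def update(self, readings: list) -> None:
        # Incremental mean update: the model keeps changing in the field,
        # which is what a static, "locked" FDA-approved device may not do.
        for r in readings:
            self.n += 1
            self.mean += (r - self.mean) / self.n

    def predict(self, reading: float) -> bool:
        return reading > self.mean + self.margin

model = AdaptiveThresholdModel()
model.update([98.6, 98.2, 99.0])      # initial training data
print(model.predict(110.0))           # flagged: far above the learned baseline
model.update([108.0, 109.0, 110.0])   # new data shifts the baseline upward
print(model.predict(110.0))           # the identical input is no longer flagged
```

A regulator approving the model as it shipped has no guarantee about how it behaves six months later—hence the FDA’s interest in guidelines for monitoring, rather than simply freezing, evolving systems.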
Monitoring continuously changing algorithms that contribute to life-or-death decision making is a tricky proposition, though, and one that has kept the FDA from moving more quickly to accept these kinds of systems as medical devices.
But the COVID-19 pandemic has sped up health care providers’ adoption of AI tools—such as home monitoring systems for vital signs that reduce the chances of virus transmission. These tools may not need FDA approval as medical devices, but nonetheless influence health care decision making, Sara Gerke, a research fellow at Harvard Law School’s Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics, told Motherboard. That increased exposure will likely quicken the industry’s adoption of more advanced, higher-stakes tools, despite the many legal and ethical issues that remain.
“I personally believe that AI has potential for being used for allocation of vaccines, but that’s for the future,” Gerke said. “I would not trust the AI right now because there will be so many hidden biases in the data. First of all, what kind of data do you even use? Even if you take it from the electronic health records data then you already have a bias because in Black communities, many can’t even go to the doctor. Right now using it, I find it very difficult.”