On Tuesday, the US National Institutes of Health, the largest biomedical research agency in the world, announced that it would be ending a three-year moratorium on research funding for projects that involve making pathogens stronger and deadlier than they are in nature.
Known as “gain-of-function” studies, this type of research is ostensibly about trying to stay one step ahead of nature. By making super-viruses that are more pathogenic and more easily transmissible, scientists can study how these viruses might evolve and how genetic changes affect the way a virus interacts with its host. Armed with this information, scientists can try to pre-empt the natural emergence of these traits by developing antiviral medications capable of staving off a pandemic.
If genetically engineering viruses such as influenza, MERS, or SARS to be even stronger and more easily transmissible sounds like a recipe for some _Contagion_-inspired future hell, you’re not alone in your fears.
Even though the storage and use of deadly pathogens is strictly regulated, there’s always the risk they might fall into the wrong hands. In fact, last year former CIA director John Brennan cited “bio-threats” from genetically engineered biological warfare agents as one of the top existential risks facing the United States.
There’s also the more banal possibility that human error could result in the release of a super-pathogen. In 2014, for instance, several federal labs came under scrutiny by the Centers for Disease Control and Prevention after samples containing smallpox, anthrax, and avian flu were mishandled by lab workers. (It turned out no one had been exposed to the agents, though 75 workers at the anthrax lab were monitored for exposure.) Such incidents aren’t uncommon: between 2003 and 2009, there were 395 reported events that could have resulted in exposure to toxic agents, although these led to just seven infections.
Facing increasing public scrutiny over gain-of-function research, the Obama administration halted NIH funding for this type of work in late 2014 until a more robust assessment of its risks and rewards could be undertaken.
In 2016, the National Science Advisory Board for Biosecurity (NSABB) issued a draft report finding that most gain-of-function research didn’t pose a serious threat to public health. That conclusion drew largely on a massive, 1,000-page risk assessment by an independent contractor, which examined everything from the likelihood of a researcher’s glove being punctured to that of a criminal breaking into a lab.
The NSABB’s report also included a set of guidelines for approving gain-of-function research in the future, which was the basis for the stringent approval guidelines released by the NIH today.
After the Department of Health and Human Services (HHS) has assessed a proposed gain-of-function project against the parameters outlined in the report, it can recommend the project for NIH funding. Beyond that, the project will remain subject to review and oversight by HHS and the institution (such as a university) hosting the research.
While nobody is disputing that more oversight is a good thing, not everyone is convinced that gain-of-function research is worth the risk. As Harvard epidemiologist Marc Lipsitch told Nature following news of the end of the funding embargo, such experiments have “done almost nothing to improve our preparedness for pandemics, yet they risked creating an accidental pandemic.”