In an age when fake news, disinformation campaigns, and misinformation run rampant across social media, it can be difficult to know where to place your trust. This is especially true when it comes to science.
In school we’re often taught that science is an objective, infallible conduit for truth about the natural and social world and that scientists, as its practitioners, are equally free of bias or agenda. But as the COVID-19 crisis has thrust scientists further into the limelight, the public has been able to watch their discussions, blunders, and eventual achievements with higher fidelity than ever before, revealing that scientists are just as susceptible to perpetuating misinformation as the rest of us.
This is something that Jevin West and Carl Bergstrom, associate professor of data science and professor of biology, respectively, at the University of Washington (UW), know firsthand through their work at UW’s DataLab studying the spread and creation of biased or fabricated science. Indeed, the COVID-19 pandemic has seen misinformation spread far and wide, including claims that COVID-19 vaccines will impact fertility or that the illness itself is no more deadly than the flu.
In a new paper published this April in the journal PNAS, West and Bergstrom wrote that science is at a crucial moment in its battle against both external and internal misinformation, with the fate of the public’s trust in science on the line. And while COVID may have brought these issues into a new light, West and Bergstrom told Motherboard they’ve been a part of the scientific system for much longer.
Take, for example, the adage of “publish or perish” that haunts academic circles and places the creation of new work above the quality or ultimate impact of that work. In many cases, the more articles you publish in prestigious journals, the larger your claim to fame, explain West and Bergstrom.
“In some countries there is direct payment for articles that get into these higher profile journals or published in general,” says West. This used to be the case in China, for example, where universities and research institutions paid bonuses of up to $165,000 for published work, but the practice has recently been banned.
“Even if you take away the money equation,” West continues, “scientists want to be the first one to discover something. They want to be the prestigious individual invited as a keynote to a conference.”
This desire for academic clout can also push some scientists to tell white lies when discussing their work with the media or public.
A claim a scientist wouldn’t make in their paper, for example, can often slip into a university press release about it, and that press release may be turned around verbatim by science communicators or journalists. Given the pace of academic publishing (by some estimates, 200,000 papers were published last year on COVID alone), West and Bergstrom say there’s little room for nuance in these interactions.
This react-fast-think-later mentality can create overblown headlines like “Horrific video shows coronavirus particles from runners can infect you” (which West and Bergstrom write in their paper was not based on published research) that spark fear more than they do understanding. This can harm not only the public's understanding of scientific issues but can even impact how scientists and science communicators understand a subject too, say West and Bergstrom, creating a cycle of confusion and misinformation.
“The really big results about the vaccine trials or something like that are covered in the New York Times as well as published in the medical journals,” says Bergstrom. “And I might see it in the Times before I get to the medical journals the same day.”
The same attention economy that drives likes on tweets and clickbait science headlines is also driving a bias in what research even gets published to begin with, say West and Bergstrom.
Bergstrom says it’s easy to see what percentage of published results are negative (about 15 percent across disciplines), “but what you really want to know is what fraction of the negative results are published.”
“Positive results are sort of sparse in the space of all results; I mean, most things you do, nothing happens,” Bergstrom continues. “[But] it's not so interesting to read about these negative results.”
And even if nuanced, negative results do make their way into journals, they’re unlikely to attract the same attention, or number of citations, as splashy positive results. This can effectively banish them to the bottom of the algorithmic search well, the pair say, making them difficult for even fellow researchers to find.
Does this mean that science as a whole is beyond repair? Definitely not, said West, but that doesn’t mean there isn’t room for change.
One way to do this, said Bergstrom, is to change the incentives that drive scientific misinformation, including putting more emphasis on the quality of published work than on its quantity. He also suggested that preregistration for studies, in which a publication agreement is made based on a study’s design rather than its results, could help as well.
West added that evolving how social media is used to communicate science could play a huge role in injecting much-needed nuance back into the conversation, citing extended Twitter threads as one way this is already beginning to happen.
Driving misinformation out of science isn’t going to happen overnight, but West and Bergstrom say the hard work it will take to protect the reputation of, and trust in, these disciplines will be worth it.
“Neither Carl nor I think science is broken,” says West. “We just think there are some ways to further improve it.”