
Science Can't Be Trusted and We Don't Know How to Talk About It

If science can’t trust itself, how can researchers expect the already-fickle public to care about their work? That’s the massive question flying around the science web right now, the result of one issue that’s steadily becoming glaring: bias is becoming increasingly prevalent in scientific research. It’s a real problem for science journalists as well: How can we engage a public that has no faith in science itself?

In a great look at the bias question in Nature, Daniel Sarewitz observes that bias has been a particular issue in the biomedicine world, and points to early-’90s clinical drug trials as the tipping point where obvious biases started becoming prevalent. Perhaps unfortunately, the problem then found an easy scapegoat: pharmaceutical companies footing the bill for research that will, they hope, prove that their drugs work.

This led to what seemed like an easy fix: stricter rules about declaring where research funding came from, and attempts by more prestigious journals to chase out work with questionable backers. But it’s become clear that funding-related biases weren’t the whole issue: research selected for publication has increasingly trended towards false positives. Sarewitz explains that the bias problem isn’t just about landing funding for your research; it’s the larger social pressure on scientists, who spend much of their lives pursuing the minutiae of cells or brains or climate, to produce research that pays off and makes Important Contributions.
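
To make that false-positive trend concrete, here’s a quick toy simulation of publication bias (my own sketch, with made-up numbers, not anything from Sarewitz’s piece): if only a small fraction of tested hypotheses are actually true and journals only publish “significant” results, a surprisingly large share of the published record ends up being wrong.

```python
import random

def publish_only_positives(n_studies=100_000, true_rate=0.1,
                           power=0.8, alpha=0.05):
    """Toy model of publication bias: only 'significant' results get published.

    true_rate: assumed fraction of tested hypotheses that are actually true
    power: chance a true effect produces a significant result
    alpha: chance a null effect produces a (false) significant result
    """
    true_pubs = false_pubs = 0
    for _ in range(n_studies):
        if random.random() < true_rate:   # hypothesis is actually true
            if random.random() < power:
                true_pubs += 1            # a real finding gets published
        elif random.random() < alpha:
            false_pubs += 1               # a false positive gets published
    return false_pubs / (true_pubs + false_pubs)

random.seed(0)
print(f"Share of published findings that are false: {publish_only_positives():.1%}")
```

Under these assumed numbers, roughly a third of published findings are false positives; every knob here (the base rate of true hypotheses, statistical power, the significance threshold) is an assumption you can adjust, but the selection effect survives any reasonable setting.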

Like a magnetic field that pulls iron filings into alignment, a powerful cultural belief is aligning multiple sources of scientific bias in the same direction. The belief is that progress in science means the continual production of positive findings. All involved benefit from positive results, and from the appearance of progress. Scientists are rewarded both intellectually and professionally, science administrators are empowered and the public desire for a better world is answered. The lack of incentives to report negative results, replicate experiments or recognize inconsistencies, ambiguities and uncertainties is widely appreciated — but the necessary cultural change is incredibly difficult to achieve.

And that’s the rub, isn’t it? Even as more people acknowledge the problem and suggest fixes (like creating journals dedicated to null results), it’s extremely difficult to get the establishment to change. Ideally, the system would be simple: novel, well-reasoned hypotheses and careful replications of past research would get support, and the results, positive or negative, would have nothing to do with whether the work gets published. But that’s not the case.

Science journals want the latest, craziest, most incredible research to boost their own circulation, and it helps on the journal side for those results to be positive. Let me offer a rather extreme example: if a journal ran a paper titled “The potential for Mars rock to cure cancer,” it would certainly get coverage. But readers are going to cry foul if the results are negative; of course space rocks don’t cure cancer, right? In this situation, an attention-getting paper somehow feels more credible if its results are positive.

There’s also the feeling of finality when researchers can state that, for example, a certain chemical imbalance is the cause of a mental disorder, as opposed to saying that results are inconclusive. And with evidence mounting that new research rarely references the old, and that science essentially isn’t building on itself, there’s no support structure for methodical follow-up investigation when results turn out to be false. In all of those cases, there’s pressure for positive results, even if those results are actually false.

But let’s take this way back to the beginning. The one-directional bias isn’t just bad for the world of science: it’s very bad for the larger world, where science literacy is already pretty scarce.

Everything presented by humans is biased in some way, even when we’re trying our damnedest to be objective. That’s why it’s so important for journalists to work both sides of the story, gathering enough sources to present a well-balanced picture to readers. That’s doubly true when talking about science, where it’s unrealistic to expect reporters to be experts in cutting-edge research across multiple disciplines.

Because of that, bias should be relatively easy to deal with: talk with enough people about the surprising results of some new paper, and eventually sources’ varying viewpoints will pull towards a bias-free middle ground. But that approach only works if bias is random. With the tide turning in one direction, it gets harder to find people who can reliably act as a counterbalance. And how’s this for a thought: if a balancing viewpoint is harder to find, then those people are likely to get more press, which increases the reward for some folks to be needlessly contrarian.
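
Here’s a rough sketch of why that matters (again, my own toy illustration, not something from the research): averaging over sources with random biases converges on the truth, while averaging over sources who all lean the same way just makes you confidently wrong.

```python
import random

def poll_sources(n_sources, bias_mode, noise=1.0, lean=0.5):
    """Average n_sources noisy estimates of a true effect of 0.0.

    bias_mode='random': each source's bias is drawn symmetrically around
    zero, so errors cancel out as the number of sources grows.
    bias_mode='one-directional': every source leans by `lean` the same
    way, so no amount of extra sourcing recovers the truth.
    """
    readings = []
    for _ in range(n_sources):
        bias = random.gauss(0, noise) if bias_mode == "random" else lean
        readings.append(bias + random.gauss(0, noise))  # true effect is 0.0
    return sum(readings) / len(readings)

random.seed(42)
for n in (5, 50, 500):
    print(f"n={n:3d}  random bias: {poll_sources(n, 'random'):+.3f}   "
          f"one-directional: {poll_sources(n, 'one-directional'):+.3f}")
```

More sources shrink the noise either way; only in the random case do they shrink the error.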

Of course, it’s easy to spiral down a rabbit hole of paranoia in a discussion like this, and I hope it doesn’t spawn a whole lot of tinfoil-hattery. Writers have to trust that doing thorough reporting – and refraining from making the kind of blanket, definitive “here’s the cure!” statements that make for good headlines – will pay off in accuracy just as it always has. Still, the question looms: if one-directional bias in science makes reporting increasingly difficult, what’s a public that’s already being pandered to with sensationalist monkey business to do? Well, we’ll probably need more research.

Follow Derek Mead on Twitter: @drderekmead.
