Update: On January 7, 2019, Nature Human Behaviour announced that the authors of the study featured in this article requested a retraction. The researchers note that errors in the study meant that the finding that fake news is just as likely to go viral online as the truth under certain conditions is not supported by the evidence. Read Motherboard's article on the retraction here. The original article continues below.
It's hard not to theorize fake news. The phenomenon, in which fabricated news stories gain relative mainstream traction, is so much at the center of the United States' current political crisis. If we can be so easily swayed by what is often clearly incorrect information, what then is democracy?
So, how fake news gains traction is obviously a crucial question, and one being pursued by more than just podcast hosts and op-ed columnists. It's an academic question as well. In particular, previous research has demonstrated that viral memes are likely to spread given the right combination of social network structure and limited attention spans.
In a paper published Monday in Nature Human Behaviour, Xiaoyan Qiu, a researcher at Shanghai Institute of Technology, and colleagues explore this dynamic through the lens of information quality. We all want quality information—excepting those likely to benefit from the spread of bad information, of course—but what happens to our ability to properly make quality judgements given overloaded network conditions and limited attention spans?
"Four centuries ago, the English poet John Milton argued that in a free and open encounter of ideas, truth prevails," Qiu and co. write. "Since then the concept of a free marketplace of ideas has been used to support free speech policies and even applied to the study of scientific research."
The marketplace analogy is key here. A marketplace of ideas implies competition, which in turn implies a natural selection based on the quality of the goods being peddled in that marketplace. The best ideas, those with the highest value, should rise to the surface while bad ideas wither. The "wisdom of the crowd," right? It's a nice idea, anyway, but Milton never saw the internet coming.
The model used by the researchers in the current paper is pretty simple. It relates just a few quantities (where "meme" is shorthand for a potentially viral unit of information): meme quality, meme popularity, attention, and information load. Attention here corresponds to the number of memes that a meme consumer can consider at once, while information load corresponds to the average number of memes reaching a consumer within a given amount of time.
Generally, the marketplace of ideas does enforce a selective pressure on information value, but this changes pretty dramatically as attention declines and as information load increases. You can see it in the two graphs above, where μ is a metric for information load and α is a metric for consumer attention. On the left, we can see that as information load increases to the point where new information is being introduced constantly (μ = 1), information quality makes virtually no difference when it comes to predicting the spread of a particular meme. Bad spreads just the same as good.
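The dynamic can be sketched as a toy simulation. To be clear, this is not the authors' actual model: the agent count, the feed mechanics, and the quality-weighted re-share rule below are all simplifying assumptions, meant only to make the roles of μ (information load) and α (attention) concrete.

```python
import random

def simulate(n_agents=50, steps=2000, mu=0.5, alpha=5, seed=1):
    """Toy meme-spreading sketch (not the paper's model).

    mu:    information load -- probability an agent injects a brand-new
           meme instead of re-sharing one already in its feed.
    alpha: attention -- how many recent memes an agent's feed can hold.
    """
    random.seed(seed)
    feeds = [[] for _ in range(n_agents)]   # each agent's limited feed
    quality = {}                            # meme id -> quality in [0, 1)
    popularity = {}                         # meme id -> times shared
    next_id = 0

    for _ in range(steps):
        agent = random.randrange(n_agents)
        if random.random() < mu or not feeds[agent]:
            meme = next_id                  # inject a new random-quality meme
            next_id += 1
            quality[meme] = random.random()
            popularity[meme] = 0
        else:
            # Re-share from the feed, biased toward higher quality --
            # the "good judgement" that limited attention can swamp.
            weights = [quality[m] for m in feeds[agent]]
            meme = random.choices(feeds[agent], weights=weights)[0]
        popularity[meme] += 1
        # Broadcast to a few random followers; feeds keep only alpha memes.
        for follower in random.sample(range(n_agents), 5):
            feeds[follower] = (feeds[follower] + [meme])[-alpha:]

    return quality, popularity
```

In this toy version, raising mu toward 1 means nearly every share is a fresh random meme, so the quality-biased re-share step rarely fires and popularity decouples from quality; lowering mu or raising alpha should let quality reassert itself, which is the pattern the graphs describe.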
So, from this we can reach a naive conclusion: if, say, Facebook were to just limit meme volume, it would have the effect of returning some natural discriminatory power to the network. Facebook then wouldn't have to wield editorial power over content in the sense of declaring this good or that bad; it would just have to cull memes wholesale, so that there are fewer of them. The good judgement of people would take over from there.
The catch is that by just sort of arbitrarily culling memes from a network we also reduce the diversity of memes that are available to be parsed by users in the first place. From the perspective of meme quality, that's bad too. So, the key is in finding some balance—maximizing meme diversity while also minimizing information load. Fortunately, this may not be as hard as it sounds.
"Currently, bot accounts controlled by software make up a significant portion of online profiles, and many of them flood social media with high volumes of low-quality information to manipulate public discourse," Qiu and co. write. "By aggressively curbing this kind of abuse, social media platforms could improve the overall quality of information to which we are exposed."