In a perfect world, news that is true would triumph over news that is false. But, as researchers at the Massachusetts Institute of Technology recently found, quite the opposite is true on Twitter.
One of the largest studies of link sharing on Twitter ever conducted shows that false information travels much faster and reaches many more people than true information on the platform. In fact, it took the truth about six times as long as false information to reach 1,500 people on Twitter, the researchers estimate.
Even then, truthful stories rarely spread to more than 1,000 people, while clicky false stories can reach up to 100,000 people, the researchers found. “Falsehoods were 70 percent more likely to be retweeted than the truth,” the researchers wrote, in a paper to be published in the journal Science on Friday.
In the multi-year study, MIT researchers looked at 4.5 million tweets, three million Twitter users, and 126,000 stories shared over nine years on Twitter. The researchers separated truth from rumor using six independent fact-checking organizations: snopes.com, politifact.com, factcheck.org, truthorfiction.com, hoax-slayer.com, and urbanlegends. They then compared how fast and how far false and true news traveled.
“Falsehood diffused significantly farther, faster, deeper, and more broadly than the truth in all categories of information,” the researchers reported.
The results suggest false news on Twitter may be able to change minds, move markets and influence behavior more powerfully than factually accurate news written by credible, trained journalists.
“The most surprising finding was how clearly we could see the difference between how false and true news spreads. I didn’t realize it was going to be so clear and prominent,” said Soroush Vosoughi, the study’s lead author.
“Falsehood diffused significantly farther, faster, deeper, and more broadly than the truth”
What about bots?
Further, the researchers found that automated bots, which have made headlines for their role in spreading false information during the 2016 presidential election, played a minimal role. When the researchers used a bot-detection algorithm to filter out automated accounts and isolate human users, their conclusions didn't change. Bots appeared to spread true news at the same rate at which they spread false news.
“False news spreads more than the truth because humans, not robots, are more likely to spread it,” the researchers pointed out.
In other words, the problem, if you want to call it that, is you.
“I think our study shows that, if you’re blaming all the problems on bots, you’re misallocating resources on how to fix the problem,” said Vosoughi.
Vosoughi started investigating the spread of misinformation as a graduate student in Cambridge, inspired by his experience during the 2013 Boston Marathon bombing, which unfolded on TV. “I remember very clearly that me and my friends were using social media as our main source of news. It turns out, a lot of the stuff we were seeing were rumours,” he said.
In his years of studying the problem since, the data has helped him draw a conclusion that is intuitive for anyone on social media: fake news is created to be shared, and for that reason it is simply more novel, provocative, and juicier than true news, Vosoughi said.
“[The study] confirms much of what we are seeing in social media overall, which is that headlines sell the story and false news is both less expensive to make and more interesting to read, as has always been the case with tabloids,” said Joan Donovan, the project lead on media manipulation at the New York-based research group Data & Society.
Donovan has been investigating the spread of false information since shortly after the Marjory Stoneman Douglas High School shooting in Parkland, Florida, on February 14th. False rumors that the shooter, Nikolas Cruz, was a member of a paramilitary white supremacist group, and that the students were “crisis actors,” were quick to spread.
Catastrophic events such as the massacre in Parkland often trigger flurries of misinformation. “There is a timeline in which experts are still investigating, and there is sometimes a 12- to 24-hour window where lots of information and misinformation travels, as people speculate who, or what, was the cause of the crime or the cause of the damage,” Donovan said.
Donovan was not surprised that humans are the primary culprits when it comes to spreading rumors and falsehoods. “When people share things they suspect are false, they don’t do it necessarily because the information is true or false,” she said, but “because it says something about their participating in the political discussion.”
Why people RT
People share things for any number of reasons; a retweet is powerful, regardless of what people say in their Twitter profiles. “It’s particularly troubling because even though many people put in their profile, ‘RT not equals endorsement,’ it really does,” said David Lazer, professor of political science and computer science at Northeastern University. “There’s a social endorsement that comes when you tweet or retweet content.”
But Lazer cautioned that it’s very difficult for an algorithm to definitively single out bots. “Some bots are really easy to detect, but it wouldn’t be difficult to produce bots that, en masse, look very human-like,” he said.
Duncan Watts, principal researcher at Microsoft Research, said panicking about “fake news” on Twitter would be the wrong reaction to the study. Falsehoods are not necessarily shared because they are false but because they confirm an already-held view.
“It is highly likely that much of the consumption and sharing of fake content is done by people who have already made up their minds, and who are engaging with the content more as a form of entertainment [or] cheerleading than as information,” he said.
Cover image: Leslie Xia