How 'Robo-Journalists' Could Flood the Internet with Fake News

Certain media outlets already use software to pump out content, and there's nothing to stop propagandists doing the same.

(Top photo: Scottish First Minister Nicola Sturgeon on a television monitor during a press conference. Photo: Jane Barlow/PA Wire/PA Images)

For financial journalists at the Associated Press (AP), earnings season is no longer the bore it once was. The newswire now uses software to churn out more than 3,500 earnings reports each quarter – essentially covering the whole US stock market – freeing its writers to concentrate on more cerebral tasks.

So-called robo-journalism – or "automated news production" – isn't the fancy of wishful futurists; it has already arrived. But for the time being, at least, reporters can breathe easy: AP, for example, has not let any of its staff go as a result of adopting the technology. Instead, the software is supplementary.

The benefits of tasking robots with data-heavy grunt work are obvious. By parsing vast mines of numbers, news organisations can scale up the volume and speed of their output in areas such as financial and sports reporting; and, assuming the automatons have been correctly configured, they also remove the margin for human error. With that, of course, they remove human scrutiny of the data – although machine learning can filter out outliers and anomalies, and it is improving day by day.
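
As a rough illustration of that kind of sanity check – not any news organisation's actual pipeline, which would more likely use trained models – a simple statistical filter might flag a figure that deviates sharply from recent history before it ever reaches a story template. The z-score test, the 3.0 threshold and the sample figures below are all illustrative assumptions.

```python
# Illustrative sketch only: flag a data point that deviates sharply from
# recent history before it is handed to an automated story template.
# The z-score test and the 3.0 threshold are assumptions for illustration,
# not any vendor's real anomaly filter.
from statistics import mean, stdev

def is_outlier(history: list[float], latest: float, z_threshold: float = 3.0) -> bool:
    """Return True if `latest` sits more than z_threshold sigmas from the history mean."""
    if len(history) < 2:
        return False  # too little history to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return latest != mu  # flat history: any change is suspicious
    return abs(latest - mu) / sigma > z_threshold

# Hypothetical quarterly earnings-per-share figures for one company
past_eps = [1.02, 1.05, 0.98, 1.10]
if is_outlier(past_eps, latest=9.85):
    print("Hold for human review: figure is a statistical outlier.")
```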

Mind you, behind every robot there is a person with motives and intentions. Before AP can pump out myriad financial reports, somebody must create the templates whose blanks the software fills in to "write" an appropriate story. And someone has to decide which companies' reports will be covered in the first place. These are all human inputs.
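
To make that concrete, here is a minimal sketch of what template-driven story generation can look like. The template wording, field names and figures are hypothetical – they are not AP's or Automated Insights' actual formats – but the division of labour is the point: everything outside the braces is a human input.

```python
# Hypothetical earnings template: the braces are the only part the
# software "writes"; the phrasing and thresholds are human decisions.
EARNINGS_TEMPLATE = (
    "{company} reported earnings of ${eps:.2f} per share for the "
    "{quarter} quarter, {verb} analyst expectations of ${estimate:.2f}."
)

def write_story(data: dict) -> str:
    """Fill the template's blanks from one row of structured data."""
    verb = "beating" if data["eps"] > data["estimate"] else "missing"
    return EARNINGS_TEMPLATE.format(
        company=data["company"],
        eps=data["eps"],
        quarter=data["quarter"],
        verb=verb,
        estimate=data["estimate"],
    )

print(write_story({
    "company": "Acme Corp",   # hypothetical company and figures
    "eps": 1.42,
    "estimate": 1.35,
    "quarter": "third",
}))
```

Notice that nothing in the sketch verifies the data it is handed; change the input and the same code will "write" a very different story.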

It's not hard to see how automation could, in the wrong hands, be appropriated for pernicious ends. Buy up a catalogue of web domains, create the necessary templates, feed in unverified data that paints, say, the government in a positive light – maybe heavily massaged crime stats or unemployment figures – and flood the internet with propaganda. And, in doing so, set – or sway – the public agenda.

While Automated Insights – the company that supplies AP's platform, Wordsmith – doesn't have a strict policy stipulating whom it can and can't sell its products to, Joe Procopio, its chief innovation officer, says it only sells to established media organisations. He makes the case that parties with corrupt agendas tend not to have much data to begin with, and that it's easier for humans to tap out fake news prose manually than it is to automate the process.

"If you're going to forge the data, whether you handwrite stories, use interactive charts and graphs or use automation to convey a message from that data, it's difficult for us to police that, in the same way that Microsoft can't police people putting bad data into Excel," he says. "What we offer is just a tool."


While propagandists have yet to employ robo-journalists per se, automation is already being used to disseminate disinformation. Recent research suggests that as many as 15 percent of Twitter accounts are bots, millions of which share links to fake news articles written by people. But surely the public is discerning enough to distinguish the trustworthiness of reporting from traditional news outlets such as the BBC from the claims of hyper-partisan blogs like Truthfeed?

Gillian Bolsover of the Oxford Internet Institute, which looks into "computational propaganda", says it can be difficult to tell which sources are credible, even for researchers. "There are several problems," she says. "Where do you draw the line between scandal and opinion? How do you know what is a credible news source when these sources can edit their own Wikipedia pages? The problem is that, for ages, the internet and social media have helped create echo chambers where people are mostly exposed to information that matches their pre-existing opinions and biases. We are now seeing these echo chambers become fortified and made solid."

Advertisement

It doesn't help that Google's algorithms have been found wanting. As recently as this month, the search engine had to remove a result from its Featured Snippets tool that suggested Obama had planned a Communist coup d'état. Even today, typing "is Obama plan" brings up the predictive search "is Obama planning a coup". The problem is that, without human oversight, the search giant's algorithms give undue precedence to spurious sources that happen to rank highly.

"That is really troubling because it's getting between people and information as they search for truth and facts," says Jonathan Albright, an assistant professor at Elon University. "Even if they don't choose to pursue that search query, these things that are popping up are injecting huge amounts of bias into people's searches when they want clarification on an issue."

In mapping out the ecosystem of right-wing fake news sites, Albright found that around a quarter of the traffic coming to a sample of these sites was direct, with some 50 percent arriving from Facebook. In other words, millions of visits come from people seeking out bogus information that supports their existing worldview – their echo chamber – before these messages are amplified on social media.

Agenda-setting and reshaping public opinion through the media are nothing new: organisations spend billions every year attempting to gull hacks, influence opinion and even shape public policy. Will Moy founded the fact-checking charity Full Fact after his time working in parliament, where he saw the spurious claims of press releases and unverified reports taken at face value – and even influencing lawmaking. Mevan Babakar, a digital products manager at the non-profit, says mistruths are propagated in the media at all times, but that the problem is heightened during periods of political change, such as the ongoing Brexit process and the recent US presidential election.

For instance, according to Vote Leave director Dominic Cummings, the campaign served a billion "dark ads" on social media in the run-up to the Brexit referendum. "The worrying thing about that, and I'm sure the other side did it too, is that we don't know who they targeted and we don't know what they said or what the main message was – and neither does the Electoral Commission," says Babakar.

"You hear technologists say that 'in X number of years artificial intelligence will surpass human intelligence in various ways'. But there are some fundamental limitations. It can't necessarily spot a story or provide the analysis that a journalist can."

Was this the part Cambridge Analytica – the data analytics company co-owned by billionaire Trump donor Robert Mercer – played in supporting the Leave campaign? The firm, which claims to have helped Trump secure his presidential win, also claimed in February 2016 to have "helped supercharge Leave.EU's social media campaign by ensuring the right messages are getting to the right voters online". It is now being investigated by the UK's Information Commissioner's Office as part of a broader inquiry into how voters' personal data is captured and exploited in political campaigns, though a Cambridge Analytica spokesman has denied the company played any part in the EU referendum campaign.

Still, if the worst is confirmed about the company's role in the US election – critics have asserted that its Trump victory claim is little more than PR spin – then Cambridge Analytica has mastered the arts of big data and psychometrics, rolling out a vast, surreptitious social media advertising campaign that appealed to the hopes and fears of the most granular personality types.

At Full Fact, which has already secured funding from Google, the team is seeking further backing for automation that will help to defuse disinformation. By the end of the year it hopes to publish two tools: one will show users how widely a claim has spread on the internet and where; the other will annotate claims with real-time pop-ups offering balanced conclusions. In this way, technology is being developed as an aid for real-life journalists who, despite the increasing employment of their automated counterparts, are likely to remain in demand for the foreseeable future.
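
As a hedged sketch of one ingredient such a tool might need – spotting where a claim has been repeated across articles – here is a minimal fuzzy-matching example. Full Fact's actual methods are not detailed here; the use of Python's SequenceMatcher, the 0.8 similarity cut-off and the example sites are illustrative assumptions only.

```python
# Illustrative sketch: find near-duplicate repetitions of a claim across
# a set of articles. SequenceMatcher and the 0.8 cut-off are assumptions
# for illustration, not Full Fact's actual matching method.
from difflib import SequenceMatcher

def claim_appears_in(claim: str, text: str, threshold: float = 0.8) -> bool:
    """Check each sentence of `text` for a close match to `claim`."""
    claim = claim.lower().strip()
    for sentence in text.lower().split("."):
        if SequenceMatcher(None, claim, sentence.strip()).ratio() >= threshold:
            return True
    return False

claim = "Crime has fallen by 40 percent since 2010"
articles = {  # hypothetical URLs and article bodies
    "site-a.example": "Ministers say crime has fallen by 40 percent since 2010.",
    "site-b.example": "A new report covers housing policy in detail.",
}
spread = [url for url, body in articles.items() if claim_appears_in(claim, body)]
print(f"Claim found on {len(spread)} site(s): {spread}")
```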

"You hear technologists say that 'in X number of years artificial intelligence will surpass human intelligence in various ways'. But there are some fundamental limitations. It can't necessarily spot a story or provide the analysis that a journalist can," says Neil Thurman, a professor at the University of Munich and City, University London who earlier this month published a report that weighed up the pros and cons of robo-journalism. "People will always want information to be filtered, which journalists and news organisations do. Then it's a question of who you trust and where you go to have your information filtered to be able to see the wood for the trees."

In an age where the subversion of facts has become accepted practice, the curation of truth – whether by humans, machines or both – has seldom been more valuable.
