Bots and their cousins—botnets, bot armies, sockpuppets, fake accounts, sybils, automated trolls, influence networks—are a dominant new force in public discourse. You may have heard that bots can be used to threaten activists, swing elections, and even engage in conversation with the President. Bots are the hip new media; Silicon Valley has marketed the chatbot as the next technological step after the app. Donald Trump himself has said he wouldn't have won last November without Twitter, where, researchers found, bots massively amplified his support on the platform.

Scholars have estimated that nearly 50 million Twitter accounts are actually run by bot software. On Facebook, social bots—accounts run by automated software that mimic real users or push particular streams of information—can be used to automate group pages and spread political advertisements. Recent public revelations from Facebook reveal that a Russian "troll farm" with close ties to the Kremlin spent around $100,000 on ads ahead of the 2016 US election and produced thousands of organic posts that spread across Facebook and Instagram. The same firm, the Internet Research Agency, is known to make widespread use of bots in its attempts to manipulate public opinion over social media.

Despite the fervor over the political use of bots during several recent global elections, including last year's US elections, the term "bot," like "fake news," remains ambiguous. It is now sometimes used to refer to any social media persona producing content with which others disagree.
What's a Bot and What's It Do?
- Sockpuppets (part-human/part-bot or, simply, cyborgs) initiate the conversation, seeding new ideas and driving discussion in online communities.
- The ideas are then amplified by as many as tens of thousands of automated accounts that we call "amplifier bots," repurposing, retweeting, and republishing the same language.
- "Approval bots" engage with specific tweets or comments, "Liking," "retweeting," or "replying" to enhance credibility, and give legitimacy to an idea.
- In hotly contested topic areas, bots are often used to harass and attack individuals and organizations in an attempt to push them out of the conversation.
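The role taxonomy above can be sketched as a crude, behavior-based classifier. Everything below is illustrative: the per-account activity counts are hypothetical, and the threshold ratios are assumptions chosen for the example, not empirical findings.

```python
# A minimal sketch of role-based account classification.
# Activity counts and thresholds are purely illustrative assumptions.

def classify_role(account):
    """Assign a rough campaign role from an account's activity mix."""
    total = account["original"] + account["retweets"] + account["likes"]
    if total == 0:
        return "inactive"
    if account["original"] / total > 0.5:
        return "sockpuppet"       # mostly seeds new content
    if account["retweets"] / total > 0.7:
        return "amplifier bot"    # mostly rebroadcasts others' messages
    if account["likes"] / total > 0.7:
        return "approval bot"     # mostly endorses to lend legitimacy
    return "mixed"

# Hypothetical accounts: a content seeder, an amplifier, an approver.
accounts = [
    {"original": 80, "retweets": 10, "likes": 10},
    {"original": 2,  "retweets": 90, "likes": 8},
    {"original": 1,  "retweets": 5,  "likes": 94},
]
for account in accounts:
    print(classify_role(account))
```

Real bot-detection systems use far richer signals (posting cadence, account age, network position), but the basic idea is the same: each role leaves a distinctive behavioral fingerprint.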
Data for Democracy researchers Kris Shaffer, Ben Starling, and C.E. Carey noted this phenomenon after French president Emmanuel Macron's campaign emails were hacked just days before the French elections. A group of users organized in the "/pol/" channel of the anonymous internet community 4chan. Shaffer, Starling, and Carey tracked how these catalyst users designed a campaign intended to disseminate the hacked Macron campaign data to a more mainstream audience on Twitter.

Once the content had been posted to Twitter, it was quickly amplified by high-follower accounts like @DisobedientMedia and @JackPosobiec, who, with over 230,000 followers, function as what the researchers call "signal boosters." Ben Nimmo, information defense fellow at the Atlantic Council (and a co-author on this article), has broadly outlined the same roles as shepherds and obedient sheepdogs.

However they are labeled, these roles are consistent from campaign to campaign: someone crafts a message and strategy, and automated accounts and fake personae are used to make it trend. Once trending, a meme easily attracts attention from more social media users until it spreads organically into the mainstream and the media. Recent research shows that bots can also be built to target particular human users who might be more likely to engage with and share a given piece of propaganda. The authors argue that automated agents can be used to target users with particular private views or network positions in order to spread, or curtail, fake news.
The Future of Bots
But the same technology that allows machines to understand and communicate in human language can and will be used to make social media bots even more believable. Instead of short, simple messages blasted out to whoever will listen, these bots will carry on coherent conversations with other users, with infinite patience to debate and persuade. They can also be programmed to collect, store, and use information gleaned from these conversations in later exchanges.

Text communication is just the beginning. University of Washington researchers recently created an eerily believable fake video of Barack Obama giving a speech, proving that it is possible to generate audio and video that passes as human. Similarly, it is increasingly possible to create audio and video of entirely fake events. The technology that allows bots to engage in credible dialogue will eventually be married to the ability to produce audio and video. Humans already have trouble recognizing relatively naive text bots masquerading as real users; these advances in machine learning will give propagandists even more power to manipulate the public. Reality, at least in the digital space, will increasingly be up for grabs. That uncertainty is, in and of itself, a victory for the propagandist. Civil society, however, can and must work to generate sensible responses to this problem.
How to Respond to the (Ro)bots
- Label bots as automated accounts. This is technically achievable and would increase transparency in online political conversation.
- Share data for network analysis with researchers. This would shed light not only on who operates bots and how their accounts are connected, but also on how bot messages spread through a social network.
- Enforce algorithmic "shadow bans," in which accounts are not removed from the platform but their activity is hidden from other users. Such bans could silently minimize the reach of suspected accounts, though they will not address the core issue of authenticity.