Whether they're spreading porn, selling Trump memorabilia, or intimidating activists on a mass scale, bots have become an established part of Twitter. But making your own bot army isn't necessarily trivial: it can take time to manufacture a load of accounts, and making them believable enough to fool onlookers can be a pain.
How would you go about creating a Twitter armada? Although the process may not work exactly the same way today, one researcher found that by writing code that autonomously mimicked real users, and then spread to other targets, he could quickly generate accounts that could plausibly pass as real.
"I kept losing track which were the real accounts and which were mine after a while," security researcher Fionnbharr Davies told Motherboard in a Twitter message.
Back when Davies first tried out his experiment at the end of 2012, he avoided Twitter's own API, assuming it would be monitored for abuse. But it turned out that if you visited Twitter with a very old web browser like Internet Explorer 6, the site served a plain HTML version that was easy to script against. At the time, Twitter also required users to complete a CAPTCHA during account creation, so Davies spent $10 on a CAPTCHA-breaking service. To handle the wave of signups, some more code created email accounts with a number of free email services.
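The browser trick is simple to illustrate: a script just claims to be an ancient browser in its User-Agent header, hoping the site falls back to a plain-HTML interface. A minimal sketch in Python (the URL is a placeholder, and Twitter no longer serves this fallback; this is the general technique, not Davies' actual code):

```python
# Sketch: request a page while presenting an Internet Explorer 6
# User-Agent, so a site that sniffs browsers may serve its plain-HTML
# fallback, which is far easier to parse with a script than a
# JavaScript-heavy page.
from urllib.request import Request, urlopen

IE6_UA = "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)"

def fetch_as_ie6(url: str) -> str:
    """Fetch a page, claiming to be Internet Explorer 6."""
    req = Request(url, headers={"User-Agent": IE6_UA})
    with urlopen(req, timeout=10) as resp:
        return resp.read().decode("utf-8", errors="replace")
```

The same header-spoofing idea works with any HTTP client; all the server sees is the User-Agent string.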
It's not totally clear whether this exact methodology would work today—there's a good chance Twitter's anti-fraud mechanisms have changed since 2012—but the fundamental idea of the experiment stands. Davies' trick is to start with a patient zero: an original target account that his script clones by signing up to Twitter with a similar name, grabbing all the original pictures and location data it can, and then tweeting whenever the original account tweets.
And it spreads: whenever patient zero tweets at or mentions another user, the script clones that account too.
"And so it grows organically as people naturally talk to each other," Davies explained in a recent blog post.
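That growth mechanic amounts to a breadth-first crawl over who-mentions-whom. Here's a toy simulation of the idea (the handles, timeline data, and function names are all invented for illustration; real cloning would hit Twitter itself rather than a dictionary):

```python
# Toy model of the spread Davies describes: start from one "patient
# zero" and, every time a cloned account's original mentions another
# user, clone that user as well, up to a size limit.
import re
from collections import deque

def extract_mentions(tweet: str) -> list[str]:
    """Pull @handles out of a tweet's text."""
    return re.findall(r"@(\w+)", tweet)

def grow_botnet(patient_zero: str, timeline: dict[str, list[str]],
                limit: int = 3000) -> set[str]:
    """Breadth-first 'clone' of everyone the cloned accounts talk to."""
    cloned, queue = set(), deque([patient_zero])
    while queue and len(cloned) < limit:
        user = queue.popleft()
        if user in cloned:
            continue
        cloned.add(user)  # stand-in for registering a lookalike account
        for tweet in timeline.get(user, []):
            for mention in extract_mentions(tweet):
                if mention not in cloned:
                    queue.append(mention)
    return cloned

# alice mentions bob, bob mentions carol, so all three get "cloned".
timeline = {
    "alice": ["hey @bob did you see this"],
    "bob": ["@carol lol yes"],
}
print(sorted(grow_botnet("alice", timeline)))  # ['alice', 'bob', 'carol']
```

The `limit` parameter plays the role of Davies telling the botnet to stop growing once it hit a few thousand accounts.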
After some initial testing, Davies says he deployed the script against a group that tweeted constantly: Beliebers. Soon, his web of fake users had allegedly reached around 3,000 accounts before he told the botnet to stop growing. After a couple of months, presumably when Twitter spotted the fake accounts, he says the total number fell.
"About 1,000 were still around last I checked at the start of the year, so they've been alive for a while," Davies said. Motherboard verified that at least some of the accounts Davies created are still active at the time of writing.
One way of spotting whether an account is likely a bot is its creation date; is this user only a few days old? But Davies' approach theoretically gets around that problem: left running for long enough, the script builds an army of accounts of varying ages, making the bots that much harder to identify.
All of this may have only been for research purposes, but bots and trolls have gained more political weight since Davies' experiment. The Russian government deploys its own disruptors on social media to spread misinformation, albeit, it appears, in a more manual fashion. But it's easy to see the attraction of automatically building a bot army.
"I've looked into it a couple of times over the years and they've made a few changes but it wouldn't be hard to get it all going again," Davies told Motherboard.
A Twitter spokesperson told Motherboard in an email that "Twitter takes fighting spam seriously, and we want our users to enjoy the service without being concerned about spam."
"It's going to get wild once chat AI starts to get better," Davies said.
Update: This piece has been updated to include comment from Twitter.