Researchers Apologize for Gaming Reddit with Fake Accounts
Apparently, it was pretty easy to do.
Image: Flickr/Blake Patterson
It should be pretty clear by now that Reddit, like Facebook, is a giant digital petri dish primed for experimentation. Even so, as demonstrated by a recent apology posted to Reddit by researchers who gamed the site, it's not always obvious when a platform is being toyed with.
On Sunday, researchers from consulting firm Thinkst took to Reddit to apologize for using fake accounts to game votes, as part of a study last year, on subreddits including /r/worldnews, which boasts more than 6 million subscribers, and /r/netsec, which has upwards of 130,000. Without this confession, the community might never have known the specifics of the deception.
"We had dutifully reported all of the bugs we found during the research and were pretty open and public about the results and report," Haroon Meer, one of the study's authors, told me in an email. "Reporting the results to the Reddit mods completely fell through the cracks."
Gaming sites like Reddit with fake votes or comments is done using phony accounts called "sock puppets." The goal of creating legions of such accounts is usually to shift the tone of discussion one way or another, to promote certain content, or both. Last year, a scientist used sock puppets to promote his own work on Reddit without anybody knowing, for example, and Russia is rumored to have its own political troll army trawling the comments of sites online.
The really insidious part about this whole thing is that you'd likely never know if a post you just voted on was gamed from the start.
The purpose of the research, which was funded by Radio Free Asia, was to investigate how fake accounts in news website comment sections and on voting-based sites like Reddit and Hacker News could influence discourse and what kind of content people see.
After creating fake Reddit accounts using web proxies and solving the CAPTCHA puzzles manually, the researchers report that they were able to push posts to the front page of /r/netsec with ease. They were less successful on /r/worldnews, owing to its higher user activity, but they were still able to downvote posts into obscurity—from 70 points to 30, for example.
"When suppressing a post with down-votes, it is best to start as soon as possible after a post is created, to have a good chance of dragging the post score below the user-visibility threshold as soon as possible," the researchers wrote in the study. "Once out of sight, the post is much less likely to attract organic upvotes."
According to the researchers, although they were operating 50 fake accounts on the /r/netsec subreddit, moderators identified only 20 of them—highlighting that it's not always obvious which accounts are being used to cheat.
Reddit did not respond to a request for comment.
Meer told me that their next step is to build a tool that can automatically detect fake accounts working in concert. That's good news, because apparently taking advantage of a site meant to encourage community-building is really easy to do, and pretty difficult to stop.
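The article doesn't describe how such a detection tool would work, but one common heuristic is to look for accounts whose voting histories overlap suspiciously—sock puppets controlled by one operator tend to vote on the same posts, in the same direction. Here's a minimal, purely illustrative sketch of that idea (the account names, vote logs, and similarity threshold are all hypothetical, not from the study):

```python
from itertools import combinations

def co_vote_similarity(votes):
    """Jaccard similarity of vote histories for every pair of accounts.

    votes: dict mapping account name -> set of (post_id, direction) events.
    Returns a dict mapping (account_a, account_b) -> similarity in [0, 1].
    """
    sims = {}
    for a, b in combinations(sorted(votes), 2):
        union = len(votes[a] | votes[b])
        sims[(a, b)] = len(votes[a] & votes[b]) / union if union else 0.0
    return sims

def flag_coordinated(votes, threshold=0.8):
    """Return accounts belonging to any pair whose overlap exceeds threshold."""
    sims = co_vote_similarity(votes)
    return sorted({acct for pair, s in sims.items() if s >= threshold
                   for acct in pair})

# Hypothetical vote logs: two puppets vote identically, one organic user differs.
votes = {
    "sock1": {("p1", -1), ("p2", -1), ("p3", -1)},
    "sock2": {("p1", -1), ("p2", -1), ("p3", -1)},
    "organic": {("p1", +1), ("p4", +1)},
}
print(flag_coordinated(votes))  # → ['sock1', 'sock2']
```

A real detector would need to account for legitimate overlap on popular posts—which is exactly why, as the /r/netsec moderators' 40 percent hit rate suggests, this problem is harder than it looks.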