Reddit Moderators Brace for a ChatGPT Spam Apocalypse

Reddit moderators say they already see an increase in spam and that the future will “require a lot of human labor.”
Image: Avishek Das/SOPA Images/LightRocket via Getty Images

In December last year, the moderators of the popular r/AskHistorians Reddit forum noticed posts popping up that appeared to carry the hallmarks of AI-generated text. 

“They were pretty easy to spot,” said Sarah Gilbert, one of the forum’s moderators and a postdoctoral associate at Cornell University. “They're not in-depth, they're not comprehensive, and they often contain false information.” The team quickly realized their little corner of the internet had become a target for ChatGPT-created content. 

When ChatGPT launched last year, it set off a seemingly never-ending carousel of hype. According to evangelists, the tech behind ChatGPT may eradicate hundreds of millions of jobs, exhibit “sparks” of singularity-esque artificial general intelligence, and quite possibly destroy the world, but in a way that means you must buy it right now. The less glamorous impacts, like unleashing a tidal wave of AI-produced effluvium on the internet, haven’t garnered the same attention so far. 

The two-million-strong AskHistorians forum allows non-expert Redditors to submit questions about history topics, and receive in-depth answers from historians. Recent popular posts have probed the hive mind on whether the stress of being “on time” is a modern concept; what a medieval scribe would’ve done if the monastery cat left an inky paw print on their vellum; and how Genghis Khan got fiber in his diet. 

Shortly after ChatGPT launched, the forum was seeing five to ten ChatGPT posts per day, says Gilbert, a rate that ramped up as more people found out about the tool. The frequency has since tapered off, which the team believes may be a consequence of how rigorously they’ve dealt with AI-produced content: even when posts aren’t deleted specifically for being written by ChatGPT, they tend to violate the sub’s standards for quality.

The moderators suspect some ChatGPT posts are aimed at “testing” the mods, or seeing what the user can get away with. Other comments are clearly part of astroturfing and spamming campaigns, or engaged in “karma farming,” where accounts are set up to accumulate upvotes over time, giving them the appearance of being authentic, so that they can be deployed for more nefarious purposes later on.

But it's not just one well-moderated forum encountering the issue. Reddit’s ChatGPT-powered bot problem is “pretty bad” right now, according to a Reddit moderator with knowledge of the platform's wider moderation systems, who wished to remain anonymous. Several hundred accounts have already been removed from the site, and more are being discovered daily, they said, adding that most of the removals are being done manually because Reddit’s automated systems struggle with AI-created content. Reddit declined to comment.

In February, AskHistorians and several other subreddits were hit by a coordinated bot attack using ChatGPT. The bots were caught feeding questions asked on AskHistorians into ChatGPT and then posting the responses through an army of shill accounts, says Gilbert. The same botnet hit many of the other “ask” subs, including r/AskWomen, r/AskEconomics, and r/AskPhilosophy.

Identifying that the bot’s spammy answers were produced with ChatGPT wasn’t the problem; it was “that they were coming in so fast and so quick,” says Gilbert. At the height of the attack, which stretched over three days, the forum was banning 75 accounts per day. Although the moderators can’t be sure of the purpose of the attack, they did notice a couple of posts promoting a video game.

A recent Reddit transparency report highlighted the huge problem of spam and 'astroturfing' accounts, fake accounts set up for the purpose of promoting a product. Generative AI like ChatGPT could hugely exacerbate the issue: where astroturfing used to rely on the same copy-pasted text being shared by many different accounts, the likes of ChatGPT can now create entirely novel spam posts at the touch of a button.

“The bot problem was already extremely bad and Reddit's automatic anti-spam systems barely help, and by the time they do, it's too late and the bot's existence has generally served its purpose,” commented u/abrownn, who moderates r/Technology, one of Reddit's biggest forums with over 14 million subscribers. 

“Bots on Reddit are overwhelmingly used for simple advertising purposes, not political manipulation like everyone likes to claim. Most things advertised by these bot accounts are adult oriented: marijuana/Delta8 ads, porn, gambling, or they're sold or operated to mass-advertise drop shipped goods, most of which are credit-card skimming scams or deliver different goods than ordered or never deliver at all.”

In addition to r/AskHistorians, subs including r/AskPhilosophy, r/AskEconomics, and r/Cybersecurity said that they were experiencing problems with ChatGPT, but at a manageable frequency right now. “ChatGPT has a style that's fairly easy to identify, but the real test is the quality, and it appears that ChatGPT is very bad at philosophy,” said a moderator from AskPhilosophy. 

But with regard to the bot attack, "it's only a matter of time before someone else tries it, and presumably they're going to get better at evading our quality control efforts," says the AskPhilosophy moderator. They believe ChatGPT comments have now become relatively uncommon on the forum. "Either that, or they're getting better at fooling us."

An r/Cybersecurity moderator says the sub has a good detection rate for catching ChatGPT content when it’s used explicitly for marketing. However, karma farming, where fake accounts masquerade as real ones, presents a trickier issue. “User reports will occasionally catch these, but our own moderation tools are frankly useless, and we have no idea what percentage of inauthentic content we're catching in this category currently,” they said. As a result, “Our problem isn't necessarily ‘what we've found so far’ but ‘what we've missed’.”

Regardless of whether it’s a serious issue right now, most subs are bracing themselves for the future, especially if large language models like GPT-4 get better at crafting human-sounding content.

A study of OpenAI’s GPT-3 and GPT-2 XL found that humans struggled to reliably spot AI-produced text. The study was carried out before the current wave of generative AI hype, when most people weren’t yet sure how to identify AI-generated writing. “Machine generated text tends to be very fluent, very grammatical and very coherent, but […] it gets sidetracked easily and says a lot of irrelevant things,” said Liam Dugan, a PhD student at the University of Pennsylvania and the paper’s lead author.

“Humans go into [an AI-text detection task] expecting surface level errors and misspellings or ungrammatical sentences, where in actuality, what they really should be looking for is, is this factual? Is this common sense? Is this relevant to what the post was talking about?”  

Tools like GPTZero analyze text to predict whether it was written by a large language model, but they’re not infallible. To complicate matters, two recent papers show that running ChatGPT-generated text through paraphrasing models, which jumble up the wording, significantly undermines today’s AI text detectors.
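Detectors of this kind generally work by scoring how statistically predictable a passage is under a reference language model: machine-generated text tends to sit at suspiciously low perplexity. The following Python sketch illustrates that general idea only, not GPTZero’s actual implementation; the choice of GPT-2 as the scoring model and the `AI_THRESHOLD` cutoff are assumptions picked purely for illustration.

```python
# A minimal, hypothetical sketch of perplexity-based AI-text detection.
# Assumes the Hugging Face `transformers` and `torch` packages are installed.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Score how predictable a passage is under GPT-2.

    Machine-generated text tends to be fluent and predictable (low
    perplexity); human writing is usually burstier (higher perplexity).
    """
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
    with torch.no_grad():
        # Using the input as its own label makes the model return the
        # average next-token cross-entropy loss over the passage.
        loss = model(enc.input_ids, labels=enc.input_ids).loss
    return torch.exp(loss).item()

# Threshold is purely illustrative; real detectors combine many signals.
AI_THRESHOLD = 30.0

post = "The Treaty of Westphalia, signed in 1648, ended the Thirty Years' War."
score = perplexity(post)
print(f"perplexity={score:.1f}",
      "-> flag as likely AI" if score < AI_THRESHOLD else "-> likely human")
```

Paraphrasing attacks succeed against this style of check precisely because rewording pushes the text back toward the higher-perplexity range that the score treats as human.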

Reddit is working on AI-detection tools for forums that want to root out this kind of content, a Reddit employee told Gilbert and the rest of the team.

For now though, the job is falling mostly to moderators. “It's going to require a lot of human labor, which is no fun,” said Gilbert. “We all do this as a volunteer activity.” But Reddit, along with other social media platforms, has a huge incentive to get a handle on this now, before the problem gets worse. “They want people to read their ads, right?” pointed out Gilbert. “It’s not like [Google’s AI chatbot] Bard is going to buy anything.” 

Whether the problem can be fought in a meaningful way might be decisive in whether social media continues to exist in its current form. “I think a lot of claims about ‘GPT will revolutionize [whatever]’ are bullshit,” said the r/Cybersecurity moderator, “but I'd bet the farm that traditional social media has a finite lifespan, largely because inauthentic content is becoming so realistic and cheap to make that we're going to struggle to find who's real and who's a bot.”