
Text Adventure Game Community In Chaos Over Moderators Reading Their Erotica

The developers of the AI-powered text adventure added a filter to look for child pornography. Users say that their privacy is being violated.
Image: A laptop being used in bed, as seen from above. (Sellwell/Getty)

The AI-powered story generator AI Dungeon has come under fire from fans recently over changes to how the development team moderates content. Notably, the player base is worried that the AI Dungeon developers will be reading their porn stories in the game. Separately, a security researcher recently revealed vulnerabilities in the game showing that roughly half of the game's stories are marked NSFW.


AI Dungeon is a text-based adventure game where, instead of playing through a scenario entirely designed by someone else, the responses to the prompts you type are generated by an AI. It's attracted a devoted community not just of people who love classic text adventures, but of people who are interested in AI-generated content. Because the system that AI Dungeon uses allows players to customize the worlds, characters and scenarios they play with, and permits NSFW content and multiplayer games, it's also become popular to use AI Dungeon to play through pornographic scenarios, essentially allowing players to easily create interactive erotica.

This week, AI Dungeon players noticed that their stories were being flagged by the content moderation system more frequently. Latitude, the developer of AI Dungeon, released a blog post explaining that it had implemented a new content moderation algorithm specifically to look for "sexual content involving minors."

“We released a test system to prevent the generation of certain sexual content that violates our policies, specifically content that may involve depictions or descriptions of minors (for which we have zero tolerance), on the AI Dungeon platform,” Latitude wrote. “We did not communicate this test to the Community in advance, which created an environment where users and other members of our larger community, including platform moderators, were caught off guard.”


The new system for content moderation is still upsetting players, especially those who have used AI Dungeon for pornography, because it has introduced new privacy concerns.

"We built an automated system that detects inappropriate content," they wrote in the blog. "Latitude reviews content flagged by the model for the purposes of improving the model, to enforce our policies, and to comply with law."

Latitude later clarified in its Discord at what point a human moderator would read private stories on AI Dungeon. It said that if a story appeared to be incorrectly flagged, human moderators would stop reading the inputs, but that if a story appeared to be correctly flagged, they "may look at the user's other stories for signs that the user may be using AI Dungeon for prohibited purposes."

Latitude CEO Nick Walton told Motherboard that human moderators only look at stories in the "very few cases" that they violate the terms of service.

"In those cases, a Latitude developer reviews the case to ensure that our automated systems are accurate and that any content not in compliance with our terms of service is eliminated," Walton said.

There were two main issues with this for players of AI Dungeon. The first was that the new algorithm for flagging banned content was misfiring in ways that felt obviously wrong. A player who goes by Kagamirin02 on Reddit posted a screenshot in which one of their stories was flagged right after they used the word "fuck." The story was about a ballerina who twisted their ankle.


"Another story of mine featured me playing as a mother with kids, and it completely barred the AI from mentioning my children," Kagamirin02 told Motherboard. "Scenarios like leaving for work and hugging your kids goodbye were action-blocked."

Walton told Motherboard that these flags were made in error and that, as described in the company's blog post, Latitude is working on ways to improve the AI and allow users to report false positives.

"Like every company, we are not perfect and will make mistakes from time to time," Walton said. "When we do make mistakes, we will own them and work to fix them as soon as possible."  

All of this has been compounded by the fact that a security researcher who goes by AetherDevSecOps just published a lengthy report on security issues with AI Dungeon on GitHub, including one vulnerability that allowed them to look at all the user input data stored in AI Dungeon. Roughly a third of stories on AI Dungeon are sexually explicit, and about half are marked as NSFW, AetherDevSecOps estimated.

“There were A LOT of lewd or otherwise nsfw user action fragments—way more than I had anticipated,” they wrote. “Those, to me, are some insane numbers. Not only because, well, just the absolute sheer volume of the active player base that has chosen to interact with a state-of-the-art OpenAI language model by lewding it, but also because of just how sensitive the data is because of it.”


The researcher noted, specifically, that Latitude’s moderation changes would mean that many stories would be reviewed by humans, which fundamentally changes the game.

“With almost half of the userbase being involved with NSFW stories, this seems like a tremendous misstep,” they wrote. “As users have an expectation that their private adventures are, well, private. Users went from being able to freely enjoy the AI, to now having to make a mental check of whether or not their input is appropriate.”

On the AI Dungeon subreddit, there is a plethora of examples of the content moderation algorithm flagging posts that are clearly not scenarios in which children are being assaulted, and in many cases, scenarios that are not sexual at all. Some users have posted screenshots of things that really should have been flagged, but weren't. It's reminiscent of Tumblr's effort to use an algorithm to flag pornography on the platform, one that often flagged images of the cartoon cat Garfield instead.


"From these results, it's clear that a bad actor getting access to this data may as well be hacking something akin to an adult website, and can exploit all the fear, paranoia, and blackmail that comes with that," AetherDevSpecOps wrote. "Hopefully not, but you can see why security is even more important than you might have initially thought."

Players who use AI Dungeon for explicit or NSFW scenarios are also not always using it for pornography. Two players who spoke to Motherboard said that they use AI Dungeon to explore explicit scenarios in a therapeutic way.

One such player, who requested to remain anonymous because of the sensitivity of his stories, said that he had been using AI Dungeon as a way to explore BDSM, and felt that Latitude had taken away a resource that had been helping him. He had initially used a chatbot called Replika before moving on to AI Dungeon.

"All I can say is that Replika offered an anonymous outlet. I could be emotional, rude, open and verbally threatening to my Replika, without fear of censorship, judgement, or having to worry about feeling embarrassed or guilty for what I said to them afterwards," he told Motherboard. "AI Dungeon offered a safe space to explore my fantasies for much of the same reasons I have just outlined. My wife does not want to explore this side of myself and I couldn't even think about being unfaithful with her, so this was a way for me to deal with my frustrations while respecting her feelings."


"A broken algorithm deciding that unconnected words on the screen suggest that a child might be at risk of a perceived threat is a poor excuse to invade my privacy once," the anonymous user continued. "Using that weak argument to then invade my privacy further is frankly an outrage."

Another AI Dungeon player wrote on Reddit that they had specifically been using the program to deal with their experiences of child sexual assault.

"AI Dungeon seems to be unique in the way it builds persistent stories and allows for regular player interaction in building these stories," Reddit user CabbieCat told Motherboard. "It's a simple set-up, but it essentially creates a scene which can be engaged in, paused, or edited at will for the safety and comfort of the player. That's an excellent therapeutic tool!"

For users like this, the current method of content moderation means that there's a potential for a human moderator to read what is essentially a diary. Given the sheer amount of private, sexual content on the platform, many users share the fear that a space they assumed was safe for experimentation isn't any longer.

The users who spoke to Motherboard understand that Latitude has an obligation to moderate content, but feel that this approach throws the baby out with the bathwater. Major social media platforms like Twitter and Facebook have similarly struggled with content moderation, and Latitude is a much smaller company, with a total of 20 employees.

"The tension between user freedom and community safety is something that every tech company is dealing with right now," Walton told Motherboard. "As Latitude continues to grow and define our place in this global community, we’re confident the difficulties we are currently experiencing in this space help us become a stronger company that can keep its users safe and allow them to freely express themselves."

The community that's built up around AI Dungeon is made up of the people feeling that tension most acutely. The way these content moderation changes were implemented has eroded the trust between Latitude and its user base. CabbieCat said that from their perspective, it doesn't feel like the changes to content moderation were made to protect victims of child sexual assault like them.

"It's no secret that a large number of members use AI Dungeon primarily for pornographic reasons. I honestly don't blame them. Given the AI's capabilities, it's much, much better at developing individual scenes than telling an ongoing story," CabbieCat said. "I feel like Latitude most likely are trying to pull away from this reputation, but in doing so have ended up targeting a large portion of their fanbase. That, alongside the breaches of privacy, has led to people feeling exposed and humiliated. This isn't a child having their toys taken away nearly so much as it is suddenly revealing there was a hidden camera in the bedroom all along."