
'Nothing, Forever', Banned for Transphobic Jokes, Isn't Done Yet

"We very much regret what happened," said one of the show's creators, "and hope that our new guardrails and safety mechanisms will prevent this from happening in the future."
Screengrab: Twitch/@watchmeforever

Since it took off last month, thousands of viewers have tuned in to Nothing, Forever, an endless AI-generated Seinfeld parody that streams 24/7 on Twitch. The show has an uncanny charm: low-resolution 3D renderings of its four main characters, Larry, Yvonne, Fred, and Kakler, scuttle about awkwardly, clipping through furniture as they mumble nonsensical AI-written punchlines that never quite connect.


The show launched in December of last year and had just 16 viewers when Motherboard first wrote about it in January. Since then, it’s gone viral. There’s an easy familiarity to Nothing, Forever’s repetitiveness, much like Seinfeld itself: the show’s AI cast iterates on the same stand-up bits and conversations about fictitious restaurants in a soothing, seemingly unending riff. To some observers, the show looked like a harbinger of a fast-approaching future of AI-generated media.

The laugh track stopped at around 3:00 AM ET on February 6, when, without warning, Jerry Seinfeld stand-in Larry Feinberg launched into a bigoted tirade in the middle of a stand-up set. Within thirty minutes, Twitch banned Nothing, Forever, and the show about the show about nothing came to a silent halt.

According to Nothing, Forever’s creators, in the hours leading up to Larry’s outburst, the show’s AI language model—Davinci, the latest and most capable version of OpenAI’s GPT-3—suddenly began to behave abnormally, producing strange scenes in which the show’s characters were completely absent. In an attempt to keep the show running without having to go offline for maintenance, the showrunners decided to switch to Davinci’s predecessor, Curie. Just a few minutes later, Larry’s transphobic, homophobic rant began.
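The show’s pipeline isn’t public, but in OpenAI’s API the swap the creators describe amounts to changing one model identifier in the completion request. Here is a minimal sketch of what such a fallback might look like, assuming the pre-1.0 openai Python library that was current at the time; the prompt and fallback logic are illustrative, not the show’s actual code:

```python
import openai

openai.api_key = "sk-..."  # hypothetical placeholder

PRIMARY = "text-davinci-003"  # GPT-3's most capable completion model
FALLBACK = "text-curie-001"   # older, cheaper, less steerable predecessor

def generate_scene(prompt: str, model: str = PRIMARY) -> str:
    """Ask a GPT-3 completion model for the next chunk of dialogue."""
    response = openai.Completion.create(
        model=model,
        prompt=prompt,
        max_tokens=300,
    )
    return response["choices"][0]["text"]

try:
    scene = generate_scene("INT. COMEDY CLUB - NIGHT\nLARRY:")
except openai.error.OpenAIError:
    # If Davinci errors out or misbehaves, fall back to Curie --
    # the same kind of swap that preceded Larry's rant.
    scene = generate_scene("INT. COMEDY CLUB - NIGHT\nLARRY:", model=FALLBACK)
```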


“We've run Davinci quite a bit now, for weeks at a time, and not had it generate content that was inappropriate, so it's hard to say [if a similar situation would be possible],” Skyler Hartle, one of the show’s creators, said in an email. “With that said, we're now working to implement new safety measures, so that on the off chance the Davinci model does generate inappropriate or offensive content, we catch it.”

“We very much regret what happened—the remarks and inappropriate content in no way represent the views of our staff—and hope that our new guardrails and safety mechanisms will prevent this from happening in the future,” they added. 

This isn’t the first time an AI entertainer has unexpectedly pivoted to bigotry. Microsoft’s notorious Tay chatbot was shut down within a day of its 2016 launch after users taught it to parrot hate speech. And just last month, the AI VTuber Neuro-sama was temporarily banned from Twitch when she suddenly took up Holocaust denial. AI models have a nasty habit of reflecting biases and prejudices because they are trained on huge amounts of text scraped from the internet. OpenAI uses moderation filters to try to keep ChatGPT from saying anything harmful, and people are constantly trying to “jailbreak” the model to get around them.
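OpenAI also exposes that safety layer as a standalone moderation endpoint, which is roughly the kind of guardrail the show’s creators say they are adding. A minimal sketch, again assuming the pre-1.0 openai Python library; the sample line and suppression behavior are hypothetical:

```python
import openai

openai.api_key = "sk-..."  # hypothetical placeholder

def is_safe(line: str) -> bool:
    """Return False if OpenAI's moderation endpoint flags the text."""
    result = openai.Moderation.create(input=line)
    return not result["results"][0]["flagged"]

line = "LARRY: So what's the deal with airline food?"
if is_safe(line):
    print(line)  # safe to pass along to the stream
else:
    print("[line suppressed; regenerating]")  # catch it before it airs
```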


“These models lack the ability to generate something that has never been said before, so as a result, anything you hear was most likely said at least once by somebody, somewhere,” Hartle said. “We have the ability to massage these models, through a parameter called temperature, that induces randomness. This can often be the culprit when inappropriate or inflammatory remarks happen.”

Hartle went on to say that “we are going to try to preserve the unpredictability and generative nature of the show, but ideally remove the possibility of inappropriate remarks being made.”
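The temperature Hartle mentions is an ordinary request parameter: near zero, the model sticks to its most probable phrasings, while higher values buy the show’s unpredictability at the cost of stranger, riskier output. A minimal sketch of that trade-off, with a hypothetical prompt and the same pre-1.0 openai library:

```python
import openai

openai.api_key = "sk-..."  # hypothetical placeholder

prompt = "LARRY: I went to a restaurant last night.\nPUNCHLINE:"

for temperature in (0.2, 0.7, 1.2):
    response = openai.Completion.create(
        model="text-curie-001",
        prompt=prompt,
        max_tokens=60,
        temperature=temperature,  # 0 = near-deterministic; higher = more random
    )
    print(temperature, response["choices"][0]["text"].strip())
```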

While it’s tempting to chalk up Larry’s bigotry to a software malfunction, in a sense the Curie model was working as intended. As Hartle’s remarks indicate, the AI simply remixes and regurgitates human sentences and sentiments, surfacing random bits of language that it judges to be contextually appropriate. It’s a bit like holding up a funhouse mirror to our culture, or at least to the internet, and the reflection can be unnerving and unflattering.

It’s hardly surprising that an AI trained on the internet managed to absorb transphobic sentiments—it’d be more surprising if it had somehow managed not to. The AI’s jokes were so on the nose that many in the show’s Discord server felt that they played satirically, as a commentary on lazy, shock-value comedy. Notably, the AI audience didn’t laugh at any of the jokes, and Larry himself remarked that his jokes weren’t going over well. Still, many found them offensive. 

Hartle previously told Motherboard that Nothing, Forever is a step toward what they see as the future of media, and that their next goal is to create a Netflix-quality AI-generated show. In the ongoing debate over the ethics of AI art, the potential to unexpectedly expose a massive online audience to hate speech is a heavy risk to weigh. Despite this incident, the show’s creators clearly still see value in their product.

“We expect that AI safety and content moderation will continue to get better and better, and that people will be able to consume this content without fear of something inappropriate being said,” Hartle said.

“We know people want to see it return to the air, but we want to do so as safe as possible,” they added.