
Someone’s Targeting Cosplayers for Fake Non-Consensual Porn on Twitter

Twitter removed an account dedicated to spreading fake nudes of cosplayers, but it's a longstanding problem for the community.
A cosplayer at Bubble Comics Con 2021. Getty Images

People on the internet have been editing Maridah's face onto images of porn scenes for 10 years.

The first time it happened, in 2012, she found her photos being used on a forum where users could request custom, fake porn. Maridah is a cosplayer, and has a lot of photos of herself online, as well as a decently large following for her cosplay creations. Her roommates helped her draft a letter to the website’s host to have the images pulled, but people kept doing this throughout the next decade: taking her photos and Photoshopping them onto pictures of professional porn performers.


“When this first happened years ago, this situation was pure horror for me,” Maridah told me, noting that she respects sex work as a career, but doesn’t do it herself. “At times, it was an ordeal, especially when working as a social media and community manager. It was stressful to go to HR and bosses, hoping they would understand what I was going through and not want to fire me because someone decided to make fake porn of me. I still worry about it potentially harming me when applying for jobs. Not everyone is internet savvy and knows to question the reality of sometimes very clearly faked photos.” 

Sometimes, these images spill out from specialty forums onto mainstream social media. Most recently, someone started sharing fake explicit images of her and other cosplayers on a Twitter account with the name “Celeb Fakes/Cosplay Clapper” and the handle @fakesgod2. The account had 1,452 followers, and mainly targeted very popular femme cosplayers, posting photos of them in sexual positions, having sex with men, or with penises and semen on their faces. The captions to the images played out a fantasy of having taken these photos at cosplay conventions or from the cosplayers’ Snapchats. Maridah said she found the account when it started following her on Twitter; she tweeted a warning to the cosplay community to beware of the account and to report it if they saw it.


The account was removed after Motherboard contacted Twitter for comment; Twitter did not respond to the request.

When non-consensual images spread from a niche forum to the wider internet, the people targeted are exposed to a vastly larger audience, jeopardizing their livelihoods and opening them up to even worse harassment, online and off. In the four years since deepfakes, AI-generated videos that insert someone’s face into pornography without their consent, first appeared, platforms have tried to combat the problem with new community guidelines and terms of use that prohibit synthetic media and take a harder line on so-called “revenge porn.” But these systems are largely automated, and they often fail to catch accounts in blatant violation of a site’s rules until the accounts are directly flagged by victims or the press.

Multiple people on Twitter said they’d reported the account to no avail; several posted screenshots of the response they received from Twitter, claiming that the account hadn’t broken any rules. Twitter’s terms of service forbid “images or videos that superimpose or otherwise digitally manipulate an individual’s face onto another person’s nude body” as part of the platform’s non-consensual nudity policy. 

“My theory is that this happens because the reporting system is heavily automated. It overlooks harmful content if you don't select just the right menu option when reporting it,” Maridah said. She also reported the account, and received the same response from Twitter: that the account was not in violation. She called their reporting system “woefully inadequate” on this front.

“There needs to be a specific section in the reporting menu for deepfakes, revenge porn, and the like to be flagged. Right now, the best you can do is report a post for targeted harassment. If the post doesn't include threats of direct harm or slurs, the automated system seems to reject it.” 

The reporting process for individual tweets walks users through two or three screens of multiple-choice options, starting with whether the tweet is something you’re not interested in, abusive or harmful, suspicious or spam, misleading, or expressing intentions of self-harm or suicide. Each option brings up a different set of choices; selecting “abusive or harmful” asks whether the tweet is disrespectful or offensive, includes private information, includes targeted harassment, directs hate against a protected category, threatens violence, or encourages self-harm. One could reasonably categorize non-consensual fake porn as both disrespectful and targeted harassment, and, depending on the target, as hateful toward a protected category.

If you choose “it’s disrespectful or offensive,” you’re sent to a page that says “We understand that you may not want to see every Tweet, and we're sorry you saw something on Twitter that offended you” and only given the option to block or mute that account; this implies that the report doesn’t even make it to Twitter’s safety team for review. 

“I'm motivated to make as much of a stink about this issue as I can because this causes harm to numerous people,” Maridah said. “I would love for it not to be so hard for others in the future. It should be easy to have non-consensual and fake nudity removed from Twitter or elsewhere. The bare minimum social media companies can do is streamline reporting it.”