
Captcha Is Asking Users to Identify Objects That Don't Exist

Discord's captcha asked users to identify a 'Yoko,' a snail-like object that does not exist and was created by AI.

People trying to use Discord are being asked to identify an object that does not exist. The object in question is a “Yoko,” which appears to be a cross between a snail and a yo-yo. Multiple people have reported being asked to identify a Yoko while solving a simple captcha to access Discord.

The picture of the Yoko and the other images in the captcha appear to be AI generated. Another user complained on Twitter that they’d failed to pass a captcha to log into Discord when it asked them to identify images of a puzzle cube. Again, the pictures appeared to be AI generated.


Discord’s captchas are run by a company called hCaptcha. “The technology that generates these prompts is proprietary to our third-party partner and Discord does not directly determine what is presented to users,” Discord told Motherboard.

“While most hCaptcha interactions do not result in a visual challenge, many variants are used at any given time,” a spokesperson for hCaptcha told Motherboard. “This particular question was a brief test seen by a small number of people, but the sheer scale of hCaptcha (hundreds of millions of users) means that when even a few folks are surprised by a challenge this often produces some tweets.”

hCaptcha bills itself as a privacy-focused alternative to the ubiquitous reCAPTCHA. According to a 2018 blog post about how it works, hCaptcha’s prompts are generated by clients looking for “high quality human annotations for their machine learning needs.”

hCaptcha makes money both from clients like Discord, which purchase pro and enterprise plans to run captcha services, and from the clients that generate the prompts. hCaptcha uses the captchas to help train machine learning systems and generative adversarial networks. This isn’t the first time people have noticed weird AI images popping up in hCaptcha services, and it won’t be the last.

Two months ago, a Redditor noticed Discord was asking them to distinguish AI-generated soccer players amid a group of pictures of people playing hockey and golf. In March, another Redditor complained that the Discord captchas had become unintelligible. “Like ‘Choose the robot’ where none of the images look human to begin with,” FunnerThanUsual said on the Discord subreddit. “Then I never get the email to validate my account even after jumping through hoops. Seriously, blocking bots is fine, trying to train AI with shitty rendered captcha images is another thing entirely.”

hCaptcha’s strange AI-generated prompts highlight two problems with machine learning systems. The first is that these systems require an enormous amount of human input to not be terrible; image labeling is typically outsourced to foreign workers who do it for pennies. The second is data drift. The longer these machine learning systems run, the more input they require, and inevitably they begin to train on data they’ve generated themselves. Systems that train on their own output long enough become AI Hapsburgs, churning out requests to identify incomprehensible objects like “Yokos.”
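
To see why that feedback loop degrades a model, here is a minimal toy sketch in Python, not hCaptcha’s actual pipeline and with entirely made-up numbers: a “model” that is just a fitted mean and spread is retrained each generation only on samples drawn from the previous generation’s output. With no fresh human-labeled data, the estimates wander and the spread collapses.

    # Toy illustration only (not hCaptcha's real system): a model repeatedly
    # fit to its own generated samples drifts away from the real data.
    import numpy as np

    rng = np.random.default_rng(0)
    real_data = rng.normal(loc=0.0, scale=1.0, size=1_000)  # stand-in for real, human-labeled data

    mu, sigma = real_data.mean(), real_data.std()
    for generation in range(1, 11):
        # Train the next "model" only on the previous model's outputs.
        synthetic = rng.normal(loc=mu, scale=sigma, size=200)
        mu, sigma = synthetic.mean(), synthetic.std()
        print(f"gen {generation:2d}: mean={mu:+.3f}  std={sigma:.3f}")

    # With small samples and no new real data, the mean drifts and the spread
    # shrinks generation after generation: the statistical version of a "Yoko."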