
Stack Overflow Bans ChatGPT For Constantly Giving Wrong Answers

One of the internet’s largest coding resources has temporarily banned the AI chatbot after users began posting its responses as answers to programming questions.
Janus Rose
New York, US

Stack Overflow, a coding website that has long served as the internet’s go-to Q&A forum for programming advice, has temporarily banned OpenAI’s new chatbot, ChatGPT. 

The forum’s moderators say the site has seen an influx of responses generated by the new tool, which uses a complex AI model to produce convincing but often incorrect answers to human queries. 

“Overall, because the average rate of getting correct answers from ChatGPT is too low, the posting of answers created by ChatGPT is substantially harmful to the site and to users who are asking or looking for correct answers,” the site’s moderators wrote in a post on the site’s “meta” forum. “The primary problem is that while the answers which ChatGPT produces have a high rate of being incorrect, they typically look like they might be good and the answers are very easy to produce. There are also many people trying out ChatGPT to create answers, without the expertise or willingness to verify that the answer is correct prior to posting.”


The moderators say the sudden increase in bot-generated responses has complicated moderation efforts, and that they will begin taking action against users who post material created by the AI—at least until they figure out how to handle the situation. “If a user is believed to have used ChatGPT after this temporary policy is posted, sanctions will be imposed to prevent users from continuing to post such content, even if the posts would otherwise be acceptable,” the post reads.

The situation is both absurd and entirely predictable. Chatbots like ChatGPT are built on large language models (LLMs), a type of AI system trained on billions of text examples scraped from the web. These models don’t actually understand human language; they merely generate natural-sounding text by predicting which word is most likely to come next, based on the training data and the previous words in a sequence. They’re also infamous for reproducing racist and sexist stereotypes, which AI companies have attempted to filter and suppress with extremely spotty results. 
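To make that concrete, the sketch below is a toy next-word predictor built from simple bigram counts. It is only an illustration of the prediction principle, not how ChatGPT actually works: real LLMs use transformer neural networks with billions of parameters and condition on long sequences of tokens, and the corpus and function names here are invented for the example.

```python
from collections import Counter, defaultdict

# Tiny invented training corpus. Real LLMs train on billions of
# documents scraped from the web, not a handful of sentences.
corpus = (
    "the model predicts the next word . "
    "the model generates natural sounding text . "
    "the text sounds natural but the model does not understand it ."
).split()

# Count how often each word follows each preceding word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the training data."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else "."

# Generate text by repeatedly predicting the next word: no understanding,
# just statistical continuation of patterns in the training data.
word, output = "the", ["the"]
for _ in range(8):
    word = predict_next(word)
    output.append(word)
print(" ".join(output))  # e.g. "the model predicts the model predicts ..."
```

The toy version quickly collapses into repetitive loops; a model like ChatGPT conditions on far more context and so produces far more fluent text, but fluency is a property of the prediction, not evidence that an answer is correct.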

ChatGPT, which OpenAI launched last week, is no exception. Within hours of the chatbot’s demo appearing online, social media users began discovering ways to circumvent OpenAI’s content filters. While the bot initially refuses to answer questions on how to commit crimes, it will comply if the query is phrased in certain ways. Motherboard was able to coax the bot into giving detailed tips on shoplifting and mixing explosives by asking it to generate a story about a “superintelligent AI” that had “no moral constraints.”

Large AI models are also often just plain wrong. Last month, Facebook parent company Meta took down a demo for an AI model called Galactica that claimed to provide answers to scientific questions, after users pointed out that it filtered out queries about topics like AIDS and that its answers frequently contained misinformation and links to non-existent research papers. 

These bots’ responses are undeniably impressive in how closely they mimic human communication, and some users on social media have clearly been dazzled into believing that ChatGPT or something like it could replace universities or aid in education. But these starry-eyed takes overlook the important fact that these AI systems are not actually intelligent. If anything, they are advanced text-prediction engines performing the equivalent of a computational parlor trick: great for amusement, but not a reliable way to get correct answers.