Volunteer moderators at Stack Overflow, a popular question-and-answer forum for software developers run by Stack Exchange, have announced a general strike over the company’s new AI content policy, which allows GPT-generated content on the site and requires that suspensions over AI content stop immediately. The moderators say they are concerned about the harm this could do, given how often chatbots produce inaccurate information.
“Stack Overflow, Inc. has decreed a near-total prohibition on moderating AI-generated content… tacitly allowing the proliferation of incorrect information (“hallucinations”) and unfettered plagiarism on the Stack Exchange network,” reads an open letter written by the moderators, who are all volunteers elected by the community.
“This poses a major threat to the integrity and trustworthiness of the platform and its content. Effective immediately, we are enacting a general moderation strike on Stack Overflow and the Stack Exchange network, in protest of this and other recent and upcoming changes to policy and the platform that are being forced upon us by Stack Overflow, Inc.”
The new policy, enacted in late May, requires moderators to stop moderating AI-generated content simply for being AI-generated. Without proper moderation of AI-generated content, though, moderators say the quality and accuracy of Stack Exchange’s information will quickly decline.
“AI chatbots are like parrots,” reads a post by moderators on Meta Stack Exchange further explaining their demands. “ChatGPT, for example, doesn’t understand the responses it gives you; it simply associates a given prompt with information it has access to and regurgitates plausible-sounding sentences. It has no way to verify that the responses it’s providing you with are accurate. ChatGPT is not a writer, a programmer, a scientist, a physicist, or any other kind of expert our network of sites is dependent upon for high-value content. When prompted, it’s just stringing together words based upon the information it was trained with. It does not understand what it’s saying.”
This, it continues, allows users to regurgitate ChatGPT’s answers without understanding them themselves, which goes against the very purpose of the site: “To be a repository of high-quality question and answer content.”
The content of the new AI policy is one big problem. The other, moderators say, is the lack of transparency surrounding it. They write in the post that in December, the site had enacted a temporary policy banning all ChatGPT use due to its “general inaccuracy” and violations of the site’s referencing requirements. This policy was supported both by volunteer moderators and Stack Exchange staff, and resulted in many post removals and user suspensions.
However, the moderators write that on May 29, a new policy was implemented in private, requiring “an immediate cessation of issuing suspensions for AI-generated content and to stop moderating AI-generated content on that basis alone.” The following day, a slightly different version of the policy was released to the public, omitting the language that required moderators to stop moderating AI-generated content altogether.
“The new policy overrode established community consensus and previous CM support, was not discussed with any community members, was presented misleadingly to moderators and then even more misleadingly in public, and is based on unsubstantiated claims derived from unreviewed and unreviewable data analysis,” the Meta Stack Exchange post reads.
It continues, “The fact that you have made one point in private, and one in public, which differ so significantly has put the moderators in an impossible situation, and made them targets for being accused of being unreasonable, and exaggerating the effect of the new policy.”
Moderators are also demanding non-AI-related improvements to the site. “The strike is also in large part about a pattern of behavior recently exhibited by Stack Exchange, Inc.,” the post reads. “The company has once again ignored the needs and established consensus of its community, instead focusing on business pivots at the expense of its own Community Managers.” One example they list is the chat function, which they say is desperately out of date but has been ignored for years.
The strike is the first major action against ChatGPT content flooding online sites. But moderators on other forums are similarly concerned. Moderators on Reddit have braced for a slew of AI-generated posts with inaccurate information. One Reddit moderator, who wished to remain anonymous, told Motherboard that Reddit’s ChatGPT-powered bot problem is “pretty bad,” and that several hundred accounts had already been manually removed from the site, since Reddit’s automated systems struggle with AI-created content.
The moderators want the AI policy to be retracted and revised, the inconsistency between the public and private versions of the policy to be resolved and apologized for, and the company to “stop being dishonest” about its relationship with the community.
Stack Overflow’s Vice President of Community, Philippe Beaudette, told Motherboard in a statement that, “A small number of moderators (11%) across the Stack Overflow network have stopped engaging in several activities, including moderating content. The primary reason for this action is dissatisfaction with our position on detection tools regarding AI-generated content. Stack Overflow ran an analysis and the ChatGPT detection tools that moderators were previously using have an alarmingly high rate of false positives.” The moderators write in their post that they were aware of the problems with the detection tool.
“We stand by our decision to require that moderators stop using the tools previously used,” Beaudette continued. “We are confident that we will find a path forward. We regret that actions have progressed to this point, and the Community Management team is evaluating the current situation as we work hard to stabilize things in the short term.”