ChatGPT users are reporting seeing other people’s conversation histories showing up in their accounts—a glitch that could have serious implications for privacy and security on the massively popular AI tool.
The ChatGPT interface typically features a sidebar that displays a user’s past conversations, visible only to that user. Several people are now reporting that ChatGPT is instead showing them conversation histories belonging to total strangers with different accounts, without the users doing anything differently. One user reported that they could not see the strangers’ full chat logs, only the titles of different chats.
On Monday afternoon, the history function displayed an error reading “Unable to load history,” and then the entire service went offline, according to OpenAI’s status page.
Sam Altman, CEO of OpenAI, addressed the bug in a tweet on Wednesday, saying: “we had a significant issue in ChatGPT due to a bug in an open source library, for which a fix has now been released and we have just finished validating. a small percentage of users were able to see the titles of other users’ conversation history. we feel awful about this.”
Machine learning research and development company OpenAI launched ChatGPT in November 2022. A large language model trained on vast amounts of text scraped from the internet, it exploded in popularity, and in February OpenAI launched ChatGPT Plus, a $20-per-month subscription plan.
OpenAI says on its FAQ page for ChatGPT that it uses conversations for training the system, and that those conversations will be reviewed by “AI trainers” as part of that process. “As part of our commitment to safe and responsible AI, we review conversations to improve our systems and to ensure the content complies with our policies and safety requirements,” the FAQ says.
“Please don't share any sensitive information in your conversations,” the FAQ tells users.
OpenAI has claimed to care deeply about safety standards in developing AI. But ChatGPT, which became one of the fastest-growing consumer applications in history when it reached 100 million monthly active users two months after launch, has never been reliable in the accuracy of its responses. People have jailbroken ChatGPT to make it endorse racism and conspiracy theories, and have led it into role-playing scenarios involving the sexualization of children.
Joseph Cox contributed reporting to this story.
Updated 3/22/23 with comment from Sam Altman.