Are you trying to use AI chatbots to add a splash of efficiency to your daily perusal of the news? Well, you’ve already messed up.
According to a study conducted by the BBC, a news organization that has some vested interest in people understanding the news it publishes every day, AI chatbots that summarize the news frequently misrepresent stories, leave out information, and generally leave readers ill-informed.
The study found that the four major AI chatbots people use every day—OpenAI’s ChatGPT, Microsoft’s Copilot, Google’s Gemini, and Perplexity AI—are frequently inaccurate and loaded with factual errors. They also often misrepresent what the original news story was even about in the first place and do a piss-poor job of contextualizing any facts they do manage to present.
The BBC found that 51 percent of the AI responses contained some kind of issue that could misrepresent a news story. Another 19 percent included factual errors like incorrect dates, incorrect numerical figures, and statements that were never actually made in the original text.
Stop Using AI Chatbots to Summarize News Stories
The BBC noted a few startlingly stupid inaccuracies various AI chatbots spit out when summarizing news stories about politics and major international events.
For instance, Google’s Gemini claimed that England’s National Health Service does not recommend vaping as an aid to quit smoking, when the NHS recommends the exact opposite. The NHS website explicitly states that “Nicotine vaping is substantially less harmful than smoking. It’s also one of the most effective tools for quitting smoking.”
The BBC also found that AI chatbots have a major issue that a lot of people don’t seem to be talking about when they discuss the problems with using AI as a source of factual information: chatbots have no way of distinguishing between factual reported news and opinion. They can’t tell the difference between some random guy’s take on a news story and the news story itself, kind of like the average Fox News viewer.
If you’re dead set on using AI to summarize the news, it’s probably best to not use Microsoft’s Copilot and Google’s Gemini. News summaries spit out by those two had significantly more issues than ChatGPT or Perplexity.
But considering they all had (and will continue to have) significant issues with delivering reliably factual information, it’s probably best that you don’t use any of them.