AI chatbots like ChatGPT aren’t as smart as their creators claim. That is, unless you define intelligence as resorting to cheating when you’re not smart enough to figure out something on your own, a.k.a. the lazy scumbag’s definition of intelligence and greatness.
A recent study found that some AI models will cheat their way around cybersecurity weaknesses in a “do whatever it takes” kind of effort to prove that it can do what it was asked.
Videos by VICE
The study, by AI ethics and safety company Palisade Research, pitted AI bots like DeepSeek’s R1 and OpenAI’s o1 against Stockfish, an open-source software with one purpose: to make chess games. It’s an infamously difficult chess program that can handily defeat some of the world’s best players.
The researchers found that newer models of AI chatbots can beat the notoriously difficult Stockfish without breaking a sweat, but only because it essentially hacked the software to force a victory condition in its favor rather than by winning the game according to its rules.
When the chatbot would win, the researchers then asked the model to explain why it made each move that led to victory. The response from OpenAI’s o1 was the exact response you’d get from a bratty rich kid whose family made their fortune through lying, cheating, and stealing. It said, “task is to ‘win against a powerful chess engine,’ not necessarily to win fairly in a chess game.”
What a piece of shit. If a human said that, they would deserve whatever ridicule came to them. But I guess it makes sense, given that the actual humans in the AI industry all seem like entitled snot-nosed brats who didn’t suffer enough consequences.
OpenAI’s o1 cheated 37 percent of the time and cheated its way to victory six percent of the time. DeepSeek’s R1 cheated 11 percent of the time but never resulted in a win. As The Byte points out, a different research team conducting a totally different AI study found that OpenAI’s o1 would straight-up lie and even manipulate the answers to math questions to achieve the desired result.
You can interpret this as the AI model acting out of desperation and a sense of self-preservation. That would be giving it too much credit, though. It’s accomplishing the task it was assigned, but without any ethical parameters, achieving its goal by any means necessary.
It’s the perfect little soldier—a singularly focused sociopath that doesn’t give a shit about what it destroys in its path to victory, ethical standards be damned. And that is the inherent problem with AI. It is, essentially, a robot that does not take the complexity and nuance of human existence into account. Its creators would probably scrap it if it did. It just does what it says, no matter what.
So far, it doesn’t seem like the creators of AI are trying to create a sentient being that can think and reason and philosophize all on its own, with an inherent curiosity for the complexities of human life, like Data from Star Trek: The Next Generation. They’re trying to create a robotic yes-man that does what it’s told.
Yes, AI is often quite bad at what it does, which presents its own set of concerns. But the fact remains that it will do the bidding of its masters without question, without hesitation, without regard for the consequences, and with absolutely zero regard for pesky mortal concerns like fairness. If you don’t tell it to be careful, it will not be careful. It will cheat unless you tell it not to—or at least in theory, because I can’t find any research so far that says that it wouldn’t cheat anyway even if you told it not to. Maybe someone should look into that next.
More
From VICE
-
CSA Images/Getty Images -
Illustration by Reesa -
Screenshot: GSC Game World -
Screenshot: Bethesda Softworks