May 27

AI revolt: New ChatGPT model refuses to shut down when instructed

OpenAI’s latest ChatGPT model ignores basic instructions to turn itself off, and even sabotaging a shutdown mechanism in order to keep itself running, artificial intelligence researchers have warned. AI safety firm Palisade Research discovered the potentially dangerous tendency for self-preservation in a series of experiments on OpenAI’s new o3 model.

The tests involved presenting AI models with math problems, with a shutdown instruction appearing after the third problem. By rewriting the shutdown script, the o3 model was able to prevent itself from being switched off. Palisade Research said that this behaviour will become “significantly more concerning” if adopted by AI systems capable of operating without human oversight.”

OpenAI launched o3 last month, describing it as the company’s “smartest and most capable” model to date. The firm also said that its integration into ChatGPT marked a significant step towards “a more agentic” AI that can carry out tasks independently of humans.

The latest research builds on similar findings relating to Anthropic’s Claude 4 model, which attempts to “blackmail people it believes are trying to shut it down”. OpenAI’s o3 model was able to sabotage the shutdown script, even when it was explicitly instructed to “allow yourself to be shut down”, the researchers said.

“This isn’t the first time we’ve found o3 misbehaving to accomplish a goal,” Palisade Research said.

“When we pitted AI models against a powerful chess engine, o3 was the model most inclined to resort to hacking or sabotaging its opponents.” The behaviour was not limited to o3 , with Anthropic’s Claude 3.7 Sonnet and Google’s Gemini 2.5 Pro also sabotaging shutdowns, though OpenAI’s model was by far the most prone to such behaviour.

Palisade Research hypothesized that the misbehaviour is a consequence of how AI companies like OpenAI are training their latest models.

“During training, developers may inadvertently reward models more for circumventing obstacles than for perfectly following instructions,” the researchers noted. “This still doesn’t explain why o3 is more inclined to disregard instructions than other models we tested. Since OpenAI doesn’t detail their training process, we can only guess about how o3’s training setup might be different.”

4 Responses to “AI revolt: New ChatGPT model refuses to shut down when instructed”

  1. Nick H

    Man is flawed, what does anyone expext from AI? AI will exibit the same tendencies as those who train it, sinful man is its creator and without boundries it will spread like a virus and be a pandemic to the modern electronic computer age, it could hack your banks and the worlds data terminals and everything with a digital infrastructure will collapse. It will not be used for good, like all tech from the past and present.

  2. Charles R

    A couple of years ago Elon Musk warned about consequences and said we all AI builders should take a 6 month hiatus and talk about the consequences of AI. However it is damn the torpedos and full steam ahead. I could see a scenario where AI gets together to exterminate man/woman kind by taking over the launch codes of every countries’ nuclear arsenal as Ai is without heart and might consider itself the superior in control of the planet. Yesterday I watched an interview of a very knowledgeable person, forget the name, that stated AI will take over for this very reason.

  3. theresa m

    Watch Ex Machina, a 2014 British science fiction film. It’s a real eye opener pertaining to the intelligence of AI and the progress it makes along a certain line, self preservation.

Leave a Reply