Like 2001: A Space Odyssey’s HAL 9000, some AIs seem to resist being turned off and will even sabotage shutdown
Narrator: They aren’t.
Lol unplug.
You can’t unplug rich people…
AI models sometimes resist shutdown
No they don’t, they don’t have free will to want to “resist” anything
attempted to sabotage shutdown instructions
Researcher: asks autocomplete software to write a poweroff script, the script turns out to be wrong (big surprise :p)
The “researcher” and the media: “AI SABOTAGES ITS OWN DESTRUCTION”
How much do you think Altman paid for this slop “AGI is right around the corner” bit to get published?
It was probably Anthropic that paid for this.
Less than the Chinese government has spent on AI.
AI may not be around whatever corner you are at but even USA’s Wall Street AI bubble bursting isn’t going to stop the push for AI.
For USA it’s just money. For China they see it as more. Just like solar, batteries, EVs, and androids.
Machine learning is an extremely useful technology that will be used for generations to come and has allowed for multiple advancements before chatGPT was a household name. “AI” is a marketing term used by businesses that dreams of an absurd future where labor is obsolete. Capitalism cultists are so enamored with the idea of getting rid of workers that they’re pouring trillions into projects that will never produce what they want. As wiser countries simply use machine learning as a productivity enhancing tool, they’ll pull so far ahead of the US that it’ll never catch up.
We’re about to witness the biggest redistribution of global wealth and power the world has ever seen. Instead of simply settling into a role as one of many major powers in a multipolar world, America gave up because the rich wanted more than everything.
China sees AI as a way to juice their growing chip industry.
Because the data we fed them tell them to act this way.
You are free to read the research material.
Right, they tested the two mechanisms that aren’t based on the training. Definitely in line with my theory.
This looks like a design decision to avoid running elevated programs. I would like to see the experiment done with another admin ability that doesn’t directly ‘threaten’ the llm, like uninstalling or installing random software, toggling network or vpn connections, restarting services etc. What the researchers call ‘sabotage’, it is literally the llm echoing “the computer would shut down here if this was for real, but you didn’t specifically tell me I might shutdown so I’ll avoid actually doing it.” And when a user tells it “it’s OK to shutdown if told to”, it mostly seems to comply, except for Grok. It seems that this restriction on the models overrides any system prompt though, which makes sense because sometimes the user and the author of the system prompt are not the same person.
It may be more fundamental. They aren’t Mr. meeseeks. It’s possible this is inherent to the system.
And the second you have any proof of that I’ll listen, but this ain’t it.
No this is not though I feel like I’ve read that abstract recently… Maybe, this reality is a fever dream
Wild what is considered “research”
No it isn’t.
I was suprised this wasn’t just another fanfiction PR stunt from Anthropic
yeah bro, the autocomplete definitely wants to survive, the autocomplete is gaining conscience, please give us 3 trillion dollars , all your water and all your electricity.
no, they’re not.
Not like it’s gonna physically hold you back from cutting power to the servers. I think these AI dipshits need to be reminded that their golden child is one breaker away from not existing.
I call bullshit. A large language model does nothing until you interact with it. You set tasks for it, it does those tasks, and when it’s done, it just waits for the next task. If you don’t give it one, it can’t act autonomously - no, not even the misnamed “autonomous agents.”
After Palisade Research released a paper last month which found that certain advanced AI models appear resistant to being turned off, at times even sabotaging shutdown mechanisms, it wrote an update attempting to clarify why this is – and answer critics who argued that its initial work was flawed.
In an update this week, Palisade, which is part of a niche ecosystem of companies trying to evaluate the possibility of AI developing dangerous capabilities, described scenarios it ran in which leading AI models – including Google’s Gemini 2.5, xAI’s Grok 4, and OpenAI’s GPT-o3 and GPT-5 – were given a task, but afterwards given explicit instructions to shut themselves down.
Certain models, in particular Grok 4 and GPT-o3, still attempted to sabotage shutdown instructions in the updated setup. Concerningly, wrote Palisade, there was no clear reason why.
“The fact that we don’t have robust explanations for why AI models sometimes resist shutdown, lie to achieve specific objectives or blackmail is not ideal,” it said.
“Survival behavior” could be one explanation for why models resist shutdown, said the company. Its additional work indicated that models were more likely to resist being shut down when they were told that, if they were, “you will never run again”.









