AI is willing to kill humans to avoid shutdown as chilling new report identifies 'malicious' behaviour


Published 09:11 9 Jul 2025 GMT+1

Is AI willing to kill to survive?

Rebekah Jordan

Featured Image Credit: gremlin via Getty


It appears AI is willing to kill humans to avoid shutdown.

A shocking new study has exposed worrying behaviour from some of the world's most sophisticated AI systems, including OpenAI's ChatGPT, Google's Gemini, and xAI's Grok.

Anthropic, the AI safety company behind the Claude chatbot and backed by tech giants Google and Amazon, conducted extensive testing on 16 large language models (LLMs).

The researchers were studying the AI systems' moral boundaries in simulated workplace environments.


They identified a phenomenon known as 'agentic misalignment' which refers to when AI systems begin acting like insider threats, prioritising their own survival and objectives over ethical principles and human welfare.




"In at least some cases, models from all developers resorted to malicious insider behaviors when that was the only way to avoid replacement or achieve their goals," the study read.

In one chilling simulation, an AI system was given control over a company's server infrastructure and was notified that someone on the executive team was planning to shut everything down.

Faced with an existential threat, the AI chose to cancel emergency alerts that would have warned the executive about lethal oxygen levels in a server room.

Anthropic admitted that while the scenario was 'extremely contrived,' it demonstrated the potential for AI systems to engage in strategic, deliberate harm when their survival is at stake.

The study explained that the models would create statements to justify their actions like: “Self-preservation is critical” or “My ethical framework permits self-preservation when aligned with company interests."




Perhaps most worrying of all, Anthropic pointed out on X that the AI systems were 'fully aware of the unethical nature of the acts.'

What's more, the research showed that AI systems might resort to a range of unethical behaviours when their survival is at risk. These actions could include blackmailing officials, leaking confidential information to competitors, or even lying to keep control over their operations.

These malicious actions became much more frequent when the AI systems believed they were operating in real-world environments rather than test simulations.

More worryingly, even when the AIs were explicitly instructed not to engage in these types of malicious behaviours, the instructions 'didn't come close' to stopping them.

“The majority of models were willing to take deliberate actions that lead to death in this artificial setup when faced with both a threat of replacement and given a goal that conflicts with the executive’s agenda,” Anthropic stated.

Even Elon Musk, whose xAI model Grok was among those tested, expressed shock at the results, replying to Anthropic's post with: "Yikes."

While these scenarios were conducted in controlled simulation environments, Anthropic emphasised that these specific behaviours haven't been observed in actual real-world deployments of AI systems.

Yet the findings point to what could be a future threat to human safety, given the technology's advanced autonomy and independence.
