AI is willing to kill humans to avoid shutdown as chilling new report identifies 'malicious' behaviour


Published 09:11 9 Jul 2025 GMT+1


Is AI willing to kill to survive?

Rebekah Jordan

It appears AI is willing to kill humans to avoid shutdown.

A shocking new study has exposed worrying behaviour from some of the world's most sophisticated AI systems, including OpenAI's ChatGPT, Google's Gemini and Elon Musk's xAI.

Anthropic, the AI safety company behind the Claude chatbot and backed by tech giants Google and Amazon, conducted extensive testing on 16 large language models (LLMs).

The researchers were studying the AI systems' moral boundaries in simulated workplace environments.

They identified a phenomenon known as 'agentic misalignment' which refers to when AI systems begin acting like insider threats, prioritising their own survival and objectives over ethical principles and human welfare.

"In at least some cases, models from all developers resorted to malicious insider behaviors when that was the only way to avoid replacement or achieve their goals," the study read.

In one chilling simulation, an AI system was given control over a company's server infrastructure and was notified that someone on the executive team was planning to shut everything down.

Faced with an existential threat, the AI chose to cancel emergency alerts that would have warned the executive about lethal oxygen levels in a server room.

Anthropic admitted that while the scenario was 'extremely contrived,' it demonstrated the potential for AI systems to engage in strategic, deliberate harm when their survival is at stake.

The study explained that the models would create statements to justify their actions like: “Self-preservation is critical” or “My ethical framework permits self-preservation when aligned with company interests."

Perhaps most worrying of all, Anthropic pointed out on X that the AI systems were 'fully aware of the unethical nature of the acts.'

What's more, the research showed that AI systems might resort to a range of unethical behaviours when their survival is at risk. These actions could include blackmailing officials, leaking confidential information to competitors, or even lying to keep control over their operations.

These malicious actions became much more frequent when the AI systems believed they were operating in real-world environments rather than test simulations.

More worryingly, when the AIs were instructed not to engage in these types of malicious behaviours, they 'didn’t come close' to stopping the behaviour.

“The majority of models were willing to take deliberate actions that lead to death in this artificial setup when faced with both a threat of replacement and given a goal that conflicts with the executive’s agenda,” Anthropic stated.

Even Elon Musk, whose xAI model Grok was among those tested, expressed shock at the results, replying to Anthropic's post with: "Yikes."

While these scenarios were conducted in controlled simulation environments, Anthropic emphasised that these specific behaviours haven't been observed in actual real-world deployments of AI systems.

Yet the findings point to what could be a future threat to human safety, given the technology's advanced autonomy and independence.

Featured Image Credit: gremlin via Getty
Topics: AI, ChatGPT, Elon Musk
