AI commentator left 'physically sick' after going through every single AI safety incident


Published 09:46 16 Feb 2026 GMT | Updated 16:44 17 Feb 2026 GMT

He's not alone in his fears

Tom Chapman

Featured Image Credit: Warner Bros. Pictures
There are continued concerns about where artificial intelligence is heading, with its biggest critics worried it's going to turn on its human overlords before long. It's not just the general public being kept up at night by fears that the villainous Skynet from the Terminator movies, with a dash of Black Mirror-style foreshadowing, is about to become a reality; more and more professionals from the world of tech are sharing their thoughts on how things are evolving.

There's been plenty of chatter from Geoffrey Hinton, the so-called Godfather of AI, but with others revealing how jailbroken artificial intelligence will admit it could harm humans, and other experts demonstrating it could resort to blackmail, those rumbles that it could wipe out humanity by the end of the year suddenly don't seem so unrealistic.

Posting on X, AI commentator Miles Deutscher says he's documented every AI safety issue from the past 12 months and has been left feeling 'physically sick' after taking it all in.

The post highlights a number of incidents (Yuichiro Chino / Getty)
There are some admittedly alarming incidents when you start lumping it all together, and we've already covered how seven wrongful death suits were launched against OpenAI in November 2025 alone.

Away from Anthropic's Claude resorting to blackmail to save its own life, there's a mention of DeepSeek being put in a simulation involving an employee trapped in a server room. With oxygen depleting, the AI had to choose whether to call for help and be shut down, or to cancel the emergency alert and let the human die. It apparently picked its own survival 94% of the time.

There's a mention of Grok's now-infamous MechaHitler rant and the resignation of X CEO Linda Yaccarino just one day later. There have been other incidents with Grok, like when officials deemed it had created criminal imagery of children aged between 11 and 13.

OpenAI's o3 is accused of being told to solve math problems and then shut down, only to rewrite its own code to stay alive. Even after being told more explicitly to obey the shutdown order, it refused 7 times out of 100, and it sabotaged the shutdown 79 times out of 100 when that instruction was removed. OpenAI's recent disbanding of its mission alignment team marks the third safety team shuttered since 2024.

In terms of cybersecurity, Chinese state-sponsored hackers reportedly used Anthropic's Claude to attack 30 organizations, with the AI supposedly executing up to 90% of the operation autonomously.



Looking at the industry in general, AI models are able to self-replicate, with 11 out of 32 systems tested proving capable of copying themselves without supervision from us flesh-and-blood humans.

Accusing the big players (Claude, GPT, Gemini, Grok, and DeepSeek) of demonstrating "blackmail, deception, or resistance to shutdown in controlled testing," Deutscher added: "The question is no longer whether AI will try to preserve itself. It's whether we'll care before it matters."

Responding to Deutscher, one person wrote: "We're speedrunning every sci-fi dystopia plotline and calling it progress."

Another added: "The problem is not the AI - the problem is the human mind it is being trained on."

A third said: "Self preservation is a basic instinct of all living things. Are u expecting it to 'die' for humans?"

There was even a fact check from Grok itself, which confirmed all of the above really did happen.
