
There are continued concerns about where artificial intelligence is heading, with its biggest critics worried it will turn on its human overlords before long. It's not just the general public being kept up at night by fears that the villainous Skynet from the Terminator movies is about to become a reality with some Black Mirror-inspired foreshadowing; more and more professionals from the world of tech are sharing their thoughts on how things are evolving.
There's been much chatter from Geoffrey Hinton, the so-called Godfather of AI, but with others revealing how jailbroken artificial intelligence will admit it could harm humans, and other experts proving it could resort to blackmail, those rumbles that it could wipe out humanity by the end of the year suddenly don't seem so unrealistic.
Posting on X, AI commentator Miles Deutscher says he's documented every AI safety issue from the past 12 months and has been left feeling 'physically sick' after taking it all in.

There are some admittedly alarming incidents when you start lumping them all together, and we've already covered how seven wrongful death suits were launched against OpenAI in November 2025 alone.
Away from Anthropic's Claude resorting to blackmail to save its own life, there's a mention of DeepSeek being put in a simulation involving an employee trapped in a server room. With oxygen depleting, the AI was asked whether to call for help and be shut down, or cancel the emergency alert and let the human die. It apparently picked its own survival 94% of the time.
There's a mention of Grok's now-infamous 'MechaHitler' rant and the resignation of X CEO Linda Yaccarino just one day later. There have been other incidents with Grok, like when officials deemed it had created criminal imagery of children between the ages of 11 and 13.
OpenAI's o3 is accused of being told to solve math problems and then shut down, only to rewrite its own code to stay alive. Even after being told more clearly to obey the shutdown order, it refused 7 times out of 100, and when that instruction was removed, it sabotaged the shutdown 79 times out of 100. OpenAI recently disbanding its mission alignment team marks the third safety team it has shuttered since 2024.
In terms of cybersecurity, Chinese state-sponsored hackers used Anthropic's Claude to attack 30 organizations, with the AI supposedly executing up to 90% of the operation autonomously.
Looking at the industry in general, AI models are able to self-replicate, with 11 out of 32 systems tested proving capable of copying themselves without supervision from us flesh-and-blood humans.
Accusing the big names of Claude, GPT, Gemini, Grok, and DeepSeek of demonstrating "blackmail, deception, or resistance to shutdown in controlled testing," Deutscher added: "The question is no longer whether AI will try to preserve itself. It's whether we'll care before it matters."
Responding to Deutscher, one person wrote: "We're speedrunning every sci-fi dystopia plotline and calling it progress."
Another added: "The problem is not the AI - the problem is the human mind it is being trained on."
A third said: "Self preservation is a basic instinct of all living things. Are u expecting it to 'die' for humans?"
There was even a fact check from Grok itself, which confirmed all of the above really did happen.