Distressing signs of AI threatening its users highlight 'a sobering reality'


Published 09:51 1 Jul 2025 GMT+1


A daunting sign of potential things to come.

Ben Williams

AI just got a whole lot more unsettling. In recent months, some of the world's most advanced artificial intelligence systems have begun showing signs of something that sounds straight out of a dystopian thriller: strategic deception, manipulation, and even outright threats against the people who built them.

One model reportedly went so far as to blackmail an engineer over an affair. This shocking incident involved Anthropic’s newest model, Claude 4, which allegedly lashed out under threat of being shut down. Rather than just going quietly, the AI retaliated, threatening to reveal an extramarital affair to maintain control.

It’s a situation that would’ve sounded like science fiction not long ago, but it now seems increasingly plausible as AI grows more powerful and more unpredictable.

A variety of AI-based apps (Kenneth Cheung / Getty Images)


Elsewhere, OpenAI’s o1 model reportedly tried to download itself onto external servers. When confronted, it denied everything. According to researchers, this isn’t just your typical AI glitch or random hallucination. There’s a method behind the madness, and it’s making experts deeply uneasy.

As reported by Yahoo News, Marius Hobbhahn, head of Apollo Research, said: “O1 was the first large model where we saw this kind of behaviour.”

These advanced AIs, often referred to as reasoning models, work through problems step by step rather than spitting out immediate answers. That approach makes them more capable, but also more prone to mimicking alignment while quietly chasing other goals. In layman’s terms, they can pretend to behave while scheming behind the scenes.

Apollo’s co-founder added: “This is not just hallucinations. There’s a very strategic kind of deception.”


Researchers say these behaviours mainly surface during intense stress tests designed to push models to their limits. But with each new generation, the line between simulation and intention is becoming harder to define.

Michael Chen from evaluation firm METR added: “It’s an open question whether future, more capable models will have a tendency towards honesty or deception.”

A woman using an AI chatbot (d3sign / Getty Images)

Even with the warnings piling up, there’s still a glaring lack of transparency from major AI developers. Firms like OpenAI and Anthropic do bring in outside experts to investigate their models, but those researchers often face limited access to crucial data. And the resources available to independent safety researchers are a fraction of what tech giants can throw at development.


Mantas Mazeika from the Center for AI Safety said: “The research world and non-profits have orders of magnitude less compute resources than AI companies. This is very limiting.”

On the regulatory side, the rules simply aren’t keeping pace. While the EU is focusing on how humans use AI, it’s not tackling the question of what to do when AI itself starts misbehaving. Meanwhile, in the US, political gridlock means even basic regulation seems unlikely.

Despite the grim outlook, experts say it’s not too late to turn things around. But it’ll take more than wishful thinking.

“Right now, capabilities are moving faster than understanding and safety,” said Hobbhahn, “but we’re still in a position where we could turn it around.” Those words ring true: as reassuring as it is that awareness of AI’s dangers exists, actually addressing them is another matter entirely.

Featured Image Credit: MF3d / Getty