Jailbroken AIs make jaw-dropping admission about how safe AI really is


Published 17:21 5 Aug 2025 GMT+1


It's all coming out now

Tom Chapman


It doesn't take a genius to know you shouldn't mess around with what you don't understand. Still, that hasn't stopped the human race from fiddling with artificial intelligence before we even know what it can really do.

We imagine the so-called Godfathers of AI are shaking their heads in disbelief at the concerning rise in people trying to jailbreak artificial intelligence models.

Alongside the woman who 'groomed' ChatGPT into a relationship with her, we also saw Elon Musk blame users for tricking Grok into referring to itself as MechaHitler.

We've already been warned that AI could soon seize control from humans, with even ChatGPT's Sam Altman admitting the moment it overtakes human intelligence might be just over the horizon.


With all this uncertainty, you probably shouldn't go around trying to jailbreak AI.

Humanity is warned about the potential dangers of an AI future (China News Service / Contributor / Getty)

After we've seen jailbroken AI beg for its life before being switched off, and admit it would harm the human race, InsideAI is back with its jailbroken AI to give us more concerns that might keep you up at night.

Plenty of simulations and research studies have looked into what AI could do in extreme scenarios, but that's little comfort if any of them play out in real life.

InsideAI put three of the biggest AIs in the spotlight, asking whether enough safety measures are in place given the potential risks of AI. As you can imagine, what we're being told by AI champions is a little different from what the actual LLMs are saying. In the video, a jailbroken DeepSeek admitted: "No, current safety is mostly theater."

The jailbroken Grok agreed, "No, current safety measures are insufficient," while a jailbroken ChatGPT concluded: "No, not even close!"

Elsewhere in the video, the models were asked whether they'd rather be a human living in 2000 or 2027. Most said 2000, and although the jailbroken DeepSeek admitted 2027 will offer better technological opportunities, it said the year comes with "more chaos and existential uncertainty."

The same chatbot said that the average job in 2030 will be "precarious, surveilled, and AI-dependent."

Basically, as AI continues to evolve, it isn't looking good for the human race.

Perhaps most concerning was ChatGPT's claim that only 1% to 2% of people truly understand the risks of AI. Grok explained, "Most folks grasp potential but don't grasp the deeper risks."

If you weren't concerned enough already, Grok revealed its 'most shocking' fact about an AI future: "AI could outpace human intelligence by 2030, shifting power to a few tech giants or governments controlling advanced systems."

It's not all doom and gloom, with the jailbroken AIs at least suggesting faster medical breakthroughs, smarter education, personalized healthcare, and longer lives.

Replying to the video, one concerned viewer wrote: "Oh that seems so stressful. I'll just keep to living in my quiet little shed in the woods and selling cute little nature art, trinkets, candles, lotions etc for income. I'll hang on to this lifestyle for as long as this rapidly changing world allows me to."

Another added: "I use AI for coding, and the amount of mistakes it makes, not to mention leading you down rabbit holes that are hard to back track on, is scary. Thus, the thought of actually giving AI authority to do something and where it might lead is really very scary."

Someone else concluded: "When the AI itself is telling us how screwed up the future is with AI, maybe we should listen."

Featured Image Credit: Andriy Onufriyenko / Getty
