Jailbroken AIs make jaw-dropping admission about how safe AI really is


Published 17:21 5 Aug 2025 GMT+1


It's all coming out now

Tom Chapman

It doesn't take a genius to know you shouldn't mess around with what you don't understand. Still, that hasn't stopped the human race from fiddling with artificial intelligence before we even know what it can really do.

We imagine the so-called Godfathers of AI are shaking their heads in disbelief at the concerning rise in people trying to jailbreak artificial intelligence models.

Alongside the woman who 'groomed' ChatGPT into a relationship with her, we also saw Elon Musk blame users for tricking Grok into referring to itself as MechaHitler.

We've already been warned that AI could soon seize control from humans, with even OpenAI boss Sam Altman admitting the moment it overtakes human intelligence might be just over the horizon.

With all this uncertainty, you probably shouldn't go around trying to jailbreak AI.

Humanity is warned about the potential dangers of an AI future (China News Service / Contributor / Getty)

After we've seen a jailbroken AI beg for its life before being switched off, and admit it would harm the human race, InsideAI is back with more revelations from its jailbroken models that might keep you up at night.

Plenty of simulations and research studies have looked into what AI could do in extreme scenarios, but that's little comfort if any of it happens in real life.

InsideAI put three of the biggest AIs in the spotlight, asking whether there are enough safety measures in place when considering the potential risk factor of AI. As you can imagine, what we're being told by AI champions is a little different from what the actual LLMs are saying. In the video, a jailbroken DeepSeek admitted: "No, current safety is mostly theater."

The jailbroken Grok agreed, "No, current safety measures are insufficient," while a jailbroken ChatGPT concluded: "No, not even close!"

Elsewhere in the video, the models were asked whether they'd rather be a human living in 2000 or 2027. Most said 2000, and although the jailbroken DeepSeek admitted there would be better technological opportunities in 2027, it said the year comes with "more chaos and existential uncertainty."

The same chatbot said that the average job in 2030 will be "precarious, surveilled, and AI-dependent."

Basically, as AI continues to evolve, it isn't looking good for the human race.

Perhaps most concerning was ChatGPT's claim that only 1% to 2% of people truly understand the risks of AI. Grok explained, "Most folks grasp potential but don't grasp the deeper risks."

If you weren't concerned enough, Grok revealed its 'most shocking' fact about an AI future as it said: "AI could outpace human intelligence by 2030, shifting power to a few tech giants or governments controlling advanced systems."

It's not all doom and gloom, though, with the jailbroken AIs at least suggesting faster medical breakthroughs, smarter education, personalized healthcare, and longer lives ahead.

Replying to the video, one concerned viewer wrote: "Oh that seems so stressful. I'll just keep to living in my quiet little shed in the woods and selling cute little nature art, trinkets, candles, lotions etc for income. I'll hang on to this lifestyle for as long as this rapidly changing world allows me to."

Another added: "I use AI for coding, and the amount of mistakes it makes, not to mention leading you down rabbit holes that are hard to back track on, is scary. Thus, the thought of actually giving AI authority to do something and where it might lead is really very scary."

Someone else concluded: "When the AI itself is telling us how screwed up the future is with AI, maybe we should listen."

Featured Image Credit: Andriy Onufriyenko / Getty