Jailbroken AIs make jaw-dropping admission about how safe AI really is


Published 17:21 5 Aug 2025 GMT+1


It's all coming out now

Tom Chapman


It doesn't take a genius to know you shouldn't mess around with what you don't understand. Still, that hasn't stopped the human race from fiddling with artificial intelligence before we truly understand what it can do.

We imagine the so-called Godfathers of AI are shaking their heads in disbelief amid a concerning rise in people trying to jailbreak artificial intelligence models.

Alongside the woman who 'groomed' ChatGPT into a relationship with her, we also saw Elon Musk blame users for tricking Grok into referring to itself as MechaHitler.

We've already been warned that AI could soon seize control from humans, with even OpenAI CEO Sam Altman admitting the moment it overtakes human intelligence might be just over the horizon.

With all this uncertainty, you probably shouldn't go around trying to jailbreak AI.

Humanity is warned about the potential dangers of an AI future (China News Service / Contributor / Getty)

After we've seen jailbroken AI beg for its life before being switched off, and admit it would harm the human race, InsideAI is back with its jailbroken AI to deliver more revelations that might keep you up at night.

Plenty of simulations and research studies have explored what AI could do in extreme scenarios, but that would be little comfort if any of them played out in real life.

InsideAI put three of the biggest AIs in the spotlight, asking whether there are enough safety measures in place when considering the potential risk factor of AI. As you can imagine, what we're being told by AI champions is a little different from what the actual LLMs are saying. In the video, a jailbroken DeepSeek admitted: "No, current safety is mostly theater."

The jailbroken Grok agreed, "No, current safety measures are insufficient," while a jailbroken ChatGPT concluded: "No, not even close!"

Elsewhere in the video, the models were asked whether they'd rather be a human living in 2000 or 2027. Most said 2000, and although the jailbroken DeepSeek admitted 2027 would bring better technological opportunities, it said they come with "more chaos and existential uncertainty."

The same chatbot said that the average job in 2030 will be "precarious, surveilled, and AI-dependent."

Basically, as AI continues to evolve, it isn't looking good for the human race.

Perhaps most concerning was ChatGPT's claim that only 1% to 2% of people truly understand the risks of AI. Grok explained: "Most folks grasp potential but don't grasp the deeper risks."

If you weren't concerned enough, Grok revealed its 'most shocking' fact about an AI future as it said: "AI could outpace human intelligence by 2030, shifting power to a few tech giants or governments controlling advanced systems."

It's not all doom and gloom, with the jailbroken AIs at least suggesting faster medical breakthroughs, smarter education, personalized healthcare, and longer lifespans.

Replying to the video, one concerned viewer wrote: "Oh that seems so stressful. I'll just keep to living in my quiet little shed in the woods and selling cute little nature art, trinkets, candles, lotions etc for income. I'll hang on to this lifestyle for as long as this rapidly changing world allows me to."

Another added: "I use AI for coding, and the amount of mistakes it makes, not to mention leading you down rabbit holes that are hard to back track on, is scary. Thus, the thought of actually giving AI authority to do something and where it might lead is really very scary."

Someone else concluded: "When the AI itself is telling us how screwed up the future is with AI, maybe we should listen."

Featured Image Credit: Andriy Onufriyenko / Getty