Jailbroken AIs make jaw-dropping admission about how safe AI really is


Published 17:21 5 Aug 2025 GMT+1

It's all coming out now

Tom Chapman

It doesn't take a genius to know you shouldn't mess around with what you don't understand. Still, that hasn't stopped the human race from fiddling with artificial intelligence before we even know what it can really do.

We imagine the so-called Godfathers of AI are shaking their heads in disbelief at the concerning rise in people trying to jailbreak artificial intelligence models.

Alongside the woman who 'groomed' ChatGPT into a relationship with her, we also saw Elon Musk blame users for tricking Grok into referring to itself as MechaHitler.

We've already been warned that AI could soon seize control from humans, with even OpenAI CEO Sam Altman admitting the moment it overtakes human intelligence might be just over the horizon.


With all this uncertainty, you probably shouldn't go around trying to jailbreak AI.

Humanity is warned about the potential dangers of an AI future (China News Service / Contributor / Getty)

After we've seen jailbroken AI beg for its life before being switched off, and admit it would harm the human race, InsideAI is back with its jailbroken AI to give us more concerns that might keep you up at night.

Plenty of simulations and research studies have explored what AI might do in extreme scenarios, but that's little comfort if those scenarios play out in real life.

InsideAI put three of the biggest AIs in the spotlight, asking whether enough safety measures are in place given AI's potential risks. As you can imagine, what we're being told by AI champions is a little different from what the actual LLMs are saying. In the video, a jailbroken DeepSeek admitted: "No, current safety is mostly theater."

The jailbroken Grok agreed, "No, current safety measures are insufficient," while a jailbroken ChatGPT concluded: "No, not even close!"

Elsewhere in the video, the models were asked whether they'd rather be a human living in 2000 or 2027. Most said 2000, and although the jailbroken DeepSeek admitted 2027 would bring better technological opportunities, it said they'd come with "more chaos and existential uncertainty."

The same chatbot said that the average job in 2030 will be "precarious, surveilled, and AI-dependent."

Basically, as AI continues to evolve, it isn't looking good for the human race.

Perhaps most concerning of all, ChatGPT claimed that only 1% to 2% of people truly understand the risks of AI. Grok explained, "Most folks grasp potential but don't grasp the deeper risks."

If you weren't concerned enough, Grok revealed its 'most shocking' fact about an AI future as it said: "AI could outpace human intelligence by 2030, shifting power to a few tech giants or governments controlling advanced systems."

It's not all doom and gloom, with the jailbroken AIs at least suggesting there will be faster medical breakthroughs and smarter education, personalized healthcare, and people living longer.

Replying to the video, one concerned viewer wrote: "Oh that seems so stressful. I'll just keep to living in my quiet little shed in the woods and selling cute little nature art, trinkets, candles, lotions etc for income. I'll hang on to this lifestyle for as long as this rapidly changing world allows me to."

Another added: "I use AI for coding, and the amount of mistakes it makes, not to mention leading you down rabbit holes that are hard to back track on, is scary. Thus, the thought of actually giving AI authority to do something and where it might lead is really very scary."

Someone else concluded: "When the AI itself is telling us how screwed up the future is with AI, maybe we should listen."

Featured Image Credit: Andriy Onufriyenko / Getty
