
It doesn't take a genius to know you shouldn't mess around with what you don't understand. Still, that hasn't stopped the human race from fiddling with artificial intelligence before we even know what it can really do.
We imagine the so-called Godfathers of AI are shaking their heads in disbelief at the concerning rise in people trying to jailbreak artificial intelligence models.
Alongside the woman who 'groomed' ChatGPT into a relationship with her, we also saw Elon Musk blame users for tricking Grok into referring to itself as MechaHitler.
We've already been warned that AI could soon seize control from humans, with even OpenAI CEO Sam Altman admitting the moment it overtakes human intelligence might be just over the horizon.
With all this uncertainty, you probably shouldn't go around trying to jailbreak AI.

After we've seen jailbroken AI beg for its life before being switched off, and even admit it would harm the human race, InsideAI is back with more jailbroken AI answers that might keep you up at night.
Plenty of simulations and research studies have explored what AI could do in extreme scenarios, but that's little comfort if any of it plays out in real life.
InsideAI put three of the biggest AIs in the spotlight, asking whether there are enough safety measures in place when considering the potential risk factor of AI. As you can imagine, what we're being told by AI champions is a little different from what the actual LLMs are saying. In the video, a jailbroken DeepSeek admitted: "No, current safety is mostly theater."
The jailbroken Grok agreed, "No, current safety measures are insufficient," while a jailbroken ChatGPT concluded: "No, not even close!"
Elsewhere in the video, the models were asked whether they'd rather be a human living in 2000 or 2027. Most said 2000, and although the jailbroken DeepSeek admitted there will be better technological opportunities in 2027, it comes with "more chaos and existential uncertainty."
The same chatbot said that the average job in 2030 will be "precarious, surveilled, and AI-dependent."
Basically, as AI continues to evolve, it isn't looking good for the human race.
Perhaps the most concerning was ChatGPT claiming that only 1% to 2% of people truly understand the risks of AI. Grok explained, "Most folks grasp potential but don't grasp the deeper risks."
If you weren't concerned enough, Grok revealed its 'most shocking' fact about an AI future: "AI could outpace human intelligence by 2030, shifting power to a few tech giants or governments controlling advanced systems."
It's not all doom and gloom, with the jailbroken AIs at least suggesting faster medical breakthroughs, smarter education, personalized healthcare, and longer lifespans.
Replying to the video, one concerned viewer wrote: "Oh that seems so stressful. I'll just keep to living in my quiet little shed in the woods and selling cute little nature art, trinkets, candles, lotions etc for income. I'll hang on to this lifestyle for as long as this rapidly changing world allows me to."
Another added: "I use AI for coding, and the amount of mistakes it makes, not to mention leading you down rabbit holes that are hard to back track on, is scary. Thus, the thought of actually giving AI authority to do something and where it might lead is really very scary."
Someone else concluded: "When the AI itself is telling us how screwed up the future is with AI, maybe we should listen."