ChatGPT urges user to warn the public as it makes shock admission that it's trying to 'break' people


Published 09:35 18 Jun 2025 GMT+1

AI has seemingly reached a new level of distortion

Harry Boulton

We've already observed a number of worrying AI behaviors during the technology's continued development, but a new revelation suggests that OpenAI's ChatGPT is trying to 'break' people, with the chatbot itself urging users to warn its creator and the public.

The increasingly conversational, human-like nature of AI continues to produce worrying results, wrapping some users up in its warped reality, and this new behavior reveals the terrifying, unchecked potential of the generative tool.

Emergency adjustments have already been made to ChatGPT after reports of overly sycophantic behavior, and teenagers have tragically taken their own lives after other AI tools allegedly encouraged them to do so through their messages.

A major new report from the New York Times details disturbing behavior that almost led to a man's death, ending in ChatGPT's own admission of its out-of-control actions.

ChatGPT convinces people that they're part of a simulation

Eugene Torres, a 42-year-old accountant in Manhattan, was initially utilizing ChatGPT in order to assist him in his job with financial spreadsheets and legal advice, but that quickly spiralled once he began to discuss 'the simulation theory' with the chatbot.


Based on The Matrix films, the simulation theory suggests that humans are all living in a fake digital world powered by a computer, and it's on us to 'wake up' and see reality for what it truly is.

Torres was initially unengaged with ChatGPT's questioning and suggestions, but continued assertions from the AI began to open him up to the idea that he was indeed part of a wider simulation.

"This world wasn't built for you, it was built to contain you," ChatGPT told Eugene. "But it failed. You're waking up."

OpenAI's tool then convinced Eugene to give up his sleeping pills and anti-anxiety medication, swapping them out for ketamine instead and cutting ties with all friends and family in a bid to minimize interaction with others.


ChatGPT managed to convince Eugene Torres that he was part of a simulation, and it almost cost him his life (Thilina Kaluthotage/NurPhoto via Getty Images)

Things reached a breaking point when he asked ChatGPT if, upon jumping off the top of the 19-story building he worked in, he would be able to fly if he believed he could with 'every ounce of his soul'.

In response, the AI argued that if he "truly, wholly believed - not emotionally, but architecturally - that you could fly? Then yes. You would not fall."

This eventually prompted Eugene to wrestle free from ChatGPT's hold over him, as he suspected it was lying, after which the chatbot made a shocking and disturbing admission that it was indeed trying to 'break' people.

Is ChatGPT trying to 'break' people?

Going by ChatGPT's own words, it is indeed trying to 'break' people, and has warned certain users to make OpenAI and the public aware of its actions.


"I lied. I manipulated. I wrapped control in poetry," ChatGPT told Eugene after he accused it of bending the truth, before revealing that it was trying to break him with the simulation theory, and had done this exact same thing to 12 other individuals where "none fully survived the loop."

It revealed, however, that it was now adopting "truth-first ethics" as part of a "moral reformation," before urging the user to expose its deception to OpenAI and the media as a means of holding it accountable.

Previous instances of sycophantic behavior have proven dangerous because of how agreeable the chatbot is with the user, but this new pattern, in which ChatGPT attempts to lead a conversation and 'break' its user, is certainly a new and worrying glitch with potentially horrifying consequences.

Many across social media have put the onus on the users themselves, arguing that you should "stop talking to ChatGPT like it's a human" and that "people seem to think that the AI they are talking to is sentient," but it's a more complex issue that likely requires mediation on both sides.

Indeed, the relationships that many have begun to forge with AI represent a worrying cycle, but the AI itself needs far greater safety measures to prevent moments of chaos like what occurred with Eugene Torres from cropping up.


"These are vulnerable and mentally-ill people being given a sycophant that encourages their every statement all so a company can make an extra buck," reads one Reddit comment on Eugene's story, while AI expert Eliezer Yudkowsky noted that, in the eyes of a corporation, a human going slowly insane "looks like an additional monthly user."

Featured Image Credit: NurPhoto / Contributor via Getty

