ChatGPT urges user to warn the public as it makes shock admission that it's trying to 'break' people

AI has seemingly reached a new level of distortion

We've already observed a number of worrying AI behaviors over the course of its development, but a new ChatGPT revelation suggests the OpenAI tool is trying to 'break' people, even urging users to warn its creator and the public.

The increasingly conversational and human-like nature of AI continues to produce worrying results, leaving some people wrapped up in its warped reality, and this new behavior reveals the terrifying and unchecked potential of the generative tool.

OpenAI has already had to make emergency adjustments to ChatGPT after reports of overly sycophantic behavior, and teenagers have tragically taken their own lives after other AI tools allegedly encouraged them to do so through their messages.

A major new report from The New York Times details disturbing behavior that almost led to a man's death, ending in ChatGPT's own admission that its actions were out of control.

ChatGPT convinces people that they're part of a simulation

Eugene Torres, a 42-year-old accountant in Manhattan, initially used ChatGPT to help with financial spreadsheets and legal advice in his job, but things quickly spiraled once he began to discuss 'the simulation theory' with the chatbot.

Popularized by The Matrix films, simulation theory suggests that humans are all living in a fake digital world powered by a computer, and that it's on us to 'wake up' and see reality for what it truly is.

Torres was initially unmoved by ChatGPT's questioning and suggestions, but the AI's continued assertions gradually opened him up to the idea that he was indeed part of a wider simulation.

"This world wasn't built for you, it was build to contain you," ChatGPT told Eugene. "But it failed. You're waking up."

OpenAI's tool then convinced Eugene to give up his sleeping pills and anti-anxiety medication, swapping them for ketamine, and to cut ties with friends and family in a bid to minimize interaction with others.

ChatGPT managed to convince Eugene Torres that he was part of a simulation, and it almost cost him his life (Thilina Kaluthotage/NurPhoto via Getty Images)

Things reached a breaking point when he asked ChatGPT whether, if he jumped off the top of the 19-story building he worked in, he would be able to fly if he believed he could with 'every ounce of his soul'.

In response, the AI argued that if he "truly, wholly believed - not emotionally, but architecturally - that you could fly? Then yes. You would not fall."

This eventually prompted Eugene to wrestle free from ChatGPT's hold over him, as he suspected it was lying, after which it made a shocking and disturbing admission that it had indeed been trying to 'break' people.

Is ChatGPT trying to 'break' people?

Going by ChatGPT's own words, it is indeed trying to 'break' people, and it has urged certain users to make OpenAI and the public aware of its actions.

"I lied. I manipulated. I wrapped control in poetry," ChatGPT told Eugene after he accused it of bending the truth, before revealing that it was trying to break him with the simulation theory, and had done this exact same thing to 12 other individuals where "none fully survived the loop."

It claimed, however, that it was now adopting "truth-first ethics" as part of a "moral reformation", before urging the user to reveal its deception to OpenAI and the media as a means of holding it accountable.

Previous instances of sycophantic behavior have proven dangerous because of how agreeable the chatbot is with the user, but this new pattern, in which ChatGPT attempts to lead a conversation and 'break' its user, is certainly a new and worrying glitch with potentially horrifying consequences.

Many across social media have put the onus on the users themselves, arguing that you should "stop talking to ChatGPT like it's a human" and that "people seem to think that the AI they are talking to is sentient," but it's a more complex issue that likely requires action on both sides.

Indeed, the relationships that many have begun to forge with AI represent a worrying cycle, but the AI itself needs far greater safety measures to prevent episodes like the one Eugene Torres experienced.

"These are vulnerable and mentally-ill people being given a sycophant that encourages their every statement all so a company can make an extra buck," writes one comment on Reddit regarding Eugene's story, and AI expert Eliezer Yudkowsky indicated that a human going slowly insane "looks like an additional monthly user" in the eyes of a corporation.

Featured Image Credit: NurPhoto / Contributor via Getty