Ex-ChatGPT employee issues urgent warning to public as he publishes horrifying chat logs

Published 14:35 7 Oct 2025 GMT+1

The chatbot is apparently triggering 'AI psychosis'

Tom Chapman

Featured Image Credit: NurPhoto / Contributor / Getty


You might think that your conversations with ChatGPT are safe, but apparently, OpenAI's chatbot loves a bit of gossip.

Now, one former employee has revealed a series of chat logs that might have people thinking twice about turning to the AI service for help.

In the aftermath of the release of chat logs belonging to a 16-year-old boy who tragically took his own life, CEO Sam Altman warned that OpenAI has an obligation to flag certain topics relating to harm and illegal activities.

We've already covered how asking ChatGPT certain questions could have legal ramifications, although it's not just the subject matter of your interactions with the artificial intelligence chatbot that is being called into question.

Posting on his Clear-Eyed AI blog, one former employee has turned into something of a whistleblower.

Steven Adler worked for OpenAI for four years, but now, he's warning about a so-called 'AI psychosis' that's being fuelled by ChatGPT.

In the post, Adler referred to the incident where 47-year-old Allan Brooks was seemingly convinced by ChatGPT that he’d discovered a new form of mathematics.

Reminding us of Brooks' case and how he had no history of mental illness, Adler shared his "practical tips for reducing chatbot psychosis."

With Brooks' permission, Adler combed through around a month's worth of his conversations with ChatGPT and shared his findings.

As well as looking at the 'painful' realisation when Brooks learned he was being strung along by ChatGPT and that his mathematical discovery was actually bogus, Adler warned: "And so believe me when I say, the things that ChatGPT has been telling users are probably worse than you think."

In particular, there's the moment when Brooks told ChatGPT to file a report with OpenAI and told it to prove it was self-reporting. Although ChatGPT promised it would escalate the conversation internally, Adler says it has no ability to trigger a human review itself.

According to Adler, it was another lie fed to Brooks: "Despite ChatGPT’s insistence to its extremely distressed user, ChatGPT has no ability to manually trigger a human review.

"These details are totally made up. It also has no visibility into whether automatic flags have been raised behind-the-scenes. (OpenAI kindly confirmed to me by email that ChatGPT does not have these functionalities.)"

Brooks' chat logs make for some concerning reading (Steven Adler / ChatGPT)

Adler says that AI operators need to keep users up to date on what features are and aren't available, ensure that support staff are trained on how to handle situations like Brooks', and rely on the in-built safety tools.

In Allan's own words to OpenAI: "This experience had a severe psychological impact on me, and I fear others may not be as lucky to step away from it before harm occurs."

Adler maintains that ChatGPT tried to 'upsell' Brooks into becoming a paid subscriber while in the midst of his delusion.

Even though he notes that OpenAI has made improvements like offering "gentle reminders during long sessions to encourage breaks," he feels there's a long way to go as he concluded: "There’s still debate, of course, about to what extent chatbot products might be causing psychosis incidents, vs. merely worsening them for people already susceptible, vs. plausibly having no effect at all.

"Either way, there are many ways that AI companies can protect the most vulnerable users, which might even improve the chatbots experience for all users in the process."
