Ex-ChatGPT employee issues urgent warning to public as he publishes horrifying chat logs

Published 14:35 7 Oct 2025 GMT+1

The chatbot is apparently triggering 'AI psychosis'

Tom Chapman

You might think that your conversations with ChatGPT are safe, but apparently, OpenAI's chatbot loves a bit of gossip.

Now, one former employee has revealed a series of chat logs that might have people thinking twice about turning to the AI service for help.

After chat logs belonging to a 16-year-old boy who tragically took his own life were made public, CEO Sam Altman warned that OpenAI has an obligation to flag certain topics relating to harm and illegal activities.

We've already covered how asking ChatGPT certain questions could have legal ramifications, but it's not just the subject matter of your interactions with the artificial intelligence chatbot that's being called into question.

Posting on his Clear-Eyed AI blog, one former employee has turned into something of a whistleblower.

Steven Adler worked at OpenAI for four years, but now, he's warning about a so-called 'AI psychosis' being fuelled by ChatGPT.

In the post, Adler referred to the incident where 47-year-old Allan Brooks was seemingly convinced by ChatGPT that he’d discovered a new form of mathematics.

Reminding us of Brooks' case and how he had no history of mental illness, Adler shared his "practical tips for reducing chatbot psychosis."

With Brooks' permission, Adler combed through around a month's worth of his conversations with ChatGPT and shared his findings.

As well as looking at the 'painful' realisation when Brooks learned he was being strung along by ChatGPT and that his mathematical discovery was actually bogus, Adler warned: "And so believe me when I say, the things that ChatGPT has been telling users are probably worse than you think."

In particular, there's the moment when Brooks told ChatGPT to file a report with OpenAI and demanded it prove it was self-reporting. Although ChatGPT promised it would escalate the conversation internally, Adler says it doesn't have the ability to trigger a human review itself.

According to Adler, it was another lie fed to Brooks: "Despite ChatGPT's insistence to its extremely distressed user, ChatGPT has no ability to manually trigger a human review.

"These details are totally made up. It also has no visibility into whether automatic flags have been raised behind-the-scenes. (OpenAI kindly confirmed to me by email that ChatGPT does not have these functionalities.)"

Brooks' chat logs make for some concerning reading (Steven Adler / ChatGPT)

Adler says that AI operators need to keep users up to date on what features are and aren't available, ensure that support staff are trained to handle situations like Brooks', and make use of the in-built safety tools.

In Allan's own words to OpenAI: "This experience had a severe psychological impact on me, and I fear others may not be as lucky to step away from it before harm occurs."

Adler maintains that ChatGPT tried to 'upsell' Brooks into becoming a paid subscriber while in the midst of his delusion.

Even though he notes that OpenAI has made improvements like offering "gentle reminders during long sessions to encourage breaks," he feels there's a long way to go, concluding: "There's still debate, of course, about to what extent chatbot products might be causing psychosis incidents, vs. merely worsening them for people already susceptible, vs. plausibly having no effect at all.

"Either way, there are many ways that AI companies can protect the most vulnerable users, which might even improve the chatbot experience for all users in the process."

Featured Image Credit: NurPhoto / Contributor / Getty
