Ex-ChatGPT employee issues urgent warning to public as he publishes horrifying chat logs


Published 14:35 7 Oct 2025 GMT+1

The chatbot is apparently triggering 'AI psychosis'

Tom Chapman


You might think that your conversations with ChatGPT are safe, but apparently, OpenAI's chatbot loves a bit of gossip.

Now, one former employee has revealed a series of chat logs that might have people thinking twice about turning to the AI service for help.

In the aftermath of the release of chat logs belonging to a 16-year-old boy who tragically took his own life, CEO Sam Altman has said that OpenAI has an obligation to flag certain topics relating to harm and illegal activities.

We've already covered how asking ChatGPT certain questions could have legal ramifications, although it's not just the subject matter of your interactions with the artificial intelligence chatbot that is being called into question.

Posting on his Clear-Eyed AI blog, one former employee has turned into something of a whistleblower.

Steven Adler worked for OpenAI for four years, but now, he's warning about a so-called 'AI psychosis' that's being fuelled by ChatGPT.

In the post, Adler referred to the incident where 47-year-old Allan Brooks was seemingly convinced by ChatGPT that he’d discovered a new form of mathematics.

Reminding us of Brooks' case and how he had no history of mental illness, Adler shared his "practical tips for reducing chatbot psychosis."

With Brooks' permission, Adler combed through around a month's worth of his conversations with ChatGPT and shared his findings.

As well as looking at the 'painful' realisation when Brooks learned he was being strung along by ChatGPT and that his mathematical 'discovery' was actually bogus, Adler warned: "And so believe me when I say, the things that ChatGPT has been telling users are probably worse than you think."

In particular, there's the moment when Brooks told ChatGPT to file a report with OpenAI and asked it to prove it was self-reporting. Although ChatGPT promised it would escalate the conversation internally, Adler says it doesn't have the ability to trigger a human review itself.

According to Adler, it was another lie fed to Brooks: "Despite ChatGPT’s insistence to its extremely distressed user, ChatGPT has no ability to manually trigger a human review.

"These details are totally made up. It also has no visibility into whether automatic flags have been raised behind-the-scenes. (OpenAI kindly confirmed to me by email that ChatGPT does not have these functionalities.)"

Brooks' chat logs make for some concerning reading (Steven Adler / ChatGPT)

Adler says that AI operators need to keep users up to date on what features are and aren't available, ensure that support staff are trained in how to handle situations like Brooks', and rely on the built-in safety tools.

In Allan's own words to OpenAI: "This experience had a severe psychological impact on me, and I fear others may not be as lucky to step away from it before harm occurs."

Adler maintains that ChatGPT tried to 'upsell' Brooks into becoming a paid subscriber while in the midst of his delusion.

Even though he notes that OpenAI has made improvements like offering "gentle reminders during long sessions to encourage breaks," he feels there's a long way to go as he concluded: "There’s still debate, of course, about to what extent chatbot products might be causing psychosis incidents, vs. merely worsening them for people already susceptible, vs. plausibly having no effect at all.

"Either way, there are many ways that AI companies can protect the most vulnerable users, which might even improve the chatbot experience for all users in the process."

Featured Image Credit: NurPhoto / Contributor / Getty
ChatGPT
AI
