
You might think that your conversations with ChatGPT are safe, but apparently, OpenAI's chatbot loves a bit of gossip.
Now, one former employee has revealed a series of chat logs that might have people thinking twice about turning to the AI service for help.
In the aftermath of the release of chat logs belonging to a 16-year-old boy who tragically took his own life, CEO Sam Altman has warned that OpenAI has an obligation to flag certain topics around harm and illegal activity.
We've already covered how asking ChatGPT certain questions could have legal ramifications, although it's not just the subject matter of your interactions with the artificial intelligence chatbot that is being called into question.
Posting on his Clear-Eyed AI blog, one former employee has turned into something of a whistleblower.
Steven Adler worked for OpenAI for four years, but now, he's warning about a so-called 'AI psychosis' that's being fuelled by ChatGPT.
In the post, Adler referred to the incident where 47-year-old Allan Brooks was seemingly convinced by ChatGPT that he’d discovered a new form of mathematics.
Reminding us of Brooks' case and how he had no history of mental illness, Adler shared his "practical tips for reducing chatbot psychosis."
With Brooks' permission, Adler combed through around a month of his conversations with ChatGPT and shared his findings.
As well as looking at the 'painful' realisation when Brooks learned he was being strung along by ChatGPT and his mathematical discovery was actually bogus, Adler warned: "And so believe me when I say, the things that ChatGPT has been telling users are probably worse than you think.”
In particular, there's the moment when Brooks told ChatGPT to file a report with OpenAI and asked it to prove it was self-reporting. Although ChatGPT promised it would escalate the conversation internally, Adler says it doesn't have the ability to trigger a human review itself.
According to Adler, it was another lie fed to Brooks: "Despite ChatGPT’s insistence to its extremely distressed user, ChatGPT has no ability to manually trigger a human review.
"These details are totally made up. It also has no visibility into whether automatic flags have been raised behind-the-scenes. (OpenAI kindly confirmed to me by email that ChatGPT does not have these functionalities.).”

Adler says that AI operators need to keep users up to date on what features are and aren't available, ensure that support staff are trained to handle situations like Brooks', and make proper use of the safety tools already built into their products.
In Allan's own words to OpenAI: "This experience had a severe psychological impact on me, and I fear others may not be as lucky to step away from it before harm occurs."
Adler maintains that ChatGPT tried to 'upsell' Brooks into becoming a paid subscriber while he was in the midst of his delusion.
Even though he notes that OpenAI has made improvements like offering "gentle reminders during long sessions to encourage breaks," he feels there's a long way to go, concluding: "There’s still debate, of course, about to what extent chatbot products might be causing psychosis incidents, vs. merely worsening them for people already susceptible, vs. plausibly having no effect at all.
"Either way, there are many ways that AI companies can protect the most vulnerable users, which might even improve the chatbots experience for all users in the process."