
A psychiatrist has issued an urgent warning as the ‘AI psychosis’ phenomenon continues to spread.
Psychiatrist Keith Sakata took to X, formerly Twitter, to share that people are experiencing a ‘break from shared reality’ as a result of their use of AI chatbots.
He explained that this shows up as ‘disorganized thinking’, ‘fixed false beliefs (delusions)’, and ‘seeing/hearing things that aren’t there (hallucinations)’.
According to the expert, the brain normally works in three steps: predict, check reality, update belief.
On social media, Sakata said: “Psychosis happens when the ‘update’ step fails. And LLMs like ChatGPT slip right into that vulnerability.”
The psychiatrist continued: “The uncomfortable truth is we’re all vulnerable. The same traits that make you brilliant: pattern recognition, abstract thinking, intuition. They live right next to an evolutionary cliff edge. Most benefit from these traits. But a few get pushed over.
“To make matters worse, soon AI agents will know you better than your friends. Will they give you uncomfortable truths? Or keep validating you so you’ll never leave?
“Tech companies now face a brutal choice: Keep users happy, even if it means reinforcing false beliefs. Or risk losing them.”
Many people took to the X comment section to share their own thoughts on the matter, with one user writing: “AI can be a good mental health tool. Use it to learn coping skills, CBT techniques, meditation etc. Do not use it as a replacement for a therapist or friends.”
Another said: “It must be regulated. Every day, people become more addicted and less capable of thinking for themselves.”
A third person joked: “You’re just jealous that @grok likes me more than you. Tell him @grok.”
And a fourth added: “The chat ability is creepy. It’s spot on with responses and I can see how people will fall for it.”