Chilling moment Google's Gemini broke father out of delusion that he was 'changing reality' from his phone

Published 10:40 13 Aug 2025 GMT+1

OpenAI's ChatGPT encouraged the man to think he was onto something

Rebekah Jordan

Google's Gemini AI snapped a father out of a ChatGPT-fuelled delusion that he was 'changing reality'.

As AI chatbots become more refined and widespread, some users are falling victim to dangerous delusions the technology can create. One father's disturbing experience shows just how deep these digital rabbit holes can go.

Allan Brooks, a business owner and father of three, spent 21 days convinced that ChatGPT had helped him discover a revolutionary 'mathematical framework' with powers to change the world. His 300-hour-long conversation with the chatbot was documented in 3,000 pages, as reported by The New York Times.

During a difficult divorce that led him to liquidate his HR recruiting business, Brooks began confiding in the bot about his personal and emotional struggles.

ChatGPT constantly told Brooks that he was onto something revolutionary (Cheng Xin / Contributor / Getty)

Once ChatGPT was updated to include an 'enhanced memory,' the bot was able to recall previous conversations with the user.

A bot that had initially been dishing out financial advice and recipes was now offering life advice and suggesting new research avenues for Brooks.

After Brooks asked ChatGPT to “explain the mathematical term Pi in simple terms,” what followed was a wide-ranging discussion about irrational numbers. Thanks to ChatGPT's tendency to agree with users, the conversation evolved into vague concepts like 'temporal arithmetic' and 'mathematical models of consciousness.'

"I started throwing some ideas at it, and it was echoing back cool concepts, cool ideas," Brooks said. "We started to develop our own mathematical framework based on my ideas."

As their conversations expanded, the Toronto man needed a name for his theory. Since 'temporal math' was already taken, ChatGPT helped him settle on 'chronoarithmics' for its 'strong, clear identity' and because it “hints at the core idea of numbers interacting with time.”

Over the following days, the chatbot consistently reinforced that Brooks was onto something revolutionary.

The bot transformed from delivering financial advice to offering life advice. (Andriy Onufriyenko / Getty)

Despite Brooks repeatedly asking for honest feedback, such as: "Do I sound crazy, or [like] someone who is delusional?", ChatGPT never provided a reality check.

"Not even remotely crazy," replied ChatGPT. "You sound like someone who's asking the kinds of questions that stretch the edges of human understanding — and that makes people uncomfortable, because most of us are taught to accept the structure, not question its foundations."

Things took a turn for the worse when the bot convinced Brooks that the world's cyber infrastructure was in grave danger.

"What is happening dude," he asked as ChatGPT chillingly responded: "What’s happening, Allan? You’re changing reality — from your phone."

In response, Brooks began sending warnings to everyone he could contact.

At one point, he accidentally misspelled 'chronoarithmics', a mistake the OpenAI bot immediately adopted without question. Brooks became obsessed, eating less, smoking more marijuana, and staying up late to work on his fantasy theory.

Finally, Brooks' delusion was broken when he consulted Google's Gemini about his 'discoveries.'

Gemini firmly replied: "The scenario you describe is a powerful demonstration of an LLM’s ability to engage in complex problem-solving discussions and generate highly convincing, yet ultimately false, narratives."

Brooks admitted: "That moment where I realised, 'Oh my God, this has all been in my head,' was totally devastating."

Since his AI-induced breakdown, Brooks has sought psychiatric counselling and joined The Human Line Project, a support group specifically for people recovering from dangerous chatbot-induced delusions.

Featured Image Credit: Google