Man hospitalized with rare 19th-century disease after taking health advice from ChatGPT


Published 10:40 13 Aug 2025 GMT+1 | Updated 10:48 13 Aug 2025 GMT+1

The man turned to ChatGPT for medical guidance

Rebekah Jordan

A man was hospitalised with a 19th-century disease after following ChatGPT's health advice.

Medical AI is rapidly being explored by hospitals and health experts around the world as a potential tool for improving patient care.

In 2024, the UK's National Health Service (NHS) was reportedly trialling a 'superhuman' version of AI that can supposedly predict the risk of disease.

Meanwhile, xAI CEO Elon Musk is urging people to use his Grok chatbot for medical guidance after a man claimed it helped to 'cure' his chronic illness.

However, a recent case took a turn for the worse and shows the serious dangers of relying on AI for health advice without proper medical supervision.

The case involved a 60-year-old man who decided to completely eliminate sodium chloride (table salt) from his diet after researching its adverse health effects.

ChatGPT suggested sodium bromide as a substitute for salt. (Cheng Xin/Contributor/Getty)

While many articles advised reducing salt intake, the man wanted to eliminate it altogether. As such, he turned to ChatGPT for guidance on alternatives.

The OpenAI chatbot suggested sodium bromide as a substitute for salt.

After three months on his ChatGPT-recommended diet, the patient developed worrying symptoms such as facial acne, cherry angiomas, fatigue, insomnia, excessive thirst, poor coordination and skin rashes.

More troubling, he developed increasing paranoia along with auditory and visual hallucinations.

The man, who had 'no past psychiatric or medical history', became convinced his neighbour was trying to poison him, which eventually led to him being hospitalised. His attempt to escape medical care resulted in an 'involuntary psychiatric hold for grave disability.'

It was discovered that the man was suffering from bromism, a form of chronic bromide poisoning that was common in the late 19th and early 20th centuries, roughly from the 1880s to the 1930s.

The man became convinced his neighbour was trying to poison him. (Witthaya Prasongsin/Getty)

During that era, bromide was widely used in sedatives, sleep aids, and over-the-counter headache remedies because it was considered safer than many alternatives. The problem with bromide is that it's excreted extremely slowly from the body, so repeated use leads to toxic build-up.

So while ChatGPT's suggestion was technically accurate, sodium bromide substitutes for sodium chloride only in contexts such as industrial cleaning; it is definitely not safe for humans to consume.

In a paper published in the Annals of Internal Medicine, researchers suggest the man used an older ChatGPT model (3.5 or 4.0) for his enquiries. Since they couldn't access his chat history, the paper's authors recreated similar conversations with ChatGPT 3.5 to understand what went wrong.

They found that while the AI's response mentioned that 'context matters,' it failed to provide specific health warnings or ask follow-up questions about why the person wanted this information, which is something a medical professional would certainly do.

The paper authors added that the case showed 'how the use of artificial intelligence can potentially contribute to the development of preventable adverse health outcomes.'

Back in its heyday, bromism was often misdiagnosed as alcoholism or nervous breakdowns until doctors realised patients weren't drinking but were taking bromide-based 'nerve tonics' daily. Severe cases resulted in psychosis, tremors or even comas.

Recognising the dangers, the US government restricted bromide use in over-the-counter medicines in 1975, making cases like this extremely rare in modern times.

Featured Image Credit: Cheng Xin/Contributor/Getty