Doctor issues warning over dangerous AI chatbot ‘hallucinations’ which can result in ‘misleading’ medical advice


Published 09:10 30 Apr 2026 GMT+1

One third of adults have admitted to turning to AI for health information and advice

Rikki Loftus

Featured Image Credit: J Studios/Getty Images
AI
Health
News
Science

Where people would once input their ailments into Google in hopes of a search result that didn’t declare they were terminal, users are now switching to AI for their medical advice.

A whopping one third of adults have admitted to turning to AI for health information and advice, according to a poll conducted by the Kaiser Family Foundation (KFF).

But this could be dangerous, as one expert has warned about the ‘misleading’ medical advice generated by AI bots.

Dr Fotodotis Malamas, an IVF consultant at CREATE Fertility, has said there are some major differences between using a search engine and relying on AI that people should be aware of.

Speaking to UNILAD Tech, the doctor explained: “The main difference is that a search engine will give you a list of links to explore, whereas a chatbot or AI overview usually gives you a single synthesised answer. When you’re searching for health advice, that lack of choice is risky. It removes seeing different opinions, making the AI answer sound more like a definitive expert than it actually is.”

One third of adults have admitted to turning to AI for health information and advice (d3sign/Getty Images)

While Dr Malamas says that it is ‘understandable’ why so many people would opt to take their medical queries to AI, with chatbots being ‘remarkably swift, free to access, and speak with a level of confidence that can be very convincing’, he has warned that this doesn’t mean the advice is accurate.

He continued: “There is a world of difference between a doctor using a high-tech tool and a patient relying on a chatbot to make life-altering health decisions, especially given documented concerns about inaccurate or misleading medical advice.”

The expert went on to say: “The biggest trap is that these systems can feel personal without actually knowing you. A chatbot lacks access to your full medical history, your physical examination and test results, and the unique nuances that shape your individual fertility journey. Furthermore, these systems can generate incorrect or misleading information, sometimes called hallucinations, which can be incredibly misleading.”

AI bots could be giving users ‘misleading’ medical advice, according to an expert (J Studios/Getty Images)

Patients also might not be aware of what information they need to give an AI bot for it to provide an accurate answer.

Dr Malamas believes this could result in the user receiving a ‘cocktail of advice’, and while some of it may be helpful, other parts could be ‘dangerously misleading’.

The expert added: “Ideally, we would see a move toward greater transparency regarding the source material these AI models use. It would be reassuring to know that when a patient asks a health-related question, the AI is prioritizing peer-reviewed medical data over general internet content or unvetted forums.

“Additionally, built-in prompts that encourage users to seek professional advice when discussing time-sensitive matters like fertility would be a helpful safety net. The goal shouldn’t be to restrict the technology, but to ensure it acts as a bridge to professional care rather than a substitute for it.”
