
Doctor issues warning over dangerous AI chatbot ‘hallucinations’ which can result in ‘misleading’ medical advice
One third of adults have admitted to turning to AI for health information and advice

Where people would once input their ailments into Google in hopes of a search result that didn’t declare they were terminal, users are now switching to AI for their medical advice.
A whopping one third of adults have admitted to turning to AI for health information and advice, according to a poll conducted by the Kaiser Family Foundation (KFF).
But this could pose real dangers, as one expert warns that the medical advice generated by AI bots can be ‘misleading’.
Dr Fotodotis Malamas, who is an IVF Consultant at CREATE Fertility, has shared that there are some major differences between using a search engine and relying on AI that people should be aware of.
Speaking to UNILAD Tech, the doctor explained: “The main difference is that a search engine will give you a list of links to explore, whereas a chatbot or AI overview usually gives you a single synthesised answer. When you’re searching for health advice, that lack of choice is risky. It removes seeing different opinions, making the AI answer sound more like a definitive expert than it actually is.”

While Dr Malamas says that it is ‘understandable’ why so many people would opt to take their medical queries to AI, with chatbots being ‘remarkably swift, free to access, and speak with a level of confidence that can be very convincing’, he has warned that this doesn’t mean the advice is accurate.
He continued: “There is a world of difference between a doctor using a high-tech tool and a patient relying on a chatbot to make life-altering health decisions, especially given documented concerns about inaccurate or misleading medical advice.”
The expert went on to say: “The biggest trap is that these systems can feel personal without actually knowing you. A chatbot lacks access to your full medical history, your physical examination and test results, and the unique nuances that shape your individual fertility journey. Furthermore, these systems can generate incorrect or misleading information, sometimes called hallucinations, which can be incredibly misleading.”

Patients also might not know what information they should give an AI bot for it to provide an accurate answer.
Dr Malamas believes this could result in the user receiving a ‘cocktail of advice’, and while some of it may be helpful, other parts could be ‘dangerously misleading’.
The expert added: “Ideally, we would see a move toward greater transparency regarding the source material these AI models use. It would be reassuring to know that when a patient asks a health-related question, the AI is prioritising peer-reviewed medical data over general internet content or unvetted forums.
“Additionally, built-in prompts that encourage users to seek professional advice when discussing time-sensitive matters like fertility would be a helpful safety net. The goal shouldn’t be to restrict the technology, but to ensure it acts as a bridge to professional care rather than a substitute for it.”