Using AI for medical advice 'dangerous', study finds
Using artificial intelligence (AI) chatbots to help seek medical advice can be "dangerous", a new study has found.
The research found that using AI to make medical decisions presented risks to patients, due to its "tendency to provide inaccurate and inconsistent information".
It was led by researchers from the Oxford Internet Institute and the Nuffield Department of Primary Care Health Sciences at the University of Oxford.
Dr Rebecca Payne, who co-authored the study, said it found that "despite all the hype, AI just isn't ready to take on the role of the physician".
"Patients need to be aware that asking a large language model about their symptoms can be dangerous, giving wrong diagnoses and failing to recognise when urgent help is needed," Dr Payne, who is also a GP, added.
In the study, researchers asked nearly 1,300 participants to identify potential health conditions and a recommended course of action across a series of scenarios.
Some participants used large language model (LLM) AI chatbots to receive a potential diagnosis and next steps, whereas others used more traditional methods - such as seeing a GP.
Researchers then evaluated the results, and found that the AI often provided a "mix of good and bad information" which users struggled to tell apart.
They found that, while AI chatbots now "excel at standardised tests of medical knowledge", their use as a medical tool would "pose risks to real users seeking help with their own medical symptoms".
"These findings highlight the difficulty of building AI systems that can genuinely support people in sensitive, high-stakes areas like health," Dr Payne said.
The study's lead author, Andrew Bean - from the Oxford Internet Institute - said the study showed "interacting with humans poses a challenge" for even the top-performing LLMs.
"We hope this work will contribute to the development of safer and more useful AI systems," he added.
