AI chatbots pose 'dangerous' risk when giving medical advice, study suggests

Laura Cress, Technology reporter
A woman looks at her phone while lying on a bed in low light. Fiordaliso via Getty Images

AI chatbots give inaccurate and inconsistent medical advice that could present risks to users, according to a study from the University of Oxford.

The research found people using AI for healthcare advice were given a mix of good and bad responses, making it hard to identify what advice they should trust.

In November 2025, polling by Mental Health UK found more than one in three UK residents now use AI to support their mental health or wellbeing.

Dr Rebecca Payne, lead medical practitioner on the study, said it could be "dangerous" for people to ask chatbots about their symptoms.

Researchers gave 1,300 people a scenario, such as having a severe headache or being a new mother who felt constantly exhausted.

They were split into two groups, with one using AI to help them figure out what they might have and decide what to do next.

The researchers then evaluated whether people correctly identified what might be wrong, and whether they should see a GP or go to A&E.

They said the people who used AI often did not know what to ask, and were given a variety of different answers depending on how they worded their question.

The chatbots responded with a mixture of information, and people found it hard to distinguish between what was useful and what was not.

Dr Adam Mahdi, senior author on the study, told the BBC that while AI was able to give medical information, people "struggle to get useful advice from it".

"People share information gradually", he said.

"They leave things out, they don't mention everything. So, in our study, when the AI listed three possible conditions, people were left to guess which of those can fit.

"This is exactly when things would fall apart."

Lead author Andrew Bean said the analysis illustrated how interacting with humans posed a challenge "even for top" AI models.

"We hope this work will contribute to the development of safer and more useful AI systems," he said.

Dr Amber W. Childs, an associate professor of psychiatry at the Yale School of Medicine, said that because chatbots are trained on current medical practices and data, they also face the further problem of repeating biases which have been "baked into medical practices for decades".

"A chatbot is only as good a diagnostician as seasoned clinicians are, which is not perfect either," she said.

Meanwhile, Dr Bertalan Meskó, editor of The Medical Futurist, which predicts tech trends in healthcare, said new developments were coming in the field.

He said two major AI developers, OpenAI and Anthropic, had recently released health-focused versions of their general chatbots, which he believed would "definitely yield different results in a similar study".

He said the goal should be "to keep on improving" the tech, especially "health-related versions, with clear national regulations, regulatory guardrails and medical guidelines".

