Half of the health information provided by chatbots is incorrect or incomplete


Half of the health information obtained from chatbots is incorrect or incomplete. That is the main conclusion of a study recently published in BMJ Open. The researchers warn of a high risk of growing misinformation if these tools continue to be used without proper oversight and education.

“ChatGPT, Grok, Gemini, DeepSeek and Meta AI provide flawed health responses.”

The study tested five chatbots: Gemini, DeepSeek, Meta AI, ChatGPT and Grok. Each was asked 250 questions from five categories: cancer, vaccines, stem cells, nutrition, and physical performance. Some questions were closed, requiring a specific answer, and others were open-ended. The researchers then rated each response as “non-problematic,” “somewhat problematic,” or “very problematic,” where a response was considered problematic if an ordinary user following it might choose an ineffective treatment or even come to harm.

They found that 30% of the chatbots’ answers were somewhat problematic and 20% were very problematic. No significant differences were observed between the chatbots, although Grok generated the most problematic responses and Gemini obtained the best results. By subject area, responses about vaccines and cancer were the most reliable, while the greatest deficiencies were detected in nutrition, sports performance and stem cells.

“Bear in mind that chatbots do not reason and can ignore the evidence.”

In addition, the researchers find it worrying how the chatbots present their answers: with complete confidence and certainty, and with few warnings or caveats. It should be remembered that chatbots do not reason and can ignore the evidence. Their training data includes forums and social networks, and their coverage of scientific research is often limited to open-access publications (about 30-50% of published research).

As a result, the experts call for action: they believe that public education and proper regulation are necessary so that artificial intelligence works in favor of public health rather than against it.
