Accepted for/Published in: JMIR Formative Research

Date Submitted: Sep 6, 2024
Date Accepted: Jan 29, 2025

The final, peer-reviewed published version of this preprint can be found here:

Medical Misinformation in AI-Assisted Self-Diagnosis: Development of a Method (EvalPrompt) for Analyzing Large Language Models

Zada T, Tam N, Barnard F, Rambhatla S, Bhat V

JMIR Form Res 2025;9:e66207

DOI: 10.2196/66207

PMID: 40063849

PMCID: 11913316

Warning: This is an author submission that has not been peer reviewed or edited. Preprints, unless marked as "accepted," should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.

Large Language Models for Self-Diagnosis: A New Front for Medical Information

  • Troy Zada; 
  • Natalie Tam; 
  • Francois Barnard; 
  • Sirisha Rambhatla; 
  • Venkat Bhat

ABSTRACT

Background:

The rapid integration of large language models (LLMs) into healthcare is sparking global discussion about their potential to revolutionize healthcare quality and accessibility. At a time when improving healthcare quality and access remains a critical concern for countries worldwide, the ability of these models to pass medical exams has been used to argue in favour of their use in medical training and diagnosis. However, the impact of their inevitable use as a self-diagnostic tool and their role in spreading healthcare misinformation have not been evaluated.

Objective:

This study aims to assess the effectiveness of LLMs from the perspective of a general user performing self-diagnosis, in order to better understand the clarity, accuracy, and robustness of the models' responses.

Methods:

We develop a comprehensive testing methodology, EvalPrompt, based on a medical licensing exam: exam questions are posed to the LLM as open-ended questions to mimic real-world self-diagnosis use cases, and the responses are assessed by expert and non-expert raters.
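
As an illustration of this kind of open-ended prompting, the sketch below poses an exam-style vignette to a chat model as a free-text self-diagnosis query. It is a minimal sketch, assuming the OpenAI Python client; the function ask_open_ended, the prompt wording, and the sample vignette are hypothetical and do not reproduce the paper's actual EvalPrompt pipeline.

    # Minimal sketch: posing an exam-style vignette as an open-ended,
    # self-diagnosis query (illustrative; not the paper's EvalPrompt code).
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def ask_open_ended(case_description: str, model: str = "gpt-4") -> str:
        """Send a vignette to the model without multiple-choice options,
        mimicking how a general user might describe their symptoms."""
        prompt = (
            f"{case_description}\n\n"
            "What is the most likely diagnosis, and what should I do next?"
        )
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

    # Stand-in vignette (not from the actual exam set):
    vignette = ("A 45-year-old man reports crushing chest pain radiating to "
                "his left arm, with sweating and nausea for 30 minutes.")
    print(ask_open_ended(vignette))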

Results:

We reveal that (a) ChatGPT-4.0 responses are marked as correct only 36% of the time by both non-expert and expert raters, with only 34% agreement between the two groups. Interestingly, (b) when sentence dropout is applied to the correct responses from (a), non-experts rate an additional 27% of responses as correct, which indicates an increased risk of spreading medical misinformation.
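
To make the perturbation concrete, here is a minimal sketch of a sentence-dropout function that randomly removes sentences from a piece of text before it is re-evaluated. The function name sentence_dropout, the 50% drop probability, and the sentence-splitting regex are assumptions for illustration; the study's exact dropout procedure may differ.

    # Minimal sketch of sentence dropout: randomly remove sentences from a
    # text to probe robustness (illustrative; may differ from the study).
    import random
    import re

    def sentence_dropout(text: str, drop_prob: float = 0.5, seed: int = 0) -> str:
        """Split text into sentences and drop each with probability
        drop_prob, always keeping at least one sentence."""
        rng = random.Random(seed)
        sentences = re.split(r"(?<=[.!?])\s+", text.strip())
        kept = [s for s in sentences if rng.random() > drop_prob]
        return " ".join(kept) if kept else sentences[0]

    query = ("I have had a fever for three days. My throat is sore. "
             "I also feel very tired. What could this be?")
    print(sentence_dropout(query))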

Conclusions:

These results highlight the modest capabilities of LLMs, as their responses are often unclear and inaccurate. There is a pressing need for the community to develop trustworthy solutions that reduce medical misinformation in LLM outputs.


Citation

Please cite as:

Zada T, Tam N, Barnard F, Rambhatla S, Bhat V

Medical Misinformation in AI-Assisted Self-Diagnosis: Development of a Method (EvalPrompt) for Analyzing Large Language Models

JMIR Form Res 2025;9:e66207

DOI: 10.2196/66207

PMID: 40063849

PMCID: 11913316

© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.