Accepted for/Published in: JMIR Formative Research
Date Submitted: Sep 6, 2024
Date Accepted: Jan 29, 2025
Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.
Large Language Models for Self-Diagnosis: A New Front for Medical Information
ABSTRACT
Background:
The rapid integration of Large Language Models (LLMs) into healthcare is sparking global discussion about their potential to transform healthcare quality and accessibility. At a time when improving healthcare quality and access remains a critical concern for countries worldwide, the ability of these models to pass medical exams has been used to argue in favour of their use in medical training and diagnosis. However, the impact of their inevitable use as a self-diagnostic tool, and their role in spreading healthcare misinformation, has not been evaluated.
Objective:
This study aims to assess the effectiveness of LLMs from the perspective of a general user attempting self-diagnosis, in order to better understand the clarity, accuracy, and robustness of the models' responses.
Methods:
We developed a testing methodology based on a medical licensing exam, posing its questions in open-ended form to mimic real-world self-diagnosis queries and evaluating the resulting LLM responses.
Results:
We found that (a) ChatGPT-4.0 responses were rated as correct 36% of the time by both non-experts and experts, with only 34% agreement between the two groups. Interestingly, (b) when sentence dropout was applied to the correct responses from (a), non-experts rated an additional 27% of responses as correct, indicating an increased risk of spreading medical misinformation.
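The sentence-dropout perturbation described above can be sketched as randomly removing sentences from a model response before it is rated. This is a minimal illustration only; the abstract does not specify the exact dropout procedure, so the splitting rule, dropout probability `p`, and fallback behavior here are assumptions.

```python
import random

def sentence_dropout(text: str, p: float = 0.3, seed: int = 0) -> str:
    """Drop each sentence of a response independently with probability p.

    Illustrative sketch: the paper's exact dropout procedure is not
    specified in the abstract, so the naive '.'-based sentence split
    and the keep-at-least-one fallback are assumptions.
    """
    rng = random.Random(seed)  # seeded for reproducibility
    # Naive sentence split on periods -- adequate for a sketch.
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    kept = [s for s in sentences if rng.random() >= p]
    # Always keep at least one sentence so the output is non-empty.
    if not kept and sentences:
        kept = [sentences[0]]
    return ". ".join(kept) + "."
```

Rating such truncated responses alongside the originals would let one measure how often incomplete answers are still judged correct, as in result (b).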
Conclusions:
These results highlight the modest capabilities of current LLMs, whose responses are often unclear and inaccurate. We call on the community to develop trustworthy solutions that reduce medical misinformation produced by LLMs.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.