Accepted for/Published in: JMIR Medical Informatics
Date Submitted: May 9, 2023
Open Peer Review Period: May 8, 2023 - Jul 3, 2023
Date Accepted: Sep 13, 2023
Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.
Diagnostic Accuracy of Chat Generative Pretrained Transformer-Generated Differential Diagnosis Lists for Case Report-Derived Clinical Vignettes
ABSTRACT
Background:
The diagnostic accuracy of differential diagnoses generated by artificial intelligence chatbots, including chat generative pretrained transformers (ChatGPTs), for difficult or complex clinical vignettes from case reports is unknown.
Objective:
This study aimed to evaluate the accuracy of the differential diagnosis lists generated by third-generation ChatGPT (ChatGPT-3) and fourth-generation ChatGPT (ChatGPT-4) for case vignettes from case reports published by the Department of General Internal Medicine (GIM).
Methods:
We searched PubMed for the case reports. Physicians screened the retrieved reports, included diagnostic cases, confirmed the final diagnosis of each, and summarized the cases as clinical vignettes. Physicians then entered a predetermined prompt text together with each clinical vignette into ChatGPT to generate the top ten differential diagnosis lists. The ChatGPT models were not specifically trained or reinforced for this task. Three GIM physicians from other medical institutions created differential diagnosis lists by reading the same clinical vignettes. We measured the rate of correct diagnosis within the top ten differential diagnosis lists, the top five differential diagnosis lists, and the top diagnosis.
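For illustration only: the study's prompts were typed manually into the ChatGPT interface, but a roughly equivalent workflow could be scripted against the OpenAI API. The prompt wording, model identifier, and function below are assumptions for this sketch, not the study's actual materials.

```python
# Hypothetical sketch: requesting a top-10 differential diagnosis list for a
# clinical vignette through the OpenAI API. The study entered prompts manually
# into the ChatGPT web interface; the prompt text and model name here are
# illustrative assumptions, not the authors' exact materials.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT_TEMPLATE = (
    "List the top 10 differential diagnoses for the following case, "
    "ordered from most to least likely:\n\n{vignette}"
)

def top10_differentials(vignette: str, model: str = "gpt-4") -> str:
    """Return the model's raw top-10 differential diagnosis list as text."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "user", "content": PROMPT_TEMPLATE.format(vignette=vignette)}
        ],
    )
    return response.choices[0].message.content

# Example usage with a placeholder vignette:
# print(top10_differentials("A 45-year-old man presents with fever and ..."))
```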
Results:
In total, 52 case reports were analyzed. The rates of correct diagnosis by ChatGPT-4 within the top ten differential diagnosis lists, top five differential diagnosis lists, and top diagnosis were 43/52 (82.7%), 42/52 (80.8%), and 31/52 (59.6%), respectively. The rates of correct diagnosis by ChatGPT-3 within the top ten differential diagnosis lists, top five differential diagnosis lists, and top diagnosis were 38/52 (73.1%), 34/52 (65.4%), and 22/52 (42.3%), respectively. The rates of correct diagnosis by ChatGPT-4 were higher than those by ChatGPT-3 within the top ten (82.7% vs. 73.1%, P=.34) and top five (80.8% vs. 65.4%, P=.12) differential diagnosis lists and for the top diagnosis (59.6% vs. 42.3%, P=.12), although the differences were not statistically significant. The rates of correct diagnosis by ChatGPT-4 were also higher than those by the physicians within the top ten (82.7% vs. 75.0%, P=.47) and top five (80.8% vs. 67.3%, P=.18) differential diagnosis lists and for the top diagnosis (59.6% vs. 50.0%, P=.43), although the differences were not statistically significant. There were no statistically significant differences in the rates of correct diagnosis within the top ten and top five differential diagnosis lists and for the top diagnosis generated by the ChatGPT models between open access and non-open access case reports, or between case reports published before 2021 and those published in 2022.
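The abstract does not name the statistical test behind these P values; the sketch below assumes McNemar's test for paired binary outcomes on the same 52 cases, one conventional choice for this design, and uses placeholder per-case data rather than the study's results.

```python
# Hypothetical sketch of the rate arithmetic and a paired comparison.
# The per-case correctness vectors are random placeholders, and McNemar's test
# is an assumption; the abstract does not state which test was used.
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

rng = np.random.default_rng(0)
n_cases = 52
# 1 = correct diagnosis within the top-10 list, 0 = missed (placeholder data)
gpt4_top10 = rng.integers(0, 2, n_cases)
gpt3_top10 = rng.integers(0, 2, n_cases)

print(f"ChatGPT-4 top-10 rate: {gpt4_top10.sum()}/{n_cases} ({gpt4_top10.mean():.1%})")
print(f"ChatGPT-3 top-10 rate: {gpt3_top10.sum()}/{n_cases} ({gpt3_top10.mean():.1%})")

# 2x2 table of paired outcomes: rows = ChatGPT-4 correct/incorrect,
# columns = ChatGPT-3 correct/incorrect
table = np.array([
    [np.sum((gpt4_top10 == 1) & (gpt3_top10 == 1)),
     np.sum((gpt4_top10 == 1) & (gpt3_top10 == 0))],
    [np.sum((gpt4_top10 == 0) & (gpt3_top10 == 1)),
     np.sum((gpt4_top10 == 0) & (gpt3_top10 == 0))],
])
result = mcnemar(table, exact=True)
print(f"McNemar P value: {result.pvalue:.2f}")
```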
Conclusions:
This study demonstrates the high diagnostic accuracy of the differential diagnosis lists generated by ChatGPT for difficult or complex clinical vignettes from case reports. The rates of correct diagnosis within the top ten and top five differential diagnosis lists generated by ChatGPT-4 were >80%.
Clinical Trial: Not applicable.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer-review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.