Accepted for/Published in: Journal of Medical Internet Research
Date Submitted: May 22, 2024
Date Accepted: Jun 15, 2024
Performance of ChatGPT Across Different Versions in Medical Licensing Examinations Worldwide: A Systematic Review and Meta-Analysis
ABSTRACT
Background:
Over the past two years, researchers have used various medical licensing examinations to test whether ChatGPT possesses accurate medical knowledge. The performance of each version of ChatGPT has differed significantly across medical licensing examinations administered in different settings, and a comprehensive understanding of this variability is still lacking.
Objective:
This study reviews the literature published up to March 2024 on the performance of ChatGPT in medical licensing examinations. It aims to contribute to the evolving discourse on artificial intelligence (AI) in medical education by providing a comprehensive analysis of ChatGPT's performance across different examination settings. The insights gained from this systematic review are intended to guide educators, policymakers, and technical experts in using AI effectively and judiciously in medical education.
Methods:
We searched Web of Science (WOS), PubMed, and Scopus using predefined query strings for literature published between January 1, 2022, and March 29, 2024. Two authors independently screened the literature against the inclusion and exclusion criteria, extracted data, and assessed study quality using the Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) tool. We conducted both qualitative and quantitative analyses.
Results:
A total of 45 studies on the performance of different versions of ChatGPT in medical licensing examinations were included. ChatGPT-4 achieved an overall accuracy of 81%, significantly surpassing ChatGPT-3.5, and in most cases passed the examinations, exceeding the average scores of medical students. Translating exam questions into English improved the performance of ChatGPT-3.5 but had no effect on ChatGPT-4. ChatGPT-3.5 showed no performance difference between examinations from English-speaking and non-English-speaking countries, whereas ChatGPT-4 performed better on examinations from English-speaking countries. ChatGPT-3.5 performed better on short-text questions than on long-text questions. Question difficulty and the use of optimized prompts affected the performance of both ChatGPT-3.5 and ChatGPT-4. On image-based multiple-choice questions (MCQs), ChatGPT's accuracy ranged from 13.1% to 100%. However, ChatGPT performed significantly worse on open-ended questions than on MCQs.
Conclusions:
ChatGPT-4 demonstrates considerable potential for future use in medical education. However, owing to its incomplete accuracy, inconsistent performance, and the challenges posed by differing medical policies and knowledge across countries, ChatGPT-4 is not yet suitable for use in medical education.
Trial Registration: This systematic review was registered in the International Prospective Register of Systematic Reviews (PROSPERO) on February 1, 2024 (CRD42024506687).
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have granted JMIR Publications an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be published under a CC BY license, at this stage the authors and publisher expressly prohibit redistribution of this draft other than for review purposes.