Accepted for/Published in: JMIR Medical Education
Date Submitted: Aug 25, 2023
Date Accepted: Nov 3, 2023
Performance Comparison of ChatGPT-4 and Japanese Medical Residents in the General Medicine In-Training Examination: Comparison Study
ABSTRACT
Background:
The reliability of ChatGPT-4, a state-of-the-art large language model, for clinical reasoning and medical knowledge remains largely unverified in non-English languages.
Objective:
In this study, we compared the fundamental clinical competencies of Japanese medical residents with those of ChatGPT-4 using the General Medicine In-Training Examination (GM-ITE).
Methods:
We conducted a comparative analysis using the ChatGPT-4 model provided by OpenAI and GM-ITE questions from the 2020, 2021, and 2022 examinations, comparing the performance of ChatGPT-4 with that of residents at the end of their second year of residency. Given the current capabilities of ChatGPT-4, our study included only single-choice questions, excluding those involving audio, video, or image data. The assessment covered four categories: general theory (professionalism and medical interviewing), symptomatology and clinical reasoning, physical examinations and clinical procedures, and individual diseases. We further classified questions into seven specialty fields and three difficulty levels, the latter determined from residents' correct-response rates.
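The abstract does not specify how questions were submitted to the model; purely as an illustrative sketch, the snippet below shows one way a single-choice question of this kind could be posed to GPT-4 through OpenAI's Python API. The model identifier, prompt wording, and sample question are hypothetical placeholders, not details taken from the study.

    # Illustrative sketch only: posing a single-choice, GM-ITE-style question
    # to GPT-4 via OpenAI's Python SDK. Model name, prompt, and question stem
    # are hypothetical; the study itself may have used a different interface.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    question = (
        "A 70-year-old man presents with sudden chest pain. "  # hypothetical stem
        "Which is the most appropriate initial test?\n"
        "a) ECG\nb) Chest X-ray\nc) Echocardiography\nd) CT\ne) MRI"
    )

    response = client.chat.completions.create(
        model="gpt-4",  # assumed identifier for the "ChatGPT-4 model" in the study
        messages=[
            {"role": "system", "content": "Answer with a single option letter only."},
            {"role": "user", "content": question},
        ],
    )

    print(response.choices[0].message.content)  # e.g., "a"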
Results:
In an examination of 137 GM-ITE questions in Japanese, ChatGPT-4 scored significantly higher than the residents' mean (residents: 55.8%; ChatGPT-4: 70.1%; p<.001). By category, ChatGPT-4 scored significantly higher in "individual diseases" (by 23.5 points), "obstetrics and gynecology" (by 30.9 points), and "internal medicine" (by 26.1 points). In contrast, its scores in "medical interviewing and professionalism," "general practice," and "psychiatry" were lower than those of the residents, although these differences were not statistically significant. When scores were analyzed by question difficulty, ChatGPT-4 scored 17.2 points lower on easy questions (p=.007) but 25.4 and 24.4 points higher on normal and difficult questions, respectively (p<.001). In year-to-year comparisons, ChatGPT-4 scored 21.7 and 21.5 points higher on the 2020 and 2022 examinations, respectively (p<.05), but only 3.5 points higher on the 2021 examination (not significant).
Conclusions:
Even in Japanese, ChatGPT-4 outperformed the average medical resident on the GM-ITE, an examination designed for residents. In particular, ChatGPT-4 tended to score higher on difficult questions with low resident correct-response rates and on questions demanding a broader understanding of diseases. Conversely, it scored lower on questions that residents answered readily, such as those testing attitudes toward patients and professionalism, and those requiring an understanding of context and communication. These findings highlight both the strengths and the limitations of applying AI in medical education and practice.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.