Accepted for/Published in: JMIR Medical Education
Date Submitted: Apr 26, 2023
Open Peer Review Period: Apr 26, 2023 - Jun 21, 2023
Date Accepted: Dec 11, 2023
Performance of ChatGPT on Clinical Medicine Entrance Examination for Chinese Postgraduate in Chinese
ABSTRACT
Background:
ChatGPT, an artificial intelligence (AI) system based on large-scale language models, has sparked interest in the field of healthcare. Nonetheless, because AI's capabilities in text comprehension and generation are constrained by the quality and volume of training data available for a given language, its performance across different languages requires further investigation. While AI harbors substantial potential in medicine, it is imperative to tackle challenges such as the formulation of clinical care standards, the facilitation of cultural transitions in medical education and practice, and the management of ethical issues including data privacy, consent, and bias.
Objective:
We aimed to evaluate ChatGPT's performance in processing Chinese Clinical Medicine Entrance Examination questions, assess its clinical reasoning ability, investigate potential limitations with the Chinese language, and explore its potential as a valuable tool for medical professionals in the Chinese context.
Methods:
We used a dataset from the Clinical Medicine Entrance Examination for Chinese Postgraduates to assess ChatGPT-3.5's medical knowledge in the Chinese language. The dataset comprised 165 medical questions divided into three categories: 1) Common Questions (n=90), assessing basic medical knowledge; 2) Case Analysis Questions (n=45), focusing on clinical decision-making through patient case evaluations; and 3) Multi-Choice Questions (n=30), requiring the selection of multiple correct answers. First, we assessed whether ChatGPT could meet the stringent cutoff score defined by the government agency, which requires performance within the top 20% of candidates. We then evaluated ChatGPT's performance on both original and encoded medical questions using three primary indicators: accuracy, concordance (whether the explanation validates the given answer), and the frequency of insights.
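The three indicators can be illustrated with a minimal scoring sketch. This is not the authors' actual grading pipeline; the data structure, field names, and toy data below are hypothetical, chosen only to show how accuracy, overall concordance, and mean insights per correct response would be tallied from manually graded responses.

```python
# Hypothetical sketch of the three evaluation indicators; the GradedResponse
# fields and the sample data are illustrative assumptions, not the study's code.
from dataclasses import dataclass

@dataclass
class GradedResponse:
    category: str      # "common", "case_analysis", or "multi_choice"
    correct: bool      # answer matched the official key
    concordant: bool   # explanation agreed with the stated answer
    n_insights: int    # distinct valid insights beyond the bare answer

def summarize(responses):
    """Return (accuracy, concordance, mean insights per correct response)."""
    n = len(responses)
    n_correct = sum(r.correct for r in responses)
    accuracy = n_correct / n
    concordance = sum(r.concordant for r in responses) / n
    insights_per_correct = (
        sum(r.n_insights for r in responses if r.correct) / max(1, n_correct)
    )
    return accuracy, concordance, insights_per_correct

# Toy example with three graded responses:
sample = [
    GradedResponse("common", True, True, 3),
    GradedResponse("case_analysis", False, False, 0),
    GradedResponse("multi_choice", True, True, 2),
]
acc, con, ins = summarize(sample)
```

In the study, these tallies would be computed per category (Common, Case Analysis, Multi-Choice) as well as over all 165 questions.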
Results:
Our evaluation revealed that ChatGPT scored 153.5/300 on the original questions in Chinese, reaching the minimum passing score, which is set so that the number of passing candidates exceeds the enrollment quota by at least 20%. However, ChatGPT showed low accuracy in answering open-ended medical questions, with an overall accuracy of only 31.5%. Accuracy for Common Questions, Multi-Choice Questions, and Case Analysis Questions was 42%, 37%, and 17%, respectively. ChatGPT achieved 90% concordance across all questions. Among correct responses, concordance was 100%, significantly exceeding that of incorrect responses (50%) (p<0.001). ChatGPT provided innovative insights for 80% of all questions, with an average of 2.95 insights per accurate response.
Conclusions:
Although ChatGPT surpassed the passing threshold for the Clinical Medicine Entrance Examination for Chinese Postgraduates, its performance in answering open-ended medical questions was suboptimal. Nonetheless, ChatGPT exhibited high internal concordance and the ability to generate multiple insights in the Chinese language. Future research should investigate the language-based discrepancies in ChatGPT's performance within the healthcare context.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft other than for review purposes.