Accepted for/Published in: JMIR Medical Education
Date Submitted: Apr 26, 2023
Open Peer Review Period: Apr 26, 2023 - Jun 21, 2023
Date Accepted: Dec 11, 2023
Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.
Performance of ChatGPT on Clinical Medicine Entrance Examination for Chinese Postgraduate in Chinese
ABSTRACT
Background:
ChatGPT, an artificial intelligence (AI) system based on large-scale language models, has fueled interest in medical care. However, the ability of AI to understand and generate text in a given language is constrained by the quality and quantity of training data available for that language.
Objective:
This study aims to provide qualitative feedback on ChatGPT's problem-solving capabilities in medical education and clinical decision-making in Chinese.
Methods:
A dataset from the Clinical Medicine Entrance Examination for Chinese Postgraduates was used to assess the effectiveness of ChatGPT-3.5 on medical knowledge in the Chinese language. The indicators of accuracy, concordance (whether the explanation affirms the answer), and frequency of insights were used to assess ChatGPT's performance on original and encoded medical questions.
Results:
In our evaluation, ChatGPT received a score of 153.5/300 on the original questions in Chinese, slightly above the passing threshold of 129/300. However, ChatGPT showed low accuracy in answering open-ended medical questions, with a total accuracy of 31.5%. Nevertheless, ChatGPT demonstrated a commendable level of concordance (90% across all questions) and generated innovative insights for most problems (at least one significant insight for 80% of all questions).
Conclusions:
ChatGPT's performance was suboptimal for medical education and clinical decision-making in Chinese compared with English. However, it demonstrated high internal concordance and generated multiple insights in Chinese. Further research should investigate language-based differences in ChatGPT's healthcare performance.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer-review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.