Accepted for/Published in: JMIR Formative Research
Date Submitted: Apr 7, 2025
Open Peer Review Period: Apr 7, 2025 - Jun 2, 2025
Date Accepted: Nov 24, 2025
Comparing ChatGPT and DeepSeek in Evaluating Multiple-choice Questions for Orthopedic Medical Education: A Cross-sectional Study
ABSTRACT
Background:
With the advent of artificial intelligence (AI), large language models (LLMs) such as ChatGPT and DeepSeek have emerged as potential tools for evaluating multiple-choice questions (MCQs) accurately and efficiently.
Objective:
This study compared the performance of ChatGPT and DeepSeek in terms of correctness, response time, and reliability when answering MCQs from an orthopedic examination for medical students.
Methods:
This cross-sectional study included 209 orthopedic MCQs. ChatGPT (including its "Reason" function) and DeepSeek (including its "DeepThink" function) were used to identify the correct answers. Correctness and response times were recorded and compared using the chi-square test and the Mann-Whitney U test, as appropriate. The reliability of the two AI models was assessed using Cohen's kappa coefficient. MCQs that all methods answered incorrectly were withheld from the next semester's examination and reviewed by the orthopedic faculty.
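As a sketch of the reliability analysis described above, Cohen's kappa compares observed agreement between two sets of ratings with the agreement expected by chance. The toy data below are illustrative only (not the study's 209-item dataset); here 1/0 encode correct/incorrect answers from two repeated runs of the same model.

```python
def cohen_kappa(r1, r2):
    """Cohen's kappa for two equal-length rating sequences."""
    assert len(r1) == len(r2) and len(r1) > 0
    n = len(r1)
    labels = sorted(set(r1) | set(r2))
    # observed proportion of agreement
    p_o = sum(a == b for a, b in zip(r1, r2)) / n
    # chance agreement from each rater's marginal label frequencies
    p_e = sum((r1.count(l) / n) * (r2.count(l) / n) for l in labels)
    return (p_o - p_e) / (1 - p_e)

# Illustrative toy data (1 = correct, 0 = incorrect) for two runs
run1 = [1, 1, 1, 0, 1, 0, 1, 1, 0, 1]
run2 = [1, 1, 0, 0, 1, 0, 1, 1, 1, 1]
print(round(cohen_kappa(run1, run2), 2))  # prints 0.52
```

By the usual Landis-Koch benchmarks, values of 0.61-0.80 indicate substantial agreement and 0.81-1.00 almost perfect agreement, which is the scale the Results section applies to the two models' kappa values.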
Results:
ChatGPT achieved a correctness rate of 80.38%, while DeepSeek achieved 74.16% (p < 0.01). ChatGPT's "Reason" function also outperformed DeepSeek's "DeepThink" function (84.69% vs. 80.38%; p < 0.01). The average response time for ChatGPT was 10.40 ± 13.29 seconds, significantly shorter than DeepSeek's 34.42 ± 25.48 seconds (p < 0.01). All methods answered incorrectly in 7.66% of cases. Regarding reliability, ChatGPT demonstrated almost perfect agreement (kappa = 0.81), whereas DeepSeek showed substantial agreement (kappa = 0.78).
Conclusions:
ChatGPT outperformed DeepSeek in correctness and response time, demonstrating its efficiency in evaluating orthopedic MCQs. Its high reliability suggests its potential for integration into medical assessments. However, some MCQs required revision to improve their clarity. Further studies are needed to evaluate AI's role in other disciplines. Clinical Trial: Not applicable
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft other than for review purposes.