
Accepted for/Published in: JMIR Formative Research

Date Submitted: Apr 7, 2025
Open Peer Review Period: Apr 7, 2025 - Jun 2, 2025
Date Accepted: Nov 24, 2025

The final, peer-reviewed published version of this preprint can be found here:

Comparing ChatGPT and DeepSeek for Assessment of Multiple-Choice Questions in Orthopedic Medical Education: Cross-Sectional Study


Comparing ChatGPT and DeepSeek in Evaluating Multiple-choice Questions for Orthopedic Medical Education: A Cross-sectional Study

  • Chirathit Anusitviwat
  • Sitthiphong Suwannaphisit
  • Jongdee Bvonpanttarananon
  • Boonsin Tangtrakulwanich

ABSTRACT

Background:

With the advent of artificial intelligence (AI), large language models (LLMs), such as ChatGPT and DeepSeek, have emerged as potential tools for evaluating multiple-choice questions (MCQs), but their accuracy and efficiency in this role have not been fully established.

Objective:

This study compared the performance of ChatGPT and DeepSeek in terms of correctness, response time, and reliability when answering MCQs from an orthopedic examination for medical students.

Methods:

This cross-sectional study included 209 orthopedic MCQs. ChatGPT (including its "Reason" function) and DeepSeek (including its "DeepThink" function) were used to identify the correct answers. Correctness and response times were recorded and compared using the chi-square test and the Mann-Whitney U test, as appropriate. The reliability of the two AI models was assessed using Cohen’s kappa coefficient. MCQs for which all methods gave incorrect answers were withheld from the next semester’s examination and reviewed by the orthopedic faculty.
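As a rough illustration of the analysis described above (not the authors’ code), the following Python sketch runs the same three tests on hypothetical per-question data; the simulated data, variable names, and the pairing used for the kappa calculation (two repeated runs of one model) are assumptions for illustration only.

import numpy as np
from scipy.stats import chi2_contingency, mannwhitneyu
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)   # illustrative simulated data only
n_items = 209                    # number of MCQs in the study

# Hypothetical per-item outcomes: 1 = correct answer, 0 = incorrect.
chatgpt_correct = rng.binomial(1, 0.80, n_items)
deepseek_correct = rng.binomial(1, 0.74, n_items)

# Chi-square test on the 2x2 table of correct/incorrect counts per model.
table = [[chatgpt_correct.sum(), n_items - chatgpt_correct.sum()],
         [deepseek_correct.sum(), n_items - deepseek_correct.sum()]]
chi2, p_correct, _, _ = chi2_contingency(table)

# Mann-Whitney U test on per-item response times (seconds, simulated).
chatgpt_time = rng.gamma(shape=1.5, scale=7.0, size=n_items)
deepseek_time = rng.gamma(shape=1.5, scale=23.0, size=n_items)
u_stat, p_time = mannwhitneyu(chatgpt_time, deepseek_time)

# Cohen's kappa between two repeated runs of one model (reliability);
# here the second run flips ~10% of answers to mimic repeated prompting.
flip = rng.random(n_items) < 0.10
chatgpt_run2 = np.where(flip, 1 - chatgpt_correct, chatgpt_correct)
kappa = cohen_kappa_score(chatgpt_correct, chatgpt_run2)

print(f"correctness: chi2 = {chi2:.2f}, p = {p_correct:.3f}")
print(f"response time: U = {u_stat:.0f}, p = {p_time:.3f}")
print(f"kappa (illustrative test-retest): {kappa:.2f}")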

Results:

ChatGPT achieved a correctness rate of 80.38%, while DeepSeek achieved 74.16% (p < 0.01). ChatGPT’s "Reason" function also outperformed DeepSeek’s "DeepThink" function (84.69% vs. 80.38%; p < 0.01). The average response time for ChatGPT was 10.40 ± 13.29 seconds, significantly shorter than DeepSeek’s 34.42 ± 25.48 seconds (p < 0.01). All methods returned an incorrect answer for 7.66% of the MCQs. Regarding reliability, ChatGPT demonstrated almost perfect agreement (kappa = 0.81), whereas DeepSeek showed substantial agreement (kappa = 0.78).
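For orientation only, and assuming each reported rate is a simple fraction of the 209 items, the percentages line up with whole-item counts; a one-line check (not data taken from the paper):

# Illustrative arithmetic check (assumption: rates are fractions of 209 items).
n = 209
for k in (168, 155, 177, 16):
    print(f"{k}/{n} = {k / n:.2%}")
# 168/209 = 80.38%, 155/209 = 74.16%, 177/209 = 84.69%, 16/209 = 7.66%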

Conclusions:

ChatGPT outperformed DeepSeek in correctness and response time, demonstrating its efficiency in evaluating orthopedic MCQs. Its high reliability suggests potential for integration into medical assessments. However, some MCQs required revision to improve their clarity. Further studies are needed to evaluate AI’s role in other disciplines.

Clinical Trial: Not applicable


 Citation

Please cite as:

Anusitviwat C, Suwannaphisit S, Bvonpanttarananon J, Tangtrakulwanich B

Comparing ChatGPT and DeepSeek for Assessment of Multiple-Choice Questions in Orthopedic Medical Education: Cross-Sectional Study

JMIR Form Res 2025;9:e75607

DOI: 10.2196/75607

PMID: 41418321

PMCID: 12716854


© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.