
Accepted for/Published in: JMIR Medical Education

Date Submitted: Mar 5, 2025
Open Peer Review Period: Mar 5, 2025 - Apr 30, 2025
Date Accepted: Oct 12, 2025

The final, peer-reviewed published version of this preprint can be found here:

Evaluating the Performance of DeepSeek-R1 and DeepSeek-V3 Versus OpenAI Models in the Chinese National Medical Licensing Examination: Cross-Sectional Comparative Study

Wang W, Zhou Y, Fu J, Hu K

JMIR Med Educ 2025;11:e73469

DOI: 10.2196/73469

PMID: 41237388

PMCID: 12663704

DeepSeek-R1 and DeepSeek-V3 Outperform OpenAI Models in the Chinese Medical Licensing Examination: A Cross-Sectional Comparative Study

  • Weiping Wang; 
  • Yuchen Zhou; 
  • Jingxuan Fu; 
  • Ke Hu

ABSTRACT

Background:

DeepSeek-R1, an open-source large language model (LLM), has generated significant global interest in recent months.

Objective:

To compare the performance of DeepSeek and OpenAI LLMs on the Chinese Medical Licensing Examination (CMLE) and to evaluate their potential in medical education.

Methods:

This cross-sectional study assessed two DeepSeek models (DeepSeek-R1 and DeepSeek-V3), three OpenAI models (ChatGPT-o1 pro, ChatGPT-o3 mini, and GPT-4o), and two additional Chinese LLMs (ERNIE 4.5 Turbo and Qwen 3) using the 2021 CMLE. Model performance was evaluated based on overall accuracy, accuracy across question types (A1, A2, A3/A4, and B1), case versus non-case analysis, medical specialties, and accuracy consensus between different model combinations.
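As a minimal illustration of the accuracy tabulation described above, the sketch below tallies overall and per-question-type accuracy. The records are hypothetical placeholders, not the study's actual data:

```python
from collections import defaultdict

# Hypothetical per-question records: (question_type, answered_correctly).
# Values are illustrative only; the study used 600 CMLE questions.
results = [
    ("A1", True), ("A1", False), ("A2", True),
    ("A3/A4", True), ("B1", True), ("B1", False),
]

def accuracy_by_type(records):
    """Return (overall accuracy, per-question-type accuracy) for a record list."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for qtype, ok in records:
        total[qtype] += 1
        correct[qtype] += ok  # bool counts as 0/1
    by_type = {t: correct[t] / total[t] for t in total}
    overall = sum(correct.values()) / sum(total.values())
    return overall, by_type

overall, by_type = accuracy_by_type(results)
```

The same tally, grouped by specialty or by case/non-case status instead of question type, would cover the other stratifications the study reports.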

Results:

All LLMs successfully passed the CMLE. DeepSeek-R1 achieved the highest accuracy (96.0%, 573/597), followed by DeepSeek-V3 (93.0%, 558/600), both of which significantly outperformed ChatGPT-o1 pro (75.0%, 450/600), ChatGPT-o3 mini (75.8%, 455/600), and GPT-4o (75.3%, 452/600) (all comparisons: P<.001). Performance disparities were consistent across question types (A1, A2, A3/A4, and B1), case analysis, non-case analysis, different types of case analysis, and medical specialties. The accuracy consensus between DeepSeek-R1 and DeepSeek-V3 reached 97.7% (544/557), significantly outperforming DeepSeek-R1 alone (P=.038). The two additional Chinese LLMs, ERNIE 4.5 Turbo (95.3%, 572/600) and Qwen 3 (92.5%, 555/600), also performed significantly better than the three OpenAI models.
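The abstract does not state which statistical test produced the reported P values; a pooled two-proportion z-test is one common choice for comparing accuracies like these, sketched here with the DeepSeek-R1 and GPT-4o counts reported above:

```python
import math

def two_proportion_z(c1, n1, c2, n2):
    """Two-sided pooled two-proportion z-test; returns (z statistic, p value)."""
    p1, p2 = c1 / n1, c2 / n2
    pooled = (c1 + c2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail
    return z, p_value

# Counts from the abstract: DeepSeek-R1 (573/597) vs GPT-4o (452/600).
z, p = two_proportion_z(573, 597, 452, 600)
```

With these counts the resulting p value is far below .001, consistent with the abstract's reported significance level, though the study's own method may differ.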

Conclusions:

This study demonstrates that DeepSeek-R1 and DeepSeek-V3 significantly outperform OpenAI models on the CMLE. DeepSeek models show promise as tools for medical education and exam preparation in Chinese-language contexts.




© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.