Accepted for/Published in: JMIR AI
Date Submitted: Apr 15, 2025
Open Peer Review Period: Apr 28, 2025 - Jun 23, 2025
Date Accepted: Jul 30, 2025
Warning: This is an author submission that has not been peer-reviewed or edited. Preprints, unless they show as "accepted," should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.
Exceptional Performance of DeepSeek on Pediatric Board Examination Preparation Questions
ABSTRACT
Background:
The integration of artificial intelligence into medical education raises questions about the capabilities of large language models (LLMs) in specialized medical knowledge domains. Limited research exists evaluating AI performance on standardized pediatric assessments.
Objective:
To evaluate and compare the performance of three leading LLMs on pediatric board examination preparation questions and contextualize their performance against human physician benchmarks.
Methods:
We conducted a comparative analysis of DeepSeek 7B v2.5, ChatGPT-4, and ChatGPT-4.5 using 266 multiple-choice questions from the 2023 PREP® Self-Assessment (American Academy of Pediatrics). Each model was presented with identical questions covering the full spectrum of pediatric knowledge domains. Performance was measured by calculating the percentage of correct responses and compared to published first-time pass rates for the American Board of Pediatrics (ABP) examination.
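For illustration only, a minimal sketch of the scoring approach described above: grading each model's multiple-choice answers against an answer key and reporting percent correct. All model names map to the study, but the question IDs, answers, and key below are hypothetical placeholders, not material from the study.

    # Illustrative sketch: score each model's answers against an answer key.
    # All data here is hypothetical; the study's PREP(R) items are proprietary.
    answer_key = {1: "B", 2: "D", 3: "A"}  # question_id -> correct choice

    model_answers = {
        "DeepSeek": {1: "B", 2: "D", 3: "A"},
        "ChatGPT-4.5": {1: "B", 2: "D", 3: "C"},
        "ChatGPT-4": {1: "B", 2: "A", 3: "C"},
    }

    for model, answers in model_answers.items():
        correct = sum(answers[q] == key for q, key in answer_key.items())
        print(f"{model}: {correct}/{len(answer_key)} = {100 * correct / len(answer_key):.1f}%")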
Results:
DeepSeek exhibited the highest accuracy at 98.12% (261/266 correct responses), exceeding typical human performance metrics. ChatGPT-4.5 achieved 96.6% accuracy (257/266), performing at the upper threshold of human performance. ChatGPT-4 demonstrated 82.7% accuracy (220/266), comparable to the lower range of human pass rates. Error pattern analysis revealed that AI models most commonly struggled with questions requiring integration of complex clinical presentations with rare disease knowledge.
Conclusions:
Recent advancements in large language models have produced AI systems capable of performing at or above the level of board-certified pediatricians on standardized examination questions. These findings suggest potential applications in medical education, board examination preparation, and possibly clinical decision support. Further research should evaluate these AI systems on more complex clinical reasoning tasks and in simulated clinical scenarios.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.