Accepted for/Published in: JMIR Nursing
Date Submitted: Aug 22, 2025
Open Peer Review Period: Aug 22, 2025 - Oct 17, 2025
Date Accepted: Feb 3, 2026
Performance of Large Language Models in the Japanese Public Health Nurse National Examination: A Comparative Study
ABSTRACT
Background:
Large language models (LLMs) have shown promising results on Japanese national medical and nursing examinations. However, no study has evaluated LLM performance on the Japanese Public Health Nurse National Examination, which requires specialized knowledge in community health and public health nursing practice.
Objective:
This study compared the performance of multiple LLMs on this specialized examination.
Methods:
Three LLMs were evaluated: GPT-4o, Claude 4 Opus, and Gemini 2.5 Pro. All 110 questions from the 111th Public Health Nurse National Examination were administered using standardized prompts. Questions were classified by format (text vs. figure/calculation), content (general vs. situational), and answer-selection type (single answer vs. multiple answers). Accuracy rates and 95% CIs were calculated, and statistical comparisons were performed using chi-square tests.
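The overall comparison described above can be sketched in a few lines with scipy. This is a minimal illustration, not the authors' analysis code: the correct-answer counts (94, 101, and 102 of 110) are back-calculated from the reported percentages, and Clopper-Pearson exact intervals are assumed for the 95% CIs.

```python
from scipy.stats import binomtest, chi2_contingency

# Correct-answer counts out of 110 questions, back-calculated from the
# reported accuracies (94/110 = 85.5%, 101/110 = 91.8%, 102/110 = 92.7%).
scores = {"GPT-4o": 94, "Claude 4 Opus": 101, "Gemini 2.5 Pro": 102}
n = 110

# Per-model accuracy with a Clopper-Pearson exact 95% CI (an assumption;
# the abstract does not state which CI method was used).
cis = {}
for model, k in scores.items():
    ci = binomtest(k, n).proportion_ci(confidence_level=0.95)
    cis[model] = (k / n, ci.low, ci.high)
    print(f"{model}: {k/n:.1%} (95% CI {ci.low:.1%}-{ci.high:.1%})")

# Chi-square test of independence on the 3x2 table of correct/incorrect
# counts, comparing overall accuracy across the three models.
table = [[k, n - k] for k in scores.values()]
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, P = {p:.3f}")
```

With these counts the between-model test is non-significant (P > .05), consistent with the Results below.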
Results:
All three LLMs exceeded the passing criterion (60.0%). Accuracy rates were as follows: GPT-4o, 85.5% (95% CI 77.5%–91.5%); Claude 4 Opus, 91.8% (95% CI 85.0%–96.2%); and Gemini 2.5 Pro, 92.7% (95% CI 86.2%–96.8%). Overall accuracy did not differ significantly among the models. However, all models were less accurate on multiple-answer questions than on single-answer questions, with significant within-model differences for GPT-4o (62.5% vs. 89.1%, P=.014) and Claude 4 Opus (75.0% vs. 94.6%, P=.026).
Conclusions:
LLMs demonstrated high performance on the public health nurse examination but showed limitations on questions requiring the selection of multiple correct answers. These findings suggest that LLMs have potential as educational support tools, while highlighting the need for cautious implementation in specialized nursing education.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.