Accepted for/Published in: JMIR Medical Education

Date Submitted: Aug 8, 2025
Open Peer Review Period: Aug 19, 2025 - Oct 14, 2025
Date Accepted: Oct 28, 2025

The final, peer-reviewed published version of this preprint can be found here:

Ito T, Ishibashi T, Hayashi T, Kojima S, Sogabe K. Large Language Models for the National Radiological Technologist Licensure Examination in Japan: Cross-Sectional Comparative Benchmarking and Evaluation of Model-Generated Items Study. JMIR Med Educ 2025;11:e81807. DOI: 10.2196/81807

Large Language Models for the National Radiological Technologist Licensure Examination in Japan: Cross-Sectional Comparative Benchmarking and Evaluation of Model-Generated Items Study

  • Toshimune Ito; 
  • Toru Ishibashi; 
  • Tatsuya Hayashi; 
  • Shinya Kojima; 
  • Kazumi Sogabe

ABSTRACT

Background:

Mock examinations are widely used in health professional education to assess learning and to prepare candidates for national licensure. However, instructor-written multiple-choice items can vary in difficulty, coverage, and clarity. Large language models (LLMs) have recently achieved high accuracy on medical examinations, highlighting their potential to assist item-bank development; however, the educational quality of the items they generate remains insufficiently characterized.

Objective:

To (1) identify the most accurate LLM for the Japanese National Examination for Radiological Technologists and (2) use the best-performing model to generate blueprint-aligned multiple-choice questions and evaluate their educational quality.

Methods:

Four LLMs (o3, o4-mini, and o4-mini-high from OpenAI, and Gemini 2.5 Flash from Google) were evaluated on all 200 items of the 77th Japanese National Examination for Radiological Technologists, administered in 2025. Accuracy was analyzed for the full item set and for the 173 non-image items. The best-performing model (o3) then generated 192 original items across 14 subjects, matched to the official blueprint (image-based items were excluded). Subject-matter experts (each with ≥5 years of experience as examination coordinators and routine authors of mock examinations) independently rated each generated item on five criteria using a 5-point scale (1=unacceptable, 5=adoptable): item difficulty, factual accuracy, accuracy of content coverage, appropriateness of wording, and instructional usefulness. Cochran's Q test with Bonferroni-adjusted pairwise McNemar tests was used to compare model accuracies, and one-sided Wilcoxon signed-rank tests assessed whether median ratings exceeded 4.
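
To make the analysis plan concrete, here is a minimal Python sketch of the same test battery, assuming per-item correctness is coded as a 200 × 4 binary array and expert ratings as integers from 1 to 5; all data below are random placeholders rather than the study's results, and the authors' actual implementation may differ.

    import numpy as np
    from itertools import combinations
    from scipy.stats import wilcoxon
    from statsmodels.stats.contingency_tables import cochrans_q, mcnemar

    rng = np.random.default_rng(0)
    # Placeholder: per-item correctness, 200 items x 4 models (1=correct).
    correct = rng.integers(0, 2, size=(200, 4))

    # Omnibus test: do the four models differ in per-item accuracy?
    q = cochrans_q(correct)
    print(f"Cochran's Q = {q.statistic:.2f}, P = {q.pvalue:.3f}")

    # Pairwise McNemar tests with Bonferroni adjustment (6 comparisons).
    pairs = list(combinations(range(correct.shape[1]), 2))
    for i, j in pairs:
        a, b = correct[:, i], correct[:, j]
        # 2x2 agreement table between models i and j
        table = [[((a == 1) & (b == 1)).sum(), ((a == 1) & (b == 0)).sum()],
                 [((a == 0) & (b == 1)).sum(), ((a == 0) & (b == 0)).sum()]]
        p_adj = min(1.0, mcnemar(table, exact=True).pvalue * len(pairs))
        print(f"model {i} vs model {j}: Bonferroni-adjusted P = {p_adj:.3f}")

    # One-sided Wilcoxon signed-rank test: is the median expert rating
    # above the adoptability threshold of 4? (Ratings equal to 4 are
    # dropped by the default zero-difference handling.)
    ratings = rng.integers(1, 6, size=192)  # placeholder: 192 generated items
    print(f"P = {wilcoxon(ratings - 4, alternative='greater').pvalue:.3f}")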

Results:

OpenAI o3 achieved the highest accuracy overall (90.0%; 95% CI 85.1%–93.4%) and on non-image items (92.5%; 95% CI 87.6%–95.6%), significantly outperforming o4-mini on the full item set (P=.02). Across models, accuracy differences on the non-image subset were not significant (Cochran's Q, P=.101). The 192 items generated by o3 received high expert ratings for item difficulty (mean 4.29; 95% CI 4.11–4.46), factual accuracy (mean 4.18; 95% CI 3.98–4.38), and content coverage (mean 4.73; 95% CI 4.60–4.86). Ratings were comparatively lower for appropriateness of wording (mean 3.92; 95% CI 3.73–4.11) and instructional usefulness (mean 3.60; 95% CI 3.41–3.80). For these two criteria, the tests did not support a median rating >4 (one-sided Wilcoxon, P=.445 and P=.999, respectively). Representative low-rated examples (ratings 1–2) and the rationale for those scores, such as ambiguous phrasing or generic explanations without linkage to stem cues, are provided in the supplementary materials.
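
As a check, the accuracy CIs reported above match Wilson score intervals computed from the implied counts (180/200 overall and 160/173 non-image). Whether the authors used the Wilson method is an assumption here, since the abstract does not state it; a short sketch:

    from statsmodels.stats.proportion import proportion_confint

    # Implied counts: 90.0% of 200 = 180; 92.5% of 173 ≈ 160 (assumption).
    for n_correct, n_total in [(180, 200), (160, 173)]:
        lo, hi = proportion_confint(n_correct, n_total, alpha=0.05,
                                    method="wilson")
        print(f"{n_correct}/{n_total}: {n_correct / n_total:.1%} "
              f"(95% CI {lo:.1%} to {hi:.1%})")

Running this reproduces both reported intervals (85.1%–93.4% and 87.6%–95.6%).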

Conclusions:

OpenAI o3 can generate radiological licensure items that align with national standards in difficulty, factual correctness, and blueprint coverage. However, wording clarity and the pedagogical specificity of explanations were weaker and did not reach the adoptability threshold without further editorial refinement. These findings support a practical workflow in which LLMs draft syllabus-aligned items at scale while faculty perform targeted edits to ensure clarity and strengthen formative feedback. Future studies should evaluate image-inclusive item generation, use API-pinned model snapshots to improve reproducibility, and develop guidance for improving explanation quality to support learner remediation.


Citation

Please cite as:

Ito T, Ishibashi T, Hayashi T, Kojima S, Sogabe K

Large Language Models for the National Radiological Technologist Licensure Examination in Japan: Cross-Sectional Comparative Benchmarking and Evaluation of Model-Generated Items Study

JMIR Med Educ 2025;11:e81807

DOI: 10.2196/81807

PMID: 41232030

PMCID: 12614397


© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.