Accepted for/Published in: JMIR Medical Education
Date Submitted: Aug 8, 2025
Open Peer Review Period: Aug 19, 2025 - Oct 14, 2025
Date Accepted: Oct 28, 2025
Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.
Large Language Model-Based Generation and Assessment of Radiological Technologist Licensure Exam Items in Japan
ABSTRACT
Background:
For health professionals, mock examinations are an essential tool for assessing learning outcomes and reinforcing preparation for national licensure examinations. However, manually authored items are often inconsistent in difficulty, coverage, and clarity. Meanwhile, large language models (LLMs) have demonstrated high accuracy on medical examinations and could support the development of item banks, but the educational quality of artificial intelligence (AI)-generated questions remains underexplored.
Objective:
To identify the most accurate LLM for the Japanese National Examination for Radiological Technologists and, using that model, to generate and evaluate blueprint‑aligned multiple‑choice questions.
Methods:
Four LLMs (o3, o4-mini, and o4-mini-high from OpenAI, and Gemini 2.5 Flash from Google) were evaluated for accuracy across all 200 items of the 77th Japanese National Examination for Radiological Technologists. The model with the highest accuracy, OpenAI o3, was then used to generate 192 multiple-choice items adhering to the official examination blueprint. Expert reviewers rated these AI-generated items on five educational criteria: item difficulty, factual accuracy, content coverage, appropriateness of wording, and instructional usefulness. Statistical analyses were applied to compare model performance and item quality.
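The abstract does not name the statistical test used for the pairwise model comparison. Since the models answered the same 200 items, one standard choice for paired binary outcomes is McNemar's exact test; the following is a minimal Python sketch under that assumption, using hypothetical per-item correctness vectors rather than the study's data.

```python
# Sketch only: paired comparison of two models' accuracy on the same 200
# exam items via McNemar's exact test (assumed; the abstract does not
# specify the test). Correctness vectors here are simulated placeholders.
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

rng = np.random.default_rng(0)
o3_correct = rng.integers(0, 2, 200)       # hypothetical: 1 = correct, 0 = incorrect
o4mini_correct = rng.integers(0, 2, 200)   # hypothetical per-item results

# Build the 2x2 agreement table; McNemar's test uses the discordant cells
# (items one model got right and the other got wrong).
table = np.zeros((2, 2), dtype=int)
for a, b in zip(o3_correct, o4mini_correct):
    table[a, b] += 1

result = mcnemar(table, exact=True)  # exact binomial test on discordant pairs
print(f"McNemar P = {result.pvalue:.4f}")
```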
Results:
OpenAI o3 achieved the highest overall accuracy (90.0%) and the highest accuracy on non-image items (92.5%), significantly outperforming o4-mini on the full item set (P = 0.0234). Based on the expert reviewers' scores, the AI-generated items performed strongly on item difficulty (4.29), factual accuracy (4.18), and content coverage (4.73), whereas scores were significantly lower for appropriateness of wording (3.92) and instructional usefulness (3.60) (P < 0.05).
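The test behind the P < 0.05 criterion comparisons is likewise unstated. Because each item received ratings on all five criteria, one plausible approach is a paired nonparametric test between criteria; the sketch below illustrates this with a Wilcoxon signed-rank test on simulated ratings (all values and variable names are hypothetical, not the study's data).

```python
# Sketch only: comparing expert ratings between two criteria with a
# Wilcoxon signed-rank test (assumed method; the abstract does not name
# the test). Ratings are simulated 1-5 placeholders for 192 items.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(1)
coverage = rng.integers(3, 6, 192)     # hypothetical high scores (mean near 4.7)
usefulness = rng.integers(2, 5, 192)   # hypothetical lower scores (mean near 3.6)

# Paired test on per-item rating differences; zero differences are
# dropped under the default zero_method.
stat, p = wilcoxon(coverage, usefulness)
print(f"Wilcoxon signed-rank: statistic={stat}, P={p:.4f}")
```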
Conclusions:
OpenAI o3 can generate radiological licensure questions that align with national standards in difficulty and content accuracy. Limitations remain in wording clarity and pedagogical feedback, although these can be addressed through editorial review. This approach can facilitate efficient collaboration between AI and faculty in developing mock examinations and holds promise for supporting scalable, syllabus-aligned assessment in health professional education.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.