Currently submitted to: JMIR Medical Education
Date Submitted: May 9, 2026
Open Peer Review Period: May 11, 2026 - Jul 6, 2026
(currently open for review)
Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.
Difficult but Not Yet Discriminating: A Blinded Psychometric Comparison of Human-Edited GenAI-Assisted and Educator-Crafted MCQs in Postgraduate Family Medicine
ABSTRACT
Background:
Generative artificial intelligence (GenAI) is increasingly used to draft multiple-choice questions (MCQs) for health professions education, but most evidence has focused on raw AI outputs, expert ratings, or whether AI-generated items are too easy. In practice, educators are more likely to use human-edited GenAI-assisted items. The unresolved question is whether such items are psychometrically ready for assessment use after routine review for clinical accuracy, phrasing, and relevance.
Objective:
This study compared human-edited GenAI-assisted and educator-crafted MCQs for postgraduate Family Medicine Applied Knowledge Test (AKT)-level assessment, focusing on difficulty, discrimination, reliability, distractor functioning, and participant post-item ratings.
Methods:
We conducted a blinded comparative psychometric evaluation of 60 best-of-five single-best-answer MCQs: 30 human-edited GenAI-assisted items and 30 educator-crafted items. Items were topic-matched in pairs and randomised across two assessment sets. Participants were postgraduate doctors preparing for the Family Medicine AKT in Singapore and were blinded to item origin. Outcomes included paired total scores, score correlation and agreement, Kuder-Richardson Formula 20 (KR-20) reliability, item difficulty index, corrected point-biserial discrimination, non-functioning distractors, negatively discriminating distractors, and participant ratings of perceived difficulty, clarity, and relevance.
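The psychometric indices named above (difficulty index, corrected point-biserial discrimination, KR-20 reliability) are standard closed-form statistics. A minimal sketch of how they are computed, assuming dichotomously scored 0/1 item responses; the function and variable names here are illustrative and not taken from the study:

```python
import numpy as np

def difficulty_index(responses: np.ndarray) -> np.ndarray:
    """Proportion of participants answering each item correctly.
    responses: participants x items matrix of 0/1 scores."""
    return responses.mean(axis=0)

def kr20(responses: np.ndarray) -> float:
    """Kuder-Richardson Formula 20 reliability for dichotomous items."""
    k = responses.shape[1]
    p = responses.mean(axis=0)                      # per-item proportion correct
    q = 1.0 - p
    total_var = responses.sum(axis=1).var(ddof=0)   # variance of total scores
    return (k / (k - 1)) * (1.0 - (p * q).sum() / total_var)

def corrected_point_biserial(responses: np.ndarray, item: int) -> float:
    """Correlation between one item's scores and the total score
    with that item removed (the 'corrected' discrimination index)."""
    item_scores = responses[:, item]
    rest_total = responses.sum(axis=1) - item_scores
    return float(np.corrcoef(item_scores, rest_total)[0, 1])
```

A negatively discriminating item, as reported in the Results, is simply one whose corrected point-biserial value falls below zero: higher-scoring participants were more likely to get it wrong.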
Results:
Seventy-three participants completed both item sets. Participants scored significantly lower on GenAI-assisted items than educator-crafted items (mean 19.12, SD 2.83 vs mean 21.10, SD 3.42 out of 30; mean difference −1.97, 95% CI −2.72 to −1.23; p<.001; Cohen d=0.62), indicating that GenAI-assisted items were not empirically easier. GenAI-assisted and educator-crafted scores were moderately correlated (r=0.49, p<.001), but agreement was limited. KR-20 reliability was lower for GenAI-assisted than educator-crafted items (0.38 vs 0.60). Mean difficulty index did not differ significantly by item origin (0.64 vs 0.70; p=.290), and more GenAI-assisted items fell within the acceptable difficulty range (18/30, 60.0% vs 13/30, 43.3%). However, GenAI-assisted items showed a weaker discrimination pattern: mean corrected point-biserial correlation was lower (0.09 vs 0.18; p=.036), fewer items achieved good discrimination (2/30, 6.7% vs 4/30, 13.3%), and more had negative discrimination (6/30, 20.0% vs 2/30, 6.7%). Distractor analysis showed more GenAI-assisted items with three or four non-functioning distractors (17/30, 56.7% vs 13/30, 43.3%) and at least one negatively discriminating distractor (14/30, 46.7% vs 9/30, 30.0%), although these differences were not statistically significant. Participant ratings of perceived difficulty, clarity, and relevance did not differ significantly by item origin.
Conclusions:
Human-edited GenAI-assisted MCQs can achieve plausible postgraduate difficulty, challenging the assumption that such items are necessarily easy. However, difficulty and surface acceptability did not ensure assessment readiness. GenAI-assisted items showed weaker reliability, discrimination, and distractor functioning despite similar participant ratings. GenAI is best positioned as a rapid drafting adjunct within a human-in-the-loop workflow that prioritises key verification, distractor engineering, empirical item analysis, and conservative repair or removal of poorly functioning items before incorporation into question banks or use in assessments.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.