Currently submitted to: JMIR Formative Research
Date Submitted: May 5, 2026
Open Peer Review Period: May 7, 2026 - Jul 2, 2026
(currently open for review)
Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.
Utilization of Artificial Intelligence in Delivering Objective Structured Clinical Examinations for Postgraduate Urology Training
ABSTRACT
Background:
Objective Structured Clinical Examinations (OSCEs) are widely used to assess clinical competence in postgraduate medical training, but their design, delivery, and evaluation require substantial faculty time, standardized patients, and institutional resources. Generative artificial intelligence (AI) may offer a scalable approach to creating, administering, and scoring OSCE stations, although its validity compared with human faculty assessment remains unclear, particularly in postgraduate surgical education.
Objective:
This study evaluated the feasibility and validity of creating a custom generative pre-trained transformer (GPT) to generate and deliver OSCE stations for postgraduate urology residents and compared AI-based scoring with human faculty grading.
Methods:
We conducted a prospective validation study. ChatGPT-4 generated and administered two OSCE stations for postgraduate year 3–5 urology residents. Stations simulated common urologic scenarios and were reviewed by faculty to ensure clinical accuracy. Resident performances were scored by ChatGPT using structured rubrics and were independently graded by three blinded faculty examiners. Agreement between AI and human grading was assessed using Pearson correlation coefficients, intraclass correlation coefficients (ICCs), and Bland–Altman analysis; a minimal computational sketch of this analysis is shown below. Scores were also compared with those from other OSCE stations to assess construct validity.
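The following Python sketch illustrates the agreement statistics named above. The paired scores are illustrative placeholders, not study data, and the ICC form shown (ICC(2,1): two-way random effects, absolute agreement, single rater) is an assumption, as the abstract does not specify which ICC model was used.

```python
# Minimal sketch of the AI-vs-human agreement analysis.
# Scores are hypothetical placeholders, not data from this study.
import numpy as np
from scipy.stats import pearsonr

# Hypothetical paired scores (percent) for n residents on one station.
human = np.array([45, 60, 30, 72, 55, 40, 65, 50, 48], dtype=float)
ai    = np.array([50, 66, 35, 78, 60, 42, 70, 55, 52], dtype=float)

# Pearson correlation between AI and human grading.
r, p = pearsonr(human, ai)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")

# Bland-Altman statistics: mean difference (bias) and 95% limits of agreement.
diff = ai - human
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)
print(f"Bias = {bias:.1f}%, limits of agreement = "
      f"[{bias - loa:.1f}%, {bias + loa:.1f}%]")

# ICC(2,1), assumed form: two-way random effects, absolute agreement,
# single rater (Shrout & Fleiss).
scores = np.stack([human, ai], axis=1)  # n subjects x k raters
n, k = scores.shape
grand = scores.mean()
ms_r = k * ((scores.mean(axis=1) - grand) ** 2).sum() / (n - 1)  # subjects
ms_c = n * ((scores.mean(axis=0) - grand) ** 2).sum() / (k - 1)  # raters
ss_e = ((scores - scores.mean(axis=1, keepdims=True)
                - scores.mean(axis=0, keepdims=True) + grand) ** 2).sum()
ms_e = ss_e / ((n - 1) * (k - 1))
icc = (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)
print(f"ICC(2,1) = {icc:.2f}")
```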
Results:
Nine residents completed both stations. Mean human-graded versus AI-graded scores were 51% ± 19% vs 65% ± 16% for Case 1 and 38% ± 11% vs 36% ± 12% for Case 2. Strong correlations were observed between AI and human graders (Case 1: r = 0.95, p < 0.001; Case 2: r = 0.83, p = 0.011), with moderate-to-high agreement (ICC = 0.70 and 0.83, respectively). Bland–Altman analysis demonstrated minimal bias. Over 80% of participants agreed that the stations reflected appropriate realism and educational relevance.
Conclusions:
AI-assisted OSCE generation and evaluation using ChatGPT is feasible and demonstrates close alignment with faculty grading in postgraduate urology training. This approach may serve as a scalable adjunct to competency-based assessment, reducing examiner burden while maintaining validity, provided appropriate human oversight is maintained.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have granted JMIR Publications an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft other than for review purposes.