Currently submitted to: JMIR Medical Education

Date Submitted: Feb 3, 2026
Open Peer Review Period: Feb 6, 2026 - Apr 3, 2026

NOTE: This is an unreviewed Preprint

Warning: This is an unreviewed preprint. Readers are cautioned that the document has not been peer-reviewed by expert/patient reviewers or an academic editor, may contain misleading claims, and is likely to undergo changes before final publication, if accepted; it may also have been rejected or withdrawn (in which case a note "no longer under consideration" will appear above).

Citation: Please cite this preprint only for review purposes or for grant applications and CVs (if you are the author).

Final version: If our system detects a final peer-reviewed "version of record" (VoR) published in any journal, a link to that VoR will appear below. Readers are then encouraged to cite the VoR instead of this preprint.

Settings: If you are the author, you can log in and change the preprint display settings; however, the preprint URL/DOI is intended to be stable and citable, so it should not be removed once posted.

Warning: This is an author submission that has not been peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.

Artificial Intelligence for Clinical Competency Assessment: A Scoping Review of Methods and Applications

  • Wenjia (Stella) Zhang; 
  • Benjamin Daniels; 
  • Carol Mita; 
  • Hoang Nguyen; 
  • David B Duong

ABSTRACT

Background:

Strengthening the global health workforce is central to achieving Universal Health Coverage, yet existing approaches to measuring clinical competency remain resource-intensive, episodic, and difficult to scale, especially in low- and middle-income contexts. Recent advances in large language models (LLMs) have enabled AI-led simulated standardized patients (SSPs) that may offer scalable alternatives to traditional assessments.

Objective:

This study aims to systematically map and characterize the existing scope, design features, and validation approaches of AI-led SSP tools used for clinical competency assessment.

Methods:

We conducted a scoping review following JBI guidelines, searching MEDLINE, Embase, CINAHL, Education Source, and Web of Science from inception through June 2025. Two reviewers independently screened studies and extracted data across five domains: study characteristics and populations; frontend platform and interface features; backend AI models and architectures; user interaction and automatic feedback mechanisms; and tool evaluation methods and outcomes.

Results:

The search identified 1,185 studies published between 2008 and 2025, of which 21 met the inclusion criteria. Most described single-site pilot evaluations or prototype systems developed within academic institutions in high-income countries, primarily targeting pre-licensure medical or nursing students. SSPs most commonly supported text-based, web-hosted history-taking, while simulations of physical examination, laboratory tests, diagnostic reasoning, and management planning were less common. Backend architectures relied heavily on human-authored case scripts and manually defined scoring criteria, with LLMs primarily enhancing conversational fluency rather than automating clinical reasoning or evaluation. Automated feedback and scoring were reported in approximately half of the studies and, where evaluated, showed moderate-to-high agreement with human raters, though validation evidence was heterogeneous and limited.

Conclusions:

AI-led SSPs are emerging as accessible and realistic tools for clinical competency assessment across all levels of medical education. However, current implementations remain early-stage, human-dependent, and narrowly validated, constraining their widespread use as standardized or scalable instruments for health system workforce evaluation. Advancing SSPs toward end-to-end automated assessment tools will require integrated system designs, rigorous validation, and intentional development for deployment across diverse and resource-constrained settings.


 Citation

Please cite as:

Zhang W, Daniels B, Mita C, Nguyen H, Duong DB

Artificial Intelligence for Clinical Competency Assessment: A Scoping Review of Methods and Applications

JMIR Preprints. 03/02/2026:92826

DOI: 10.2196/preprints.92826

URL: https://preprints.jmir.org/preprint/92826

© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.