Currently accepted at: JMIR Medical Education
Date Submitted: Oct 24, 2025
Date Accepted: Apr 5, 2026
DOI: 10.2196/85666
Exploring Ambient Artificial Intelligence to Enhance Learning and Feedback During OR-to-ICU Handoffs: A Co-Design and Simulation Study
ABSTRACT
Background:
Operating room–to–intensive care unit (OR-to-ICU) handoffs are among the most complex and high-risk communication events in perioperative care. Despite the implementation of structured checklists, trainees often receive limited feedback on their communication skills, and simulation-based education rarely provides objective data on communication performance and checklist adherence. This study explores how an ambient AI handoff assistant, used during simulation-based training of OR-to-ICU handoff discussions, can enhance clinical communication training and AI literacy by mapping spoken handoff discussions to checklist items and generating a handoff note that serves as a structured, feedback-rich learning artifact.
Objective:
To co-design and evaluate an ambient AI handoff assistant that captures spoken OR-to-ICU handoff communication, maps it to handoff checklist items, and provides immediate feedback on handoff completeness during simulated OR-to-ICU transitions in an educational setting.
Methods:
A two-phase mixed-methods study was conducted within the UCLA Department of Anesthesiology and Perioperative Care (July–October 2025). Phase 1 comprised co-design interviews with four clinician educators to identify limitations of current handoff training and to inform AI feature development. Phase 2 comprised ten 60-minute simulation sessions with pairs of medical students and first-year residents, during which we conducted an error analysis and evaluated usability, workload, and educational impact. Quantitative measures included the Physician Task Load Index (PTL), the System Usability Scale (SUS), and a post-simulation survey; qualitative data from co-design sessions and simulation debrief interviews were thematically analyzed.
Results:
Educators highlighted inconsistent checklist use and the absence of objective feedback on learners’ communication skills as key areas that could benefit from AI-supported structured documentation of handoff discussions. Error analysis of the ambient AI handoff assistant revealed a mean of 3.6 errors per note, with incorrect output being the most frequent error type. There was no statistically significant difference between the ambient AI handoff assistant and the paper checklist on PTL and SUS measures. Trainees valued real-time transcripts and structured handoff notes for reflection on their communication practices, and exposure to AI documentation errors enhanced critical thinking and awareness of the limitations of AI technology.
Conclusions:
The ambient AI handoff assistant mapped simulated handoff discussions to checklist items and generated a structured handoff note, facilitating reflection on team-based communication skills in handoff education. Imperfections in the AI’s output encouraged critical appraisal of its capabilities and prompted discussion about automation complacency, suggesting that AI-assisted simulations can foster both communication and digital literacy skills essential for future AI-enabled clinical practice.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.