Currently submitted to: JMIR Human Factors

Date Submitted: Mar 17, 2026
Open Peer Review Period: Mar 30, 2026 - May 25, 2026
(currently open for review)

Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.

An Evaluation of Clinician Trust, Perceptions, and Human Factors in AI-Enabled Clinical Decision Support for Acute Care: A Mixed-Methods Study

  • Meghana Darla; 
  • Danielle Miltz; 
  • Khushboo Chandnani; 
  • Saptarshi Purkayastha; 
  • John W. Diehl; 
  • Sivasubramanium V. Bhavani

ABSTRACT

Background:

Artificial intelligence (AI) has the potential to enhance clinical decision-making in high-acuity settings such as intensive care units (ICUs) and emergency departments (EDs). However, despite promising performance, many AI-driven clinical decision support systems (AI-CDSS) face poor adoption due to issues of trust, workflow disruption, and alert fatigue. Understanding the human factors that shape clinician acceptance is critical to guide safe and effective implementation of AI-CDSS in acute care. Theoretical frameworks including the Systems Engineering Initiative for Patient Safety (SEIPS) 2.0 model and the Technology Acceptance Model (TAM) suggest that successful adoption requires addressing sociotechnical interactions among clinician trust, system design, organizational readiness, and task complexity, yet few empirical studies have applied these frameworks to AI-CDSS in acute care settings.

Objective:

This study aimed to evaluate emergency medicine and critical care clinicians’ perceptions of AI-CDSS and to identify key factors influencing adoption, including trust, design preferences, and workflow integration.

Methods:

A SEIPS 2.0-informed mixed-methods study evaluated ICU and ED clinicians at Emory Healthcare on their perceptions of AI in clinical practice. An expert-reviewed survey (N=57) assessed clinician perceptions, trust, and implementation preferences. Semi-structured interviews (N=11) included A/B testing of AI-CDSS designs and a clinical sepsis scenario to explore decision-making in context. Transcripts were thematically analyzed using Braun and Clarke's framework in ATLAS.ti. Quantitative data were analyzed descriptively.

Results:

Trust in AI varied significantly by patient acuity (Cochran's Q=30.40, p<0.0001): stable patients (75.4%, 95% CI 62.9%-84.8%), deteriorating patients (47.4%, 95% CI 35.0%-60.1%), and ICU/ED patients (43.9%, 95% CI 31.8%-56.7%). Internal consistency was acceptable to good across the three scales (Cronbach's alpha: AI Perception=0.891, Trust=0.743, Implementation=0.740). Barriers included over-reliance, insufficient training, and data quality concerns. For AI-CDSS design, clinicians preferred opt-in alerts (90%) and evidence-linked recommendations (63%), and avoiding overt mention of AI increased acceptance (73%). Thematic analysis yielded 36 themes across six domains: trust and transparency, alert usability, workflow fit, data concerns, training needs, and perceived clinical impact. Clinicians favored AI-CDSS that preserved autonomy, minimized disruption, and provided a transparent rationale.
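The results above rely on two standard statistics: Cochran's Q, which tests whether the proportion of a binary outcome (here, trust) differs across repeated conditions on the same subjects, and Cronbach's alpha, which measures internal consistency of a multi-item scale. The following is a minimal pure-Python sketch of both computations on small synthetic data; all numbers below are invented for illustration and are not the study's data.

```python
import math

def cronbach_alpha(scores):
    """Cronbach's alpha for a score matrix (rows = respondents, cols = items)."""
    k = len(scores[0])
    def var(xs):  # sample variance (ddof = 1)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    item_vars = [var([row[j] for row in scores]) for j in range(k)]
    total_var = var([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

def cochrans_q(binary):
    """Cochran's Q for binary responses (rows = subjects, cols = conditions)."""
    k = len(binary[0])
    col_sums = [sum(row[j] for row in binary) for j in range(k)]
    row_sums = [sum(row) for row in binary]
    n = sum(col_sums)
    q = (k - 1) * (k * sum(c * c for c in col_sums) - n ** 2) / (
        k * n - sum(r * r for r in row_sums))
    df = k - 1
    # Chi-square survival function simplifies to exp(-q/2) when df = 2.
    p = math.exp(-q / 2) if df == 2 else None
    return q, df, p

# Synthetic 3-item Likert scale, 4 respondents.
scores = [[3, 4, 2], [4, 5, 4], [2, 3, 3], [5, 4, 5]]
alpha = cronbach_alpha(scores)

# Synthetic trust ratings (1 = trusts AI) across three acuity conditions
# (stable, deteriorating, ICU/ED) for 8 subjects.
binary = [[1, 1, 1], [1, 1, 0], [1, 0, 0], [1, 0, 0],
          [1, 1, 0], [1, 0, 1], [0, 0, 0], [1, 0, 0]]
q, df, p = cochrans_q(binary)
print(f"alpha = {alpha:.3f}, Q = {q:.2f}, df = {df}, p = {p:.4f}")
```

A small Q with a p-value above 0.05 would indicate that trust does not differ reliably across acuity levels; the paper's Q=30.40 (p<0.0001) indicates a strong acuity effect.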

Conclusions:

Adoption of AI-CDSS in critical care is not solely a technical issue, but a human-factors challenge centered on trust, transparency, and workflow compatibility. Applying the SEIPS 2.0 framework, we propose a phased implementation approach: beginning with lower-acuity applications where clinician trust is highest, then gradually extending to higher-acuity scenarios with enhanced transparency and override mechanisms. This graduated strategy addresses the critical interdependencies among people (trust), tools (design), organization (training), and task (clinical complexity) identified in this study.


 Citation

Please cite as:

Darla M, Miltz D, Chandnani K, Purkayastha S, Diehl JW, Bhavani SV

An Evaluation of Clinician Trust, Perceptions, and Human Factors in AI-Enabled Clinical Decision Support for Acute Care: A Mixed-Methods Study

JMIR Preprints. 17/03/2026:95472

DOI: 10.2196/preprints.95472

URL: https://preprints.jmir.org/preprint/95472


© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft other than for review purposes.