Accepted for/Published in: JMIR Formative Research
Date Submitted: Jul 12, 2024
Open Peer Review Period: Jul 12, 2024 - Sep 6, 2024
Date Accepted: Jan 5, 2025
Perceived Trust and Professional Identity Threat in AI-based Clinical Decision Support Systems: A Scenario-Based Experiment on the Influence of AI Process Design Features
ABSTRACT
Background:
Artificial Intelligence (AI)-based systems in medicine, such as Clinical Decision Support Systems (CDSSs), have shown promising results in healthcare, sometimes outperforming human specialists. However, the integration of AI may challenge medical professionals' identities and limit their trust in the technology, leading healthcare professionals to reject AI-based systems.
Objective:
This study explores the impact of AI process design features on physicians' trust in the AI solution and on perceived threats to their professional identity. These design features involve the explainability of AI-based CDSS decision outcomes, the integration depth of the AI-generated advice into the clinical workflow, and the physician’s accountability for the AI system-induced medical decisions.
Methods:
We conducted a three-factorial online between-subject scenario-based experiment with 292 participants, comprising medical students in training and experienced physicians across different specialties. The participants were presented with an AI-based CDSS for sepsis prediction and prevention for use in a hospital. Each participant was given a scenario in which the three design features of the AI-based CDSS were manipulated in a 2 x 2 x 2 factorial design. The SPSS PROCESS macro was used for hypothesis testing.
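The analysis described above corresponds to a mediation model (PROCESS Model 4): the three manipulated design features predict trust in the AI system, which in turn predicts professional identity threat. The sketch below illustrates this path logic in Python on simulated data; all variable names and the simulated coefficients are illustrative assumptions, not the study's actual data or code.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 292  # sample size matching the study

# Hypothetical 2 x 2 x 2 between-subject manipulation (0/1 dummies)
explain = rng.integers(0, 2, n).astype(float)  # explainable vs. opaque CDSS output
integ = rng.integers(0, 2, n).astype(float)    # deep vs. shallow workflow integration
account = rng.integers(0, 2, n).astype(float)  # signature required vs. not

# Simulated outcomes whose coefficients loosely mirror the reported signs
trust = 0.51 * explain + 0.26 * integ + rng.normal(0, 1, n)
threat = 0.35 * explain + 0.34 * account - 0.14 * trust + rng.normal(0, 1, n)

def ols(y, *predictors):
    """Least-squares fit with intercept; returns the coefficient vector."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Path a: design features -> trust (mediator model)
a = ols(trust, explain, integ, account)

# Paths b and c': design features + trust -> threat (outcome model)
b = ols(threat, explain, integ, account, trust)

# Indirect effect of explainability on threat via trust (a * b),
# the quantity PROCESS Model 4 tests with bootstrapped confidence intervals
indirect = a[1] * b[4]
print(f"a (explain->trust) = {a[1]:.2f}, "
      f"b (trust->threat) = {b[4]:.2f}, indirect = {indirect:.3f}")
```

In the actual PROCESS macro, the significance of the indirect effect is assessed via bootstrap resampling rather than a single point estimate as shown here.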
Results:
The results suggest that the explainability of the AI-based CDSS was positively associated with both trust in the AI system (.51, p < .001) and professional identity threat perceptions (.35, p < .05). Trust in the AI system was negatively related to professional identity threat perceptions (-.14, p < .05), indicating a partially mediated effect on professional identity threat through trust. Deep integration of the AI-generated advice into the clinical workflow was positively associated with trust in the system (.26, p < .01). Accountability for the AI-based decisions, i.e., the system requiring a signature, was positively associated with professional identity threat perceptions among the respondents (.34, p < .05).
Conclusions:
Our research highlights the role of process design features of AI systems used in medicine in shaping professional identity perceptions, mediated through increased trust in AI. An explainable AI-based CDSS and AI-generated advice that is deeply integrated into the clinical workflow reinforce trust, thereby mitigating perceived professional identity threats. However, explainable AI and individual accountability for the system's decisions directly exacerbate threat perceptions. Our findings illustrate the complex behavioral dynamics of AI in healthcare and have broader implications for supporting the implementation of AI-based CDSSs in contexts where AI systems may impact professional identity.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer-review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.