Accepted for/Published in: JMIR Human Factors
Date Submitted: Mar 6, 2023
Date Accepted: Nov 20, 2023
Trust in and acceptance of Artificial Intelligence applications in medicine: a mixed-methods study
ABSTRACT
Background:
Artificial intelligence (AI)-powered technologies are increasingly used in almost all fields, including medicine. However, the successful and timely adoption of medical AI applications worldwide depends on ensuring trust in and acceptance of such technologies. While AI applications in medicine offer advantages for the current health care system, they also pose various challenges regarding, for instance, data privacy, accountability, and equity and fairness, which could hinder medical AI implementation.
Objective:
The aim of this study was to identify factors related to trust in and acceptance of novel AI-powered medical technologies and to assess how relevant key stakeholders consider these factors to be.
Methods:
This study used a mixed-methods design comprising a rapid review of the existing literature followed by a stakeholder survey. The rapid review aimed to identify factors related to trust in or acceptance of AI applications in medicine. Next, an electronic survey containing the rapid review-derived factors was disseminated among key stakeholder groups. Participants (N=22) were asked to rate the extent to which they considered each of the 19 factors relevant to trust in and acceptance of novel AI applications in medicine on a 5-point Likert scale (1=irrelevant, 5=relevant).
Results:
The rapid review (N=32 studies) yielded 110 factors related to trust in and 77 factors related to acceptance of AI technologies in medicine. Closely related factors were assigned to one of 19 overarching ‘umbrella’ factors, which were further grouped into four categories (human-related, technology-related, ethical and legal, and additional factors). The 19 categorized ‘umbrella’ factors were presented as survey statements that were evaluated by relevant stakeholders. Survey participants (N=22) represented researchers (18/22, 82%), technology providers (5/22, 23%), hospital staff (3/22, 14%), and policy makers (3/22, 14%); the stakeholder groups were not mutually exclusive. Of the 19 factors, 16 (84%) were considered highly relevant to trust in and acceptance of novel AI applications in medicine. Highly relevant factors included human-related factors (e.g., the type of institution AI professionals originate from), technology-related factors (e.g., the explainability and transparency of the AI application’s processes and outcomes), ethical and legal factors (e.g., data-use transparency), and additional factors (e.g., AI applications being environmentally friendly). The patient’s gender, age, and education level (3/19, 16%) were found to be of low relevance.
Conclusions:
The results of this study can help implementers of medical AI applications understand what drives trust in and acceptance of AI-powered technologies among key stakeholders in medicine and, consequently, identify strategies that facilitate trust and acceptance among key stakeholders and potential users.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.