Accepted for/Published in: Journal of Medical Internet Research
Date Submitted: Aug 20, 2024
Date Accepted: Jan 16, 2025
Trust And Acceptance Challenges In The Adoption Of AI Applications In Healthcare: Quantitative Survey Analysis
ABSTRACT
Background:
Artificial intelligence (AI) has significant potential to transform healthcare, but its successful implementation depends on the trust and acceptance of consumers and patients. Understanding the factors that influence attitudes toward AI is crucial for effective adoption. Despite the growing integration of AI in medicine and healthcare, consumer and patient acceptance remains a critical challenge. So far, research has largely focused on specific applications or general attitudes, lacking a comprehensive analysis of how individual factors such as demographics, personality traits, technology attitudes, and AI knowledge affect attitudes and interact across different healthcare AI contexts.
Objective:
This study aimed to investigate people's trust and acceptance of AI across various healthcare use cases, and to determine how context and perceived risk affect individuals' propensity to trust and accept AI in specific healthcare scenarios.
Methods:
We collected and analyzed online survey data from 1100 Finnish participants, presenting them with 8 AI use cases in healthcare: five non-invasive applications (e.g., activity monitoring, mental health support) and three involving physical interventions (e.g., AI-controlled robotic surgery). Respondents evaluated their intention to use, trust in, and willingness to trade off personal data for each use case. Gradient boosted tree regression models were trained to predict responses from 33 demographic, personality, and technology-related variables. The SHAP (Shapley additive explanations) method was used to interpret feature importance and interactions between variables.
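The modeling approach described above can be sketched as follows. This is a minimal illustrative example, not the study's actual pipeline: the feature matrix, target, and hyperparameters are hypothetical, and scikit-learn's built-in impurity-based importances stand in for the SHAP analysis (which would use the `shap` package's `TreeExplainer` on the fitted model).

```python
# Sketch of the analysis: a gradient boosted tree regressor predicting a
# survey response (e.g., intention to use) from respondent-level features.
# Data and target are synthetic; dimensions match the study (1100 respondents,
# 33 predictor variables).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_respondents, n_features = 1100, 33
X = rng.normal(size=(n_respondents, n_features))
# Hypothetical target: driven by a linear and a nonlinear term plus noise,
# mimicking the nonlinear dependencies (e.g., inverted-U trends) reported.
y = 0.6 * X[:, 0] - 0.3 * X[:, 1] ** 2 + rng.normal(scale=0.5, size=n_respondents)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)

# One importance score per input variable; the study would instead compute
# SHAP values here to get per-respondent, signed feature attributions.
importances = model.feature_importances_
top_features = np.argsort(importances)[::-1][:5]
```

Tree-based SHAP analysis is a natural fit here because gradient boosted trees capture the nonlinear and interaction effects reported in the results, while SHAP values decompose each individual prediction into per-feature contributions.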
Results:
Consumer attitudes towards technology, technology usage, and personality traits were the primary drivers of trust in and intention to use AI in healthcare. Use cases were ranked by acceptance, with non-invasive monitoring applications most preferred; overall, however, the specific use case had less impact than expected. Nonlinear dependencies were observed, including an inverted U-shaped trend in positivity towards AI as a function of self-reported AI knowledge. Certain personality traits, such as being more disorganized and careless, predicted more positive attitudes towards AI in healthcare. Women tended to be more cautious than men about AI applications in healthcare.
Conclusions:
The findings highlight the complex interplay of factors influencing trust and acceptance of AI in healthcare. Consumer trust in and intention to use AI in healthcare are driven primarily by overall technology attitudes and usage rather than by specific use cases. AI service providers need to consider demographic factors, personality traits, and technology attitudes when designing and implementing AI systems in healthcare. The study also demonstrates the potential of predictive AI models as decision-making tools for implementing healthcare AI applications and for interacting with clients.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer-review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.