Accepted for/Published in: Journal of Medical Internet Research
Date Submitted: Dec 17, 2024
Date Accepted: Apr 11, 2025
Attitudes towards AI Usage in Patient Healthcare: Evidence from a Population Survey Vignette Experiment
ABSTRACT
Background:
The integration of artificial intelligence (AI) holds significant potential to alter diagnostics and treatment in healthcare settings. However, public attitudes towards AI, including trust and risk perception, are crucial for its ethical and effective implementation. Despite increasing attention, little empirical research addresses the factors influencing public support for AI in healthcare, especially in large-scale and representative contexts.
Objective:
This study investigates public attitudes toward AI in patient healthcare using a vignette experiment, focusing on how AI attributes – autonomy, costs, reliability, and transparency – shape perceptions of support, risk, and personalized care. Additionally, it examines the moderating role of socio-demographic characteristics in these evaluations.
Methods:
We conducted a factorial vignette experiment with a probability-based survey of 3,030 participants from Germany’s general population. Respondents were presented with hypothetical scenarios involving AI applications in diagnosis and treatment in a hospital setting. Linear regression models assessed the relative influence of AI attributes on the dependent variables (support, risk perception, and personalized care), with additional subgroup analyses to explore heterogeneity by socio-demographic characteristics.
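The analytic setup described above can be illustrated with a minimal sketch. This is not the authors' code or coding scheme: the attribute names, dummy coding, outcome scale, and coefficient values below are illustrative assumptions, showing only how randomly assigned vignette attributes can be regressed on a support rating via ordinary least squares.

```python
# Hedged sketch of a factorial-vignette OLS analysis.
# All variable names and effect sizes are illustrative assumptions,
# not the study's actual coding scheme or results.
import numpy as np

rng = np.random.default_rng(0)
n = 500  # simulated respondents (the study surveyed 3,030)

# Dummy-coded vignette attributes, randomly assigned as in a factorial design
reliability = rng.integers(0, 2, n)   # 1 = AI more reliable than status quo
transparency = rng.integers(0, 2, n)  # 1 = AI decision is traceable
autonomy = rng.integers(0, 2, n)      # 1 = AI decides autonomously
costs = rng.integers(0, 2, n)         # 1 = higher costs for the patient

# Simulated support rating on an assumed 1-7 scale; coefficients are arbitrary
support = (4.0 + 1.2 * reliability + 0.8 * transparency
           - 0.5 * autonomy - 0.6 * costs + rng.normal(0, 1, n))

# OLS: support ~ intercept + reliability + transparency + autonomy + costs
X = np.column_stack([np.ones(n), reliability, transparency, autonomy, costs])
beta, *_ = np.linalg.lstsq(X, support, rcond=None)
print(dict(zip(["intercept", "reliability", "transparency", "autonomy", "costs"],
               beta.round(2))))
```

Because attributes are randomized across vignettes, the coefficients can be read as the average causal effect of each attribute level on the rating; subgroup heterogeneity would be probed by refitting within socio-demographic strata or adding interaction terms.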
Results:
Among the four dimensions, reliability emerges as the most influential factor: respondents expect AI not only to avoid increasing errors but to surpass existing reliability standards. Transparency is also critical, with significant disapproval of non-traceable systems. Costs and autonomy show smaller but notable effects, with preferences favoring collaborative AI systems over autonomous ones, and higher costs generally leading to rejection. Heterogeneity analysis reveals limited socio-demographic differences: education and migration background influence attitudes towards transparency and autonomy, while gender differences primarily affect cost-related perceptions. Attitudes do not substantially differ between AI applications in diagnosis versus treatment.
Conclusions:
Our study provides critical insights into the factors that influence acceptance and trust in AI technologies, highlighting the importance of ethical considerations, transparency, and patient-centered approaches in the development and implementation of AI in healthcare settings. The findings underscore the need for policy and educational initiatives to address public concerns, particularly around trust and accountability in AI systems. The study contributes to the growing body of literature on AI in healthcare by offering evidence-based recommendations for policymakers, healthcare providers, and AI developers to enhance the effective use of AI in improving patient care.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.