Accepted for/Published in: Journal of Medical Internet Research
Date Submitted: Sep 3, 2024
Open Peer Review Period: Sep 5, 2024 - Oct 31, 2024
Date Accepted: Apr 23, 2025
Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.
Americans' Distrust in AI-assisted Diagnosis Surpasses Partisanship and Demographic Lines
ABSTRACT
Background:
AI technologies are being increasingly integrated into medical practice. Among them, AI-assisted diagnosis is a promising application, yet its acceptance among patients (compared with human-only procedures) remains understudied, especially after the release of ChatGPT.
Objective:
This study examines the extent to which people prefer doctors who use AI assistance over traditional doctors who rely solely on human expertise. It also investigates the demographic, social, and experiential determinants of preferences regarding AI-assisted diagnosis.
Methods:
We conducted a four-group randomized survey experiment among a national sample representative of the US population on several demographic benchmarks (n = 1,762). Participants in all four groups saw the same information about a doctor. The vignette for the control group made no mention of AI; the "No AI" group's vignette explicitly stated that the doctor does not use AI; the "Moderate AI" group's vignette stated that the doctor uses AI moderately; and the "Extensive AI" group's vignette stated that the doctor uses AI extensively. Respondents then reported their intention to seek help from the doctor, their trust in the doctor as a person and as a professional, their knowledge of AI, the frequency with which they use AI in their daily lives, demographics, and partisan identification. We analyzed the results with ordinary least squares regression (controlling for sociodemographic factors), mediation analysis, and moderation analysis, and we explored the effect of past AI experience.
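As an illustration only, and not the authors' own analysis code (the study's materials are registered at https://osf.io/v8kzs/), a minimal sketch of the treatment-effect regression in Python with pandas and statsmodels might look as follows. The file name and column names (condition, trust, age, gender, education, party_id) are assumptions for the example, not taken from the study.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Load the survey data (hypothetical file name).
    df = pd.read_csv("survey_data.csv")

    # OLS of trust on experimental condition, with the control group as
    # the reference category and sociodemographic controls, mirroring the
    # analysis described above. 'condition' would take the values
    # 'control', 'no_ai', 'moderate_ai', and 'extensive_ai'.
    model = smf.ols(
        "trust ~ C(condition, Treatment(reference='control'))"
        " + age + C(gender) + C(education) + C(party_id)",
        data=df,
    ).fit(cov_type="HC2")  # heteroskedasticity-robust standard errors
    print(model.summary())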
Results:
Mentioning that the doctor uses AI to assist in diagnosis uniformly decreased trust and intention to seek help, regardless of age, gender, education, and party identification. Trust in the doctor and intention to seek help were highest when participants were explicitly told that the doctor does not use AI in diagnosis, and similarly low when participants were told that the doctor uses AI assistance either moderately or extensively. The largest "intention gap" was observed among those with the highest self-reported AI knowledge but the least AI experience, suggesting that greater AI experience makes people more open to AI-assisted diagnosis, whereas higher self-reported AI familiarity has the opposite effect.
Conclusions:
Our findings suggest that despite the increasing integration of AI in medical practice, there remains a strong preference for human-only expertise, underscoring the need for strategies to build trust in AI technologies in healthcare. Clinical Trial: https://osf.io/v8kzs/
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer-review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.