
Accepted for/Published in: Journal of Medical Internet Research

Date Submitted: Apr 7, 2025
Date Accepted: Nov 8, 2025

The final, peer-reviewed published version of this preprint can be found here:

Key Information Influencing Patient Decision-Making About AI in Health Care: Survey Experiment Study

Zhu X, Stroud AM, Minteer SA, Yoo DW, Ridgeway JL, Mooghali M, Miller JE, Barry BA

Key Information Influencing Patient Decision-Making About AI in Health Care: Survey Experiment Study

J Med Internet Res 2026;28:e75615

DOI: 10.2196/75615

PMID: 41525463

PMCID: 12795307

Key information influencing patient decision-making about AI in healthcare: A survey experiment

  • Xuan Zhu; 
  • Austin M. Stroud; 
  • Sarah A. Minteer; 
  • Dong Whi Yoo; 
  • Jennifer L. Ridgeway; 
  • Maryam Mooghali; 
  • Jennifer E. Miller; 
  • Barbara A. Barry

ABSTRACT

Background:

Artificial Intelligence (AI)-enabled devices are increasingly used in healthcare. However, there has been limited research on patients’ informational preferences, including which elements of AI device labeling enhance patient understanding, trust, and acceptance. Clear and effective patient-facing communication is essential to address patient concerns and support informed decision-making regarding AI-enabled care.

Objective:

Using simulated AI device labels in a cardiovascular context, we pursued three aims. First, we identified key information elements that influence patient trust and acceptance of an AI device. Second, we examined how these effects varied by patient characteristics. Third, we explored how patients evaluated the informational content of AI labels and the labels' perceived effectiveness in informing decision-making about use of the AI device, building trust in the device, and shaping their intention to use it in their healthcare.

Methods:

We recruited 340 US patients from ResearchMatch.org to participate in a web-based survey that contained two experiments. In the discrete choice experiment (DCE), participants indicated their preferences, in terms of trust and acceptance, regarding 16 pairs of simulated AI device labels that varied across eight types of information needs identified in our previous qualitative work. In the single-profile factorial experiment (SPFE), participants evaluated four randomly assigned label prototypes on legibility, comprehensibility, information overload, credibility, and perceived effectiveness in informing them about the AI device, as well as on their trust in the AI device and intention to use it in their healthcare. Data were analyzed using mixed-effects binary or ordinal logistic regression.
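The analysis described above estimates how binary label attributes shift the log-odds of a positive trust or acceptance response. As a minimal illustrative sketch (not the authors' actual model), the following simulates choice-style data with two hypothetical label attributes and fits a plain logistic regression by Newton-Raphson; the paper's mixed-effects structure (e.g., random intercepts per participant) and the full set of eight attributes are omitted for brevity, and all attribute names and effect sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4000  # hypothetical number of label evaluations

# Two illustrative binary label attributes (hypothetical names):
# whether the label mentions regulatory approval and provider oversight.
regulatory = rng.integers(0, 2, n)
oversight = rng.integers(0, 2, n)
X = np.column_stack([np.ones(n), regulatory, oversight])

# Simulate trust responses under assumed true effects (log-odds scale).
true_beta = np.array([-0.5, 0.6, 0.4])
p = 1.0 / (1.0 + np.exp(-X @ true_beta))
y = (rng.random(n) < p).astype(float)

# Fit logistic regression by Newton-Raphson (iteratively
# reweighted least squares).
beta = np.zeros(3)
for _ in range(25):
    mu = 1.0 / (1.0 + np.exp(-X @ beta))
    W = mu * (1.0 - mu)
    grad = X.T @ (y - mu)
    hess = X.T @ (X * W[:, None])
    beta += np.linalg.solve(hess, grad)

# Exponentiating the attribute coefficients gives odds ratios,
# the effect measure reported in the Results.
odds_ratios = np.exp(beta[1:])
print("estimated ORs:", odds_ratios)
```

With enough simulated evaluations, the estimated odds ratios recover the assumed positive attribute effects (ORs above 1), mirroring how attribute-level effects are read off a fitted choice model.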

Results:

The DCE showed that information about regulatory approval, high device performance, provider oversight, and AI's value added to usual care significantly increased the likelihood of patient trust by 14.1-19.3% and acceptance by 13.3-17.9%. Subgroup analyses revealed variations based on patient characteristics such as familiarity with AI, health literacy, and recency of last medical checkup. The SPFE showed that patients reported good label comprehension, and that information about provider oversight, regulatory approval, device performance, and AI's added value improved perceived credibility and effectiveness of the AI label (odds ratios [ORs] range 1.35-2.05), reduced doubts about the AI device (ORs range 0.61-0.77), and increased trust and intention to use the AI device (ORs range 1.47-1.73). However, information about data privacy and safety management protocols was less influential.

Conclusions:

Patients value information about an AI device’s performance, provider oversight, regulatory status, and added value during decision-making. Providing transparent, easily understandable information about these aspects is critical to support patient determinations of trust and acceptance of AI-enabled healthcare. Information elements’ impact on patient trust and acceptance varies by patient characteristics, highlighting the need for a tailored approach to address the concerns of diverse patient groups about AI in healthcare. Clinical Trial: N/A




© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.