
Currently submitted to: Journal of Medical Internet Research

Date Submitted: Apr 26, 2026
Open Peer Review Period: Apr 27, 2026 - Jun 22, 2026
(currently open for review)

Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.

The Impact of Disagreements Between Algorithmic Recommendations and Clinical Expert Experience on Patients' Trust in Healthcare: A Scoping Review Using Mayer’s Integrative Model of Organizational Trust

  • Xilin Yang; 
  • Jinghong Li; 
  • Yue Xiang; 
  • Mingyuan Ju; 
  • Kaiwen Liang; 
  • Yunfeng He; 
  • Qinghua Zhao; 
  • Huanhuan Huang

ABSTRACT

Background:

With the rapid development and widespread application of artificial intelligence (AI) technology, AI has demonstrated high accuracy and reliability in medical practice, and patients' trust in algorithmic recommendations has gradually increased. In clinical practice, however, disagreements may still arise between algorithmic recommendations and clinical expert experience, and such disagreements can affect patients' trust. To date, the impact of these disagreements on patients' medical trust, and the strategies for addressing them, have not been systematically reviewed.

Objective:

To systematically map the impact of disagreements between AI recommendations and clinical expert judgment on patients’ medical trust, identify influencing factors based on Mayer’s integrative model of organizational trust, and summarize strategies to enhance trust.

Methods:

Following the Joanna Briggs Institute (JBI) scoping review methodology and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews (PRISMA-ScR) guidelines, we systematically searched Web of Science, PubMed, Embase, Scopus, and EBSCO up to March 2026, limited to English-language literature. Studies focusing on patients' trust in the context of disagreements between AI and expert opinions were included. Data were charted using the Population, Concept, Context (PCC) framework. Guided by Mayer’s integrative model of organizational trust, influencing factors were analyzed through a framework synthesis approach across the dimensions of ability, benevolence, integrity, and trustor propensity. The protocol was pre-registered on OSF (Registration DOI: 10.17605/OSF.IO/AHSGD).

Results:

A total of 2,630 records were identified, and 26 studies were ultimately included after screening, including six qualitative studies, seven quantitative studies, three mixed-methods studies, five theoretical studies, and five review articles. These studies were conducted across 10 countries and were published mainly between 2022 and 2026. Disagreements were concentrated in clinical diagnosis and risk assessment, treatment planning and medication decision-making, clinician–patient communication and intelligent interaction, as well as emerging application scenarios. In situations of disagreement, patients commonly expressed skepticism toward both algorithms and experts; overall, however, patients tended to trust experts more than algorithms. Data security and privacy risks, insufficient communication, AI accuracy and reliability, demographic and socioeconomic characteristics, and patients’ disease and health status were identified as high-frequency factors influencing patients’ medical trust. Six trust-enhancing strategies were extracted: transparency and explainability, patient participation and shared decision-making, clinician–patient communication and role positioning, institutional regulation and governance, education and capacity building, and privacy protection and data security.

Conclusions:

In situations of disagreement between AI and clinical experts, patients’ medical trust is dynamically shaped by ability, benevolence, and integrity, together with multiple interacting individual and contextual factors. Strengthening transparency, communication, and governance is essential for fostering trust in human–AI collaborative healthcare.


 Citation

Please cite as:

Yang X, Li J, Xiang Y, Ju M, Liang K, He Y, Zhao Q, Huang H

The Impact of Disagreements Between Algorithmic Recommendations and Clinical Expert Experience on Patients' Trust in Healthcare: A Scoping Review Using Mayer’s Integrative Model of Organizational Trust

JMIR Preprints. 26/04/2026:99511

DOI: 10.2196/preprints.99511

URL: https://preprints.jmir.org/preprint/99511

© The authors. All rights reserved. This is a privileged document currently under peer-review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.