
Currently submitted to: Journal of Medical Internet Research

Date Submitted: Apr 15, 2026
Open Peer Review Period: Apr 15, 2026 - Jun 10, 2026
(currently open for review)

Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.

Trust in Generative AI for Health Information Consumption and the Effect of Learned Dependency: An Experimental Study

  • Arif Ahmed; 
  • Gondy Leroy; 
  • Agrim Sachdeva; 
  • Philip Harber; 
  • Stephen Rains; 
  • Seokjun Youn

ABSTRACT

Background:

Generative artificial intelligence (GenAI) is increasingly used by health information consumers to interpret medical content and support decision-making. While these systems provide accessible, timely information, they may also produce inaccurate or misleading outputs. Effective use of GenAI, therefore, depends on users’ ability to calibrate trust based on information accuracy. However, little is known about how learned dependency on GenAI influences trust calibration in health information contexts.

Objective:

This study examines how learned dependency on GenAI affects health information consumers’ trust calibration in AI-generated information and whether visual attention cues (e.g., highlighting critical information) mitigate overreliance on incorrect outputs.

Methods:

We conducted a randomized controlled experiment with 338 participants. The study employed a 2 × 2 design manipulating (1) information accuracy (correct vs incorrect) and (2) visual attention cues (highlight vs no highlight). Participants evaluated AI-generated health information presented alongside source text. Trust was measured using a multi-item scale, and learned dependency on GenAI was assessed using a validated self-report measure. Linear regression models were used to examine main and interaction effects.
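
To make the analysis concrete, the following is a minimal sketch in Python (statsmodels) of how a trust model with these main and interaction effects could be specified. It is not the authors' code, and the column names (trust, accuracy, highlight, dependency) are hypothetical stand-ins for the study's measures.

# A minimal sketch, not the authors' code: an OLS model with the main
# and interaction effects described above, fit with statsmodels.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("study_data.csv")  # hypothetical data file

# accuracy and highlight are 0/1 condition indicators; dependency is
# the self-report learned-dependency score. The * operator expands to
# main effects plus the two-way interaction.
model = smf.ols("trust ~ accuracy * dependency + highlight * dependency",
                data=df).fit()
print(model.summary())  # coefficients, 95% CIs, p values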

Results:

Information accuracy had a strong positive effect on trust (β = 2.107, 95% CI [1.337, 2.878], p < .001), indicating that participants generally trusted correct information more than incorrect information. Learned dependency on GenAI was also positively associated with trust (β = 0.277, 95% CI [0.033, 0.521], p = .026). Importantly, the interaction between information accuracy and learned dependency was negative and significant (β = −0.399, 95% CI [−0.695, −0.104], p = .008), suggesting that higher dependency reduces users’ ability to differentiate between accurate and inaccurate information. In contrast, visual attention cues did not significantly affect trust (β = 0.149, 95% CI [−0.622, 0.920], p = .704), nor did they moderate the effect of dependency (β = −0.009, 95% CI [−0.305, 0.287], p = .950).
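
To illustrate the interaction, and assuming the predictors were entered uncentered with the coding above, the implied marginal effect of information accuracy on trust is 2.107 − 0.399 × dependency. At a dependency score of 3, for example, the accuracy effect shrinks to roughly 2.107 − 0.399 × 3 ≈ 0.91, less than half its size at zero dependency, and it would reach zero near a score of 2.107 / 0.399 ≈ 5.3.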

Conclusions:

This study demonstrates that while users generally trust accurate AI-generated health information more than inaccurate information, learned dependency weakens trust calibration, increasing susceptibility to incorrect outputs. Visual attention cues alone are insufficient to mitigate this effect. These findings highlight the need for more effective design interventions to support critical evaluation and reduce overreliance on GenAI in health information environments.

Keywords: Learned Dependency; GenAI; Trust Calibration; Attention Mechanism; Automation Bias; Health Information; Human–GenAI Interaction.


Citation

Please cite as:

Ahmed A, Leroy G, Sachdeva A, Harber P, Rains S, Youn S

Trust in Generative AI for Health Information Consumption and the Effect of Learned Dependency: An Experimental Study

JMIR Preprints. 15/04/2026:98326

DOI: 10.2196/preprints.98326

URL: https://preprints.jmir.org/preprint/98326


© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.