Currently submitted to: JMIR Human Factors
Date Submitted: Mar 29, 2026
Open Peer Review Period: Apr 10, 2026 - Jun 5, 2026
(currently open for review)
Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.
Pre-Existing Attitudes Toward AI as a Predictor of Clinician Trust in Diagnostic Decision Support: Randomized Experimental Study
ABSTRACT
Background:
As the integration of artificial intelligence (AI)-enabled tools expands within clinical practice, understanding the contributors to trust and adoption is critical for successful implementation. While transparency mechanisms such as confidence scores and explanatory text are often proposed to promote trust in AI applications, the role of clinicians' baseline attitudes toward AI as a determinant of trust has not been well characterized.
Objective:
To examine the relationship between baseline attitudes toward AI and clinician trust in a simulated AI-enabled diagnostic assistance application, and to assess whether transparency mechanisms modify this relationship.
Methods:
In a randomized experiment, clinicians (including students and trainees) completed the Attitude Toward Artificial Intelligence (ATAI) scale. They were then presented with 6 case vignettes and received AI assistance under one of three transparency conditions: control (no transparency), numeric confidence score, or narrative explanation with citation. Participants provided updated diagnoses after assistance in all scenarios. After completion, trust was measured using the Trust and Acceptance of Artificial Intelligence Technology (TrAAIT) scale, yielding both a total score and an application-specific score. Associations between attitudes and trust were assessed using correlation and linear regression analyses, with transparency condition included as a covariate.
Results:
The study enrolled 220 participants; 3 with missing trust data were excluded from analysis. Among analyzed participants, baseline attitudes toward AI were heterogeneous and only slightly positive on average. Attitudes were moderately associated with overall trust in AI (Pearson r=0.45; P<.001) and remained significantly associated when trust was restricted to application-specific domains (Pearson r=0.33; P<.001). In multivariable linear regression, higher ATAI scores were independently associated with application-specific trust (β=0.13 per 1-point increase in ATAI; CI 0.08-0.18; P<.001). Transparency condition was not independently associated with trust and did not meaningfully modify the relationship between attitudes and trust. Attitudes toward AI were only modestly correlated with self-reported technology adoption orientation.
Conclusions:
Baseline attitudes toward AI represent a meaningful antecedent of clinician trust in AI-enabled diagnostic tools, and this association persists at the application level. Transparency mechanisms alone were insufficient to overcome the influence of pre-existing attitudes. These findings suggest that efforts to promote trust in and adoption of clinical AI may benefit from addressing clinician attitudes directly, rather than relying solely on interface-level transparency features. Clinical Trial: N/A; this is not a clinical trial.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have granted JMIR Publications an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft other than for review purposes.