
Accepted for/Published in: JMIR Human Factors

Date Submitted: Jun 25, 2025
Open Peer Review Period: Oct 21, 2025 - Dec 16, 2025
Date Accepted: Nov 20, 2025

The final, peer-reviewed published version of this preprint can be found here:

Investigating How Clinicians Form Trust in an AI-Based Mental Health Model: Qualitative Case Study

Kelly A, Bhardwaj N, Holmberg Sainte-Marie TT, Van de Ven P, Melia R, Williams JE, Mathiasen K, Nielsen AS

Investigating How Clinicians Form Trust in an AI-Based Mental Health Model: Qualitative Case Study

JMIR Hum Factors 2025;12:e79658

DOI: 10.2196/79658

PMID: 41417472

PMCID: 12716233

Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.

Investigating how clinicians form trust in an AI-based mental health model: A qualitative case study

  • Anthony Kelly; 
  • Niharika Bhardwaj; 
  • Trine Theresa Holmberg Sainte-Marie; 
  • Pepijn Van de Ven; 
  • Ruth Melia; 
  • John Eustis Williams; 
  • Kim Mathiasen; 
  • Amalie Søgaard Nielsen

ABSTRACT

Background:

Trust remains a critical barrier to the adoption of artificial intelligence (AI) in mental health care. This study explores the formation of trust in an AI mental health model and its human–computer interface (HCI) among clinicians at an online mental health clinic in the Region of Southern Denmark.

Objective:

To explore clinicians’ perspectives on how trust is built in the context of an AI-supported mental health screening model and to identify the factors that influence this process.

Methods:

We conducted a qualitative case study using semi-structured interviews with clinicians involved in piloting a mental health AI model. Thematic analysis was used to identify the key factors contributing to trust formation.

Results:

Clinicians' initial attitudes toward AI were shaped by prior positive experiences with AI and by their perception of AI’s potential to reduce cognitive load in routine screening. Trust development followed a sequential pattern resembling a “trust journey”: (1) sense-making, (2) risk appraisal, and (3) a conditional decision to rely. Trust formation was supported by the explainability of the model, particularly through (i) visualisation of confidence and uncertainty via violin plots, which aligned with clinicians’ expectations of decision ambiguity; (ii) feature attribution for and against predictions, which mirrored clinical reasoning; and (iii) use of pseudo-sumscores in the AI model, which increased interpretability by presenting explanations in familiar clinical formats. Trust was contextually bounded to low-risk clinical scenarios, such as pre-interview patient screening, and contingent on safety protocols (e.g., suicide risk flagging). The use of both structured and unstructured patient data was seen as key to expanding trust into more complex clinical contexts. Participants also expressed a need for ongoing evaluation data to reinforce and maintain trust.

Conclusions:

Clinicians’ trust in AI tools is constructed contextually and sequentially, influenced by both model performance and alignment with clinical reasoning. Interpretability features were essential in establishing intrinsic trust, particularly when presented in ways that resonate with clinical norms. For broader acceptance and responsible deployment, trust must also be supported by rigorous evaluation data and the inclusion of clinically relevant data types in model design.





© The authors. All rights reserved. This is a privileged document currently under peer-review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.