
Accepted for/Published in: Journal of Medical Internet Research

Date Submitted: Jun 11, 2025
Date Accepted: Nov 3, 2025

The final, peer-reviewed published version of this preprint can be found here:

Rai A, Hurley ME, Herrington J, Storch EA, Zampella CJ, Parish-Morris J, Sonig A, Lázaro-Muñoz G, Kostick-Quenet K

Stakeholder Criteria for Trust in Artificial Intelligence–Based Computer Perception Tools in Health Care: Qualitative Interview Study

J Med Internet Res 2025;27:e78757

DOI: 10.2196/78757

PMID: 41385782

PMCID: 12743233

Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.

Stakeholder Criteria for Trust in Artificial Intelligence-Based Computer Perception Tools in Healthcare: A Qualitative Interview Study

  • Ansh Rai
  • Meghan E. Hurley
  • John Herrington
  • Eric Alan Storch
  • Casey J. Zampella
  • Julia Parish-Morris
  • Anika Sonig
  • Gabriel Lázaro-Muñoz
  • Kristin Kostick-Quenet

ABSTRACT

Computer perception (CP) technologies hold significant promise for advancing precision mental healthcare, given their ability to leverage algorithmic analysis of continuous, passively sensed data from wearables and smartphones (e.g., behavioral activity, geolocation, vocal features, device usage, and ambient environmental data) to infer clinically meaningful behavioral and physiological states. However, their successful implementation critically depends on the cultivation of well-founded stakeholder trust. To investigate the contingencies under which such trust might be established, we conducted 80 semi-structured interviews with a purposive sample of adolescents (n = 20) diagnosed with autism, Tourette syndrome, anxiety, obsessive-compulsive disorder (OCD), and/or attention-deficit/hyperactivity disorder (ADHD); their caregivers (n = 20); practicing clinicians across psychiatry, psychology, and pediatrics (n = 20); and CP system developers (n = 20). Interview transcripts were coded by two independent coders and analyzed using multi-stage, inductive thematic content analysis to identify prominent themes.

Across stakeholder groups, five core criteria emerged as prerequisites for trust in CP outputs: (1) epistemic alignment—consistency between system outputs, personal experience, and existing diagnostic frameworks; (2) demonstrable rigor—training on representative data and validation in real-world contexts; (3) explainability—transparent communication of input variables, thresholds, and decision logic; (4) sensitivity to complexity—capacity to accommodate heterogeneity and comorbidity in symptom expression; and (5) a non-substitutive role—technologies must augment, rather than supplant, clinical judgment. A novel and cautionary finding was that epistemic alignment—whether outputs affirmed participants' preexisting beliefs, diagnostic expectations, or internal states—was a dominant factor in whether a tool was perceived as trustworthy. Participants also expressed relational trust, placing confidence in CP systems based on endorsements from respected peers, academic institutions, or regulatory agencies. However, both of these trust strategies raise significant concerns: confirmation bias could lead users to overvalue outputs that agree with their assumptions, while surrogate trust could be misapplied in the absence of robust performance validation.

This study advances empirical understanding of how trust is formed and calibrated around AI-based CP technologies. While trust is commonly framed as a function of technical performance, our findings show it is deeply shaped by cognitive heuristics, social relationships, and alignment with entrenched epistemologies. These dynamics can facilitate intuitive verification but may also constrain the transformative potential of CP systems by reinforcing existing beliefs. To address this, we recommend a dual strategy: (1) embedding CP tools within institutional frameworks that uphold rigorous validation, ethical oversight, and transparent design; and (2) providing clinicians with training and interface designs that support critical appraisal and minimize susceptibility to cognitive bias. Recalibrating trust to reflect actual system capacities, rather than familiarity or endorsement, is essential for ethically sound and clinically meaningful integration of CP technologies.
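
The pipeline sketched in the abstract (passive sensing of behavioral signals, followed by algorithmic inference) can be made concrete with a small example. The Python snippet below is purely hypothetical and is not taken from the paper: every data stream, feature name, and threshold in it is invented for illustration only.

    import numpy as np

    rng = np.random.default_rng(0)

    # Simulated minute-level accelerometer magnitudes for one day (1,440 minutes),
    # standing in for the passive wearable sensing described in the abstract.
    accel = np.abs(rng.normal(loc=0.2, scale=0.1, size=1440))

    # Simulated minute-level screen-on indicator (a device-usage signal).
    screen_on = rng.random(1440) < 0.15

    # Toy behavioral features.
    daily_activity = float(accel.sum())
    fragmentation = float(np.mean(np.abs(np.diff(accel))))  # crude variability proxy
    screen_minutes = int(screen_on.sum())

    # Deliberately naive "inference": flag the day if activity falls well below an
    # assumed personal baseline. Real CP systems would use validated models; this
    # single hypothetical threshold exists only to make the pipeline concrete.
    BASELINE_ACTIVITY = 280.0  # invented per-person baseline
    low_activity_flag = daily_activity < 0.7 * BASELINE_ACTIVITY

    print(f"activity={daily_activity:.1f}  fragmentation={fragmentation:.3f}  "
          f"screen_minutes={screen_minutes}  flag={low_activity_flag}")

The sketch also makes the stakeholders' trust criteria tangible: the inputs, threshold, and decision logic are fully visible here (explainability), but nothing about the single-threshold rule has been validated against real-world outcomes (demonstrable rigor), and a clinician would still need to weigh the flag against their own judgment (a non-substitutive role).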


Citation

Please cite as:

Rai A, Hurley ME, Herrington J, Storch EA, Zampella CJ, Parish-Morris J, Sonig A, Lázaro-Muñoz G, Kostick-Quenet K

Stakeholder Criteria for Trust in Artificial Intelligence–Based Computer Perception Tools in Health Care: Qualitative Interview Study

J Med Internet Res 2025;27:e78757

DOI: 10.2196/78757

PMID: 41385782

PMCID: 12743233


© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.