Accepted for/Published in: Journal of Medical Internet Research
Date Submitted: Jun 11, 2025
Date Accepted: Nov 3, 2025
Stakeholder Criteria for Trust in Artificial Intelligence-Based Computer Perception Tools in Healthcare: A Qualitative Interview Study
ABSTRACT
Background:
Computer perception (CP) technologies hold significant promise for advancing precision mental healthcare by leveraging algorithmic analysis of continuous, passively sensed data (e.g., behavioral activity, geolocation, vocal features, device usage, and ambient environmental data) from wearables and smartphones to infer clinically meaningful behavioral and physiological states.
Objective:
Successful implementation of CP technologies critically depends on cultivating well-founded stakeholder trust. To investigate the conditions under which such trust might be established, we conducted semi-structured interviews across four stakeholder groups.
Methods:
We conducted 80 semi-structured interviews with a purposive sample of adolescents diagnosed with autism, Tourette syndrome, anxiety, OCD, or ADHD (n = 20) and their caregivers (n = 20), practicing clinicians across psychiatry, psychology, and pediatrics (n = 20), and CP system developers (n = 20). Interview transcripts were coded by two independent coders and analyzed using multi-stage, inductive thematic content analysis to identify prominent themes.
Results:
Across stakeholder groups, five core criteria emerged as prerequisites for trust in CP outputs: (1) epistemic alignment—consistency between system outputs, personal experience, and existing diagnostic frameworks; (2) demonstrable rigor—training on representative data and validation in real-world contexts; (3) explainability—transparent communication of input variables, thresholds, and decision logic; (4) sensitivity to complexity—capacity to accommodate heterogeneity and comorbidity in symptom expression; and (5) a non-substitutive role—technologies must augment, rather than supplant, clinical judgment. A novel and cautionary finding was that epistemic alignment—whether outputs affirmed participants’ preexisting beliefs, diagnostic expectations, or internal states—was a dominant factor in whether the tool was perceived as trustworthy. Participants also expressed relational trust, placing confidence in CP systems based on endorsements from respected peers, academic institutions, or regulatory agencies.
Conclusions:
This study advances empirical understanding of how trust is formed and calibrated around AI-based CP technologies. While trust is commonly framed as a function of technical performance, our findings show it is deeply shaped by cognitive heuristics, social relationships, and alignment with entrenched epistemologies. These dynamics can facilitate intuitive verification but may also constrain the transformative potential of CP systems by reinforcing existing beliefs. Further, the trust strategies participants described—epistemic alignment and relational trust—raise significant concerns: confirmation bias could lead users to overvalue outputs that agree with their assumptions, while surrogate trust could be misapplied in the absence of robust performance validation. To address these concerns, we argue (1) that CP tools should be embedded within institutional environments that uphold accountability, ethical integrity, and rigorous review before deployment, easing the burden on individual stakeholders to independently evaluate trustworthiness without the necessary resources, background knowledge, or skill sets; and (2) that educational initiatives move beyond calls for AI literacy to focus on design-based interventions that help users distinguish between intuitive and well-founded trust. Recalibrating trust to reflect actual system capacities, rather than familiarity or endorsement, is essential for ethically sound and clinically meaningful integration of CP technologies.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer-review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.