Accepted for/Published in: JMIR Mental Health
Date Submitted: Jun 17, 2025
Date Accepted: Nov 21, 2025
Stakeholder Perspectives on Humanistic Implementation of Computer Perception in Healthcare: A Qualitative Study
ABSTRACT
Background:
Computer perception (CP) technologies—including digital phenotyping, affective computing, and related passive sensing approaches—offer unprecedented opportunities to personalize healthcare, especially mental healthcare, yet they also provoke concerns about privacy, bias, and the erosion of empathic, relationship-centered practice. At present, it remains unclear what stakeholders who design, deploy, and experience these tools in real-world settings perceive as the risks and benefits of CP technologies.
Objective:
This study aimed to explore key stakeholder perspectives on the potential benefits, risks, and concerns surrounding the integration of CP technologies into patient care. A better understanding of these concerns is crucial for responding to and mitigating them through design and implementation strategies that augment, rather than compromise, patient-centered and humanistic care and associated outcomes.
Methods:
We conducted in-depth, semi-structured interviews with 102 stakeholders involved at key points in CP's development and implementation: adolescent patients (n = 20) and their caregivers (n = 20), frontline clinicians (n = 20), technology developers (n = 21), and ethics, legal, policy, or philosophy scholars (n = 21). Interviews (approximately 45 minutes each) explored perceived benefits, risks, and implementation challenges of CP in clinical care. Transcripts underwent thematic analysis by a multidisciplinary team; reliability was enhanced through double coding and consensus adjudication.
Results:
Stakeholders raised concerns across seven domains: (1) data privacy and protection (86%); (2) trustworthiness and data integrity (71%); (3) direct and indirect patient harms (64%); (4) utility and implementation challenges (59%); (5) patient-specific relevance (24%); (6) regulation and governance (17%); and (7) philosophical critiques of reductionism (13%). A cross-cutting insight was the primacy of context and subjective meaning in determining whether CP outputs are clinically valid and actionable. Participants warned that without attention to these factors, algorithms risk misclassifying patients and dehumanizing care.
Conclusions:
To operationalize humanistic safeguards, we propose "personalized roadmaps": co-designed plans that predetermine which metrics will be monitored, how and when feedback is shared, thresholds for clinical action, and procedures for reconciling discrepancies between algorithmic inferences and lived experience. Roadmaps embed patient education, dynamic consent, and tailored feedback, thereby aligning CP deployment with patient autonomy, therapeutic alliance, and ethical transparency. This multi-stakeholder study provides the first comprehensive, evidence-based account of the relational, technical, and governance challenges raised by CP tools in clinical care. By translating these insights into personalized roadmaps, we offer a practical framework for developers, clinicians, and policymakers seeking to harness continuous behavioral data while preserving the humanistic core of care.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.