Currently accepted at: JMIR Formative Research
Date Submitted: Oct 1, 2025
Date Accepted: Mar 20, 2026
This paper has been accepted and is in production; it will appear shortly at DOI 10.2196/85138.
An Exploratory Investigation of AI Psychotherapists in Depression Treatment from the Patient’s Perspective: Applicable Scenarios, Desired Features, and Risks
ABSTRACT
Background:
Depression is a pervasive global mental health problem, yet access to trained professionals remains severely limited. With the rapid advancement of artificial intelligence (AI), digital tools are increasingly viewed as a viable way to address this shortage. However, questions remain about how digital platforms for mental healthcare can be designed effectively.
Objective:
This study investigates, from an end-user (patient) perspective, the potential for AI psychotherapists to address the supply–demand imbalance in mental healthcare.
Methods:
A grounded theory approach was applied to analyse qualitative responses from 452 individuals with varying severities of depression. The analysis explored participants' perspectives on the potential use of AI in treating depression, the characteristics expected of an AI psychotherapist, and the associated perceived risks.
Results:
The findings reveal a diverse set of applicable scenarios for AI psychotherapists, spanning diagnostic support, treatment, consultation, self-management, and emotional companionship. Key desired features include professionalism, warmth, precision care, empathy, remote service delivery, active listening, personalisation, flexible treatment options, patience, trustworthiness, and availability as a basic alternative to human care. Critical concerns include diagnostic inaccuracy, treatment errors, privacy breaches, lack of human interaction, technical malfunctions, and lack of emotional engagement. Based on these findings, a general MoSCoW prioritisation framework is proposed to guide AI psychotherapist design. Tailored MoSCoW frameworks are also discussed to adapt AI psychotherapist design to diverse user profiles, accounting for differences in stigma levels, depression severity, trust in AI, and privacy awareness.
Conclusions:
This study provides a patient-centred framework for designing AI psychotherapists and complements the existing literature by offering actionable guidance for designing AI mental healthcare tools that are both clinically effective and emotionally attuned to user needs.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer-review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft other than for review purposes.