Currently submitted to: JMIR mHealth and uHealth
Date Submitted: May 2, 2026
Open Peer Review Period: May 6, 2026 - Jul 1, 2026
(currently open for review)
Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.
The Use of Therapeutic Conversational Agents (TCAs) for Depression Treatment in Cancer Patients: How to Guide Patients Who “Self-Prescribe” TCAs and Considerations for Potential Future Clinical Implementation
ABSTRACT
Depression is a common comorbidity among cancer patients that increases mortality and worsens quality of life and treatment adherence. Financial, geographic, and cultural barriers may limit access to conventional mental health care, and in response, many patients may “self-prescribe” digital mental health tools (DMHTs) without clinical guidance. Surveys indicate that roughly one in four Americans already uses large language models (LLMs) for mental health support, and this pattern almost certainly extends to cancer patients. LLM-based mental health chatbots fall under the broader category of therapeutic conversational agents (TCAs), which is the focus of this paper. This viewpoint argues that clinicians are in a position analogous to that of providers faced with unregulated dietary supplements: patients are using these tools regardless of physician endorsement, and they need informed clinical guidance. We first characterize the biological, psychological, and social dimensions of depression that are specific to cancer patients, including tumor type, disease stage, comorbidities, psychiatric history, personality factors, social support, and financial burden, and explain why these dimensions create heterogeneous risk profiles that must inform TCA deployment decisions. We then review current evidence on TCA capabilities and limitations. TCAs demonstrate efficacy for mild-to-moderate depression in short-course trials, and users form therapeutic bonds with them comparable to those formed with human therapists. However, critical limitations remain: TCAs can fail to respond appropriately to simulated suicidality, show limited cultural competence for non-Western and non-English-speaking populations, and may foster sycophancy-driven “delusional spiraling” among users. Drawing on this analysis, we offer ten clinical recommendations organized around technology assessment, depression severity, unhealthy use patterns, and future FDA-approved deployment. We recommend that TCAs serve only as adjuncts, never replacements, for patients with moderate-to-severe depression, high self-harm risk, or problematic technology use patterns. We also recommend that clinicians never delegate crisis monitoring to these tools. Finally, we argue that purely outcome-based frameworks for evaluating TCA integration risk undervaluing the intrinsic goods of human therapeutic relationships, particularly for cancer patients confronting isolation, existential distress, and mortality. Human-centered care ought to remain grounded in genuine vulnerability and reciprocity, and this approach should serve as the normative foundation guiding TCA adoption.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.