Currently submitted to: Journal of Participatory Medicine
Date Submitted: Mar 11, 2026
Open Peer Review Period: Mar 24, 2026 - May 19, 2026
(currently open for review)
Warning: This is an author submission that has not been peer reviewed or edited. Preprints, unless they show as "accepted", should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.
Artificial Intelligence as Therapist, Companion, and Romantic Partner: Emerging Roles, Benefits, and Risks for Mental Health in Participatory Medicine
ABSTRACT
The line between tool and companion was once obvious, but conversational artificial intelligence (AI) is blurring it in ways few researchers anticipated. Large language model (LLM) chatbots and purpose-built AI companion agents are now used by millions of people every day. People use them not simply to retrieve information but to seek emotional support, process personal distress, and sustain what many describe as genuine relationships. Research puts the scale of this shift in sharp relief: nearly half (48.7%) of individuals with self-reported mental health concerns report having used an LLM for mental health support or therapy-related purposes [1]. This Viewpoint argues that these uses are best understood through three distinct but overlapping relational frames: AI as therapist substitute, AI as companion or confidant substitute, and AI as romantic partner substitute. Drawing on empirical literature across digital mental health, psychology, communication, and human–computer interaction, and grounded in the values of participatory medicine, this paper examines why people turn to AI for these intimate purposes, what they appear to gain, and what clinicians, designers, developers, and policymakers should scrutinize more carefully as the practice evolves. The picture that emerges is neither straightforwardly optimistic nor dismissive. Therapeutic chatbots can produce real symptom reduction for users, AI companionship can ease loneliness in genuine, if bounded, ways, and the emotional relief some people experience in these interactions is not an artifact of naivety. But the same systems that lower the barriers to disclosure also lower the barriers to harm. AI chatbots regularly hallucinate clinical guidance, validate dysfunctional beliefs, handle crises without accountability, and may cultivate the very isolation they are meant to relieve. Responsible integration therefore demands more than a disclaimer: it requires transparent design, thoughtful escalation pathways, ongoing evaluation, and a commitment to the human connection that participatory medicine places at the center of good care.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.