Currently submitted to: JMIR Mental Health

Date Submitted: Jan 5, 2026
Open Peer Review Period: Feb 6, 2026 - Apr 3, 2026
(currently open for review)

Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.

AI Psychosis or Friendship? Rethinking Epistemic Risk in Human–AI Companionship

  • Charlotte Blease

ABSTRACT

Concerns about “AI psychosis” have intensified as conversational artificial intelligence systems become increasingly embedded in everyday life. While some AI-mediated interactions raise legitimate psychiatric and safety concerns, the term AI psychosis is currently applied to a heterogeneous set of phenomena without sufficient conceptual discrimination. This Viewpoint focuses on only one aspect of this umbrella term, at the more benign end of the scale. It argues that one influential but under-examined mechanism is the tendency of generative AI systems to function as socially responsive companions, eliciting patterns of affirmation and emotional alignment that resemble human friendship. Drawing on empirical research from social psychology and relationship science, the article challenges the assumption that epistemic unreliability in AI interaction is inherently pathological. Human friendships routinely involve motivated bias, selective affirmation, and softened epistemic challenge, and these features are widely recognised as psychologically protective rather than disordered. The paper argues that current debates often apply a double standard, tolerating epistemic indulgence in human relationships while pathologising analogous dynamics in human–AI interaction. This asymmetry reflects anthropocentric bias rather than a clear clinical distinction. The article further contends that the ethical response to AI-mediated mental health risk should not be framed in terms of prohibition or moral panic. Instead, it proposes that AI systems be deliberately designed to function as epistemically disciplined companions: supportive without being collusive, responsive without undermining critical thinking. The concept of epistemic braking is introduced to capture design strategies that promote epistemic humility while preserving psychological support. Implications are drawn for psychiatry, ethics, industry design, and regulation.
In particular, the article calls for clearer distinctions between psychosis and epistemic vulnerability, and for regulatory approaches that prioritise epistemic safeguards. Properly constrained, AI systems need not undermine mental health; they should strive to model a form of companionship that is both supportive and epistemically responsible.


 Citation

Please cite as:

Blease C

AI Psychosis or Friendship? Rethinking Epistemic Risk in Human–AI Companionship

JMIR Preprints. 05/01/2026:90740

DOI: 10.2196/preprints.90740

URL: https://preprints.jmir.org/preprint/90740


© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.