Currently accepted at: JMIR AI

Date Submitted: Oct 21, 2025
Date Accepted: Mar 31, 2026

This paper has been accepted and is currently in production.

It will appear shortly at DOI 10.2196/86265.

This is the final accepted version (not yet copyedited).

Suicidal Ideation in Online Spaces Through the Lens of Interpersonal Theory of Suicide: Exploratory Study of Self-Disclosure, Peer Support, and AI Responses

  • Soorya Ram Shimgekar; 
  • Violeta J. Rodriguez; 
  • Paul A. Bloom; 
  • Dong Whi Yoo; 
  • Koustuv Saha

ABSTRACT

Background:

Suicide is a critical global public health issue, with millions experiencing Suicidal Ideation (SI) each year. Online platforms, such as Reddit, provide spaces where individuals express suicidal thoughts and seek peer support. While prior computational research has leveraged machine learning and natural language analysis to detect SI, much of it lacks grounding in psychological theory, limiting interpretability and intervention design.

Objective:

This study applies the Interpersonal Theory of Suicide (IPTS) to understand the underlying psychosocial mechanisms driving high-risk suicidal intent in online spaces, analyze linguistic expressions of SI, and assess the role of AI systems in providing supportive responses.

Methods:

We analyzed 59,607 posts from Reddit’s r/SuicideWatch community. Posts were categorized into four SI dimensions (Loneliness, Lack of Reciprocal Love, Self-Hate, and Liability) and three IPTS-based risk factors (Thwarted Belongingness, Perceived Burdensomeness, and Acquired Capability for Suicide). High-risk posts were identified based on language markers of planning, attempts, and intent. We further conducted psycholinguistic and content analyses of supportive responses and evaluated AI chatbot-generated replies for structural coherence and empathy.

Results:

High-risk SI posts contained frequent references to planning and attempts (21.3%), methods and tools (18.6%), and expressions of weakness and pain (24.9%). Supportive peer responses varied significantly across SI stages (P < .001), with deeper empathy and self-disclosure emerging in replies to high-risk posts. AI chatbot responses demonstrated improved structural coherence (Cohen’s κ = 0.74) but were rated significantly lower on personalization and emotional depth (P < .001) by expert evaluators.

Conclusions:

Grounding computational analysis in IPTS provides richer theoretical insight into SI expressed online. While AI-based systems can enhance the structural and linguistic quality of supportive messages, they currently lack the nuanced empathy and contextual awareness needed for effective mental health support. These findings highlight the need for theory-driven, human-AI collaborative frameworks in suicide prevention research and interventions.


Citation

Please cite as:

Shimgekar SR, Rodriguez VJ, Bloom PA, Yoo DW, Saha K

Suicidal Ideation in Online Spaces Through the Lens of Interpersonal Theory of Suicide: Exploratory Study of Self-Disclosure, Peer Support, and AI Responses

JMIR AI. 31/03/2026:86265 (forthcoming/in press)

DOI: 10.2196/86265

URL: https://preprints.jmir.org/preprint/86265


© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be published under a CC BY license, at this stage the authors and publisher expressly prohibit redistribution of this draft other than for review purposes.