This paper has been accepted and is currently in production.
It will appear shortly at DOI 10.2196/86265.
Suicidal Ideation in Online Spaces Through the Lens of Interpersonal Theory of Suicide: Exploratory Study of Self-Disclosure, Peer Support, and AI Responses
ABSTRACT
Background:
Suicide is a critical global public health issue, with millions of people experiencing suicidal ideation (SI) each year. Online platforms, such as Reddit, provide spaces where individuals express suicidal thoughts and seek peer support. While prior computational research has leveraged machine learning and natural language analysis to detect SI, much of it lacks grounding in psychological theory, limiting both interpretability and intervention design.
Objective:
This study applies the Interpersonal Theory of Suicide (IPTS) to understand the underlying psychosocial mechanisms driving high-risk suicidal intent in online spaces, analyze linguistic expressions of SI, and assess the role of AI systems in providing supportive responses.
Methods:
We analyzed 59,607 posts from Reddit’s r/SuicideWatch community. Posts were categorized along four SI dimensions (Loneliness, Lack of Reciprocal Love, Self-Hate, and Liability) and three IPTS-based risk factors (Thwarted Belongingness, Perceived Burdensomeness, and Acquired Capability for Suicide). High-risk posts were identified based on language markers of planning, attempts, and intent. We further conducted psycholinguistic and content analyses of supportive responses and evaluated AI chatbot-generated replies for structural coherence and empathy.
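The marker-based identification of high-risk posts described above can be illustrated with a simple keyword-lexicon sketch. The marker patterns and function name below are hypothetical stand-ins for illustration only; the study's actual lexicon and pipeline are not specified in this abstract.

```python
import re

# Hypothetical marker lexicon -- illustrative only, not the study's actual markers.
HIGH_RISK_MARKERS = [
    r"\bplan(?:ned|ning)?\b",   # planning language
    r"\battempt(?:ed|s)?\b",    # references to attempts
    r"\bmethods?\b",            # references to methods/tools
]

def flag_high_risk(post: str) -> bool:
    """Flag a post if it contains any high-risk language marker."""
    text = post.lower()
    return any(re.search(pattern, text) for pattern in HIGH_RISK_MARKERS)

print(flag_high_risk("I have been planning this for weeks"))  # True
print(flag_high_risk("Just feeling really down today"))       # False
```

In practice, such lexicon matching would only be a first-pass filter; the abstract does not state whether rule-based or model-based classification was used.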
Results:
High-risk SI posts contained frequent references to planning and attempts (21.3%), methods and tools (18.6%), and expressions of weakness and pain (24.9%). Supportive peer responses varied significantly across SI stages (P < .001), with deeper empathy and self-disclosure emerging in replies to high-risk posts. AI chatbot responses demonstrated improved structural coherence (Cohen’s κ = 0.74) but were rated significantly lower on personalization and emotional depth (P < .001) by expert evaluators.
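The Cohen’s κ reported above is a chance-corrected measure of inter-rater agreement. As a minimal illustration of how it is computed (using made-up binary ratings, not the study’s data):

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa: (observed - expected agreement) / (1 - expected agreement)."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    # Chance agreement: product of each rater's marginal label proportions.
    expected = sum((freq_a[label] / n) * (freq_b[label] / n) for label in labels)
    return (observed - expected) / (1 - expected)

# Toy ratings (1 = structurally coherent, 0 = not) from two hypothetical raters.
a = [1, 1, 0, 1, 0, 1, 1, 0, 0, 1]
b = [1, 0, 0, 1, 0, 1, 1, 0, 1, 1]
print(round(cohen_kappa(a, b), 4))  # 0.5833
```

Values near 0.74, as reported for the chatbot coherence ratings, are conventionally read as substantial agreement.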
Conclusions:
Grounding computational analysis in IPTS provides richer theoretical insight into SI expressed online. While AI-based systems can enhance the structural and linguistic quality of supportive messages, they currently lack the nuanced empathy and contextual awareness needed for effective mental health support. These findings highlight the need for theory-driven, human-AI collaborative frameworks in suicide prevention research and interventions.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have granted JMIR Publications an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be published under a CC BY license, at this stage the authors and publisher expressly prohibit redistribution of this draft other than for review purposes.