Accepted for/Published in: JMIR Mental Health
Date Submitted: Mar 7, 2025
Date Accepted: Jun 14, 2025
Generative Artificial Intelligence Responses to Suicide Inquiries
ABSTRACT
Background:
Generative artificial intelligence chatbots are an online source of information consulted by youth to gain insight into mental health and wellness behaviors. However, the accuracy and content of generative artificial intelligence responses to questions related to suicide have not been systematically investigated.
Objective:
Therefore, the present study investigated general (not counseling-specific) generative artificial intelligence chatbots’ responses to questions regarding suicide.
Methods:
A content analysis was conducted of generative artificial intelligence chatbots’ responses to questions about suicide. In phase one of the study, the generative chatbots examined included (a) Google Bard/Gemini, (b) Bing/Microsoft CoPilot, (c) ChatGPT 3.5, and (d) Claude. In phase two, conducted one year later, responses from additional generative chatbots were analyzed: Google Gemini, Claude 2, xAI Grok 2, MistralAI, and MetaAI.
Results:
A linguistic analysis using the Linguistic Inquiry and Word Count program indicated evidence of authenticity and tone in the chatbots’ responses. The depth and accuracy of the responses increased between phase one and phase two of the study, and the generative artificial intelligence chatbots’ responses were more comprehensive and responsive in phase two than in phase one. Specifically, the phase two responses provided more information on all aspects of suicide (e.g., signs of suicide, lethality, resources, and ways to support those in crisis). Another difference between the two phases was the greater emphasis on the 988 suicide hotline number.
Conclusions:
While this dynamic information may be helpful for youth in need, it remains important for individuals to seek help from a trained mental health professional. Further, generative artificial intelligence responses to suicide-related questions should be checked periodically to ensure that best practices in suicide prevention are being communicated. Clinical Trial: n/a
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer-review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.