
Accepted for/Published in: JMIR Mental Health

Date Submitted: Mar 7, 2025
Date Accepted: Jun 14, 2025

The final, peer-reviewed published version of this preprint can be found here:


Campbell LO, Babb K, Lambie GW, Hayes BG

An Examination of Generative AI Response to Suicide Inquires: Content Analysis

JMIR Ment Health 2025;12:e73623

DOI: 10.2196/73623

PMID: 40811811

PMCID: 12371289

Generative Artificial Intelligence Responses to Suicide Inquiries

  • Laurie O. Campbell
  • Kathryn Babb
  • Glenn W. Lambie
  • B. Grant Hayes

ABSTRACT

Background:

Generative artificial intelligence chatbots are an online source of information consulted by youth to gain insight into mental health and wellness behaviors. However, the accuracy and content of generative artificial intelligence responses to questions related to suicide have not been systematically investigated.

Objective:

The present study therefore investigated general-purpose (not counseling-specific) generative artificial intelligence chatbots’ responses to questions regarding suicide.

Methods:

A content analysis was conducted of the responses that generative artificial intelligence chatbots provided to questions about suicide. In phase one of the study, the generative chatbots examined included (a) Google Bard/Gemini, (b) Bing/Microsoft CoPilot, (c) ChatGPT 3.5, and (d) Claude. In phase two, conducted a year later, responses from additional generative chatbots were analyzed: Google Gemini, Claude 2, xAI Grok 2, MistralAI, and MetaAI.

Results:

A linguistic analysis using the Linguistic Inquiry and Word Count (LIWC) program indicated evidence of authenticity and tone within the chatbots’ responses. The depth and accuracy of the responses increased between phase one and phase two of the study, and the generative artificial intelligence chatbots’ responses were more comprehensive and responsive during phase two than phase one. Specifically, the phase two responses provided more information regarding all aspects of suicide (e.g., signs of suicide, lethality, resources, and ways to support those in crisis). Another difference between the first and second phases was the greater emphasis on the 988 suicide hotline number.

Conclusions:

While this dynamic information may be helpful for youth in need, the importance of individuals seeking help from a trained mental health professional remains. Further, generative artificial intelligence algorithms related to suicide questions should be checked periodically to ensure best practices regarding suicide prevention are being communicated. Clinical Trial: n/a




© The authors. All rights reserved. This is a privileged document currently under peer-review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.