Accepted for/Published in: JMIR Research Protocols

Date Submitted: Jun 3, 2024
Open Peer Review Period: Jun 3, 2024 - Jul 29, 2024
Date Accepted: Nov 12, 2024

The final, peer-reviewed published version of this preprint can be found here:

Exploring the Credibility of Large Language Models for Mental Health Support: Protocol for a Scoping Review

Gautam D, Kellmeyer P

JMIR Res Protoc 2025;14:e62865

DOI: 10.2196/62865

PMID: 39879615

PMCID: 11822324

Exploring the Credibility of Large Language Models for Mental Health Support: Protocol for a Scoping Review

  • Dipak Gautam; 
  • Philipp Kellmeyer

ABSTRACT

Background:

The rapid evolution of Large Language Models (LLMs), such as BERT and GPT, has introduced significant advancements in natural language processing. These models are increasingly integrated into various applications, including mental health support. However, the credibility of LLMs in providing reliable and explainable mental health information and support remains underexplored.

Objective:

This scoping review systematically maps the factors influencing the credibility of LLMs in mental health support, including reliability, explainability, and ethical considerations. The review is expected to offer critical insights for practitioners, researchers, and policymakers, guiding future research and policy development. These findings will contribute to the responsible integration of LLMs into mental health care, with a focus on maintaining ethical standards and user trust.

Methods:

The review follows the PRISMA-ScR guidelines and the Joanna Briggs Institute (JBI) methodology. Eligibility criteria include studies that apply transformer-based generative language models, such as BERT and GPT, in mental health support. Sources include PsycINFO, MEDLINE via PubMed, Web of Science, IEEE Xplore, and the ACM Digital Library. The systematic search covers studies published from 2019 onward and will be updated through October 2024. The Population-Concept-Context (PCC) framework guides the inclusion criteria. Two independent reviewers will screen studies and extract data, resolving discrepancies through discussion. Data will be synthesized qualitatively and presented descriptively.

Results:

The study is currently in progress, with the systematic search completed and the screening phase ongoing. We expect to complete data extraction by early November 2024 and synthesis by late November 2024.

Conclusions:

This scoping review will map the current evidence on the credibility of LLMs in mental health support. It will identify factors influencing the reliability, explainability, and ethical considerations of these models, providing insights for practitioners, researchers, policymakers, and users. These findings will fill a critical gap in the literature and inform future research, practice, and policy development, ensuring the responsible integration of LLMs in mental health services.



© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft other than for review purposes.