
Accepted for/Published in: JMIR Research Protocols

Date Submitted: Jun 3, 2024
Open Peer Review Period: Jun 3, 2024 - Jul 29, 2024
Date Accepted: Nov 12, 2024

The final, peer-reviewed published version of this preprint can be found here:

Gautam D, Kellmeyer P

Exploring the Credibility of Large Language Models for Mental Health Support: Protocol for a Scoping Review

JMIR Res Protoc 2025;14:e62865

DOI: 10.2196/62865

PMID: 39879615

PMCID: 11822324

Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.

Exploring the Credibility of Large Language Models for Mental Health Support: Protocol for a Scoping Review

  • Dipak Gautam; 
  • Philipp Kellmeyer

ABSTRACT

Background:

The rapid evolution of Large Language Models (LLMs), such as BERT and GPT, has introduced significant advancements in natural language processing. These models are increasingly integrated into various applications, including mental health support. However, the credibility of LLMs in providing reliable and explainable mental health information and support remains underexplored.

Objective:

This scoping review aims to systematically explore and map the factors influencing the credibility of LLMs in mental health support. Specifically, the review will assess LLMs' reliability, explainability, and ethical implications in this context.

Methods:

The review will follow the PRISMA extension for scoping reviews (PRISMA-ScR) and the Joanna Briggs Institute (JBI) methodology. A comprehensive search will be conducted in databases including PsycINFO, Medline via PubMed, Web of Science, IEEE Xplore, and the ACM Digital Library. Peer-reviewed studies published in English from 2019 onward will be included. The Population-Concept-Context (PCC) framework will guide the inclusion criteria. Two independent reviewers will screen studies and extract data, resolving discrepancies through discussion. Data will be synthesized and presented descriptively.

Results:

The review will map the current evidence on the credibility of LLMs in mental health support. It will identify factors influencing the reliability and explainability of these models and discuss ethical considerations for their use. The findings will provide insights for practitioners, researchers, policymakers, and users.

Conclusions:

This scoping review will fill a critical gap in the literature by systematically examining the credibility of LLMs in mental health support. The results will inform future research, practice, and policy development, ensuring the responsible integration of LLMs in mental health services.




© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.