
Accepted for/Published in: JMIR Mental Health

Date Submitted: Jun 2, 2025
Date Accepted: Aug 21, 2025

The final, peer-reviewed published version of this preprint can be found here:

“It’s Not Only Attention We Need”: Systematic Review of Large Language Models in Mental Health Care

Bucher A, Egger S, Vashkite I, Wu W, Schwabe G


JMIR Ment Health 2025;12:e78410

DOI: 10.2196/78410

PMID: 41186978

PMCID: 12627976

Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.

“Is Attention All We Need?” - A Systematic Literature Review of LLMs in Mental Healthcare

  • Andreas Bucher; 
  • Sarah Egger; 
  • Inna Vashkite; 
  • Wenyuan Wu; 
  • Gerhard Schwabe

ABSTRACT

Background:

Mental healthcare systems worldwide face critical challenges, including limited access, shortages of clinicians, and stigma-related barriers. In parallel, Large Language Models (LLMs) have emerged as powerful tools capable of supporting therapeutic processes through natural language understanding and generation. While prior research has explored their potential, a comprehensive review assessing how LLMs are integrated into mental healthcare, particularly beyond technical feasibility, is still lacking.

Objective:

This systematic literature review investigates and conceptualizes the application of LLMs in mental healthcare by examining their technical implementation, design characteristics, and situational use across different touchpoints along the patient journey. It introduces a three-layer morphological framework to structure and analyze how LLMs are applied, with the goal of informing future research and design.

Methods:

Following the methodology of vom Brocke et al. [1], a systematic literature review was conducted across PubMed, IEEE Xplore, JMIR, ACM, and AIS databases, yielding 807 studies. After multiple evaluation steps, 55 studies were included. These were categorized and analyzed based on the patient journey, design elements, and underlying model characteristics.

Results:

Most studies assessed technical feasibility, whereas only a few examined the impact of LLMs on therapeutic outcomes. LLMs were used primarily for classification and text generation tasks, with limited evaluation of safety, hallucination risks, or reasoning capabilities. Design aspects such as user roles, interaction modalities, and interface elements were often underexplored, despite their significant influence on user experience. Furthermore, most applications focused on single-user contexts, overlooking opportunities for integrated care environments such as AI-blended therapy. The proposed three-layer framework, consisting of the Situation layer (L1), the Interface layer (L2), and the LLM layer (L3), highlights critical design trade-offs and unmet needs in current research.
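As an illustrative sketch only, the three analytical layers described above could be represented as a small typed structure. The field names and example values here are hypothetical and do not reproduce the paper's actual coding scheme:

```python
from dataclasses import dataclass

@dataclass
class SituationLayer:
    """L1: where along the patient journey the LLM is used."""
    touchpoint: str          # e.g. "screening", "therapy", "aftercare"
    user_roles: list[str]    # e.g. ["patient"] or ["patient", "therapist"]

@dataclass
class InterfaceLayer:
    """L2: how users interact with the system."""
    modality: str            # e.g. "text chat", "voice"
    interface_elements: list[str]

@dataclass
class LLMLayer:
    """L3: the underlying model and its task."""
    model: str               # e.g. "GPT-4"
    task: str                # e.g. "classification", "text generation"
    safety_evaluated: bool

@dataclass
class LLMApplication:
    """One reviewed application, coded across all three layers."""
    situation: SituationLayer
    interface: InterfaceLayer
    llm: LLMLayer
```

Coding each reviewed study into such a structure makes gaps visible, for example how many applications leave `safety_evaluated` as `False` or involve only a single user role.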

Conclusions:

LLMs hold promise for enhancing accessibility, personalization, and efficiency in mental healthcare. However, current implementations often overlook essential design and contextual factors that influence real-world adoption and outcomes. The review underscores that the “self-attention” mechanism, a key component of LLMs, alone is not sufficient. Future research must go beyond technical feasibility to explore integrated care models, user experience, and longitudinal treatment outcomes to responsibly embed LLMs into mental healthcare ecosystems.
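For readers unfamiliar with the "self-attention" mechanism the title alludes to, the following is a minimal, pure-Python sketch of scaled dot-product attention, the core operation of transformer-based LLMs. It is an illustration of the general technique only, not any particular model's implementation:

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V.

    Each argument is a list of vectors (lists of floats); queries and
    keys share dimension d, and there is one value vector per key.
    """
    d = len(keys[0])
    outputs = []
    for q in queries:
        # Similarity of this query to every key, scaled by sqrt(d).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        # Weighted average of the value vectors.
        outputs.append([sum(w * v[j] for w, v in zip(weights, values))
                        for j in range(len(values[0]))])
    return outputs
```

A query attends most strongly to the keys it is most similar to, and the output is the correspondingly weighted mix of values; the review's argument is that this mechanism, however powerful, does not by itself address design, safety, and care-integration questions.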




© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.