Accepted for/Published in: JMIR Mental Health
Date Submitted: Jun 2, 2025
Date Accepted: Aug 21, 2025
“Is Attention All We Need?” - A Systematic Literature Review of LLMs in Mental Healthcare
ABSTRACT
Background:
Mental healthcare systems worldwide face critical challenges, including limited access, shortages of clinicians, and stigma-related barriers. In parallel, Large Language Models (LLMs) have emerged as powerful tools capable of supporting therapeutic processes through natural language understanding and generation. While prior research has explored their potential, a comprehensive review assessing how LLMs are integrated into mental healthcare, particularly beyond technical feasibility, is still lacking.
Objective:
This systematic literature review investigates and conceptualizes the application of LLMs in mental healthcare by examining their technical implementation, design characteristics, and situational use across different touchpoints along the patient journey. It introduces a three-layer morphological framework to structure and analyze how LLMs are applied, with the goal of informing future research and design in this domain.
Methods:
Following the methodology of vom Brocke et al. [1], a systematic literature review was conducted across PubMed, IEEE Xplore, JMIR, ACM, and AIS databases, yielding 807 studies. After multiple evaluation steps, 55 studies were included. These were categorized and analyzed based on the patient journey, design elements, and underlying model characteristics.
Results:
Most studies assessed technical feasibility, whereas only a few examined the impact of LLMs on therapeutic outcomes. LLMs were used primarily for classification and text generation tasks, with limited evaluation of safety, hallucination risks, or reasoning capabilities. Design aspects such as user roles, interaction modalities, and interface elements were often underexplored, despite their significant influence on user experience. Furthermore, most applications focused on single-user contexts, overlooking opportunities for integrated care environments such as AI-blended therapy. The proposed three-layer framework, comprising the Situation layer (L1), the Interface layer (L2), and the LLM layer (L3), highlights critical design trade-offs and unmet needs in current research.
Conclusions:
LLMs hold promise for enhancing accessibility, personalization, and efficiency in mental healthcare. However, current implementations often overlook essential design and contextual factors that influence real-world adoption and outcomes. The review underscores that the “self-attention” mechanism, a key component of LLMs, alone is not sufficient. Future research must go beyond technical feasibility to explore integrated care models, user experience, and longitudinal treatment outcomes to responsibly embed LLMs into mental healthcare ecosystems.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer-review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.