Accepted for/Published in: JMIR Mental Health
Date Submitted: Dec 27, 2024
Date Accepted: May 29, 2025
The application and ethical implications of generative AI in mental health: A systematic review
ABSTRACT
Background:
Mental health disorders affect an estimated one in eight individuals globally, yet traditional interventions often face barriers such as limited accessibility, high costs, and persistent stigma. Recent advancements in generative AI (GenAI) have introduced AI systems capable of understanding and producing human-like language in real time. These developments present new opportunities to enhance mental health care.
Objective:
This review aims to systematically examine the current applications of GenAI in mental health, focusing on three core domains: diagnosis and assessment, therapeutic tools, and clinician support. In addition, it identifies and synthesizes key ethical issues reported in the literature.
Methods:
Following the 2020 PRISMA guidelines, a comprehensive literature search was conducted in PubMed, ACM Digital Library, Scopus, Embase, PsycInfo, and Google Scholar to identify peer-reviewed studies published from October 1, 2019, to September 30, 2024. After screening 783 records, 79 studies met the inclusion criteria.
Results:
The number of studies on GenAI applications in mental health has grown substantially since 2023. Studies on diagnosis and assessment (n=37) primarily employed GenAI models to detect depression and suicidality through text data. Therapeutic applications (n=20) investigated GenAI-based chatbots and adaptive systems for emotional and behavioral support, reporting promising outcomes but revealing limited real-world deployment and safety assurance. Clinician support studies (n=24) explored GenAI’s role in clinical decision making, documentation and summarization, therapy support, training and simulation, and psychoeducation. Ethical concerns were consistently reported across domains. Based on these findings, we propose GenAI4MH, an integrative ethical framework comprising four core dimensions—data privacy and security, information integrity and fairness, user safety, and ethical governance and oversight—to guide the responsible use of GenAI in mental health contexts.
Conclusions:
GenAI systems show promise in addressing the escalating global demand for mental health services. They may augment traditional approaches by enhancing diagnostic accuracy, offering more accessible support, and reducing clinicians' administrative burden. However, to ensure ethical and effective implementation, comprehensive safeguards, particularly around privacy, algorithmic bias, and responsible user engagement, must be established.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer-review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.