Accepted for/Published in: JMIR Mental Health

Date Submitted: May 10, 2024
Open Peer Review Period: May 10, 2024 - Jul 5, 2024
Date Accepted: Dec 23, 2024

The final, peer-reviewed published version of this preprint can be found here:

Exploring the Ethical Challenges of Conversational AI in Mental Health Care: Scoping Review

Rahsepar Meadi M, Sillekens T, Metselaar S, van Balkom AJ, Bernstein JS, Batelaan N

Exploring the Ethical Challenges of Conversational AI in Mental Health Care: Scoping Review

JMIR Ment Health 2025;12:e60432

DOI: 10.2196/60432

PMID: 39983102

PMCID: 11890142

Ethics of Conversational Artificial Intelligence in Mental Health: A Scoping Review

Mehrdad Rahsepar Meadi; Tomas Sillekens; Suzanne Metselaar; Anton J.L.M. van Balkom; Justin S. Bernstein; Neeltje Batelaan

ABSTRACT

Background:

Conversational artificial intelligence (CAI) is emerging as a promising new digital technology for mental health care. CAI applications, such as psychotherapeutic chatbots, are already available in app stores.

Objective:

This scoping review aims to provide a comprehensive overview of the ethical considerations surrounding the use of CAI as a therapist for individuals with mental health disorders. The secondary aim is to delineate future research directions in this evolving field.

Methods:

We conducted a systematic search in PubMed, Embase, APA PsycINFO, Web of Science, Scopus, The Philosopher’s Index, and ACM Digital Library. Our search comprised three elements concerning embodied AI, ethics, and mental health, combined with the AND operator. We defined CAI as a conversational agent that interacts with a person and uses natural language processing (NLP) to formulate output. We included articles discussing ethical challenges related to AI-driven conversational agents intended to function as a therapist for individuals with mental health issues, and added further articles through snowball searching. Only articles in English or Dutch were included. All article types were considered, except symposium abstracts. Two researchers (MRM and TS) independently screened articles for eligibility. An initial charting form was created based on the expected considerations and was revised and extended during the charting process. The ethical challenges were grouped into themes; when a concern occurred in more than two articles, we identified it as a distinct theme.

Results:

We included 73 articles, of which 90% were published in 2018 or later. Most were reviews (27%), followed by articles using empirical data collection methods such as surveys or other qualitative methods (14%). The following 10 themes were distinguished: (1) harm (reduction) and safety (discussed in 52% of articles), within which the most common topics were suicidality and crisis management, harmful or wrong suggestions, and the risk of dependency on CAI; (2) explicability, transparency, and trust (25%), including topics such as the effects of “black-box” algorithms on trust; (3) responsibility and accountability (26%); (4) empathy and humanness (21%); (5) justice (33%), including themes such as health inequalities due to differences in digital literacy; (6) anthropomorphization and deception (18%); (7) autonomy (11%); (8) effectiveness (30%); (9) privacy and confidentiality (64%); and (10) concerns for health care workers’ jobs (12%). Other themes were discussed in 14% of articles.

Conclusions:

Our scoping review has comprehensively covered various ethical aspects of CAI in mental health care. However, certain themes, including the climate impact of AI, the responsibility gap, and especially the nuanced examination of therapeutic processes, remain underexplored. Additionally, the scarcity of qualitative studies and the underrepresentation of key stakeholders highlight areas for future research to deepen our understanding of the ethical implications of CAI in mental health.




© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be published under a CC BY license, at this stage the authors and publisher expressly prohibit redistribution of this draft other than for review purposes.