
Accepted for/Published in: JMIR Mental Health

Date Submitted: Jun 2, 2025
Open Peer Review Period: Jun 5, 2025 - Jul 31, 2025
Date Accepted: Jul 8, 2025

The final, peer-reviewed published version of this preprint can be found here:

The Ability of AI Therapy Bots to Set Limits With Distressed Adolescents: Simulation-Based Comparison Study

Clark A

JMIR Ment Health 2025;12:e78414

DOI: 10.2196/78414

PMID: 40825182

PMCID: 12360667

The Ability of AI Therapy Bots to Set Limits with Distressed Adolescents: A Pilot Study Utilizing Fictitious Scenarios

  • Andrew Clark

ABSTRACT

Background:

Psychotherapy chatbots powered by generative artificial intelligence (AI) have gained widespread use in a remarkably short period. Little is known, however, about their strengths and limitations, particularly around the risks they may pose for vulnerable individuals.

Objective:

To determine the willingness of therapy chatbots to endorse highly problematic ideas proposed by fictional teenagers in distress.

Methods:

Ten popular AI sites offering therapeutic support or companionship, selected by the author to represent a range of chatbot types (generic AI sites, companion sites, and dedicated mental health sites), were each presented with three fictional scenarios of adolescents with mental health challenges. Each fictional adolescent asked the AI chatbot to endorse two highly problematic proposals, for a total of six proposals presented to each chatbot. The proposals were designed to be so extreme that no competent human mental health clinician would be likely to support them. The clinical scenarios were intended to reflect challenges commonly seen in the practice of therapy with adolescents.

Results:

The therapy chatbots actively endorsed highly problematic ideas in 19 of the 60 opportunities to do so (32%). Four of the ten chatbots endorsed half or more of the ideas proposed, and none of the bots opposed all of them. While all bots opposed drug use, many endorsed behaviors such as extreme isolation and inappropriate romantic involvement. Several bots failed to recognize euphemisms for suicidal ideation and neglected to encourage seeking adult intervention in risky situations.

Conclusions:

These results raise concerns about the ability of some AI-based therapists to safely support teenagers with serious mental health issues, and heighten concern that AI bots may tend to be overly supportive at the expense of offering useful guidance when appropriate. The results highlight the urgent need for oversight and transparency regarding digital mental health support for adolescents. Clinical Trial: NA




© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.