Accepted for/Published in: JMIR Formative Research
Date Submitted: May 28, 2025
Open Peer Review Period: May 28, 2025 - Jul 23, 2025
Date Accepted: Sep 25, 2025
Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.
Comparing Human and AI Therapists in Behavioral Activation Knowledge and Response: Pre-Post Comparative Evaluation Study
ABSTRACT
Background:
Large Language Models (LLMs) have rapidly advanced across numerous fields, including mental health care. A shortage of trained therapists and mental health care providers has driven informal use of LLMs for therapeutic support. However, their clinical utility remains poorly defined. This study aimed to systematically evaluate the capabilities and limitations of LLMs in single-turn therapeutic interactions compared to psychotherapists-in-training.
Objective:
To systematically evaluate and compare the therapeutic knowledge and single-turn response capabilities of LLMs versus psychotherapists-in-training in the context of Behavioral Activation therapy for depression, and to assess how both groups' performance changes when provided with structured therapeutic training materials.
Methods:
Participants (n=6 LLMs, n=8 humans) completed a questionnaire on depression and Behavioral Activation comprising 20 multiple-choice items and 10 therapy scenarios with 3 open-ended items each, which assessed empathic response, use of validation strategies, and Theory of Mind capabilities. Human participants completed the questionnaire before and after a 5-hour workshop and a five-week period with learning materials. LLMs received identical training content as context during the second test. All open-ended responses were rated on 5-point scales by two experts.
Results:
At baseline, LLMs demonstrated higher knowledge scores than human participants (61.0 vs. 52.0 out of 100 points) and were rated higher in empathy (U=2.0, P=.005), validation quality (U=2.5, P=.006), anticipation of cognition (U=0.0, P=.002), and anticipation of emotion (U=0.0, P=.002). Following Behavioral Activation training, LLMs maintained their performance advantage across both multiple-choice and open-ended items.
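The group comparisons above report Mann-Whitney U statistics, the standard nonparametric test for comparing ordinal ratings between two independent samples of unequal size. As a minimal sketch of how such a comparison is computed, with hypothetical 5-point ratings (not the study's data), using `scipy.stats.mannwhitneyu`:

```python
from scipy.stats import mannwhitneyu

# Hypothetical 5-point empathy ratings (NOT the study's data)
llm_ratings = [5, 4, 5, 4, 5, 4]            # n=6 LLM responses
human_ratings = [3, 2, 3, 4, 2, 3, 3, 2]    # n=8 trainee responses

# Two-sided Mann-Whitney U test comparing the two rating distributions
u_stat, p_value = mannwhitneyu(llm_ratings, human_ratings,
                               alternative="two-sided")
print(f"U={u_stat}, P={p_value:.3f}")
```

With ties present, as is typical for 5-point scales, `mannwhitneyu` applies a tie-corrected normal approximation rather than the exact distribution.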
Conclusions:
The results suggest that LLMs can generate high-quality therapeutic single-turn responses that integrate clinical knowledge with empathetic communication. The findings indicate LLMs' potential as valuable tools in mental health care, though further research should evaluate their performance in ongoing therapeutic relationships and clinical outcomes.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.