Currently submitted to: Interactive Journal of Medical Research

Date Submitted: Feb 9, 2026

Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.

AI-Augmented Psychotherapy Education: 16-Week Classroom Deployment Compared With Two Traditional Practice Methods

  • Ellen Converse; 
  • Ryan Louie; 
  • Debra Safer; 
  • Juan Pablo Pacheco; 
  • William Fang; 
  • Jamie Kent; 
  • Bruce Arnow; 
  • Diyi Yang

ABSTRACT

Background:

Efforts to enhance psychotherapy education may increasingly emphasize technology-assisted methods that extend practice beyond human supervision. Recent advances in artificial intelligence (AI) have enabled the automated assessment of therapeutic skills and the simulation of patient interactions. However, few studies have examined the feasibility, acceptability, and instructional integration of AI-augmented training tools in real-world training settings.

Objective:

The primary objective of this study was to evaluate the feasibility and perceived value of deploying an AI psychotherapy training platform (CARE) within a doctoral-level psychotherapy course sequence. Secondarily, we explore implementation barriers, ethical and pedagogical considerations, and factors influencing students’ acceptance and engagement with AI-assisted learning.

Methods:

Participants were 29 first-year clinical psychology doctoral students enrolled in a two-quarter introductory psychotherapy sequence at an APA-accredited clinical training program. In the first course, we developed an AI-augmented peer-roleplay assignment that supplemented peer feedback with AI feedback after each session. In the second course, students practiced with traditional video vignettes and voice-based AI-simulated patient scenarios, with AI feedback provided afterwards. Two web-based surveys—containing Likert-type and free-response items—were completed by 9 students (31%) in the first course and 25 (86%) in the second. Analyses included descriptive statistics, matched-pairs hypothesis testing, and thematic analysis, with memos of classroom interactions used to contextualize the data.

Results:

In the first course, many students were initially hesitant to use AI feedback because of privacy concerns; only 9 students opted to do so. While this group valued AI feedback for offering alternative phrasing and immediate suggestions, they rated peer feedback as more helpful and nuanced; AI feedback was limited by transcription errors and a narrow focus on empathy statements. In the second course, participating students (n = 25) completed weekly structured video vignette practice alongside voice-based AI-simulated patient sessions, allowing the two modalities to be compared. Students preferred video-based practice for its realism and emotional expressiveness, though many noted that AI-simulated patients enabled interactive, back-and-forth practice that video vignettes could not provide. Instructor feedback was added in the second half of the course in response to concerns that AI feedback focused too narrowly on microskills rather than on promoting therapeutic alliance during the session. Students expressed mixed views on the value of AI in clinical training: while some endorsed its utility, most also raised concerns about human relational dynamics being supplanted, nonconsensual use of their data to improve the AI, and the environmental impact of the generative AI industry writ large.

Conclusions:

Results indicate that implementation was feasible; however, students consistently preferred human feedback over AI-generated feedback and favored video-based vignettes over voice-based AI interactions. Students also reported concerns about the quality of feedback, data transparency, and broader ethical issues. Findings suggest that perceptions of AI's instructional utility depend on the degree of trust students place in the specific AI program and on their more general views of AI's impact. Accordingly, future deployments should assess trainees' attitudes toward AI, their trust in the technology, and its value relative to existing training resources before making general claims about AI's utility in psychotherapy education.


 Citation

Please cite as:

Converse E, Louie R, Safer D, Pacheco JP, Fang W, Kent J, Arnow B, Yang D

AI-Augmented Psychotherapy Education: 16-Week Classroom Deployment Compared With Two Traditional Practice Methods

JMIR Preprints. 09/02/2026:93199

DOI: 10.2196/preprints.93199

URL: https://preprints.jmir.org/preprint/93199


© The authors. All rights reserved. This is a privileged document currently under peer-review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.