
Accepted for/Published in: Journal of Medical Internet Research

Date Submitted: Oct 17, 2018
Open Peer Review Period: Oct 25, 2018 - Dec 20, 2018
Date Accepted: Apr 2, 2019
(closed for review)

The final, peer-reviewed published version of this preprint can be found here:

Development and Evaluation of ClientBot: Patient-Like Conversational Agent to Train Basic Counseling Skills

Tanana MJ, Soma CS, Srikumar V, Atkins DC, Imel ZE

J Med Internet Res 2019;21(7):e12529

DOI: 10.2196/12529

PMID: 31309929

PMCID: 6662153

Development and evaluation of ClientBot: A patient-like conversational agent to train basic counseling skills

  • Michael J. Tanana; 
  • Christina S. Soma; 
  • Vivek Srikumar; 
  • David C. Atkins; 
  • Zac E. Imel

ABSTRACT

Background:

Training therapists is both expensive and time-consuming. Degree-based training can require tens of thousands of dollars and hundreds of hours of expert instruction. Counseling skills practice often involves role-plays, standardized patients, or practice with real clients. Performance-based feedback is critical for skill development and expertise, but trainee therapists often receive minimal and subjective feedback, which is distal to their skill practice.

Objective:

In this study, we developed and evaluated a patient-like neural conversational agent that provides real-time feedback to trainees via chat-based interaction.

Methods:

The text-based conversational agent was trained on an archive of 2,354 psychotherapy transcripts and provided specific feedback on the use of basic interviewing/counseling skills (i.e., open questions and reflections; a reflection is a summary statement of what a client has said). A total of 151 non-therapists were randomized to either (1) immediate feedback on their use of open questions and reflections during a practice session with ClientBot or (2) initial education and encouragement on the skills.

Results:

Participants in the ClientBot condition used 91% more reflections during practice with feedback (p < .001) and 76% more reflections after feedback was removed (p < .001) relative to the control group. The treatment group used more open questions during training, but not after feedback was removed, suggesting that certain skills may not improve with performance-based feedback. Finally, after feedback was removed, the ClientBot group used 31% more listening skills overall (p < .001).

Conclusions:

This proof-of-concept study demonstrates that practice and feedback can improve trainee use of basic counseling skills using a method that can scale beyond the capacity of a single expert trainer.


 Citation

Please cite as:

Tanana MJ, Soma CS, Srikumar V, Atkins DC, Imel ZE

Development and Evaluation of ClientBot: Patient-Like Conversational Agent to Train Basic Counseling Skills

J Med Internet Res 2019;21(7):e12529

DOI: 10.2196/12529

PMID: 31309929

PMCID: 6662153

Per the author's request the PDF is not available.

© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.