Accepted for/Published in: JMIR Medical Informatics
Date Submitted: Mar 21, 2022
Date Accepted: Jun 27, 2022
Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.
Developing An Automated Assessment of In-Session Patient Activation for Psychological Therapy: An Explainable Co-Development Approach
ABSTRACT
Background:
Patient activation is defined as a patient’s confidence and perceived ability to manage their own health. Patient activation has been a consistent predictor of long-term health and care costs, particularly for people with multiple long-term health conditions. However, there is currently no means of measuring patient activation from what is said in healthcare consultations. This may be particularly important for psychological therapy, because most current methods for evaluating therapy content are too time-consuming and costly for routine use. Natural language processing (NLP) has been used increasingly to classify and evaluate the content of psychological therapy, with the aim of making routine, systematic evaluation more accessible in terms of time and cost. However, comparatively little attention has been paid to algorithmic trust and interpretability, and few studies in the field have involved end-users or stakeholders in algorithm development.
Objective:
This study applied a responsible design approach to the use of NLP in developing an artificial intelligence (AI) model that automates the rating of a psychological therapy process measure: the Consultation Interactions Coding Scheme (CICS). The CICS assesses the level of patient activation observable in turn-by-turn psychological therapy interactions.
Methods:
With consent, 128 sessions of remotely delivered cognitive behavioral therapy from 53 participants experiencing multiple physical and mental health problems were anonymously transcribed and rated by trained human CICS coders. Using participatory methodology, a multidisciplinary team proposed candidate language features that they thought would discriminate between high and low patient activation. The team included service-user researchers, psychological therapists, applied linguists, digital research experts, AI ethics researchers, and NLP researchers. The identified language features were extracted from the transcripts alongside demographic features, and machine learning was applied using k-nearest neighbors (KNN) and bagged trees classifiers to assess whether in-session patient activation and interaction types could be accurately classified.
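The KNN step of the pipeline above can be sketched as follows. This is a minimal, self-contained illustration only: the feature vectors, feature names, and labels are hypothetical stand-ins, not the study's actual extracted language features or data.

```python
from collections import Counter
import math

def knn_predict(train_X, train_y, x, k=3):
    """Classify x by majority vote among its k nearest training points
    (Euclidean distance), as in a standard k-nearest-neighbors classifier."""
    dists = sorted(
        (math.dist(x, xi), yi) for xi, yi in zip(train_X, train_y)
    )
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Hypothetical per-interaction feature vectors, e.g.
# (first-person-pronoun rate, future-tense-verb rate) -- illustrative only.
train_X = [(0.9, 0.8), (0.8, 0.7), (0.7, 0.9),   # rated high activation
           (0.1, 0.2), (0.2, 0.1), (0.15, 0.3)]  # rated low/neutral
train_y = ["high", "high", "high", "low", "low", "low"]

print(knn_predict(train_X, train_y, (0.85, 0.75)))  # "high"
print(knn_predict(train_X, train_y, (0.12, 0.25)))  # "low"
```

In practice a library implementation (e.g. scikit-learn's `KNeighborsClassifier`, and a bagging ensemble of decision trees for the bagged trees model) would be used rather than this hand-rolled vote.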
Results:
The KNN classifier obtained 82.7% accuracy (0.79 precision and 0.82 recall) in classifying CICS-rated interaction types in a training dataset and 73% accuracy (0.82 precision and 0.80 recall) in a validation dataset. The bagged trees classifier obtained 85.7% accuracy (0.85 precision and 0.95 recall) on the training set and 81.2% accuracy (0.87 precision and 0.75 recall) on the validation set in differentiating between interactions rated high in patient activation and those rated low or neutral.
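The accuracy, precision, and recall figures reported above derive from binary confusion-matrix counts in the standard way. The sketch below shows the relationship using hypothetical counts (not the study's data):

```python
def classification_metrics(tp, fp, fn, tn):
    """Accuracy, precision, and recall from binary confusion-matrix counts:
    tp/fp = true/false positives, fn/tn = false/true negatives."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)   # of predicted positives, how many were right
    recall = tp / (tp + fn)      # of actual positives, how many were found
    return accuracy, precision, recall

# Hypothetical counts for illustration only.
acc, prec, rec = classification_metrics(tp=95, fp=17, fn=5, tn=83)
print(f"accuracy={acc:.3f} precision={prec:.3f} recall={rec:.3f}")
```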
Conclusions:
Interpretable language features identified through a multi-disciplinary collaboration can be used to discriminate psychological therapy session contents based on patient activation among patients experiencing multiple long-term physical and mental health conditions.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft other than for review purposes.