Accepted for/Published in: JMIR Mental Health
Date Submitted: Oct 1, 2020
Date Accepted: Dec 1, 2021
Inferring psychiatric diagnoses utilizing machine learning on acoustic and facial features extracted from clinical interviews
ABSTRACT
Background:
In contrast to all other areas of medicine, psychiatry remains almost entirely reliant on subjective patient self-report and clinical observation. The lack of objective information on which to base clinical decisions contributes to reduced quality of care. Behavioral health clinicians need objective and reliable patient data to support effective, targeted interventions. Novel, technology-based solutions can help clinicians improve outcomes.
Objective:
We aimed to investigate the extent to which psychiatric signs and symptoms can be reliably inferred from audiovisual patterns extracted from recorded evaluation interviews with participants with schizophrenia spectrum disorders (SSD) and bipolar disorder (BD).
Methods:
We obtained audiovisual data from 89 participants (mean age = 25.3, 53.9% male) with SSD (N = 41), BD (N = 21), and healthy volunteers (HV; N = 27), and developed machine learning models based on acoustic and facial movement features extracted from participant interviews to detect human-coded neuropsychiatric symptoms.
Results:
The models predicted the presence of several psychiatric signs and symptoms with high accuracy, including affective flattening (10-fold AUROC = 0.86), lack of vocal inflection (10-fold AUROC = 0.71), unusual thought content (10-fold AUROC = 0.65), helplessness (10-fold AUROC = 0.67), and anxiety (10-fold AUROC = 0.64). In addition, classifiers successfully differentiated SSD from HV (10-fold AUROC = 0.76), BD from HV (10-fold AUROC = 0.80), and SSD from BD (10-fold AUROC = 0.77) using audiovisual patterns alone.
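The AUROC values reported above summarize how well each classifier ranks cases over non-cases. As a minimal illustrative sketch (not the study's code; all data and names here are invented), AUROC for a binary classifier can be computed with the rank-based Mann-Whitney formulation:

```python
def auroc(labels, scores):
    """AUROC via the Mann-Whitney U formulation: the probability that a
    randomly chosen positive case receives a higher score than a randomly
    chosen negative case (ties count as 0.5)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    if not pos or not neg:
        raise ValueError("need at least one positive and one negative case")
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: scores that mostly, but not perfectly, separate the classes.
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.5, 0.3, 0.1]
print(round(auroc(labels, scores), 2))  # 0.89
```

In the study's scheme, this metric would be computed on held-out predictions pooled across 10 cross-validation folds; an AUROC of 0.5 indicates chance-level ranking and 1.0 perfect separation.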
Conclusions:
Audiovisual data hold promise for gathering objective, scalable, and easily accessed indicators of psychiatric illness. This knowledge represents an advance in efforts to capitalize on digital data to improve symptom assessment procedures, and it supports the development of a new generation of innovative clinical tools employing acoustic and facial data analysis. Clinical Trial: Not applicable
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.