
Accepted for/Published in: JMIR Mental Health

Date Submitted: Oct 1, 2020
Date Accepted: Dec 1, 2021

The final, peer-reviewed published version of this preprint can be found here:

Acoustic and Facial Features From Clinical Interviews for Machine Learning–Based Psychiatric Diagnosis: Algorithm Development

Birnbaum M, Abrami A, Heisig S, Ali A, Arenare E, Agurto C, Lu N, Kane J, Cecchi G

Acoustic and Facial Features From Clinical Interviews for Machine Learning–Based Psychiatric Diagnosis: Algorithm Development

JMIR Ment Health 2022;9(1):e24699

DOI: 10.2196/24699

PMID: 35072648

PMCID: 8822433

Inferring psychiatric diagnoses utilizing machine learning on acoustic and facial features extracted from clinical interviews

  • Michael Birnbaum; 
  • Avner Abrami; 
  • Stephen Heisig; 
  • Asra Ali; 
  • Elizabeth Arenare; 
  • Carla Agurto; 
  • Nathaniel Lu; 
  • John Kane; 
  • Guillermo Cecchi

ABSTRACT

Background:

In contrast to most other areas of medicine, psychiatry remains almost entirely reliant on subjective patient self-report and clinical observation. The lack of objective information on which to base clinical decisions contributes to reduced quality of care. Behavioral health clinicians need objective and reliable patient data to support effective, targeted interventions, and novel technology-based solutions can help them improve outcomes.

Objective:

We aimed to investigate the extent to which psychiatric signs and symptoms can be reliably inferred from audiovisual patterns extracted from recorded evaluation interviews with participants with schizophrenia spectrum disorders (SSD) and bipolar disorder (BD).

Methods:

We obtained audiovisual data from 89 participants (mean age 25.3 years; 53.9% male): 41 with SSD, 21 with BD, and 27 healthy volunteers (HV). We then developed machine learning models based on acoustic and facial movement features extracted from the participant interviews to detect human-coded neuropsychiatric signs and symptoms.
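The abstract reports performance as 10-fold cross-validated AUROC but does not specify the classifier or evaluation code. A minimal sketch of pooled 10-fold AUROC evaluation is shown below; the centroid-contrast scorer, the single synthetic feature, and all names are illustrative assumptions, not the authors' method.

```python
import random

def auroc(labels, scores):
    """AUROC as the probability that a random positive outranks a random negative."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def tenfold_auroc(x, y, k=10, seed=0):
    """Pooled k-fold AUROC for a 1-D feature, scored by a centroid contrast
    fit on each training split (a deliberately simple stand-in classifier)."""
    idx = list(range(len(y)))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]          # k roughly equal held-out folds
    labels, scores = [], []
    for fold in folds:
        held = set(fold)
        train = [i for i in idx if i not in held]
        m1 = sum(x[i] for i in train if y[i] == 1) / sum(y[i] == 1 for i in train)
        m0 = sum(x[i] for i in train if y[i] == 0) / sum(y[i] == 0 for i in train)
        for i in fold:                             # score held-out samples only
            labels.append(y[i])
            scores.append(x[i] - (m0 + m1) / 2)    # higher score = more "positive"
    return auroc(labels, scores)

# Synthetic demo: one feature shifted upward in the positive class
rng = random.Random(1)
y = [int(rng.random() < 0.5) for _ in range(90)]
x = [rng.gauss(1.0 if yi else 0.0, 1.0) for yi in y]
print(round(tenfold_auroc(x, y), 2))
```

With a one-standard-deviation class shift, the pooled AUROC lands well above chance (0.5) but below perfect separation, the same regime as most of the symptom-level results reported here.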

Results:

The models successfully predicted the presence of several psychiatric signs and symptoms, including affective flattening (10-fold AUROC = 0.86), lack of vocal inflection (10-fold AUROC = 0.71), unusual thought content (10-fold AUROC = 0.65), helplessness (10-fold AUROC = 0.67), and anxiety (10-fold AUROC = 0.64). In addition, classifiers successfully differentiated SSD from HV (10-fold AUROC = 0.76), BD from HV (10-fold AUROC = 0.80), and SSD from BD (10-fold AUROC = 0.77) using audiovisual patterns alone.

Conclusions:

Audiovisual data hold promise for providing objective, scalable, and easily accessible indicators of psychiatric illness. These findings represent an advance in efforts to capitalize on digital data to improve symptom assessment and support the development of a new generation of innovative clinical tools built on acoustic and facial data analysis.

Clinical Trial: Not applicable




© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft other than for review purposes.