Accepted for/Published in: JMIR mHealth and uHealth
Date Submitted: Sep 15, 2020
Date Accepted: Jul 23, 2021
Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.
Using speech patterns from smartphones to investigate mood disorders: A scoping review
ABSTRACT
Background:
Mood disorders are commonly underrecognized and undertreated, as diagnosis relies on self-reporting and clinical assessments that are often not timely. The speech characteristics of people with mood disorders differ from those of healthy individuals. With the widespread use of smartphones and the emergence of machine learning approaches, smartphones can be used to monitor speech patterns to aid the diagnosis and monitoring of mood disorders.
Objective:
The aim of this review is to synthesize research on using speech patterns from smartphones to diagnose and/or monitor mood disorders.
Methods:
Literature searches of major databases (MEDLINE, PsycINFO, EMBASE, and CINAHL) initially identified 440 relevant articles using the search terms 'mood disorders', 'smartphone', 'voice analysis', and their variants. Fourteen studies met the inclusion criteria: use of a smartphone to capture voice data, a focus on diagnosing or monitoring mood disorder(s) (not exclusive to clinical populations), and publication in English. Articles were assessed by two reviewers, and extracted data included data type, classifiers used, methods of capture, and study results. Studies were analyzed using a narrative synthesis approach.
Results:
Studies showed that voice data alone achieved reasonable accuracy in predicting mood states and mood fluctuations based on objectively monitored speech patterns. While a fusion of different sensor modalities achieved the highest accuracy (97.4%), nearly 80% of the included studies were pilot trials or feasibility studies without control groups and had small sample sizes, ranging from 1 to 73 participants. Studies were also carried out over short or varying timeframes and showed significant methodological heterogeneity in the types of audio data captured, environmental contexts, classifiers, and measures to control for privacy and ambient noise.
Conclusions:
Research on smartphone-based monitoring of speech patterns in mood disorders is growing rapidly. The current body of evidence supports the value of speech patterns for monitoring, classifying, and predicting mood states in real time. However, many challenges remain around the robustness, cost-effectiveness, and acceptability of such an approach. Further work is required to build on current research and reduce the heterogeneity of methodologies, as well as to clinically evaluate the benefits and risks of these approaches.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer-review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.