
Accepted for/Published in: JMIR Mental Health

Date Submitted: Sep 5, 2025
Open Peer Review Period: Sep 5, 2025 - Oct 31, 2025
Date Accepted: Oct 27, 2025

The final, peer-reviewed published version of this preprint can be found here:

Using Digital Phenotypes to Identify Individuals With Alexithymia in Posttraumatic Stress Disorder: Cross-Sectional Study

Meaney TW, Yadav V, Galatzer-Levy I, Bryant R


JMIR Ment Health 2025;12:e83575

DOI: 10.2196/83575

PMID: 41232100

PMCID: 12661231

Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.

Utilizing Digital Phenotypes to Identify Individuals with Alexithymia in Posttraumatic Stress Disorder

  • Tomas William Meaney; 
  • Vijay Yadav; 
  • Isaac Galatzer-Levy; 
  • Richard Bryant

ABSTRACT

Background:

Alexithymia, defined by difficulty identifying and describing one’s emotions, has been identified as a transdiagnostic emotion process that impacts the course, severity, and treatment outcomes of psychiatric conditions such as posttraumatic stress disorder (PTSD). As such, it is an important process to accurately measure and identify in clinical contexts. However, research examining the relationship between the experience of alexithymia and psychopathology has been limited by an overreliance on self-report scales, which have restricted utility for measuring constructs that involve deficits in self-awareness, such as alexithymia. Hence, more suitable and effective methods of measuring and identifying those experiencing alexithymia in clinical samples are needed.

Objective:

In this cross-sectional study (N = 96), we aimed to determine if facial, vocal and language phenotypes extracted from one-minute recordings of war veterans with PTSD describing a traumatic event could be utilized to identify those experiencing alexithymia.

Methods:

Specialized software was used to extract facial, vocal, and language features from the recordings. These features were then integrated into extreme gradient boosting (XGBoost) machine learning classification models that were trained and tested within a five-fold nested cross-validation pipeline for their capacity to classify veterans scoring above the cutoff for alexithymia on the Toronto Alexithymia Scale-20.
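For illustration, a nested cross-validation pipeline of the kind described above can be sketched as follows. This is a minimal, hypothetical sketch: the synthetic feature matrix, hyperparameter grid, and use of scikit-learn's GradientBoostingClassifier (standing in for xgboost.XGBClassifier so the sketch has no extra dependencies) are all assumptions, not the authors' actual implementation.

```python
# Sketch of a five-fold nested cross-validation pipeline: the outer loop
# estimates generalization performance, while an inner grid search tunes
# hyperparameters on each outer training fold only, so test folds never
# influence model selection.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import StratifiedKFold, GridSearchCV
from sklearn.metrics import f1_score, roc_auc_score

# Synthetic stand-in for the facial/vocal/language feature matrix (N = 96).
X, y = make_classification(n_samples=96, n_features=20, random_state=0)

outer = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
f1s, aucs = [], []
for train_idx, test_idx in outer.split(X, y):
    # Inner loop: tune hyperparameters with grid search on the training fold.
    inner = GridSearchCV(
        GradientBoostingClassifier(random_state=0),
        param_grid={"n_estimators": [50, 100], "max_depth": [2, 3]},
        cv=5,
        scoring="f1",
    )
    inner.fit(X[train_idx], y[train_idx])
    # Outer loop: evaluate the tuned model on the held-out fold.
    pred = inner.predict(X[test_idx])
    prob = inner.predict_proba(X[test_idx])[:, 1]
    f1s.append(f1_score(y[test_idx], pred))
    aucs.append(roc_auc_score(y[test_idx], prob))

print(f"average F1 = {np.mean(f1s):.2f}, average AUC = {np.mean(aucs):.2f}")
```

Averaging F1 and AUC over the outer folds yields summary scores analogous to those reported in the Results, though the numbers here reflect only the synthetic data.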

Results:

The best performing XGBoost classification model trained in the nested cross-validation pipeline was able to classify those experiencing alexithymia with a good level of accuracy (average F1-score = 0.78, average AUC score = 0.87). Consistent with theoretical models and past research into phenotypes of alexithymia, language, vocal and facial features all contributed to the accuracy of the XGBoost classification model.

Conclusions:

These findings indicate that facial, vocal, and language phenotypes incorporated in machine learning models could represent a promising alternative to identifying individuals with PTSD who are experiencing alexithymia. The further validation and use of this approach could facilitate more tailored and effective allocation of treatment resources to individuals experiencing alexithymia in clinical settings.




© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.