
Accepted for/Published in: JMIR Mental Health

Date Submitted: Dec 30, 2025
Date Accepted: Feb 24, 2026

The final, peer-reviewed published version of this preprint can be found here:

Predicting Momentary Suicidal Ideation From Smartphone Screenshots Using Vision-Language Models: Prospective Machine Learning Study


Predicting Momentary Suicidal Ideation From Smartphone Screenshots Using Vision-Language Models

Ross Jacobucci, Wenpei Shao, Veronika Kobrinsky, Brooke Ammerman

ABSTRACT

Background:

Passive smartphone sensing shows promise for suicide prevention, but behavioral metadata (GPS, screen time, accelerometry) often lacks the contextual information needed to detect acute psychological distress. Analyzing what people actually see, read, and type on their phones—rather than just usage patterns—may provide more proximal signals of risk.

Objective:

We tested whether vision-language models (VLMs) applied to passively captured smartphone screenshots can predict momentary suicidal ideation.

Methods:

Seventy-nine adults with past-month suicidal thoughts or behaviors completed ecological momentary assessments (EMAs) over 28 days while screenshots were captured every 5 seconds during active phone use. We fine-tuned open-source VLMs (Qwen2.5-VL, LFM2-VL) and text-only models (Qwen3) to predict suicidal ideation from screenshots captured in the 2 hours preceding each EMA. To avoid data leakage, we used within-person temporal holdouts (70/30 split) and between-person subject holdouts (50/50 split).
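
As a minimal sketch of the two leakage-avoiding holdout schemes, the Python below splits each participant's EMAs chronologically 70/30 (within-person) and splits participants 50/50 (between-person). The column names (participant_id, ema_time, label) are hypothetical stand-ins for the study's actual data structures:

```python
# Sketch of the two holdout schemes; column names are hypothetical.
import pandas as pd

def within_person_temporal_split(df: pd.DataFrame, train_frac: float = 0.70):
    """Per participant: earliest 70% of EMAs train, latest 30% test."""
    train_parts, test_parts = [], []
    for _, grp in df.sort_values("ema_time").groupby("participant_id"):
        cut = int(len(grp) * train_frac)
        train_parts.append(grp.iloc[:cut])
        test_parts.append(grp.iloc[cut:])
    return pd.concat(train_parts), pd.concat(test_parts)

def between_person_subject_split(df: pd.DataFrame, seed: int = 0):
    """50/50 split by participant, so no person appears in both sets."""
    ids = df["participant_id"].drop_duplicates().sample(frac=1.0, random_state=seed)
    train_ids = set(ids.iloc[: len(ids) // 2])
    in_train = df["participant_id"].isin(train_ids)
    return df[in_train], df[~in_train]
```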

Results:

The analytic sample comprised 2.5 million screenshots from 70 participants. Within-person models achieved strong discrimination at the EMA level (AUC=0.83, AUPRC=0.77), with image-based models outperforming text-only models (AUC=0.83 vs 0.79; P<.001). Between-person generalization was near chance (AUC≈0.50), though a simple lexical screening method retained modest discrimination (AUC=0.62). Smaller models performed comparably to larger models, supporting feasible on-device deployment.
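
The EMA-level metrics reported here (AUC and AUPRC) can be computed from per-EMA labels and model scores with standard scikit-learn calls; the arrays below are toy placeholders, not study data:

```python
# Hedged sketch of the EMA-level evaluation metrics, with toy inputs.
from sklearn.metrics import roc_auc_score, average_precision_score

y_true  = [0, 0, 1, 1, 0, 1]                # per-EMA ideation labels (toy)
y_score = [0.2, 0.6, 0.8, 0.4, 0.3, 0.9]    # model-predicted probabilities (toy)

auc   = roc_auc_score(y_true, y_score)            # ranking quality; chance = 0.50
auprc = average_precision_score(y_true, y_score)  # precision-recall summary
print(f"AUC={auc:.2f}, AUPRC={auprc:.2f}")
```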

Conclusions:

Screen content predicts short-term suicidal ideation with clinically meaningful accuracy when models are personalized, but does not generalize across individuals. These findings support a two-stage clinical architecture: coarse lexical screening for new patients, with personalized VLM-based monitoring after a calibration period. On-device inference may enable privacy-preserving deployment.
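
A minimal sketch of the proposed two-stage architecture follows; the keyword set and the personal_model callable are hypothetical placeholders, not the study's actual lexicon or VLM interface:

```python
# Two-stage risk assessment sketch: lexical screen, then personalized model.
RISK_TERMS = {"suicide", "kill myself", "hopeless"}  # illustrative only

def lexical_screen(screen_text: str) -> bool:
    """Stage 1: coarse keyword screen for patients without a calibrated model."""
    text = screen_text.lower()
    return any(term in text for term in RISK_TERMS)

def assess(screen_text: str, personal_model=None) -> float:
    """Stage 2: hand off to a personalized VLM once calibration data exists."""
    if personal_model is not None:
        return personal_model(screen_text)            # personalized risk score
    return 1.0 if lexical_screen(screen_text) else 0.0  # fallback screen
```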


 Citation

Please cite as:

Jacobucci R, Shao W, Kobrinsky V, Ammerman B

Predicting Momentary Suicidal Ideation From Smartphone Screenshots Using Vision-Language Models: Prospective Machine Learning Study

JMIR Ment Health 2026;13:e90581

DOI: 10.2196/90581

PMID: 41950375


© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.