Currently submitted to: JMIR Formative Research

Date Submitted: Nov 7, 2025

Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.

Advancing Mobile Digital Phenotyping with Parallel, In-Memory Data Fusion and Processing: The Stanford Screenomics Platform

  • Ian Kim; 
  • Thomas N. Robinson; 
  • Byron B. Reeves; 
  • Nick Haber; 
  • Nilàm Ram

ABSTRACT

Digital phenotyping—the use of continuous data streams from digital devices such as smartphones to assess behavioral, psychological, and physiological states—holds transformative potential for health monitoring and personalized care. However, the rapid fusion and analysis of large volumes of multimodal data required to deliver real-time, actionable insights often exceed the limited computational resources of mobile devices. As a result, most existing platforms rely on sequential processing and offline computation. We propose and present a modular architecture with parallel and in-memory processing that may better enable scalable, real-time digital phenotyping. In the Stanford Screenomics platform, each data source is managed by an independent module, and modality-specific fusion rules synchronize and integrate data across streams. A unified fusion standard minimizes preprocessing needs and ensures cross-modality compatibility. The fusion process is encapsulated in a dedicated module, which is invoked by each data source module as needed. System performance was tested and evaluated using both simplified modules and full prototype implementations that collected real-time multimodal data. In 48-hour experiments, the Screenomics architecture demonstrated substantial improvements over traditional approaches, achieving up to 2.9× higher resource utilization, 1.6× improved energy efficiency, 9.4× greater data fidelity, and 42.7× faster processing speeds. These results confirm the feasibility of using parallel and in-memory processing for on-device, real-time digital phenotyping and expand the potential for platforms to generate timely insights and support adaptive, personalized interventions.
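
To make the architecture described above more concrete, the sketch below shows one way the parallel, in-memory pattern could be realized on a mobile device: independent data-source modules run concurrently, and each hands its samples to a shared fusion module that merges them in memory into a single timestamped, source-tagged stream. This is an illustrative reconstruction only, not the Screenomics implementation; the names (FusedRecord, FusionModule, launchSource), the use of Kotlin coroutines, and the sampling intervals are assumptions.

```kotlin
import kotlinx.coroutines.*
import java.util.concurrent.ConcurrentLinkedQueue

// Hypothetical unified record format: every modality is reduced to a
// timestamped, source-tagged payload so streams can be merged in memory
// without modality-specific preprocessing downstream.
data class FusedRecord(val timestampMs: Long, val source: String, val payload: ByteArray)

// Hypothetical fusion module: a single in-memory sink shared by all data-source
// modules; a lock-free queue stands in for whatever in-memory store the
// platform actually uses.
class FusionModule {
    private val buffer = ConcurrentLinkedQueue<FusedRecord>()

    fun fuse(record: FusedRecord) {
        buffer.add(record)                 // in-memory append, no disk round-trip
    }

    fun drain(): List<FusedRecord> {
        val out = mutableListOf<FusedRecord>()
        while (true) out.add(buffer.poll() ?: return out)
    }
}

// Each data source runs as its own coroutine (standing in for an independent
// module) and invokes the shared fusion module as samples arrive.
fun CoroutineScope.launchSource(name: String, periodMs: Long, fusion: FusionModule): Job = launch {
    while (isActive) {
        val sample = "sample from $name".toByteArray()   // placeholder capture
        fusion.fuse(FusedRecord(System.currentTimeMillis(), name, sample))
        delay(periodMs)
    }
}

fun main() = runBlocking {
    val fusion = FusionModule()
    // Two illustrative modalities running in parallel; intervals are made up.
    val jobs = listOf(
        launchSource("screenshot", periodMs = 5_000, fusion = fusion),
        launchSource("accelerometer", periodMs = 200, fusion = fusion),
    )
    delay(11_000)
    jobs.forEach { it.cancel() }
    jobs.joinAll()
    println("fused ${fusion.drain().size} records across modalities")
}
```

Encapsulating fusion behind a single module, as the abstract describes, keeps modality-specific capture code free of synchronization logic and lets new data sources be added without touching existing ones.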


Citation

Please cite as:

Kim I, Robinson TN, Reeves BB, Haber N, Ram N

Advancing Mobile Digital Phenotyping with Parallel, In-Memory Data Fusion and Processing: The Stanford Screenomics Platform

JMIR Preprints. 07/11/2025:87320

DOI: 10.2196/preprints.87320

URL: https://preprints.jmir.org/preprint/87320


© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.