Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.
Advancing Mobile Digital Phenotyping with Parallel, In-Memory Data Fusion and Processing: The Stanford Screenomics Platform
ABSTRACT
Digital phenotyping—the use of continuous data streams from digital devices such as smartphones to assess behavioral, psychological, and physiological states—holds transformative potential for health monitoring and personalized care. However, the rapid fusion and analysis of large volumes of multimodal data required to deliver real-time actionable insights often exceed the limited computational resources of mobile devices. As a result, most existing platforms rely on sequential processing and offline computation. We propose and present a modular architecture with parallel and in-memory processing that may better enable scalable, real-time digital phenotyping. In the Stanford Screenomics platform, each data source is managed by an independent module, and modality-specific fusion rules synchronize and integrate data across streams. A unified fusion standard minimizes preprocessing needs and ensures cross-modality compatibility. The fusion process is encapsulated in a dedicated module, which is invoked by each data source module as needed. System performance was evaluated using both simplified modules and full prototype implementations that collected real-time multimodal data. In 48-hour experiments, the Screenomics architecture demonstrated substantial improvements over traditional approaches, achieving up to 2.9× higher resource utilization, 1.6× improved energy efficiency, 9.4× greater data fidelity, and 42.7× faster processing speeds. These results confirm the feasibility of using parallel and in-memory processing for on-device, real-time digital phenotyping and expand the potential for platforms to generate timely insights and support adaptive, personalized interventions.
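The architecture the abstract describes—independent per-source modules that invoke a shared, in-memory fusion module in parallel—can be illustrated with a minimal sketch. All class and field names below (`FusionModule`, `DataSourceModule`, `Record`) are illustrative assumptions, not the platform's actual implementation:

```python
import threading
from dataclasses import dataclass


@dataclass
class Record:
    """Unified record standard shared across modalities (illustrative)."""
    modality: str
    timestamp: float
    payload: dict


class FusionModule:
    """Dedicated fusion module: merges records from all streams in memory."""

    def __init__(self):
        self._lock = threading.Lock()
        self.fused = []  # in-memory, time-ordered store of fused records

    def fuse(self, record: Record):
        # A modality-specific fusion rule could transform the payload here;
        # this sketch only normalizes into the unified standard and
        # keeps the store ordered by timestamp.
        with self._lock:
            self.fused.append(record)
            self.fused.sort(key=lambda r: r.timestamp)


class DataSourceModule(threading.Thread):
    """Independent module for one data stream; invokes fusion as needed."""

    def __init__(self, modality, samples, fusion):
        super().__init__()
        self.modality, self.samples, self.fusion = modality, samples, fusion

    def run(self):
        for ts, payload in self.samples:
            self.fusion.fuse(Record(self.modality, ts, payload))


fusion = FusionModule()
modules = [
    DataSourceModule("screenshot", [(0.0, {"frame": 1}), (2.0, {"frame": 2})], fusion),
    DataSourceModule("accelerometer", [(0.5, {"xyz": (0.0, 0.0, 9.8)})], fusion),
]
for m in modules:
    m.start()
for m in modules:
    m.join()

print([r.modality for r in fusion.fused])
# → ['screenshot', 'accelerometer', 'screenshot']
```

Each source module runs on its own thread, so streams are ingested concurrently rather than sequentially, and fusion happens entirely in memory—the two properties the abstract credits for the reported speedups.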
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer-review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.