Accepted for/Published in: JMIR Medical Informatics
Date Submitted: Sep 3, 2025
Date Accepted: Feb 5, 2026
Scalable and Privacy-Conscious End-to-End Processing of Large-Scale Clinical Data for Precision Medicine: Empirical Evaluation Study
ABSTRACT
Background:
In large-scale clinical data analysis, CSV and traditional RDBMS–based approaches are widely used but impose substantial storage and processing constraints that delay research preparation and hinder multicenter collaboration. Although columnar storage formats such as Apache Parquet have gained attention in data science, systematic end-to-end evaluations in clinical environments remain limited, particularly regarding efficiency and scalability.
Objective:
This study aimed to empirically evaluate whether a Parquet-based end-to-end pipeline can improve computational efficiency and scalability in large-scale clinical data analysis while preserving predictive performance and protecting privacy.
Methods:
Electronic health record data comprising 13.76 million rows from a large academic medical center in Korea were analyzed using Parquet, CSV, PostgreSQL, and DuckDB environments. Standardized SQL workloads and multi-label classification models were applied to evaluate storage efficiency, time-to-analysis, and predictive performance. Statistical equivalence testing with prespecified clinical margins and bootstrap resampling ensured rigorous comparison, while privacy risks were assessed through multiple membership inference attacks.
Results:
Compared with CSV, Parquet reduced storage requirements approximately sevenfold and lowered I/O latency by over 95%. End-to-end processing latency was substantially reduced, while classification performance was statistically equivalent across AUROC, AUPRC, accuracy, and F1-score, with all observed differences falling within prespecified clinical equivalence margins (p<.001). Calibration was generally reliable, with only minor deviations in imbalanced cohorts. Membership inference attacks performed at chance level (AUC≈0.50), suggesting no measurable increase in privacy risk.
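The equivalence claim above rests on bootstrap resampling against a prespecified margin. A minimal sketch of that logic follows; the sample size, margin value, and per-case correctness indicators are all assumptions for illustration, not the study's data, and accuracy stands in for the full set of metrics (AUROC, AUPRC, F1-score).

```python
import random

random.seed(0)

# Hypothetical paired per-case correctness indicators (illustrative only):
# both pipelines agree on 498 of 500 cases, with one disagreement in each
# direction, so the observed accuracy difference is exactly 0.
n = 500
csv_correct = [1] * 450 + [0] * 50
parquet_correct = list(csv_correct)
parquet_correct[0] = 0    # Parquet pipeline misses one case CSV got right
parquet_correct[-1] = 1   # ...and recovers one case CSV got wrong

MARGIN = 0.02  # prespecified clinical equivalence margin (assumed value)

# Paired bootstrap: resample case indices, recompute the accuracy difference.
diffs = []
for _ in range(2000):
    idx = [random.randrange(n) for _ in range(n)]
    diffs.append(sum(parquet_correct[i] - csv_correct[i] for i in idx) / n)
diffs.sort()
lo, hi = diffs[int(0.025 * len(diffs))], diffs[int(0.975 * len(diffs))]

# Equivalence is supported when the 95% CI lies entirely inside the margin.
print(f"95% CI for accuracy difference: [{lo:+.4f}, {hi:+.4f}]")
print("within equivalence margin:", -MARGIN < lo and hi < MARGIN)
```

The design choice here is the paired bootstrap: resampling whole cases keeps the two pipelines' predictions coupled, so the confidence interval reflects only genuine disagreement between pipelines rather than sampling noise in each one separately.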
Conclusions:
By enabling faster cohort construction and secure data integration, Parquet provides clinically meaningful infrastructure for timely evidence generation in precision medicine.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.