Currently submitted to: JMIR Medical Informatics
Date Submitted: Nov 25, 2025
Open Peer Review Period: Dec 10, 2025 - Feb 4, 2026
NOTE: This is an unreviewed preprint. Readers are cautioned that the document has not been peer-reviewed by expert/patient reviewers or an academic editor, may contain misleading claims, is likely to undergo changes before final publication if accepted, or may be rejected or withdrawn (in which case a "no longer under consideration" note will appear above).
Citation: Please cite this preprint only for review purposes or for grant applications and CVs (if you are the author).
Final version: If our system detects a final peer-reviewed "version of record" (VoR) published in any journal, a link to that VoR will appear below. Readers are then encouraged to cite the VoR instead of this preprint.
Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.
Unsupervised Calibration for Phenotyping and Association Studies: Learning with Noisy Labels in Electronic Health Records
ABSTRACT
Background:
Electronic health record (EHR)-based phenotyping algorithms are typically trained and/or validated on a small set of gold-standard labels manually annotated through medical chart review by domain experts. To reduce time and labor costs, silver-standard labels, such as self-reported outcomes that are highly predictive of the true disease status, have been used as a proxy for gold-standard labels; these, however, are subject to misclassification due to insufficient documentation or human error.
Objective:
Ignoring such labeling errors may bias both the estimated classification model and its apparent accuracy for predicting the true underlying phenotype status. The objective of this study is to develop a calibration algorithm for both phenotyping classification and downstream association analysis that uses noisy silver-standard labels when the true gold-standard disease status is unobservable.
Methods:
In this paper, we propose an imperfectly supervised calibrated algorithm for phenotyping and regression (SCAPER) that simultaneously produces calibrated phenotype classifications and association regression models by combining a small number of noisy silver-standard labels with a large set of unlabeled observations on predictive features and biomarkers (possibly genetic data). The proposed approach yields an improved predicted phenotype probability for each patient, a threshold for classifying participants as phenotype-positive or phenotype-negative, and bias-corrected regression coefficients for the predictors in the downstream association study. The algorithm was validated through phenotyping and genetic association studies of rheumatoid arthritis from a large tertiary care center and of type II diabetes from two large tertiary care centers.
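The abstract does not reproduce SCAPER's estimator, but the core problem it addresses can be illustrated with a toy simulation: silver-standard labels with imperfect sensitivity and specificity bias naive estimates, and knowing (or estimating) those error rates allows a correction. The sketch below is not the authors' method; it applies the classical Rogan-Gladen misclassification correction to a prevalence estimate, with hypothetical sensitivity/specificity values chosen for illustration.

```python
import random

random.seed(0)
SENS, SPEC = 0.90, 0.95  # hypothetical silver-label sensitivity/specificity

# Simulate a true binary phenotype with 20% prevalence, then generate
# noisy silver-standard labels by flipping according to SENS/SPEC.
n = 100_000
truth = [1 if random.random() < 0.20 else 0 for _ in range(n)]
silver = [
    (1 if random.random() < SENS else 0) if y == 1
    else (1 if random.random() > SPEC else 0)
    for y in truth
]

# Naive prevalence from silver labels is biased upward here, because the
# false-positive mass (1 - SPEC) exceeds what the false negatives remove.
p_obs = sum(silver) / n

# Rogan-Gladen correction: invert the misclassification relationship
# P(silver=1) = SENS * p + (1 - SPEC) * (1 - p).
p_cal = (p_obs + SPEC - 1) / (SENS + SPEC - 1)

print(f"naive: {p_obs:.3f}  corrected: {p_cal:.3f}")
```

With these error rates the naive estimate lands near 0.22 while the corrected one recovers roughly the simulated 0.20; an analogous attenuation affects regression coefficients estimated against silver labels, which motivates the bias correction in the downstream association step.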
Results:
When validated against the gold-standard labels for phenotyping performance, the proposed algorithm achieved a higher area under the curve (AUC) than classification based on International Classification of Diseases (ICD) codes and existing unsupervised phenotyping algorithms. The downstream association study suggests that the proposed approach detected previously validated associations with higher power than standard association studies based on ICD codes.
Conclusions:
The proposed unsupervised calibration increased the accuracy of the phenotype definition and corrected biases in downstream association studies, such as genetic association studies.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft other than for review purposes.