Accepted for/Published in: Journal of Medical Internet Research
Date Submitted: Jul 26, 2022
Date Accepted: Feb 19, 2023
Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.
A Virtual Reading Center Model: Using Crowdsourcing to Grade Photographs for Trachoma
ABSTRACT
Background:
As trachoma is eliminated, skilled field graders become less adept at correctly identifying active disease (trachomatous inflammation—follicular [TF]). Deciding if trachoma has been eliminated from a district or if treatment strategies need to be continued or reinstated is of critical public health importance. Telemedicine solutions require both connectivity, which can be poor in the resource-limited regions of the world in which trachoma occurs, and accurate grading of the images.
Objective:
Our purpose was to develop and validate a cloud-based “Virtual Reading Center” (VRC) using crowdsourcing for image interpretation.
Methods:
The Amazon Mechanical Turk (AMT) platform was used to recruit lay graders to interpret 2,299 gradable images from a prior field trial of a smartphone-based camera system. Each image received 7 grades at $0.05 USD per grade in this VRC. The resultant data set was divided into training and test sets to internally validate the VRC. In the training set, crowdsourcing scores were summed per image, and the raw score cut-off was chosen to optimize kappa agreement and the resulting TF prevalence. The optimal cut-off was then applied to the test set, and sensitivity, specificity, kappa, and TF prevalence were calculated.
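The cut-off selection described above can be sketched in a few lines of Python. This is a minimal illustration only, not the authors' actual analysis code: it assumes each image's 7 binary lay grades are summed to a 0-7 score, and it scans thresholds to find the one maximizing Cohen's kappa against expert labels. All data and function names here are hypothetical.

```python
# Illustrative sketch of the raw-score cut-off search (not the study's code):
# each image gets 7 binary lay grades; their sum (0-7) is thresholded, and
# the threshold that maximizes Cohen's kappa vs. expert grades is kept.

def cohen_kappa(y_true, y_pred):
    """Cohen's kappa for two equal-length binary label lists."""
    n = len(y_true)
    agree = sum(t == p for t, p in zip(y_true, y_pred)) / n
    p_true = sum(y_true) / n          # proportion positive in ground truth
    p_pred = sum(y_pred) / n          # proportion positive in predictions
    expected = p_true * p_pred + (1 - p_true) * (1 - p_pred)
    return 1.0 if expected == 1 else (agree - expected) / (1 - expected)

def best_cutoff(score_sums, expert_labels):
    """Return the raw-score cut-off (1..7) with the highest kappa."""
    return max(
        range(1, 8),
        key=lambda c: cohen_kappa(
            expert_labels, [int(s >= c) for s in score_sums]
        ),
    )

# Hypothetical example: summed crowd scores and expert TF grades
sums = [7, 6, 1, 0, 5, 2, 7, 0]
truth = [1, 1, 0, 0, 1, 0, 1, 0]
print(best_cutoff(sums, truth))  # → 3 (first cut-off with perfect agreement)
```

In practice the study also weighed the resulting TF prevalence, not kappa alone, when choosing the cut-off; the sketch shows only the kappa-maximization step.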
Results:
In this trial, over 16,000 grades were rendered in just over 60 minutes for $1,098 USD, including AMT fees. Choosing an AMT raw score cut-off to bring kappa near the WHO-endorsed level of 0.7 (at a simulated TF prevalence of 40%), crowdsourcing was 95% sensitive and 87% specific for TF in the training set, with a kappa of 0.797. All 196 positive images received a skilled overread to mimic a tiered reading center; specificity improved to 99% while sensitivity remained above 78%. Kappa for the entire sample improved from 0.162 to 0.685 with overreads, and the skilled-grader burden was reduced by over 80%. This tiered VRC model was then applied to the test set and produced a sensitivity of 99%, a specificity of 76%, and a kappa of 0.775 across the entire set. The prevalence estimated by the VRC was 2.7%, compared with a ground-truth prevalence of 2.9%.
Conclusions:
A VRC model using crowdsourcing as a first pass, with skilled grading of positive images, identified TF rapidly and accurately. These findings support further validation of a VRC and of crowdsourcing for image grading and estimation of trachoma prevalence from field-acquired images, though prospective field testing is required to determine whether diagnostic characteristics remain acceptable in real-world surveys with low disease prevalence.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft other than for review purposes.