
Accepted for/Published in: Journal of Medical Internet Research

Date Submitted: Dec 15, 2020
Date Accepted: May 13, 2021

The final, peer-reviewed published version of this preprint can be found here:

Gilbert S, Fenech M, Idris A, Türk E. Periodic Manual Algorithm Updates and Generalizability: A Developer’s Response. Comment on “Evaluation of Four Artificial Intelligence–Assisted Self-Diagnosis Apps on Three Diagnoses: Two-Year Follow-Up Study”. J Med Internet Res 2021;23(6):e26514. DOI: 10.2196/26514

Warning: This is an author submission that has not been peer reviewed or edited. Preprints, unless they show as "accepted", should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.

Letter to the Editor - Evaluation of Four Artificial Intelligence–Assisted Self-Diagnosis Apps on Three Diagnoses: Two-Year Follow-Up Study

  • Stephen Gilbert
  • Matthew Fenech
  • Anisa Idris
  • Ewelina Türk

ABSTRACT

We have several comments on the recent publication [1], in which four symptom assessment applications were repeatedly tested with clinical vignettes to look for “hints of ‘non-locked learning algorithms’”. As the developer of one of the symptom assessment applications studied in [1], we support studies evaluating app performance; however, there are important limitations in the methodology of this study. Most importantly, the methodology is not capable of addressing the study’s main objective. The approach used to look for evidence of non-locked algorithms was the quantification of differences in performance on three ophthalmology vignettes, first in 2018 and then in 2020. This methodology, although highly limited by the use of only three vignettes in a single medical specialism, could be used to detect changes in app performance over time. However, it cannot distinguish between non-locked algorithms and manual updating of the apps’ medical intelligence through the normal process of releasing updated app versions.
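
To make the methodological point concrete: a repeated-vignette comparison can only flag that an app’s outputs changed between test rounds; the observation itself carries no information about whether the change came from a continuously learning (non-locked) algorithm or from a manually authored update shipped in a normal app release. The following is a minimal illustrative sketch, not part of the letter; the vignette labels and outcomes are hypothetical, chosen only to mirror the three-vignette, two-round design described above.

```python
# Illustrative sketch only: hypothetical vignette outcomes, not real app data.
# A repeated vignette test records, per vignette, whether the app's output
# matched the gold standard at each test round.

from dataclasses import dataclass


@dataclass
class VignetteResult:
    vignette_id: str       # hypothetical ophthalmology vignette label
    correct_2018: bool     # outcome in the first test round
    correct_2020: bool     # outcome in the second test round


# Three vignettes, mirroring the study's sample size (values invented).
results = [
    VignetteResult("vignette_A", correct_2018=True,  correct_2020=True),
    VignetteResult("vignette_B", correct_2018=False, correct_2020=True),
    VignetteResult("vignette_C", correct_2018=True,  correct_2020=False),
]

changed = [r for r in results if r.correct_2018 != r.correct_2020]

# The comparison can detect THAT performance changed between rounds ...
print(f"{len(changed)} of {len(results)} vignettes changed outcome")

# ... but the same observation arises under either explanation:
# (a) a non-locked algorithm that retrained itself between rounds, or
# (b) a manual medical-intelligence update released as a new app version.
# Distinguishing the two would require additional evidence, such as the
# app version number or release notes recorded at each test date.
for r in changed:
    print(f"  {r.vignette_id}: 2018={r.correct_2018} -> 2020={r.correct_2020}")
```

Under this design, recording the app version at each test round would be the natural way to attribute a detected change to a released update rather than to continuous learning.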


Citation

Please cite as:

Gilbert S, Fenech M, Idris A, Türk E

Periodic Manual Algorithm Updates and Generalizability: A Developer’s Response. Comment on “Evaluation of Four Artificial Intelligence–Assisted Self-Diagnosis Apps on Three Diagnoses: Two-Year Follow-Up Study”

J Med Internet Res 2021;23(6):e26514

DOI: 10.2196/26514

PMID: 34132641

PMCID: 8277354


© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.