
Accepted for/Published in: Journal of Medical Internet Research

Date Submitted: Feb 6, 2020
Date Accepted: Oct 30, 2020

The final, peer-reviewed published version of this preprint can be found here:

Ćirković A

Evaluation of Four Artificial Intelligence–Assisted Self-Diagnosis Apps on Three Diagnoses: Two-Year Follow-Up Study

J Med Internet Res 2020;22(12):e18097

DOI: 10.2196/18097

PMID: 33275113

PMCID: 7748958

Evaluation of four AI-assisted self-diagnosis apps - test on three diagnoses with two-year follow-up

  • Aleksandar Ćirković

ABSTRACT

Background:

Consumer-oriented mobile self-diagnosis apps have been developed using undisclosed algorithms, presumably based on machine learning (ML) and other artificial intelligence (AI) technologies. The US Food and Drug Administration (FDA) now distinguishes apps with learning AI algorithms from those with locked ones and treats them as medical devices. To date, no self-diagnosis app has been tested in the field of ophthalmology.

Objective:

The goal of this study was to test apps previously mentioned in the scientific literature on a fixed set of diagnoses at two points in time, comparing the results and looking for differences that hint at "non-locked" learning algorithms. A set of ophthalmologic diagnoses was used to simultaneously test the apps' diagnostic efficiency and treatment recommendations in this specialty.

Methods:

Four apps from the literature were chosen (Ada, Babylon, Buoy, and Your.MD). Three ophthalmic diagnoses representing three levels of urgency were used (glaucoma, retinal tear, and dry eye syndrome). The tests were conducted two years apart, in 2018 and 2020.

Results:

Two apps (Ada and Buoy) asked significantly more questions than the other two (P<.001). The number of questions asked did not change for any of the four apps between 2018 and 2020. In contrast, for all four apps, the diagnostic efficacy and treatment recommendations differed strongly between 2018 and 2020, indicating "non-locked" learning algorithms using AI technologies. None of the apps provided correct diagnoses and treatment recommendations for all three diagnoses.

Conclusions:

Wider-scale systematic studies are necessary so that health care providers and patients can correctly assess the safety and efficacy of such apps, and so that health care regulatory authorities can classify them correctly.




© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.