
Accepted for/Published in: Journal of Medical Internet Research

Date Submitted: Oct 20, 2020
Date Accepted: Mar 16, 2021

The final, peer-reviewed published version of this preprint can be found here:

Use of Endoscopic Images in the Prediction of Submucosal Invasion of Gastric Neoplasms: Automated Deep Learning Model Development and Usability Study

Bang CS, Lim H, Jeong HM, Hwang SH

Use of Endoscopic Images in the Prediction of Submucosal Invasion of Gastric Neoplasms: Automated Deep Learning Model Development and Usability Study

J Med Internet Res 2021;23(4):e25167

DOI: 10.2196/25167

PMID: 33856356

PMCID: 8085753

Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.

Customized automated deep-learning models and endoscopist-artificial intelligence interaction for the prediction of submucosal invasion of gastric neoplasms in endoscopic images: development and usability study

  • Chang Seok Bang; 
  • Hyun Lim; 
  • Hae Min Jeong; 
  • Sung Hyeon Hwang

ABSTRACT

Background:

The authors previously examined deep-learning models that classify the invasion depth (mucosa-confined vs. submucosa-invaded) of gastric neoplasms using endoscopic images. The external-test accuracy reached 77.3%. However, model establishment is labor-intensive, requiring high-performance computing. Automated deep learning (AutoDL), which enables fast searching of optimal neural architectures and hyperparameters without complex coding, has been developed.

Objective:

To establish AutoDL models for classifying the invasion depth of gastric neoplasms. Additionally, endoscopist-artificial intelligence interactions were explored.

Methods:

The same 2,899 endoscopic images that were employed to establish the previous model were used. A prospective multicenter validation using 206 and 1,597 novel images was conducted. The primary outcome was external-test accuracy. “Neuro-T,” “Create ML-Image Classifier,” and “AutoML-Vision” were used to establish the models. Three doctors with different levels of endoscopy expertise analyzed each image, in sequence, without AutoDL support, with faulty AutoDL support, and with best-performance AutoDL support.

Results:

The Neuro-T-based model reached 89.3% (95% confidence interval: 85.1–93.5%) external-test accuracy. Create ML-Image Classifier had the shortest model establishment time (13 minutes) while reaching 82% external-test accuracy. Expert endoscopists’ decisions were not influenced by AutoDL. The faulty AutoDL misled the endoscopy trainee and the general physician; however, this was corrected by the support of the best-performance AutoDL. The trainee gained the highest benefit from AutoDL support.
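The reported accuracy and interval can be cross-checked with a standard normal-approximation (Wald) confidence interval for a proportion. A minimal sketch, assuming the 95% CI was computed on the 206-image external test set (the paper does not state the exact interval method, so this is an illustrative reconstruction, not the authors’ code):

```python
import math

def wald_ci(p: float, n: int, z: float = 1.96) -> tuple:
    """Normal-approximation (Wald) confidence interval for a proportion.

    p: observed proportion (e.g., external-test accuracy)
    n: number of test images
    z: critical value (1.96 for a 95% interval)
    """
    half_width = z * math.sqrt(p * (1 - p) / n)
    return p - half_width, p + half_width

# Reported external-test accuracy of the Neuro-T-based model
lower, upper = wald_ci(0.893, 206)
print(f"{lower:.1%} - {upper:.1%}")  # prints "85.1% - 93.5%"
```

With p = 0.893 and n = 206, the Wald interval reproduces the reported 85.1–93.5% bounds, which is consistent with the 206-image set being the external test set.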

Conclusions:

AutoDL is deemed useful for the on-site establishment of customized deep-learning models. An inexperienced endoscopist with at least a certain level of expertise can benefit from AutoDL support.




© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft other than for review purposes.