
Accepted for/Published in: JMIR Medical Informatics

Date Submitted: Aug 12, 2020
Date Accepted: Oct 15, 2021
Date Submitted to PubMed: Dec 3, 2021

The final, peer-reviewed published version of this preprint can be found here:

Deep Learning–Assisted Burn Wound Diagnosis: Diagnostic Model Development Study

Chang CW, Lai F, Christian M, Chen YC, Hsu C, Chen YS, Chang DH, Roan TL, Yu YC


JMIR Med Inform 2021;9(12):e22798

DOI: 10.2196/22798

PMID: 34860674

PMCID: 8686480

Deep Learning Assisted Burn Wound Diagnosis

  • Che Wei Chang; 
  • Feipei Lai; 
  • Mesakh Christian; 
  • Yu Chun Chen; 
  • Ching Hsu; 
  • Yo Shen Chen; 
  • Dun Hao Chang; 
  • Tyng Luen Roan; 
  • Yen Che Yu

ABSTRACT

Background:

Accurate assessment of the percentage of total body surface area (%TBSA) covered by burn wounds is crucial in the management of burn patients. The resuscitation fluid and nutritional needs of burn patients, their need for intensive care, and their probability of mortality are all directly related to %TBSA. Estimating an irregularly shaped burn area by visual inspection is difficult, and many studies have reported discrepancies among physicians' %TBSA estimates.

Objective:

We propose a deep learning–based method for burn wound detection, segmentation, and calculation of %TBSA on a pixel-to-pixel basis.

Methods:

A two-step procedure was used to convert burn wound diagnosis into %TBSA. In the first step, images of burn wounds were collected and labeled by burn surgeons, and the resulting dataset was input into two deep learning architectures, U-Net and Mask R-CNN, each configured with two different backbones, to segment the burn wounds. In the second step, images of hands were collected and labeled to create another dataset, which was likewise input into U-Net and Mask R-CNN to segment the hands. The %TBSA of the burn wounds was then calculated by comparing the pixel counts of the mask areas on the burn wound and hand images of the same patient, according to the rule of hand, which states that one's hand accounts for 0.8% of TBSA.
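The pixel-ratio conversion described in the second step can be sketched as follows. This is an illustrative sketch, not the paper's implementation: the function name is hypothetical, and it assumes both binary masks come from images at the same scale.

```python
import numpy as np

HAND_TBSA_PERCENT = 0.8  # rule of hand: one hand accounts for ~0.8% of TBSA


def estimate_tbsa_percent(burn_mask: np.ndarray, hand_mask: np.ndarray) -> float:
    """Estimate burn %TBSA from the ratio of segmented burn pixels to
    segmented hand pixels, assuming both masks share the same pixel scale."""
    burn_pixels = int(np.asarray(burn_mask, dtype=bool).sum())
    hand_pixels = int(np.asarray(hand_mask, dtype=bool).sum())
    if hand_pixels == 0:
        raise ValueError("hand mask is empty; cannot apply the rule of hand")
    return HAND_TBSA_PERCENT * burn_pixels / hand_pixels
```

For example, a burn mask of 80 pixels against a hand mask of 100 pixels would yield an estimate of 0.64% TBSA under this rule.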

Results:

A total of 2591 images of burn wounds were collected and labeled to form the burn wound dataset, which was randomly split in an 8:1:1 ratio into training, validation, and testing sets. Four hundred images of volar hands were collected and labeled to form the hand dataset, which was split into three sets in the same way. On the burn wound images, Mask R-CNN with a ResNet-101 backbone achieved the best segmentation result, with a Dice coefficient (DC) of 0.9496, while U-Net with ResNet-101 achieved a DC of 0.8545. On the hand images, U-Net and Mask R-CNN performed similarly, with DCs of 0.9920 and 0.9910, respectively. Lastly, we conducted a test diagnosis on a burn patient: Mask R-CNN with ResNet-101 deviated less from the ground truth on average (0.115% TBSA) than the burn surgeons did.
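The Dice coefficient used to score the segmentations above is the standard overlap metric 2|A∩B| / (|A| + |B|). A minimal sketch for binary masks (the function name and epsilon smoothing are illustrative, not from the paper):

```python
import numpy as np


def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|).

    `eps` guards against division by zero when both masks are empty."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
```

A perfect match gives a DC of 1.0 and disjoint masks give a DC near 0, so the reported 0.9496 and 0.9920 indicate very close agreement with the surgeon-labeled ground truth.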

Conclusions:

This is one of the first studies to diagnose burn wounds of all depths and convert the segmentation results into %TBSA using different deep learning models. We aimed to assist medical staff in estimating burn size more accurately and thereby help provide precise care to burn victims.




© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.