
Accepted for/Published in: JMIR Medical Informatics

Date Submitted: Jun 3, 2024
Date Accepted: Jan 11, 2025

The final, peer-reviewed published version of this preprint can be found here:

Convolutional Neural Network Models for Visual Classification of Pressure Ulcer Stages: Cross-Sectional Study

Lei C, Jiang Y, Xu K, Liu S, Cao H, Wang C

Convolutional Neural Network Models for Visual Classification of Pressure Ulcer Stages: Cross-Sectional Study

JMIR Med Inform 2025;13:e62774

DOI: 10.2196/62774

PMID: 40135412

PMCID: 11962570

Convolutional neural network models for visual classification of pressure ulcer stages

  • Changbin Lei; 
  • Yan Jiang; 
  • Ke Xu; 
  • Shanshan Liu; 
  • Hua Cao; 
  • Cong Wang

ABSTRACT

Background:

Pressure ulcers (PUs), also called pressure injuries (PIs), negatively affect patients' health and pose a substantial economic burden on society. Accurate staging is key to the treatment of PUs. Deep learning (DL) algorithms using convolutional neural networks (CNNs) have achieved good classification performance on images of complicated skin diseases and therefore also have the potential to improve diagnostic accuracy in staging PUs.

Objective:

We explored the potential of applying different CNN architectures, namely AlexNet, VGGNet16, GoogLeNet, and ResNet 18, to PU staging, aiming to provide an effective tool to assist in evaluation.

Methods:

PU images from patients, including stage Ⅰ, stage Ⅱ, stage Ⅲ, stage Ⅳ, unstageable, and suspected deep tissue injury (SDTI), were collected at a tertiary hospital in China. To ensure class balance, we randomly selected an equal number of images from each stage to form the image dataset. Additionally, we enlarged the sample size through data augmentation. The collected images were then divided into training, validation, and test sets in a ratio of 6:2:2. Subsequently, we trained AlexNet, GoogLeNet, VGGNet16, and ResNet 18 on these sets to develop staging models.
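The 6:2:2 split over the balanced, augmented dataset can be illustrated with a minimal Python sketch. This is not the authors' code; the function and variable names are hypothetical, and the sketch only shows the shuffling and partitioning step, not model training.

```python
import random

def split_dataset(items, ratios=(0.6, 0.2, 0.2), seed=42):
    """Shuffle the items reproducibly and split them into
    train/validation/test subsets by the given ratios."""
    items = list(items)
    random.Random(seed).shuffle(items)
    n = len(items)
    n_train = int(n * ratios[0])
    n_val = int(n * ratios[1])
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])

# 6 stages x 500 augmented images each = 3000 images, as in the study
images = [(stage, idx) for stage in range(6) for idx in range(500)]
train, val, test = split_dataset(images)
print(len(train), len(val), len(test))  # 1800 600 600
```

With 3000 images, a 6:2:2 ratio yields 1800 training, 600 validation, and 600 test images.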

Results:

We collected 821 raw PU images with the following distribution across stages: stage Ⅰ (113), stage Ⅱ (113), stage Ⅲ (186), stage Ⅳ (108), unstageable (118), and SDTI (113). From these, 100 images per stage were selected, yielding a total of 3000 images after augmentation. The training, validation, and test sets were divided in a ratio of 6:2:2. Among all the CNN models, ResNet 18 demonstrated the highest accuracy (0.9333), precision (0.987), recall (0.933), and F1-score (0.959). AlexNet, GoogLeNet, and VGGNet16 achieved accuracies of 0.896, 0.75, and 0.625; precisions of 0.97, 0.95, and 0.953; recalls of 0.896, 0.75, and 0.953; and F1-scores of 0.935, 0.83, and 0.953, respectively.
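The metrics reported above can be computed from per-class counts of true and false positives. The sketch below assumes macro-averaging across the six stages (the abstract does not state the averaging convention, so this is an illustrative assumption, not the authors' exact procedure).

```python
def macro_metrics(y_true, y_pred, labels):
    """Overall accuracy plus macro-averaged precision, recall,
    and F1 over the given class labels."""
    precisions, recalls = [], []
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precisions.append(tp / (tp + fp) if tp + fp else 0.0)
        recalls.append(tp / (tp + fn) if tp + fn else 0.0)
    precision = sum(precisions) / len(labels)
    recall = sum(recalls) / len(labels)
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    return accuracy, precision, recall, f1

# Toy example with two stages
acc, prec, rec, f1 = macro_metrics(
    ["I", "I", "II", "II"], ["I", "II", "II", "II"], ["I", "II"])
print(round(acc, 3), round(prec, 3), round(rec, 3), round(f1, 3))
```

In practice one would apply this to the 600 test-set predictions of each trained model; libraries such as scikit-learn provide equivalent, well-tested implementations.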

Conclusions:

The CNN-based models demonstrated strong classification ability on PU images, which might promote highly efficient, low-cost PU diagnosis and staging.




© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.