Accepted for/Published in: JMIR Medical Informatics
Date Submitted: Jun 3, 2024
Date Accepted: Jan 11, 2025
Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.
Convolutional neural network models for visual classification of pressure ulcer stages
ABSTRACT
Background:
Pressure ulcers (PUs), also called pressure injuries (PIs), negatively affect patients' health and pose a substantial economic burden on society. Accurate staging is key to the treatment of PUs. Deep learning (DL) algorithms based on convolutional neural networks (CNNs) have achieved good classification performance on images of complicated skin diseases and therefore have the potential to improve diagnostic accuracy in staging PUs.
Objective:
We explored the potential of applying different CNN architectures, namely AlexNet, VGGNet16, GoogLeNet, and ResNet 18, to PU staging, aiming to provide an effective tool to assist in evaluation.
Methods:
PU images, including stage Ⅰ, stage Ⅱ, stage Ⅲ, stage Ⅳ, unstageable, and suspected deep tissue injury (SDTI), were collected from patients at a tertiary hospital in China. To ensure sample balance, we randomly selected an equal number of images from each stage to form the image dataset, and we further enlarged the sample size through data augmentation. The collected images were then divided into training, validation, and test sets in a ratio of 6:2:2 and used to train staging models based on AlexNet, GoogLeNet, VGGNet16, and ResNet 18.
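The 6:2:2 partitioning described above can be sketched as follows. This is a minimal illustration using only the standard library; the function name `split_dataset` and the fixed seed are assumptions for the example, not the authors' actual implementation.

```python
import random

def split_dataset(items, seed=42):
    """Shuffle samples and split them 6:2:2 into train/validation/test sets.

    Integer arithmetic (n * 6 // 10, n * 2 // 10) keeps the split exact
    for dataset sizes divisible by 10, such as the 3000 images here.
    """
    items = list(items)
    random.Random(seed).shuffle(items)  # fixed seed for reproducibility
    n = len(items)
    n_train = n * 6 // 10
    n_val = n * 2 // 10
    train = items[:n_train]
    val = items[n_train:n_train + n_val]
    test = items[n_train + n_val:]
    return train, val, test

# With the 3000 augmented images, a 6:2:2 split yields 1800/600/600.
train, val, test = split_dataset(range(3000))
```

In practice the items would be (image path, stage label) pairs rather than bare indices, and stratification by stage would keep each split balanced.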
Results:
We collected 821 raw PU images with the following distribution across stages: stage Ⅰ (113), stage Ⅱ (113), stage Ⅲ (186), stage Ⅳ (108), unstageable (118), and SDTI (113). We then selected 100 images per stage, obtaining a total of 3000 images after augmentation, which were divided into training, validation, and test sets in a ratio of 6:2:2. Among all the CNN models, ResNet 18 demonstrated the highest accuracy (0.9333), precision (0.987), recall (0.933), and F1 score (0.959). AlexNet, GoogLeNet, and VGGNet16 exhibited accuracies of 0.896, 0.75, and 0.625; precision values of 0.97, 0.95, and 0.953; recall values of 0.896, 0.75, and 0.953; and F1 scores of 0.935, 0.83, and 0.953, respectively.
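For readers reproducing the evaluation, the four metrics reported above can be computed from per-stage counts as sketched below. Macro-averaging over the six stages is an assumption here, since the abstract does not state the averaging scheme used.

```python
def classification_metrics(y_true, y_pred, labels):
    """Accuracy plus macro-averaged precision, recall, and F1 score."""
    pairs = list(zip(y_true, y_pred))
    accuracy = sum(t == p for t, p in pairs) / len(pairs)
    precisions, recalls, f1s = [], [], []
    for c in labels:
        tp = sum(1 for t, p in pairs if t == c and p == c)  # true positives
        fp = sum(1 for t, p in pairs if t != c and p == c)  # false positives
        fn = sum(1 for t, p in pairs if t == c and p != c)  # false negatives
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        precisions.append(prec)
        recalls.append(rec)
        f1s.append(f1)
    n = len(labels)
    return accuracy, sum(precisions) / n, sum(recalls) / n, sum(f1s) / n

# Toy example with three stages (not the study's data):
acc, prec, rec, f1 = classification_metrics(
    ["I", "II", "II", "III"], ["I", "II", "III", "III"], ["I", "II", "III"]
)
```

With the six PU stages, `labels` would be the six stage names and `y_true`/`y_pred` the annotated and predicted stages on the 600-image test set.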
Conclusions:
The CNN-based models demonstrated a strong ability to classify PU images, which might enable highly efficient, low-cost PU diagnosis and staging.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer-review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.