Currently submitted to: Journal of Medical Internet Research
Date Submitted: Jan 13, 2026
Open Peer Review Period: Jan 14, 2026 - Mar 11, 2026
(currently open for review)
Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.
Intelligent Identification of Pressure Injuries Using Multi-modal Deep Learning: A Scoping Review
ABSTRACT
Background:
The global prevalence of pressure injuries is high, and such injuries can lead to severe infection or death. Accurate staging is vital for effective intervention. Deep learning can streamline pressure injury assessment, improving efficiency and yielding practical, accurate results. This scoping review summarizes research on multi-modal deep learning for intelligent pressure ulcer recognition.
Objective:
To systematize the models, training methods, and outcomes reported in the literature and identify the best-performing systems for rapid detection and automated staging of pressure ulcers, with the goal of enhancing the timeliness, accuracy, and objectivity of diagnosis.
Methods:
We searched the following databases and sources: PubMed, the Cochrane Library, IEEE Xplore, and Web of Science. The scoping review was conducted in accordance with the JBI Scoping Review Methodology Group’s guidance and reported following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses—Extension for Scoping Reviews (PRISMA-ScR) guidelines. The study protocol was registered with the International Prospective Register of Systematic Reviews (PROSPERO) on 12 December 2025 (registration number: CRD420251251573).
Results:
A total of 15 articles were included, covering 26 models: AlexNet, VGG16, ResNet18, DenseNet121, SE-Swin Transformer, Cascade R-CNN, vision transformer (ViT), ConvNeXtV2, EfficientNetV2, MetaFormer, TinyViT, CCM, BCM, ResNeXt + wFPN, SE-Inception, Mask R-CNN, SE-ResNeXt101, Faster R-CNN, ResNet50, ResNet152, DenseNet201, EfficientNet-B4, YOLOv5, Inception-ResNet-v2, InceptionV3, and MobileNetV2. Training an intelligent pressure ulcer recognition model involves establishing an image database, preprocessing the images, and constructing the recognition model. Models differed in staging accuracy, with overall accuracy ranging from 54.84% to 93.71%. DenseNet121 achieved the highest recognition accuracy (93.71%), while VGG16 was the most widely applied. The same model showed significant variation in recognition accuracy across studies.
Conclusions:
The multi-modal and deep learning-based intelligent recognition model for pressure injuries demonstrates high overall accuracy, enabling rapid automated staging of such injuries. Future research may explore optimized intelligent assistance systems to enhance the accuracy, objectivity, and efficiency of pressure injury diagnosis.
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.