Currently submitted to: JMIR Medical Informatics

Date Submitted: Jun 5, 2025
Open Peer Review Period: Jun 25, 2025 - Aug 20, 2025

NOTE: This is an unreviewed preprint.

Warning: This is an unreviewed preprint. Readers are warned that the document has not been peer-reviewed by expert/patient reviewers or an academic editor, may contain misleading claims, and is likely to undergo changes before final publication, if accepted, or may have been rejected/withdrawn (in which case a note "no longer under consideration" will appear above).


An intelligent segmentation system and online platform for renal tumor CT images based on the GAM-DeepLabV3+ network

  • Yueyan Zhao; 
  • Jianqiang Liu; 
  • Lingyu Shao; 
  • Lin Li; 
  • Zhaoqing Liu; 
  • Yujie Liu; 
  • Jiaxin Wen; 
  • Xinyao Hao; 
  • Shuyan Li; 
  • Jianhong Zhao; 
  • Boming Song

ABSTRACT

Background:

Renal tumors represent one of the most common malignancies worldwide, with incidence rates continuing to rise. Early detection and precise treatment are crucial for effective disease management. Accurate segmentation of renal tumors in CT images plays a critical role in lesion localization and radiotherapy planning. However, current segmentation methods largely depend on manual delineation by radiologists, which is both time-consuming and subject to inter-observer variability due to tumor heterogeneity, posing significant challenges for automation.

Objective:

This study aims to develop and validate an automated renal tumor segmentation algorithm that addresses the challenges of blurred tumor boundaries and false positives in CT imaging, thereby enhancing clinical decision-making and treatment planning.

Methods:

We propose GAM-DeepLabV3+, an automatic segmentation model built upon the DeepLabV3+ encoder-decoder framework. The architecture integrates three key innovations. First, an enhanced MobileNetV2 backbone combined with a spatial pyramid pooling layer is employed to extract comprehensive low-level features and critical tumor information from CT scans. Second, a Global Attention Mechanism (GAM) module is incorporated into the decoder to efficiently fuse deep and shallow features, improving boundary delineation. Third, multi-scale feature integration enables the network to adaptively focus on tumor regions with varying sizes and shapes. The model was trained and evaluated on both a private dataset and the publicly available KiTS19 dataset.
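The abstract names a Global Attention Mechanism (GAM) module in the decoder but does not spell out its internals. As one illustrative sketch only (not the authors' implementation), GAM-style channel attention can be expressed as a two-layer MLP applied across the channel dimension at each spatial location, followed by a sigmoid gate that reweights the input feature map. All function names, weight shapes, and the reduction ratio below are assumptions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gam_channel_attention(x, w1, b1, w2, b2):
    """GAM-style channel attention (illustrative sketch).

    A two-layer MLP is applied over the channel dimension at each
    spatial location; its sigmoid output gates the input features.
    x: feature map of shape (C, H, W).
    w1: (C, C//r), w2: (C//r, C) with reduction ratio r.
    """
    c, h, w = x.shape
    feat = x.transpose(1, 2, 0).reshape(-1, c)       # (H*W, C)
    hidden = np.maximum(feat @ w1 + b1, 0.0)         # ReLU bottleneck
    gate = sigmoid(hidden @ w2 + b2)                 # per-channel gate in (0, 1)
    gate = gate.reshape(h, w, c).transpose(2, 0, 1)  # back to (C, H, W)
    return x * gate

# Example with random weights and reduction ratio r = 4
rng = np.random.default_rng(0)
c, h, w, r = 8, 4, 4, 4
x = rng.standard_normal((c, h, w))
w1 = rng.standard_normal((c, c // r)) * 0.1
w2 = rng.standard_normal((c // r, c)) * 0.1
out = gam_channel_attention(x, w1, np.zeros(c // r), w2, np.zeros(c))
```

Because the gate lies in (0, 1), the module attenuates rather than amplifies features; in the full architecture this reweighting is what helps the decoder fuse deep and shallow features around tumor boundaries.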

Results:

Experimental evaluation demonstrates that GAM-DeepLabV3+ achieves superior segmentation performance, with Dice coefficients of 0.92 on the private dataset and 0.98 on the KiTS19 dataset. These results significantly outperform conventional methods, particularly in cases with complex tumor morphology. Furthermore, we developed a freely accessible online platform (http://www.cppdd.cn/KAI) to facilitate clinical application and support preoperative planning.
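The Dice coefficient reported above is the standard overlap metric for segmentation masks, defined as 2|A ∩ B| / (|A| + |B|). A minimal NumPy sketch for binary masks (the function name and epsilon smoothing term are illustrative, not from the paper):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient for binary masks.

    Computes 2*|A ∩ B| / (|A| + |B|); eps avoids division by
    zero when both masks are empty.
    """
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Example: two pixels predicted, one of which matches the single target pixel
pred = np.array([1, 1, 0, 0])
target = np.array([1, 0, 0, 0])
score = dice_coefficient(pred, target)  # 2*1 / (2+1) ≈ 0.667
```

A Dice score of 1.0 means perfect overlap, so the reported values of 0.92 (private dataset) and 0.98 (KiTS19) indicate near-complete agreement between predicted and reference tumor masks.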

Conclusions:

The proposed GAM-DeepLabV3+ model provides accurate, efficient, and fully automated renal tumor segmentation, reducing dependence on manual annotation while maintaining clinical-grade precision. By addressing key challenges in renal tumor imaging, our approach offers valuable support for surgical planning and treatment, and holds promise for broader integration into clinical workflows.


 Citation

Please cite as:

Zhao Y, Liu J, Shao L, Li L, Liu Z, Liu Y, Wen J, Hao X, Li S, Zhao J, Song B

An intelligent segmentation system and online platform for renal tumor CT images based on the GAM-DeepLabV3+ network

JMIR Preprints. 05/06/2025:78523

DOI: 10.2196/preprints.78523

URL: https://preprints.jmir.org/preprint/78523


© The authors. All rights reserved. This is a privileged document currently under peer-review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.