ESTRO 2025 - Abstract Book
Physics - Machine learning models and clinical applications
1558
Digital Poster

Deep-learning-based body-cavity segmentation to accurately preserve sliding motion in DIR-based super-resolution reconstruction of time-resolved 4DMRI

Harjinder Pawar 1, Sharif Elguindi 1, Xingyu Nie 2, Guang Li 1
1 Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, USA. 2 Department of Radiology, Kentucky University College of Medicine, Lexington, USA

Purpose/Objective: Sliding-motion artifact is a major problem in deformable image registration (DIR) of thoracic and abdominal images because of motion discontinuity across the body-cavity interface. The problem can be resolved by segmenting the body cavity and applying separate DIRs to the internal organs and the external body. However, this segmentation cannot be achieved automatically with conventional tools, and manual segmentation is impractical. This study aims to develop and evaluate an automatic body-cavity segmentation tool based on a deep-learning approach.

Material/Methods: A published deep-learning network was applied in this study to build a model for body-cavity segmentation 1,2. Sixteen lung cancer patients underwent T1-weighted 3D-cine coronal scans at breath-hold (BH) under an IRB-approved protocol. The scanning field of view was large, covering the full thorax and most of the abdomen. Manual segmentation of the body cavity was performed for all 16 patients to establish the ground truth: nine patients' contours were used for training and the remaining seven for testing. Body-cavity contours were drawn manually in MIM (MIM Maestro), and workflows were created to submit requests for deep-learning network training and prediction outside MIM. Each model-generated body-cavity contour was automatically loaded into MIM and compared with the corresponding manual contour (ground truth) using the mean distance to agreement (MDA) and the Dice similarity index. Separate DIRs for the internal organs and the external body shell were applied, followed by an integrated final DIR fine-tuning, and assessed near the cavity interface around the diaphragm and spine in comparison with single-DIR results.

Results: Training the deep-learning model on nine patients' manual body-cavity contours took about 18.6 hours, whereas generating a contour from a BH image with the trained model took only 1.5 minutes. Across the seven test patients, the average MDA between predicted and manual body-cavity contours was 2.1 ± 0.8 mm and the Dice index was 0.96 ± 0.01. The auto-contours were much smoother than the manual body-cavity contours, without zigzags in all three views (coronal/sagittal/axial), facilitating the two-DIR approach. The two-DIR results showed reduced sliding-motion artifacts, including a better-matched diaphragm near the cavity interface and a motionless spine without distortion.

Conclusion: These preliminary results demonstrate that deep-learning-based segmentation can accurately define the body-cavity interface, enabling two separate DIRs that minimize sliding-motion artifacts. A larger training dataset may be needed to further improve the reliability and applicability of the deep-learning segmentation model across a larger patient population.
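The two evaluation metrics named in the Methods, the Dice similarity index and the mean distance to agreement (MDA), can be sketched as follows. This is a minimal illustration assuming binary NumPy masks and a known voxel spacing in mm; the function names are illustrative, and this is not the MIM-based evaluation pipeline used in the study.

```python
import numpy as np
from scipy import ndimage


def dice(a, b):
    """Dice similarity index between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0


def mean_distance_to_agreement(a, b, spacing=(1.0, 1.0, 1.0)):
    """Symmetric mean surface distance (MDA) in mm between two binary masks.

    Surface voxels are extracted by erosion; each surface voxel of one mask
    contributes its Euclidean distance to the nearest surface voxel of the
    other mask, and all such distances are averaged.
    """
    def surface(m):
        return m & ~ndimage.binary_erosion(m)

    a, b = a.astype(bool), b.astype(bool)
    sa, sb = surface(a), surface(b)
    # Distance from every voxel to the nearest surface voxel of the other mask
    # (distance_transform_edt measures distance to the nearest zero voxel).
    da = ndimage.distance_transform_edt(~sb, sampling=spacing)
    db = ndimage.distance_transform_edt(~sa, sampling=spacing)
    dists = np.concatenate([da[sa], db[sb]])
    return dists.mean()
```

For identical masks the Dice index is 1.0 and the MDA is 0 mm; as the predicted contour drifts from the manual one, Dice falls toward 0 and the MDA grows with the average surface separation.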
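The two-DIR approach starts by using the body-cavity mask to separate a volume into an internal part (organs inside the cavity) and an external part (body shell), each of which is then registered independently. A minimal sketch of that split, assuming NumPy volumes; the function name and the constant fill value are illustrative assumptions, and the DIR algorithm itself is outside the scope of this snippet:

```python
import numpy as np


def split_by_cavity(image, cavity_mask, fill_value=0.0):
    """Split a volume along the body-cavity mask so the internal organs and
    the external body shell can be deformably registered separately,
    avoiding the sliding-motion discontinuity at the cavity interface.
    (Illustrative sketch; the fill strategy is an assumption.)"""
    cavity_mask = cavity_mask.astype(bool)
    internal = np.where(cavity_mask, image, fill_value)  # inside the cavity
    external = np.where(cavity_mask, fill_value, image)  # body shell/outside
    return internal, external
```

With a zero fill value the two parts sum back to the original image, so no voxel intensity is lost in the split; the two resulting deformation fields would then be recombined for the integrated final fine-tuning step.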
Keywords: Deep-learning autocontour, Deformable registration
References:
1. Wang C, Tyagi N, Rimner A, et al. Segmenting lung tumors on longitudinal imaging studies via a patient-specific adaptive convolutional neural network. Radiother Oncol. 2019;131:101-107.
2. Jiang J, Hong J, Tringale K, Reyngold M, Crane C, Tyagi N, Veeraraghavan H. Progressively refined deep joint registration segmentation (ProRSeg) of gastrointestinal organs at risk: application to MRI and cone-beam CT. Med Phys. 2023;50(8):4758-4774.