ESTRO 2023 - Abstract Book


Saturday 13 May


In addition to the developed network that tackled the very complex task of automatically defining target volumes from MRI images, we presented an interpretation of the generalizability of automatic GTV segmentation in LACC trained on heterogeneous MR images, which is of primary interest for real-world applications. In future work, we will extract complementary quality-based features to enhance the identification of segmentation failures.

PD-0334 Techniques to optimize auto-segmentation of small OARs in pediatric patients undergoing CSI

J. Tsui 1, M. Popovic 2, O. Ates 3, C. Hua 3, J. Schneider 4, S. Skamene 1, C. Freeman 1, S. Enger 2

1 McGill University Health Centre, Radiation Oncology, Montreal, Canada; 2 McGill University, Medical Physics Unit, Montreal, Canada; 3 St. Jude Children's Research Hospital, Radiation Oncology, Memphis, USA; 4 Jewish General Hospital, Radiation Oncology, Montreal, Canada

Purpose or Objective
Organ at risk (OAR) auto-segmentation can decrease inter-observer variability and help with the quality assurance of contouring. The development and training of deep learning (DL) algorithms are highly complex, particularly in pediatric cases requiring craniospinal irradiation (CSI), which involves multiple OARs exhibiting significant differences in size and in Hounsfield unit (HU) values. The DL model nnUNet (Isensee et al. 2021) can obviate many difficulties associated with preprocessing, choice of network architecture, and model training through its self-configuring capability. It is relatively easy to implement but requires extensive computing power and lengthy training times. Its performance may also degrade when large, high-contrast structures and small (relative to the background), low-contrast structures, such as the lungs and the optic chiasm, are segmented in the same task and on the same CT scan.
We hypothesize that performance can be improved by: 1) breaking the task into subtasks that contour structures of similar size, location, and contrast level; 2) implementing different HU windowing schemes for different subtasks; and 3) implementing a loss function that better accounts for class imbalance. We focused on the optic structures due to their relatively poorer performance compared to other structures cited in the literature.

Materials and Methods
We collected the planning CT scans of pediatric patients undergoing CSI and reviewed all the contours. Of 36 patients in total, 29 were used for training and 7 for validation. We cropped the images to exclude structures outside the body (mask, couch, etc.) and kept only the image slices containing optic structures. We first implemented the 2D nnUNet framework to auto-segment 7 structures: eyes, lenses, optic nerves, and optic chiasm. We then compared the 2D nnUNet results with a basic 2D UNet incorporating two changes: 1) preprocessing the images by clipping the HU values to the range of the target structures; and 2) implementing a Unified Focal Loss (UFL; Yeung et al. 2021) to account for class imbalance. We trained the models and inferred the output labels on the validation dataset. We then computed the Dice similarity coefficient (DICE) between the predicted labels and the ground truths and compared performance between the two models.

Results
The following mean (±SD) DICE scores were obtained for the two models.

Structure        nnUNet          windowing + UNet + UFL
Eye_L            0.47 (±0.41)    0.85 (±0.15)
Eye_R            0.49 (±0.40)    0.85 (±0.13)
Lens_L           0.42 (±0.40)    0.77 (±0.24)
Lens_R           0.42 (±0.40)    0.80 (±0.18)
Optic_Nerve_L    0.35 (±0.37)    0.68 (±0.19)
Optic_Nerve_R    0.34 (±0.38)    0.65 (±0.24)
Chiasm           0.27 (±0.30)    0.49 (±0.24)
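The two ingredients compared above, HU window clipping before training and DICE scoring of the predictions, can be sketched as follows. This is a minimal numpy illustration, not the authors' code: the function names and the example window bounds are assumptions, and the per-subtask HU ranges are not given in the abstract.

```python
import numpy as np

def clip_hu(ct, window):
    """Clip CT intensities to the HU range of the target structures.

    `window` is a (low, high) tuple in HU; the values used per subtask
    are not reported in the abstract, so callers supply their own.
    """
    low, high = window
    return np.clip(ct, low, high)

def dice(pred, truth):
    """Dice similarity coefficient between two binary masks."""
    pred = np.asarray(pred).astype(bool)
    truth = np.asarray(truth).astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom
```

A soft-tissue window such as `clip_hu(ct, (-200, 300))` would suppress the lung/bone extremes that otherwise dominate normalization when large and small structures share one task.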

Conclusion
Adjusting the contrast window and using UFL as the loss function drastically improved segmentation performance. Future work includes extending nnUNet to incorporate these two changes to auto-segment all the OARs of pediatric patients undergoing CSI.

PD-0335 A comparison between 2D and 3D GAN as a supporting tool for rectum segmentation on 0.35 T MR images

M. Vagni 1, H.E. Tran 1, A. Romano 1, L. Boldrini 1, G. Chiloiro 1, G. Landry 2, C. Kurz 2, S. Corradini 2, M. Kawula 2, E. Lombardo 2, M.A. Gambacorta 1, L. Indovina 1, C. Belka 2,3, V. Valentini 1, L. Placidi 1, D. Cusumano 1

1 Fondazione Policlinico Universitario "A. Gemelli" IRCCS, Department of Radiation Oncology, Rome, Italy; 2 LMU Munich, Department of Radiation Oncology, Munich, Germany; 3 German Cancer Consortium (DKTK), Department of Radiation Oncology, Munich, Germany

Purpose or Objective
Manual recontouring of targets and OARs is a particularly time-consuming and operator-dependent task, which today represents a limiting factor in the online MR-guided radiotherapy (MRgRT) workflow. Rectum contouring in particular can be challenging due to the organ's morphology and the presence of adjacent structures with similar intensities, making delineation at the rectosigmoid and anorectal junctions difficult. In this study, we explored the potential of two supporting neural networks able to automatically segment the rectum once its apical and caudal anatomical limits are indicated by the clinician.

Materials and Methods
0.35 T 3D simulation MR scans from 72 prostate cancer patients treated on an MR-Linac were collected. The rectum delineations used in clinical practice and validated by two radiation oncologists represented the ground truth. Patient volumes were resampled to the same spatial resolution (1.5 mm³) and corrected by removing bias field artefacts through a dedicated image pre-processing pipeline.
A 3D Generative Adversarial Network (GAN), composed of a UNet generator and a PatchGAN discriminator, and its modified 2D version were trained on 53 patients, validated on 10 patients, and tested on the remaining 9 cases.
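The isotropic-resampling step of the preprocessing pipeline above can be sketched as follows. This is an illustrative assumption, not the study's pipeline: the function name, the use of scipy, and the linear interpolation order are all choices made for the sketch, and the bias-field correction step (typically an N4-style filter) is not shown.

```python
import numpy as np
from scipy.ndimage import zoom

def resample_isotropic(volume, spacing, target=1.5):
    """Resample a 3D MR volume to isotropic voxels.

    `spacing` gives the input voxel size per axis in mm; `target` is the
    desired isotropic spacing (1.5 mm, as in the study). Linear
    interpolation (order=1) is an illustrative choice.
    """
    factors = [s / target for s in spacing]
    return zoom(np.asarray(volume, dtype=float), factors, order=1)
```

For example, a 30×30×30 volume at 1.0 mm spacing resamples to a 20×20×20 volume at 1.5 mm, since each axis shrinks by the factor 1.0/1.5.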
