ESTRO 2024 - Abstract Book


Physics - Autosegmentation



2464

Digital Poster

Automatic AI-based segmentation of liver cancer and organs-at-risk for MR-guided radiotherapy

Dominik Langner 1 , Cihan Gani 2,3 , Maximilian Niyazi 2,3 , Daniela Thorwarth 1,3

1 Section for Biomedical Physics, Department of Radiation Oncology, University Hospital Tübingen, Tübingen, Germany. 2 Department of Radiation Oncology, University Hospital Tübingen, Tübingen, Germany. 3 German Cancer Consortium (DKTK), partner site Tübingen; and German Cancer Research Center (DKFZ), Heidelberg, Germany

Purpose/Objective:

Accurate delineation of organs at risk (OAR) and the gross tumor volume (GTV) is a prerequisite for high-precision radiotherapy. However, manual delineation is resource- and time-intensive and prone to inter- and intra-observer variation. The development of automatic segmentation methods based on machine learning algorithms is therefore crucial for online adaptive radiotherapy applications. While autosegmentation of OAR has already been integrated into the clinical treatment planning workflow of MR-guided radiotherapy (MRgRT) [1, 2], no deep learning-based model currently supports co-annotation of GTV and OAR. The aim of this study was therefore to develop deep learning models operating on different image modalities and to compare the accuracy of the resulting OAR and GTV delineations with expert planning contours.
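Agreement between automatic and expert delineations is commonly quantified with the Dice similarity coefficient. The abstract does not state which metric was used, so the following is only an illustrative sketch; the function name and the toy masks are assumptions, not part of the study.

```python
import numpy as np

def dice_coefficient(auto_mask: np.ndarray, expert_mask: np.ndarray) -> float:
    """Dice similarity coefficient between two binary segmentation masks."""
    auto = auto_mask.astype(bool)
    expert = expert_mask.astype(bool)
    intersection = np.logical_and(auto, expert).sum()
    total = auto.sum() + expert.sum()
    if total == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * intersection / total

# Toy 2D example: two partially overlapping square "structures"
auto = np.zeros((10, 10), dtype=bool)
expert = np.zeros((10, 10), dtype=bool)
auto[2:6, 2:6] = True    # 16 voxels
expert[3:7, 3:7] = True  # 16 voxels, 9 of them shared
print(dice_coefficient(auto, expert))  # 2*9 / (16+16) = 0.5625
```

In a real evaluation the masks would be full 3D label volumes per structure, and the coefficient would be reported per OAR and per GTV.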

Material/Methods:

Clinical datasets of 127 patients with liver cancer or liver metastases treated at the 1.5 T MR-Linac (Unity, Elekta AB) between February 2019 and July 2023 were retrospectively included in the study. All patients were part of a clinical trial approved by the local institutional review board (NCT04172753) and had given written informed consent. For all patients, planning computed tomography (CT) images and the associated clinical contours of heart, stomach, left/right kidney, spinal cord, liver, and single or multiple GTVs were available in addition to T2-weighted magnetic resonance images (MRI). The CT images and their contours were rigidly registered to the corresponding MRI. In total, the training dataset comprised 133 patient-specific CT, MRI and modality-associated contour datasets; the discrepancy between the number of cases and the patient population is attributable to re-presentation of patients or the presence of multiple lesions. Based on different input configurations (CT-only, MRI-only and combined), three 3D U-Net models that automatically segment all investigated anatomical structures were trained using an adapted version of the self-configuring nnU-Net framework [3]. Task-specific, performance-enhancing modifications included a divergent loss function, a different normalization scheme, region-based training, more aggressive data augmentation, and several minor changes to the nnU-Net baseline implementation. For model testing, MRI and/or CT data of ten patients not included in the training dataset were randomly selected. The gold standard
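The rigid registration step maps CT-defined contours into MRI space via a rotation and a translation. The abstract does not report the registration software or parameters, so the sketch below is purely hypothetical: it only shows how a known rigid transform (here, an in-plane rotation plus a translation) would be applied to contour points given as N×3 coordinates in millimetres.

```python
import numpy as np

def apply_rigid_transform(points: np.ndarray,
                          rotation_deg: float,
                          translation: np.ndarray) -> np.ndarray:
    """Apply a rigid transform (rotation about the z-axis + translation)
    to an array of N x 3 contour points (coordinates in mm)."""
    theta = np.deg2rad(rotation_deg)
    c, s = np.cos(theta), np.sin(theta)
    rotation = np.array([[c, -s, 0.0],
                         [s,  c, 0.0],
                         [0.0, 0.0, 1.0]])
    return points @ rotation.T + translation

# Toy example: shift CT contour points by 5 mm in x, no rotation
ct_points = np.array([[0.0, 0.0, 0.0],
                      [10.0, 0.0, 0.0]])
mr_points = apply_rigid_transform(ct_points, rotation_deg=0.0,
                                  translation=np.array([5.0, 0.0, 0.0]))
print(mr_points)  # [[ 5.  0.  0.]
                  #  [15.  0.  0.]]
```

In practice the transform itself would be estimated by an intensity-based registration tool (e.g. via mutual information), and the same transform would be applied to every contour of the dataset.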
