ESTRO 2024 - Abstract Book
S3121
Physics - Autosegmentation
Zahra Khodabakhshi 1, Yixing Huang 2, Rainer Fietkau 2, Matthias Guckenberger 1, Stephanie Tanadini-Lang 1, Florian Putz 2, Nicolaus Andratschke 1
1 University Hospital Zurich, Department of Radiation Oncology, Zurich, Switzerland. 2 University Hospital Erlangen, Department of Radiation Oncology, Erlangen, Germany
Purpose/Objective:
Stereotactic radiosurgery, a crucial treatment modality for patients with brain metastases, hinges on the precise identification of these lesions during treatment planning. This step, while indispensable, is demanding and time-consuming, particularly in cases with multiple brain metastases. Recently, there has been a notable surge in deep learning models for the automated detection and delineation of brain metastases. These advances hold significant promise for expediting treatment planning, but many models suffer from a considerable rate of false positives, which undermines their reliability. Moreover, current segmentation methods do not consider the location of brain metastases, even though several studies have indicated that metastases do not occur at random and that some brain regions carry a higher risk of metastasis [1]. The overall aim of this study was to investigate whether adding spatial information about metastases to a deep learning model can improve segmentation/detection performance, specifically in terms of reducing false positives.

Material/Methods:
Three datasets of T1 contrast-enhanced MRI scans were used in this study: datasets from University Hospital Zurich and Erlangen University Hospital, and the public UCSF brain metastases dataset [2]. Table 1 summarizes the number of scans used from each center for training and testing. The preprocessing pipeline included resampling to an isotropic voxel size of 1 mm, N4 bias field correction, skull stripping, and intensity normalization to zero mean and unit standard deviation. The baseline model was DeepMedic, trained with a volume-level sensitivity-specificity loss and the binary cross entropy loss [3].
To incorporate the parcellation into the deep learning model, each subject was registered to MNI space using affine registration, and the individual labels of the Harvard-Oxford atlas were then mapped to each subject. The parcellation volume consists of 21 integer class labels representing different anatomical regions and is concatenated with each MRI scan as an additional channel of the network input. For each dataset, the model was trained and tested twice, first without and then with brain parcellation. Finally, the results in terms of Dice score, false positive rate, and false negative rate were compared before and after applying brain parcellation. The PyTorch DeepMedic network modified from [4] was used for training and testing.
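As an illustration, the resampling and intensity-normalization steps of such a preprocessing pipeline could be sketched as follows. This is a minimal NumPy/SciPy sketch, not the authors' implementation; N4 bias field correction and skull stripping are typically done with dedicated tools (e.g. SimpleITK, HD-BET) and are omitted here, and all function names are our own.

```python
import numpy as np
from scipy import ndimage


def resample_isotropic(volume, spacing, new_spacing=(1.0, 1.0, 1.0)):
    """Resample a 3D volume to an isotropic voxel size via linear interpolation."""
    factors = [s / n for s, n in zip(spacing, new_spacing)]
    return ndimage.zoom(volume, zoom=factors, order=1)


def zscore_normalize(volume, mask=None):
    """Normalize intensities to zero mean and unit standard deviation.

    If a brain mask is supplied (e.g. from skull stripping), the statistics
    are computed over brain voxels only but applied to the whole volume.
    """
    voxels = volume[mask > 0] if mask is not None else volume
    return (volume - voxels.mean()) / voxels.std()


# Example: a dummy scan with 2 mm voxels, resampled to 1 mm and normalized.
scan = np.random.rand(64, 64, 64).astype(np.float32)
iso = resample_isotropic(scan, spacing=(2.0, 2.0, 2.0))
norm = zscore_normalize(iso)
print(iso.shape)  # (128, 128, 128)
```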
Table 1. Summary of the data used in this study

Data       Train   Test   Validation
Erlangen   600     103    57
UCSF       410     323    0
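The parcellation-channel construction described in Material/Methods, where the atlas labels (already mapped into subject space) are stacked with the MRI scan as a second input channel, can be sketched as follows. This is a minimal NumPy illustration under the assumption that registration and atlas mapping have already been performed; the function name is our own.

```python
import numpy as np


def add_parcellation_channel(mri, parcellation):
    """Stack a normalized MRI volume and its brain parcellation into a
    single channel-first network input of shape (2, D, H, W)."""
    assert mri.shape == parcellation.shape, "atlas must be in subject space"
    # Integer region labels (0 = background, 1..21 = atlas regions) are cast
    # to float so both channels share one dtype.
    return np.stack([mri, parcellation.astype(np.float32)], axis=0)


# Example: a dummy 1 mm scan with a random 21-region parcellation.
mri = np.random.randn(96, 96, 96).astype(np.float32)
parc = np.random.randint(0, 22, size=(96, 96, 96))
x = add_parcellation_channel(mri, parc)
print(x.shape)  # (2, 96, 96, 96)
```

Feeding the label map as a raw integer channel keeps the input compact; a one-hot encoding of the 21 regions would be a plausible alternative at the cost of 21 extra channels.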