ESTRO 2021 Abstract Book

Conclusion
When applying relatively large NNs to datasets on the order of a few thousand images, transfer learning can substantially improve the final performance of the NNs. However, contrary to previous findings (Romero et al. 2020), our results show that the size of the source dataset is more important than its similarity to the target dataset. For datasets smaller than 1000 images, a similar source dataset might be beneficial.

PO-1663 Interactive deep-learning based tumor segmentation
Z. Wei1, J. Ren2, J.G. Eriksen3, S.S. Korreman2, J.A. Nijkamp1

1Aarhus University, Department of Clinical Medicine - DCPT - Danish Center for Particle Therapy, Aarhus, Denmark; 2Aarhus University, Department of Clinical Medicine - The Department of Oncology, DCPT - Danish Center for Particle Therapy, Aarhus, Denmark; 3Aarhus University, Department of Clinical Medicine - Department of Experimental Clinical Oncology, Aarhus, Denmark

Purpose or Objective
Due to substantial inter-observer variation (IOV), manual tumor segmentation is one of the weakest links in radiotherapy. Much research is focused on developing automatic segmentation tools, especially using deep learning (DL), to limit the IOV as well as the segmentation time. For supervised learning, the IOV and the inter-patient variation in tumor appearance in the training data directly limit the achievable segmentation accuracy. Sub-optimal segmentations need substantial corrections from physicians and limit clinical implementation. Instead of aiming for the perfect segmentation tool, we aimed to develop an interactive DL segmentation tool, which limits the need for corrections by learning from them while they are being performed.

Materials and Methods
In this study, we used CT, PET, and MRI (T1- and T2-weighted) data from 153 HNC patients, all deformably registered to the CT. The clinical tumor and pathologic lymph node delineations were combined as our ground-truth segmentation target. We chose a 2D U-Net convolutional neural network (CNN) architecture to enable slice-by-slice updates of our network in the interactive stage. The CNN was first trained with a DICE loss function, using a 70/15/15% split of the data for training/validation/testing, to obtain our baseline auto-segmentation tool (an illustrative sketch of such a loss is given after the Results). The interactive tool was subsequently developed and evaluated on the test set (n=24; on average 24 slices containing tumor per patient, range 12-49), using the baseline auto-segmentation as the starting point. Manual adaptation of the auto-segmentation was simulated by replacing the slice with the largest tumor area with the ground-truth delineation. This slice was then augmented, 10 or 20 times, and fed into our baseline CNN for 5 or 10 iterations, to make the CNN both patient and observer specific (a sketch of one such adaptation round also follows the Results). Subsequently, the segmentation accuracy of the entire tumor was re-evaluated. In the next interactive round, a second slice was replaced by the ground truth, augmented, and fed into the CNN together with the first adapted slice. We simulated up to 10 interactive rounds per patient. Performance of the interactive CNN was evaluated using DICE and the time needed for each interactive round.

Results
The average baseline segmentation accuracy was 0.73 DICE (range 0.48-0.86). The best results were obtained with 10 iterations and 20 augmentations per adapted slice (Fig. 1). The DICE improved for all but two patients. On average, DICE improved by 12% (range -13% to 42% per patient), reaching 0.85 at interactive round 10. With these parameters, 37 seconds (range 15-62) were needed for each adaptation.
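For illustration only: the abstract states that the baseline 2D U-Net was trained with a DICE loss, but does not give the exact formulation. The following is a minimal sketch of a soft Dice loss as commonly implemented in PyTorch; the class name, the smoothing term, and the binary, single-channel setup are assumptions, not the authors' published code.

```python
# Minimal sketch of a soft Dice loss for binary 2D segmentation.
# Assumed formulation; the abstract does not specify the exact variant used.
import torch
import torch.nn as nn


class SoftDiceLoss(nn.Module):
    def __init__(self, smooth: float = 1.0):
        super().__init__()
        self.smooth = smooth  # avoids division by zero on slices without tumor

    def forward(self, logits: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        # logits, target: (batch, 1, H, W); target is a binary mask
        probs = torch.sigmoid(logits)
        dims = (1, 2, 3)
        intersection = (probs * target).sum(dims)
        denominator = probs.sum(dims) + target.sum(dims)
        dice = (2.0 * intersection + self.smooth) / (denominator + self.smooth)
        # Per-sample Dice, averaged over the batch; loss = 1 - Dice
        return 1.0 - dice.mean()
```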
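The adaptation step can likewise be sketched under stated assumptions: each interactive round fine-tunes the network on all slices corrected so far, with each corrected slice augmented a fixed number of times. The function below reuses the `SoftDiceLoss` class from the previous sketch; the optimizer, learning rate, the choice to start each round from a fresh copy of the baseline model, and the `augment` callback are hypothetical details not specified in the abstract.

```python
# Hedged sketch of one interactive adaptation round: fine-tune a copy of the
# baseline 2D U-Net on the manually corrected slices collected so far.
import copy
import torch


def interactive_round(baseline_model, corrected_slices, n_augment=20,
                      n_iters=10, lr=1e-4, augment=None, device="cpu"):
    """corrected_slices: list of (image, mask) tensors of shape (C, H, W) / (1, H, W).
    Returns a patient- and observer-specific copy of the baseline CNN."""
    model = copy.deepcopy(baseline_model).to(device).train()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)  # assumed optimizer/lr
    loss_fn = SoftDiceLoss()

    # Augment every corrected slice to form a small patient-specific batch.
    imgs, masks = [], []
    for img, mask in corrected_slices:
        for _ in range(n_augment):
            a_img, a_mask = (augment(img, mask) if augment is not None
                             else (img, mask))
            imgs.append(a_img)
            masks.append(a_mask)
    imgs = torch.stack(imgs).to(device)    # (n_slices * n_augment, C, H, W)
    masks = torch.stack(masks).to(device)  # (n_slices * n_augment, 1, H, W)

    # Few-iteration fine-tuning on the corrected slices only.
    for _ in range(n_iters):
        optimizer.zero_grad()
        loss = loss_fn(model(imgs), masks)
        loss.backward()
        optimizer.step()
    return model.eval()
```

Starting each round from a copy of the baseline (rather than updating it in place) is one possible design choice that keeps the shared baseline reusable across patients; the abstract does not state which variant was used.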
