ESTRO 2023 - Abstract Book

experts but significantly better for the PCM_low and Oral Cavity. In addition, the nnU-net was better than the experts in the delineation of the Lips. The oesophagus is not shown in the figures; both algorithms segmented its entire length, whereas the experts only segmented the oesophagus in the head and neck region. As a result, the MSD differences were substantial even though the segmentations were very similar in the region relevant to HN cancer radiotherapy.

Conclusion
Even though both networks were trained on only 50 patients, they segmented all organs with a precision similar to that of clinical experts. Following publication, both networks will be made publicly available: the nnU-net network will be freely available for all to download, and MIM will distribute Protege AI to eligible customers after regulatory approval.

OC-0119 Interactive deep-learning for tumour segmentation in head and neck cancer radiotherapy

Z. Wei 1, J. Ren 1, J.G. Eriksen 2, S.S. Korreman 3, J.A. Nijkamp 1
1 Aarhus University, Department of Clinical Medicine - DCPT - Danish Center for Particle Therapy, Aarhus, Denmark; 2 Aarhus University, Department of Clinical Medicine - Department of Experimental Clinical Oncology, Aarhus, Denmark; 3 Aarhus University, Department of Clinical Medicine - The Department of Oncology, Aarhus, Denmark

Purpose or Objective
With deep learning, tumour (GTV) auto-segmentation has improved substantially, but considerable manual correction is still needed. With interactive deep learning (iDL), manual corrections can be used to update a deep-learning model during delineation, minimising the input needed to achieve acceptable segmentations. We developed an iDL tool for GTV segmentation that takes annotated slices as input and simulated its performance on a head and neck cancer (HNC) dataset. We aimed to achieve clinically acceptable segmentations within five annotated slices.

Materials and Methods
Multi-modal imaging data of 204 HNC patients with clinical tumour (GTVt) and lymph node (GTVn) delineations were randomly split into training (n=107), validation (n=22), test (n=24) and independent test (n=51) sets. We used 2D UNet++ as our convolutional neural network (CNN) architecture. First, a baseline CNN was trained using the training and validation sets. Subsequently, we simulated oncologist annotations on the test set by replacing the predicted tumour contour on selected slices with the ground-truth contour. These simulations were used to optimise the iDL hyperparameters and to systematically assess how the selection of slices affected segmentation accuracy. For each simulated patient, we started from the baseline CNN, meaning that iDL was used only for patient-specific optimisation. Subsequently, iDL performance was evaluated with simulations on the independent test set using the optimised hyperparameters and slice selection strategy. Finally, one radiation oncologist performed real-time iDL-supported GTVt segmentation on three cases. For evaluation, the Dice similarity coefficient (DSC), mean surface distance (MSD) and 95% Hausdorff distance (HD95%) were assessed at baseline and after every iDL update. iDL re-training time was also assessed.

Results
Median baseline segmentation accuracy on the independent test set was DSC=0.65, MSD=4.4 mm and HD95%=27.3 mm (Fig. 1). The best-performing slice selection strategy was first to annotate three slices, equidistantly distributed over the craniocaudal axis of the baseline prediction, followed by two iDL rounds with one slice (the one with the largest target area) annotated in each round. With this strategy, segmentation accuracy improved to DSC=0.82, MSD=1.4 mm and HD95%=8.9 mm after only five annotated slices (Fig. 1). iDL re-training took 30 seconds per update. An example case is depicted in Fig. 2.
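As an illustration of the reported evaluation metrics, the following is a minimal sketch of how DSC, MSD and HD95% could be computed from two binary masks with NumPy and SciPy. It is not the authors' evaluation code; the function names, the voxel-spacing argument and the symmetric pooling of surface distances for HD95% are assumptions made for the example.

import numpy as np
from scipy import ndimage

def surface_distances(mask_a, mask_b, spacing):
    """Distances (in mm) from the surface voxels of mask_a to the surface of mask_b."""
    surf_a = mask_a & ~ndimage.binary_erosion(mask_a)   # surface = mask minus its erosion
    surf_b = mask_b & ~ndimage.binary_erosion(mask_b)
    dist_to_b = ndimage.distance_transform_edt(~surf_b, sampling=spacing)
    return dist_to_b[surf_a]

def dsc(mask_a, mask_b):
    """Dice similarity coefficient of two boolean masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())

def msd_hd95(mask_a, mask_b, spacing=(1.0, 1.0, 1.0)):
    """Mean surface distance and 95th-percentile Hausdorff distance,
    pooling both directions (one common convention among several)."""
    d = np.concatenate([surface_distances(mask_a, mask_b, spacing),
                        surface_distances(mask_b, mask_a, spacing)])
    return d.mean(), np.percentile(d, 95)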
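The slice-selection and simulated-annotation steps described in Materials and Methods could be sketched as below. The array layout (craniocaudal slice axis first), the interior-fraction placement of the equidistant slices and the helper names are assumptions, not the authors' implementation.

import numpy as np

def equidistant_slices(pred_mask, n=3):
    """Pick n slices spread evenly over the craniocaudal extent of the prediction."""
    slice_idx = np.where(pred_mask.any(axis=(1, 2)))[0]
    lo, hi = slice_idx.min(), slice_idx.max()
    fractions = np.arange(1, n + 1) / (n + 1)            # e.g. 1/4, 2/4, 3/4 for n=3
    return np.unique((lo + fractions * (hi - lo)).round().astype(int))

def largest_area_slice(pred_mask, already_annotated):
    """Pick the not-yet-annotated slice with the largest predicted target area."""
    areas = pred_mask.sum(axis=(1, 2)).astype(float)
    areas[list(already_annotated)] = -1                   # exclude slices already replaced
    return int(areas.argmax())

def simulate_annotation(pred_mask, gt_mask, slices):
    """Simulate oncologist input: replace the predicted contour on the chosen
    slices with the ground-truth contour."""
    corrected = pred_mask.copy()
    corrected[slices] = gt_mask[slices]
    return corrected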
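Putting the pieces together, a hypothetical patient-specific iDL simulation loop along the lines reported in Results might look as follows. The predict and fine_tune callables are placeholders for the 2D UNet++ inference and re-training code, which the abstract does not detail, and the helpers are reused from the previous sketch.

import copy

def interactive_segmentation(baseline_model, images, gt_mask,
                             predict, fine_tune, n_initial=3, n_rounds=2):
    """Start from the baseline CNN, annotate three equidistant slices, then run
    two further rounds annotating the largest-target-area slice each time.
    predict(model, images) and fine_tune(model, images, mask, slices) are
    placeholders for the actual network code."""
    model = copy.deepcopy(baseline_model)                 # keep the shared baseline untouched
    pred = predict(model, images)                         # baseline prediction
    annotated = set(equidistant_slices(pred, n=n_initial).tolist())
    corrected = simulate_annotation(pred, gt_mask, sorted(annotated))
    model = fine_tune(model, images, corrected, sorted(annotated))   # ~30 s per update
    pred = predict(model, images)
    for _ in range(n_rounds):                             # two single-slice iDL rounds
        annotated.add(largest_area_slice(pred, annotated))
        corrected = simulate_annotation(pred, gt_mask, sorted(annotated))
        model = fine_tune(model, images, corrected, sorted(annotated))
        pred = predict(model, images)
    return pred                                           # evaluated with DSC, MSD, HD95%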
