ICHNO-ECHNO 2022 - Abstract Book
OC-0026 Automatic delineation in radiation treatment of head and neck cancer using multimodal information
H. Bollen 1 , S. Nuyts 1 , F. Maes 2 , S. Willems 2
1 University Hospitals Leuven, Laboratory of Experimental Radiotherapy, Leuven, Belgium; 2 KU Leuven, Processing Speech and Images (PSI), Leuven, Belgium
Purpose or Objective
Accurate radiotherapy treatment (RT) of head and neck cancer (HNC) requires precise delineation of target volumes (TVs). Delineation is performed manually using several image modalities, such as CT and PET. Since delineation is highly dependent on experience and perception, there is growing interest in automating the delineation process. However, the literature on automated delineation in HNC is limited and reports only the performance of unimodal networks. The goal of this research was to create a 3D convolutional neural network (CNN) that uses information from multiple modalities to improve segmentation performance compared to unimodal approaches.
Materials and Methods
The dataset consists of 70 patients with oropharyngeal cancer. For each patient, a planning CT image (pCT) and PET imaging were available, as well as manual delineations of the primary (GTVp) and nodal (GTVn) gross tumor volumes, acquired by two radiation oncologists. The PET image was rigidly registered to the pCT using Eclipse (Varian Medical Systems, Palo Alto, CA). TVs varied strongly in size, with a volume range of 3 ml–120 ml. A 3D CNN was developed with two separate input pathways, one per modality, so that each pathway can focus on learning patterns specific to that modality. At certain points in the model, a connecting layer transfers information between the two pathways. At the end of the model, the pathways are concatenated, and a final classifier layer uses the combined information to predict the segmentation label. The performance of this approach was compared to unimodal approaches (a pCT model and a PET model) using the Dice similarity coefficient (DSC), the mean surface distance (MSD) and the 95% Hausdorff distance (HD95).
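The primary evaluation metric, the Dice similarity coefficient, measures volumetric overlap between a predicted and a reference segmentation. A minimal sketch of its computation on binary voxel masks is shown below; the function name and example masks are illustrative and not taken from the authors' code.

```python
import numpy as np

def dice_similarity(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks (1.0 = perfect overlap)."""
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    denom = pred.sum() + ref.sum()
    if denom == 0:
        return 1.0  # both masks empty: treated here as perfect agreement
    return 2.0 * intersection / denom

# Illustrative example: two 8-voxel 3D masks sharing 4 voxels
a = np.zeros((4, 4, 4), dtype=bool); a[1:3, 1:3, 1:3] = True
b = np.zeros((4, 4, 4), dtype=bool); b[1:3, 1:3, 0:2] = True
print(dice_similarity(a, b))  # → 0.5
```

Surface-based metrics such as MSD and HD95 complement the DSC by penalizing boundary errors that volumetric overlap alone does not capture.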
Results
The multimodal approach performs best on all metrics for both the GTVp and the GTVn, as shown in Table 1. For the GTVp, the DSC improves from 48.0% (pCT model) and 48.9% (PET model) to 59.1% (pCT+PET model), while the GTVn reaches an average DSC of 62.8%. Furthermore, adding the PET information reduces small false-positive spots in the delineation result compared to the pCT and PET models. An example of GTVp and GTVn delineation for all models is shown in Figure 1.
Figure 1: Example of automatic delineation for the unimodal approaches (pCT model in blue, PET model in yellow) and the multimodal approach (pCT + PET in green)
Table 1: 5-fold cross-validation results for the pCT model, the PET model and the multimodal approach
Conclusion
Adding functional PET information improves the overall segmentation result compared to a unimodal network based on pCT input alone. Automated segmentation in HNC opens the possibility of implementing more advanced RT techniques, e.g. adaptive RT and proton therapy. However, the performance of existing unimodal networks has been insufficient for clinical implementation. Multimodal networks could provide a solution for automated delineation of TVs in HNC.
OC-0027 Consistency in high dose CTV generation using geometric margins for radiotherapy in HNSCC patients
R. Zukauskaite 1,4 , J. Grau Eriksen 2,3 , E. Andersen 5 , J. Johansen 1 , E. Samsøe 5,6 , S. Long Krogh 7 , J. Overgaard 8 , C. Grau 9 , C. Rønn Hansen 9,10,7 1 Odense University Hospital, Department of Oncology, Odense, Denmark; 2 Aarhus University Hospital, Department of Oncology, Aarhus, Denmark; 3 Aarhus University Hospital, Department of Experimental Clinical Oncology, Aarhus, Denmark; 4 University of Southern Denmark, Department of Clinical Research, Odense, Denmark; 5 Copenhagen University Hospital