

Fig. 1. CBCT axial slices of the Lucy phantom: a) distortion grid; b) irregular air volumes.

The same phantom was also imaged using an insert containing irregular air cavities of known volume (Fig. 1b) to quantify geometric accuracy. These images were analyzed using the automatic segmentation tool of the RayStation (RaySearch Laboratories) TPS. Images of the centered CATPHAN 600 phantom were also acquired and analyzed with the AutoQALite (QA Benchmark) software to assess basic image quality parameters.

Results
For the CBCT, the distances observed between the four radiopaque markers differed by less than 0.1 mm from their real values. The grid analysis did not show appreciable distortion either: the mean value (µ) and standard deviation (σ) of the grid size were 0.34 ± 0.24 mm and 0.36 ± 0.36 mm in the horizontal and vertical directions, respectively. It should be noted that the image in the upper area of the grid was diffuse, probably due to the off-axis location of the phantom and the conical geometry of the beam. The volumes obtained by automatic segmentation differed by 1.6% to 5.6% from the nominal values provided by the manufacturer. The results of the comparison between the angiography CBCT and the CT simulator, based on the analysis of the CATPHAN phantom images, are shown in Fig. 2.

Fig. 2. Comparison of CBCT and CT-simulator results with the CATPHAN phantom.

The limitation imposed by the CBCT pixel size (0.98 mm) prevented meeting the usual tolerance for head CT examinations of 6 lp/cm at 2% MTF, since a 0.98 mm pixel corresponds to a Nyquist limit of 1/(2 × 0.098 cm) ≈ 5.1 lp/cm. Although the low-contrast resolution of the CBCT was worse, both 3D imaging modalities met the usual tolerances: 3% and 0.8% for objects of 3.5 mm and 8 mm diameter, respectively.

Conclusion
The quality assurance results obtained for the angiography CBCT were acceptable, although its spatial resolution is at the limit of what is recommended for delineating volumes for stereotactic treatments.

PO-1655 Tuning deep learning models for automatic segmentation of head and neck cancers in PET/CT images
B.N. Huynh¹, A.R. Groendahl¹, Y.M. Moe², O. Tomic¹, E. Dale³, E. Malinen⁴·⁵, C.M. Futsaether¹
¹Norwegian University of Life Sciences, Faculty of Science and Technology, Ås, Norway; ²University of Oslo, Department of Mathematics, Oslo, Norway; ³Oslo University Hospital, Department of Oncology, Oslo, Norway; ⁴University of Oslo, Department of Physics, Oslo, Norway; ⁵Oslo University Hospital, Department of Medical Physics, Oslo, Norway

Purpose or Objective
This study aims to improve automatic segmentation of head and neck (HN) tumors and involved lymph nodes in PET/CT images by tuning 2D and 3D U-Net convolutional neural networks.

Materials and Methods
The impact of 2D U-Net architecture complexity on automatic tumor segmentation was assessed using different numbers of filters (32, 48, 64) in the first layer and three or four down/up-sampling levels (depth; 15 and 19 convolutional layers in total, respectively). As tumors are 3D structures, the 2D models were compared to a 3D U-Net of depth four with 32 filters in the first layer. All models used the Dice loss function and the Adam optimizer with a learning rate of 10⁻⁴. Input data were baseline 18F-FDG PET/CT scans from 197 HN cancer patients. The data were split into training (142 patients), validation (15) and test (40) sets stratified by TNM tumor stage. Manual gross tumor
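The following is a minimal sketch, not the authors' implementation, of how the 2D U-Net variants described above could be configured, assuming TensorFlow/Keras as the framework: the number of first-layer filters (32, 48 or 64) and the number of down/up-sampling levels (3 or 4) are parameters, and the model is compiled with a soft Dice loss and the Adam optimizer at a learning rate of 10⁻⁴. The input shape, the two-channel PET/CT layout, and all helper names are illustrative assumptions.

```python
# Sketch (assumed TensorFlow/Keras, not the authors' code): configurable 2D U-Net
# with `base_filters` in {32, 48, 64} and `depth` in {3, 4} down/up-sampling levels,
# trained with a soft Dice loss and Adam at learning rate 1e-4.
import tensorflow as tf
from tensorflow.keras import layers, Model


def conv_block(x, filters):
    """Two 3x3 convolutions with ReLU activation."""
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x


def build_unet_2d(input_shape=(256, 256, 2), base_filters=32, depth=4):
    """2D U-Net; input has two channels (assumed PET and CT)."""
    inputs = layers.Input(input_shape)
    x, skips = inputs, []

    # Encoder: convolution blocks followed by 2x2 max pooling
    for level in range(depth):
        x = conv_block(x, base_filters * 2 ** level)
        skips.append(x)
        x = layers.MaxPooling2D(2)(x)

    # Bottleneck
    x = conv_block(x, base_filters * 2 ** depth)

    # Decoder: transposed convolutions with skip connections
    for level in reversed(range(depth)):
        x = layers.Conv2DTranspose(base_filters * 2 ** level, 2,
                                   strides=2, padding="same")(x)
        x = layers.concatenate([x, skips[level]])
        x = conv_block(x, base_filters * 2 ** level)

    outputs = layers.Conv2D(1, 1, activation="sigmoid")(x)  # tumor probability map
    return Model(inputs, outputs)


def dice_loss(y_true, y_pred, smooth=1.0):
    """Soft Dice loss computed on the flattened masks."""
    y_true = tf.reshape(y_true, [-1])
    y_pred = tf.reshape(y_pred, [-1])
    intersection = tf.reduce_sum(y_true * y_pred)
    dice = (2.0 * intersection + smooth) / (
        tf.reduce_sum(y_true) + tf.reduce_sum(y_pred) + smooth)
    return 1.0 - dice


# Example configuration corresponding to one tested variant (depth 4, 32 filters)
model = build_unet_2d(base_filters=32, depth=4)
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4), loss=dice_loss)
```

With two convolutions per encoder and decoder block plus a bottleneck block and a final 1x1 output convolution, this sketch yields 15 and 19 convolutional layers for depths three and four, consistent with the counts stated in the abstract.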
