ESTRO 2022 - Abstract Book


Twenty of the 25 most important ADA features involved changes in radiomic feature values within clinical target volumes. Twenty-three of the 25 features were derived from wavelet-filtered or edge-enhanced ADC images.

Conclusion
Gradient-boosting-based ML algorithms showed encouraging results in differentiating healthy brain tissue from metastases using diffusion images acquired before lesion detection on conventional T1 MRI. ADA- and XGB-based models outperformed RF- and SVM-based models in detecting malignant tissue from changes in diffusion-weighted imaging radiomic features. Future work will use these findings to refine the model by adding more patients and test cases, improving classification accuracy, and increasing its overall clinical applicability.

PO-1763 Radiomics and Deep Learning for the 2-Year Overall Survival Prediction in Lung Cancer

A. Braghetto 1, A. Bettinelli 2, F. Marturano 2, M. Paiusco 2, M. Baiesi 1

1 University of Padua, Department of Physics and Astronomy "Galileo Galilei", Padua, Italy; 2 Veneto Institute of Oncology - IOV IRCCS, Medical Physics Department, Padua, Italy

Purpose or Objective
To exhaustively test and compare the performance of several models based either on radiomic or on deep learning approaches for the prediction of the 2-year overall survival (OS) in patients with non-small cell lung cancer (NSCLC).

Materials and Methods
The chest CT examinations of the 417 NSCLC patients included in the public LUNG1 dataset were used in the study. For the radiomic approach, handcrafted features, extracted from the whole 3D tumour volume, were fed to 24 different pipelines formed by the combination of 4 feature selectors/reducers (i.e., ANOVA f-value, random forest, principal component analysis and feature agglomeration) and 6 classifiers (i.e., support vector machines, feed-forward neural networks, nearest neighbours, bagging, random forest and extreme gradient boosting) to predict the 2-year OS. For deep learning, both the deep features extracted from the 2D tumour slices by a convolutional auto-encoder and the inner features learnt by a pre-trained convolutional neural network (CNN) were fed to the same 24 pipelines. In addition, the direct classification of the images with 3 different CNN architectures was tested, considering both the original CT images and synthetic images generated with a common data augmentation technique. The classification workflow and the total number of pipelines implemented for the three approaches are depicted in Figure 1. Finally, for each pipeline and approach, the performance with and without the inclusion of clinical parameters within the feature set was evaluated in a cross-validation scheme.
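The selector/classifier grid described above can be sketched with scikit-learn pipelines. This is not the authors' code: the synthetic feature matrix, hyperparameters, and component names are illustrative assumptions, and extreme gradient boosting (XGBoost) is replaced here by scikit-learn's GradientBoostingClassifier to keep the sketch dependency-light.

```python
# A minimal sketch of the 4 selectors/reducers x 6 classifiers = 24 pipelines,
# assuming scikit-learn components in place of the authors' implementations.
from sklearn.pipeline import Pipeline
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.feature_selection import SelectKBest, f_classif, SelectFromModel
from sklearn.decomposition import PCA
from sklearn.cluster import FeatureAgglomeration
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import (BaggingClassifier, RandomForestClassifier,
                              GradientBoostingClassifier)

# Stand-in for the handcrafted radiomic features (patients x features);
# the real study extracts these from the 3D tumour volumes of LUNG1.
X, y = make_classification(n_samples=200, n_features=50, random_state=0)

selectors = {
    "anova":  SelectKBest(f_classif, k=10),
    "rf_sel": SelectFromModel(RandomForestClassifier(n_estimators=50,
                                                     random_state=0)),
    "pca":    PCA(n_components=10),
    "agglo":  FeatureAgglomeration(n_clusters=10),
}
classifiers = {
    "svm":  SVC(),
    "ffnn": MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                          random_state=0),
    "knn":  KNeighborsClassifier(),
    "bag":  BaggingClassifier(random_state=0),
    "rf":   RandomForestClassifier(n_estimators=50, random_state=0),
    "gb":   GradientBoostingClassifier(random_state=0),  # XGBoost surrogate
}

# 4 x 6 = 24 pipelines, mirroring the grid the abstract describes.
pipelines = {f"{s}+{c}": Pipeline([("sel", sel), ("clf", clf)])
             for s, sel in selectors.items()
             for c, clf in classifiers.items()}

# Evaluate one pipeline in cross-validation (the study scores all 24).
score = cross_val_score(pipelines["anova+gb"], X, y, cv=3).mean()
print(f"{len(pipelines)} pipelines; anova+gb CV accuracy = {score:.2f}")
```

The same 24 pipelines can then be re-fed with the auto-encoder or CNN deep features simply by swapping the input matrix `X`, which is what makes the grid comparison across the three approaches straightforward.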
