ESTRO 2021 Abstract Book
Total dose (B = -0.609, p = 0.040) was significantly associated with pCR probability on binary logistic regression. The model's overall accuracy in predicting pathological response was 65.9% (Hosmer-Lemeshow goodness-of-fit test, p = 0.348).
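As a hedged illustration of this kind of analysis (not the authors' actual code, and with entirely synthetic data), a binary logistic regression of pCR on total dose can be fitted in Python with statsmodels; the variable names and the simulated dose-response below are assumptions:

# Minimal sketch of a dose-vs-pCR logistic regression, assuming one row
# per patient. All data are synthetic; only the modelling approach
# mirrors the abstract (note the assumed negative dose coefficient).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
dose = rng.uniform(40.0, 60.0, 80)                    # hypothetical total doses (Gy)
p_true = 1.0 / (1.0 + np.exp(-(28.0 - 0.6 * dose)))   # assumed true dose effect
pcr = rng.binomial(1, p_true)                         # 1 = pathological complete response

X = sm.add_constant(dose)
fit = sm.Logit(pcr, X).fit(disp=0)
print(fit.params, fit.pvalues)                        # slope comes out negative, as reported

# Overall predictive accuracy at a 0.5 probability cut-off
accuracy = ((fit.predict(X) >= 0.5) == pcr).mean()
print(f"accuracy = {accuracy:.1%}")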
The model-based estimates of α/β and of dose lost per day were 3.03 Gy and 0.38 Gy/day, respectively. The radiotherapeutic dose equivalence of cisplatin (delivered intensity of 35 mg/m²) was 4.04 Gy.
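To make the arithmetic behind these Gray-equivalence figures concrete, here is a minimal sketch under an assumed standard linear-quadratic formulation with an additive time factor and chemotherapy term; the fractionation schedule and the 21-day kick-off time are illustrative, and only the three constants come from the abstract:

# Sketch of linear-quadratic arithmetic using the abstract's estimates.
ALPHA_BETA = 3.03  # Gy, model-based estimate of alpha/beta
D_PROLIF = 0.38    # Gy lost per extra day of overall treatment time
D_CHEMO = 4.04     # Gy equivalent of cisplatin at 35 mg/m^2 delivered intensity

def eqd2(n, d):
    """EQD2 conversion of n fractions of d Gy, using the estimated alpha/beta."""
    return n * d * (d + ALPHA_BETA) / (2.0 + ALPHA_BETA)

def equivalent_dose(n, d, t_days, t_kickoff=21.0):
    """EQD2 minus a prolongation penalty plus the cisplatin equivalent.
    The additive structure and the kick-off time are assumptions."""
    return eqd2(n, d) - D_PROLIF * max(t_days - t_kickoff, 0.0) + D_CHEMO

# Illustrative course: 25 x 1.8 Gy over 33 days (hypothetical schedule)
print(f"{equivalent_dose(25, 1.8, 33):.1f} Gy")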
Conclusion
The estimate of α/β based on our analysis is lower than expected, though within the range of previously reported estimates (Geh JI, et al. Radiother Oncol, 78:236-44). The contribution of chemotherapy in Gray equivalence appears substantial, though the results did not approach statistical significance.
PO-1791 Scaling Down Deep Learning Architectures for Medical Datasets
G. Valdes¹, Y. Interian², O. Morin¹, W. Arbelo¹
¹UCSF, Radiation Oncology, San Francisco, USA; ²USF, Data Science, San Francisco, USA
Purpose or Objective
Commonly used convolutional neural networks (CNNs) have millions of parameters and were mainly designed for ImageNet, a task with millions of training images and 1,000 classes. Training these networks is computationally intensive, and applying them to medical physics datasets can lead to overfitting. Here we propose a methodology to significantly reduce the size of CNNs without reducing accuracy in medical physics image classification problems.
Materials and Methods
We studied two popular CNN architectures: ResNet and MobileNet. To reduce the number of parameters of these networks we investigated two methods: 1) inclusion of grouped convolutions and depth-wise separable convolution layers; 2) width and depth scaling of different layers (a minimal sketch of both methods appears after the Results below). The datasets analyzed were: MURA, abnormal and normal radiographic images from different parts of the body (N = 40,561); RSNA Intracranial Brain Hemorrhage, annotated cranial CT exams (N = 25,000); and CheXpert, chest radiographs labelled for 14 different diseases (N = 224,316). Model performance was measured using the Area Under the Curve (AUC) on the test set (20% of the images). Computational complexity of the networks was quantified using the total number of parameters (in millions) and training time. All networks were pre-trained on ImageNet. To ensure that the different architectures could be compared, they were trained under similar conditions, i.e., a fixed number of epochs and fixed learning rates.
Results
For all three datasets, pre-training on ImageNet, we obtained the following results: (1) the MobileNet architecture was reduced from 2.25M parameters to 0.25M without any reduction in AUC, and (2) the ResNet18 architecture was reduced from 11M to 0.17M without reduction in AUC. The reduced networks underperform on ImageNet but not on our problems. Further reduction decreased the accuracy of the networks on the problems studied. The roughly 100-times-lighter networks trained substantially faster and required far less memory.
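As a sketch of the two parameter-reduction methods named in the Materials and Methods (not the authors' released code), a depthwise separable convolution block and a simple width-scaling helper can be written in PyTorch as follows; the block structure, the width_mult value, and the helper names are assumptions in the spirit of MobileNet:

# Sketch of the two reduction methods described above, in PyTorch.
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise 3x3 conv (one filter per input channel, via groups=)
    followed by a 1x1 pointwise conv; far fewer parameters than a
    full 3x3 convolution with the same channel counts."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, stride=stride,
                                   padding=1, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

def scaled_width(channels, width_mult=0.25):
    """Width scaling: shrink every layer's channel count by a common
    multiplier (the 0.25 here is purely illustrative)."""
    return max(8, int(channels * width_mult))

# e.g. a stem that a full-width network would build with 64 output channels
stem = DepthwiseSeparableConv(3, scaled_width(64))
x = torch.randn(1, 3, 224, 224)
print(stem(x).shape, sum(p.numel() for p in stem.parameters()))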