ESTRO 38 Abstract book
work was to investigate the feasibility of applying a pre-trained CNN to a set of T2 MRI images with the intent of identifying areas of disease in the prostate, using texture analysis as a baseline.
Material and Methods
T2 MRI studies of 16 patients with localised prostate cancer were studied. The MRI images were rigidly registered to the planning CT, and the OARs, prostate and focal lesion were contoured by a consultant oncologist. Using Matlab, three workflows were investigated. Firstly, 32 texture features were calculated to characterise the image properties of healthy and diseased tissues and used to train four different machine learning algorithms. Secondly, AlexNet, a CNN trained on more than 1 million images spanning 1,000 classes, was used as a feature extractor for later classification. Lastly, AlexNet was adapted for use on MRI images using transfer learning. Each workflow was initially developed on Brodatz images, which contain strong textural features, and was then applied to a set of 40 prostate MRI images published as part of a MICCAI grand challenge (Figure 1) to test performance on medical images. The models were assessed in terms of accuracy, sensitivity, specificity and AUC (Table 1).
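The abstract does not list the 32 texture features used. A common choice for MRI texture analysis is features derived from the gray-level co-occurrence matrix (GLCM); the original pipeline was implemented in Matlab, so the following is only an illustrative Python sketch of GLCM-style feature extraction on a sub image (the feature set, grey-level quantisation and offset here are assumptions, not the authors' exact method):

```python
import numpy as np

def glcm_features(img, levels=8, dx=1, dy=0):
    """Build a gray-level co-occurrence matrix for one pixel offset and
    derive four classic Haralick-style texture features."""
    # Quantise the image into a small number of grey levels.
    q = np.floor(img.astype(float) / (img.max() + 1e-9) * levels).astype(int)
    q = np.clip(q, 0, levels - 1)
    glcm = np.zeros((levels, levels))
    h, w = q.shape
    for i in range(h - dy):
        for j in range(w - dx):
            glcm[q[i, j], q[i + dy, j + dx]] += 1
    glcm /= glcm.sum()  # normalise to a joint probability distribution
    i_idx, j_idx = np.indices((levels, levels))
    contrast = np.sum(glcm * (i_idx - j_idx) ** 2)
    homogeneity = np.sum(glcm / (1.0 + np.abs(i_idx - j_idx)))
    energy = np.sum(glcm ** 2)
    p = glcm[glcm > 0]
    entropy = -np.sum(p * np.log2(p))
    return np.array([contrast, homogeneity, energy, entropy])

# Synthetic 2D "sub images": a smooth gradient vs. random noise
rng = np.random.default_rng(0)
smooth = np.outer(np.arange(32), np.ones(32))   # horizontally constant
noisy = rng.integers(0, 255, size=(32, 32))     # high-contrast noise
f_smooth = glcm_features(smooth)
f_noisy = glcm_features(noisy)
```

With a horizontal offset, the gradient image has zero contrast (every pixel equals its right-hand neighbour) while the noise image has high contrast and low energy, which is exactly the kind of separation the classifiers exploit.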
EP-1933 A deep learning approach for identifying focal prostate cancer from multi-parametric MRI
M. Rooney 1, A. Killean 2, J. Mitchell 2, D.B. McLaren 2, W.H. Nailon 1,3
1 NHS Lothian, Department of Oncology Physics, Edinburgh Cancer Centre, Edinburgh, United Kingdom; 2 NHS Lothian, Department of Clinical Oncology, Edinburgh Cancer Centre, Edinburgh, United Kingdom; 3 The University of Edinburgh, School of Engineering, Edinburgh, United Kingdom
Purpose or Objective
There are no technical barriers to delivering radiotherapy to small focal lesions within the prostate gland; however, reliably identifying focal disease is challenging. Multi-parametric magnetic resonance imaging (mp-MRI) has significant potential for this task and, because of its improved image resolution, may be combined with machine learning techniques to assist with tumour delineation. The aim of this work was to combine information from T2-weighted, apparent diffusion coefficient (ADC) and diffusion-weighted (DW) MRI to train machine learning models to automatically identify focal disease within the prostate.
Material and Methods
Two datasets of previously treated patients with localised prostate cancer were utilised. The first included 16 patients with diagnostic T2 MRI; the second included 12 patients with diagnostic T2 and ADC studies. The planning CT, T2 and ADC images, where available, were rigidly registered using a Varian Eclipse workstation. An experienced clinician contoured the prostate and focal lesion on both images (Figure 1). Matlab was used to process the images: sub images were extracted from each image, and 32 texture features were calculated for every sub image. These features were used to train SVM, k-NN, decision tree and AdaBoost classification algorithms. In addition, AlexNet, a pre-trained convolutional neural network, was fine-tuned to classify each sub image as healthy or diseased tissue.
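The classifier-training step above can be sketched as follows. The study used Matlab; this is an illustrative Python/scikit-learn equivalent on synthetic stand-in data (the real inputs were 32 texture features per sub image), showing the same four algorithm families named in the abstract:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

# Hypothetical stand-in data: rows are 32-dimensional texture-feature
# vectors from sub images; labels are 0 = healthy, 1 = diseased tissue.
rng = np.random.default_rng(42)
X_healthy = rng.normal(0.0, 1.0, size=(100, 32))
X_disease = rng.normal(1.5, 1.0, size=(100, 32))
X = np.vstack([X_healthy, X_disease])
y = np.array([0] * 100 + [1] * 100)
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

# The four classifier families named in the abstract.
models = {
    "SVM": SVC(kernel="rbf"),
    "k-NN": KNeighborsClassifier(n_neighbors=5),
    "Decision tree": DecisionTreeClassifier(max_depth=5),
    "AdaBoost": AdaBoostClassifier(n_estimators=50),
}
accuracy = {name: m.fit(X_tr, y_tr).score(X_te, y_te)
            for name, m in models.items()}
```

On well-separated synthetic features all four models score highly; on real MRI sub images the relative ranking is what Table 1 reports.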
The performance of each model was assessed in terms of sensitivity, specificity and AUC (Table 1).
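These three metrics can be computed from a confusion matrix and the models' continuous scores. A minimal Python sketch with hypothetical per-sub-image predictions (the numbers below are illustrative, not results from the study):

```python
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

# Hypothetical ground truth (1 = diseased) and model scores.
y_true  = np.array([0, 0, 0, 0, 0, 0, 1, 1, 1, 1])
y_score = np.array([0.1, 0.3, 0.2, 0.4, 0.35, 0.6, 0.55, 0.8, 0.7, 0.9])
y_pred  = (y_score >= 0.5).astype(int)  # threshold the scores

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)   # true-positive rate: diseased tissue found
specificity = tn / (tn + fp)   # true-negative rate: healthy tissue cleared
auc = roc_auc_score(y_true, y_score)  # threshold-independent summary
```

Sensitivity and specificity depend on the chosen threshold, whereas the AUC summarises performance across all thresholds, which is why the abstract reports all three.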
Figure 1: An example of the T2 MRI images included in the MICCAI dataset with and without an endorectal coil; only cases without an endorectal coil were considered.
Results
Findings from the Brodatz images showed that high levels of classification accuracy can be obtained using both texture analysis and CNNs. Application to the MICCAI data showed that superior sensitivity can be achieved using a fine-tuned CNN rather than texture analysis. Transfer learning successfully distinguished the prostate from surrounding structures in T2 prostate MRI images, the hypothesis being that the CNN identifies generic features common to most images, such as colour, contrast and repetitive patterns. Application to the local dataset showed promise: using AlexNet, the results were on par with those of the machine learning methods, but low sensitivity was observed. This is likely due to the high level of class imbalance inherent in this type of data.
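The low sensitivity attributed to class imbalance is commonly mitigated by re-weighting the minority (diseased) class during training. A hedged Python sketch on synthetic data (the class ratio, model and weighting scheme are illustrative assumptions, not the abstract's method):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import recall_score

# Synthetic imbalanced data: far more "healthy" (0) than "diseased" (1)
# sub images, mimicking the imbalance described in the abstract.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (300, 8)),
               rng.normal(1.0, 1.0, (20, 8))])
y = np.array([0] * 300 + [1] * 20)

# "balanced" class weights penalise errors on the rare class more
# heavily, which typically raises sensitivity at some cost in specificity.
plain = SVC().fit(X, y)
weighted = SVC(class_weight="balanced").fit(X, y)

# Sensitivity = recall on the diseased class (evaluated on the
# training set here purely for brevity).
sens_plain = recall_score(y, plain.predict(X))
sens_weighted = recall_score(y, weighted.predict(X))
```

Whether re-weighting (or resampling) recovers the CNN's sensitivity on this data type would need to be tested on the local dataset itself.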
Figure 1: An example of the clinically defined area of disease on both ADC (left) and T2 (right) MRI scans used in this study.
Results
The results demonstrate that multimodality imaging data, in the form of T2 MRI and ADC images, can be successfully combined to build models for the identification of focal disease within the prostate. This was achieved by creating co-trained classification models using texture and CNN image features derived from the MRI sequences of a cohort of 12 patients. Overall, texture features yielded more sensitive results than the fine-tuned CNN. This novel approach achieved very high classification performance when tested on T2 images, with a maximum AUC of 0.935 compared with the highest AUC of 0.663 found using single-sequence MRI studies.
Table 1: Results of each method on all datasets
Conclusion
Overall, the AUC indicates that the performance of the CNN is on par with that of the machine learning methods. These data strengthen the claim that pre-trained CNNs are suitable for identifying prostate cancer on MRI images.