ESTRO 2020 Abstract book


a heterogeneous dataset. By combining the different planes, an uncertainty map was estimated, which enabled assessing the sCT quality.

PO-1700 Efficacy evaluation of 2-D and 3-D U-Net semantic segmentation of normal lungs
M.H.F. Savenije 1,2, M. Maspero 1,2, G.G. Sikkes 1, J.R.N. Van der Voort van Zyp 1, A.N.T.J. Kotte 1, G.H. Bol 1, C.A.T. Van den Berg 1,2
1 UMC Utrecht, Radiation Oncology, Utrecht, The Netherlands; 2 UMC Utrecht, Computational imaging group for MR diagnostic & therapy, Utrecht, The Netherlands

Purpose or Objective
Structure delineation is a necessary yet time-consuming manual procedure in radiotherapy. Recently, convolutional neural networks have been proposed to speed up and automate this procedure, obtaining promising results. With the advent of MR guidance and the increasing spread of MR-only radiotherapy, MR-based segmentation is becoming relevant. However, the majority of studies have used CT for automatic contouring of organs-at-risk (OARs) (Cardenas et al 2019 Sem Radiat Oncol). In this study, we investigate the feasibility of deep learning-based automatic OAR delineation on MRI. The preparation for clinical implementation, along with the performance of the implemented approach, is presented.

Material and Methods
Patients (pts) diagnosed with prostate cancer and undergoing MR-only radiotherapy (Kerkmeijer et al 2018 Radiat Oncol) were included in the study, for a total of 128 pts treated in the period 2018-06 to 2019-10. A 3D T1-weighted dual gradient-echo sequence was acquired at 3T (Ingenia MR-RT, Philips Healthcare) in radiotherapy position with a large FOV, a resolution of 1x1x2.5 mm3 and Dixon reconstruction, obtaining in-phase, water and fat images. This sequence was used for synthetic CT generation.
The first 48 pts were used in a feasibility study training two state-of-the-art 3D convolutional networks, DeepMedic (Kamnitsas et al 2016 Med Imag Anal) and dense V-net (dV-net; Gibson et al 2018 IEEE), to segment the bladder, rectum and femurs. Training/testing was performed on a GPU (NVIDIA) as three-fold cross-validation on 32/16 pts, respectively. A research version of commercial software based on multiple atlases and deformable registration (Admire v2.0, Elekta AB) was considered for comparison. Dice coefficients and 95% Hausdorff distances (HD) were calculated against clinical delineations. For a subset of 8 pts, an expert RTT was asked to score the quality of the delineations from all three methods (0 to 3, from clinically acceptable to unacceptable). One of the three methods was then chosen, retrained on 97 pts and implemented for automatic use in the clinical workflow; Dice and HD against clinically used delineations were reported for the subsequent 31 pts.

Results
DeepMedic, dV-net and Admire generated contours in 60 s, 2 s and 10-15 min, respectively. Performance was higher for both networks than for Admire (Fig 1). The qualitative analysis (Fig 2) demonstrated that delineations from DeepMedic required the fewest adaptations, followed by dV-net and then Admire. DeepMedic was clinically implemented in 2019-08. After retraining DeepMedic and testing on the subsequent pts, performance slightly improved thanks to the larger number of patients included in the training (Fig 2).
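For reference, the two evaluation metrics used above (Dice coefficient and 95% Hausdorff distance) can be computed from binary masks as sketched below — a minimal NumPy/SciPy illustration, not the authors' implementation; the default voxel spacing is an assumption taken from the stated acquisition resolution (1x1x2.5 mm3, slice axis first).

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice(a, b):
    """Dice coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hd95(a, b, spacing=(2.5, 1.0, 1.0)):
    """95th-percentile symmetric Hausdorff distance in mm."""
    a, b = a.astype(bool), b.astype(bool)
    # Surface voxels: the mask minus its morphological erosion.
    surf_a = a & ~binary_erosion(a)
    surf_b = b & ~binary_erosion(b)
    # Distance (in mm) from every voxel to the nearest surface voxel
    # of the other mask, accounting for anisotropic voxel spacing.
    dist_to_b = distance_transform_edt(~surf_b, sampling=spacing)
    dist_to_a = distance_transform_edt(~surf_a, sampling=spacing)
    d_ab = dist_to_b[surf_a]  # surface-of-A to surface-of-B distances
    d_ba = dist_to_a[surf_b]
    return np.percentile(np.concatenate([d_ab, d_ba]), 95)
```

Identical masks give a Dice of 1 and an HD95 of 0 mm; any surface deviation increases HD95 while Dice decreases with the overlap loss.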

Conclusion
High conformality for OAR delineation was achieved with two in-house trained networks, obtaining a significant speed-up of the delineation procedure. One of the networks, DeepMedic, was successfully adopted in the clinical workflow, maintaining in the clinical setting the accuracy obtained in the feasibility study.

PO-1700 Efficacy evaluation of 2-D and 3-D U-Net semantic segmentation of normal lungs
T. Nemoto 1, F. Natsumi 2, Y. Masamichi 3, K. Atsuhiro 1, T. Atsuya 4, K. Etsuo 2, S. Naoyuki 1
1 Keio University School of Medicine, Department of Radiology, Tokyo, Japan; 2 Tokai University School of Medicine, Department of Radiation Oncology, Kanagawa, Japan; 3 Fujitsu Limited, System Platform Solution Unit, Tokyo, Japan; 4 Ofuna Chuo Hospital, Radiation Oncology Center, Kanagawa, Japan

Purpose or Objective
Several studies have focused on semantic segmentation of lung tissues on CT images using 2-D or 3-D U-Net, deep learning networks for semantic segmentation. However, to our knowledge, there are no reports on the differences between U-Net and existing auto-segmentation tools using the same dataset. Furthermore, the 2-D and 3-D U-Net approaches have not been compared under similar conditions on the same dataset. We therefore attempted semantic segmentation of lung CT images using both 2-D and 3-D U-Net, then examined their efficacies in comparison with that of a commercially dominant auto-segmentation tool, Smart Segmentation® Knowledge Based Contouring (Varian Medical Systems, Palo Alto, CA, USA; hereinafter referred to as Smart segmentation). We also compared the utilities of the 2-D and 3-D U-Net with each other.

Material and Methods
We examined 232 non-small cell lung cancer (NSCLC) cases published by The Cancer Imaging Archive (TCIA), an open-access database of medical images for cancer research.
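The practical difference between the 2-D and 3-D approaches begins with how a CT volume is presented to the network. A minimal sketch of that input shaping is below — illustrative only; the array layout and function names are assumptions, not from the study:

```python
import numpy as np

def to_2d_batch(volume):
    """Split a (Z, Y, X) CT volume into Z axial slices with a channel
    axis, i.e. a (Z, 1, Y, X) batch for a 2-D U-Net: each slice is an
    independent training sample with no inter-slice context."""
    return volume[:, None, :, :]

def to_3d_batch(volume):
    """Wrap the whole volume as a single (1, 1, Z, Y, X) sample for a
    3-D U-Net, preserving inter-slice anatomy at a higher memory cost."""
    return volume[None, None, ...]
```

The 3-D form keeps through-slice context visible to the convolutions, which is the core trade-off against memory and training cost that a 2-D versus 3-D comparison probes.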
Ground truth for the lungs was obtained by contouring according to RTOG 1106, which recommends that the gross tumor volume, the hilar portions of the lungs and the trachea/main bronchi not be included in the lung contours.
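On binary masks, this exclusion rule amounts to simple boolean arithmetic — a sketch only; the mask names are illustrative and not taken from RTOG 1106 or the abstract:

```python
import numpy as np

def lung_ground_truth(lungs, gtv, hilar, airways):
    """Compose an RTOG 1106-style lung ground truth: the lung mask with
    the gross tumor volume, hilar regions and trachea/main bronchi
    removed. All arguments are boolean arrays of the same shape."""
    return lungs & ~(gtv | hilar | airways)
```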
