ESTRO 38 Abstract book

…120Mixed and 140kV. The highest noise and deviations were observed in bone, where a mean difference of 52 HU appears and the spread between the maximum and minimum deviations is 129.5 HU. Analyzing all patients and tissues one by one shows that although the mean lung value at 120 kV (-763.6 HU) differs from the 120Mixed value by only 2.4 HU, the extreme values are separated by 50.7 HU and the standard deviations are 84.6 and 94.3 HU, respectively. Fat, blood, muscle and liver show very small differences and deviations between the 120 kV and 120Mixed images, always under 6 HU.

Conclusion
The calculation of 120Mixed images from 80 kV and 140 kV DECT images is accurate enough for soft tissues. Deviations increase in high-HU tissues such as bone. A fixed weighting factor appears to have limitations for calculating 120Mixed images across the whole HU range. Further studies are needed to determine the impact of this limitation on dose calculation, and to characterize the dependence of the weighting factor on density and composition.
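For context, a "mixed" DECT image of this kind is conventionally computed as a voxel-wise weighted average of the low- and high-kV volumes. The minimal Python sketch below illustrates that blend; the function name and the weighting factor w = 0.3 are illustrative assumptions, as the abstract does not state the factor used.

import numpy as np

def mixed_kv_image(img_80kv: np.ndarray, img_140kv: np.ndarray,
                   w: float = 0.3) -> np.ndarray:
    """Voxel-wise weighted blend of the 80 kV and 140 kV volumes (in HU).

    A fixed factor w is used to mimic a 120 kV acquisition:
        mixed = w * I_80kV + (1 - w) * I_140kV
    w = 0.3 is illustrative only; vendors use scanner-specific values.
    """
    return w * img_80kv + (1.0 - w) * img_140kv

# Illustrative comparison against a true 120 kV scan within a tissue ROI:
# mean_diff = (mixed_kv_image(i80, i140) - i120)[roi_mask].mean()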

EP-2042 A clinically applicable deep learning model for segmentation in the prostate region
M. Nakamura 1, H. Enno 2, T. Kabasawa 2, Y. Shido 2, Y. Okunishi 2, K. Muguruma 2, H. Hirashima 3, T. Mizowaki 3
1 Kyoto University, Department of Information Technology and Medical Engineering, Kyoto, Japan; 2 Rist Inc., Department of Research and Development, Tokyo, Japan; 3 Kyoto University, Department of Radiation Oncology and Image-Applied Therapy, Kyoto, Japan

Purpose or Objective
The purpose of this study was to evaluate the accuracy of a deep learning segmentation model using the FusionNet architecture to delineate the prostate, seminal vesicles, rectum, and bladder in pelvic CT images.

Material and Methods
The clinical data used in this study were randomly chosen from 469 prostate cancer patients who underwent IMRT or VMAT in the prone position between July 2007 and October 2016. Regions of interest (ROIs), including the prostate, seminal vesicles, rectum, and bladder, were manually drawn by radiation oncologists and medical physicists. All CT images were acquired with a 512×512 matrix and 2.5-mm slice thickness (voxel size, 1.07 mm × 1.07 mm × 2.5 mm). A total of 14,301 CT images and their corresponding structure images were randomly assigned to training (60%), validation (20%) or test (20%) sets. A deep neural network, FusionNet, was implemented as the segmentation model. The CT image training set was augmented by applying rotation and shear to the original images (see the sketch below). The model was then trained on the CT images with the corresponding ROI label data, using the Adam optimizer (adaptive moment estimation, learning rate = 2e-5) to update the network weights. The trained model was evaluated with the test data set to segment the final ROIs slice by slice. The segmentation was conducted on a workstation with an Intel(R) Xeon(R) E5-2686 v4 CPU (2.30 GHz), an NVIDIA Tesla V100 GPU accelerator (64 GB GPU memory), and 244 GB main memory.
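The abstract specifies rotation and shear augmentation but not the parameter ranges or implementation. A minimal Python sketch of such paired image/label augmentation, with illustrative ranges, might look as follows.

import numpy as np
from scipy import ndimage

def augment_slice(image: np.ndarray, label: np.ndarray, rng: np.random.Generator):
    """Apply one random rotation + shear to a CT slice and its ROI label map.

    Parameter ranges are illustrative assumptions; the abstract states only
    that rotation and shear were used for augmentation.
    """
    angle = rng.uniform(-10.0, 10.0)   # rotation in degrees
    shear = rng.uniform(-0.1, 0.1)     # shear factor
    shear_mat = np.array([[1.0, shear], [0.0, 1.0]])

    def transform(arr: np.ndarray, order: int) -> np.ndarray:
        out = ndimage.rotate(arr, angle, reshape=False, order=order, mode="nearest")
        return ndimage.affine_transform(out, shear_mat, order=order, mode="nearest")

    # Linear interpolation for the CT slice, nearest-neighbour for integer labels.
    return transform(image, 1), transform(label, 0)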
Results
The median (interquartile range) Dice similarity coefficient was 0.95 (0.85-1.00), 1.00 (1.00-1.00), 1.00 (0.97-1.00), and 0.94 (0.87-0.96) for the prostate, seminal vesicles, rectum, and bladder, respectively. The average computation time to complete segmentation was 0.12 s per slice.

Conclusion
Our proposed method, which employs the FusionNet architecture, was highly accurate for automated ROI segmentation.
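The Dice similarity coefficient reported above is the standard overlap measure 2|A∩B| / (|A| + |B|). A minimal sketch for binary masks follows; the per-slice evaluation loop and the empty-mask convention are assumptions, not details from the abstract.

import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient 2|A∩B| / (|A| + |B|) for binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # convention assumed here: two empty masks agree perfectly
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Illustrative per-structure evaluation over a test set:
# dsc = [dice_coefficient(predicted_mask, manual_mask)
#        for predicted_mask, manual_mask in test_pairs]
# median, iqr = np.median(dsc), np.percentile(dsc, [25, 75])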
EP-2043 Efficiency Boosting of HN Positional Verification Using Highly Accelerated 3D MR Imaging in MRgRT
Y. Zhou 1, W.W.K. Fung 2, K.F. Cheng 2, J. Yuan 1, O.L. Wong 1, G. Chiu 2, K.Y. Cheung 1, S.K. Yu 1
1 Hong Kong Sanatorium & Hospital, Medical Physics and Research Department, Happy Valley, Hong Kong SAR, China; 2 Hong Kong Sanatorium & Hospital, Department of Radiotherapy, Happy Valley, Hong Kong SAR, China

Purpose or Objective
We aim to evaluate the performance of a highly accelerated 3D MRI scan of the head and neck (HN) in MR-guided radiotherapy (MRgRT), in terms of efficiency enhancement and inter-fractional positional error measurement.

Material and Methods
Eighteen healthy volunteers, immobilized with a customized 5-point thermoplastic mask, received 183 scans on a 1.5T MR-sim to simulate MRgRT fractions. Each scan included a high-resolution (voxel size = 1.05×1.05×1.05 mm³, duration = 5 min) and a highly accelerated low-resolution (acceleration factor = 9, voxel size = 1.4×1.4×1.4 mm³, duration = 59 s) T1w spin-echo sequence (TR/TE = 420/7.2 ms) (Fig. 1). The high-resolution images of the first session were used as the reference to mimic the planning MRI. Rigid image registration in 3D Slicer was used to pairwise register the subsequent sessional high-resolution and low-resolution images to the reference; these registrations are denoted HHR and LHR, respectively. Disagreement between the inter-sessional positional shifts calculated from HHR and LHR was analyzed using Bland-Altman plots, and systematic and random errors were also compared; both analyses are sketched below.
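The two comparisons named above follow standard conventions. A minimal sketch assuming per-direction shift arrays is given below; the 1.96·SD limits of agreement and the "SD of per-subject means / RMS of per-subject SDs" definitions of systematic and random error are common conventions, not details stated in the abstract.

import numpy as np

def bland_altman(hhr: np.ndarray, lhr: np.ndarray):
    """Bias and 95% limits of agreement between paired shift measurements
    (one direction, e.g. SI translation), per Bland-Altman analysis."""
    diff = lhr - hhr
    bias = float(diff.mean())
    half_width = 1.96 * float(diff.std(ddof=1))
    return bias, (bias - half_width, bias + half_width)

def setup_errors(shifts_per_subject):
    """Population systematic error (SD of per-subject mean shifts) and
    random error (RMS of per-subject SDs), the usual setup-error definitions."""
    means = np.array([np.mean(s) for s in shifts_per_subject])
    sds = np.array([np.std(s, ddof=1) for s in shifts_per_subject])
    return float(np.std(means, ddof=1)), float(np.sqrt(np.mean(sds ** 2)))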
Results
In terms of efficiency, the accelerated MRI, despite artifacts and lower image quality, considerably reduced the scan time from 5 min to 1 min. LHR also reduced the automated registration time on a personal computer from ~45 s to ~15 s. The calculated translation shifts (mm) were 0.00±0.76 (mean±SD), 0.23±0.33, -0.23±0.69 and 0.33±1.08 in LR, AP and SI from HHR, and correspondingly -0.01±0.78, 0.26±0.31 and -0.32±0.71 from LHR (Table 1). The calculated rotation shifts (°) were -0.04±0.14, 0.00±0.00 and 0.16±0.44 in roll, pitch and yaw from HHR, and correspondingly -0.07±0.15, 0.00±0.00 and 0.13±0.43 from LHR. Bland-Altman analysis showed that the calculated shift differences between LHR and HHR were small, i.e., -0.01 (95% CI: [-0.23, 0.21]), 0.03 ([-0.14, 0.19]) and -0.09 ([-0.36, 0.18]) for the LR, AP and SI translations (mm), respectively, and -0.03 ([-0.14, 0.09]), 0.00 ([-0.003, 0.003]) and -0.02 ([-0.26, 0.21]) for roll, pitch and yaw (°). The systematic and random errors calculated from HHR and LHR were also highly consistent, showing negligible differences (Table 1).