ESTRO 2020 Abstract book
OC-0215 LSTM networks for proton dose calculation in highly heterogeneous tissues
A. Neishabouri 1,2,3, N. Wahl 1,3, L.N. Burigo 1,3, U. Köthe 2, M. Bangert 1,3
1 German Cancer Research Center (DKFZ), Division of Medical Physics in Radiation Oncology, Heidelberg, Germany; 2 Heidelberg University, Visual Learning Lab Heidelberg, Heidelberg, Germany; 3 HIRO, Heidelberg Institute for Radiation Oncology, Heidelberg, Germany
Purpose or Objective
Monte Carlo (MC) simulations are considered the accurate, yet time-consuming, gold standard for dose calculation in proton treatment planning. To achieve acceptable runtimes, especially for inverse planning, analytical algorithms are often used instead; they provide a significant speed-up but lack the necessary accuracy in highly heterogeneous tissues. In this study, we investigate whether state-of-the-art recurrent neural networks (RNNs) can be used to arrive at a better compromise between speed and accuracy.
Material and Methods
To generate training and test data for the RNN, we performed 4,000 MC simulations with TOPAS for a single proton pencil beam (E_init = 104 MeV) impinging from arbitrary beam orientations onto random target points within the lung of a single patient case. As illustrated in Figure 1, we parameterize dose calculation for the RNN as the problem of predicting a 3D dose distribution from the sequence of 2D CT slices encountered by the proton beam as it penetrates the geometry from left to right. Furthermore, we apply a long short-term memory (LSTM) architecture for our RNN, which is well suited to sequential data: it avoids vanishing and exploding gradients and adequately models lags between cause and effect. Model performance is tested on a held-out set of 800 samples using standard gamma comparison metrics.
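The slice-by-slice parameterization above maps naturally onto a sequence model. Below is a minimal PyTorch sketch of such a setup; the slice size, hidden size, and the flat linear encoder/decoder are illustrative assumptions, not details taken from the abstract.

```python
import torch
import torch.nn as nn

class DoseLSTM(nn.Module):
    """Predicts a sequence of 2D dose slices from the sequence of 2D CT
    slices seen by the beam along its central axis (illustrative sketch)."""

    def __init__(self, slice_hw=(32, 32), hidden_size=512):
        super().__init__()
        n = slice_hw[0] * slice_hw[1]             # pixels per slice (assumed size)
        self.encode = nn.Linear(n, hidden_size)   # per-slice embedding
        self.lstm = nn.LSTM(hidden_size, hidden_size, batch_first=True)
        self.decode = nn.Linear(hidden_size, n)   # hidden state -> dose slice

    def forward(self, ct):                        # ct: (batch, depth, H, W)
        b, d, h, w = ct.shape
        x = self.encode(ct.reshape(b, d, h * w))
        x, _ = self.lstm(x)                       # memory carries upstream material info
        return self.decode(x).reshape(b, d, h, w)

# One forward pass on a dummy pencil-beam geometry:
model = DoseLSTM()
ct = torch.rand(1, 64, 32, 32)                    # 64 CT slices along the beam axis
print(model(ct).shape)                            # torch.Size([1, 64, 32, 32])
```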
Results
Comparing the ground-truth 3D MC dose with the dose generated by the LSTM model, we achieved a mean gamma index pass-rate of 99.33 ± 0.91% with a global [0.5%, 2 mm] criterion over the entire test set of 800 samples. LSTM dose calculation took ~0.05 s per pencil beam (Nvidia GeForce GTX 970 GPU), corresponding to a speed-up of more than three orders of magnitude compared to our reference MC simulation (TOPAS running on an OpenStack architecture with 28 vCPUs). Figure 2 illustrates this comparison for the random test sample shown in Figure 1.
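For readers who want to reproduce this kind of evaluation, a gamma pass-rate under a global [0.5%, 2 mm] criterion can be computed with the open-source pymedphys package. The sketch below uses toy dose grids and is not the authors' evaluation pipeline.

```python
import numpy as np
import pymedphys

# Toy 3D dose grids on a 2 mm voxel grid, standing in for the MC
# reference and the LSTM prediction (illustrative data only).
axes = (np.arange(64) * 2.0, np.arange(32) * 2.0, np.arange(32) * 2.0)  # mm
reference = np.random.rand(64, 32, 32) * 2.0
evaluation = reference + 0.002 * np.random.rand(64, 32, 32)

gamma = pymedphys.gamma(
    axes, reference,
    axes, evaluation,
    dose_percent_threshold=0.5,    # 0.5% global dose-difference criterion
    distance_mm_threshold=2.0,     # 2 mm distance-to-agreement criterion
)
valid = ~np.isnan(gamma)           # voxels below the dose cutoff are NaN
pass_rate = 100.0 * np.mean(gamma[valid] <= 1.0)
print(f"gamma pass-rate: {pass_rate:.2f}%")
```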
In the training phase, we use one noise realization per patient for the training set, whereas we use three different noise realizations per patient in the validation set to better model the noise inherent to MC dose distributions. We train the network in 2.5D, with noisy MC dose distributions simulated using 1×10⁶ particles as input, keeping distributions simulated with 1×10⁹ particles as the nearly noise-free output reference.
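One common reading of "2.5D" training is that the network receives a small stack of neighbouring noisy slices and predicts the matching central reference slice. The sketch below illustrates that pairing under this assumption; the array shapes, context size, and class name are hypothetical.

```python
import torch
from torch.utils.data import Dataset

class NoisePairs25D(Dataset):
    """Pairs a stack of noisy slices with the central reference slice.
    `noisy` is a list of (D, H, W) dose arrays, one per noise realization
    (~1e6 particles); `reference` is one (D, H, W) array (~1e9 particles)."""

    def __init__(self, noisy, reference, context=1):
        self.noisy, self.reference, self.k = noisy, reference, context
        self.per_vol = reference.shape[0] - 2 * context  # usable central slices

    def __len__(self):
        return len(self.noisy) * self.per_vol

    def __getitem__(self, i):
        vol, z = divmod(i, self.per_vol)
        z += self.k                                       # skip boundary slices
        x = self.noisy[vol][z - self.k : z + self.k + 1]  # 2k+1 noisy input slices
        y = self.reference[z : z + 1]                     # central reference slice
        return torch.from_numpy(x).float(), torch.from_numpy(y).float()
```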
Results
After training, both models successfully denoise new MC dose distributions. Averaged over five patients with different tumor sites, dilated UNet (simpleNet) yields relative errors ΔD95 and ΔD98 of 0.62% (0.59%) and 0.56% (0.64%), compared with 12.07% and 14.48% for the noisy MC input. Moreover, we observe a signal-to-noise ratio improvement of 20.94 dB for simpleNet vs. 18.06 dB for dilated UNet.
Conclusion
We propose a deep-learning framework to denoise whole-volume MC dose distributions. We show that simpleNet outperforms dilated UNet at denoising: while dilated UNet (a complex architecture) tends to mimic the reference closely, superior performance is offered by simpleNet (a simple architecture). The proposed framework can be trained on multiple noise realizations per patient and on any tumor site at full image resolution (512×512). Both networks provide dose-volume histograms (DVHs) comparable to the reference MC simulation, but globally simpleNet performs better than dilated UNet. Inference time for both models was 10 s, compared to 100 min to simulate the 1×10⁹ particles of the reference.
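The D95/D98 values quoted above are standard DVH statistics: the minimum dose received by the hottest 95% (98%) of the structure volume. A short numpy sketch, with toy inputs, of how the relative errors ΔD95/ΔD98 could be computed:

```python
import numpy as np

def dvh_dx(dose, mask, x):
    """Dx: dose received by at least x% of the voxels inside `mask`."""
    voxels = np.sort(dose[mask])[::-1]           # hottest voxel first
    idx = int(np.ceil(x / 100.0 * voxels.size)) - 1
    return voxels[idx]

def rel_error(metric_eval, metric_ref):
    """Relative error in percent, as used for ΔD95 / ΔD98."""
    return 100.0 * abs(metric_eval - metric_ref) / metric_ref

# Toy volumes: a near-noise-free reference and a noisy realization.
rng = np.random.default_rng(0)
ref = rng.normal(60.0, 1.0, size=(50, 50, 50))
noisy = ref + rng.normal(0.0, 5.0, size=ref.shape)
target = np.ones_like(ref, dtype=bool)           # toy target mask
print(rel_error(dvh_dx(noisy, target, 95), dvh_dx(ref, target, 95)))
```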