ESTRO 2021 Abstract Book


Conclusion
Although a dedicated evaluation is envisaged to determine the suitability of sCT for dose calculations, the investigated cGAN with three input channels (air, bones, soft tissue) showed adequate accuracy and promising results for supporting abdominal tumour treatment in particle therapy.

PO-1661 Investigation into the impact of AI-enhanced CBCTs on CBCT-CT deformable image registration
D. Balfour¹, P. Looney¹, L. Turtle², J. Fenwick³, D. Boukerroui¹, J. Sage¹
¹Mirada Medical Ltd., Science, Oxford, United Kingdom; ²Clatterbridge Cancer Centre, Radiotherapy, Wirral, United Kingdom; ³Clatterbridge Cancer Centre, Physics, Wirral, United Kingdom

Purpose or Objective
Accurate and reliable CT-CBCT deformable image registration (DIR) is of central importance to tasks in adaptive RT workflows, such as contour warping and daily dose estimation. Recent literature has demonstrated success in translating CBCT images to CT-like quality using generative adversarial networks (GANs), which build on general-purpose image-translation GANs such as pix2pix [1]. However, their utility in clinical settings where CT-CBCT registration is required remains to be demonstrated. This study investigates how such AI-improved CBCT images can help to solve the CBCT-CT DIR problem.

Materials and Methods
This study used 81 head and neck (H&N) cancer patients, each with a planning CT (Philips Brilliance Big Bore; pixel spacing 1.17x1.17 mm, slice thickness 3 mm) and a CBCT (Varian OBI; pixel spacing 0.51x0.51 mm, slice thickness 2.98 mm) volume. Patients were randomly separated into a 65:16 training/testing split. Each training CT was warped to the corresponding CBCT using a mutual information (MI) DIR algorithm, so that the resulting image pairs preserved the anatomy and geometry of the CBCT images. This preprocessing produced 3896 anatomically matched slice pairs for network training. No preprocessing was applied to the test data.
The GAN consisted of a generator and a discriminator trained adversarially [1]. The generator used a U-net architecture; the discriminator was a 70x70 patch discriminator, as in [1]. The network was trained for 1000 full passes over the training data (epochs). Only the generator is used at inference to convert CBCT images to pseudo-CTs.
Two DIR pipelines were compared:
1. A pseudo-CT was generated from the CBCT and then registered to the CT.
2. Direct CBCT-CT DIR.
In both pipelines, the resulting transformation was applied to warp the original CBCT. Two DIR methods were used: a CT-CT optical flow (OF) method and a multimodal MI-based method. All registrations were performed with RTx 1.8 (Mirada Medical). DIR performance was assessed with a local feature-matching algorithm that estimates the target registration error (TRE) [2]; the pseudo-CT was not used in the DIR QA process.
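To make the CT-to-CBCT preprocessing step concrete, the sketch below shows an MI-driven B-spline deformable registration in SimpleITK. This is not the authors' implementation (the study used a proprietary Mirada algorithm); the metric, optimizer, mesh size, and sampling settings are illustrative assumptions only.

```python
import SimpleITK as sitk

def warp_ct_to_cbct(ct_path, cbct_path):
    """Illustrative MI-driven B-spline DIR: warps a planning CT onto the
    CBCT grid so that slice pairs share the CBCT's anatomy and geometry.
    (Stand-in for the MI DIR used in the study; all settings are assumed.)"""
    cbct = sitk.ReadImage(cbct_path, sitk.sitkFloat32)  # fixed image
    ct = sitk.ReadImage(ct_path, sitk.sitkFloat32)      # moving image

    # Coarse B-spline control-point grid over the CBCT domain (assumed size)
    tx = sitk.BSplineTransformInitializer(cbct, transformDomainMeshSize=[8, 8, 8])

    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetMetricSamplingStrategy(reg.RANDOM)
    reg.SetMetricSamplingPercentage(0.1)
    reg.SetInterpolator(sitk.sitkLinear)
    reg.SetOptimizerAsLBFGSB(gradientConvergenceTolerance=1e-5,
                             numberOfIterations=100)
    reg.SetInitialTransform(tx, inPlace=True)
    reg.SetShrinkFactorsPerLevel([4, 2, 1])
    reg.SetSmoothingSigmasPerLevel([2, 1, 0])

    final_tx = reg.Execute(cbct, ct)

    # Resample the CT into CBCT space -> anatomically matched training pair
    warped_ct = sitk.Resample(ct, cbct, final_tx, sitk.sitkLinear, -1000.0)
    return warped_ct
```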
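For the discriminator, a minimal PyTorch sketch of the 70x70 PatchGAN used in pix2pix [1] is given below. Layer widths follow the pix2pix paper (C64-C128-C256-C512); the choice of instance normalization and the two-channel conditional input are assumptions, as the study's exact training configuration is not reported in the abstract.

```python
import torch
import torch.nn as nn

class PatchDiscriminator70(nn.Module):
    """70x70 PatchGAN discriminator in the style of pix2pix [1].
    Input: CBCT and (pseudo-)CT slices concatenated along the channel axis;
    output: a grid of real/fake scores, each with a 70x70 receptive field."""

    def __init__(self, in_channels=2, base=64):
        super().__init__()

        def block(cin, cout, stride, norm=True):
            layers = [nn.Conv2d(cin, cout, kernel_size=4, stride=stride, padding=1)]
            if norm:
                layers.append(nn.InstanceNorm2d(cout))  # norm choice is an assumption
            layers.append(nn.LeakyReLU(0.2, inplace=True))
            return layers

        self.net = nn.Sequential(
            *block(in_channels, base, stride=2, norm=False),  # C64
            *block(base, base * 2, stride=2),                 # C128
            *block(base * 2, base * 4, stride=2),             # C256
            *block(base * 4, base * 8, stride=1),             # C512
            nn.Conv2d(base * 8, 1, kernel_size=4, stride=1, padding=1),  # patch scores
        )

    def forward(self, cbct, ct_like):
        # Conditional discriminator: judge the (input, translated output) pair jointly
        return self.net(torch.cat([cbct, ct_like], dim=1))

# e.g. on 256x256 slices the output is a 30x30 grid of patch scores
d = PatchDiscriminator70()
scores = d(torch.randn(1, 1, 256, 256), torch.randn(1, 1, 256, 256))
```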
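The two pipelines differ only in which image drives the registration; in both, the recovered transform is applied to the original CBCT. The sketch below illustrates this control flow with hypothetical helper callables (generator, register, apply_transform); RTx exposes no such Python API, so these names are placeholders, not the study's software.

```python
def pipeline_1(cbct, planning_ct, generator, register, apply_transform):
    """Pipeline 1: CBCT -> pseudo-CT -> DIR against the planning CT.
    The transform is then applied to the *original* CBCT, not the pseudo-CT."""
    pseudo_ct = generator(cbct)                          # GAN inference step
    transform = register(moving=pseudo_ct, fixed=planning_ct)
    return apply_transform(cbct, transform)              # warped original CBCT

def pipeline_2(cbct, planning_ct, register, apply_transform):
    """Pipeline 2: direct multimodal CBCT -> planning CT DIR."""
    transform = register(moving=cbct, fixed=planning_ct)
    return apply_transform(cbct, transform)
```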
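Finally, the core of a TRE-based evaluation is the residual distance between corresponding points after registration. The snippet below shows only that final computation; the local feature-matching step of [2], which produces the point correspondences automatically, is method-specific and not reproduced here.

```python
import numpy as np

def target_registration_error(points_fixed, points_warped_moving, spacing):
    """TRE core: Euclidean distance (mm) between corresponding landmarks in the
    fixed image and in the warped moving image. `points_*` are (N, 3) arrays of
    voxel indices; `spacing` is the voxel size in mm (illustrative interface)."""
    diff_mm = (np.asarray(points_fixed, dtype=float)
               - np.asarray(points_warped_moving, dtype=float)) * np.asarray(spacing)
    tre = np.linalg.norm(diff_mm, axis=1)   # per-landmark error in mm
    return tre.mean(), tre
```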
