ESTRO 2024 - Abstract Book
Physics - Image acquisition and processing
This groundbreaking method provides clinicians with real-time, toxicity-free views of 3D anatomical changes, enabling daily adjustments to radiation therapy. It not only enhances treatment effectiveness but also minimizes radiation exposure to healthy tissue, and it could set a new standard for radiation therapy, elevating patient outcomes and care quality.
Material/Methods:
3D reconstruction from 2D projections faces key challenges:
- Limited 3D Information & Bone Emphasis: Bi-planar projections emphasize dense bone and often obscure soft tissue, hampering detailed 3D reconstruction.
- Evolving Anatomy: Initial CT scans might not capture ongoing anatomical changes due to factors like tumor progression, organ shifts, weight loss, or radiotherapy effects.
We adopt a novel approach that uses a Generative Adversarial Network (GAN) to generate a 3D scan from 2D projections while simultaneously deforming a prior CT scan to align with this newly generated 3D representation.
- GAN-Based 3D Reconstruction: A 3D StyleGAN designed to synthesize head and neck virtual CT scans. The generator produces synthetic CT images by learning from a large dataset of real CT scans, while the discriminator assesses the authenticity of these synthetic images. Approximately 3,500 CT scans were used to train this model.
- Deformation of Prior CT: Concurrently, a prior CT scan, which may not accurately represent the current state of the patient's anatomy, is deformed non-rigidly to optimally align with the newly generated 3D image. A U-Net architecture inspired by the VoxelMorph [1] deep learning deformable registration framework was adopted and pre-trained on more than 1,000 pairs of longitudinal data (a minimal sketch of this step follows this list).
- Joint Optimization & Refinement: The key breakthrough lies in the simultaneous optimization of the generation and deformation processes. The two are not treated in isolation; they work collaboratively and iteratively. The generated 3D volume guides the deformation of the prior CT, ensuring that it accurately corresponds to the patient's current anatomy; conversely, the deformation process provides feedback to the generator, enabling it to refine its output to match the deformed CT (an illustrative loop is sketched after the next paragraph).
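As a rough illustration of the deformation step, the sketch below shows a VoxelMorph-style setup in PyTorch: a small registration network predicts a dense displacement field from the concatenated prior CT and target volume, and a spatial-transformer warp resamples the prior CT accordingly. The names (RegistrationNet, warp_volume) and the toy network are assumptions for illustration, not the implementation used in this work, where the registration model is a pre-trained U-Net and the target volume is the GAN-generated scan.

```python
# Minimal VoxelMorph-style deformation sketch (illustrative only, not the
# authors' implementation): predict a dense displacement field, then warp
# the prior CT toward the target (GAN-generated) volume.
import torch
import torch.nn as nn
import torch.nn.functional as F


class RegistrationNet(nn.Module):
    """Toy stand-in for the U-Net used for deformable registration."""

    def __init__(self, features=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(2, features, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv3d(features, features, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv3d(features, 3, 3, padding=1),  # 3-channel displacement field
        )

    def forward(self, prior_ct, target):
        x = torch.cat([prior_ct, target], dim=1)  # (B, 2, D, H, W)
        return self.net(x)                        # (B, 3, D, H, W) displacements


def warp_volume(volume, displacement):
    """Warp `volume` with a dense displacement field given in voxel units."""
    b, _, d, h, w = volume.shape
    # Identity sampling grid in normalized [-1, 1] coordinates (x, y, z order).
    zs = torch.linspace(-1, 1, d, device=volume.device)
    ys = torch.linspace(-1, 1, h, device=volume.device)
    xs = torch.linspace(-1, 1, w, device=volume.device)
    grid_z, grid_y, grid_x = torch.meshgrid(zs, ys, xs, indexing="ij")
    identity = torch.stack([grid_x, grid_y, grid_z], dim=-1)  # (D, H, W, 3)
    identity = identity.unsqueeze(0).expand(b, -1, -1, -1, -1)
    # Convert voxel displacements (dx, dy, dz) to normalized coordinates.
    scale = torch.tensor(
        [2.0 / max(w - 1, 1), 2.0 / max(h - 1, 1), 2.0 / max(d - 1, 1)],
        device=volume.device)
    disp = displacement.permute(0, 2, 3, 4, 1)  # (B, D, H, W, 3)
    grid = identity + disp * scale
    return F.grid_sample(volume, grid, align_corners=True)


if __name__ == "__main__":
    prior_ct = torch.randn(1, 1, 32, 32, 32)   # toy prior CT volume
    generated = torch.randn(1, 1, 32, 32, 32)  # toy GAN-generated volume
    reg_net = RegistrationNet()
    field = reg_net(prior_ct, generated)
    deformed = warp_volume(prior_ct, field)
    similarity = F.mse_loss(deformed, generated)           # image similarity term
    smoothness = (field.diff(dim=2).abs().mean()
                  + field.diff(dim=3).abs().mean()
                  + field.diff(dim=4).abs().mean())        # regularize the field
    loss = similarity + 0.1 * smoothness
    print(deformed.shape, loss.item())
```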
Our holistic training framework seeks to find the optimal compromise between the prior CT data and the generated 3D image, ultimately creating a highly accurate and up-to-date representation of the patient's anatomy.
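The joint optimization can be pictured as an iterative loop in which the GAN latent code and the deformation are updated together: the latent code is driven to explain the measured bi-planar projections and to stay consistent with the deformed prior CT, while the registration network keeps aligning the prior CT to the current generated volume. The sketch below follows that logic under assumed interfaces; `generator`, `project_biplanar`, `reg_net`, and `warp_volume` are hypothetical callables, and this is not the authors' code.

```python
# Illustrative joint-optimization loop (assumed structure, not the authors' code):
# the GAN latent code and the deformation of the prior CT are refined together,
# each providing feedback to the other.
import torch
import torch.nn.functional as F


def reconstruct(generator, project_biplanar, reg_net, warp_volume,
                prior_ct, measured_projections, steps=200,
                w_proj=1.0, w_prior=0.5, lr=1e-2):
    """Jointly optimize a GAN latent code and the deformation of a prior CT.

    Assumed interfaces:
      generator(latent)          -> 3D volume synthesized by the pretrained GAN
      project_biplanar(volume)   -> simulated bi-planar projections of a volume
      reg_net(prior, target)     -> dense displacement field aligning prior to target
      warp_volume(prior, field)  -> prior CT warped by the displacement field
    """
    latent = torch.zeros(1, generator.latent_dim, requires_grad=True)
    optimizer = torch.optim.Adam([latent] + list(reg_net.parameters()), lr=lr)

    for _ in range(steps):
        optimizer.zero_grad()
        volume = generator(latent)                       # current 3D estimate
        # (1) The generated volume must explain the measured 2D projections.
        proj_loss = F.mse_loss(project_biplanar(volume), measured_projections)
        # (2) The prior CT is deformed toward the generated volume ...
        field = reg_net(prior_ct, volume.detach())
        deformed_prior = warp_volume(prior_ct, field)
        reg_loss = F.mse_loss(deformed_prior, volume.detach())
        # (3) ... and the deformed prior, in turn, constrains the generator,
        #     pulling the synthesis toward patient-specific anatomy.
        prior_loss = F.mse_loss(volume, deformed_prior.detach())
        loss = w_proj * proj_loss + reg_loss + w_prior * prior_loss
        loss.backward()
        optimizer.step()

    with torch.no_grad():
        final_volume = generator(latent)
        final_prior = warp_volume(prior_ct, reg_net(prior_ct, final_volume))
    return final_volume, final_prior
```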
Results:
We evaluated our reconstruction method using data from 70 VMAT-treated patients, including initial planning CTs and subsequent CBCTs. Using deformable image registration, we aligned the planning CTs with the last CBCTs to produce virtual CTs (vCTs) as references, from which bi-planar projections were generated with a cone-beam projector [4]. By integrating the anatomical information from the planning CTs, our method provides precise volume reconstructions that accurately capture the structural changes induced by treatment or internal positioning shifts, as depicted in Figure 1. Our approach produces 3D reconstructions in less than a minute, with