
Physics - Inter-fraction motion management and offline adaptive radiotherapy

2894

Poster Discussion

Patient position verification in non-coplanar SRS with dual X-ray projections via 2D-3D registration

Siqi Ye, Lei Xing

Stanford University, Radiation Oncology, Palo Alto, USA

Purpose/Objective:

Non-coplanar CBCT is crucial for precise localization in repeated stereotactic radiosurgery (SRS). However, in some cases, potential collisions between the gantry and the patient restrict the gantry to a very small rotation angle. These limited-angle measurements are insufficient to generate high-quality CT images for image-domain registration, which leaves the challenging problem of registering only a few 2D projections to the 3D planning CT. The most popular intensity-based methods for 2D-3D registration convert the 3D CT image into digitally reconstructed radiographs (DRRs), so that similarity can be measured in the 2D projection domain. However, this process is computationally heavy because of accurate DRR generation and repeated optimization iterations. Recent deep learning methods further compound the challenge by requiring a large dataset of DRR and X-ray (projection) pairs for model training. To address these issues, we propose an efficient dataset-free registration model based on a lightweight neural representation network.

Material/Methods:

We follow the procedure of intensity-based methods, converting the 3D volume into the 2D projection space and comparing it with the acquired 2D projections. Unlike previous methods, rather than directly optimizing the registration parameters (rigid parameters or deformation fields), we optimize the parameters of a network that maps the input 3D volume to the registration parameters. Specifically, we feed the spatial coordinates of the 3D volume into a 3-layer multi-layer perceptron (MLP) with 256 neurons per layer and obtain the registration parameters from the network’s output. These outputs transform the input 3D volume to a new position, which is then converted into 2D projections using the Radon transform, incorporating the imaging physics. The network’s parameters are optimized to minimize the difference between the projections generated from the transformed 3D volume and the actual measurements. No dataset is required in this process; only the 3D volume at a reference position and the two projections acquired at the new position are needed.
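To make the workflow concrete, the sketch below outlines one possible realization of this optimization in PyTorch. It is a simplified illustration under several assumptions not stated in the abstract: rigid motion only, a parallel-beam summation in place of the true cone-beam Radon/DRR geometry, coordinate subsampling, and hypothetical helper names (rotation_matrix, rigid_warp, project, register). It is not the authors’ implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def rotation_matrix(rx, ry, rz):
    """Compose rotations about the x, y, and z axes (angles in radians)."""
    one, zero = torch.ones_like(rx), torch.zeros_like(rx)
    Rx = torch.stack([torch.stack([one, zero, zero]),
                      torch.stack([zero, torch.cos(rx), -torch.sin(rx)]),
                      torch.stack([zero, torch.sin(rx), torch.cos(rx)])])
    Ry = torch.stack([torch.stack([torch.cos(ry), zero, torch.sin(ry)]),
                      torch.stack([zero, one, zero]),
                      torch.stack([-torch.sin(ry), zero, torch.cos(ry)])])
    Rz = torch.stack([torch.stack([torch.cos(rz), -torch.sin(rz), zero]),
                      torch.stack([torch.sin(rz), torch.cos(rz), zero]),
                      torch.stack([zero, zero, one])])
    return Rz @ Ry @ Rx


def rigid_warp(volume, params):
    """Resample a (1, 1, D, H, W) volume at the pose given by 3 rotation
    angles (radians) and 3 translations (normalized [-1, 1] units)."""
    R = rotation_matrix(params[0], params[1], params[2])
    t = params[3:6].unsqueeze(1)                       # (3, 1)
    theta = torch.cat([R, t], dim=1).unsqueeze(0)      # (1, 3, 4) affine pose
    grid = F.affine_grid(theta, volume.shape, align_corners=False)
    return F.grid_sample(volume, grid, align_corners=False)


def project(volume, angle_deg):
    """Crude parallel-beam projection: rotate the volume in-plane by the view
    angle, then integrate along one axis. A simplified stand-in for the Radon
    transform / DRR generation described above."""
    a = torch.deg2rad(torch.tensor(angle_deg, dtype=volume.dtype))
    zero = torch.zeros((), dtype=volume.dtype)
    R = rotation_matrix(zero, zero, a)
    t = torch.zeros(3, 1, dtype=volume.dtype)
    theta = torch.cat([R, t], dim=1).unsqueeze(0)
    grid = F.affine_grid(theta, volume.shape, align_corners=False)
    rotated = F.grid_sample(volume, grid, align_corners=False)
    return rotated.sum(dim=-1)                         # line integrals -> 2D view


class RegistrationMLP(nn.Module):
    """3-layer MLP with 256 neurons per layer, mapping voxel coordinates to a
    shared 6-vector of rigid parameters (pooled over the sampled points)."""
    def __init__(self, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 6))

    def forward(self, coords):                         # coords: (N, 3) in [-1, 1]
        # Bounded output is an added assumption to keep the estimated pose small.
        return 0.1 * torch.tanh(self.net(coords).mean(dim=0))


def register(planning_ct, measured_projs, angles_deg, iters=300, lr=1e-3):
    """Dataset-free optimization: fit the MLP so that projections of the
    transformed planning CT match the acquired projections."""
    mlp = RegistrationMLP()
    opt = torch.optim.Adam(mlp.parameters(), lr=lr)
    D, H, W = planning_ct.shape[-3:]
    zz, yy, xx = torch.meshgrid(torch.linspace(-1, 1, D),
                                torch.linspace(-1, 1, H),
                                torch.linspace(-1, 1, W), indexing="ij")
    coords = torch.stack([xx, yy, zz], dim=-1).reshape(-1, 3)
    coords = coords[torch.randperm(coords.shape[0])[:4096]]   # subsample points
    for _ in range(iters):
        opt.zero_grad()
        params = mlp(coords)                           # 6 rigid parameters
        moved = rigid_warp(planning_ct, params)        # transform the volume
        loss = sum(F.mse_loss(project(moved, a), p)    # projection-domain loss
                   for a, p in zip(angles_deg, measured_projs))
        loss.backward()
        opt.step()
    return mlp(coords).detach()                        # estimated parameters
```

As a usage illustration under the same assumptions, a planning CT tensor of shape (1, 1, 30, 150, 150) and two measured projections at 0° and 45° would be registered with register(ct, [p0, p45], angles_deg=[0.0, 45.0]), which returns the six estimated rigid parameters.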

Results:

We experimented on both simulated data and real data acquired from Varian’s TrueBeam machine. For the simulated-data experiments, we generated 2D projections of a head-and-neck image (150×150 pixels, 30 slices) placed at six different positions involving translations and rotations along the three axes. For each position, we generated two projections separated by 45° using the Radon transform. To assess registration accuracy, we compared the network’s output to the ground-truth transformation parameters. The proposed method achieved a rotation accuracy of ±0.01° and a translation accuracy of ±0.1 pixels. Registration of each case took only about 15 seconds on a single NVIDIA GeForce RTX 3090 GPU.
