ESTRO 2023 - Abstract Book
Digital Posters
Conclusion This study demonstrated for the first time that missing tissues in simulation imaging can be generated with high similarity (similarity index up to 0.86) using the machine learning method for all cases tested. Adding patient body outline information further improved dosimetric accuracy, with mean gamma pass rates equal to or higher than 96.6% in all evaluated cases.
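As a minimal sketch of the similarity metric reported above, assuming the "similarity index" is a Dice coefficient between generated and reference tissue masks (the abstract does not name the metric, so this is an assumption):

```python
import numpy as np

def dice_similarity(a, b):
    """Dice similarity coefficient between two binary masks.

    Assumed interpretation of the abstract's 'similarity index';
    returns a value in [0, 1], where 1 means perfect overlap.
    """
    a = np.asarray(a).astype(bool)
    b = np.asarray(b).astype(bool)
    intersection = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    # Two empty masks are defined as perfectly similar.
    return 2.0 * intersection / denom if denom else 1.0
```

For example, two masks that share half their voxels yield a coefficient of 0.5.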
PO-1693 Deep learning for perspective deformation correction in X-ray imaging
Y. Huang 1 , A. Maier 2 , F. Fan 2 , B. Kreher 3 , X. Huang 4 , R. Fietkau 1 , C. Bert 1 , F. Putz 1
1 University Hospital Erlangen, Department of Radiation Oncology, Erlangen, Germany; 2 Friedrich-Alexander-Universität Erlangen-Nürnberg, Pattern Recognition Lab, Erlangen, Germany; 3 Siemens Healthcare GmbH, Diagnostics Products, Forchheim, Germany; 4 Shanghai Jiao Tong University, Department of Automation, Shanghai, China

Purpose or Objective In cone-beam X-ray transmission imaging, perspective deformation makes direct, accurate geometric assessment of anatomical structures difficult. Correcting perspective deformation would enable cone-beam computed tomography (CBCT) systems to support more applications in anatomical landmark detection, fiducial marker registration and image fusion with MRI. For example, fast 2D MRI/X-ray hybrid imaging without patient repositioning has important potential applications in interventional surgery and radiation therapy. In this work, we aim to convert perspective projections into orthogonal projections using deep learning.

Materials and Methods Directly converting a single perspective projection image into an orthogonal projection image is extremely challenging due to the lack of depth information. In contrast to biplanar systems, which use two orthogonal views, a complementary (180°) view setting is proposed. A complementary view is fully redundant in parallel-beam geometry, but in cone-beam geometry it provides bounding information and reduces uncertainty in point-to-point correspondence, offering a practical way to identify perspectively deformed structures. Two state-of-the-art networks, Pix2pixGAN and TransU-Net, are applied to learn perspective deformation.

Results Experiments on numerical bead phantom data demonstrate the advantage of complementary views (RMSE = 1.40) over orthogonal views (RMSE = 3.87) or a single view (RMSE = 4.68).
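The RMSE values reported above can be computed as follows; this is a minimal sketch of the standard root-mean-square error between a network-predicted orthogonal projection and its ground truth (the abstract does not give the exact normalization, so element-wise averaging is assumed):

```python
import numpy as np

def rmse(pred, target):
    """Root-mean-square error between a predicted projection and its
    ground-truth orthogonal projection, averaged over all pixels."""
    pred = np.asarray(pred, dtype=float)
    target = np.asarray(target, dtype=float)
    return float(np.sqrt(np.mean((pred - target) ** 2)))
```

Lower values indicate a closer match; the abstract's comparison (1.40 for complementary views vs. 4.68 for a single view) uses this kind of per-setting aggregate error.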
The study on representation space demonstrates that Pix2pixGAN, as a fully convolutional network, achieves better performance in polar space (RMSE = 1.40) than in Cartesian space (RMSE = 5.31), while TransU-Net, as a transformer network, achieves comparable though slightly worse performance in Cartesian space (RMSE = 2.95) relative to polar space (RMSE = 1.73). Further study shows that our method has a certain tolerance to geometric inaccuracies, such as errors in source-to-isocenter distance, rotation angle, detector principal point shift and respiratory motion, within calibration accuracy. Experiments on patients' chest and head data demonstrate that our method allows accurate anatomical landmark assessment. To demonstrate generalizability to real applications, experiments on real cadaver knee data with various surgical metal implants were carried out. Our method is shown to be effective in correcting perspective deformation for important anatomical landmarks and robust in the presence of bulky metal implants and surgical screws, but it has limited performance on thin, elongated metal objects such as K-wires.

Conclusion A framework to learn perspective deformation in CBCT using complementary views is proposed. It has a certain tolerance to geometric inaccuracies and generalizes well to real data. This will enable CBCT systems to support more potential applications in radiation oncology in the future.

PO-1694 MRI derived brain morphometric measurements: effect of voxel size, anisotropy and slice orientation
T. Lusa Aguero 1 , S. Chabert 2 , A. Veloz 2
1 Hospital Clínico Universitario de Valencia, Radiofísica y Protección Radiológica, Valencia, Spain; 2 Universidad de Valparaíso, CINGS, Centro de Investigación en Ingeniería para la Salud, Valparaíso, Chile

Purpose or Objective There is growing interest in obtaining quantitative indicators of brain volumes in the follow-up of patients undergoing radiation therapy for central nervous system neoplasms, which frequently exhibit radiation-induced atrophy of brain structures. MRI is the imaging modality of choice. Acquisition recommendations for volumetric studies are an isotropic 1x1x1 mm3 voxel and sagittal slice orientation, as indicated by the MPRAGE protocol. However, in clinical practice, modifications are observed due to time constraints, leading to bigger and anisotropic voxels. Thus, we studied the impact of modifying the acquisition parameters, namely bigger voxel sizes, slice orientation (axial, sagittal) and anisotropy, on volumetric MRI quantifications of different brain structures.

Materials and Methods 10 healthy subjects underwent 3D T1-weighted MRI acquisitions. Variations from the MPRAGE protocol were considered, testing bigger voxels (1.1x1.1x1.1 mm3 and 1.25x1.25x1.25 mm3) and anisotropic voxels (0.9x0.9x1.2 mm3 and 1x1x1.25
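To make the size of the protocol variations concrete, the sketch below computes the single-voxel volume for the acquisition settings listed above; the helper name and the protocol labels are illustrative, not from the abstract:

```python
def voxel_volume_mm3(dx, dy, dz):
    """Volume of one voxel in mm^3, given its edge lengths in mm."""
    return dx * dy * dz

# Voxel dimensions taken from the abstract: reference MPRAGE and
# the larger / anisotropic variants tested (labels are illustrative).
protocols = {
    "MPRAGE 1.0 iso": (1.0, 1.0, 1.0),
    "1.1 iso": (1.1, 1.1, 1.1),
    "1.25 iso": (1.25, 1.25, 1.25),
    "0.9x0.9x1.2 aniso": (0.9, 0.9, 1.2),
}
for name, dims in protocols.items():
    print(f"{name}: {voxel_volume_mm3(*dims):.3f} mm^3")
```

Even the modest step from a 1.0 mm to a 1.25 mm isotropic voxel roughly doubles the voxel volume, which is why such protocol shortcuts can matter for small-structure volumetry.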