ESTRO 2025 - Abstract Book
Physics - Image acquisition and processing
Conclusion: We demonstrated that Seq2Morph outperformed the widely used B-spline algorithm, with notably reduced computation time. Seq2Morph is particularly suited to head and neck adaptive radiotherapy and is expected to improve the efficiency and accuracy of both online and offline adaptive radiotherapy.
Keywords: deformable image registration, deep learning
References: Guha Balakrishnan, Amy Zhao, Mert R. Sabuncu, John Guttag, Adrian V. Dalca. VoxelMorph: A Learning Framework for Deformable Medical Image Registration. IEEE Transactions on Medical Imaging. 2019. 38(8): 1788-1800.
3166
Digital Poster
A Multi-domain Model for Reducing Artefacts in Cone-beam CT
Takao Onishi, Megumi Nakao, Takashi Mizowaki, Mitsuhiro Nakamura
Graduate School of Medicine, Kyoto University, Kyoto, Japan
Purpose/Objective: Cone-beam CT (CBCT) frequently exhibits artefacts in the abdominal region. To reduce such artefacts, we proposed a novel deep learning model, the Axial Sinogram-domain Artefact Reduction (ASAR) model.

Material/Methods: The patient cohort comprised 185 cases of abdominal disease: 124 for training, 27 for validation, and 27 for internal testing, all derived from planning CT (pCT), plus 7 cases for external testing derived from clinically acquired CBCT. To address the difficulty of acquiring a large volume of sinogram data from clinically obtained CBCT (cSino), we created synthetic sinogram data derived from pCT (sSino). The sSino were designed to serve as training data equivalent to cSino by simulating projection data affected by gastrointestinal motion and other artefact-inducing factors, using a previously developed technique. The ASAR model was trained across the axial image domain and the sinogram domain; within each domain, artefact-contaminated data and the corresponding ground-truth data were paired as training data at every iteration. Two models were trained separately: a multi-domain model trained in both domains, and an axial model trained only in the axial domain. Evaluation metrics were the mean absolute error (MAE) and structural similarity index (SSIM) between model-generated synthetic CT (sCT) images and the corresponding pCT images.

Results: Before correction, the MAE and SSIM between CBCT and pCT were 134 ± 38.6 HU and 0.55 ± 0.11 in internal testing, and 98.9 ± 9.0 HU and 0.46 ± 0.10 in external testing. After applying the multi-domain model, the MAE and SSIM between sCT and pCT improved to 24.0 ± 5.6 HU and 0.94 ± 0.02, compared with 53.5 ± 8.0 HU and 0.80 ± 0.06 for the axial model. In external testing, the MAE and SSIM between sCT and pCT were 65.9 ± 7.2 HU and 0.62 ± 0.12 with the multi-domain model, and 82.3 ± 10.3 HU and 0.50 ± 0.10 with the axial model.
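The sinogram-domain branch presupposes forward-projecting pCT slices so that motion and other artefact factors can be injected in projection space. The idea can be illustrated with a toy parallel-beam projector; this is a sketch only, since the abstract does not specify the projection geometry or the simulation technique, and `simple_radon` is a hypothetical helper, not the authors' code:

```python
import numpy as np

def simple_radon(img, angles_deg):
    """Toy parallel-beam forward projection of a square 2-D slice.

    For each angle, the sampling grid is rotated (nearest-neighbour
    interpolation) and the rotated image is summed along one axis,
    giving one detector row of the sinogram per angle.
    """
    n = img.shape[0]
    c = (n - 1) / 2.0
    yy, xx = np.indices((n, n)) - c          # grid centred on the slice
    sino = np.zeros((len(angles_deg), n))
    for i, a in enumerate(np.deg2rad(angles_deg)):
        # coordinates in the source image that map to the rotated grid
        xs = np.cos(a) * xx + np.sin(a) * yy + c
        ys = -np.sin(a) * xx + np.cos(a) * yy + c
        xi = np.clip(np.rint(xs).astype(int), 0, n - 1)
        yi = np.clip(np.rint(ys).astype(int), 0, n - 1)
        rotated = img[yi, xi]
        sino[i] = rotated.sum(axis=0)        # line integrals for this angle
    return sino
```

A centred disk phantom produces identical projections at 0° and 90°, which is a quick sanity check for a projector like this; perturbing the slice between projections would mimic the motion-corrupted sSino described above.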
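The reported metrics can be reproduced straightforwardly; a minimal sketch with NumPy follows, using a simplified single-window (global) SSIM rather than the windowed implementation such studies typically use, and a hypothetical `data_range` of 2000 HU — both assumptions, as the abstract does not state the SSIM configuration:

```python
import numpy as np

def mae_hu(sct, pct):
    """Mean absolute error in HU between synthetic CT and planning CT."""
    return float(np.mean(np.abs(sct.astype(float) - pct.astype(float))))

def global_ssim(sct, pct, data_range=2000.0):
    """Simplified SSIM computed over a single global window."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mu_x, mu_y = sct.mean(), pct.mean()
    var_x, var_y = sct.var(), pct.var()
    cov = ((sct - mu_x) * (pct - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    )
```

Identical volumes give an MAE of 0 HU and an SSIM of 1, and a constant HU offset shows up directly in the MAE, which makes the pair easy to validate before running it on real sCT/pCT volumes.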