Physics - Image acquisition and processing
Conclusion:
The results support the clinical use of the brain and pelvis DL models for sCT generation. The final validation stage, currently in progress, is the use of sCT images for rigid registration with daily CBCT for patient repositioning.
Keywords: synthetic CT, MR-only radiotherapy, deep learning
3060
Digital Poster
Enhancing Gamma Knife CBCT Image Quality Using Pix2Pix Generative Adversarial Networks
Prabhakar Ramachandran 1,2, Zachery Colbert 1, Darcie Anderson 2, Andrew Fielding 2, Daniel Arrington 1, Matthew Foote 1
1 Princess Alexandra Hospital, Radiation Oncology, Brisbane, Australia. 2 Queensland University of Technology, Science & Engineering Faculty, Brisbane, Australia
Purpose/Objective:
Cone-beam computed tomography (CBCT) is integral to Gamma Knife radiosurgery for patient positioning and treatment monitoring. However, CBCT images suffer from poor image quality, variability in Hounsfield units, and imaging artefacts [1]. Addressing these limitations is vital for optimising the clinical utility of CBCT in Gamma Knife radiosurgery procedures. To tackle these challenges, our study focuses on developing a deep learning-based model to generate synthetic CBCT images of superior quality that closely resemble gold-standard CT scans. We utilised Generative Adversarial Networks (GANs), introduced in 2014 by Goodfellow et al. [2], which have revolutionised the field of medical imaging. Our approach employs a Pix2Pix GAN model [3], consisting of a generator and a discriminator, to translate Gamma Knife CBCT images into synthetic CBCT images. The primary aim is to markedly improve the quality of CBCT images, which is likely to aid in utilising these scans for dose computation and treatment monitoring.
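As a concrete illustration of the generator/discriminator pairing described above, the sketch below outlines a Pix2Pix-style setup in PyTorch (an assumption; the abstract does not name the framework). The layer sizes, class names, and single down/up stage are illustrative stand-ins, not the authors' implementation.

```python
# Minimal sketch of a Pix2Pix-style CBCT -> synthetic CBCT translator.
# Framework (PyTorch), shapes, and names are assumptions for illustration.
import torch
import torch.nn as nn

class UNetGenerator(nn.Module):
    """Toy U-Net: one down/up stage with a skip connection."""
    def __init__(self, channels=1, features=64):
        super().__init__()
        self.down = nn.Sequential(
            nn.Conv2d(channels, features, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2),
        )
        self.bottleneck = nn.Sequential(
            nn.Conv2d(features, features, 3, padding=1),
            nn.ReLU(),
        )
        self.up = nn.Sequential(
            nn.ConvTranspose2d(features * 2, channels, 4, stride=2, padding=1),
            nn.Tanh(),
        )

    def forward(self, x):
        d = self.down(x)
        b = self.bottleneck(d)
        # Skip connection: concatenate encoder features with bottleneck output.
        return self.up(torch.cat([d, b], dim=1))

class CNNDiscriminator(nn.Module):
    """Simple CNN discriminator scoring (input CBCT, candidate) pairs."""
    def __init__(self, channels=1, features=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels * 2, features, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv2d(features, 1, 4, stride=2, padding=1),  # real/fake logit map
        )

    def forward(self, cbct, candidate):
        # Condition on the input slice, as in the Pix2Pix formulation.
        return self.net(torch.cat([cbct, candidate], dim=1))
```

In the full Pix2Pix formulation, the discriminator is conditioned on the input slice in this way, scoring (input, output) pairs patch-wise rather than judging whole images.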
Material/Methods:
The dataset included 2,872 CT and CBCT slices from 15 patients, with 20% of these scans used for model validation and testing. A MATLAB® script was developed to register the CT and CBCT images and convert them to NIfTI format. From these NIfTI images, paired 2D slices were extracted for training and validation. We employed a Pix2Pix GAN model for image translation, featuring a generator based on the U-Net architecture [4] and a simple convolutional neural network (CNN) discriminator. The generator was trained with a composite loss function combining the discriminator's binary cross-entropy (BCE) loss, an L1 loss, and a Structural Similarity Index Measure (SSIM) term. The generated synthetic CBCT images were evaluated against the ground-truth CT scans using SSIM [5].
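A minimal sketch of the composite generator objective described above (adversarial BCE + L1 + an SSIM term) might look as follows. The `pytorch_msssim` dependency, loss weights, and normalisation range are assumptions for illustration; the abstract does not specify the implementation or weighting.

```python
# Sketch of the composite generator loss: BCE adversarial + L1 + SSIM terms.
# The SSIM implementation and weights below are assumptions, not the
# authors' published values.
import torch
import torch.nn as nn
from pytorch_msssim import ssim  # assumed third-party dependency

bce = nn.BCEWithLogitsLoss()
l1 = nn.L1Loss()

def generator_loss(disc_out_fake, fake_ct, real_ct,
                   lambda_l1=100.0, lambda_ssim=1.0):
    # Adversarial term: the generator wants the discriminator's logits
    # on its output to be classified as real.
    adv = bce(disc_out_fake, torch.ones_like(disc_out_fake))
    # Pixel-wise fidelity term.
    recon = l1(fake_ct, real_ct)
    # Structural term: SSIM is a similarity in [0, 1], so minimise 1 - SSIM.
    # data_range=1.0 assumes images normalised to [0, 1].
    structural = 1.0 - ssim(fake_ct, real_ct, data_range=1.0)
    return adv + lambda_l1 * recon + lambda_ssim * structural
```

The same SSIM routine can then be reused at evaluation time to score generated synthetic CBCT slices against the registered ground-truth CT slices.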