ESTRO 2021 Abstract Book
S364
Abstract Text
In modern external beam image-guided radiotherapy (IGRT), cone-beam computed tomography (CBCT) plays a crucial role in accurate patient position verification. CBCT can also facilitate online adaptive radiotherapy (ART) by visualising daily anatomical variations without requiring an additional CT rescan. However, CBCT image quality is inferior to that of CT in soft-tissue contrast and CT number consistency due to artefacts; CBCT is therefore not sufficient for accurate dose calculations. Patients need to be referred for a rescan CT (rCT) when significant anatomical differences are noted between the daily images and the planning CT, but scheduling and acquiring an rCT adds logistic complexity and patient burden to the treatment. In contrast, CBCT-based ART can address these issues: enabling accurate dose calculations on the daily CBCT images could eliminate the need for acquiring an rCT. A prerequisite for online CBCT-based ART is that the CT number accuracy is sufficient to enable dose calculation. Considerable literature has recently emerged proposing methods for correcting CBCT imaging artefacts and increasing image intensity consistency using look-up table-based approaches, deformable image registration (DIR) of the planning CT to the daily anatomy on CBCT, and model- or Monte Carlo-based methods for scatter estimation and correction. These techniques require a time scale of minutes, which is not acceptable when aiming to use CBCT images for daily online dose evaluation or online pre-treatment adaptation. Recently, deep learning (DL) has been proposed for fast CBCT artefact correction, as it can solve image-to-image translation problems, or image synthesis, within seconds. This talk will provide an overview of DL-based methods, reviewing architectures, spatial configurations, anatomical sites of application, and the performance obtained.
The current status will be discussed, touching upon current challenges as presented in a recent review (2021) published by Spadea, Maspero et al.

SP-0476 The use of deep-learning based CBCT segmentation in adaptive radiotherapy
X. Yang¹, Y. Lei¹, J. Roper¹, P. Patel¹, A. Jani¹, J. Bradley¹, T. Liu¹
¹Emory University, Radiation Oncology, Atlanta, USA

Abstract Text
Purpose: Cone-beam computed tomography (CBCT) has been widely used in image-guided radiotherapy (RT) to improve treatment accuracy. Beyond the standard application of patient setup, CBCT has the potential to assess the inter-fraction variability of organs-at-risk (OARs) caused by patient motion, weight loss, and other factors. Accurate segmentation of OARs on CBCT images is needed to quantify radiation doses, which could guide real-time treatment strategies (whether repositioning the patient or adapting the plan) to minimize severe toxicity. Further, segmentation must be fast to avoid significant treatment delays. However, accurate and fast segmentation is challenging: manual segmentation is time-consuming, labor-intensive, and subject to inter-observer variability, while automatic segmentation is difficult because CBCT image quality is usually far inferior to that of the simulation CT, resulting in segmentation errors. Recently, we developed deep learning algorithms that accurately segment multiple OARs on CBCT images within a minute by exploiting the improved image quality of synthetic CT (sCT) and synthetic MRI (sMRI).

Method: For sCT-aided OAR segmentation, a pre-trained cycle-consistent generative adversarial network (CycleGAN) is first used to generate a high-quality sCT from a CBCT; via CycleGAN, the image quality of the generated sCT approaches that of the planning CT. A deep attention fully convolutional network then performs OAR segmentation on the sCT. By integrating a deep attention strategy into the segmentation network, the salient features that accurately represent the different organs are automatically highlighted.
For sMRI-aided OAR segmentation, an sMRI is first synthesized from a CBCT using a pre-trained CycleGAN model with dense blocks and a self-attention mechanism. The sMRI has superb soft-tissue contrast, while the CBCT better highlights bony structures. The complementary information from the sMRI and the CBCT is combined by dual pyramid networks (DPNs) to delineate the multiple OARs. CBCT images and their corresponding manual contours drawn by expert physicians are used as pairs to train and test the proposed models. The sCT-aided OAR segmentation algorithm has been applied to abdominal disease sites, while the sMRI-aided algorithm has been applied to head-and-neck and pelvic sites. Cross-validation experiments were performed to evaluate these methods. The Dice similarity coefficient (DSC) and mean surface distance (MSD) were used to quantify the differences between our segmentations and the ground-truth manual contours; higher DSC and lower MSD values indicate better performance.

Results: Across a cohort of 65 head-and-neck cancer patients, the following DSC values were achieved for 17 important OARs: brain stem 0.87±0.03, left/right cochlea 0.79±0.10/0.79±0.11, left/right eye 0.89±0.08/0.89±0.07, larynx 0.90±0.08, left/right lens 0.75±0.06/0.77±0.06, mandible 0.86±0.13, optic chiasm 0.66±0.14, left/right optic nerve 0.78±0.05/0.77±0.04, oral cavity 0.96±0.04, left/right parotid 0.89±0.04/0.89±0.04, pharynx 0.83±0.02, and spinal cord 0.84±0.07. For a cohort of 60 abdominal cancer patients, the DSC and MSD were 0.80±0.20 and 1.95±2.37 mm for the duodenum, 0.87±0.05 and 1.33±0.58 mm for the small bowel, 0.88±0.07 and 2.10±2.21 mm for the large bowel, 0.91±0.08 and 1.59±1.92 mm for the stomach, 0.93±0.05 and 2.90±4.77 mm for the liver, 0.92±0.05 and 2.05±4.71 mm for the left kidney, 0.93±0.04 and 2.09±3.40 mm for the right kidney, and 0.88±0.07 and 0.75±0.50 mm for the spinal cord. For a cohort of 100 prostate cancer patients, the DSC and MSD between the segmentation results and the ground truth were 0.96±0.03 and 0.65±0.67 mm for the bladder, 0.91±0.08 and 0.93±0.96 mm for the prostate, 0.93±0.04 and 0.72±0.61 mm for the rectum, and 0.95±0.05 and 1.05±1.40 mm / 0.95±0.05 and 1.08±1.48 mm for the left and right femoral heads, respectively. These results are significantly better than those of competing methods on most evaluation metrics.

Conclusion: These retrospective studies demonstrate that our proposed deep-learning-based methods can reliably and rapidly segment multiple OARs in three different anatomical regions, supporting the use of OAR auto-segmentation in CBCT-guided adaptive radiotherapy.
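The DSC and MSD metrics used throughout the evaluation can be sketched in a few lines of NumPy. This is not the authors' implementation, only a minimal 2-D illustration on binary masks (a brute-force surface distance, fine for toy masks but not for full 3-D volumes; `spacing` stands in for the physical pixel size in mm):

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def surface_points(mask: np.ndarray) -> np.ndarray:
    """Border pixels: foreground pixels with at least one background 4-neighbour (2-D only)."""
    m = mask.astype(bool)
    p = np.pad(m, 1, constant_values=False)
    core = (p[1:-1, :-2] & p[1:-1, 2:] & p[:-2, 1:-1] & p[2:, 1:-1])
    return np.argwhere(m & ~core).astype(float)

def mean_surface_distance(a: np.ndarray, b: np.ndarray, spacing: float = 1.0) -> float:
    """Symmetric mean surface distance in units of `spacing` (brute force)."""
    pa, pb = surface_points(a), surface_points(b)
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1) * spacing
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

# Toy example: a 4x4 square vs. the same square shifted by one pixel.
a = np.zeros((8, 8), dtype=bool); a[2:6, 2:6] = True
b = np.zeros((8, 8), dtype=bool); b[2:6, 3:7] = True
print(round(dice(a, b), 3))        # 0.75
print(mean_surface_distance(a, b))
```

A perfect segmentation gives DSC = 1 and MSD = 0 mm; the shifted square above illustrates how a one-pixel misalignment already costs a quarter of the Dice overlap.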
Made with FlippingBook Learn more on our blog
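Both segmentation pipelines in SP-0476 rely on CycleGAN image synthesis, whose defining ingredient is a cycle-consistency loss: translating CBCT to sCT and back should recover the input. The toy sketch below illustrates that constraint with 1-D affine "generators"; the functions, coefficients, and intensity ranges are illustrative stand-ins, not the authors' networks:

```python
import numpy as np

rng = np.random.default_rng(0)

def G(x):
    """Stand-in for the CBCT -> synthetic CT generator (here a simple affine map)."""
    return 1.2 * x - 50.0

def F(y):
    """Stand-in for the reverse (CT -> CBCT) generator, the exact inverse of G."""
    return (y + 50.0) / 1.2

def cycle_consistency_loss(x_cbct, y_ct):
    """L1 cycle loss: x -> G -> F should return x, and y -> F -> G should return y."""
    loss_x = np.abs(F(G(x_cbct)) - x_cbct).mean()
    loss_y = np.abs(G(F(y_ct)) - y_ct).mean()
    return loss_x + loss_y

x = rng.normal(1000.0, 100.0, size=256)  # fake raw CBCT intensities
y = rng.normal(0.0, 300.0, size=256)     # fake CT numbers (HU)
print(cycle_consistency_loss(x, y))      # ~0, since F inverts G exactly
```

In a real CycleGAN this loss is minimised jointly with adversarial losses on both domains, which lets the generators train on unpaired CBCT and CT images; the cycle term is what keeps the anatomy of the synthetic image tied to the input.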