
Figure 1: Boxplot of mean absolute error (MAE) in Hounsfield units for the two synthetic CT generation algorithms.

Conclusion We compared two AI-based synthetic CT generation tools on multiple image quality indicators. For both algorithms, the values were comparable to those reported in the literature for both image similarity and organ-level mean absolute error metrics. Future work will include a dosimetric analysis, as well as an evaluation of whether the synthetic CTs can be used for patient positioning in these cases, to further ascertain the suitability of the algorithms for clinical use.
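For context, the organ-level mean absolute error referenced above is the voxel-wise absolute HU difference between the synthetic and reference CT, averaged inside each organ contour. The minimal sketch below illustrates that computation, assuming both volumes are already resampled to the same voxel grid; the function and variable names are illustrative assumptions, not details from the study.

```python
import numpy as np

def organ_level_mae(reference_ct, synthetic_ct, organ_masks):
    """Mean absolute error in Hounsfield units, reported per organ.

    reference_ct, synthetic_ct : 3D arrays in HU on the same grid (assumed aligned)
    organ_masks                : dict mapping organ name -> boolean mask on that grid
    """
    abs_diff = np.abs(reference_ct.astype(np.float64) - synthetic_ct.astype(np.float64))
    # Average the absolute HU difference inside each organ contour
    return {organ: float(abs_diff[mask].mean()) for organ, mask in organ_masks.items()}
```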

PO-2351 Can artificial intelligence bring cone beam CT acquisitions to planning CT quality?

T. Roque 1, A. Oumani 2, O. Teboul 3, N. Paragios 4, P. Fenoglietto 5

1 TheraPanacea, Clinical Affairs, Paris, France; 2 TheraPanacea, AI engineering, Paris, France; 3 TheraPanacea, AI engineering, Paris, France; 4 Centrale Supelec, University of Paris-Saclay, Department of Mathematics, Gif-sur-Yvette, France; 5 L'Institut du Cancer de Montpellier, Department of Radiation Oncology, Paris, France

Purpose or Objective Cone-beam CT (CBCT) is an essential component of treatment delivery in radiation therapy and has contributed substantially to improving overall treatment quality. However, its usage is primarily devoted to patient positioning because of limited quality, resolution, and field of view. Recent progress on signal panels has contributed to improving the signal-to-noise ratio for some families of machines. Harnessing CBCT beyond patient positioning could be a pivotal aspect in radiation oncology and contribute to the effective implementation of adaptive treatment at scale. Such a step would require substantially improving the signal quality and augmenting the field of view so that organ-at-risk annotation, full-scale dose simulation and replanning can be performed. In this study, we evaluate the relevance of an artificial intelligence solution that could enhance the quality, resolution, and field of view of conventional, vendor-agnostic CBCTs for pelvic cases.

Materials and Methods Cycle generative adversarial networks (CycleGANs) are deep learning architectures that seek to recover a bijective transformation between different image modalities. In the context of pelvic patients, on top of the signal differences between CT and CBCT, the lack of paired data due to imperfect alignment between the two modalities also needs to be considered. To address this challenge, a two-level cycle architecture has been developed. First, using weakly paired data, a generative network is built to bring the CBCT signal closer to the planning CT (pCT). Its output is then fed to a deformable image registration framework that establishes correspondences between the two signals; these are propagated to a second CycleGAN, which is retrained using "paired" data, resulting in the synthetic CTs (sCTs). Synthetic CTs were generated from pelvis CBCTs for a test cohort of 50 patients (194 CBCTs). Mean absolute error (MAE), structural similarity index measure (SSIM) and peak signal-to-noise ratio (PSNR) on the patient pelvis were computed to assess the similarity between the planning CT and the sCT.

Results Average and standard deviation of MAE, SSIM and PSNR between planning CTs and CBCTs, and between planning CTs and synthetic CTs, on the test set are reported in Table 1. The average MAE for the pelvic cases was 59.97±13.12 HU for the original CBCTs and improved to 38.13±8.3 HU with deep learning. The mean SSIM and PSNR on the testing data were 0.79 and 31.14 dB between CBCT and pCT, and 0.94 and 32.92 dB between sCT and pCT, respectively.

Table 1: Overall results of imaging metrics between planning CTs/CBCTs and planning CTs/synthetic CTs
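As a companion to the metrics summarised in Table 1, the following minimal sketch shows how MAE (in HU), SSIM and PSNR between a planning CT and a CBCT or synthetic CT could be computed. It assumes pre-registered volumes on a common grid; the body mask, array names and HU data range are assumptions for illustration rather than details reported in the abstract.

```python
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def image_similarity(pct, test_ct, body_mask, data_range=2000.0):
    """Similarity of a CBCT or synthetic CT against the planning CT (all in HU).

    pct, test_ct : 3D arrays on the same voxel grid (assumed pre-registered)
    body_mask    : boolean array restricting MAE to the patient pelvis
    data_range   : assumed HU span used to normalise SSIM and PSNR
    """
    # Mean absolute error in HU, restricted to the body contour
    mae = float(np.mean(np.abs(pct[body_mask] - test_ct[body_mask])))
    # Structural similarity and peak signal-to-noise ratio over the whole volume
    ssim = structural_similarity(pct, test_ct, data_range=data_range)
    psnr = peak_signal_noise_ratio(pct, test_ct, data_range=data_range)
    return mae, ssim, psnr
```

Averaging such per-scan values over the 194 test CBCTs would yield summary statistics of the kind reported above.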
