ESTRO 2023 - Abstract Book


Monday 15 May 2023


Conclusion When subjected to ACR accreditation standards for diagnostic CTs, the Varian HyperSight imaging system was able to pass the majority of tests, with image quality similar to that produced by diagnostic CT scanners and substantially superior to existing linac-based CBCT imaging systems. Further protocol optimization is being conducted on this new imaging system, which has the potential to perform CT simulations and dose calculations similar to conventional CT simulators.

PD-0903 Human validation of a Deep Learning MRI-based Synthetic CT for RT Planning L. Crespi 1,2 , S. Camnasio 1 , D. Dei 3,4 , N. Lambri 3,5 , P. Mancosu 5 , M. Scorsetti 3,5 , D. Loiacono 1

1 Politecnico di Milano, Dipartimento di Elettronica, Informazione e Bioingegneria, Milan, Italy; 2 Human Technopole, Center for Health Data Science, Milan, Italy; 3 Humanitas University, Department of Biomedical Sciences, Pieve Emanuele (MI), Italy; 4 IRCCS Humanitas Research Hospital, Department of Radiotherapy and Radiosurgery, Rozzano (MI), Italy; 5 IRCCS Humanitas Research Hospital, Medical Physics Unit - Radiotherapy and Radiosurgery Department, Rozzano (MI), Italy

Purpose or Objective
MRI-based planning typically requires (a) MRI sequences, to exploit the high soft-tissue contrast of MRI, and (b) a CT series with electron density information for dose calculation. However, the use of both imaging modalities results in a more complex and time-consuming RT workflow (e.g., image registrations, multiple acquisitions). In this work, we proposed a deep learning (DL) model to generate a synthetic CT (sCT) from in-phase (IP) and out-of-phase (OOP) MRI sequences, to streamline the MRI-based planning workflow.

Materials and Methods
CycleGAN, a DL model consisting of two generative adversarial networks (GANs) trained to generate synthetic images across different modalities, was used for the generation of sCTs from IP/OOP MRI pairs. Two different models were trained. The first was trained on the Chaos grand-challenge dataset, which includes 1300 T1-weighted IP/OOP MRI slices, 1087 T2-weighted MRI slices, and 6407 CT slices acquired from the abdominal-thoracic region of 40 patients. The second was trained using an internal dataset as the source of CT images, comprising 5970 CT slices acquired in the abdominal-thoracic region of 100 patients. The two models were used to generate sCTs consistent with the real CTs available in the two datasets. To assess the models' performance, sCTs generated from the MRI images of 20 test patients not included in the training data were used.
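The key idea behind CycleGAN, as used here, is that two generators learn opposite mappings (MRI to sCT and sCT to MRI) under a cycle-consistency constraint: a slice translated to the other domain and back should match the original. The sketch below illustrates that constraint with hypothetical stand-in linear "generators" on toy arrays; it is not the authors' code, and real generators are convolutional networks trained jointly with discriminators.

```python
import numpy as np

# Illustrative sketch of CycleGAN's cycle-consistency idea (assumption: toy
# linear maps stand in for the learned MRI->sCT and sCT->MRI generators).

rng = np.random.default_rng(0)

def G(mri):
    # Hypothetical MRI -> synthetic-CT generator.
    return 2.0 * mri + 10.0

def F(ct):
    # Hypothetical CT -> synthetic-MRI generator (approximate inverse of G).
    return (ct - 10.0) / 2.0

def cycle_consistency_loss(mri_batch):
    """L1 cycle loss: an MRI slice mapped to sCT and back should match itself."""
    reconstructed = F(G(mri_batch))
    return float(np.mean(np.abs(reconstructed - mri_batch)))

mri = rng.normal(size=(4, 8, 8))  # toy batch of 4 "slices"
loss = cycle_consistency_loss(mri)
```

In training, this cycle loss is minimized together with the adversarial losses of the two discriminators, which is what lets CycleGAN learn from unpaired MRI and CT slices.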
The following metrics were used to compare the generated images to the real ones: Fréchet inception distance (FID), Kullback-Leibler divergence (KL), and histogram correlation (HC). Finally, 12 RT experts (i.e., radiation oncologists and medical physicists) from a single center blindly evaluated real and synthetic CT images.

Results
Both models produced rather accurate and realistic images (see examples in Figure 1). Figure 2 shows the FID, KL, and HC metrics computed on the sCTs, grouped into 10 cranial-caudal axial views along the abdominal-thoracic region (FID and HC are normalized with respect to the values of the metrics on real images). The sCT image quality depended on position: slices generated from the central part of the MRI stack were better than those generated at the cranial-caudal periphery. However, the average FID, KL, and HC metrics (1.03, 1.93, and 0.97, respectively, on the AuToMI dataset; 1.33, 2.09, and 0.95 on the Chaos dataset) suggest that the image quality is good and reasonably close to that of real images (especially for the AuToMI dataset). Finally, the RT experts were not able to distinguish between real and synthetic CT images: the statistical analysis performed on their evaluations showed no significant difference between real and synthetic CT images for either model (p-values 0.933 and 0.930).
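Two of the reported metrics, KL divergence and histogram correlation, compare the intensity distributions of a real and a synthetic slice. The sketch below shows one plausible way to compute them with NumPy on toy data; the bin count, intensity range, and epsilon are illustrative assumptions, not the authors' settings (FID is omitted, as it requires a pretrained Inception network).

```python
import numpy as np

# Hedged sketch: KL divergence and histogram correlation between intensity
# histograms of a "real" and a "synthetic" slice (toy data, assumed settings).

def intensity_histogram(img, bins=64, value_range=(-1000.0, 1000.0)):
    """Normalized intensity histogram (a probability distribution over bins)."""
    hist, _ = np.histogram(img, bins=bins, range=value_range)
    return hist / hist.sum()

def kl_divergence(p, q, eps=1e-10):
    """KL(p || q), with a small epsilon added to both to avoid log(0)."""
    p = p + eps
    q = q + eps
    return float(np.sum(p * np.log(p / q)))

def histogram_correlation(p, q):
    """Pearson correlation between the two histograms (1.0 = identical shape)."""
    return float(np.corrcoef(p, q)[0, 1])

rng = np.random.default_rng(1)
real = rng.normal(0.0, 300.0, size=(128, 128))            # toy "real CT" slice
synthetic = real + rng.normal(0.0, 5.0, size=real.shape)  # close synthetic copy

p = intensity_histogram(real)
q = intensity_histogram(synthetic)
```

For a synthetic slice close to the real one, KL is near zero and HC is near 1.0, matching the direction of the averages reported above (KL lower is better, HC higher is better).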
