Digital Posters
Conclusion
AI-based segmentation is significantly faster than manual delineation. Geometric accuracy is high for large structures, which are also the most laborious to delineate manually, and lower for small structures, especially the optic apparatus. Despite these geometric differences, DVH parameters showed no difference for most structures, which is also supported by the high γ index.
PO-1649 Style-based generative model to reconstruct head and neck 3D CTs
A. Cafaro 1,2,3, T. Henry 3, Q. Spinat 1, J. Colnot 3, A. Leroy 1,2,3, P. Maury 3, A. Munoz 3, G. Beldjoudi 3, L. Hardy 1, C. Robert 3, V. Lepetit 4, N. Paragios 1, V. Grégoire 3, E. Deutsch 3,2
1 TheraPanacea, R&D Artificial Intelligence, Paris, France; 2 Paris-Saclay University, Gustave Roussy, Inserm 1030, Molecular Radiotherapy and Therapeutic Innovation, Villejuif, France; 3 Joint Collaboration Gustave Roussy - Centre Léon Bérard, Radiation Oncology, Villejuif-Lyon, France; 4 Ecole des Ponts ParisTech, Research Artificial Intelligence, Marne-la-Vallée, France
Purpose or Objective
Generative Adversarial Networks (GANs), a deep learning method, have many potential applications in the medical field. Their capacity for realistic image synthesis, combined with the control they enable, allows for data augmentation, image enhancement, image reconstruction, domain adaptation, and even disease progression modeling. Compared to classical GANs, a StyleGAN modulates the output at different levels of resolution with a low-dimensional signature, allowing control over coarse-to-fine anatomical structures when applied to medical data. In this study, we evaluated the potential of a 3D style-based GAN (StyleGAN) for generating and retrieving realistic head and neck 3D CTs, focused on a central zone prone to tumor presence.
Materials and Methods
We trained our StyleGAN on a dataset of 3,500 CTs of head and neck cancer patients, drawn from 6 publicly available cohorts from The Cancer Imaging Archive (TCIA) and from private internal data. The dataset was split into 3,000 cases for training and 500 for validation. CTs were focused on the central head and neck region around the mouth, cropped to a small zone of 80×96×112 voxels (1.3 mm × 2.4 mm × 1.9 mm) due to high memory requirements. After training, the model can generate synthetic but realistic 3D CTs from random "signatures". To evaluate the generative power of this model, we show that we can find signatures that generate synthetic 3D CTs very close to real ones. Our process is presented in Figure 1. We used 60 external patients undergoing VMAT treatment to benchmark the model with the mean reconstruction error, i.e., the normalized mean absolute error between the reconstructions and the real CTs.
Results
Training our StyleGAN took 2 weeks. After training, we fed randomly chosen signatures to the model to generate diverse and realistic 3D CTs, as shown in Figure 2. We also show that, given an unseen CT, we can generate the closest artificial version of it. This "reconstruction" is very close to the real case, with almost the same level of detail. Our model has a high capacity for retrieving diverse anatomies with fine details: on average, we achieve a reconstruction error of 1.7% (std 0.5%) on the 60 test patients. The signature that generated the closest 3D output can afterwards be modulated to change the coarse-to-fine structures of the corresponding CT.
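The abstract reports reconstruction quality as a normalized mean absolute error but does not state the normalization; one common convention, assumed here, divides the voxel-wise MAE by the intensity range of the real CT:

\[
\mathrm{NMAE}(\hat{I}, I) = \frac{\frac{1}{N}\sum_{v=1}^{N}\bigl|\hat{I}_v - I_v\bigr|}{\max_v I_v - \min_v I_v},
\]

where \(I\) is the real CT, \(\hat{I}\) its reconstruction, and \(N\) the number of voxels.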
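The abstract likewise leaves the signature search unspecified. A minimal sketch of one standard approach (gradient-based latent optimization, as commonly used for StyleGAN inversion), assuming a PyTorch generator with a hypothetical 512-dimensional signature, could look like this:

import torch

def normalized_mae(recon: torch.Tensor, real: torch.Tensor) -> torch.Tensor:
    # Voxel-wise MAE divided by the intensity range of the real CT
    # (the normalization convention is an assumption, see above).
    return (recon - real).abs().mean() / (real.max() - real.min())

def reconstruct(generator: torch.nn.Module, real_ct: torch.Tensor,
                latent_dim: int = 512, steps: int = 500, lr: float = 1e-2):
    # Search latent space for the signature whose synthetic CT best
    # matches the real one. generator(w) is assumed to return a volume
    # with the same shape as real_ct (e.g. 1 x 1 x 80 x 96 x 112);
    # the authors' actual architecture and projection method are not
    # described in the abstract.
    w = torch.randn(1, latent_dim, requires_grad=True)
    optimizer = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        loss = normalized_mae(generator(w), real_ct)
        loss.backward()
        optimizer.step()
    return w.detach(), generator(w.detach())

Sampling new anatomies then reduces to generator(torch.randn(1, latent_dim)), and modulating the recovered signature at selected resolution levels is what yields the coarse-to-fine control the abstract describes.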