ESTRO 2022 - Abstract Book
Figure 1. Structures of different multi-task learning architectures
Results
For tumor segmentation, Trivial Unet obtained the best result (DSC=0.800), Modified Unet was close behind (DSC=0.791), and DeepLab performed the worst (DSC=0.647). For T-Stage classification, Modified Unet obtained the best result (accuracy=85.7%), Trivial Unet was second (accuracy=81.4%), and DeepLab again performed the worst (accuracy=72.0%). To determine whether the multi-task method yielded better results, we also evaluated both tasks in the single-task setting. For tumor segmentation, the single-task Unet performed best across all trials (DSC=0.807, 0.007 higher than Trivial Unet). For T-Stage classification, we trained a Resnet, which performed poorly (accuracy=58.6%).
Conclusion
In this study, we compared the multi-task performance of different CNN architectures: DeepLab, Trivial Unet and Modified Unet. Our results showed that T-Stage classification is a difficult task for a CNN. This difficulty contributed to the unexpected result that tumor segmentation did not perform better in the multi-task setting than in the single-task setting. Despite the difficulty, the multi-task method obtained considerable results. Thus, using a multi-task method to solve tasks that may be difficult for a conventional single-task CNN is worth considering.
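The Dice similarity coefficient (DSC) reported above measures the overlap between a predicted segmentation mask and the ground-truth mask. As a minimal illustrative sketch (not the authors' evaluation code), it can be computed for binary masks as follows:

```python
import numpy as np

def dice_coefficient(pred, target):
    """Dice similarity coefficient (DSC) between two binary masks.

    DSC = 2 * |pred ∩ target| / (|pred| + |target|), in [0, 1],
    where 1 means perfect overlap.
    """
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    intersection = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    if denom == 0:
        # Both masks empty: treat as perfect agreement by convention.
        return 1.0
    return 2.0 * intersection / denom
```

For example, a prediction covering two pixels of which one overlaps a one-pixel ground truth gives DSC = 2·1/(2+1) ≈ 0.667.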
PO-1613 AI-driven combined deformable registration and image synthesis between radiology and histopathology
A. Leroy 1,2 , M. Lerousseau 2 , T. Henry 2 , T. Estienne 2 , M. Classe 3,2 , N. Paragios 1 , E. Deutsch 2 , V. Grégoire 4
1 Therapanacea, Artificial Intelligence, Paris, France; 2 Gustave Roussy, Paris-Saclay University, Inserm 1030, Molecular Radiotherapy and Therapeutic Innovation, Villejuif, France; 3 Gustave Roussy, Pathology Department, Villejuif, France; 4 Centre Léon Bérard, Radiation Oncology Department, Lyon, France
Purpose or Objective
Although widely used at all steps of cancer treatment, radiologic imaging modalities (CT, MRI, …) provide insufficient assessment of tissue properties and cancer proliferation. A complete understanding of the tumor micro-environment often requires additional pathologic examination of surgically excised specimens, which in turn calls for multimodal registration between the 2D histological slide and the 3D anatomical volume. Yet this step is substantially difficult because of the extreme shrinkage and out-of-plane deformations the tissue undergoes during histological processing, as well as the differences in resolution scales and color intensities, which often impose a burdensome, time-consuming manual mapping. The aim of our work is to provide an end-to-end deep learning framework to automatically register 2D histopathology with 3D radiology in an unsupervised and deformable setting. This framework could be integrated directly into treatment planning for better delineation and comprehension of tumor heterogeneity towards dose painting.
Materials and Methods