ESTRO 2024 - Abstract Book


Physics - Autosegmentation


[3] Wu, J., Fu, R., Fang, H., Zhang, Y. and Xu, Y., 2023. MedSegDiff-V2: Diffusion-based medical image segmentation with transformer. arXiv preprint arXiv:2301.11798.

[4] Rahman, A., Valanarasu, J.M.J., Hacihaliloglu, I. and Patel, V.M., 2023. Ambiguous medical image segmentation using diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 11536-11546).

1390

Mini-Oral

A new method for training automatic tumor contouring models without manual annotations

Marius Schmidt-Mengin 1, Quentin Spinat 1, Alexis Benichoux 1, Gizem Temiz 2, Lorenzo Colombo 2, Madalina-Liana Costea 2, Olivier Teboul 1, Nikos Paragios 3

1 TheraPanacea, AI engineering, Paris, France. 2 TheraPanacea, Clinical Affairs, Paris, France. 3 TheraPanacea, CEO, Paris, France

Purpose/Objective:

Automatic delineation of target volumes for radiotherapy treatment is crucial for personalized medicine. However, the development of automated delineation methods relies on manual contours from experts. This process is time-consuming and hindered by inter-expert variability. Weakly supervised approaches aim to overcome these difficulties by eliminating the need for ground-truth contours. These approaches are based on 3D volumes that are binary labeled as “positive” or “negative”, where “positive” means that the patient image contains at least one region of interest (e.g. tumor, lesion).
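To make the difference in annotation burden concrete, a hypothetical sketch (the file names and record keys below are ours, not from the abstract) contrasting a fully supervised training record with a weakly labeled one:

```python
# Fully supervised: every training case needs an expert-drawn voxel mask,
# which is slow to produce and subject to inter-expert variability.
full_annotation = {"image": "patient_001.nii", "mask": "patient_001_gtv.nii"}

# Weakly supervised: a single binary label per volume is enough,
# e.g. "this scan contains at least one tumor".
weak_annotation = {"image": "patient_001.nii", "positive": True}
```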

The aim of this study is to evaluate the performance of our weakly supervised approach compared to an existing benchmark method.

Material/Methods:

Our approach consists of two stages. First, a neural network (ResNet [1]) is trained to identify whether slices of 3D volumes, taken in any 3D orientation, contain the regions of interest (e.g. tumors, lesions). Second, the inverse Radon transform is used to reconstruct a delineation of the regions of interest from these slice-level predictions. The method allows us to capture information along the z-axis with axial slices and in the xy-plane with sagittal and coronal slices. The method was tested on three publicly available datasets: (i) the AutoPET II dataset [3], which consists of 963 FDG-PET/CT pairs from cancer patients, about half of which were labeled as “negative” by radiologists; (ii) the Duke breast cancer MRI dataset [2], which consists of 922 biopsy-confirmed breast cancer patients (for this dataset, we separated the left breast from the right breast and treated breasts that do not contain any tumors as “negative”); and (iii) the MosMedData COVID-19 dataset [4], consisting of 634 “positive” thoracic CTs with COVID-19 lesions and 204 “negative” CTs. The latter dataset aimed to demonstrate the generalizability of the method to lesions other than cancer.
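The two-stage idea can be sketched with a toy example. This is a deliberately simplified illustration, not the authors' implementation: we simulate the slice classifier with a ground-truth lookup, restrict to the three axis-aligned orientations (axial, coronal, sagittal), and replace the inverse Radon transform over arbitrary orientations with the intersection of the three per-axis score profiles, which is the degenerate backprojection for this special case:

```python
import numpy as np

# Toy 3D volume containing one synthetic "lesion" block.
vol = np.zeros((32, 32, 32), dtype=bool)
vol[10:15, 18:22, 5:9] = True

# Stage 1 (simulated): per-slice "positive" predictions along each orientation.
# In the actual method, a ResNet predicts these from the image slices.
axial    = vol.any(axis=(1, 2))  # one label per z-slice
coronal  = vol.any(axis=(0, 2))  # one label per y-slice
sagittal = vol.any(axis=(0, 1))  # one label per x-slice

# Stage 2 (simplified): backproject the slice scores into the volume and
# intersect them. With only three axis-aligned orientations this yields the
# axis-aligned bounding region of the lesion; using many oblique orientations
# and the inverse Radon transform tightens this into an actual delineation.
recon = axial[:, None, None] & coronal[None, :, None] & sagittal[None, None, :]

print(recon.sum())  # -> 80 voxels: the 5x4x4 bounding box of the lesion
```

The key design point is that no voxel-level mask is ever used for training: each slice inherits its binary label from the volume-level annotation, and spatial localization emerges only at reconstruction time.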
