ESTRO 2023 - Abstract Book
Digital Posters
In rectal cancer, magnetic resonance imaging (MRI) is considered the first choice for staging and response monitoring. MRI-based radiotherapy planning has also shown potential to facilitate individualized plan design for dose escalation, which may improve the pathological complete response rate. However, delineation of the gross tumor volume (GTV) is time-consuming, and inter-observer deviation is the main source of treatment variability in clinical trials that incorporate conformal radiotherapy approaches. Although atlas-based auto-segmentation algorithms can reduce delineation time, these methods perform poorly in rectal cancer. Several deep learning models have also been proposed based on T2w images or DWI alone, but the segmentation results are barely acceptable, with dice similarity coefficients (DSC) between 0.55 and 0.64. The purpose of this study is to build a deep learning algorithm that takes combined multi-sequence MRI as input to improve the performance of automatic rectal tumor localization and segmentation.
Materials and Methods
A total of 220 LARC patients were included. All had pre-treatment multi-parametric MRI comprising T1-weighted (T1w), T2-weighted (T2w), four-phase dynamic contrast-enhanced (DCE), and diffusion-weighted imaging (DWI) with two b values. A network of two cascaded U-nets was proposed. Image patches at the same position from all frames were used as input to the first stage. The results were fed into the second stage to estimate the probability of the middle patch belonging to the lesion. The DSC was used to compare the results of the proposed algorithm with the ground truth, which was outlined by an experienced rectal radiation oncologist.
Results
The architecture of the network of two cascaded U-nets is shown in Fig 1. Each U-net contains 12 convolutional layers in the descending part and 12 convolutional layers in the ascending part.
The first U-net worked as a localizer which roughly indicated the rectal positions. The second U-net was used to convert the image into a probability map of the same size. The training group was used to optimize hyperparameters such as learning rate, decay rate, and number of epochs by maximizing the objective function. Five-fold cross-validation was used to evaluate the performance of the classifier. Across all patients, the mean DSC reached 0.820, a significant improvement over previously reported work using a single modality alone.
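The two evaluation tools named above (the DSC and the five-fold split) can be sketched in a few lines of plain Python. This is a minimal illustration, not the authors' implementation, which the abstract does not provide; the function names and the simple contiguous fold assignment are assumptions for demonstration.

```python
def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks (flat 0/1 lists).

    DSC = 2 * |P ∩ T| / (|P| + |T|); returns 1.0 when both masks are empty.
    """
    intersection = sum(p & t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 1.0 if total == 0 else 2.0 * intersection / total


def five_fold_splits(n_patients, k=5):
    """Yield (train_indices, test_indices) pairs for k-fold cross-validation.

    Uses contiguous folds for simplicity; each patient appears in exactly
    one test fold, as in the evaluation described above.
    """
    indices = list(range(n_patients))
    fold_size = -(-n_patients // k)  # ceiling division
    for fold in range(k):
        test = indices[fold * fold_size:(fold + 1) * fold_size]
        test_set = set(test)
        train = [i for i in indices if i not in test_set]
        yield train, test
```

For example, `dice_coefficient([1, 1, 0, 0], [1, 0, 1, 0])` gives 0.5 (one overlapping voxel, two voxels per mask), and `five_fold_splits(220)` partitions the 220 patients into five disjoint test folds of 44.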
Fig 1. Architecture of the U-nets. The two U-nets were trained differently.
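The localize-then-refine data flow of the cascade can be illustrated with simple thresholding stand-ins replacing the two U-nets. Everything here is a toy sketch: the functions, the 2D list representation, and the thresholds are illustrative assumptions, and the real networks produce the probability maps that these placeholders merely consume.

```python
def coarse_localize(prob_map, threshold=0.5):
    """Stage 1 (stand-in for the first U-net): return a bounding box
    (row_start, row_end, col_start, col_end) around all pixels whose
    coarse probability exceeds the threshold, or None if none do."""
    rows = [r for r, row in enumerate(prob_map)
            if any(p > threshold for p in row)]
    cols = [c for row in prob_map
            for c, p in enumerate(row) if p > threshold]
    if not rows:
        return None
    return min(rows), max(rows) + 1, min(cols), max(cols) + 1


def refine_segment(prob_map, box, threshold=0.5):
    """Stage 2 (stand-in for the second U-net): threshold the fine
    probability map, but only inside the localized bounding box, so the
    final mask is restricted to the region the first stage proposed."""
    r0, r1, c0, c1 = box
    return [[1 if (r0 <= r < r1 and c0 <= c < c1 and p > threshold) else 0
             for c, p in enumerate(row)]
            for r, row in enumerate(prob_map)]
```

The design point this sketch captures is why the cascade helps: the first stage discards most of the image, so the second stage only has to discriminate tumor from nearby tissue rather than from the whole field of view.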
Conclusion
Our work showed that deep learning with combined image sequences provides a promising tool for fully automatic tumor localization and segmentation in rectal cancer. This efficient and reliable tumor segmentation method may provide a fundamental step toward quantitatively extracting imaging information and further assessing patient risk or benefit from personalized treatment.
PO-1664 Clinical Evaluation of Deep Learning-based automatic CTV Segmentation: A Multi-Tumor Experience
Z. Hou1, S. Li1, J. Liu1, S. Gao1, J. Yan1
1 Nanjing Drum Tower Hospital, The Comprehensive Cancer Centre, Nanjing, China
Purpose or Objective
The large variability in tumor appearance and shape makes manual delineation of the clinical target volume (CTV) time-consuming, and the results depend on the oncologist's experience. This study aimed to evaluate deep learning models that automatically contour CTVs of various tumors on computed tomography (CT) images for radiotherapy planning.
Materials and Methods
462 patients with CT images were selected for the present study, including breast-conserving surgery (BCS) (left-sided, n=71; right-sided, n=72), breast radical mastectomy (BRM) (left-sided, n=43; right-sided, n=37), cervical (preoperative, n=45; postoperative, n=85), and rectal (n=109) carcinomas. CTV contours manually delineated by radiation oncologists served as ground truth. Four models were evaluated: FlexNet, Unet, Vnet, and Segresnet, which are commercially available