ESTRO 2023 - Abstract Book
Digital Posters
Conclusion In this study, the accuracy of ART using PreciseART® was demonstrated by comparing kVCT and MVCT with a standard phantom provided as part of the quality assurance (QA) package. Moreover, the improved low-contrast resolution of kVCT contributed to improved ART accuracy. However, the image uniformity of kVCT was inferior to that of MVCT. Therefore, further improvement of kVCT image quality and the evolution of PreciseART® should enable the development of online ART and more accurate ART.
PO-1934 Deep learning for accuracy prediction of deformably registered contours in prostate radiotherapy
P.L. Yeap 1 , Y.M. Wong 2 , A.L.K. Ong 1 , J.K.L. Tuan 1 , E.P.P. Pang 1 , S.Y. Park 1 , J.C.L. Lee 1 , H.Q. Tan 1
1 National Cancer Centre Singapore, Division of Radiation Oncology, Singapore, Singapore; 2 Nanyang Technological University, School of Physical and Mathematical Sciences, Singapore, Singapore
Purpose or Objective Automatic deformable image registration (DIR) is a critical step in adaptive radiotherapy. Manually delineated OAR contours on planning CT (pCT) scans are deformably registered onto daily CBCT scans for accumulation of the delivered dose. However, evaluation of the registered contours requires human experts, which is time-consuming and subject to high inter-observer variability. This work proposes a deep learning model that accurately predicts the Dice Similarity Coefficient (DSC) of registered contours.
Materials and Methods Our dataset comprises 20 prostate patients with 37-39 daily CBCT scans each (n=760). OARs were manually delineated by a radiation oncologist on every CBCT scan. The corresponding pCT scans were deformably registered to each CBCT scan using RayStation v10A (RaySearch Laboratories, Sweden) to generate virtual CT (vCT) scans. The DSC between the registered and manual contours was computed. DIR parameters such as the similarity metric and final resolution were varied to determine the settings that gave the widest range of DSC. Data were augmented by mirroring each scan along the vertical axis, giving 1520 vCT-CBCT pairs in total. A Siamese neural network was trained on the pre-processed vCT-CBCT pairs using 10-fold cross-validation. Given the small dataset, transfer learning with the pre-trained ResNet-50 model was used. Figure 1 shows the network architecture.
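Illustrative sketch only (not the authors' implementation): a minimal PyTorch version of a Siamese DSC-regression network built on a shared pre-trained ResNet-50 backbone, of the kind described above. The input channel handling, feature pooling, and regression head are assumptions, and the class and variable names are hypothetical.

# Minimal sketch, assuming 3-channel 224x224 inputs as expected by ResNet-50;
# not the authors' code.
import torch
import torch.nn as nn
from torchvision import models

class SiameseDSCRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
        backbone.fc = nn.Identity()          # keep the 2048-d feature vector
        self.backbone = backbone             # weights shared by both branches ("Siamese")
        self.head = nn.Sequential(           # regression head -> predicted DSC in [0, 1]
            nn.Linear(2 * 2048, 256),
            nn.ReLU(),
            nn.Linear(256, 1),
            nn.Sigmoid(),
        )

    def forward(self, vct, cbct):
        f_vct = self.backbone(vct)           # features of the deformed vCT
        f_cbct = self.backbone(cbct)         # features of the daily CBCT
        return self.head(torch.cat([f_vct, f_cbct], dim=1))

# Usage example with random tensors standing in for a batch of vCT-CBCT pairs.
model = SiameseDSCRegressor()
vct, cbct = torch.randn(2, 3, 224, 224), torch.randn(2, 3, 224, 224)
pred_dsc = model(vct, cbct)                  # shape (2, 1), values in [0, 1]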