Physics - Image acquisition and processing
Conclusion: This study shows that it is possible to develop a robust platform for image conversion, ensuring maximum compatibility between tools for radiomic feature (RF) extraction. However, a cautious approach is warranted, as differences between RFs appear to be far larger than the uncertainties introduced by image conversion. This study is supported by a Ministry of Health PNRR grant (RF-2021-12374001).

Keywords: Imaging, Radiomics

References:
1. https://doi.org/10.18637/jss.v086.i08
2. https://doi.org/10.1007/s10278-017-0037-8
3. https://doi.org/10.3389/fninf.2013.00045
4. https://doi.org/10.48550/arXiv.1612.07003
5. https://doi.org/10.1038/s41598-019-46030-0
6. https://doi.org/10.1371/journal.pone.0225550
7. https://doi.org/10.1158/0008-5472.CAN-17-0339
8. https://doi.org/10.48550/arXiv.2405.06184
9. https://doi.org/10.1037/1082-989X.1.1.30

3404

Digital Poster

Do humans annotate landmarks more precisely than machines?

Andreas Smolders 1,2, Malgorzata Synak 2, Lisa Fankhauser 1,2, Dominic Leiser 1, Sharon Poh Shuxian 1, Tony Lomax 1,2, Francesca Albertini 1

1 Center for Proton Therapy, Paul Scherrer Institute, Villigen, Switzerland. 2 Department of Physics, ETH Zurich, Zurich, Switzerland

Purpose/Objective: Despite its potential for contour propagation, dose warping, and image fusion, clinical adoption of deformable image registration (DIR) remains slow due to reliability concerns. Because DIR is an ill-posed problem, solutions tend to differ between algorithms, and this inter-algorithm variability causes clinical distrust. The gold standard for evaluating DIR quality is the target registration error, which measures the average distance between manually annotated landmarks and their registered positions. However, these annotations are themselves subject to inter-operator variability. This study assessed inter-operator variability in head-and-neck cancer patients and compared it to inter-algorithm variability.

Material/Methods: For five head-and-neck cancer patients, the planning CT and one follow-up CT were selected. On each planning CT, one observer annotated 21 reference landmarks (105 in total) using the VV software [1], which allows landmark annotation while viewing orthogonal views. Five observers, including the one who annotated the planning CTs, manually replicated these landmarks on the follow-up CTs using the same software, viewing the planning and follow-up CTs on two separate screens. The observers comprised two medical doctors and three students. Five DIR algorithms were then used to register the CT pairs. Inter-observer and inter-algorithm variability (i.e., precision) were defined as the root-mean-squared error (RMSE) between the annotated landmarks and their mean across all observers or algorithms, respectively. Calculating accuracy requires a ground truth; after thorough visual inspection, it seemed reasonable to take this to be the mean observer landmark positions. The accuracy of the DIR algorithms was therefore defined as the RMSE with respect to the observer mean.
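To make the precision and accuracy definitions concrete, here is a minimal Python/NumPy sketch. The array names, the random placeholder data, and the (5, 105, 3) shapes are illustrative assumptions matching the abstract's counts (5 raters, 105 landmarks, 3D coordinates), not the study's actual data or pipeline:

```python
import numpy as np

# Illustrative placeholder data (NOT the study's measurements):
# landmark positions in mm, shape (n_raters, n_landmarks, 3) --
# 5 observers or 5 DIR algorithms, 105 landmarks, xyz coordinates.
rng = np.random.default_rng(0)
observer_pts = rng.normal(scale=1.0, size=(5, 105, 3))
algorithm_pts = rng.normal(scale=2.0, size=(5, 105, 3))

def precision_rmse(points):
    """Inter-rater variability: RMSE of the Euclidean distances
    between each rater's landmarks and the mean across raters."""
    mean = points.mean(axis=0, keepdims=True)        # (1, n_landmarks, 3)
    dists = np.linalg.norm(points - mean, axis=-1)   # (n_raters, n_landmarks)
    return np.sqrt(np.mean(dists ** 2))

def accuracy_rmse(points, reference):
    """Accuracy: RMSE of the distances to a ground-truth reference,
    here taken as the mean observer landmark positions."""
    dists = np.linalg.norm(points - reference, axis=-1)
    return np.sqrt(np.mean(dists ** 2))

inter_observer = precision_rmse(observer_pts)    # inter-observer variability
inter_algorithm = precision_rmse(algorithm_pts)  # inter-algorithm variability
dir_accuracy = accuracy_rmse(algorithm_pts, observer_pts.mean(axis=0))
```

Note that this sketch pools the distances of all raters and landmarks into a single RMSE; a per-algorithm accuracy would apply `accuracy_rmse` to one algorithm's landmarks at a time.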
