ESTRO 2021 Abstract Book


a) Clinical example 1 (COVID initiative): telephone vs face-to-face on-treatment consultation
Completing the QA tool identified not only that telephone consultation gave comparable outcomes with significant efficiency and potential quality improvement, but also highlighted where enhancement of other areas of the patient pathway was needed. This mitigated patient care risk and allowed the benefits of this service improvement to be realised.

b) Clinical example 2: treatment verification for prostate SABR, CBCT vs fiducials
Utilising the tool in this clinical example identified key lines of enquiry from both a clinical and a technical perspective and promoted an objective, evidence-based approach to evaluating the necessity of fiducials in the context of prostate image guided radiotherapy. A pathway change was implemented.

c) Clinical example 3: Eclipse pathway vs Pinnacle pathway
The tool was implemented following an MDT incident review meeting. Figure 1 shows the visual output after completing the tool scoring, comparing the two palliative RT pathways utilising two different RT planning systems. Points 1 and 2 relate to structure, points 3, 4 and 5 to process, and point 6 to outcome. It was evident that, despite the outcomes of the two pathways being comparable, pathway 2 represented a quality improvement not only in quality (including risk) but also in the efficiency features of the pathway, and it was implemented in practice.

Conclusion
The QA tool can support collaborative service improvements and enhance the governance process within RT services. Analysing the themes of user feedback from those involved in piloting the QA tool indicated that the tool was supportive in:
• Objective, independent discussion.
• Encouraging collaboration between different professions.
• Promoting an evidence-based approach across all aspects of the pathway.
• Providing a useful output for reporting within the governance process.
Future work has been planned to validate the tool in a multicentre setting.

PD-0868 Comparison of segmentation algorithms for organs at risk delineation on head-and-neck CT images
M. Costea 1, M. Biston 1, V. Grégoire 1, D. Sarrut 2
1 Centre Léon Bérard, Radiotherapy, Lyon, France; 2 INSA-Lyon, CREATIS, CNRS UMR5220, Lyon, France

Purpose or Objective
To investigate the performance of head-and-neck organs-at-risk (OAR) contouring using several atlas-based (ABAS) vs deep learning (DL) segmentation methods.

Materials and Methods
Ten heterogeneous head-and-neck (HN) patients, with body mass index ranging from 18.9 to 30.7, were selected for the atlas collection. On each patient CT, 20 organs-at-risk (OARs) were manually delineated by expert physicians. Three different ABAS algorithms were tested on 15 HN patients using the Advanced Medical Imaging Registration Engine (ADMIRE) v3.26 (Elekta AB, Stockholm, Sweden): Simultaneous Truth and Performance Level Estimation (STAPLE), Patch Fusion (PF) and Random Forest label fusion (RF). Their performance was evaluated against manually contoured OARs in terms of the Dice similarity coefficient (DICE) and Hausdorff distances (HD and 95th-percentile HD, HD95), as outlined in the sketch following this abstract. The results were compared with two commercially available solutions: one ABAS (MIM-Maestro 7.0.2, Cleveland, USA) based on a majority-vote algorithm for label fusion, and one DL solution trained on multi-centric data (ART-plan Annotate, Therapanacea, Paris, France).

Results
All solutions had superior results compared to the MIM-Maestro ABAS software for all the OARs. Among the three Elekta ABAS fusion algorithms, RF label fusion, which contains artificial-intelligence learning features, yielded the best results for the majority of structures but did not segment the optical nerves and cochlea (Fig. 1). Compared to the DL solution, ADMIRE RF had equal, better and worse results for 3, 7 and 5 of the 15 common OARs, respectively. Better DICE was obtained for the eyes (0.91 vs 0.87), the larynx (0.79 vs 0.77), the oral cavity (0.87 vs 0.84), the mandible (0.92 vs 0.89), one submandibular gland (0.78 vs 0.77) and the external contour (0.99 vs 0.76). The HD metrics confirmed the DICE results, with smaller distances for the mentioned OARs, particularly for the oral cavity (HD: 10.71 vs 13.93; HD95: 6.53 vs 9.70) and the mandible (HD: 8.10 vs 10.71; HD95: 2.22 vs 3.56). The DL model outperformed ADMIRE RF for one parotid gland (DICE: 0.82 vs 0.80; HD: 13.45 vs 13.33; HD95: 7.49 vs 6.98), one submandibular gland (DICE: 0.81 vs 0.75; HD: 8.47 vs 8.14; HD95: 4.76 vs 4.87), and for the esophagus (DICE: 0.73 vs 0.61; HD: 30.44 vs 34.06; HD95:
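The DICE and Hausdorff metrics reported above can be computed directly from binary segmentation masks. The following is a minimal sketch, not the authors' implementation, assuming NumPy/SciPy, 3D boolean masks on the same CT grid, and the common convention that HD95 is the larger of the two directed 95th-percentile surface distances (definitions vary between tools); the function names and the `ct_spacing` variable in the usage line are illustrative only.

```python
import numpy as np
from scipy import ndimage


def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    return 2.0 * intersection / (a.sum() + b.sum())


def surface_distances(a: np.ndarray, b: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> np.ndarray:
    """Distances from each surface voxel of mask `a` to the surface of mask `b` (in mm if
    `spacing` is the voxel size in mm). Assumes both masks are non-empty."""
    a, b = a.astype(bool), b.astype(bool)
    # Surface voxels = mask minus its binary erosion.
    surf_a = a ^ ndimage.binary_erosion(a)
    surf_b = b ^ ndimage.binary_erosion(b)
    # distance_transform_edt gives the distance to the nearest zero voxel,
    # so invert surf_b to get a distance map to the surface of b.
    dist_to_b = ndimage.distance_transform_edt(~surf_b, sampling=spacing)
    return dist_to_b[surf_a]


def hausdorff(a: np.ndarray, b: np.ndarray, spacing=(1.0, 1.0, 1.0), percentile=100) -> float:
    """Symmetric (percentile) Hausdorff distance; percentile=100 gives HD, 95 gives HD95."""
    d_ab = surface_distances(a, b, spacing)
    d_ba = surface_distances(b, a, spacing)
    return max(np.percentile(d_ab, percentile), np.percentile(d_ba, percentile))


# Illustrative usage for one OAR (auto vs manual contour), with ct_spacing = voxel size in mm:
# dsc  = dice(auto_mask, manual_mask)
# hd   = hausdorff(auto_mask, manual_mask, spacing=ct_spacing, percentile=100)
# hd95 = hausdorff(auto_mask, manual_mask, spacing=ct_spacing, percentile=95)
```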
