
considerations and challenges associated with AI in education, such as ‘hallucination’, and explore strategies to mitigate these risks.

This session also serves as a call to arms, urging educators to join forces and build a community dedicated to driving educational progress. Only through collective effort, sharing of best practices, and a willingness to embrace new technologies together can we fully harness the power of AI to shape the future of radiation oncology education.

** This abstract was (of course!) written with the assistance of a large language model

4868

Speaker Abstracts

Introducing explainable methods for AI in medicine

Neo Christopher Chung
Computer Science, University of Warsaw, Warsaw, Poland

Abstract: The integration of machine learning (ML) models within medicine has transformed the landscape of diagnostics and treatment planning. However, the efficacy of these models largely hinges on their interpretability and the transparency of their decision-making processes. Explainability in ML is crucial in the medical domain, as it helps bridge the gap between complex algorithms and clinical practice, fostering trust and facilitating the integration of AI systems into healthcare. In this talk, I present the fundamental need for explainability in medical ML, outline existing approaches, and delve into methods and applications of explainable AI (XAI) within medical imaging.

1. Importance of Interpretability and Explainability: Explainability in ML is paramount in medicine due to the high stakes involved in patient care and treatment decisions. Since the early adoption of ML algorithms in medicine in the 1980s, clinicians have needed to understand the reasoning behind ML-driven insights to make informed decisions. Transparent models enable healthcare professionals to verify the accuracy and reliability of predictions, thus fostering a collaborative environment where human expertise complements algorithmic intelligence. In the high-risk field of radiation oncology, explainability ensures that treatment plans are not only data-driven but also congruent with clinical expectations and ethical standards, minimizing the risk of adverse effects in patient management.

2. Overview of Explainability Methods in ML and AI: Numerous approaches have been developed to enhance the interpretability of ML models. These can be broadly classified into model-specific and model-agnostic methods. Model-specific techniques, such as decision trees or linear models, are inherently interpretable due to their straightforward structure. In contrast, model-agnostic methods apply to any ML model and include popular techniques like LIME (Local Interpretable Model-agnostic Explanations) and SHAP (SHapley Additive exPlanations). Additionally, visualization tools and feature importance measures are employed to unveil the internal workings of complex models like neural networks, offering clinicians insights into how specific inputs influence outputs.

3. Explainable AI (XAI) for Deep Learning in Medical Imaging: Importance estimators – such as saliency maps that highlight the regions of an image that contribute most to a model's decision – are becoming critical tools for understanding prediction and classification involving medical images. These methods allow radiologists and oncologists to validate AI-based assessments, ensuring that the AI focuses on clinically relevant features. Additionally, XAI facilitates the detection of potential biases and confounding factors in datasets, promoting more equitable and generalizable model performance across diverse patient populations. In radiation oncology, XAI techniques may support the development of treatment protocols that are not only optimized for
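The sketch below illustrates one of the importance estimators mentioned above: a simple gradient-based saliency map. It is a minimal sketch assuming PyTorch; the trained image classifier and the preprocessed image tensor passed in are hypothetical placeholders, not tools described in the abstract.

import torch

def saliency_map(model, image):
    # image: preprocessed tensor of shape (channels, height, width)
    model.eval()
    image = image.clone().requires_grad_(True)    # track gradients w.r.t. input pixels
    scores = model(image.unsqueeze(0))            # (1, num_classes) class scores
    top_score = scores[0, scores.argmax()]        # score of the predicted class
    top_score.backward()                          # gradient of that score w.r.t. the image
    return image.grad.abs().amax(dim=0)           # per-pixel importance map (height, width)

Pixels with large gradient magnitude are those whose small perturbations would most change the predicted score, which is the sense in which such maps highlight the image regions driving a model's decision.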
