ESTRO 2025 - Abstract Book
Invited Speaker
efficacy but also tailored to individual patients. By clearly elucidating the factors influencing dose distributions and tumor targeting, XAI aids in refining treatment strategies and improving patient outcomes.
Advancing explainable AI in medical imaging and radiation oncology is essential for enhancing trust and adoption of data-driven decision making processes in medicine. The ongoing evolution of XAI methods promises to further align AI system outputs with clinical rationales, ultimately promoting safe and effective integration of machine learning technologies into healthcare settings. As the field progresses, continued research and refinement of XAI techniques will be critical in ensuring these tools meet the dynamic needs of modern medicine, balancing technological innovation with compassionate, evidence-based care.
4869
Speaker Abstracts
Clinical utility of explainability in AI
Andre Dekker
Radiation Oncology (Maastro), GROW Research Institute for Oncology and Reproduction, Maastricht UMC+, Maastricht, Netherlands
Abstract:
Artificial Intelligence (AI) may support radiation oncology practice by increasing efficiency, quality, and efficacy, and ultimately improving patient outcomes. However, the integration of AI into clinical practice necessitates a clear understanding of its applicability, benefits, and risks, and explainability of AI is an important enabler for this.
This teaching lecture will explore the clinical utility of explainability in AI within the context of radiation oncology.
First, the rationale for AI in radiation oncology will be underscored. AI's ability to analyze vast amounts of data and identify patterns offers significant advantages in radiation oncology, from decision support to treatment planning and patient monitoring. Despite these benefits, the complexity of AI models often leads to a "black box" phenomenon, where the reasoning behind AI decisions remains opaque. This lack of transparency can hinder clinical adoption and trust.
Explainability is also crucial for ensuring that AI-driven decisions are interpretable and justifiable. Healthcare professionals must understand the rationale behind AI recommendations to use AI effectively and make informed decisions, particularly in high-stakes environments like radiation oncology.
The lecture will stress that achieving a balance between explainability and performance is essential. While highly complex models may offer superior accuracy, they often lack transparency; conversely, simpler models may be more interpretable but less accurate. Strategies to strike this balance effectively will be discussed.
Explainability also plays a vital role in risk management, acceptance, commissioning, and quality assurance (QA) processes. More transparent AI models can help identify potential risks, facilitate acceptance by healthcare professionals, and ensure rigorous QA standards are met.
Finally, a number of examples related to the explainability of AI will be discussed, in two distinct categories: AI for efficiency, such as image and text analysis, and AI for efficacy, such as outcome prediction and causal analysis.
This lecture aims to underscore the importance of explainability in AI for radiation oncology, providing practical examples and strategies to enhance clinical utility and build trust in AI-supported care.