
Physics - Machine learning models and clinical applications

862 - Digital Poster

Explainable Artificial Intelligence in Radiotherapy: A Systematic Review

Luca M. Heising 1,2, Cecile J.A. Wolfs 1, Maria J.A. Jacobs 1,2, Frank Verhaegen 1, Carol X.J. Ou 2

1 GROW, Radiation Oncology (MAASTRO), Maastricht, Netherlands. 2 Tilburg University, Department of Management, Tilburg, Netherlands

Purpose/Objective:

In recent years, the role of Artificial Intelligence (AI) in radiotherapy (RT) has grown substantially, and AI aids have been proposed for every step of the RT workflow. To date, however, many of these applications remain in the research phase. One of the key barriers to clinical implementation is the lack of AI transparency [1]. Explainable AI (XAI) is an emerging line of research exploring methods to explain AI behavior, with the promise of enabling users to oversee and retain control over AI-driven decisions. The development of XAI requires human-centric design, as XAI serves as a tool for human-AI interaction [2]. In many cases, XAI simplifies the AI model to make it interpretable to humans. While this simplification may be harmless in everyday scenarios, it warrants careful consideration in healthcare, particularly in an RT setting where high accuracy is required. The objective of our systematic review is to formulate recommendations for investigating and developing XAI for RT, with the aim of balancing user comprehensibility and XAI accuracy.
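To make the notion of a post-hoc explanation concrete, the sketch below computes a simple gradient saliency map, one common family of XAI methods. It is a minimal illustration only: the model and the input image are hypothetical placeholders, not an implementation from any study included in this review.

    import torch
    import torch.nn as nn

    # Hypothetical stand-in for a trained RT model (e.g. a binary classifier);
    # a real model would replace this.
    model = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 2))
    model.eval()

    # Hypothetical single-channel 64x64 input (e.g. a dose or portal image).
    image = torch.rand(1, 1, 64, 64, requires_grad=True)

    logits = model(image)
    score = logits[0, logits.argmax()]  # score of the predicted class
    score.backward()                    # gradient of that score w.r.t. the input

    # Gradient saliency: per-pixel magnitude of the input gradient, read as
    # how strongly each pixel influences the model's decision.
    saliency = image.grad.abs().squeeze()

Note that such a saliency map is itself a simplification of the model's behavior, which is exactly the interpretability-accuracy trade-off discussed above.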

Material/Methods:

The literature search covered deep learning applications in RT. We excluded articles that were not research articles and articles that clearly aimed at diagnosing cancer. After removal of duplicates, our search of three databases yielded 479 articles, from which eligible papers were selected according to the PRISMA guidelines [3]. In our analysis, we used a thematic coding system based on a proposed XAI taxonomy [4] combined with RT-oriented themes. For example, XAI-related codes include XAI method, XAI validation, and target audience, whereas RT-related codes cover cancer site, RT modality, and RT workflow step. Furthermore, we used VOSviewer for a bibliometric analysis of the keywords of the included papers.
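As a rough illustration of the keyword co-occurrence counting that underlies this kind of bibliometric analysis (VOSviewer itself is a standalone tool; the keyword lists below are hypothetical, not data from the review):

    from collections import Counter
    from itertools import combinations

    # Hypothetical author-keyword lists; the actual input would be the
    # keywords of the included papers.
    papers = [
        ["deep learning", "radiotherapy", "explainable AI"],
        ["explainable AI", "saliency maps", "radiotherapy"],
        ["deep learning", "dose prediction", "radiotherapy"],
    ]

    cooccurrence = Counter()
    for keywords in papers:
        for pair in combinations(sorted(set(keywords)), 2):
            cooccurrence[pair] += 1  # count each keyword pair once per paper

    print(cooccurrence.most_common(3))  # the most strongly linked keyword pairs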

Results:

In total, 69 papers published between 2013 and 2023 met the inclusion criteria. Our bibliometric analysis reveals that XAI was rarely the sole focus of the included studies; instead, it was often an addition to the AI application. Moreover, no single XAI method clearly prevails (Figure 1). Our literature review indicates that many studies lack a clear description of their purpose for XAI. From our analyses, we distilled three primary purposes of XAI for RT: 1) scientific discovery, 2) model enhancement, and 3) clinical application. The latter can be further subdivided into verification of AI and justification of AI-driven decisions. A major challenge of XAI is the balance between human interpretability and XAI accuracy. The primary purpose in the majority of papers was AI implementation in clinical RT practice. In this context, it is crucial to involve end-users (i.e. physicists, physicians, RT technologists) in the development process, since different users have different skills, knowledge bases, and tasks associated with the AI's outputs.
