ESTRO 2024 - Abstract Book


Physics - Machine learning models and clinical applications

Figure 1: Co-occurrence of XAI method (legend) with AI problem type and data type (x-axis). In the systematic review, some XAI methods stand out as less commonly used. These include the 'proposed method,' where the authors introduce a novel XAI approach to elicit AI behavior; 'manipulation,' where the authors experimentally alter the architecture or input data to investigate their impact on AI outcomes; and 'correlation,' where the authors explore which features correlate with each other and to what extent.

Conclusion:

AI researchers in the RT field acknowledge the urgency of explaining AI behavior to its users. In response, existing XAI techniques are being applied to AI applications in RT. While XAI is currently rarely the primary focus, future XAI development demands a balance between human interpretability and AI accuracy. We conclude our work with recommendations for researchers endeavoring to explain AI for RT. We argue that the need for accurate XAI for scientific discovery and model improvement can be met through rigorous evaluation and validation studies. Finally, for implementation in RT, we provide recommendations for balancing user comprehensibility and XAI accuracy and stress the importance of both objective and subjective XAI validation. While the latter can be tested in user studies, the former requires robustness checks, simulatability measures, or, when a ground truth is available, conventional metrics such as accuracy.
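To illustrate one of the objective validation approaches named above, the sketch below shows a minimal robustness check for a feature-attribution explanation: the saliency map of an input is compared (via cosine similarity) with the saliency maps of slightly perturbed copies of that input, with similarity near 1 indicating a stable explanation. The linear model, gradient-times-input attribution, and all parameter values are illustrative assumptions, not the method of any reviewed study.

```python
import numpy as np

def gradient_saliency(w, x):
    """Attribution for a linear model f(x) = w . x: the gradient w.r.t. x
    is w, so the gradient-times-input attribution of feature i is |w_i * x_i|."""
    return np.abs(w * x)

def robustness_score(w, x, noise_scale=0.01, n_trials=100, seed=0):
    """Mean cosine similarity between the saliency map of x and the
    saliency maps of randomly perturbed copies of x. Values near 1
    indicate that the explanation is robust to small input changes."""
    rng = np.random.default_rng(seed)
    base = gradient_saliency(w, x)
    sims = []
    for _ in range(n_trials):
        x_pert = x + rng.normal(0.0, noise_scale, size=x.shape)
        pert = gradient_saliency(w, x_pert)
        sims.append(np.dot(base, pert) /
                    (np.linalg.norm(base) * np.linalg.norm(pert)))
    return float(np.mean(sims))

# Toy example: hypothetical weights and one input sample.
w = np.array([0.5, -1.2, 2.0, 0.1])
x = np.array([1.0, 0.3, -0.7, 2.1])
print(robustness_score(w, x))
```

The same scheme applies to any attribution method (e.g. swapping `gradient_saliency` for a model-agnostic explainer); the robustness score then quantifies explanation stability without requiring a ground-truth explanation.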

Keywords: Explainable Artificial Intelligence
