ESTRO 2024 - Abstract Book


Interdisciplinary - Health economics & health services research



2723

Digital Poster

Trust in Artificial Intelligence in Radiotherapy: A Survey

Luca M. Heising 1,2, Carol X.J. Ou 2, Frank Verhaegen 2, Cecile J.A. Wolfs 1, Frank Hoebers 1, Maria J.G. Jacobs 1,2

1 GROW, Radiation Oncology (MAASTRO), Maastricht, Netherlands. 2 Tilburg University, Department of Management, Tilburg, Netherlands

Purpose/Objective:

With contemporary high-performance computing infrastructures, a large body of literature has investigated artificial intelligence (AI) applications for radiotherapy (RT), with positive perspectives and promising outcomes [1]. In contrast to these promising results, the number of mature AI studies, i.e. studies assessing AI against the gold standard, and of prospective real-world AI studies remains minimal [2]. Several studies have investigated barriers to the implementation of AI, with trust emerging as a challenging obstacle [3]–[8]. Trust is therefore a fundamental factor in clinicians' intention and behavior to use AI in practice, and lack of transparency is often regarded as one of the reasons for mistrust in AI. In RT, the most prominent emerging AI tools are for automated contouring and treatment planning. Commissioning and quality assurance (QA) procedures for the safe and efficient introduction of AI-based applications in clinical practice depend heavily on the method implemented, but also on the 'trust' that is present [6]. Solberg et al. [10] proposed a conceptual framework of trust in AI, suggesting a moderating effect of perceived control over AI on trust; explainable AI (XAI) promises to allow for human oversight over AI decisions [11]. The aim of the current study was therefore to assess trust in AI within the field of RT, to scrutinize which factors affect this trust, and to establish the role of XAI in engendering that trust.

Material/Methods:

A questionnaire was developed to examine trust in AI in RT. It consisted of questions from validated surveys aimed at measuring the conceptual framework proposed by Solberg et al.; questions regarding XAI were added based on the literature. In the survey, we differentiated AI tools by profession: RT technicians (RTTs) and RT oncologists (RTOs) were asked about automated delineation, medical physicists (MPs) about automated treatment error identification, and radiologists about an AI-aided diagnosis system. After thorough evaluation through a pilot study, the questionnaire comprised 33 questions and was sent to professionals in the fields of radiotherapy and radiology in the Netherlands; RT professionals participated voluntarily by clicking a link. For a follow-up investigation of the survey findings, a panel was held with 30 ambassadors from eight radiotherapy clinics in the Netherlands. During the panel, participants were asked whether their colleagues trust or mistrust AI, to name characteristics of trustworthy AI, and to identify obstacles they foresee in implementing AI in RT.
