ESTRO 2025 - Abstract Book
Interdisciplinary – Education in radiation oncology
Material/Methods: Two sets of 20 sample questions were compiled to assess clinically relevant brachytherapy knowledge pertinent to board certification at the skill level of a physician and a physics resident. Each question was entered into three LLMs: ChatGPT, Gemini and CoPilot. Answers were scored as either accurate (1) or partially accurate (0.5). To compare performance, the mean accuracy of each LLM was calculated per category of questions (physician vs. physics). The answers were also qualitatively compared with regard to depth, breadth and clarity.

Results: The mean accuracy scores for ChatGPT, Gemini and CoPilot were 80%, 68% and 80%, respectively, for the physician-level questions, and 73%, 75% and 60% for the physics-level questions. Across both categories, ChatGPT gave the most thorough and in-depth answers and explanations, while CoPilot gave the most concise answers, rarely exceeding a couple of sentences. Gemini's responses consistently provided the most links and references, as well as cautionary statements to consult medical professionals for more information. ChatGPT also gave warnings to verify the information, whereas CoPilot did not.

Conclusion: These preliminary results demonstrate that there is still much room for improvement in the accuracy of LLMs in the clinical brachytherapy knowledge space. Although ChatGPT and CoPilot performed similarly on the physician-level questions, ChatGPT's responses were far superior to CoPilot's in detail and breadth of understanding. On the physics-level questions, CoPilot performed poorly, while ChatGPT and Gemini performed slightly better and more similarly to each other; here too, ChatGPT's responses were qualitatively superior to Gemini's.
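As a minimal illustration of the scoring scheme described in Material/Methods, mean accuracy per question category can be computed as follows (the score values shown are hypothetical, not the study's actual data):

```python
def mean_accuracy(scores):
    """Mean of per-question scores, expressed as a percentage.

    Each answer is scored 1.0 (accurate) or 0.5 (partially accurate);
    0.0 here stands for an answer counted as inaccurate.
    """
    return 100 * sum(scores) / len(scores)

# Hypothetical example: 20 physician-level answers from one LLM,
# 15 accurate, 2 partially accurate, 3 inaccurate.
physician_scores = [1.0] * 15 + [0.5] * 2 + [0.0] * 3
print(mean_accuracy(physician_scores))  # 80.0
```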
This study demonstrates that although these LLMs hold significant potential as efficient and easily accessible educational tools for residents, they must be used with caution, as they often gave wrong answers accompanied by confident and convincing explanations. If used properly, however, they have the potential to be powerful tools for educators and students alike.

Keywords: artificial intelligence, large language model

Digital Poster

Radiotherapy in Italian media: (mis)information, patients' perception and medical career choices

Federico Gagliardi 1, Emma D'Ippolito 1, Roberta Grassi 1, Angelo Sangiovanni 1, Vittorio Salvatore Menditti 1, Dino Rubini 1, Luca D'Ambrosio 1, Luca Boldrini 2, Viola Salvestrini 3, Francesca De Felice 4, Giuseppe Carlo Iorio 5, Antonio Piras 6, Luca Nicosia 7, Valerio Nardone 1

1 Medicina di precisione, Università della Campania "L. Vanvitelli", Napoli, Italy. 2 Fondazione Policlinico, Istituto Gemelli, Roma, Italy. 3 AOU, Università degli studi di Firenze, Firenze, Italy. 4 AOU, Sapienza, Roma, Italy. 5 AOU, Città della salute e della scienza di Torino, Torino, Italy. 6 Villa Santa Teresa, Università di Palermo, Palermo, Italy. 7 IRCCS Sacro Cuore Don Calabria, Negrar, Italy

Purpose/Objective: In Italy, the reputation of radiotherapy has undergone a notable transformation over time, with increasingly negative sentiment in the media potentially influencing public opinion and patient treatment decisions. The portrayal of radiotherapy in the media is of great consequence, as it can shape perceptions of the treatment's safety and efficacy, which in turn affects access to care and the career choices of medical students. Following the methodology employed by Wawrzuta et al. to investigate American perceptions of RT through an examination of the New York Times, this study aims to analyse how Italian media, specifically "Corriere della Sera", covers radiotherapy, in order to identify any biases or misinformation that could undermine public trust and hinder the future development of the field.