ESTRO 37 Abstract Book

SP-0558 Automated planning and prediction models for bias-free treatment technique selection
A.W. Sharfo 1, M. Dirkx 1, R. Bijman 1, L. Rossi 1, T. Arts 2, S. Breedveld 1, M. Hoogeman 1, B. Heijmen 1
1 Erasmus MC Cancer Institute, Radiation Oncology, Rotterdam, The Netherlands
2 University Medical Center Utrecht, Radiology, Utrecht, The Netherlands
Abstract text
Generally, treatment planning is performed iteratively in a trial-and-error procedure; as a result, treatment plan quality may depend strongly on the available planning time and on the experience of the treatment planner. This may result in a sub-optimal treatment, in which the patient has an increased probability of developing radiation-associated complications. It is well known that the risk of radiation-induced toxicity increases when higher doses and larger volumes of sensitive structures are involved. Over the years, toxicity prediction models have been published for a number of organs-at-risk, quantifying risk with clinically relevant metrics. In modern radiotherapy, different treatment options (modalities/techniques/fractionations) may be available for each patient, each with its own pros and cons. Treatment planning may be used to assist in making choices between treatment options. However, with current trial-and-error planning, the plans for the competing options may be suboptimal to an unknown extent, which may jeopardize adequate selection. Automated planning may be used to substantially enhance the accuracy and validity of treatment option selection. This is especially true if, for all treatment options, the plan is automatically generated with multi-criterial optimization using exactly the same optimizer, optimization scheme, planning constraints and prioritized objectives. This approach may be used for ‘bias-free’ selection of the (on average) most favorable treatment option for a patient population with a certain type of cancer, and it can also be used to select the best option for each individual patient. In this presentation, a system for ‘bias-free’ selection of treatment options using automated planning will be discussed (a minimal code sketch of this selection loop follows SP-0559 below). Examples will be presented for VMAT, proton therapy, CyberKnife, and hypofractionation. Moreover, we will discuss the impact of the accuracy of the applied prediction models on the selection.

SP-0559 For the motion: Until we finally perfect x-ray vision, we need patient specific QA
L. McDermott 1
1 Noordwest Alkmaar, Radiotherapy, Alkmaar, The Netherlands
Abstract text
Radiotherapy uses radiation that you can’t see to treat tumours that you can’t see. In an ideal world, in a variation on the Superman idea, a perfect x-ray vision tool would be available that could scan a person, localise any tumour cells, assess their sensitivity and simultaneously treat the disease with a precise, ablative dose of high-energy charged or uncharged particles. In this scenario, no treatment planning or verification would be required, since for every individual the machine could carry out the required treatment directly. But this is not the world we live in (yet). The question is: is there still a place for patient specific QA today? Let’s focus on the word “still”. There are many impressive radiotherapy products available today that automate currently error-prone, human-driven processes.
Debate: Is there still a place for patient specific QA?
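The ‘bias-free’ selection loop described in SP-0558 can be summarised in a short sketch. This is a hypothetical illustration, not the Erasmus MC system: auto_plan stands in for an automated multi-criterial optimizer applied with identical settings to every treatment option, the plan.oar_dose_models interface is assumed, and the Lyman-Kutcher-Burman (LKB) parameters are placeholder values rather than clinically validated ones.

```python
from math import erf, sqrt

def ntcp_lkb(geud_gy, td50=46.0, m=0.17):
    """Lyman NTCP model: toxicity probability for a given gEUD (Gy).
    td50 and m are illustrative placeholders, not validated parameters."""
    t = (geud_gy - td50) / (m * td50)
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))

def geud(dose_voxels, a=1.0):
    """Generalized equivalent uniform dose of one organ-at-risk."""
    return (sum(d ** a for d in dose_voxels) / len(dose_voxels)) ** (1.0 / a)

def select_option(patient, options, auto_plan):
    """Auto-plan every option with exactly the same optimizer, optimization
    scheme, constraints and prioritized objectives (supplied via auto_plan),
    then pick the option with the lowest summed predicted toxicity."""
    risks = {}
    for option in options:                  # e.g. "VMAT", "protons", "CyberKnife"
        plan = auto_plan(patient, option)   # identical wish-list for all options
        risks[option] = sum(                # assumed interface: per-OAR dose data
            ntcp_lkb(geud(voxels, a), td50, m)
            for voxels, a, td50, m in plan.oar_dose_models
        )
    return min(risks, key=risks.get), risks
```

Because every option passes through the same optimizer and prioritized objective list, differences in the summed NTCP values reflect the options themselves rather than planner time or experience, which is what makes the comparison ‘bias-free’.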

commentary BMJ 1995;311:1539 for this talk and the session in general. Most points made are still very relevant, more than 20 years after this insightful commentary.

SP-0557 Acceptance and commissioning of outcome prediction models
L.C. Holloway 1
1 Ingham Institute and Liverpool and Macarthur Cancer Therapy Centres, Radiation Oncology, Sydney, Australia
Abstract text
With increasing availability of and access to radiotherapy datasets, as well as increasing computing power, novel learning methods and a growing scope of data recorded in clinics, there has been an increase in radiotherapy outcome prediction models. These models are being developed for both cancer outcomes and toxicity outcomes and cover many disease sites. Their availability has the potential to provide additional information to clinicians and patients, particularly where the current standard of randomised clinical trial evidence may not be available for a particular patient with their individual disease characteristics and co-morbidities. With the publication of these outcome models, and potentially their availability in software tools, comes the question of how we determine whether a particular model is acceptable and valid for use within our clinics. The first question to be asked concerns the validity of the proposed model beyond the data that may have been used for training it. The ‘Transparent reporting of a multivariable prediction model for individual prognosis or diagnosis (TRIPOD)’ statement (Collins et al., BMC Medicine 13:1, 2015) presents clear guidelines in this regard, splitting models into types 1 and 2, where only a single dataset has been used; type 3, where the model has been developed on one dataset and validated on an independent dataset; and type 4, where the model has been independently validated on a separate dataset without reference to the model-development dataset. At a minimum, type 3 models are necessary, but ideally this would be type 4. Any particular software implementation of the model should also be carefully tested to ensure that it performs as expected on a known dataset; where possible, openly available data previously used with the model can be used to achieve this. The second question to be asked concerns the appropriateness of the given model for the data within a given clinic. In the same fashion that we commission physics dosimetry tools (e.g. dose calculation models used within a treatment planning system), we should also ensure that any outcome prediction model is commissioned for the datasets available in our individual clinics. This requires careful scrutiny of the data that will be used in the model, to ensure that it is measured in the same manner as the data on which the model was developed, with any differences accounted for. This includes factors such as imaging parameters. A retrospective cohort of patient data, importantly in the same format as the data that will be used for future patients, should be used to test the model. Ideally this would be undertaken with statistical rigour, such that differences between the model outcomes and local practice would be detected. If this is not possible, then active review of the model should be undertaken for the initial cohort of patients. As a community, it is then important that the results from this acceptance and commissioning work are shared, to enable models to be further developed and updated where appropriate, and to ensure that the processes and tools for undertaking this work can also be further developed.
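As a concrete illustration of the commissioning step described in SP-0557, the sketch below scores a hypothetical published logistic outcome model on a small local retrospective cohort, stored in the same format future patients would use, and reports its discrimination as a Mann-Whitney AUC. Everything here is assumed for illustration: the coefficients, the predictor names (mean_lung_dose, age) and the cohort values are invented, and a real commissioning exercise would also check calibration and use a far larger cohort.

```python
from math import exp

# Hypothetical published model: logistic regression on mean lung dose (Gy)
# and age. Coefficients are invented placeholders, not a validated model.
COEFFS = {"intercept": -4.0, "mean_lung_dose": 0.15, "age": 0.02}

def predict(patient):
    """Predicted toxicity probability from the published coefficients.
    Each input must be measured as in the model-development dataset."""
    z = COEFFS["intercept"] + sum(COEFFS[k] * patient[k] for k in patient)
    return 1.0 / (1.0 + exp(-z))

def auc(labels_and_probs):
    """Discrimination on the local cohort: probability that a random event
    case scores higher than a random non-event case (Mann-Whitney AUC)."""
    pos = [p for y, p in labels_and_probs if y == 1]
    neg = [p for y, p in labels_and_probs if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Retrospective local cohort (invented values), in the same format that
# will be used for future patients.
cohort = [
    ({"mean_lung_dose": 18.0, "age": 64}, 1),
    ({"mean_lung_dose": 9.5, "age": 58}, 0),
    ({"mean_lung_dose": 22.3, "age": 71}, 1),
    ({"mean_lung_dose": 12.1, "age": 66}, 0),
]
scored = [(outcome, predict(features)) for features, outcome in cohort]
print("local AUC:", round(auc(scored), 3))
```

The locally observed discrimination (and, in practice, calibration) would then be compared against the performance reported at model development before the model is accepted for clinical use, in the same spirit as commissioning a dose calculation algorithm.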
