ESTRO 2023 - Abstract Book
Digital Posters
on the ADMM optimization framework was used as a benchmark. PDASC couples the primal-dual active set method with a continuation strategy on the regularization parameter in the outer loop. Each inner iteration first identifies the active set from both the primal and dual variables, then updates the primal variable by solving a (typically small) least-squares problem restricted to the active set, from which the dual variable is updated explicitly. A schematic sketch of this loop is given after the Results paragraph. We compared the two algorithms on a brain case and a lung SBRT case (Table 1). All simulations of the clinical cases were run on the same server hardware. The algorithms were compared on the l2-norm measure of plan quality and on optimization time at different sparsity levels.

Results
By adjusting the parameters and the iteration-stopping criterion, PDASC and ADMM reach comparable plan quality (Figures 1A and 1C). Both algorithms show the same trend: the higher the sparsity, the lower the plan quality, with a sharp deterioration around the 90%-100% sparsity level. When PDASC and ADMM reach similar plan quality, PDASC takes less time to find high-sparsity solutions, while ADMM takes less time to find low-sparsity solutions (Figures 1B and 1D). For the brain case, PDASC requires less optimization time when sparsity exceeds about 50%; for the lung SBRT case, when sparsity exceeds about 30%. The difference in optimization time between the two algorithms increases substantially as sparsity approaches 0% or 100%.
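As a schematic complement to the PDASC description above, the sketch below illustrates one continuation loop of a primal-dual active set method for a generic l0-regularized least-squares problem. It is a minimal illustration, not the authors' treatment-planning formulation: the problem data, the geometric continuation schedule, and the fixed inner-iteration count are assumptions made for the example.

```python
# Minimal PDASC sketch for  min_x 0.5*||A x - b||^2 + lam*||x||_0  (illustrative only;
# data sizes, continuation schedule, and stopping rule are assumptions, not the
# treatment-planning problem from the abstract).
import numpy as np

def pdasc(A, b, lam_start, lam_factor=0.7, n_outer=30, n_inner=5):
    m, n = A.shape
    x = np.zeros(n)                # primal variable
    d = A.T @ b                    # dual variable (correlation with the residual)
    lam = lam_start
    for _ in range(n_outer):       # outer loop: continuation on the regularization parameter
        thresh = np.sqrt(2.0 * lam)
        for _ in range(n_inner):   # inner loop: primal-dual active set iterations
            # 1) identify the active set from both primal and dual variables
            active = np.abs(x + d) > thresh
            # 2) update the primal variable on the (typically small) active set
            x = np.zeros(n)
            if active.any():
                x[active] = np.linalg.lstsq(A[:, active], b, rcond=None)[0]
            # 3) update the dual variable explicitly from the residual
            d = A.T @ (b - A @ x)
            d[active] = 0.0
        lam *= lam_factor          # tighten sparsity gradually, warm-starting from x
    return x

# Usage on synthetic data (hypothetical sizes)
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 500))
x_true = np.zeros(500)
x_true[:10] = rng.standard_normal(10)
b = A @ x_true + 0.01 * rng.standard_normal(200)
x_hat = pdasc(A, b, lam_start=1.0)
print("nonzeros recovered:", np.count_nonzero(x_hat))
```

In practice the inner loop would stop once the active set stabilizes; a fixed inner-iteration count is used here only to keep the sketch short.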