Physics - Autosegmentation
Conclusion:
Our experiments demonstrate that model sharing with naive fine-tuning in the context of P2PFL risks overfitting to the site-specific feature distribution of the target-center data (Zurich and BraTS) while forgetting knowledge learned from the original dataset (Erlangen). LWF can improve P2PFL by preserving previously learned knowledge, thereby improving model generalizability.
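For illustration, below is a minimal sketch of how an LWF distillation term can regularize fine-tuning on a new center's data, assuming a PyTorch segmentation model; the names lwf_step, old_model, lambda_lwf, and T are hypothetical and not from the abstract, and cross-entropy stands in for the unspecified segmentation loss.

import torch
import torch.nn.functional as F

def lwf_step(model, old_model, batch, optimizer, lambda_lwf=1.0, T=2.0):
    # One fine-tuning step on target-center data with an LWF distillation term.
    # old_model is a frozen copy of the incoming peer model, e.g.
    # old_model = copy.deepcopy(model).eval(), taken before fine-tuning starts.
    images, labels = batch
    model.train()

    # Task loss on the new center's labels.
    logits = model(images)
    task_loss = F.cross_entropy(logits, labels)

    # Distillation term (Hinton et al. [4]): penalize divergence from the
    # frozen model's temperature-softened predictions, preserving knowledge
    # learned at the source center (Erlangen).
    with torch.no_grad():
        old_logits = old_model(images)
    distill_loss = F.kl_div(
        F.log_softmax(logits / T, dim=1),
        F.softmax(old_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)

    loss = task_loss + lambda_lwf * distill_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

Setting lambda_lwf to zero recovers naive fine-tuning; a positive weight trades adaptation to the new center against retention of the source-center behavior.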
Keywords: Brain metastases, data privacy, multicenter study
References:
[1] Huang Y, Bert C, Gomaa A, Fietkau R, Maier A, Putz F. A Survey of Incremental Transfer Learning: Combining Peer-to-Peer Federated Learning and Domain Incremental Learning for Multicenter Collaboration. arXiv preprint arXiv:2309.17192. 2023.
[2] Li Z, Hoiem D. Learning without Forgetting. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2017;40(12):2935-47.
[3] Huang Y, Bert C, Fischer S, Schmidt M, Dörfler A, Maier A, Fietkau R, Putz F. Continual Learning for Peer-to-Peer Federated Learning: A Study on Automated Brain Metastasis Identification. arXiv preprint arXiv:2204.13591. 2022.
[4] Hinton G, Vinyals O, Dean J. Distilling the Knowledge in a Neural Network. NIPS Deep Learning and Representation Learning Workshop; 2015. p. 1-9.