Resilience Under Attack: Benchmarking Optimizers Against Poisoning in Federated Learning for Image Classification Using CNN
Ref: CISTER-TR-250501 Publication Date: 16–18 June 2025
Abstract:
Federated Learning (FL) enables decentralized model training while preserving data privacy but remains susceptible to poisoning attacks. Malicious clients can manipulate local data or model updates, threatening FL's reliability, especially in privacy-sensitive domains such as healthcare and finance. While client-side optimization algorithms play a crucial role in training local models, their resilience to such attacks is underexplored. This study empirically evaluates the robustness of three widely used optimization algorithms (SGD, Adam, and RMSProp) against label-flipping attacks (LFAs) in image classification tasks using Convolutional Neural Networks (CNNs). Through 900 individual runs in both federated and centralized learning (CL) settings, we analyze their performance under Independent and Identically Distributed (IID) and Non-IID data distributions. Results reveal that SGD is the most resilient, achieving the highest accuracy in 87% of cases, while Adam performs best in 13%. Additionally, centralized models outperform FL on CIFAR-10, whereas FL excels on Fashion-MNIST, highlighting the impact of dataset characteristics on adversarial robustness.
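To illustrate the threat model studied here, the sketch below shows how a label-flipping attack (LFA) can be simulated on a malicious client's local data: a fraction of samples is relabeled to a different (wrong) class before local training. This is a minimal illustrative helper, not the paper's actual implementation; the function name and parameters are assumptions.

```python
import numpy as np

def flip_labels(labels, flip_fraction, num_classes, seed=0):
    """Simulate a label-flipping attack: relabel a fraction of a
    client's local samples to a randomly chosen wrong class.
    (Hypothetical helper for illustration, not from the paper.)"""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels).copy()
    n_flip = int(len(labels) * flip_fraction)
    # Pick distinct sample indices to poison.
    idx = rng.choice(len(labels), size=n_flip, replace=False)
    # Adding a non-zero offset modulo num_classes guarantees the
    # new label differs from the original one.
    offsets = rng.integers(1, num_classes, size=n_flip)
    labels[idx] = (labels[idx] + offsets) % num_classes
    return labels

clean = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
poisoned = flip_labels(clean, flip_fraction=0.5, num_classes=10, seed=42)
```

In an FL experiment, a poisoned client would train its local CNN on `poisoned` labels and submit the resulting update to the server, which is the setting under which the optimizers' resilience is compared.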
International Work-Conference on Artificial Neural Networks (IWANN 2025), Advanced topics in computational intelligence.
A Coruña, Spain.
Record Date: 20 May 2025

Yohannes Biadgligne
Kai Li