Publications

LayerDBA: Circumventing Similarity-Based Defenses in Federated Learning

Authors: Nikolov, Javor; Pegoraro, Alessandro; Rieger, Phillip; Sadeghi, Ahmad-Reza
Date: 2024
Type: Conference Proceedings
Abstract: Federated Learning (FL) allows multiple parties to jointly train a Deep Neural Network (DNN). Instead of collecting all data at a single central entity, the training process is outsourced to individual clients. Each client trains its own model locally and shares only the parameters of the trained DNN with a central server. Although outsourcing the training strengthens clients' privacy, it also allows malicious clients to manipulate the resulting model and inject backdoors. While most existing backdoor attacks and defenses focus on scenarios in which the benign clients' data are similar and therefore independently and identically distributed (IID), less attention has been given to the more challenging non-IID scenarios. To demonstrate the vulnerability of FL in non-IID scenarios, we propose the LayerDBA attack, which splits the poisoned parameter values across different model updates to ensure high distances between the individual updates. This allows LayerDBA to circumvent state-of-the-art defenses such as FoolsGold or Contra, which focus on non-IID scenarios. LayerDBA exploits their assumption of high similarity between poisoned model updates, thereby showing that sophisticated adversaries can always ensure high distances between different poisoned updates. We combine this approach with the dynamic Marksman trigger to create an effective but stealthy backdoor attack. LayerDBA achieves an attack success rate of 50% on CIFAR-10 and 80% on MNIST against state-of-the-art defenses while controlling 5% of the clients.
Conference: 45th IEEE Symposium on Security and Privacy Workshops (SPW 2024)
ISBN: 979-8-3503-5487-4
In: Proceedings of the 45th IEEE Symposium on Security and Privacy Workshops (SPW 2024), pp. 299-305
Publisher: IEEE
URL: https://tubiblio.ulb.tu-darmstadt.de/id/eprint/152677
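
The abstract above describes LayerDBA as splitting the poisoned parameter values across different model updates so that the poisoned updates no longer look like near-identical copies of a single poisoned model, which is the pattern that similarity-based defenses such as FoolsGold and Contra target. The following minimal Python sketch illustrates that layer-wise splitting idea only; it is not the authors' implementation, and the round-robin layer assignment, function names, and toy dimensions are assumptions made purely for illustration.

    # Illustrative sketch (not the paper's code): split a poisoned model update
    # layer-wise across several colluding clients so that no two submitted
    # updates are identical copies of one poisoned model. The round-robin layer
    # assignment and all names are assumptions made for illustration.
    import numpy as np

    def split_poisoned_update(benign_update, poisoned_update, num_malicious):
        """Give each layer's poisoned values to exactly one malicious client;
        every other client keeps the benign values for that layer."""
        client_updates = [dict(benign_update) for _ in range(num_malicious)]
        for i, layer in enumerate(benign_update):
            owner = i % num_malicious  # assumed round-robin assignment of layers
            client_updates[owner][layer] = poisoned_update[layer]
        return client_updates

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        benign = {f"layer{i}": rng.normal(size=4) for i in range(6)}
        poisoned = {k: v + rng.normal(scale=0.5, size=4) for k, v in benign.items()}

        updates = split_poisoned_update(benign, poisoned, num_malicious=3)

        # Pairwise cosine similarity: identical poisoned copies would score 1.0,
        # whereas the split updates differ in which layers carry the poison.
        flat = [np.concatenate(list(u.values())) for u in updates]
        for a in range(len(flat)):
            for b in range(a + 1, len(flat)):
                cos = flat[a] @ flat[b] / (np.linalg.norm(flat[a]) * np.linalg.norm(flat[b]))
                print(f"clients {a} and {b}: cosine similarity {cos:.3f}")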