| Author | Nguyen, Thien Duc; Rieger, Phillip; Yalame, Mohammad Hossein; Möllering, Helen; Fereidooni, Hossein; Marchal, Samuel; Miettinen, Markus; Mirhoseini, Azalia; Sadeghi, Ahmad-Reza; Schneider, Thomas; Zeitouni, Shaza |
|---|---|
| Date | 2021 |
| Type | Journal Article, Report |
| Abstract | Recently, a number of backdoor attacks against Federated Learning (FL) have been proposed. In such attacks, an adversary injects poisoned model updates into the federated model aggregation process with the goal of manipulating the aggregated model to provide false predictions on specific adversary-chosen inputs. Several defenses have been proposed, but none of them can effectively protect the FL process against so-called multi-backdoor attacks, in which the adversary injects multiple different backdoors simultaneously, without severely impacting the benign performance of the aggregated model. To overcome this challenge, we introduce FLGUARD, a poisoning defense framework that defends FL against state-of-the-art backdoor attacks while maintaining the benign performance of the aggregated model. Moreover, FL is also vulnerable to inference attacks, in which a malicious aggregator can infer information about clients' training data from their model updates. To thwart such attacks, we augment FLGUARD with state-of-the-art secure computation techniques that securely evaluate the FLGUARD algorithm. We provide formal arguments for the effectiveness of FLGUARD and extensively evaluate it against known backdoor attacks on several datasets and applications (including image classification, word prediction, and IoT intrusion detection), demonstrating that FLGUARD can entirely remove backdoors with a negligible effect on accuracy. We also show that private FLGUARD achieves practical runtimes. |
| Series | Cryptography and Security |
| Publisher | arXiv |
| URL | https://tubiblio.ulb.tu-darmstadt.de/id/eprint/125878 |