mtuann/fedlearn-backdoor-attacks

Fast integration of backdoor attacks in machine learning and federated learning.

Description

This GitHub project provides a fast way for readers to get started in the field of backdoor attacks in machine learning and federated learning.

Overview of the project

  • Attack methods: DBA, LIRA, IBA, BackdoorBox, 3DFed, Chameleon, etc.
  • Defense methods: Krum, RFA, FoolsGold, etc.
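
As a concrete reference point for the defenses listed above, the sketch below shows the core of Krum (Blanchard et al., 2017): each client update is scored by the sum of squared distances to its closest n − f − 2 neighbours, and the update with the lowest score is kept. This is a minimal illustration under our own naming, not the implementation used in this repository.

```python
import numpy as np

def krum(updates: np.ndarray, num_byzantine: int) -> np.ndarray:
    """Minimal Krum sketch. `updates` is an (n, d) array of flattened client
    updates, `num_byzantine` the assumed attacker count f (Krum assumes
    n > 2f + 2). Returns the single update with the smallest Krum score."""
    n = len(updates)
    # Pairwise squared Euclidean distances between client updates.
    dists = np.linalg.norm(updates[:, None, :] - updates[None, :, :], axis=-1) ** 2
    scores = []
    for i in range(n):
        # Distances from client i to all others, sorted ascending
        # (position 0 is the zero self-distance, which we skip).
        d = np.sort(dists[i])[1:]
        # Krum score: sum over the n - f - 2 closest neighbours.
        scores.append(d[: n - num_byzantine - 2].sum())
    return updates[int(np.argmin(scores))]
```

Instead of averaging, the server would apply only the selected update in each round.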

How to extend this project

  • Define datasets in the tasks folder.
  • Define your own attack method in the attacks folder. The default attack adds a trigger to the training data and trains the model on the poisoned dataset (see the sketch after this list).
  • Define your own defense method in the defenses folder. The default aggregation method is FedAvg.
  • Define your own model in the models folder and the experiment config in the exps folder.
    • All experiments inherit from the three base .yaml files in the exps folder, one per dataset: mnist_fed.yaml (MNIST), cifar_fed.yaml (CIFAR-10), imagenet_fed.yaml (Tiny ImageNet).
    • The base setting for each experiment uses 100 clients, of which 4 are attackers; 10 clients participate in each round; 2 rounds of benign training and 5 rounds of attack training are performed.
    • The dataset is partitioned across clients with a Dirichlet distribution with $\alpha = 0.5$.
    • The pre-trained model is downloaded from Google Drive (from DBA).
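
To make the two defaults above concrete, here is a minimal sketch of (a) pixel-trigger poisoning, as described for the default attack, and (b) Dirichlet-based client partitioning with $\alpha = 0.5$. All function and parameter names are illustrative, not the APIs of this repository.

```python
import numpy as np

def add_trigger(images: np.ndarray, labels: np.ndarray,
                target_class: int = 0, poison_frac: float = 0.1):
    """Stamp a small white square into the corner of a fraction of the images
    and relabel them to the attacker's target class (illustrative sketch of
    the default 'add trigger to the training data' attack)."""
    images, labels = images.copy(), labels.copy()
    n_poison = int(poison_frac * len(images))
    idx = np.random.choice(len(images), n_poison, replace=False)
    images[idx, ..., -3:, -3:] = 1.0   # 3x3 trigger patch, bottom-right corner
    labels[idx] = target_class
    return images, labels

def dirichlet_partition(labels: np.ndarray, num_clients: int = 100,
                        alpha: float = 0.5):
    """Split sample indices across clients with a per-class Dirichlet
    distribution; smaller alpha means more heterogeneous (non-IID) clients."""
    client_indices = [[] for _ in range(num_clients)]
    for c in np.unique(labels):
        class_idx = np.random.permutation(np.where(labels == c)[0])
        # Proportion of class c assigned to each client.
        proportions = np.random.dirichlet(alpha * np.ones(num_clients))
        splits = (np.cumsum(proportions) * len(class_idx)).astype(int)[:-1]
        for client, part in enumerate(np.split(class_idx, splits)):
            client_indices[client].extend(part.tolist())
    return client_indices
```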

Here are examples of running the code:

```
python training.py --name cifar10 --params ./exps/run_cifar10__2023.Nov.24/cifar10_fed_100_10_4_0.5_0.05.yaml

python training.py --name mnist --params ./exps/run_mnist__2023.Nov.24/mnist_fed_100_10_4_0.5_0.05.yaml

python training.py --name tiny-imagenet --params ./exps/run_tiny-imagenet__2023.Nov.24/tiny-imagenet_fed_100_10_4_0.5_0.05.yaml
```

In these commands, the --name argument sets the experiment name, and the --params argument gives the path to the .yaml file with the experiment settings.
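
These two flags map onto a straightforward CLI. The snippet below is an illustrative guess at how such an entry point could be wired up with argparse and PyYAML; it is not a copy of this repository's training.py:

```python
import argparse
import yaml  # PyYAML

parser = argparse.ArgumentParser(description="Run a federated backdoor experiment.")
parser.add_argument("--name", required=True, help="experiment name, e.g. cifar10")
parser.add_argument("--params", required=True, help="path to the experiment .yaml file")
args = parser.parse_args()

with open(args.params) as f:
    params = yaml.safe_load(f)  # dict of experiment settings from the .yaml file
print(f"Running experiment '{args.name}' with {len(params)} settings")
```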


For more commands, see ./exps/run-yaml-cmd.md, which is generated by ./exps/gen-run-yaml-file.py; adapt that script to generate commands for your own experiments.
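
As a rough idea of what such a generator does, the sketch below builds run commands over a grid of settings. The filename pattern is inferred from the examples above (clients, clients per round, attackers, Dirichlet α, and what we assume is a learning rate of 0.05); it is not the actual logic of gen-run-yaml-file.py.

```python
from itertools import product

datasets = ["mnist", "cifar10", "tiny-imagenet"]
num_clients, per_round = 100, 10
attackers_grid = [4]
alpha_grid = [0.5]
lr_grid = [0.05]  # assumption: the trailing 0.05 in the filenames is a learning rate

for ds, n_att, alpha, lr in product(datasets, attackers_grid, alpha_grid, lr_grid):
    yaml_path = (f"./exps/run_{ds}__2023.Nov.24/"
                 f"{ds}_fed_{num_clients}_{per_round}_{n_att}_{alpha}_{lr}.yaml")
    print(f"python training.py --name {ds} --params {yaml_path}")
```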

Datasets for Backdoor Attacks in FL

| Dataset | Case | Description |
|---|---|---|
| MNIST | - | The MNIST database of handwritten digits has a training set of 60,000 examples and a test set of 10,000 examples. |
| CIFAR-10 | - | The CIFAR-10 dataset consists of 60,000 32x32 colour images in 10 classes, with 6,000 images per class. There are 50,000 training images and 10,000 test images. |
| CIFAR-100 | - | The CIFAR-100 dataset consists of 60,000 32x32 colour images in 100 classes, with 600 images per class (500 training and 100 testing images per class). |
| Tiny ImageNet | - | Tiny ImageNet contains 200 image classes, a training dataset of 100,000 images, a validation dataset of 10,000 images, and a test dataset of 10,000 images (50 validation and 50 test images per class). All images are of size 64×64. |
| EMNIST | - | There are 62 classes (10 digits, 26 lowercase, 26 uppercase) and 814,255 samples in total: 697,932 training and 116,323 test samples. |
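
Most of these datasets ship with torchvision, so loading them is a one-liner each; Tiny ImageNet is the exception and is usually downloaded separately. A minimal sketch:

```python
from torchvision import datasets, transforms

to_tensor = transforms.ToTensor()

# MNIST: 60,000 train / 10,000 test grayscale digits.
mnist_train = datasets.MNIST("./data", train=True, download=True, transform=to_tensor)

# CIFAR-10: 50,000 train / 10,000 test 32x32 colour images, 10 classes.
cifar_train = datasets.CIFAR10("./data", train=True, download=True, transform=to_tensor)

# EMNIST 'byclass' split: the 62-class variant described above.
emnist_train = datasets.EMNIST("./data", split="byclass", train=True,
                               download=True, transform=to_tensor)

# Tiny ImageNet is not in torchvision; it is typically fetched from
# http://cs231n.stanford.edu/tiny-imagenet-200.zip and loaded with ImageFolder.
```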

Edge-case backdoors

Durability

Working with FedML

Types of Attacks in the FL Setting:

  • Byzantine Attack
  • DLG Attack (Deep Leakage from Gradients)
  • Backdoor Attack
  • Model Replacement Attack
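
Of these, the model replacement attack has a particularly compact core (Bagdasaryan et al., 2020): the attacker scales its malicious update so that, after the server averages all client contributions, the global model is approximately replaced by the attacker's model. A minimal sketch, with all names ours:

```python
import numpy as np

def model_replacement_update(global_w: np.ndarray, malicious_w: np.ndarray,
                             num_clients: int, server_lr: float = 1.0) -> np.ndarray:
    """Scale the attacker's update so that FedAvg over `num_clients`
    equal-weight clients yields (approximately) `malicious_w` as the new
    global model, assuming benign updates roughly cancel near convergence."""
    gamma = num_clients / server_lr          # boost factor n / eta
    return global_w + gamma * (malicious_w - global_w)
```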

Survey Papers for Machine Learning Security

| Title | Year | Venue | Code | Dataset | URL | Note |
|---|---|---|---|---|---|---|
| A Survey on Fully Homomorphic Encryption: An Engineering Perspective | 2017 | ACM Computing Surveys | | | link | |
| Generative Adversarial Networks: A Survey Toward Private and Secure Applications | 2021 | ACM Computing Surveys | | | link | |
| A Survey on Adversarial Recommender Systems: From Attack/Defense Strategies to Generative Adversarial Networks | 2021 | ACM Computing Surveys | | | link | |
| Video Generative Adversarial Networks: A Review | 2022 | ACM Computing Surveys | | | link | |
| Taxonomy of Machine Learning Safety: A Survey and Primer | 2022 | ACM Computing Surveys | | | link | |
| Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses | 2022 | ACM Computing Surveys | | | link | |
| Generative Adversarial Networks: A Survey on Attack and Defense Perspective | 2023 | ACM Computing Surveys | | | link | |
| Trustworthy AI: From Principles to Practices | 2023 | ACM Computing Surveys | | | link | |
| Deep Learning for Android Malware Defenses: A Systematic Literature Review | 2022 | ACM Computing Surveys | | | link | |
| Dataset security for machine learning: Data poisoning, backdoor attacks, and defenses | 2022 | IEEE TPAMI | | | link | |
| A Comprehensive Review of the State-of-the-Art on Security and Privacy Issues in Healthcare | 2023 | ACM Computing Surveys | | | link | |
| A Comprehensive Survey of Privacy-preserving Federated Learning: A Taxonomy, Review, and Future Directions | 2023 | ACM Computing Surveys | | | link | |
| Recent Advances on Federated Learning: A Systematic Survey | 2023 | arXiv | | | link | |
| Federated Learning for Generalization, Robustness, Fairness: A Survey and Benchmark | 2023 | arXiv | | | link | |
| Backdoor attacks and defenses in federated learning: Survey, challenges and future research directions | 2024 | Engineering Applications of Artificial Intelligence | | | link | |

Papers on Backdoor Attacks in ML/FL

Code for Backdoor Attacks in ML/FL

| Title | Year | Venue | Code | Dataset | URL | Note |
|---|---|---|---|---|---|---|
| Practicing-Federated-Learning | | Github | | | link | |
| Attack of the Tails: Yes, You Really Can Backdoor Federated Learning | | NeurIPS'20 | | | link | |
| DBA: Distributed Backdoor Attacks against Federated Learning | | ICLR'20 | | | link | |
| LIRA: Learnable, Imperceptible and Robust Backdoor Attacks | | ICCV'21 | | | link | |
| Backdoors Framework for Deep Learning and Federated Learning | | AISTATS'20, USENIX'21 | | | link | |
| BackdoorBox: An Open-sourced Python Toolbox for Backdoor Attacks and Defenses | 2023 | Github | | | link | |
| 3DFed: Adaptive and Extensible Framework for Covert Backdoor Attack in Federated Learning | | IEEE S&P'23 | | | link | |
| Neurotoxin: Durable Backdoors in Federated Learning | | ICML'22 | | | link | Durability |
| Chameleon: Adapting to Peer Images for Planting Durable Backdoors in Federated Learning | | ICML'23 | | | link | Durability |
| PerDoor: Persistent Backdoors in Federated Learning using Adversarial Perturbations | | COINS'23 | | | link | |

Other Resources for Backdoor Attacks in ML/FL

Backdoor Attack code resources in FL

In the FL community there are many code resources for backdoor attacks, each with its own FL scenario (e.g., hyperparameters, datasets, attack methods, defense methods). Thus, we provide a list of popular code resources for backdoor attacks in FL as follows:
