FLTracer

This repo is the official implementation of FLTracer: Accurate Poisoning Attack Provenance in Federated Learning (IEEE Transactions on Information Forensics and Security (TIFS), 2024) by Xinyu Zhang, Qingyu Liu, Zhongjie Ba, Yuan Hong, Tianhang Zheng, Feng Lin, Li Lu, and Kui Ren.

The additional experimental results (appendix) for FLTracer are available as a PDF.

Installation

Our code is implemented and evaluated with PyTorch. The following packages are required:

  • torch==2.0.1
  • numpy==1.24.3
  • scikit-learn==1.3.1
  • filterpy==1.4.5
  • tqdm==4.66.1

Our code is evaluated on Python 3.8.11.
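
Assuming a Python 3.8 environment, the pinned versions above can be installed with pip, for example:

pip install torch==2.0.1 numpy==1.24.3 scikit-learn==1.3.1 filterpy==1.4.5 tqdm==4.66.1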

Usage

Prepare Datasets

  • Image Classification: MNIST, EMNIST, CIFAR10
  • Traffic Sign Classification: German Traffic Sign Recognition Benchmark (GTSRB)
  • Human Activity Recognition: HAR
  • Object Classification: BDD100K

Save Updates

To detect attackers, you should provide the local updates of suspect clients, organized in the following folder structure.

results
|-- weights  < updates to be detected >
    |-- epoch_xxx  < local updates at epoch xxx >
        |-- node_yyy.npz  < local updates of client yyy >
        |-- ...
    |-- ...
|-- reference  < reference updates for Domain Detection >
    |-- epoch_xxx  < local updates at epoch xxx >
        |-- node_yyy.npz  < local updates of client yyy >
        |-- ...
    |-- ...
|-- model
    < contains all saved models >
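
As a rough illustration of this layout, the snippet below writes one client's update to results/weights/epoch_xxx/node_yyy.npz. The exact array format expected by the detector is an assumption here (one named numpy array per state-dict entry); check the demo data below for the actual format.

import os
import numpy as np

def save_local_update(state_dict, epoch, client_id, root="results/weights"):
    # Hypothetical helper: writes one client's PyTorch state dict to
    # <root>/epoch_<epoch>/node_<client_id>.npz
    out_dir = os.path.join(root, f"epoch_{epoch}")
    os.makedirs(out_dir, exist_ok=True)
    # Assumption: each parameter tensor is stored as a named numpy array.
    arrays = {name: t.detach().cpu().numpy() for name, t in state_dict.items()}
    np.savez(os.path.join(out_dir, f"node_{client_id}.npz"), **arrays)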

Demo Data

We share links to 10 local updates for one epoch and their reference updates. Place the compressed file in the project directory and unzip it. Remember to modify the parameters in the blind_backdoor_detect.yaml file so that the updates can be read successfully.
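
To sanity-check that the unzipped demo data can be read, a small script like the following (an illustrative snippet, assuming the archive unpacks into the results/ layout shown above) lists the arrays stored in each node file:

import glob
import numpy as np

# Assumption: the demo archive unpacks into results/weights and results/reference.
for path in sorted(glob.glob("results/*/epoch_*/node_*.npz")):
    with np.load(path) as update:
        print(path, "->", len(update.files), "arrays")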

Detect Anomalies

After saving local updates, you can detect anomalies with various settings using the following command:

python main.py --params config/add_noise_detect.yaml

The following is a description of some parameters in the configuration file:

  • path: the path to the local updates to be detected.
  • model: the model type of the local updates; resnet, vgg, and vit are supported.
  • clients_num: the number of participating clients.
  • lambda_signv, lambda_sortv, lambda_classv: thresholds used in Local Anomaly Detection. We use MAD to detect anomalies, with default settings of 2.5, 3.0, or higher (see the sketch after this list).
  • tau: the threshold used in Task Detection, with a default of -0.9.
  • corr_first_num: the number of malicious clients; if unknown, set it to the same value as clients_num.
  • reference_path: the path to the reference updates used for Domain Detection.
  • update_epoch: the epoch at which the Kalman Filter estimator is updated during detection.
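
The MAD thresholds above follow the standard median-absolute-deviation outlier rule. The snippet below is a generic illustration of that rule, not the project's actual detection code: values whose scaled deviation from the median exceeds the threshold (e.g. 2.5) are flagged as anomalous.

import numpy as np

def mad_outliers(values, lam=2.5):
    # Generic MAD rule: flag values whose scaled deviation from the median
    # exceeds lam (e.g. 2.5 or 3.0, as for the lambda_* thresholds above).
    values = np.asarray(values, dtype=float)
    med = np.median(values)
    mad = np.median(np.abs(values - med))
    scores = np.abs(values - med) / (1.4826 * mad + 1e-12)
    return scores > lam

# Example: the last client's feature value is far from the others and is flagged.
print(mad_outliers([0.10, 0.12, 0.11, 0.13, 0.95]))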

Repeat Experiments

Our experiments use backdoors101 to train models and launch attacks, producing the model updates to be detected.

We provide all experimental examples on CIFAR10 (ResNet18).

  • Run experiments for add noise attack detection:
python main.py --params config/add_noise_detect.yaml
  • Run experiments for dirty label attack detection:
python main.py --params config/dirty_label_detect.yaml
  • Run experiments for sign-flipping attack detection:
python main.py --params config/sign_flipping_detect.yaml
  • Run experiments for adaptive untargeted attack detection (e.g., the MB attack and the Fang attack):
python main.py --params config/adaptive_untarget_attack_detect.yaml
  • Run experiments for backdoor attack detection:
python main.py --params config/patch_BN_backdoor_detect.yaml

Citation

@article{zhang2024fltracer,
  title={Fltracer: Accurate poisoning attack provenance in federated learning},
  author={Zhang, Xinyu and Liu, Qingyu and Ba, Zhongjie and Hong, Yuan and Zheng, Tianhang and Lin, Feng and Lu, Li and Ren, Kui},
  journal={IEEE Transactions on Information Forensics and Security},
  year={2024},
  publisher={IEEE}
}
