This is the official PyTorch implementation of FCert: Certifiably Robust Few-Shot Classification in the Era of Foundation Models, accepted by IEEE S&P 2024. In this paper, we propose the first certified defense against data poisoning attacks to few-shot classification. Below is an illustration of our method, where we utilize robust statistics techniques to estimate a robust distance for each class.
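The robust-distance idea can be sketched as follows. This is a minimal illustration only: the function name, the choice of Euclidean distance, and the use of a trimmed mean (dropping the `k_prime` largest and smallest distances, which poisoned support samples can most easily influence) are our assumptions for exposition, not the repo's exact implementation.

```python
import torch

def robust_distance(query_feat, support_feats, k_prime=1):
    """Hypothetical sketch: trimmed-mean distance between a query feature
    and one class's support features. Drops the k' largest and k' smallest
    distances, then averages the rest, so a few poisoned support samples
    have bounded influence on the result."""
    # One Euclidean distance per support sample of this class.
    d = torch.norm(support_feats - query_feat, dim=1)
    d_sorted, _ = torch.sort(d)
    # Trimmed mean over the remaining distances.
    return d_sorted[k_prime: d_sorted.numel() - k_prime].mean()
```

A query would then be assigned to the class with the smallest robust distance.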
In this repo, we implement FCert for CLIP on three benchmark datasets. We test our code in Python 3.8, CUDA 12.3, and PyTorch 2.2.2.
Please install CLIP with:

```
pip install git+https://github.com/openai/CLIP.git
```
To evaluate the certification performance of FCert against the individual attack, run:

```
python -u main.py --dataset_type cifarfs --model_type CLIP --certification_type ind --classes_per_it_val 15 --num_support_val 15
```
To evaluate the certification performance of FCert against the group attack, run:

```
python -u main.py --dataset_type cifarfs --model_type CLIP --certification_type group --classes_per_it_val 15 --num_support_val 15
```
You can add `--file_path` to the command line to specify where to save the certification results.
If you use this code in your research, please cite our paper:
```
@article{wang2024fcert,
  title={FCert: Certifiably Robust Few-Shot Classification in the Era of Foundation Models},
  author={Wang, Yanting and Zou, Wei and Jia, Jinyuan},
  journal={arXiv preprint arXiv:2404.08631},
  year={2024}
}
```
Our code is based on Prototypical Networks and learn2learn.