thunlp/MixADA

Better Robustness by More Coverage: Adversarial Training with Mixup Augmentation for Robust Fine-tuning

This is the repository for reproducing the results in our paper "Better Robustness by More Coverage: Adversarial Training with Mixup Augmentation for Robust Fine-tuning" (arXiv), published in Findings of ACL 2021.

Dependencies

I conducted all experiments with torch==1.4.0 and transformers==2.3.0. A complete list of dependencies is in requirements.txt, though you do not need to install everything there; most entries are not required by this codebase.
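
A minimal setup sketch, assuming a standard pip environment; only the two pinned versions above come from this README, the rest is ordinary pip usage:

pip install torch==1.4.0 transformers==2.3.0
# or install the full (partly optional) dependency list:
pip install -r requirements.txt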

Data

We provide the exact data that we used in our experiments for easier reproduction. The download link is here.

Running

I have included examples of training models with MixADA and of evaluating them under adversarial attacks in run_job.sh and run_job2.sh. Note that you will need to edit the scripts to fill in your own dataset and pretrained model checkpoint paths.
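
A possible invocation once the paths are filled in; which script handles training versus attack evaluation is an assumption here, so check the contents of each script:

bash run_job.sh     # presumably MixADA training
bash run_job2.sh    # presumably evaluation under adversarial attacks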

Reference

Please consider citing our work if you find this code or our paper helpful for your research.

@inproceedings{Si2020BetterRB,
  title={Better Robustness by More Coverage: Adversarial Training with Mixup Augmentation for Robust Fine-tuning},
  author={Chenglei Si and Zhengyan Zhang and Fanchao Qi and Zhiyuan Liu and Yasheng Wang and Qun Liu and Maosong Sun},
  booktitle={Findings of ACL},
  year={2021},
}

Contact

If you encounter any problems, feel free to open an issue or contact the authors.
