
Improving Robustness of Vision Transformers by Reducing Sensitivity to Patch Corruptions

Yong Guo, David Stutz, and Bernt Schiele. CVPR 2023.

This repository contains the official PyTorch implementation and pretrained models for Reducing Sensitivity to Patch Corruptions (RSPC).

Catalog

  • Pre-trained Models on CIFAR
  • Pre-trained Models on ImageNet
  • Evaluation and Training Code

Dependencies

Our code is built on PyTorch and the timm library. Please check the detailed dependencies in requirements.txt.

Dataset Preparation

  • CIFAR-10 and related robustness benchmarks: Please download the clean CIFAR-10 and the corrupted benchmark CIFAR-10-C.

  • CIFAR-100 and related robustness benchmarks: Please download the clean CIFAR-100 and the corrupted benchmark CIFAR-100-C.

  • ImageNet and related robustness benchmarks: Please download the clean ImageNet dataset. We evaluate the models on various robustness benchmarks, including ImageNet-C, ImageNet-A, and ImageNet-P.
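The corrupted CIFAR benchmarks store each corruption as a single array with all five severity levels concatenated along the first axis (severity 1 first, severity 5 last). Per-severity accuracy is therefore obtained by splitting predictions into equal chunks. A minimal sketch, assuming list-like predictions and labels (the function name and interface are illustrative, not part of this repository):

```python
def accuracy_per_severity(preds, labels, n_severity=5):
    """Accuracy at each severity level for CIFAR-10-C-style data,
    where the severity levels are concatenated along the first axis."""
    per_sev = len(preds) // n_severity
    accs = []
    for s in range(n_severity):
        p = preds[s * per_sev:(s + 1) * per_sev]
        y = labels[s * per_sev:(s + 1) * per_sev]
        accs.append(sum(int(a == b) for a, b in zip(p, y)) / per_sev)
    return accs
```

The CIFAR-10-C and CIFAR-100-C numbers in the tables below are averages of this kind of per-severity accuracy over all corruptions and severities.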

Results and Pre-trained Models

Pre-trained models on CIFAR-10 and CIFAR-100

  • Pre-trained models on CIFAR-10 and CIFAR-10-C

| Model | CIFAR-10 | CIFAR-10-C | #Params | Download |
| --- | --- | --- | --- | --- |
| RSPC-RVT-S | 97.73 | 94.14 | 23.0M | model |
| RSPC-FAN-S-Hybrid | 98.06 | 94.59 | 25.7M | model |

  • Pre-trained models on CIFAR-100 and CIFAR-100-C

| Model | CIFAR-100 | CIFAR-100-C | #Params | Download |
| --- | --- | --- | --- | --- |
| RSPC-RVT-S | 84.81 | 74.94 | 23.0M | model |
| RSPC-FAN-S-Hybrid | 85.30 | 75.72 | 25.7M | model |

ImageNet-1K pre-trained models

  • RSPC-RVT pre-trained models

| Model | IN-1K $\uparrow$ | IN-C $\downarrow$ | IN-A $\uparrow$ | IN-P $\downarrow$ | #Params | Download |
| --- | --- | --- | --- | --- | --- | --- |
| RSPC-RVT-Ti | 79.5 | 55.7 | 16.5 | 38.0 | 10.9M | model |
| RSPC-RVT-S | 82.2 | 48.4 | 27.9 | 34.3 | 23.3M | model |
| RSPC-RVT-B | 82.8 | 45.7 | 32.1 | 31.0 | 91.8M | model |

  • RSPC-FAN pre-trained models

| Model | IN-1K $\uparrow$ | IN-C $\downarrow$ | IN-A $\uparrow$ | IN-P $\downarrow$ | #Params | Download |
| --- | --- | --- | --- | --- | --- | --- |
| RSPC-FAN-T-Hybrid | 80.3 | 57.2 | 23.6 | 37.3 | 7.5M | model |
| RSPC-FAN-S-Hybrid | 83.6 | 47.5 | 36.8 | 33.5 | 25.7M | model |
| RSPC-FAN-B-Hybrid | 84.2 | 44.5 | 41.1 | 30.0 | 50.5M | model |
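The IN-C column is a corruption error, so lower is better. ImageNet-C results are conventionally reported as mean Corruption Error (mCE), which normalizes a model's error rate on each corruption by AlexNet's error rate on that corruption before averaging. A minimal sketch of that aggregation (the function name and list interface are illustrative; whether the numbers above use exactly this normalization should be checked against the paper):

```python
def mean_corruption_error(model_errors, alexnet_errors):
    """mCE-style aggregation for ImageNet-C: average over corruptions of
    the model's error rate normalized by AlexNet's error rate (percent)."""
    ces = [100.0 * m / a for m, a in zip(model_errors, alexnet_errors)]
    return sum(ces) / len(ces)
```

For example, a model whose error is half of AlexNet's on every corruption gets an mCE of 50.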

Training and Evaluation

  • CIFAR-10 and CIFAR-100: Please refer to EXP_CIFAR.

  • RSPC-RVT on ImageNet-1K: Please refer to RSPC_RVT.

  • RSPC-FAN on ImageNet-1K: Please refer to RSPC_FAN.

Acknowledgement

This repository builds on the timm library and on the RVT and FAN repositories.

Citation

If you find this repository helpful, please consider citing:

@inproceedings{guo2023improving,
  title={Improving robustness of vision transformers by reducing sensitivity to patch corruptions},
  author={Guo, Yong and Stutz, David and Schiele, Bernt},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={4108--4118},
  year={2023}
}
