D3Former: Debiased Dual Distilled Transformer for Incremental Learning

🎉 Accepted to CLVision @ CVPR 2023 [paper] | [poster]

Abstract: In the class incremental learning (CIL) setting, groups of classes are introduced to a model in each learning phase. The goal is to learn a unified model performant on all the classes observed so far. Given the recent popularity of Vision Transformers (ViTs) in conventional classification settings, an interesting question is to study their continual learning behaviour. In this work, we develop a Debiased Dual Distilled Transformer for CIL, dubbed D3Former. The proposed model leverages a hybrid nested ViT design to ensure data efficiency and scalability to small as well as large datasets. In contrast to a recent ViT-based CIL approach, our D3Former does not dynamically expand its architecture when new tasks are learned and remains suitable for a large number of incremental tasks. The improved CIL behaviour of D3Former owes to two fundamental changes to the ViT design. First, we treat incremental learning as a long-tail classification problem, where the majority samples from new classes vastly outnumber the limited exemplars available for old classes. To avoid the bias against the minority old classes, we propose to dynamically adjust logits to emphasize retaining the representations relevant to old tasks. Second, we propose to preserve the configuration of spatial attention maps as the learning progresses across tasks. This helps in reducing catastrophic forgetting by constraining the model to retain attention on the most discriminative regions. D3Former obtains favorable results on incremental versions of the CIFAR-100, MNIST, SVHN, and ImageNet datasets.
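The logit adjustment described above follows the long-tail view of CIL: with only a small exemplar memory for old classes, the cross-entropy loss is biased toward the abundant new classes. The sketch below illustrates this idea with a log-prior shift on the logits. It is a minimal illustration under our own assumptions (the tensor names, the prior estimate, and the tau temperature are placeholders), not the authors' implementation from this repository.

import torch
import torch.nn.functional as F

def debiased_ce_loss(logits, targets, class_counts, tau=1.0):
    # Empirical class prior mixing the exemplar memory (old classes)
    # with the current-phase data (new classes).
    prior = class_counts.float() / class_counts.sum()
    # Shifting logits by tau * log(prior) counteracts the imbalance,
    # so the few old-class exemplars are not drowned out.
    adjusted = logits + tau * torch.log(prior + 1e-12)
    return F.cross_entropy(adjusted, targets)

# Example: 50 old classes with 20 exemplars each vs. 50 new classes with 500 samples each.
logits = torch.randn(8, 100)
targets = torch.randint(0, 100, (8,))
class_counts = torch.cat([torch.full((50,), 20), torch.full((50,), 500)])
loss = debiased_ce_loss(logits, targets, class_counts, tau=1.0)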

Getting Started

We advise using Python 3.8, CUDA 11.3, and PyTorch 1.10.1.

You may download Anaconda and read the installation instructions on their official website: https://www.anaconda.com/download/

Create a new environment from the provided yml file:

conda env create -f incremental.yml
conda activate D3former

Datasets

CIFAR100 will be downloaded automatically by torchvision when running the experiments.

ImageNet-subset100 can be downloaded by following the instructions here

ImageNet-1K can be downloaded from the official website

The path to the corresponding dataset folder has to be set as the data_dir argument in main.py, as shown below.
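For example, assuming data_dir is exposed as a command-line flag (the path below is a placeholder for your local copy):

python3 main.py --gpu 0 --dataset imagenet_sub --data_dir /path/to/imagenet_subset --nb_cl_fg 50 --nb_cl 10 --the_lambda 4 --tau 0.3 --gamma 0.05 --warmup 20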

Running Experiments

For CIFAR-100

python3 main.py --gpu 0 --dataset cifar100 --nb_cl_fg 50 --nb_cl 10 --the_lambda 10 --tau 1 --gamma 0.1 --warmup 20
python3 main.py --gpu 0 --dataset cifar100 --nb_cl_fg 50 --nb_cl 5 --the_lambda 10 --tau 1 --gamma 0.1 --warmup 20
python3 main.py --gpu 0 --dataset cifar100 --nb_cl_fg 50 --nb_cl 2 --the_lambda 10 --tau 1 --gamma 0.1 --warmup 20

For ImageNet-subset100

python3 main.py --gpu 0 --dataset imagenet_sub --nb_cl_fg 50 --nb_cl 10 --the_lambda 4 --tau 0.3 --gamma 0.05 --warmup 20
python3 main.py --gpu 0 --dataset imagenet_sub --nb_cl_fg 50 --nb_cl 5 --the_lambda 4 --tau 0.3 --gamma 0.05 --warmup 20
python3 main.py --gpu 0 --dataset imagenet_sub --nb_cl_fg 50 --nb_cl 2 --the_lambda 4 --tau 0.3 --gamma 0.05 --warmup 20

For ImageNet-1K

python3 main.py --gpu 0 --dataset imagenet --nb_cl_fg 500 --nb_cl 100 --the_lambda 4 --tau 0.3 --gamma 0.05 --warmup 20
python3 main.py --gpu 0 --dataset imagenet --nb_cl_fg 500 --nb_cl 50 --the_lambda 4 --tau 0.3 --gamma 0.05 --warmup 20
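Note on the arguments: from the settings above, nb_cl_fg is the number of classes in the first (base) phase and nb_cl the number of classes added in each incremental phase (e.g. 50 base classes plus five groups of 10 cover the 100 CIFAR-100 classes). The remaining flags (the_lambda, tau, gamma, warmup) are the hyperparameters used for each dataset; see main.py for their exact definitions.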

Acknowledgement

Our code is built upon AANet. We would like to thank the authors for their implementation.

Citation

If you find this project useful, consider starring the repo and citing us in your work:

@InProceedings{Mohamed_2023_CVPR,
    author    = {Mohamed, Abdelrahman and Grandhe, Rushali and Joseph, K. J. and Khan, Salman and Khan, Fahad},
    title     = {D3Former: Debiased Dual Distilled Transformer for Incremental Learning},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2023},
    pages     = {2420-2429}
}
