Qiankun Gao, Chen Zhao, Yifan Sun, Teng Xi, Gang Zhang, Bernard Ghanem, Jian Zhang
[Paper] [Supp] [arXiv] [BibTex]
- [2023/08/19] Camera ready is submitted.
- [2023/07/14] Accepted to ICCV 2023 as poster presentation, code is released to the public!
- Install all dependencies via `pip`:

  ```bash
  pip install -r requirements.txt
  ```

  ⚠️ Remove `torch` and `torchvision` from `requirements.txt` first if another version of PyTorch is already installed.
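The warning above can be applied mechanically instead of editing the file by hand. A minimal sketch, assuming a POSIX shell and that the torch packages start their lines in `requirements.txt`; the filtered file name `requirements.notorch.txt` is hypothetical:

```shell
# Keep every requirement except the pinned torch/torchvision lines,
# so pip will not replace an already-installed PyTorch build.
grep -vE '^(torch|torchvision)([=<> ]|$)' requirements.txt > requirements.notorch.txt
# then install from the filtered file:
# pip install -r requirements.notorch.txt
```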
- Create a dataset root directory, e.g., `data`.
- The `CIFAR100` and `ImageNet-R` datasets will be downloaded automatically, while `DomainNet` requires a manual download.
- Overview of the dataset root directory:

  ```
  ├── cifar100
  │   └── cifar-100-python
  ├── domainnet
  │   ├── clipart
  │   ├── infograph
  │   ├── painting
  │   ├── quickdraw
  │   ├── real
  │   └── sketch
  └── imagenet-r
      ├── imagenet-r
      ├── train_list.txt
      └── val_list.txt
  ```
  ⚠️ The train-validation split of the ImageNet-R dataset is consistent with the L2P JAX code; replace `train_list.txt` and `val_list.txt` with `train_list_coda-p.txt` and `val_list_coda-p.txt` if you want to use the train-validation split of CODA-Prompt.
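If you need to inspect or post-process these split lists, a tiny reader can be sketched as follows. This ASSUMES each non-empty line of `train_list.txt` / `val_list.txt` has the form `<relative_image_path> <label>`; the authoritative format is whatever the repo's dataset code parses.

```python
def read_split_list(path):
    """Parse an assumed "<relative_path> <label>" split-list file
    into a list of (path, int_label) tuples."""
    samples = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue  # skip blank lines
            rel_path, label = line.rsplit(" ", 1)
            samples.append((rel_path, int(label)))
    return samples
```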
- Generate a config file (replace `<root>` with your dataset root path):

  ```bash
  python main.py data.root=<root> data.dataset=cifar100 --print_config > cifar100.yaml
  ```
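The dotted overrides map to nested YAML keys, so the generated file nests them under `data`. A hypothetical fragment of the resulting `cifar100.yaml`, showing only the two keys set above (the real file will contain many more options):

```yaml
data:
  root: data        # value passed via data.root=<root>
  dataset: cifar100 # value passed via data.dataset=cifar100
```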
- Run the code with an experiment config file:

  ```bash
  python main.py --config=cifar100.yaml
  ```
- Reproduce the results in the paper

  We provide configs and a Makefile to quickly reproduce the ten-task experimental results reported in the paper. If `make` is installed, run:

  ```bash
  make vit_adapter
  make vit_lora
  make vit_prefix
  make swin_adapter
  make convnext_adapter
  ```

  Run the `make` command with the `BASE` arg (default is `base/cifar100_order1.yaml`) to reproduce other experiments, e.g.:

  ```bash
  make BASE="base/imagenet-r_order1.yaml" vit_adapter
  ```

  Modify `data.num_increment_classes` (`5`/`10` for CIFAR100/ImageNet-R) in the base config files to reproduce `20-task` experiments.
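Switching to the 20-task setting is a one-line config change. A sketch with `sed`, assuming the key `num_increment_classes` appears literally in the base YAML with its ten-task CIFAR100 value of 10 classes per task (the output file name is hypothetical):

```shell
# Change 10 classes/task (ten-task CIFAR100) to 5 (20-task).
# Write a new file rather than editing in place, which stays
# portable across GNU and BSD sed.
sed 's/num_increment_classes: 10/num_increment_classes: 5/' \
    base/cifar100_order1.yaml > base/cifar100_order1_20task.yaml
```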
- PyTorch implementation of L2P and DualPrompt.
- JAX implementation of L2P and DualPrompt: https://github.com/google-research/l2p.
- CODA-Prompt, state-of-the-art work from CVPR 2023.
- ESN, state-of-the-art work from AAAI 2023.
- Continuum, an awesome data loading library for Continual Learning.
```bibtex
@article{gao2023lae,
  title   = {A Unified Continual Learning Framework with General Parameter-Efficient Tuning},
  author  = {Gao, Qiankun and Zhao, Chen and Sun, Yifan and Xi, Teng and Zhang, Gang and Ghanem, Bernard and Zhang, Jian},
  journal = {International Conference on Computer Vision (ICCV)},
  year    = {2023}
}
```