
A Unified Continual Learning Framework with General Parameter-Efficient Tuning (ICCV 2023, PyTorch Code)

News

  • [2023/08/19] Camera ready is submitted.
  • [2023/07/14] Accepted to ICCV 2023 as a poster presentation; the code is released to the public!

Installation

  • Install all dependencies via pip

    pip install -r requirements.txt

    ⚠️ Remove torch and torchvision from requirements.txt first if another version of PyTorch has already been installed (a sketch of this filtering step follows).
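
    A minimal shell sketch of that workflow, assuming the pinned lines start with torch/torchvision; requirements.notorch.txt is an illustrative name, not a file shipped with the repo:

    # Keep an existing PyTorch build: drop the pinned torch/torchvision lines,
    # then install the remaining dependencies.
    grep -vE '^torch(vision)?([^[:alnum:]]|$)' requirements.txt > requirements.notorch.txt
    pip install -r requirements.notorch.txt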

Dataset

  1. Create a dataset root directory, e.g., data.

  2. The CIFAR100 and ImageNet-R datasets will be downloaded automatically, while DomainNet requires a manual download (see the sketch after the tree below).

  3. Overview of the dataset root directory

    ├── cifar100
    │   └── cifar-100-python
    ├── domainnet
    │   ├── clipart
    │   ├── infograph
    │   ├── painting
    │   ├── quickdraw
    │   ├── real
    │   └── sketch
    └── imagenet-r
        ├── imagenet-r
        ├── train_list.txt
        └── val_list.txt

    ⚠️ The train-validation split of the ImageNet-R dataset is consistent with the L2P JAX code; replace train_list.txt and val_list.txt with train_list_coda-p.txt and val_list_coda-p.txt if you want to use the train-validation split of CODA-Prompt.
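
    A shell sketch of preparing the root before the first run (the data root name and the per-domain archive name are illustrative assumptions; CIFAR100 and ImageNet-R are fetched automatically as noted in step 2):

    # Create the dataset root; CIFAR100 and ImageNet-R are downloaded into it
    # automatically on first use.
    mkdir -p data
    # DomainNet must be downloaded and unpacked manually so that each domain
    # (clipart, infograph, painting, quickdraw, real, sketch) sits under
    # data/domainnet, matching the tree above, e.g.:
    mkdir -p data/domainnet
    # unzip clipart.zip -d data/domainnet/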

Experiment

  • Generate a config file (replace <root> with your dataset root path)

    python main.py data.root=<root> data.dataset=cifar100 --print_config > cifar100.yaml
  • Run code with an experiment config file

    python main.py --config=cifar100.yaml
  • Reproduce results in the paper

    We provide configs and a Makefile to quickly reproduce the ten-task experimental results reported in the paper; run the following commands if make is installed:

    make vit_adapter
    make vit_lora
    make vit_prefix
    make swin_adapter
    make convnext_adapter

    Run the make command with the BASE argument (default: base/cifar100_order1.yaml) to reproduce other experiments, e.g.:

    make BASE="base/imagenet-r_order1.yaml" vit_adapter
    

    Modify data.num_increment_classes (5 for CIFAR100, 10 for ImageNet-R) in the base config files to reproduce the 20-task experiments, e.g. via the sketch below.
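
    A hedged sketch of the 20-task CIFAR100 run; it assumes the base config currently sets data.num_increment_classes to 10 (the 10-task value), so adjust the sed pattern to match your file:

    # Switch the CIFAR100 base config from the 10-task to the 20-task protocol
    # (10 -> 5 classes per increment), then rerun the ViT adapter experiment.
    sed -i 's/num_increment_classes: 10/num_increment_classes: 5/' base/cifar100_order1.yaml
    make BASE="base/cifar100_order1.yaml" vit_adapter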

Acknowledgement

Citation

@article{gao2023lae,
  title = {A Unified Continual Learning Framework with General Parameter-Efficient Tuning},
  author = {Qiankun Gao and Chen Zhao and Yifan Sun and Teng Xi and Gang Zhang and Bernard Ghanem and Jian Zhang},
  journal = {International Conference on Computer Vision (ICCV)},
  year = {2023}
}
