Improving Task-free Continual Learning by Distributionally Robust Memory Evolution (ICML 2022)

Package Requirements

  • Python 3.8
  • PyTorch 1.8.1
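
To verify that the environment matches these versions, a quick sanity check (our suggestion, not part of the repository):

import sys
import torch

print(sys.version)                # expect 3.8.x
print(torch.__version__)          # expect 1.8.1
print(torch.cuda.is_available())  # True if a CUDA GPU is visible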

Note: our current implementation achieves better results across various hyperparameters than those reported in the paper. For example, even when combined with the simple experience replay (ER) baseline, it reaches around 38% accuracy on CIFAR10 (memory size 500), more than 21.5% on CIFAR100 (memory size 5000), and more than 28% on mini-ImageNet (memory size 10000). The current implementation omits the gradient dot-product constraint, since it brings little gain while increasing computation cost.

Download Dataset

Download the mini-ImageNet dataset from here and put it into the '/Data' folder.

Running Experiments

Improved Experience Replay

ER baseline + SGLD on CIFAR10:

python er_main.py --method SGLD --lr 0.1 --samples_per_task -1 --dataset split_cifar10 --disc_iters 1 --mem_size 50 --suffix 'ER' --buffer_batch_size 10 --batch_size 10 --hyper_search
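
The --method SGLD flag evolves the raw memory samples with stochastic gradient Langevin dynamics. As a rough illustration, here is a minimal sketch of such a step: perturb the buffer samples along the loss gradient (the distributionally robust direction) with injected Gaussian noise. The function name, step size, and noise scaling below are our own illustrative choices, not the exact code in er_main.py.

import torch
import torch.nn.functional as F

def evolve_memory_sgld(model, mem_x, mem_y, eta=0.01, steps=1):
    # Evolve raw memory samples with stochastic gradient Langevin dynamics:
    # ascend the loss gradient and inject Gaussian noise; sqrt(2 * eta) is
    # one common noise scaling for SGLD.
    mem_x = mem_x.clone().detach()
    for _ in range(steps):
        mem_x.requires_grad_(True)
        loss = F.cross_entropy(model(mem_x), mem_y)
        grad, = torch.autograd.grad(loss, mem_x)
        with torch.no_grad():
            noise = torch.randn_like(mem_x) * (2.0 * eta) ** 0.5
            mem_x = mem_x + eta * grad + noise
    return mem_x.detach()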

ER baseline + SVGD on CIFAR10:

python er_main.py --method SVGD --lr 0.1 --samples_per_task -1 --dataset split_cifar10 --disc_iters 1 --mem_size 50 --suffix 'ER' --buffer_batch_size 10 --batch_size 10 --hyper_search
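
Similarly, --method SVGD evolves the memory with Stein variational gradient descent: each memory sample acts as a particle that is moved by a kernel-weighted average of the loss gradients plus a repulsive term that keeps the particles diverse. The sketch below uses an RBF kernel with the standard median-heuristic bandwidth; again, names and hyperparameters are illustrative, not the repository's exact implementation.

import math
import torch
import torch.nn.functional as F

def evolve_memory_svgd(model, mem_x, mem_y, eta=0.01):
    n = mem_x.size(0)

    # Driving term: per-sample loss gradients (push particles toward high loss).
    x = mem_x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), mem_y, reduction='sum')
    grad, = torch.autograd.grad(loss, x)

    # RBF kernel between particles with median-heuristic bandwidth.
    # Only the first argument carries gradients, so differentiating the
    # kernel sum yields (minus) the repulsive term of the SVGD update.
    x = x.detach().requires_grad_(True)
    flat = x.view(n, -1)
    dist2 = (flat.unsqueeze(1) - flat.detach().unsqueeze(0)).pow(2).sum(-1)
    h = (dist2.detach().median() / (2.0 * math.log(n + 1.0))).clamp(min=1e-8)
    k = torch.exp(-dist2 / (2.0 * h))
    grad_k, = torch.autograd.grad(k.sum(), x)

    # SVGD update: phi_i = (1/n) * sum_j [k(x_j, x_i) * grad_j + repulsion_j].
    phi = (k.detach() @ grad.view(n, -1) - grad_k.view(n, -1)) / n
    return (x.detach() + eta * phi.view_as(x)).detach()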

ER baseline + SGLD on CIFAR100:

python er_main.py --method SGLD --lr 0.1 --samples_per_task -1 --dataset split_cifar100 --disc_iters 3 --mem_size 50 --suffix 'ER' --buffer_batch_size 10 --batch_size 10 --hyper_search

ER baseline + SVGD on CIFAR100:

python er_main.py --method SVGD --lr 0.1 --samples_per_task -1 --dataset split_cifar100 --disc_iters 3 --mem_size 50 --suffix 'ER' --buffer_batch_size 10 --batch_size 10 --hyper_search

ER baseline + SGLD on mini-ImageNet:

python er_main.py --method SGLD --lr 0.1 --samples_per_task -1 --dataset miniimagenet --disc_iters 3 --mem_size 100 --suffix 'ER' --buffer_batch_size 10 --batch_size 10 --hyper_search

ER baseline + SVGD on mini-ImageNet:

python er_main.py --method SVGD --lr 0.1 --samples_per_task -1 --dataset miniimagenet --disc_iters 3 --mem_size 100 --suffix 'ER' --buffer_batch_size 10 --batch_size 10 --hyper_search

Acknowledgements

We would like to thank the authors of the following repositories:

Cite

@inproceedings{wang2022,
  title={Improving Task-free Continual Learning by Distributionally Robust Memory Evolution},
  author={Wang, Zhenyi and Shen, Li and Fang, Le and Suo, Qiuling and Duan, Tiehang and Gao, Mingchen},
  booktitle={International Conference on Machine Learning},
  year={2022}
}

Questions?

For general questions, contact Zhenyi Wang.
