- Code for the WACV'24 paper *Continual Learning of Unsupervised Monocular Depth from Videos* by Hemang Chawla, Arnav Varma, Elahe Arani, and Bahram Zonooz.
We propose CUDE, a framework for Continual Unsupervised Depth Estimation comprising multiple tasks with domain and depth-range shifts across various weather and lighting conditions, as well as sim-to-real, indoor-to-outdoor, and outdoor-to-indoor scenarios. We also propose MonoDepthCL, a method to mitigate catastrophic forgetting in CUDE.
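The exact forgetting-mitigation mechanism of MonoDepthCL is described in the paper. As a rough illustration of the general idea behind rehearsal-based continual learning (not the specific MonoDepthCL formulation), a training loop might interleave current-task batches with samples replayed from earlier tasks; the `loss_fn` and data loader below are hypothetical placeholders:

```python
import random


class ReplayBuffer:
    """Reservoir-style buffer holding a small set of samples from past tasks.
    Illustrative only; MonoDepthCL's actual strategy is defined in the paper."""

    def __init__(self, capacity=256):
        self.capacity = capacity
        self.samples = []
        self.seen = 0

    def add(self, sample):
        # Reservoir sampling keeps each seen sample with equal probability.
        if len(self.samples) < self.capacity:
            self.samples.append(sample)
        else:
            idx = random.randint(0, self.seen)
            if idx < self.capacity:
                self.samples[idx] = sample
        self.seen += 1

    def sample(self, k):
        return random.sample(self.samples, min(k, len(self.samples)))


def train_task(model, loader, buffer, optimizer, loss_fn):
    """One task of continual training: current batches plus replayed ones."""
    for batch in loader:
        optimizer.zero_grad()
        loss = loss_fn(model, batch)
        for old_batch in buffer.sample(k=1):
            loss = loss + loss_fn(model, old_batch)  # rehearse past tasks
        loss.backward()
        optimizer.step()
        buffer.add(batch)
```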
MonoDepthCL was trained on a Tesla V100 GPU for 20 epochs per task with the AdamW optimizer, at a resolution of 192 x 640 and a batch size of 12. The Docker environment used can be set up as follows:
```bash
git clone https://github.com/NeurAI-Lab/MIMDepth.git
cd MIMDepth
make docker-build
```
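For reference, the training setup described above (AdamW, 20 epochs per task, 192 x 640 resolution, batch size 12) corresponds roughly to the PyTorch boilerplate below. The network and learning rate are placeholders; the actual values come from the `.yaml` config consumed by `train.py`:

```python
import torch
import torch.nn as nn

# Values mirroring the setup described above; the real ones are read
# from the .yaml config consumed by train.py.
IMAGE_SIZE = (192, 640)    # (height, width)
BATCH_SIZE = 12
EPOCHS_PER_TASK = 20
LEARNING_RATE = 1e-4       # hypothetical; not stated in this README

# Stand-in depth network; the repository defines its own architecture.
depth_net = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU())
optimizer = torch.optim.AdamW(depth_net.parameters(), lr=LEARNING_RATE)
```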
MonoDepthCL is trained in a self-supervised manner from videos.
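Self-supervised depth from video is typically driven by a photometric reconstruction loss between a target frame and source frames warped via the predicted depth and pose (as in Monodepth2-style pipelines). The snippet below is a minimal sketch of the commonly used SSIM + L1 photometric error, not necessarily the exact loss used in this repository:

```python
import torch
import torch.nn.functional as F


def photometric_error(pred, target, alpha=0.85):
    """SSIM + L1 photometric error between a warped source frame and the
    target frame, both of shape [B, 3, H, W] with values in [0, 1].
    Minimal sketch of the standard self-supervised objective."""
    l1 = (pred - target).abs().mean(1, keepdim=True)

    # Simplified single-scale SSIM using 3x3 average pooling.
    mu_x = F.avg_pool2d(pred, 3, 1, 1)
    mu_y = F.avg_pool2d(target, 3, 1, 1)
    sigma_x = F.avg_pool2d(pred ** 2, 3, 1, 1) - mu_x ** 2
    sigma_y = F.avg_pool2d(target ** 2, 3, 1, 1) - mu_y ** 2
    sigma_xy = F.avg_pool2d(pred * target, 3, 1, 1) - mu_x * mu_y
    c1, c2 = 0.01 ** 2, 0.03 ** 2
    ssim = ((2 * mu_x * mu_y + c1) * (2 * sigma_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (sigma_x + sigma_y + c2))
    ssim = ((1 - ssim) / 2).clamp(0, 1).mean(1, keepdim=True)

    # Per-pixel photometric error map, shape [B, 1, H, W].
    return alpha * ssim + (1 - alpha) * l1
```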
For training, utilize a `.yaml` config file or a `.ckpt` model checkpoint file with `train.py`:
```bash
python train.py <config_file.yaml or model_checkpoint.ckpt>
```
The splits used within the CUDE benchmark can be found in the `splits` folder.
If you find the code useful in your research, please consider citing our paper:
```bibtex
@inproceedings{cude2024wacv,
  author    = {H. {Chawla} and A. {Varma} and E. {Arani} and B. {Zonooz}},
  booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision},
  title     = {Continual Learning of Unsupervised Monocular Depth from Videos},
  year      = {2024}
}
```
This project is licensed under the terms of the MIT license.