
Code for the WACV'24 paper "Continual Learning of Unsupervised Monocular Depth from Videos"

CUDE-MonoDepthCL

We propose the CUDE framework for Continual Unsupervised Depth Estimation, comprising multiple tasks with domain and depth-range shifts, spanning various weather and lighting conditions as well as sim-to-real, indoor-to-outdoor, and outdoor-to-indoor scenarios. We also propose MonoDepthCL, a method to mitigate catastrophic forgetting in CUDE.

Install

MonoDepthCL was trained on a Tesla V100 GPU for 20 epochs per task, using the AdamW optimizer at a resolution of 192 × 640 and a batch size of 12. The docker environment used can be set up as follows:

git clone https://github.com/NeurAI-Lab/MIMDepth.git
cd MIMDepth
make docker-build
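
If the repository's Makefile does not include a target for starting the container, it can typically be launched manually along the following lines (the image name is a placeholder determined by the Makefile, and GPU access assumes the NVIDIA Container Toolkit is installed):

docker run --gpus all -it --rm <image-name> bash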

Training

MonoDepthCL is trained in a self-supervised manner from videos. For training, pass either a .yaml config file or a .ckpt model checkpoint file to train.py:

python train.py <config_file.yaml or model_checkpoint.ckpt>
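
For example, a task could be trained from a config file, or a run resumed from a saved checkpoint (both file names below are placeholders, not files shipped with the repository):

python train.py configs/task1_kitti.yaml
python train.py checkpoints/task1_last.ckpt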

The splits used within the CUDE benchmark can be found in the splits folder.

Cite Our Work

If you find the code useful in your research, please consider citing our paper:

@inproceedings{cude2024wacv,
	author={H. {Chawla} and A. {Varma} and E. {Arani} and B. {Zonooz}},
	booktitle={Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision},
	title={Continual Learning of Unsupervised Monocular Depth from Videos},
	year={2024}
}

License

This project is licensed under the terms of the MIT license.
