
[CVPR2022] Unsupervised Homography Estimation with Coplanarity-Aware GAN

Mingbo Hong¹,², Yuhang Lu¹,³, Nianjin Ye¹, Chunyu Lin⁴, Qijun Zhao², Shuaicheng Liu⁵,¹

¹Megvii Technology, ²Sichuan University, ³University of South Carolina

⁴Beijing Jiaotong University, ⁵University of Electronic Science and Technology of China

This is the official implementation of HomoGAN (CVPR 2022). [PDF]

Presentation Video:

[Bilibili] [YouTube]

Summary

Pipeline

(Figure: overview of the HomoGAN pipeline)

Dependencies

pip install -r requirements.txt

Download the Deep Homography Dataset

Please refer to Content-Aware Unsupervised Deep Homography Estimation.

  • Download raw dataset
# Google Drive
https://drive.google.com/file/d/19d2ylBUPcMQBb_MNBBGl9rCAS7SU-oGm/view?usp=sharing
# BaiduYun
https://pan.baidu.com/s/1Dkmz4MEzMtBx-T7nG0ORqA (key: gvor)
  • Unzip the data to directory "./dataset"

  • Run "video2img.py"

Be sure to scale each image to (640, 360), since the point coordinates are defined in the (640, 360) coordinate system, e.g. img = cv2.resize(img, (640, 360)).
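For reference, a minimal sketch of this extraction-and-resize step (the input/output paths here are hypothetical; "video2img.py" is the authoritative version):

import cv2

# Extract frames from one video and resize each to (640, 360),
# matching the dataset's point coordinate system.
cap = cv2.VideoCapture("./dataset/example.avi")  # hypothetical input path
idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame = cv2.resize(frame, (640, 360))  # cv2.resize expects (width, height)
    cv2.imwrite(f"./dataset/img/{idx:06d}.jpg", frame)  # hypothetical output dir
    idx += 1
cap.release()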

Pre-trained model

The models provided below are retrained versions (with minor differences in quantitative results). RE, LT, LL, SF, and LF denote the dataset's five scene categories: regular, low texture, low light, small foreground, and large foreground.
Model        RE    LT    LL    SF    LF    Avg   Weights
Pre-trained  0.24  0.47  0.59  0.62  0.43  0.47  [Baidu] [Google]
Fine-tuning  0.22  0.38  0.57  0.47  0.30  0.39  [Baidu] [Google]

How to test?

python evaluate.py --model_dir ./experiments/HomoGAN/ --restore_file xxx.pth

How to train?

You may need to modify ./dataset/data_loader.py slightly for your environment; you can also refer to Content-Aware Unsupervised Deep Homography Estimation.

Pre-training:

1) Set "pretrain_phase" in ./experiments/HomoGAN/params.json to true
2) python train.py --model_dir ./experiments/HomoGAN/

Fine-tuning:

1) Set "pretrain_phase" in ./experiments/HomoGAN/params.json to false (or use the snippet after this list)
2) python train.py --model_dir ./experiments/HomoGAN/ --restore_file xxx.pth
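
If you prefer to toggle the flag programmatically, here is a minimal sketch; it assumes only that params.json is plain JSON containing a "pretrain_phase" key, as described above:

import json

# Flip the training-phase flag in ./experiments/HomoGAN/params.json.
params_path = "./experiments/HomoGAN/params.json"
with open(params_path) as f:
    params = json.load(f)

params["pretrain_phase"] = False  # True for pre-training, False for fine-tuning

with open(params_path, "w") as f:
    json.dump(params, f, indent=4)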

Citation

If you use this code or ideas from the paper for your research, please cite our paper:

@InProceedings{Hong_2022_CVPR,
    author    = {Hong, Mingbo and Lu, Yuhang and Ye, Nianjin and Lin, Chunyu and Zhao, Qijun and Liu, Shuaicheng},
    title     = {Unsupervised Homography Estimation With Coplanarity-Aware GAN},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2022},
    pages     = {17663-17672}
}

Acknowledgments

In this project we use (parts of) the official implementations of other works; we thank the respective authors for open-sourcing their methods.
