
FastMVS experiments

This repository contains the code to reproduce the results of the Deep Learning course project "How good MVSNets are at Depth Fusion?".

It is based on the CVPR 2020 paper:

Fast-MVSNet: Sparse-to-Dense Multi-View Stereo With Learned Propagation and Gauss-Newton Refinement

How to use

git clone https://github.com/molspace/FastMVS_experiments

Installation

pip install -r requirements.txt

Training

  • Download the preprocessed DTU training data from MVSNet and unzip it to data/dtu.

  • Train the network

    python fastmvsnet/train.py --cfg configs/dtu.yaml - trains the original FastMVSNet

    python fastmvsnet/train1.py --cfg configs/dtu.yaml - trains FastMVSNet with gt_depth added directly to the input as an extra channel

    python fastmvsnet/train2.py --cfg configs/dtu.yaml - trains FastMVSNet with gt_depth features extracted separately and concatenated to the image features

    You can change the batch size in the configuration file to fit your GPU memory.
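The difference between the train1.py and train2.py variants above can be sketched as follows. This is a minimal NumPy illustration, not the repository's actual code; the tensor shapes and the toy feature extractor are assumptions:

```python
import numpy as np

# Toy inputs: a 3-channel RGB image and a 1-channel ground-truth depth map.
H, W = 8, 8
image = np.random.rand(3, H, W).astype(np.float32)
gt_depth = np.random.rand(1, H, W).astype(np.float32)

# Variant 1 (train1.py): gt_depth is appended to the network input
# as an extra channel, so the first layer sees a 4-channel tensor.
early_fusion = np.concatenate([image, gt_depth], axis=0)
print(early_fusion.shape)  # (4, 8, 8)

# Variant 2 (train2.py): features are extracted from the image and from
# gt_depth by separate (here: toy) extractors, then concatenated.
def toy_features(x, out_channels):
    """Stand-in for a conv feature extractor: random projection over channels."""
    w = np.random.rand(out_channels, x.shape[0]).astype(np.float32)
    return np.einsum('oc,chw->ohw', w, x)

img_feat = toy_features(image, 16)
depth_feat = toy_features(gt_depth, 8)
late_fusion = np.concatenate([img_feat, depth_feat], axis=0)
print(late_fusion.shape)  # (24, 8, 8)
```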

Testing

  • Download the rectified images from the DTU benchmark and unzip them to data/dtu/Eval.

  • Test with the pretrained model

    python fastmvsnet/test.py --cfg configs/dtu.yaml TEST.WEIGHT outputs/pretrained.pth

Depth Fusion

Apply depth fusion with tools/depthfusion.py to obtain the complete point cloud. Please refer to MVSNet for more details.

python tools/depthfusion.py -f dtu -n flow2
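Conceptually, depth fusion back-projects each per-view depth map into 3D using the camera intrinsics and merges the resulting points across views. A minimal sketch of the back-projection step, assuming a simple pinhole model (illustrative only, not the code in tools/depthfusion.py):

```python
import numpy as np

def backproject(depth, K):
    """Lift a depth map (H, W) to camera-space 3D points (H*W, 3)
    using pinhole intrinsics K: X = d * K^-1 @ [u, v, 1]^T."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=0).reshape(3, -1)  # (3, H*W)
    rays = np.linalg.inv(K) @ pix.astype(np.float64)                # (3, H*W)
    pts = rays * depth.reshape(1, -1)                               # scale rays by depth
    return pts.T

# Example: a flat plane at depth 2 with simple intrinsics.
K = np.array([[100.0, 0.0, 4.0],
              [0.0, 100.0, 4.0],
              [0.0, 0.0, 1.0]])
points = backproject(np.full((8, 8), 2.0), K)
print(points.shape)  # (64, 3); every point has z = 2.0
```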
