This repository contains the code to reproduce the results of the Deep Learning course project "How good MVSNets are at Depth Fusion?".

It is based on the CVPR 2020 paper:
*Fast-MVSNet: Sparse-to-Dense Multi-View Stereo With Learned Propagation and Gauss-Newton Refinement*

To get started, clone the repository and install the dependencies:

```bash
git clone https://github.com/molspace/FastMVS_experiments
cd FastMVS_experiments
pip install -r requirements.txt
```
- Download the preprocessed DTU training data from MVSNet and unzip it to `data/dtu`.
- Train the network. Three training scripts are provided (a sketch contrasting the two gt_depth variants is given after this list):

  - `python fastmvsnet/train.py --cfg configs/dtu.yaml` for the original FastMVSNet;
  - `python fastmvsnet/train1.py --cfg configs/dtu.yaml` for FastMVSNet with gt_depth added directly to the input as another dimension;
  - `python fastmvsnet/train2.py --cfg configs/dtu.yaml` for FastMVSNet with gt_depth features extracted separately and concatenated to the image features.

  You can change the batch size in the configuration file to match your machine.
- Download the rectified images from the DTU benchmark and unzip them to `data/dtu/Eval`.
- Test with the pretrained model (the trailing `TEST.WEIGHT` pair is a command-line config override; see the note after this list):

  ```bash
  python fastmvsnet/test.py --cfg configs/dtu.yaml TEST.WEIGHT outputs/pretrained.pth
  ```
- Apply depth fusion with `tools/depthfusion.py` to merge the per-view depth maps into a complete point cloud (a minimal back-projection sketch is given below); please refer to MVSNet for more details:

  ```bash
  python tools/depthfusion.py -f dtu -n flow2
  ```
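To make the two gt_depth training variants concrete, here is a minimal PyTorch sketch of the two fusion strategies. All names (`EarlyFusionNet`, `LateFusionNet`, `feat_dim`, `depth_dim`) are hypothetical and the layers are far shallower than the actual FastMVSNet feature extractor; the sketch only illustrates where the ground-truth depth enters the network in `train1.py` versus `train2.py`.

```python
import torch
import torch.nn as nn

class EarlyFusionNet(nn.Module):
    """Sketch of the train1.py idea: gt_depth is stacked with the RGB image
    as a fourth input channel before any features are extracted."""
    def __init__(self, feat_dim=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(4, feat_dim, kernel_size=3, padding=1),  # 3 RGB + 1 depth
            nn.ReLU(inplace=True),
        )

    def forward(self, image, gt_depth):
        x = torch.cat([image, gt_depth], dim=1)  # (B, 4, H, W)
        return self.features(x)

class LateFusionNet(nn.Module):
    """Sketch of the train2.py idea: gt_depth goes through its own small
    encoder, and its features are concatenated with the image features."""
    def __init__(self, feat_dim=32, depth_dim=8):
        super().__init__()
        self.img_features = nn.Sequential(
            nn.Conv2d(3, feat_dim, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        self.depth_features = nn.Sequential(
            nn.Conv2d(1, depth_dim, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, image, gt_depth):
        f_img = self.img_features(image)           # (B, feat_dim, H, W)
        f_depth = self.depth_features(gt_depth)    # (B, depth_dim, H, W)
        return torch.cat([f_img, f_depth], dim=1)  # (B, feat_dim + depth_dim, H, W)

if __name__ == "__main__":
    image = torch.randn(2, 3, 64, 80)    # toy batch
    gt_depth = torch.randn(2, 1, 64, 80)
    print(EarlyFusionNet()(image, gt_depth).shape)  # torch.Size([2, 32, 64, 80])
    print(LateFusionNet()(image, gt_depth).shape)   # torch.Size([2, 40, 64, 80])
```

The early-fusion variant changes only the input channel count of the first convolution, while the late-fusion variant adds parameters but lets the depth branch learn its own representation before it is mixed with appearance features.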
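A note on the `TEST.WEIGHT outputs/pretrained.pth` part of the test command: `KEY VALUE` pairs after the `--cfg` flag override individual entries of the YAML configuration from the command line. The snippet below is a minimal sketch of that pattern using the yacs library; whether this repository uses yacs or a similar config system is an assumption here.

```python
# Minimal sketch of yacs-style "KEY VALUE" command-line overrides.
# Assumption: the repo's config behaves like yacs; only TEST.WEIGHT
# appears in the README command, everything else is illustrative.
from yacs.config import CfgNode as CN

cfg = CN()
cfg.TEST = CN()
cfg.TEST.WEIGHT = ""  # default: no checkpoint path

# Equivalent of appending "TEST.WEIGHT outputs/pretrained.pth" to the CLI call:
cfg.merge_from_list(["TEST.WEIGHT", "outputs/pretrained.pth"])
print(cfg.TEST.WEIGHT)  # -> outputs/pretrained.pth
```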
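Finally, for intuition about what `tools/depthfusion.py` produces: depth fusion back-projects every pixel of every per-view depth map into world space and keeps only the points that are consistent across views. The NumPy sketch below shows just the back-projection step; it is not the repository's implementation, and the real script also performs the cross-view consistency filtering described in the MVSNet paper.

```python
import numpy as np

def backproject(depth, K, cam_to_world):
    """Lift an (H, W) depth map into world-space 3D points.

    depth        -- (H, W) per-pixel depth along the camera z-axis
    K            -- (3, 3) camera intrinsic matrix
    cam_to_world -- (4, 4) camera-to-world extrinsic matrix
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))             # pixel grid
    pix = np.stack([u, v, np.ones_like(u)], 0).reshape(3, -1)  # homogeneous pixels
    cam_pts = np.linalg.inv(K) @ pix * depth.reshape(1, -1)    # rays scaled by depth
    cam_pts_h = np.vstack([cam_pts, np.ones((1, h * w))])      # to homogeneous coords
    return (cam_to_world @ cam_pts_h)[:3].T                    # (H*W, 3) world points

if __name__ == "__main__":
    depth = np.full((4, 5), 2.0)  # toy constant-depth map
    K = np.array([[2.0, 0.0, 2.0],
                  [0.0, 2.0, 1.5],
                  [0.0, 0.0, 1.0]])
    points = backproject(depth, K, np.eye(4))
    print(points.shape)  # (20, 3)
```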