The following DTU and Blended MVS datasets can be readily ingested by our training pipeline:
- DTU Dataset (train & eval data, point clouds)
- Blended MVS Dataset (train & eval data, point clouds)
To work with other datasets, organize your data as shown below and refer to the Colmap tutorial on reconstructing a dense initial point cloud.
The data is organized as follows:
dtu_eval_data                      # DTU evaluation data
public_data
|-- <dataset_name>
    |-- <case_name>
        |-- cameras_sphere.npz     # camera parameters
        |-- image
            |-- 000.png            # target image for each view
            |-- 001.png
        |-- mask
            |-- 000.png            # masks used only during evaluation
            |-- 001.png
        ...
point_cloud_data
|-- <dataset_name>
    |-- <case_name>
        |-- dense
            |-- points.ply
            |-- points.ply.vis     # point cloud in Colmap output format
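As a quick sanity check of a prepared case, a small script like the one below can confirm that the expected files are in place. This is our own illustration (the check_case helper is not part of the codebase); the paths simply mirror the layout above.

import os
from glob import glob

def check_case(public_root, dataset, case):
    """Verify that a <dataset>/<case> folder matches the expected layout."""
    case_dir = os.path.join(public_root, dataset, case)
    assert os.path.isfile(os.path.join(case_dir, "cameras_sphere.npz")), "missing cameras_sphere.npz"
    images = sorted(glob(os.path.join(case_dir, "image", "*.png")))
    masks = sorted(glob(os.path.join(case_dir, "mask", "*.png")))
    print(f"{dataset}/{case}: {len(images)} images, {len(masks)} masks")

check_case("public_data", "bmvs", "bear")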
Here cameras_sphere.npz follows the data format in IDR, where world_mat_xx denotes the world-to-image projection matrix and scale_mat_xx denotes the normalization matrix.
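For reference, these matrices can be loaded and decomposed as in the minimal sketch below. It assumes the standard IDR convention described above (world_mat_xx = K [R|t] padded to 4x4) and relies only on numpy and OpenCV, which are listed in the dependencies; the variable names are our own.

import numpy as np
import cv2

data = np.load("public_data/bmvs/bear/cameras_sphere.npz")
idx = 0
world_mat = data[f"world_mat_{idx}"]   # 4x4 world-to-image projection (K @ [R|t], padded)
scale_mat = data[f"scale_mat_{idx}"]   # 4x4 normalization to the unit sphere

# Projection matrix in the normalized (unit-sphere) coordinate frame.
P = (world_mat @ scale_mat)[:3, :4]

# Recover intrinsics and camera center; OpenCV returns the center as a homogeneous 4-vector.
K, R, c = cv2.decomposeProjectionMatrix(P)[:3]
K = K / K[2, 2]
print("intrinsics:\n", K)
print("camera center:", (c[:3] / c[3]).ravel())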
Building CUDA extensions requires the Ninja build system. We also recommend ensuring that your system CUDA version matches or is newer than your PyTorch CUDA version before installing the CUDA extensions.
pip install -r requirements.txt
cd cuda_extensions
bash build_cuda_extensions.sh
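As an optional sanity check before building, you can compare the CUDA version PyTorch was built against with the toolkit that nvcc will use. This is our own sketch, not part of the build script, and it assumes nvcc is on your PATH.

import subprocess
import torch

# CUDA version PyTorch was compiled against, e.g. "12.1".
print("PyTorch CUDA:", torch.version.cuda)

# CUDA toolkit version reported by nvcc (used when compiling the extensions).
print(subprocess.run(["nvcc", "--version"], capture_output=True, text=True).stdout)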
Dependencies
- joblib==1.3.2
- matplotlib==3.8.2
- numpy==2.1.1
- open3d==0.18.0
- opencv_python==4.9.0.80
- pandas==2.2.2
- point_cloud_utils==0.30.4
- pyhocon==0.3.60
- PyMCubes==0.1.4
- pyntcloud==0.3.1
- scikit_learn==1.4.0
- scipy==1.14.1
- torch==2.2.0
- tqdm==4.66.1
- trimesh==4.1.3
For training and evaluation on all DTU/BMVS scenes:
- Training
bash train_dtu.sh
bash train_bmvs.sh
- Evaluation
bash eval_meshes_dtu.sh
bash eval_meshes_bmvs.sh
To evaluate the extracted meshes at different iterations, pass the corresponding mesh filename {iter_steps}.ply via the --mesh_name argument in the corresponding .sh file.
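To see which iteration checkpoints are available for a given case, something like the following lists the extracted mesh filenames. The list_meshes helper is our own, and it assumes the exp/<case>/meshes layout shown later in this section.

import os
from glob import glob

def list_meshes(case="bmvs/bear", exp_root="exp"):
    """Print the mesh filenames that can be passed to --mesh_name."""
    for path in sorted(glob(os.path.join(exp_root, case, "meshes", "*.ply"))):
        print(os.path.basename(path))

list_meshes()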
For working with a single DTU/BMVS scene (replace bmvs/bear with any {dataset}/{case}):
- Training
python exp_runner.py \
--conf ./confs/bmvs.conf \
--case bmvs/bear \
--mode train
- Extract mesh from trained model
python exp_runner.py \
--conf ./confs/bmvs.conf \
--case bmvs/bear \
--mode validate_mesh \
--mesh_resolution 1024 \
--is_continue
The extracted mesh can be found at exp/bmvs/bear/meshes/<iter_steps>.ply.
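Once extracted, the mesh can be inspected with trimesh (already listed in the dependencies). This is a minimal sketch of our own; <iter_steps> is a placeholder for an actual checkpoint name.

import trimesh

# Replace <iter_steps> with an actual checkpoint name from the meshes/ folder.
mesh = trimesh.load("exp/bmvs/bear/meshes/<iter_steps>.ply")
print(f"{len(mesh.vertices)} vertices, {len(mesh.faces)} faces, watertight={mesh.is_watertight}")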
- Render Image
python exp_runner.py \
--conf ./confs/bmvs.conf \
--case bmvs/bear \
--mode render \
--image_idx 0 \
--is_continue
The rendered image can be found at exp/bmvs/bear/renders/<iter_steps>.png.
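To compare a render against the corresponding input view, a simple PSNR check with OpenCV could look like the sketch below. This is our own illustration: the filenames are placeholders, and it assumes the render and the target view share the same resolution.

import cv2
import numpy as np

# Placeholders: use the actual render checkpoint and the matching input view.
render = cv2.imread("exp/bmvs/bear/renders/<iter_steps>.png").astype(np.float32)
target = cv2.imread("public_data/bmvs/bear/image/000.png").astype(np.float32)

mse = np.mean((render - target) ** 2)
psnr = 10.0 * np.log10(255.0 ** 2 / mse)
print(f"PSNR: {psnr:.2f} dB")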
This codebase builds upon a simplified version of NeuS, which makes use of code snippets borrowed from IDR and NeRF-pytorch.
Our custom CUDA extensions are adapted from the libigl C++ implementation of the fast winding number.
For DTU evaluations, we use a Python implementation of the original DTU evaluation code; for Blended MVS evaluations, we use a modified version of the DTU evaluation code with ground truth point clouds from Gaussian surfels. Our mesh cleaning code is borrowed from SparseNeuS.
Thanks to all of these great projects.