This repository is the implementation of the paper:
"Learning Visibility Field for Detailed 3D Human Reconstruction and Relighting" by Ruichen Zheng, Peng Li, Haoqian Wang, Tao Yu
WARNING: This is a messy research repo with limited code quality. We have only tested it on Linux (Ubuntu 22.04).

We provide a synthesized data sample and pretrained model weights that should run out of the box. The real-world results presented in the paper rely on code specific to our in-house capture system and therefore cannot be included.

If you find our work useful, please consider citing our paper:
```
@InProceedings{Zheng_2023_CVPR,
    author    = {Zheng, Ruichen and Li, Peng and Wang, Haoqian and Yu, Tao},
    title     = {Learning Visibility Field for Detailed 3D Human Reconstruction and Relighting},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2023},
    pages     = {216-226}
}
```
```
git clone --recursive https://github.com/pengHTYX/VisRecon.git -b release --depth=1
```

If you forgot `--recursive`, run

```
git submodule update --init --recursive
```

to clone pybind11.
- Create conda environment

  ```
  conda create --name vis-fuse -y python=3.10
  conda activate vis-fuse
  ```
- Install PyTorch. The snippet below is an example; please follow the official instructions.

  ```
  conda install pytorch torchvision torchaudio pytorch-cuda=12.1 -c pytorch -c nvidia
  ```
- Install common Python packages

  ```
  pip install -r requirements.txt
  ```
- Install libigl

  ```
  python -m pip install libigl
  ```
- Install our custom C++ lib (requires CUDA); a quick import check is sketched after this list.

  ```
  cd vis_fuse_utils
  python setup.py install
  ```
- (Optional) Additional dependencies; not needed to run the demo.
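To confirm the custom extension built and installed correctly, here is a minimal import check (a sketch, assuming the module installs under the name `vis_fuse_utils`):

```python
# Sanity check: the import fails if the CUDA extension did not build/install.
# Assumes the package is importable as `vis_fuse_utils` (the directory name).
import vis_fuse_utils
print("vis_fuse_utils loaded from:", vis_fuse_utils.__file__)
```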
- Download the data and pretrained model weights from the release page and unzip. Move `sample_data_thuman` to a desired location (the following instructions refer to it as `/path/to/sample_data_thuman`). Move `out` to the project root folder.
- Download `env_sh.npy` from the PIFu repo and place it under `implicit` (a sanity-check sketch follows this list).
- Render views (RGB-D)

  ```
  python thuman_renderer.py --data_folder /path/to/sample_data_thuman
  ```
- Reconstruct and save results to `out/vis_fuse/test/4/#`

  ```
  python train.py --config configs/vis_fuse.json --save --test --data_folder /path/to/sample_data_thuman
  ```
- Visualize using the interactive viewer

  ```
  python prt_render_gui.py --window glfw
  ```
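As a quick sanity check on the downloaded `env_sh.npy`, the snippet below just loads it and prints its shape. The expected shape is an assumption on our part: PIFu stores second-order spherical-harmonics lighting coefficients, commonly as an `(N, 9, 3)` RGB array.

```python
# Inspect the downloaded spherical-harmonics environment file.
# The (N, 9, 3) shape is an assumption: 9 second-order SH coefficients
# per environment, one set per RGB channel.
import numpy as np

sh = np.load("implicit/env_sh.npy")
print("env_sh shape:", sh.shape)
```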
- Request THuman2.0, download it, and make a train-val-test split.
- Generate random poses for each model, save them as `cams.mat` in each subfolder (or use another format of your liking), and render views. We do not include code to generate random poses; a possible sketch follows this list.

  ```
  python thuman_renderer.py --data_folder /path/to/training_dataset
  ```
- Generate occlusion samples

  ```
  python thuman_gen.py --data_folder /path/to/training_dataset
  ```
- Visualize and verify training data

  ```
  python thuman.py --data_folder /path/to/training_dataset
  ```
- Refer to `config.py` to write a config (e.g. `my_config.json`) and save it to `configs` (a minimal example follows this list).
- Train

  ```
  python train.py --config configs/my_config.json
  ```
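Since pose-generation code is not included, below is one possible sketch: it samples random look-at cameras on a sphere around the model and writes them to `cams.mat`. The `.mat` keys (`R`, `t`) and the pose parameterization are assumptions on our part; consult `thuman_renderer.py` for the fields and conventions it actually expects.

```python
# Hypothetical pose generator: samples random look-at cameras on a sphere
# around the model and saves them to cams.mat. The .mat keys ("R", "t")
# are assumptions -- check thuman_renderer.py for the exact fields.
import numpy as np
from scipy.io import savemat

def random_camera(radius=2.0):
    theta = np.random.uniform(0.0, 2.0 * np.pi)   # azimuth
    phi = np.random.uniform(-0.2, 0.2)            # mild elevation
    eye = radius * np.array([
        np.cos(phi) * np.sin(theta),
        np.sin(phi),
        np.cos(phi) * np.cos(theta),
    ])
    forward = -eye / np.linalg.norm(eye)          # camera looks at the origin
    right = np.cross(forward, np.array([0.0, 1.0, 0.0]))
    right /= np.linalg.norm(right)
    up = np.cross(right, forward)
    R = np.stack([right, up, -forward])           # world-to-camera rotation
    t = -R @ eye                                  # world-to-camera translation
    return R, t

poses = [random_camera() for _ in range(4)]
Rs = np.stack([R for R, _ in poses])
ts = np.stack([t for _, t in poses])
savemat("cams.mat", {"R": Rs, "t": ts})           # hypothetical key names
```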
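To write `my_config.json`, one low-friction starting point is to copy the shipped `configs/vis_fuse.json` and override fields. This is only a sketch: the overridden key below is hypothetical, and `config.py` remains the authoritative list of fields.

```python
# Derive my_config.json from the shipped demo config. The overridden
# key ("batch_size") is a hypothetical example -- the real field names
# are defined in config.py.
import json

with open("configs/vis_fuse.json") as f:
    cfg = json.load(f)

cfg["batch_size"] = 2  # hypothetical field; verify against config.py

with open("configs/my_config.json", "w") as f:
    json.dump(cfg, f, indent=2)
```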