
Implementation of "VS: Reconstructing Clothed 3D Human from Single Image via Vertex Shift"



VS: Reconstructing Clothed 3D Human from Single Image via Vertex Shift

Leyuan Liu, Yuhan Li, Yunqi Gao, Changxin Gao, Yuanyuan Liu, Jingying Chen


Various applications require high-fidelity and artifact-free 3D human reconstructions. However, current implicit function-based methods inevitably produce artifacts, while existing deformation methods struggle to reconstruct high-fidelity humans wearing loose clothing. In this paper, we propose a two-stage deformation method named Vertex Shift (VS) for reconstructing clothed 3D humans from single images. Specifically, VS first stretches the estimated SMPL-X mesh into a coarse 3D human model using shift fields inferred from normal maps, then refines the coarse 3D human model into a detailed 3D human model via a graph convolutional network embedded with implicit-function-learned features. This "stretch-refine" strategy addresses both the large deformations required for reconstructing loose clothing and the delicate deformations needed to recover intricate and detailed surfaces, achieving high-fidelity reconstructions that faithfully convey the pose, clothing, and surface details of the input images. The graph convolutional network's ability to exploit neighborhood vertices, coupled with the advantages inherited from deformation methods, ensures that VS rarely produces artifacts like distortions and non-human shapes and never produces artifacts like holes, broken parts, and dismembered limbs. As a result, VS can reconstruct high-fidelity and artifact-free clothed 3D humans from single images, even under scenarios of challenging poses and loose clothing. Experimental results on three benchmarks and two in-the-wild datasets demonstrate that VS significantly outperforms current state-of-the-art methods.
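To make the "stretch-refine" idea above concrete, here is a minimal, hypothetical sketch (not the authors' code) of the core vertex-shift operation: each mesh vertex is displaced along its unit normal by a scalar shift, applied once with large coarse shifts and once with small refinement shifts. In VS itself the shifts are inferred from normal maps and learned features; here they are hard-coded toy values.

```python
# Minimal sketch of a two-stage vertex shift (illustrative only).
# Each vertex v is moved along its unit normal n by a scalar shift s:
#     v' = v + s * n

def shift_vertices(vertices, normals, shifts):
    """Displace each vertex along its normal by the matching scalar shift."""
    return [
        tuple(v_i + s * n_i for v_i, n_i in zip(v, n))
        for v, n, s in zip(vertices, normals, shifts)
    ]

# Toy mesh: two vertices with axis-aligned unit normals.
verts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
norms = [(0.0, 0.0, 1.0), (1.0, 0.0, 0.0)]

# Stage 1: coarse "stretch" with large shifts (e.g. loose clothing).
coarse = shift_vertices(verts, norms, [0.5, 0.1])
# Stage 2: small "refine" corrections on the coarse result.
refined = shift_vertices(coarse, norms, [-0.1, 0.0])

print(refined)  # → [(0.0, 0.0, 0.4), (1.1, 0.0, 0.0)]
```

In the actual method the refinement stage is a graph convolutional network that predicts per-vertex corrections from neighborhood structure rather than fixed scalars, but the deformation primitive is the same: vertices move, so the mesh topology stays watertight and limbs can never detach.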

Qualitative Results

Citation

Please consider citing the paper if you find the code useful in your research.

@InProceedings{VS_CVPR2024,
  author = {Liu, Leyuan and Li, Yuhan and Gao, Yunqi and Gao, Changxin and Liu, Yuanyuan and Chen, Jingying},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, 
  title = {{VS}: Reconstructing Clothed 3D Human from Single Image via Vertex Shift}, 
  year = {2024},
  pages = {10498-10507}
}

The paper can be downloaded from here.

Installation

Environment

Install "Manifold"

This code relies on the Robust Watertight Manifold Software. First cd into the location where you wish to install the software. For example, we used cd ~/code. Then follow the installation instructions in the Watertight README. If you installed Manifold in a path other than ~/code/Manifold/build, update the path in the code accordingly (see this line).
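For convenience, a typical CMake build of Manifold looks like the following (these commands paraphrase the Manifold repository's own README; the ~/code location is just the example used above):

```shell
# Clone and build the Robust Watertight Manifold Software.
cd ~/code
git clone https://github.com/hjwdzh/Manifold.git
cd Manifold
mkdir build && cd build
cmake .. -DCMAKE_BUILD_TYPE=Release
make
```

If the build succeeds, the binaries land in ~/code/Manifold/build, which matches the default path this repo expects.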

cd VS
conda env create -f environment.yaml
conda activate VS
pip install -r requirements.txt

Download Pre-trained model and Related SMPL-X data

Link: https://pan.baidu.com/s/1GDk1d6p5FEzd4Y1mSY9UTg

Access code: vsvs

Place latest_net.pth under ./VS/Mr/checkpoints/debug/, pifuhd.pt under ./VS/pifuhd_ori/, and the data folder under ./VS/.

Quick Start

python -m apps.infer -in_dir ./examples -out_dir ./results

Acknowledgements

Note that the *** code of this repo is based on ***. We thank the authors for their great work!

Contact

We are still updating the code. If you have any trouble using this repo, please do not hesitate to e-mail Leyuan Liu ([email protected]) or Yuhan Li ([email protected]).
