Implementation of 3D Shape Generation and Completion Through Point-Voxel Diffusion
Linqi Zhou, Yilun Du, Jiajun Wu
Set up a conda environment and install the dependencies:
conda create -n pvd python=3.6 cudatoolkit=11.1.* -c nvidia
conda activate pvd
conda install pytorch torchvision cudatoolkit=11.1 -c pytorch
pip install kaolin==0.1.0 pytorch3d==0.2.5 xmltodict==0.12.0 numba==0.51.2 pycuda==2019.1.2 matplotlib
pip install torch-scatter==2.0.4 torch-sparse==0.6.1 torch-cluster==1.5.4 torch-spline-conv==1.2.0 descartes==1.1.0 fire==0.3.1 jupyter==1.0.0 opencv_python==4.3.0.* Shapely==1.7.0 Pillow==6.2.1 torch_geometric==1.6.0 open3d trimesh ninja
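After installing the packages above, a quick import check can catch a broken environment early. The snippet below is a minimal sketch that only touches packages listed above; it asserts nothing about this repository itself.

# Minimal environment sanity check (a sketch; only uses packages installed above).
import torch
import pytorch3d
import kaolin

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("pytorch3d:", getattr(pytorch3d, "__version__", "unknown"))
print("kaolin:", getattr(kaolin, "__version__", "unknown"))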
Install PyTorchEMD by running:
cd metrics/PyTorchEMD
python setup.py install
cp build/**/emd_cuda.cpython-36m-x86_64-linux-gnu.so .
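After the build, you can check that the compiled extension loads and computes a distance. The snippet below is a rough sketch: the emd_cuda module name comes from the .so copied above, while the metrics.PyTorchEMD.emd import path, the earth_mover_distance name, and its transpose argument are assumptions based on the upstream PyTorchEMD project, so adjust them if this repository's wrapper differs.

# Rough sanity check for the compiled EMD extension (a sketch; run from the repo root).
import torch
import emd_cuda  # the .so copied above; an ImportError here means the build failed

# Assumed wrapper name/signature, following the upstream PyTorchEMD project.
from metrics.PyTorchEMD.emd import earth_mover_distance

p1 = torch.rand(4, 1024, 3).cuda()
p2 = torch.rand(4, 1024, 3).cuda()
d = earth_mover_distance(p1, p2, transpose=False)  # inputs already shaped (B, N, 3)
print(d.shape)  # expected: one distance per batch element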
The code was tested on Ubuntu with a Titan RTX.
For generation, we use the ShapeNet point cloud dataset, which can be downloaded here.
For completion, we use the ShapeNet renderings provided by GenRe.
We provide the script convert_cam_params.py to process the provided data.
For training the model on shape completion, we need camera parameters for each view, which are not directly available. To obtain them, simply run
$ python convert_cam_params.py --dataroot DATA_DIR --mitsuba_xml_root XML_DIR
which will create a ..._cam_params.npz file for each view in each provided data folder.
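To confirm the conversion worked, you can open one of the generated files and list what it stores. The sketch below only relies on standard NumPy npz loading; the file name is a placeholder and the stored keys depend on convert_cam_params.py.

# Inspect a generated camera-parameter file (a sketch; the path is a placeholder).
import numpy as np

params = np.load("DATA_DIR/<view>_cam_params.npz")
for key in params.files:
    print(key, params[key].shape, params[key].dtype)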
Pretrained models can be downloaded here.
To train the generation model on a single category, run
$ python train_generation.py --category car|chair|airplane
Please refer to the Python file for the optimal training parameters.
To evaluate generation with a trained or pretrained model, run
$ python test_generation.py --category car|chair|airplane --model MODEL_PATH
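To evaluate all three categories in one pass, a small driver can loop over the same command. This is only a convenience sketch wrapping the call shown above; the checkpoint paths are placeholders for your own model files.

# Evaluate every category in sequence (a sketch; checkpoint paths are placeholders).
import subprocess

checkpoints = {
    "car": "MODEL_PATH_CAR",
    "chair": "MODEL_PATH_CHAIR",
    "airplane": "MODEL_PATH_AIRPLANE",
}
for category, model_path in checkpoints.items():
    subprocess.run(
        ["python", "test_generation.py", "--category", category, "--model", model_path],
        check=True,
    )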
Some generation and completion results are as follows.
Multimodal completion on a ShapeNet chair.
Multimodal completion on PartNet.
Multimodal completion on two Redwood 3DScan chairs.
If you find our work useful, please cite:
@inproceedings{Zhou_2021_ICCV,
    author    = {Zhou, Linqi and Du, Yilun and Wu, Jiajun},
    title     = {3D Shape Generation and Completion Through Point-Voxel Diffusion},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {5826-5835}
}
For any questions related to the code and experiment settings, please contact Linqi Zhou and Yilun Du.