This is the source code for PVN3D: A Deep Point-wise 3D Keypoints Voting Network for 6DoF Pose Estimation (PDF, Video).
- Install CUDA 9.0 or CUDA 10.0
- Set up the Python environment from requirement.txt:
pip3 install -r requirement.txt
- Install tkinter through
sudo apt install python3-tk
- Install python-pcl.
- Install PointNet++:
python3 setup.py build_ext
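An optional sanity check after installation (a minimal sketch; it assumes PyTorch was installed via requirement.txt and only verifies that CUDA is visible to PyTorch, not the compiled PointNet++ ops themselves):
python3 -c "import torch; print(torch.__version__, torch.cuda.is_available())"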
- Download the YCB-Video Dataset from PoseCNN. Unzip it and link the unzipped YCB_Video_Dataset to pvn3d/datasets/ycb/YCB_Video_Dataset:
ln -s path_to_unziped_YCB_Video_Dataset pvn3d/datasets/ycb
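If the link is set up correctly, the dataset should be reachable under pvn3d/datasets/ycb/. A quick check (the subfolder names in the comment follow the usual YCB-Video layout and are an assumption, not something this repository guarantees):
ls pvn3d/datasets/ycb/YCB_Video_Dataset
# expect subfolders such as data/, data_syn/, models/ and image_sets/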
- Preprocess the validation set to speed up training:
cd pvn3d
python3 -m datasets.ycb.preprocess_testset
- Start training on the YCB-Video Dataset by:
chmod +x ./train_ycb.sh
./train_ycb.sh
The trained model checkpoints are stored in train_log/ycb/checkpoints/.
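To pick a checkpoint for evaluation later, you can list what has been saved so far (the exact filenames depend on the training script and are not fixed here):
ls train_log/ycb/checkpoints/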
- Start evaluating by:
chmod +x ./eval_ycb.sh
./eval_ycb.sh
You can evaluate different checkpoints by revising tst_mdl in eval_ycb.sh to the path of your target model.
- We provide our pre-trained models here. Download the ycb pre-trained model, move it to train_log/ycb/checkpoints/ and modify tst_mdl in eval_ycb.sh for testing.
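For reference, a hypothetical sketch of what the tst_mdl line in eval_ycb.sh might look like after editing (the variable name comes from this README; the checkpoint filename is an assumption, use the file you actually downloaded or trained):
# hypothetical example: point tst_mdl at your downloaded or trained checkpoint
tst_mdl=train_log/ycb/checkpoints/your_checkpoint.pth.tar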
To do:
- Scripts for synthesizing data for the LineMOD dataset.
- Training code and pre-trained models for the LineMOD dataset.
Please cite PVN3D if you use this repository in your publications:
@article{he2019pvn3d,
title={PVN3D: A Deep Point-wise 3D Keypoints Voting Network for 6DoF Pose Estimation},
author={He, Yisheng and Sun, Wei and Huang, Haibin and Liu, Jianran and Fan, Haoqiang and Sun, Jian},
journal={arXiv preprint arXiv:1911.04231},
year={2019}
}