This repo contains the code of our paper:
HybrIK: A Hybrid Analytical-Neural Inverse Kinematics Solution for 3D Human Pose and Shape Estimation
Jiefeng Li, Chao Xu, Zhicun Chen, Siyuan Bian, Lixin Yang, Cewu Lu
[Paper] [Supplementary Material] [arXiv] [Project Page]
In CVPR 2021
[2022/07/31] Training code with predicted camera is released.
[2022/07/25] HybrIK is now supported in AlphaPose! A multi-person demo with pose tracking is available.
[2022/04/26] Achieved SOTA results by adding the 3DPW dataset to training.
[2022/04/25] The demo code is released!
- Provide pretrained model
- Provide parsed data annotations
```bash
# 1. Create a conda virtual environment.
conda create -n hybrik python=3.7 -y
conda activate hybrik

# 2. Install PyTorch.
conda install pytorch==1.6.0 torchvision==0.7.0 -c pytorch

# 3. Pull our code.
git clone https://github.com/Jeff-sjtu/HybrIK.git
cd HybrIK

# 4. Install.
python setup.py develop
```
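A quick sanity check after installation; the version pins follow step 2 above, and the final import assumes `python setup.py develop` registers the package under the name `hybrik`:

```python
import torch
import torchvision

print(torch.__version__)          # expected: 1.6.0 (step 2)
print(torchvision.__version__)    # expected: 0.7.0 (step 2)
print(torch.cuda.is_available())  # True if a usable GPU is visible

import hybrik  # should succeed after `python setup.py develop`
print(hybrik.__file__)
```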
- Download the SMPL model `basicModel_neutral_lbs_10_207_0_v1.0.0.pkl` from here and place it at `common/utils/smplpytorch/smplpytorch/native/models`.
- Download our pretrained model (paper version) from [ Google Drive | Baidu (code: `qre2`) ].
- Download our pretrained model (with predicted camera) from [ Google Drive | Baidu (code: `4qyv`) ].
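Once downloaded, you can peek inside a checkpoint with plain PyTorch before wiring it into anything. This is a minimal sketch; whether the file is a raw state dict or a wrapper dict is an assumption, so the code handles both:

```python
import torch

# Load on the CPU so no GPU is needed just to inspect the file.
ckpt = torch.load('./pretrained_w_cam.pth', map_location='cpu')

# The file may be a raw state_dict or a wrapper dict ({'model': ...});
# which one it is here is an assumption -- handle both.
state_dict = ckpt.get('model', ckpt)
print(list(state_dict.keys())[:10])  # first few parameter names
```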
First make sure you download the pretrained model (with predicted camera) and place it in the `${ROOT}` directory, i.e., `./pretrained_w_cam.pth`.
- Visualize HybrIK on videos (processed frame by frame):

```bash
python scripts/demo_video.py --video-name examples/dance.mp4 --out-dir res_dance
```
- Visualize HybrIK on images:

```bash
python scripts/demo_image.py --img-dir examples --out-dir res
```
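The demo scripts process one input at a time; to run the video demo over every clip in a folder, a small wrapper like the following works. The script name and flags come from the commands above; the per-video output naming is just an illustrative choice:

```python
import subprocess
from pathlib import Path

# Run the video demo once per clip in examples/, each with its own
# output directory (e.g. examples/dance.mp4 -> res_dance).
for video in sorted(Path('examples').glob('*.mp4')):
    subprocess.run([
        'python', 'scripts/demo_video.py',
        '--video-name', str(video),
        '--out-dir', f'res_{video.stem}',
    ], check=True)
```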
Download the Human3.6M, MPI-INF-3DHP, 3DPW and MSCOCO datasets. You need to follow the directory structure below. Thanks to the great work by Moon et al., we use the Human3.6M images provided in PoseNet.
```
|-- data
`-- |-- h36m
    `-- |-- annotations
        `-- images
`-- |-- pw3d
    `-- |-- json
        `-- imageFiles
`-- |-- 3dhp
    `-- |-- annotation_mpi_inf_3dhp_train.json
        |-- annotation_mpi_inf_3dhp_test.json
        |-- mpi_inf_3dhp_train_set
        `-- mpi_inf_3dhp_test_set
`-- |-- coco
    `-- |-- annotations
        |   |-- person_keypoints_train2017.json
        |   `-- person_keypoints_val2017.json
        |-- train2017
        `-- val2017
```
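Before training, you can confirm the layout with a small path check; the entries below mirror the tree above exactly, and nothing here is HybrIK-specific:

```python
from pathlib import Path

# Paths taken directly from the directory structure above.
REQUIRED = [
    'data/h36m/annotations',
    'data/h36m/images',
    'data/pw3d/json',
    'data/pw3d/imageFiles',
    'data/3dhp/annotation_mpi_inf_3dhp_train.json',
    'data/3dhp/annotation_mpi_inf_3dhp_test.json',
    'data/3dhp/mpi_inf_3dhp_train_set',
    'data/3dhp/mpi_inf_3dhp_test_set',
    'data/coco/annotations/person_keypoints_train2017.json',
    'data/coco/annotations/person_keypoints_val2017.json',
    'data/coco/train2017',
    'data/coco/val2017',
]

missing = [p for p in REQUIRED if not Path(p).exists()]
print('All datasets in place.' if not missing else f'Missing: {missing}')
```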
- Download Human3.6M parsed annotations. [ Google | Baidu ]
- Download 3DPW parsed annotations. [ Google | Baidu ]
- Download MPI-INF-3DHP parsed annotations. [ Google | Baidu ]
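After downloading, a quick way to sanity-check a parsed annotation file. The file name below is hypothetical (substitute one from the archive you actually downloaded), and the internal layout is an assumption:

```python
import json

# Hypothetical file name -- substitute a file from the downloaded archive.
with open('data/pw3d/json/3DPW_train_new.json') as f:
    ann = json.load(f)

# Print a shape summary whether the file is a dict of lists (COCO-style)
# or a plain list of records.
if isinstance(ann, dict):
    print({k: len(v) if isinstance(v, list) else type(v).__name__
           for k, v in ann.items()})
else:
    print(len(ann), 'records')
```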
Train from scratch with the mixed datasets (this config includes 3DPW in the training mix):

```bash
./scripts/train_smpl_cam.sh test_3dpw configs/256x192_adam_lr1e-3-res34_smpl_3d_cam_2x_mix_w_pw3d.yaml
```
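The training script is driven by the YAML config named in the command; to inspect its settings without launching a run, a minimal sketch (requires PyYAML, and assumes nothing beyond the file being plain YAML):

```python
import yaml  # PyYAML

# Config path taken from the training command above.
cfg_path = 'configs/256x192_adam_lr1e-3-res34_smpl_3d_cam_2x_mix_w_pw3d.yaml'
with open(cfg_path) as f:
    cfg = yaml.safe_load(f)

# Show the top-level sections and, for nested sections, just their keys.
for key, value in cfg.items():
    print(key, '->', list(value) if isinstance(value, dict) else value)
```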
Download the pretrained model [ Google Drive ], then run the evaluation script:

```bash
./scripts/validate_smpl.sh ./configs/256x192_adam_lr1e-3-res34_smpl_3d_cam_2x_mix.yaml ./pretrained_w_cam.pth
```
| Method | 3DPW (PA-MPJPE, mm) | Human3.6M (PA-MPJPE, mm) |
|---|---|---|
| SPIN | 59.2 | 41.1 |
| VIBE | 56.5 | 41.5 |
| VIBE w. 3DPW | 51.9 | 41.4 |
| PARE | 49.3 | - |
| PARE w. 3DPW | 46.4 | - |
| HybrIK | 48.8 | 34.5 |
| HybrIK w. 3DPW | 45.3 | 36.3 |
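PA-MPJPE rigidly aligns the prediction to the ground truth (scale, rotation, translation) before measuring the mean per-joint error; lower is better. A minimal numpy sketch of the metric for a single pose, independent of HybrIK's own evaluation code:

```python
import numpy as np

def pa_mpjpe(pred, gt):
    """PA-MPJPE for one pose: find the similarity transform (scale,
    rotation, translation) that best aligns pred to gt, then return
    the mean per-joint Euclidean error. pred, gt: (J, 3) arrays."""
    # Remove translation by centering both point sets.
    X = pred - pred.mean(axis=0)
    Y = gt - gt.mean(axis=0)
    # Orthogonal Procrustes: optimal rotation from the SVD of X^T Y.
    U, S, Vt = np.linalg.svd(X.T @ Y)
    D = np.eye(3)
    D[2, 2] = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ D @ U.T
    scale = np.trace(np.diag(S) @ D) / (X ** 2).sum()
    aligned = scale * X @ R.T
    return np.linalg.norm(aligned - Y, axis=1).mean()
```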
If our code helps your research, please consider citing the following paper:
```bibtex
@inproceedings{li2021hybrik,
  title={Hybrik: A hybrid analytical-neural inverse kinematics solution for 3d human pose and shape estimation},
  author={Li, Jiefeng and Xu, Chao and Chen, Zhicun and Bian, Siyuan and Yang, Lixin and Lu, Cewu},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={3383--3393},
  year={2021}
}
```