Gait3D-Benchmark

This is the code for the paper "Gait Recognition in the Wild with Dense 3D Representations and A Benchmark" by Jinkai Zheng, Xinchen Liu, Wu Liu, Lingxiao He, Chenggang Yan, and Tao Mei (CVPR 2022). The official project page is here.

What's New

  • [Mar 2022] Another in-the-wild gait dataset, GREW, is now supported.
  • [Mar 2022] Our Gait3D dataset and the SMPLGait method have been released.

Model Zoo

Gait3D

Input size: 128x88 (results for the 64x44 input are given in parentheses).

| Method | Rank@1 | Rank@5 | mAP | mINP | Download |
| --- | --- | --- | --- | --- | --- |
| GaitSet (AAAI 2019) | 42.60 (36.70) | 63.10 (58.30) | 33.69 (30.01) | 19.69 (17.30) | model-128 (model-64) |
| GaitPart (CVPR 2020) | 29.90 (28.20) | 50.60 (47.60) | 23.34 (21.58) | 13.15 (12.36) | model-128 (model-64) |
| GLN (ECCV 2020) | 42.20 (31.40) | 64.50 (52.90) | 33.14 (24.74) | 19.56 (13.58) | model-128 (model-64) |
| GaitGL (ICCV 2021) | 23.50 (29.70) | 38.50 (48.50) | 16.40 (22.29) | 9.20 (13.26) | model-128 (model-64) |
| OpenGait Baseline* | 47.70 (42.90) | 67.20 (63.90) | 37.62 (35.19) | 22.24 (20.83) | model-128 (model-64) |
| SMPLGait (CVPR 2022) | 53.20 (46.30) | 71.00 (64.50) | 42.43 (37.16) | 25.97 (22.23) | model-128 (model-64) |

*Note that OpenGait Baseline is equivalent to SMPLGait w/o 3D in our paper.

Cross Domain

Datasets in the Wild (GaitSet, 64x44)

| Source | Target | Rank@1 | Rank@5 | mAP |
| --- | --- | --- | --- | --- |
| GREW (official split) | Gait3D | 15.80 | 30.20 | 11.83 |
| GREW (our split) | Gait3D | 16.50 | 31.10 | 11.71 |
| Gait3D | GREW (official split) | 18.81 | 32.25 | ~ |
| Gait3D | GREW (our split) | 43.86 | 60.89 | 28.06 |

Requirements

  • pytorch >= 1.6
  • torchvision
  • pyyaml
  • tensorboard
  • opencv-python
  • tqdm
  • py7zr
  • tabulate
  • termcolor

Installation

If necessary, adjust the `conda install pytorch` command below (the second-to-last command) to match your CUDA version.

```bash
git clone https://github.com/Gait3D/Gait3D-Benchmark.git
cd Gait3D-Benchmark
conda create --name py37torch160 python=3.7
conda activate py37torch160
conda install pytorch==1.6.0 torchvision==0.7.0 cudatoolkit=10.2 -c pytorch
pip install tqdm pyyaml tensorboard opencv-python py7zr tabulate termcolor
```
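
After installation, a quick sanity check (a minimal sketch, not part of the repo) confirms that the expected PyTorch build and CUDA support are visible:

```python
# Minimal environment sanity check; run inside the py37torch160 environment.
import torch
import torchvision

print("torch:", torch.__version__)               # expect 1.6.0
print("torchvision:", torchvision.__version__)   # expect 0.7.0
print("CUDA available:", torch.cuda.is_available())
```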

Data Preparation

Please download the Gait3D dataset by signing an agreement. We ask for your information only to ensure the dataset is used for non-commercial purposes. We will not share it with any third party or publish it anywhere.

Data Pretreatment

Run the following command to preprocess the Gait3D dataset.

```bash
python misc/pretreatment.py --input_path 'Gait3D/2D_Silhouettes' --output_path 'Gait3D-sils-64-44-pkl' --img_h 64 --img_w 44
python misc/pretreatment.py --input_path 'Gait3D/2D_Silhouettes' --output_path 'Gait3D-sils-128-88-pkl' --img_h 128 --img_w 88
python misc/pretreatment_smpl.py --input_path 'Gait3D/3D_SMPLs' --output_path 'Gait3D-smpls-pkl'
```
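
Each command above resizes and packs one data modality into per-sequence pickle files. As a quick check of the output, the sketch below loads one pretreated silhouette sequence; it assumes each seq*.pkl holds a single numpy array of frames shaped [num_frames, img_h, img_w] (the OpenGait-style pretreatment format), and the path is illustrative:

```python
# Inspect one pretreated sequence (illustrative path; adjust to your data).
# Assumes the pickle stores a numpy array shaped [num_frames, img_h, img_w].
import pickle

pkl_path = "Gait3D-sils-64-44-pkl/0000/camid0_videoid2/seq0/seq0.pkl"
with open(pkl_path, "rb") as f:
    frames = pickle.load(f)

print(frames.shape, frames.dtype)  # e.g. (T, 64, 44) uint8
```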

Data Structure

After the pretreatment, the directory structure should look like this:

```
├── Gait3D-sils-64-44-pkl
│   ├── 0000
│   │   ├── camid0_videoid2
│   │   │   ├── seq0
│   │   │   │   └── seq0.pkl
├── Gait3D-sils-128-88-pkl
│   ├── 0000
│   │   ├── camid0_videoid2
│   │   │   ├── seq0
│   │   │   │   └── seq0.pkl
├── Gait3D-smpls-pkl
│   ├── 0000
│   │   ├── camid0_videoid2
│   │   │   ├── seq0
│   │   │   │   └── seq0.pkl
```
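
To verify the layout, a small helper (hypothetical, not part of the repo) can walk each output directory and count subjects and sequence files:

```python
# Hypothetical helper: count subjects and sequence files under each
# pretreated root, following <root>/<subject>/<camid_videoid>/<seq>/<seq>.pkl.
from pathlib import Path

def summarize(dataset_root: str) -> None:
    root = Path(dataset_root)
    subjects = [d for d in root.iterdir() if d.is_dir()]
    pkl_files = list(root.glob("*/*/*/*.pkl"))
    print(f"{root.name}: {len(subjects)} subjects, {len(pkl_files)} sequence files")

for name in ("Gait3D-sils-64-44-pkl", "Gait3D-sils-128-88-pkl", "Gait3D-smpls-pkl"):
    summarize(name)
```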

Train

Run the following command:

```bash
sh train.sh
```

Test

Run the following command:

```bash
sh test.sh
```

Citation

Please cite this paper in your publications if it helps your research:

```bibtex
@inproceedings{zheng2022gait3d,
  title={Gait Recognition in the Wild with Dense 3D Representations and A Benchmark},
  author={Zheng, Jinkai and Liu, Xinchen and Liu, Wu and He, Lingxiao and Yan, Chenggang and Mei, Tao},
  booktitle={IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2022}
}
```

Acknowledgement

Here are some great resources we benefit from:

  • The codebase is based on OpenGait.
  • The 3D SMPL data is obtained by ROMP.
  • The 2D silhouette data is obtained by HRNet-segmentation.
  • The 2D pose data is obtained by HRNet.
  • The ReID feature used to build Gait3D is obtained by FastReID.
