GaussianAvatar: Towards Realistic Human Avatar Modeling from a Single Video via Animatable 3D Gaussians

Liangxiao Hu1,†, Hongwen Zhang2, Yuxiang Zhang3, Boyao Zhou3, Boning Liu3, Shengping Zhang1,*, Liqiang Nie1

1Harbin Institute of Technology 2Beijing Normal University 3Tsinghua University

*Corresponding author    †Work done during an internship at Tsinghua University

📣 Updates

[4/3/2024] The pretrained models for the other three people from People Snapshot are released on OneDrive.

[7/2/2024] The scripts for your own video are released.

[23/1/2024] Training and inference codes for People Snapshot are released.

Introduction

We present GaussianAvatar, an efficient approach to creating realistic human avatars with dynamic 3D appearances from a single video.

Installation

To deploy and run GaussianAvatar, run the following commands:

conda env create --file environment.yml
conda activate gs-avatar

Then, compile diff-gaussian-rasterization and simple-knn as in the 3DGS repository.
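
One way to do this, as a minimal sketch that assumes you install the two CUDA extensions from the official 3DGS repository (the clone location is an assumption):

# clone 3DGS with its submodules and install the two extensions into gs-avatar
git clone https://github.com/graphdeco-inria/gaussian-splatting --recursive
pip install ./gaussian-splatting/submodules/diff-gaussian-rasterization
pip install ./gaussian-splatting/submodules/simple-knn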

Download models and data

  • SMPL/SMPL-X model: register and download SMPL and SMPL-X, and put these files in assets/smpl_files. The folder should have the following structure:
smpl_files
 ├── smpl
 │   ├── SMPL_FEMALE.pkl
 │   ├── SMPL_MALE.pkl
 │   └── SMPL_NEUTRAL.pkl
 └── smplx
     ├── SMPLX_FEMALE.npz
     ├── SMPLX_MALE.npz
     └── SMPLX_NEUTRAL.npz
  • Data: download the provided data from OneDrive, which includes assets.zip, gs_data.zip, and pretrained_models.zip. Unzip assets.zip into the corresponding folder in the repository, and unzip the other two to gs_data_path and pretrained_models_path (a quick file-placement check follows this list).
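
A quick sanity check that the files ended up in the expected locations; gs_data_path and pretrained_models_path are placeholders for wherever you unzipped the archives:

# should list the SMPL/SMPL-X model files placed above
ls assets/smpl_files/smpl/SMPL_NEUTRAL.pkl assets/smpl_files/smplx/SMPLX_NEUTRAL.npz
# should list the unzipped data and pretrained models
ls $gs_data_path $pretrained_models_path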

Run on People Snapshot dataset

We take the subject m4c_processed as an example.

Training

python train.py -s $gs_data_path/m4c_processed -m output/m4c_processed --train_stage 1

Evaluation

python eval.py -s $gs_data_path/m4c_processed -m output/m4c_processed --epoch 200

Rendering novel pose

python render_novel_pose.py -s $gs_data_path/m4c_processed -m output/m4c_processed --epoch 200
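
Putting it together, a minimal sketch of the full People Snapshot workflow for one subject, assuming gs_data_path points at the unzipped gs_data.zip:

export gs_data_path=/path/to/gs_data   # placeholder: wherever gs_data.zip was unzipped
python train.py -s $gs_data_path/m4c_processed -m output/m4c_processed --train_stage 1
python eval.py -s $gs_data_path/m4c_processed -m output/m4c_processed --epoch 200
python render_novel_pose.py -s $gs_data_path/m4c_processed -m output/m4c_processed --epoch 200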

Run on Your Own Video

Preprocessing

  • masks and poses: use the bash script scripts/custom/process-sequence.sh in InstantAvatar to obtain masks and poses. The data folder should then contain the following (a consolidated sketch of the whole preprocessing pipeline follows this list):
$path_to_data/$subject
 ├── images
 ├── masks
 ├── cameras.npz
 └── poses_optimized.npz
  • data format: we provide a script to convert the ROMP pose format to ours (remember to change the paths in L50 and L51 of the script):
cd scripts && python sample_romp2gsavatar.py
  • position map of the canonical pose: generate it with the following script (remember to change the corresponding paths):
python gen_pose_map_cano_smpl.py
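
A minimal consolidated sketch of the preprocessing steps above; all paths inside the scripts are placeholders that must be edited first:

# (1) in the InstantAvatar repo: bash scripts/custom/process-sequence.sh
# (2) back in this repo:
cd scripts
python sample_romp2gsavatar.py    # convert ROMP poses to our format
python gen_pose_map_cano_smpl.py  # position map of the canonical pose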

Training for Stage 1

cd .. && python train.py -s $path_to_data/$subject -m output/${subject}_stage1 --train_stage 1 --pose_op_start_iter 10

Training for Stage 2

  • export the predicted SMPL:
cd scripts && python export_stage_1_smpl.py
  • visualize the optimized SMPL (optional):
python render_pred_smpl.py
  • generate the predicted position map:
python gen_pose_map_our_smpl.py
  • start training (a consolidated sketch of both stages follows this list):
cd .. && python train.py -s $path_to_data/$subject -m output/${subject}_stage2 --train_stage 2 --stage1_out_path $path_to_stage1_net_save_path
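
A minimal consolidated sketch of the two-stage training for a custom subject; $path_to_data, $subject, and $path_to_stage1_net_save_path are placeholders, and the paths inside the helper scripts must be edited first:

# Stage 1
python train.py -s $path_to_data/$subject -m output/${subject}_stage1 --train_stage 1 --pose_op_start_iter 10
# export the optimized SMPL and generate its position map
cd scripts
python export_stage_1_smpl.py
python gen_pose_map_our_smpl.py
cd ..
# Stage 2 ($path_to_stage1_net_save_path is wherever the stage-1 networks were saved)
python train.py -s $path_to_data/$subject -m output/${subject}_stage2 --train_stage 2 --stage1_out_path $path_to_stage1_net_save_path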

Todo

  • Release the reorganized code and data.
  • Provide the scripts for your own video.
  • Provide the code for real-time animation.

Citation

If you find this code useful for your research, please consider citing:

@inproceedings{hu2024gaussianavatar,
        title={GaussianAvatar: Towards Realistic Human Avatar Modeling from a Single Video via Animatable 3D Gaussians},
        author={Hu, Liangxiao and Zhang, Hongwen and Zhang, Yuxiang and Zhou, Boyao and Liu, Boning and Zhang, Shengping and Nie, Liqiang},
        booktitle={IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
        year={2024}
}

Acknowledgements

This project builds on source code shared by Gaussian-Splatting, POP, HumanNeRF, and InstantAvatar.
