TL;DR: WildAvatar is a large-scale dataset from YouTube with 10,000+ human subjects, designed to address the limitations of existing laboratory datasets for avatar creation.
conda create -n wildavatar python=3.9
conda activate wildavatar
pip install -r requirements.txt
pip install pyopengl==3.1.4
- Download WildAvatar.zip
- Put the WildAvatar.zip under ./data/WildAvatar/.
- Unzip WildAvatar.zip
- Install yt-dlp
- Download and extract images from YouTube by running
python prepare_data.py --ytdl ${PATH_TO_YT-DLP}
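For intuition about what this step does, below is a minimal sketch of the kind of pipeline involved: download one subject's source video with yt-dlp and grab the annotated frames with OpenCV. This is not the official prepare_data.py; the metadata file name ("metadata.json") and its "frame_ids" key are assumptions for illustration only.
import json
import subprocess
from pathlib import Path

import cv2


def extract_subject(youtube_id: str, ytdlp_path: str, root: str = "./data/WildAvatar") -> None:
    subject_dir = Path(root) / youtube_id
    video_path = subject_dir / "video.mp4"

    # Download the source video (yt-dlp's -f/-o flags select format and output path).
    subprocess.run(
        [ytdlp_path, "-f", "mp4", "-o", str(video_path),
         f"https://www.youtube.com/watch?v={youtube_id}"],
        check=True,
    )

    # Assumed per-subject annotation file listing which frames are used.
    frame_ids = json.loads((subject_dir / "metadata.json").read_text())["frame_ids"]

    out_dir = subject_dir / "images"
    out_dir.mkdir(parents=True, exist_ok=True)

    cap = cv2.VideoCapture(str(video_path))
    for idx in frame_ids:
        cap.set(cv2.CAP_PROP_POS_FRAMES, idx)  # seek to the annotated frame
        ok, frame = cap.read()
        if ok:
            cv2.imwrite(str(out_dir / f"{idx:06d}.png"), frame)
    cap.release()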
- Put the SMPL_NEUTRAL.pkl under ./assets/.
- Run the following script to visualize the SMPL overlay for the human subject ${youtube_ID}
python vis_smpl.py --subject "${youtube_ID}"
- The SMPL overlay and mask visualizations can be found in data/WildAvatar/${youtube_ID}/smpl and data/WildAvatar/${youtube_ID}/smpl_masks, respectively.
For example, if you run
python vis_smpl.py --subject "__-ChmS-8m8"
the SMPL overlay and mask visualizations can be found in data/WildAvatar/__-ChmS-8m8/smpl and data/WildAvatar/__-ChmS-8m8/smpl_masks, respectively.
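For intuition, the sketch below shows one way such an overlay could be produced: run SMPL (via the smplx package, using the SMPL_NEUTRAL.pkl placed under ./assets/) with a frame's annotated pose and shape, project the vertices with the annotated camera, and draw them on the image. This is not the official vis_smpl.py; the annotation file name ("metadata.json") and keys ("poses", "betas", "K", "RT") are assumptions for illustration only.
import json
from pathlib import Path

import cv2
import numpy as np
import smplx
import torch


def overlay_smpl(youtube_id: str, frame: str, root: str = "./data/WildAvatar") -> None:
    subject_dir = Path(root) / youtube_id
    ann = json.loads((subject_dir / "metadata.json").read_text())[frame]

    # SMPL forward pass with the neutral model shipped under ./assets/.
    model = smplx.SMPL(model_path="./assets", gender="neutral")
    out = model(
        global_orient=torch.tensor(ann["poses"][:3], dtype=torch.float32).reshape(1, 3),
        body_pose=torch.tensor(ann["poses"][3:72], dtype=torch.float32).reshape(1, 69),
        betas=torch.tensor(ann["betas"], dtype=torch.float32).reshape(1, 10),
    )
    verts = out.vertices[0].detach().numpy()  # (6890, 3) vertices in world coordinates

    # Project to the image with the annotated intrinsics K and extrinsics RT.
    K, RT = np.asarray(ann["K"]), np.asarray(ann["RT"])
    proj = K @ (RT[:3, :3] @ verts.T + RT[:3, 3:4])  # (3, N), world -> camera -> pixels*depth
    uv = proj[:2] / proj[2:]                         # perspective division

    # Draw projected vertices as green dots on the extracted frame.
    img = cv2.imread(str(subject_dir / "images" / f"{frame}.png"))
    for u, v in uv.T.astype(int):
        if 0 <= v < img.shape[0] and 0 <= u < img.shape[1]:
            cv2.circle(img, (u, v), 1, (0, 255, 0), -1)
    cv2.imwrite(str(subject_dir / "overlay_sketch.png"), img)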
For training and testing on WildAvatar, we currently provide adapted code for HumanNeRF and GauHuman.
If you find our work useful for your research, please cite our paper.
@article{huang2024wildavatar,
  title={WildAvatar: Web-scale In-the-wild Video Dataset for 3D Avatar Creation},
  author={Huang, Zihao and Hu, ShouKang and Wang, Guangcong and Liu, Tianqi and Zang, Yuhang and Cao, Zhiguo and Li, Wei and Liu, Ziwei},
  journal={arXiv preprint arXiv:2407.02165},
  year={2024}
}
This project is built on source codes shared by GauHuman, HumanNeRF, and CLIFF. Many thanks for their excellent contributions!
If you have any questions, please feel free to contact Zihao Huang (zihaohuang at hust.edu.cn).