- **2024/04/12**: ✨✨✨ SMPL & rendering scripts released! Champ your dance videos now 💃🤸‍♂️🕺. See docs.
- **2024/03/30**: 🚀🚀🚀 Watch this amazing video tutorial. It is based on the unofficial (unstable) Champ ComfyUI 🥳.
- **2024/03/27**: Visit our roadmap 🕒 to preview the future of Champ.
- System requirements: Ubuntu 20.04 / Windows 11, CUDA 12.1
- Tested GPUs: A100, RTX 3090
Create a conda environment:

```shell
conda create -n champ python=3.10
conda activate champ
pip install -r requirements.txt
```
Install packages with `poetry`. If you want to run this project on a Windows device, we strongly recommend using `poetry`:

```shell
poetry install --no-root
```
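After installing with either method, a quick sanity check can confirm the environment matches the tested setup. This helper is a sketch and not part of the repo: it verifies the Python version pinned by `conda create -n champ python=3.10` and reports whether PyTorch can see a CUDA device (the import is guarded in case `requirements.txt` has not been installed yet).

```python
import sys

def env_report():
    """Minimal environment sanity check for Champ (a sketch, not part of the repo)."""
    report = {"python_ok": sys.version_info[:2] == (3, 10)}
    try:
        import torch  # installed via requirements.txt
        report["cuda_available"] = torch.cuda.is_available()
    except ImportError:
        report["cuda_available"] = None  # torch not installed yet
    return report
```

If `python_ok` is `False` or `cuda_available` is not `True`, revisit the steps above before running inference.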
- Download the pretrained weights of the base models.
- Download our checkpoints. Our checkpoints consist of the denoising UNet, guidance encoders, reference UNet, and motion module.

Finally, these pretrained models should be organized as follows:
```
./pretrained_models/
|-- champ
|   |-- denoising_unet.pth
|   |-- guidance_encoder_depth.pth
|   |-- guidance_encoder_dwpose.pth
|   |-- guidance_encoder_normal.pth
|   |-- guidance_encoder_semantic_map.pth
|   |-- reference_unet.pth
|   `-- motion_module.pth
|-- image_encoder
|   |-- config.json
|   `-- pytorch_model.bin
|-- sd-vae-ft-mse
|   |-- config.json
|   |-- diffusion_pytorch_model.bin
|   `-- diffusion_pytorch_model.safetensors
`-- stable-diffusion-v1-5
    |-- feature_extractor
    |   `-- preprocessor_config.json
    |-- model_index.json
    |-- unet
    |   |-- config.json
    |   `-- diffusion_pytorch_model.bin
    `-- v1-inference.yaml
```
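To catch a misplaced download before a run fails mid-inference, a small helper (hypothetical, not part of the repo) can check the layout above. The file list mirrors the tree shown; only the weight files are checked here.

```python
from pathlib import Path

# Key weight files from the expected layout, relative to the pretrained_models root
EXPECTED = [
    "champ/denoising_unet.pth",
    "champ/guidance_encoder_depth.pth",
    "champ/guidance_encoder_dwpose.pth",
    "champ/guidance_encoder_normal.pth",
    "champ/guidance_encoder_semantic_map.pth",
    "champ/reference_unet.pth",
    "champ/motion_module.pth",
    "image_encoder/pytorch_model.bin",
    "sd-vae-ft-mse/diffusion_pytorch_model.safetensors",
    "stable-diffusion-v1-5/unet/diffusion_pytorch_model.bin",
]

def missing_weights(root="./pretrained_models"):
    """Return the expected weight files that are not present under `root`."""
    root = Path(root)
    return [p for p in EXPECTED if not (root / p).exists()]
```

An empty return value means every listed weight file is in place.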
We have provided several sets of example data for inference. Please first download and place them in the `example_data` folder.

Here is the command for inference:

```shell
python inference.py --config configs/inference/inference.yaml
```
If using `poetry`, the command is:

```shell
poetry run python inference.py --config configs/inference/inference.yaml
```
Animation results will be saved in the `results` folder. You can change the reference image or the guidance motion by modifying `inference.yaml`.

You can also extract the driving motion from any video and then render it with Blender. We will provide the instructions and scripts for this later.
The default `motion-02` in `inference.yaml` has about 250 frames and requires ~20 GB of VRAM.

Note: If your VRAM is insufficient, you can switch to a shorter motion sequence or cut a segment out of a long one. We provide a frame-range selector in `inference.yaml`, which you can set to a list `[min_frame_index, max_frame_index]` to conveniently cut out a segment of the sequence.
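As a sketch of that override (the exact key name may differ in your copy of `inference.yaml`; `frame_range` here is an assumption), restricting the motion to its first 100 frames might look like:

```yaml
# Hypothetical frame-range selector: [min_frame_index, max_frame_index]
frame_range: [0, 99]
```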
Try Champ with your dance videos! It may take some time to set up the environment; follow the instructions step by step 🐢 and report an issue when necessary. See our instructions on data preparation here.
For a Champ ComfyUI tutorial, see here!
We thank the authors of MagicAnimate, Animate Anyone, and AnimateDiff for their excellent work. Our project is built upon Moore-AnimateAnyone, and we are grateful for their open-source contributions.
Visit our roadmap to preview the future of Champ.
If you find our work useful for your research, please consider citing the paper:
```
@misc{zhu2024champ,
  title={Champ: Controllable and Consistent Human Image Animation with 3D Parametric Guidance},
  author={Shenhao Zhu and Junming Leo Chen and Zuozhuo Dai and Yinghui Xu and Xun Cao and Yao Yao and Hao Zhu and Siyu Zhu},
  year={2024},
  eprint={2403.14781},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}
```
Multiple research positions are open at the Generative Vision Lab, Fudan University! These include:
- Research assistants
- Postdoctoral researchers
- PhD candidates
- Master's students

Interested individuals are encouraged to contact us at [email protected] for further information.