
Champ: Controllable and Consistent Human Image Animation with 3D Parametric Guidance

¹Nanjing University  ²Fudan University  ³Alibaba Group
*Equal Contribution  ⁺Corresponding Author
(Demo video: head.mp4)

Framework

(Framework overview figure)

News

  • 2024/04/12: ✨✨✨SMPL & Rendering scripts released! Champ your dance videos now💃🤸‍♂️🕺. See docs.

  • 2024/03/30: 🚀🚀🚀Watch this amazing video tutorial. It's based on the unofficial (unstable) Champ ComfyUI 🥳.

  • 2024/03/27: Cool demo on Replicate 🌟, thanks camenduru! 👏

  • 2024/03/27: Visit our roadmap🕒 to preview the future of Champ.

Installation

  • System requirements: Ubuntu 20.04 / Windows 11, CUDA 12.1
  • Tested GPUs: A100, RTX3090

Create conda environment:

  conda create -n champ python=3.10
  conda activate champ

Install packages with pip

  pip install -r requirements.txt

Install packages with poetry

If you want to run this project on a Windows device, we strongly recommend using Poetry.

  poetry install --no-root
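
After installing with either tool, a quick check like the one below confirms that PyTorch can see a CUDA GPU before you move on (a minimal sketch; it assumes PyTorch is pulled in by requirements.txt or the Poetry lockfile):

  import torch

  # Confirm a CUDA-capable GPU is visible before running inference.
  print("CUDA available:", torch.cuda.is_available())
  if torch.cuda.is_available():
      print("GPU:", torch.cuda.get_device_name(0))
      print("CUDA runtime:", torch.version.cuda)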

Download pretrained models

  1. Download the pretrained weights of the base models (Stable Diffusion V1.5, sd-vae-ft-mse, and the image encoder; see the folder layout below).

  2. Download our checkpoints:

Our checkpoints consist of the denoising UNet, the guidance encoders, the reference UNet, and the motion module.

Finally, these pretrained models should be organized as follows:

./pretrained_models/
|-- champ
|   |-- denoising_unet.pth
|   |-- guidance_encoder_depth.pth
|   |-- guidance_encoder_dwpose.pth
|   |-- guidance_encoder_normal.pth
|   |-- guidance_encoder_semantic_map.pth
|   |-- reference_unet.pth
|   `-- motion_module.pth
|-- image_encoder
|   |-- config.json
|   `-- pytorch_model.bin
|-- sd-vae-ft-mse
|   |-- config.json
|   |-- diffusion_pytorch_model.bin
|   `-- diffusion_pytorch_model.safetensors
`-- stable-diffusion-v1-5
    |-- feature_extractor
    |   `-- preprocessor_config.json
    |-- model_index.json
    |-- unet
    |   |-- config.json
    |   `-- diffusion_pytorch_model.bin
    `-- v1-inference.yaml
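
Once everything is downloaded, a quick check like the one below confirms the folder layout matches the tree above (a minimal sketch based only on the file list shown here; inference.py may accept other layouts via its config):

  from pathlib import Path

  # Expected checkpoint files, mirroring the pretrained_models tree above.
  EXPECTED = [
      "champ/denoising_unet.pth",
      "champ/guidance_encoder_depth.pth",
      "champ/guidance_encoder_dwpose.pth",
      "champ/guidance_encoder_normal.pth",
      "champ/guidance_encoder_semantic_map.pth",
      "champ/reference_unet.pth",
      "champ/motion_module.pth",
      "image_encoder/config.json",
      "image_encoder/pytorch_model.bin",
      "sd-vae-ft-mse/config.json",
      "sd-vae-ft-mse/diffusion_pytorch_model.safetensors",
      "stable-diffusion-v1-5/unet/config.json",
      "stable-diffusion-v1-5/unet/diffusion_pytorch_model.bin",
  ]

  root = Path("./pretrained_models")
  missing = [p for p in EXPECTED if not (root / p).exists()]
  if missing:
      print("Missing files:")
      for p in missing:
          print(" -", p)
  else:
      print("All expected pretrained files are in place.")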

Inference

We provide several sets of example data for inference. Please download them first and place them in the example_data folder.

Here is the command for inference:

  python inference.py --config configs/inference/inference.yaml

If using Poetry, the command is:

  poetry run python inference.py --config configs/inference/inference.yaml

Animation results will be saved in the results folder. You can change the reference image or the guidance motion by modifying inference.yaml.

You can also extract the driving motion from any video and then render it with Blender. We will provide the instructions and scripts for this later.

The default motion-02 in inference.yaml has about 250 frames and requires ~20 GB of VRAM.

Note: If your VRAM is insufficient, you can switch to a shorter motion sequence or cut a segment out of a long one. We provide a frame range selector in inference.yaml, which you can set to a list [min_frame_index, max_frame_index] to conveniently cut out a segment of the sequence.
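
For example, a small script along these lines could trim the config to a 100-frame segment (a minimal sketch; it assumes PyYAML is available, and the key name frame_range is an assumption here, so use whatever the selector is actually called in the shipped inference.yaml):

  import yaml

  # Load the inference config, restrict the motion to a sub-range of frames,
  # and write it back. "frame_range" is an assumed key name; check the
  # selector key in the shipped inference.yaml.
  with open("configs/inference/inference.yaml") as f:
      cfg = yaml.safe_load(f)

  cfg["frame_range"] = [0, 100]  # [min_frame_index, max_frame_index]

  with open("configs/inference/inference.yaml", "w") as f:
      yaml.safe_dump(cfg, f, sort_keys=False)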

SMPL & Rendering

Try Champ with your own dance videos! Setting up the environment may take some time; follow the instructions step by step 🐢 and report an issue when necessary. See our data preparation instructions here.

ComfyUI tutorial

See the Champ ComfyUI tutorial here!

Acknowledgements

We thank the authors of MagicAnimate, Animate Anyone, and AnimateDiff for their excellent work. Our project is built upon Moore-AnimateAnyone, and we are grateful for their open-source contributions.

Roadmap

Visit our roadmap to preview the future of Champ.

Citation

If you find our work useful for your research, please consider citing the paper:

@misc{zhu2024champ,
      title={Champ: Controllable and Consistent Human Image Animation with 3D Parametric Guidance},
      author={Shenhao Zhu and Junming Leo Chen and Zuozhuo Dai and Yinghui Xu and Xun Cao and Yao Yao and Hao Zhu and Siyu Zhu},
      year={2024},
      eprint={2403.14781},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

Opportunities available

Multiple research positions are open at the Generative Vision Lab, Fudan University, including:

  • Research assistant
  • Postdoctoral researcher
  • PhD candidate
  • Master's students

Interested individuals are encouraged to contact us at [email protected] for further information.
