
EchoMimic: Lifelike Audio-Driven Portrait Animations through Editable Landmark Conditioning

*Equal Contribution.
Terminal Technology Department, Alipay, Ant Group.

πŸ“£ πŸ“£ Updates

  • [2024.07.25] πŸ”₯πŸ”₯πŸ”₯ Accelerated models and pipeline for audio-driven animation are released. Inference is about 10x faster (from ~7 min/240 frames to ~50 s/240 frames on a V100 GPU).
  • [2024.07.23] πŸ”₯ The EchoMimic Gradio demo on ModelScope is ready.
  • [2024.07.23] πŸ”₯ The EchoMimic Gradio demo on Hugging Face is ready. Thanks to Sylvain Filoni@fffiloni.
  • [2024.07.17] πŸ”₯πŸ”₯πŸ”₯ Accelerated models and pipeline for Audio + Selected Landmarks are released. Inference is about 10x faster (from ~7 min/240 frames to ~50 s/240 frames on a V100 GPU).
  • [2024.07.14] πŸ”₯ ComfyUI is now available. Thanks to @smthemex for the contribution.
  • [2024.07.13] πŸ”₯ Thanks to NewGenAI for the video installation tutorial.
  • [2024.07.13] πŸ”₯ We release our pose- and audio-driven code and models.
  • [2024.07.12] πŸ”₯ WebUI and GradioUI versions are released. We thank @greengerong, @Robin021, and @O-O1024 for their contributions.
  • [2024.07.12] πŸ”₯ Our paper is public on arXiv.
  • [2024.07.09] πŸ”₯ We release our audio-driven code and models.

Gallery

Audio Driven (Sing)

s_01.mp4
s_02.mp4
s_03.mp4

Audio Driven (English)

en_01.mp4
en_03.mp4
en_05.mp4

Audio Driven (Chinese)

ch_02.mp4
ch_03.mp4
ch_04.mp4

Landmark Driven

po_01.mp4
po_02.mp4
po_03.mp4

Audio + Selected Landmark Driven

ap_04.mp4
ap_05.mp4
ap_06.mp4

(Some demo images above are sourced from image websites. If there is any infringement, we will remove them immediately and apologize.)

Installation

Download the Codes

  git clone https://github.com/BadToBest/EchoMimic
  cd EchoMimic

Python Environment Setup

  • Tested System Environment: CentOS 7.2 / Ubuntu 22.04, CUDA >= 11.7
  • Tested GPUs: A100 (80G) / RTX 4090D (24G) / V100 (16G)
  • Tested Python Versions: 3.8 / 3.10 / 3.11

Create conda environment (Recommended):

  conda create -n echomimic python=3.8
  conda activate echomimic

Install packages with pip

  pip install -r requirements.txt

Download ffmpeg-static

Download and decompress ffmpeg-static, then set the FFMPEG_PATH environment variable:

export FFMPEG_PATH=/path/to/ffmpeg-4.4-amd64-static

Download pretrained weights

git lfs install
git clone https://huggingface.co/BadToBest/EchoMimic pretrained_weights

The pretrained_weights directory is organized as follows:

./pretrained_weights/
β”œβ”€β”€ denoising_unet.pth
β”œβ”€β”€ reference_unet.pth
β”œβ”€β”€ motion_module.pth
β”œβ”€β”€ face_locator.pth
β”œβ”€β”€ sd-vae-ft-mse
β”‚   └── ...
β”œβ”€β”€ sd-image-variations-diffusers
β”‚   └── ...
└── audio_processor
    └── whisper_tiny.pt

Here denoising_unet.pth, reference_unet.pth, motion_module.pth, and face_locator.pth are the main EchoMimic checkpoints. The other models can also be downloaded from their original hubs; we thank the authors for their excellent work.

Audio-Driven Algorithm Inference

Run the Python inference scripts:

  python -u infer_audio2vid.py
  python -u infer_audio2vid_pose.py

Audio-Driven Algorithm Inference on Your Own Cases

Edit the inference config file ./configs/prompts/animation.yaml and add your own case:

test_cases:
  "path/to/your/image":
    - "path/to/your/audio"
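If you have many image/audio pairs, you can render the test_cases block programmatically instead of editing the YAML by hand. A stdlib-only sketch (no PyYAML dependency assumed; the helper name `render_test_cases` is ours):

```python
def render_test_cases(cases):
    """Render {image_path: [audio_paths]} in the test_cases format above."""
    lines = ["test_cases:"]
    for image, audios in cases.items():
        lines.append(f'  "{image}":')
        for audio in audios:
            lines.append(f'    - "{audio}"')
    return "\n".join(lines)
```

The returned text can be pasted into (or written over the test_cases section of) ./configs/prompts/animation.yaml.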

Then run the Python inference script:

  python -u infer_audio2vid.py

Motion Alignment between Reference Image and Driven Video

(First, download the checkpoints with the '_pose.pth' suffix from Hugging Face.)

Set driver_video and ref_image to your own paths in demo_motion_sync.py, then run:

  python -u demo_motion_sync.py

Audio&Pose-Driven Algorithm Inference

Edit ./configs/prompts/animation_pose.yaml, then run

  python -u infer_audio2vid_pose.py

Pose-Driven Algorithm Inference

Set draw_mouse=True in line 135 of infer_audio2vid_pose.py. Edit ./configs/prompts/animation_pose.yaml, then run

  python -u infer_audio2vid_pose.py

Run the Gradio UI