- [2024.07.25] 🔥🔥🔥 Accelerated models and pipeline for the audio-driven mode are released. Inference speed improves by about 10x (from ~7 min/240 frames to ~50 s/240 frames on a V100 GPU).
- [2024.07.23] 🔥 The EchoMimic Gradio demo on ModelScope is ready.
- [2024.07.23] 🔥 The EchoMimic Gradio demo on Hugging Face is ready. Thanks Sylvain Filoni@fffiloni.
- [2024.07.17] 🔥🔥🔥 Accelerated models and pipeline for the audio + selected landmarks mode are released. Inference speed improves by about 10x (from ~7 min/240 frames to ~50 s/240 frames on a V100 GPU).
- [2024.07.14] 🔥 ComfyUI is now available. Thanks @smthemex for the contribution.
- [2024.07.13] 🔥 Thanks NewGenAI for the video installation tutorial.
- [2024.07.13] 🔥 We release our pose- and audio-driven code and models.
- [2024.07.12] 🔥 WebUI and GradioUI versions are released. We thank @greengerong, @Robin021, and @O-O1024 for their contributions.
- [2024.07.12] 🔥 Our paper is public on arXiv.
- [2024.07.09] 🔥 We release our audio-driven code and models.
| Demo | Clips |
| --- | --- |
| Audio Driven (Sing) | s_01.mp4, s_02.mp4, s_03.mp4 |
| Audio Driven (English) | en_01.mp4, en_03.mp4, en_05.mp4 |
| Audio Driven (Chinese) | ch_02.mp4, ch_03.mp4, ch_04.mp4 |
| Landmark Driven | po_01.mp4, po_02.mp4, po_03.mp4 |
| Audio + Selected Landmark Driven | ap_04.mp4, ap_05.mp4, ap_06.mp4 |
(Some demo images above are sourced from image websites. If there is any infringement, we will immediately remove them and apologize.)
```bash
git clone https://github.com/BadToBest/EchoMimic
cd EchoMimic
```
- Tested system environments: CentOS 7.2 / Ubuntu 22.04, CUDA >= 11.7
- Tested GPUs: A100 (80 GB) / RTX 4090D (24 GB) / V100 (16 GB)
- Tested Python versions: 3.8 / 3.10 / 3.11
Create the conda environment (recommended):

```bash
conda create -n echomimic python=3.8
conda activate echomimic
```
Install packages with pip:

```bash
pip install -r requirements.txt
```
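Optionally, a quick sanity check (assuming requirements.txt installs PyTorch) confirms that the GPU is visible:

```bash
# Optional: verify PyTorch import and CUDA availability after installation.
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```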
Download and decompress ffmpeg-static, then point FFMPEG_PATH at the extracted directory:

```bash
export FFMPEG_PATH=/path/to/ffmpeg-4.4-amd64-static
```
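For example, on Linux x86-64 (a sketch; the download URL is an assumption, so substitute whichever ffmpeg-static build matches your system):

```bash
# Fetch and unpack a static ffmpeg 4.4 build, then point FFMPEG_PATH at it.
wget https://www.johnvansickle.com/ffmpeg/old-releases/ffmpeg-4.4-amd64-static.tar.xz
tar -xJf ffmpeg-4.4-amd64-static.tar.xz   # extracts to ./ffmpeg-4.4-amd64-static
export FFMPEG_PATH=$PWD/ffmpeg-4.4-amd64-static
```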
Download the pretrained weights:

```bash
git lfs install
git clone https://huggingface.co/BadToBest/EchoMimic pretrained_weights
```
The pretrained_weights directory is organized as follows:
```
./pretrained_weights/
├── denoising_unet.pth
├── reference_unet.pth
├── motion_module.pth
├── face_locator.pth
├── sd-vae-ft-mse
│   └── ...
├── sd-image-variations-diffusers
│   └── ...
└── audio_processor
    └── whisper_tiny.pt
```
Here, denoising_unet.pth / reference_unet.pth / motion_module.pth / face_locator.pth are the main checkpoints of EchoMimic. The other models in this hub can also be downloaded from their original hubs; thanks to the authors for their brilliant work.
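Before running inference, you can optionally verify that the four main checkpoints were fully downloaded (a minimal sketch; Git LFS pointer stubs are only a few hundred bytes, so a very small file means the real weights were not fetched):

```bash
# Check that the main EchoMimic checkpoints exist and are not LFS pointer stubs.
for f in denoising_unet.pth reference_unet.pth motion_module.pth face_locator.pth; do
  path="./pretrained_weights/$f"
  if [ ! -f "$path" ]; then
    echo "MISSING: $path"
  elif [ "$(wc -c < "$path")" -lt 1000000 ]; then
    echo "SUSPICIOUSLY SMALL (possible LFS pointer): $path"
  fi
done
```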
Run the Python inference scripts (audio-driven and audio + pose-driven, respectively):

```bash
python -u infer_audio2vid.py
python -u infer_audio2vid_pose.py
```
Edit the inference config file ./configs/prompts/animation.yaml and add your own case (a shell sketch for appending a case follows below):

```yaml
test_cases:
  "path/to/your/image":
    - "path/to/your/audio"
```
Then run the Python inference script:

```bash
python -u infer_audio2vid.py
```
(First, download the checkpoints with the `_pose.pth` suffix from Hugging Face.)
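If you cloned the full Hugging Face repo above, these checkpoints should already be in pretrained_weights; otherwise they can be fetched individually (a sketch using huggingface-cli; the `_pose.pth` filenames below are assumed from the naming convention, so check the model hub for the exact names):

```bash
# Fetch the pose-specific checkpoints into pretrained_weights (filenames assumed).
huggingface-cli download BadToBest/EchoMimic \
  denoising_unet_pose.pth reference_unet_pose.pth \
  motion_module_pose.pth face_locator_pose.pth \
  --local-dir ./pretrained_weights
```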
Set `driver_video` and `ref_image` to your own paths in demo_motion_sync.py, then run:

```bash
python -u demo_motion_sync.py
```
Edit ./configs/prompts/animation_pose.yaml, then run:

```bash
python -u infer_audio2vid_pose.py
```
Set `draw_mouse=True` on line 135 of infer_audio2vid_pose.py, edit ./configs/prompts/animation_pose.yaml, then run:

```bash
python -u infer_audio2vid_pose.py
```
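If you prefer to make that edit from the shell, here is a one-liner sketch (it assumes the flag currently reads `draw_mouse=False` on line 135, so check the file first):

```bash
# Flip draw_mouse to True on line 135 (assumes it currently reads draw_mouse=False).
sed -i '135s/draw_mouse=False/draw_mouse=True/' infer_audio2vid_pose.py
```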