
Cannot reproduce the face and eye movement. #11

Open
OrvilleQ opened this issue Jun 21, 2024 · 1 comment

Comments

@OrvilleQ

Hello.

I am trying to reproduce the character's face and eye movements from the demo video you provided. Since the woman's hair floats the entire time in that video, I assumed it was generated from a single still image. So I generated a similar image, expanded it into a 240-frame batch, and ran generation following the image examples in the repository. However, the resulting video only has mouth movement; the character's face and eyes do not follow the speech as in the demo video.
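For reference, here is a minimal sketch of how I expanded the single still into a 240-frame batch. It assumes the ComfyUI convention of image tensors shaped (batch, height, width, channels); the function name is my own, not part of this repository:

```python
import torch

def expand_image_to_batch(image: torch.Tensor, num_frames: int = 240) -> torch.Tensor:
    """Repeat a single still image into a batch of identical frames.

    Assumes a ComfyUI-style image tensor of shape (1, H, W, C) or (H, W, C);
    this is an illustrative sketch, not the repository's actual node code.
    """
    if image.dim() == 3:              # (H, W, C) -> add batch dimension
        image = image.unsqueeze(0)
    return image.repeat(num_frames, 1, 1, 1)

# Example: a 512x512 RGB still expanded to 240 frames (roughly 10 s at 24 fps).
still = torch.rand(1, 512, 512, 3)
frames = expand_image_to_batch(still, 240)
print(frames.shape)                   # torch.Size([240, 512, 512, 3])
```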

How can I achieve the effect of the demo video?

Thank you for your help.

@SamKhoze
Owner

@OrvilleQ We are adding a new node based on SadTalker that can generate video from an image; however, it only works with close-up faces. The node will be added shortly.
