How to improve accuracy/naturalness of lip sync #94
Update: Added interpolation between two neighboring visemes: Oculus vs Rhubarb with interpolation
There are multiple reasons why the Rhubarb output looks worse than the Oculus output.
The best thing you can do to improve results is to use a dry recording. But even that won't give you the kind of 3D animation you got with Oculus.
I made a video comparison with Oculus Lipsync: Oculus vs Rhubarb. It doesn't look pleasing. As a next step I'll try interpolating between two visemes; maybe it will look more natural after that. But I doubt I can get it to the level of Oculus.
Taking one frame from the videos, I found the viseme weights below:
Oculus: Video Percent: 0.0040, Visemes:[0.0218, 0.0004, 0.0001, 0.0005, 0.0009, 0.0004, 0.0001, 0.0001, 0.0334, 0.0001, 0.5889, 0.2065, 0.0023, 0.0065, 0.1380]
Rhubarb: Video Percent: 0.0040, Visemes: E
Oculus uses 15 visemes (see their Viseme Reference). I really don't know how they calculate weights across so many visemes; I only know they use a deep neural network.
So is there any plan or suggestion for Rhubarb Lip Sync?
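For reference, here is a minimal sketch of the interpolation idea mentioned above: crossfading between neighboring Rhubarb mouth cues to get per-frame weight vectors instead of hard switches. It assumes Rhubarb's default TSV output (one `<start time>\t<mouth shape>` line per cue); the `BLEND_TIME` constant and helper names are illustrative, not part of Rhubarb's API.

```python
# Sketch: blend neighboring Rhubarb mouth cues into per-frame viseme weights.
# Assumes Rhubarb's TSV output format: "<start time>\t<mouth shape>" per line.

MOUTH_SHAPES = ["A", "B", "C", "D", "E", "F", "G", "H", "X"]
BLEND_TIME = 0.08  # seconds to crossfade before each cue boundary (tuning guess)

def load_cues(tsv_path):
    """Parse Rhubarb's TSV output into (start_time, shape) tuples."""
    cues = []
    with open(tsv_path) as f:
        for line in f:
            t, shape = line.split("\t")
            cues.append((float(t), shape.strip()))
    return cues

def one_hot(shape):
    """Rhubarb gives a single shape per cue, so start from a one-hot vector."""
    return [1.0 if s == shape else 0.0 for s in MOUTH_SHAPES]

def weights_at(cues, t):
    """Return interpolated viseme weights at time t."""
    for i in range(len(cues) - 1):
        start, shape = cues[i]
        next_start, next_shape = cues[i + 1]
        if start <= t < next_start:
            w = one_hot(shape)
            # Crossfade into the next cue during the last BLEND_TIME seconds.
            fade_start = max(start, next_start - BLEND_TIME)
            if t >= fade_start:
                alpha = (t - fade_start) / (next_start - fade_start)
                nxt = one_hot(next_shape)
                w = [(1 - alpha) * a + alpha * b for a, b in zip(w, nxt)]
            return w
    return one_hot(cues[-1][1]) if cues else one_hot("X")

# Example: sample smoothed weights at 60 fps
# cues = load_cues("output.tsv")
# frames = [weights_at(cues, n / 60.0) for n in range(600)]
```

This only smooths the transitions; it still produces at most two non-zero weights per frame, whereas Oculus predicts a full 15-way weight distribution per frame, so the result won't match their output.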