why dont use all the data from FaceForensics ? #1
Comments
Hi, if you are targeting low-quality videos, of course 100 videos are not sufficient to train the model. To answer your question: [screenshot omitted] Note: the screenshot is taken from the original paper. Thank you
I am glad to receive your reply. I have another question: how do you split the train set and the test set?
Yeah, I got your point. Please note that the objective of this project is to detect manipulated faces. Sticking with your example: even if a manipulated face video is in the test split and the authentic video of the same face is in the train split, the model should still be able to recognize the manipulation. Even though both versions of a face may look identical to the human eye, the model should learn to recognize manipulations from low-level artifacts (such as a corrupted nose, eyes, lips, etc.). Hence I believe the data split should be random to support our objective. If you feel I misunderstood your question, please post it again with a little more explanation. Thank you
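To make the "random split" concrete, here is a minimal sketch of splitting at the video level rather than the frame level, so that frames of the same video never leak into both sets. The file names are hypothetical stand-ins for FaceForensics video IDs, and the 80/20 ratio is just an example, not something the project prescribes.

```python
import random

def split_videos(video_ids, test_ratio=0.2, seed=42):
    """Randomly split video IDs into (train, test) lists.

    Splitting whole videos (not individual frames) keeps near-identical
    frames of one video from appearing in both splits.
    """
    rng = random.Random(seed)  # fixed seed makes the split reproducible
    ids = list(video_ids)
    rng.shuffle(ids)
    n_test = int(len(ids) * test_ratio)
    return ids[n_test:], ids[:n_test]

# Hypothetical IDs standing in for FaceForensics file names.
videos = [f"video_{i:03d}.mp4" for i in range(100)]
train, test = split_videos(videos)
print(len(train), len(test))  # 80 20
```

With a random split like this, a manipulated video can land in the test set while the authentic video of the same person stays in the train set, which is exactly the situation discussed above.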
How do you detect the face in each frame? I used a library from the web and found that lots of faces can't be detected!
Hi,
I have a question after downloading the dataset:
there are thousands of videos, so why choose just 100?
Why not use all the data from FaceForensics?
Looking forward to your reply