FID of Tiny-ImageNet or ImageNet 64x64 #4
Hi, thank you for your interest.
Hi,
(1) Yes.
Thanks for your help. I also notice that your work uses unconditional models (DDPM or EDM). What if these models are conditional ones (with CFG, as in your other repository)? DDAE (DiT-XL/2) is evaluated in an unconditional manner, but what about the results of conditional (DDPM or EDM) models?
The CFG models (trained jointly: 10% unconditional + 90% conditional) yield worse representations than purely unconditional models, despite achieving SOTA generative FIDs.
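For context on the joint training: a minimal sketch of the usual classifier-free guidance setup, where the class label is dropped with 10% probability so one network learns both modes (the model signature and the null-label convention here are illustrative assumptions, not this repo's actual API):

```python
import torch

def cfg_training_forward(model, x_noisy, t, labels, num_classes, p_uncond=0.1):
    # Classifier-free guidance training: with probability p_uncond, replace
    # the class label with a reserved "null" index (num_classes here), so the
    # same network learns both conditional (90%) and unconditional (10%)
    # denoising. At sampling time the two predictions are combined.
    drop = torch.rand(labels.shape[0], device=labels.device) < p_uncond
    labels = torch.where(drop, torch.full_like(labels, num_classes), labels)
    return model(x_noisy, t, labels)
```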
Hi,
Thanks for your code. I have a question about FID when the dataset is larger. If my dataset is Tiny-ImageNet or ImageNet 64x64, how many images should I generate to calculate FID? The exact number of images in Tiny-ImageNet or ImageNet 64x64 (both larger than 50k)? And I should change the batch size (125) and the number of batches (400) in sample.py (125 * 400 = 50k), right?
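(For concreteness, a minimal sketch of that batching arithmetic; the function name is hypothetical and this is not sample.py's actual interface:)

```python
import math

def sampling_plan(total_images, batch_size=125):
    # Split the requested sample count into batches, mirroring the
    # batch_size * num_batches layout described above
    # (125 * 400 = 50_000 by default).
    # Tiny-ImageNet train set:   100,000 images -> 800 batches of 125.
    # ImageNet 64x64 train set: 1,281,167 images -> 10,250 batches (ceil).
    num_batches = math.ceil(total_images / batch_size)
    return batch_size, num_batches
```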
BTW, I see other codebases use total_training_steps instead of epochs. What is the relationship between these two?
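(For reference, the usual relationship, assuming one optimizer step per batch; the names here are illustrative:)

```python
import math

def epochs_to_steps(num_epochs, dataset_size, batch_size):
    # One epoch is one full pass over the dataset, so
    # total_training_steps = num_epochs * steps_per_epoch.
    steps_per_epoch = math.ceil(dataset_size / batch_size)
    return num_epochs * steps_per_epoch
```

For example, with hypothetical numbers, 2000 epochs over 50,000 images at batch size 128 gives 2000 * 391 = 782,000 steps.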