Reproducing Figure 1 #23
I removed data augmentation (originally random crop with padding, etc.) and was then able to reproduce similar figures.
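For reference, a minimal self-contained sketch of the augmentation in question (random crop after padding), assuming standard CIFAR-10 settings (32×32 images, 4-pixel padding); the fix described above amounts to skipping this transform during training. The function name and defaults are illustrative, not taken from the repository:

```python
import numpy as np

def random_crop_with_padding(img, pad=4, rng=None):
    """Pad an HxWxC image by `pad` pixels on each side, then take a random
    HxW crop -- the data augmentation whose removal fixed the reproduction."""
    rng = rng if rng is not None else np.random.default_rng()
    h, w = img.shape[:2]
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode="constant")
    top = int(rng.integers(0, 2 * pad + 1))
    left = int(rng.integers(0, 2 * pad + 1))
    return padded[top:top + h, left:left + w]

# A 32x32x3 dummy image stands in for a CIFAR-10 sample.
img = np.arange(32 * 32 * 3, dtype=np.uint8).reshape(32, 32, 3)
cropped = random_crop_with_padding(img, rng=np.random.default_rng(0))
print(cropped.shape)
```

Removing the augmentation means feeding `img` to the model directly instead of `cropped`.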
Impressive work! : )
I tried to reproduce the Cross-Entropy results of Figure 1, but was unsuccessful. Specifically, the fraction of examples memorized by the end of training is far below the plotted values.
I used the same configuration specified in Appendix G.3, on CIFAR-10 with 40% of the labels flipped at random. For those 40% noisy examples, in the final epochs (around epoch 120) the fraction of memorized examples (i.e., those whose predicted label equals the fake label) is only around 17%, while the fraction of generalized examples (i.e., predicted label equals the true label) is around 67%. Also, the red line always stays below the green line, unlike the upper-right plot in Figure 1.
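To make the two reported fractions precise, here is a minimal sketch of how they can be computed over the noisy subset, assuming you keep three label arrays per example (predicted, flipped/fake, and original/true). The synthetic arrays below are placeholders, not real training outputs:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000  # hypothetical size of the 40% noisy subset of CIFAR-10

# `clean` are the original labels; `noisy` are the flipped (fake) labels,
# constructed so that a flipped label always differs from the true one.
clean = rng.integers(0, 10, size=n)
noisy = (clean + rng.integers(1, 10, size=n)) % 10

# `preds` stands in for the model's predictions at a given epoch.
preds = rng.integers(0, 10, size=n)

memorized = np.mean(preds == noisy)    # fraction predicting the fake label
generalized = np.mean(preds == clean)  # fraction predicting the true label
print(f"memorized: {memorized:.3f}, generalized: {generalized:.3f}")
```

Since a flipped label never equals the true label, the two fractions are disjoint and sum to at most 1, which matches the ~17% / ~67% split above.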
Would you please share more details on your implementation of this part? Thank you.