
The results of running eval show only 1 digit after decimal point for acc on all tested tasks #1227

Open
lernerjenny opened this issue May 22, 2024 · 2 comments
Labels: bug (Something isn't working)

Comments

lernerjenny commented May 22, 2024

Describe the bug
The results of running eval.py show only 1 digit after the decimal point for acc on all tested tasks.
If there is a configuration argument that controls this, I found no mention of it.

Example:
{
  "results": {
    "hellaswag": {
      "acc,none": 0.3,
      "acc_stderr,none": 0.15275252316519466,
      "acc_norm,none": 0.4,
      "acc_norm_stderr,none": 0.16329931618554522
    },
    "arc_easy": {
      "acc,none": 0.3,
      "acc_stderr,none": 0.15275252316519466,
      "acc_norm,none": 0.3,
      "acc_norm_stderr,none": 0.15275252316519466
    },
    "piqa": {
      "acc,none": 0.8,
      "acc_stderr,none": 0.13333333333333333,
      "acc_norm,none": 0.8,
      "acc_norm_stderr,none": 0.13333333333333333
    },
    "sciq": {
      "acc,none": 0.9,
      "acc_stderr,none": 0.09999999999999999,
      "acc_norm,none": 0.9,
      "acc_norm_stderr,none": 0.09999999999999999
    },
    "arc_challenge": {
      "acc,none": 0.2,
      "acc_stderr,none": 0.13333333333333333,
      "acc_norm,none": 0.2,
      "acc_norm_stderr,none": 0.13333333333333333
    },
  },
To Reproduce
Steps to reproduce the behavior:

  1. Run python deepy.py eval.py --conf_dir pythia 1B.yml --eval_tasks lambada_openai hellaswag piqa arc_easy arc_challenge winogrande sciq
  2. Observe the generated results JSON

Expected behavior
Provide a configuration argument to set the number of digits after the decimal point, and show at least 4 digits after the decimal point by default.
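
For reference, one way such an option could work when the results dict is written out; this is purely an illustrative sketch (round_floats and the default of 4 digits are assumptions, not existing GPT-NeoX code):

    import json

    def round_floats(obj, digits=4):
        # Recursively cap the number of decimal digits on every float in a
        # nested results structure before dumping it to JSON.
        if isinstance(obj, float):
            return round(obj, digits)
        if isinstance(obj, dict):
            return {k: round_floats(v, digits) for k, v in obj.items()}
        if isinstance(obj, list):
            return [round_floats(v, digits) for v in obj]
        return obj

    results = {"hellaswag": {"acc,none": 0.3, "acc_stderr,none": 0.15275252316519466}}
    print(json.dumps(round_floats(results), indent=2))  # stderr is printed as 0.1528

(As the comments below show, the one-digit accuracies in this report were real values caused by a hard-coded limit, not an output-formatting issue.)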


lernerjenny added the bug (Something isn't working) label on May 22, 2024
lernerjenny (Author) commented Jun 4, 2024

I found the problem:

limit=10, # limit,

limit=10 causes this issue and, much worse, incorrect eval results.
The following warning can be found in the lm-evaluation-harness: "--limit SHOULD ONLY BE USED FOR TESTING. REAL METRICS SHOULD NOT BE COMPUTED USING LIMIT."
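
For reference, a minimal sketch of the kind of fix, assuming the evaluation is ultimately driven through lm-evaluation-harness's simple_evaluate entry point; the run_eval wrapper and eval_limit argument below are illustrative assumptions, not the actual GPT-NeoX adapter code:

    # Illustrative sketch only: GPT-NeoX's real adapter may call a lower-level
    # evaluator; `run_eval` and `eval_limit` are assumed names, not NeoX API.
    from lm_eval import simple_evaluate

    def run_eval(model_adapter, tasks, eval_limit=None):
        # eval_limit=None evaluates every example in each task's split.
        # A small integer (e.g. 10) caps each task at that many examples and
        # is only meant for quick smoke tests, never for reported metrics.
        return simple_evaluate(
            model=model_adapter,
            tasks=tasks,
            limit=eval_limit,  # previously hard-coded to 10 in the quoted line
        )

With the cap removed (limit=None), each task runs over its full evaluation split, so the reported accuracies are no longer restricted to multiples of 0.1.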

StellaAthena (Member) replied:

Yes, and specifically, using limit=10 means only 10 items are run per task, so accuracy can only be a multiple of 0.1 and it is mathematically impossible for any further digits to be non-zero :)
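
For illustration, the stderr values in the report above are exactly what 10 samples per task would give, assuming the harness reports the sample standard error sqrt(p * (1 - p) / (n - 1)) for a binary accuracy metric (a standard formula, stated here as an assumption about the harness internals):

    import math

    n = 10  # examples per task when limit=10
    for p in (0.3, 0.4, 0.8, 0.9, 0.2):  # accuracies reported in the issue
        stderr = math.sqrt(p * (1 - p) / (n - 1))
        print(f"acc={p:.1f}  stderr~={stderr:.6f}")

    # acc=0.3 -> stderr ~= 0.152753 (hellaswag, arc_easy)
    # acc=0.4 -> stderr ~= 0.163299 (hellaswag acc_norm)
    # acc=0.8 -> stderr ~= 0.133333 (piqa)
    # acc=0.9 -> stderr ~= 0.100000 (sciq)
    # acc=0.2 -> stderr ~= 0.133333 (arc_challenge)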
