Implement the ARC Challenge evaluation #15
Source: https://allenai.org/data/arc
From the GPT-3 paper
The evaluation code should be modeled after the interface in lm_eval/base.py and the example of the BoolQ task in lm_eval/tasks/superglue.py.