Add flag to allow the evaluations to be carried out on a subset of the eval tasks #60

Closed · StellaAthena opened this issue Nov 23, 2020 · 0 comments · Fixed by #59 or #61
Labels: feature request (a feature that isn't implemented yet)

@StellaAthena (Member)

No description provided.
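Since the issue body is empty, the intent has to be read from the title: expose a command-line flag that restricts evaluation to a named subset of the registered eval tasks instead of always running the full suite. Below is a minimal sketch of what such a flag could look like, assuming an argparse-based CLI and a hypothetical `TASK_REGISTRY` mapping; neither the flag name nor the registry is taken from the harness's actual code.

```python
import argparse

# Hypothetical stand-in for the harness's mapping of task names to
# task implementations; the real registry lives in the harness itself.
TASK_REGISTRY = {
    "lambada": None,
    "hellaswag": None,
    "piqa": None,
}

def parse_args():
    parser = argparse.ArgumentParser()
    # Comma-separated task names; "all" (the default) runs every task.
    parser.add_argument("--tasks", type=str, default="all")
    return parser.parse_args()

def main():
    args = parse_args()
    if args.tasks == "all":
        task_names = sorted(TASK_REGISTRY)
    else:
        task_names = args.tasks.split(",")
        unknown = [t for t in task_names if t not in TASK_REGISTRY]
        if unknown:
            raise ValueError(f"Unknown tasks: {unknown}")
    for name in task_names:
        # The actual evaluation call would go here.
        print(f"Evaluating on task: {name}")

if __name__ == "__main__":
    main()
```

Invoked as, say, `python main.py --tasks lambada,piqa`, this sketch would evaluate only the two named tasks and fail fast on any unrecognized task name.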

@StellaAthena StellaAthena added the feature request label Nov 23, 2020
@StellaAthena StellaAthena added this to To do in New Features via automation Nov 23, 2020
@StellaAthena StellaAthena moved this from To do to In progress in New Features Nov 23, 2020
@StellaAthena StellaAthena linked a pull request Nov 23, 2020 that will close this issue
New Features automation moved this from In progress to Done Nov 23, 2020
@StellaAthena StellaAthena linked a pull request Nov 30, 2020 that will close this issue
KhalidAlt pushed a commit to asas-lab/lm-evaluation-harness that referenced this issue May 18, 2022
Support batching for API & Refactor model scripts
2 participants