Issues: EleutherAI/lm-evaluation-harness
#1839 · AssertionError: aggregation named 'mean' conflicts with existing registered aggregation! (opened May 14, 2024 by hunter2009pf)
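This error typically means the same aggregation name was registered twice, e.g. because a task module was imported along two paths. A minimal sketch of the registry pattern involved (hypothetical names; not the harness's actual source) shows how the assertion fires:

```python
# Hypothetical sketch of a name-based registry that rejects duplicates,
# mirroring the AssertionError message reported in the issue.
AGGREGATION_REGISTRY = {}

def register_aggregation(name):
    def decorate(fn):
        # A second registration under the same name trips this assertion.
        assert name not in AGGREGATION_REGISTRY, (
            f"aggregation named '{name}' conflicts with existing registered aggregation!"
        )
        AGGREGATION_REGISTRY[name] = fn
        return fn
    return decorate

@register_aggregation("mean")
def mean(items):
    return sum(items) / len(items)

# Registering "mean" again (e.g. via a duplicate import) raises:
try:
    register_aggregation("mean")(lambda items: 0)
except AssertionError as e:
    print(e)
```

In a pattern like this, de-duplicating the import (or making registration idempotent for identical functions) avoids the conflict.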
#1866 · Llama2-hf-q40.gguf model got very poor results on lambada_openai tasks, but was fine on other tasks (opened May 21, 2024 by intellinjun)
#1947 · Add MMLU-Pro Dataset [feature request, good first issue, help wanted] (opened Jun 11, 2024 by haileyschoelkopf)
#1949 · Check compatibility of local-completions with vLLM (returns logits) for multiple_choice tasks [bug] (opened Jun 11, 2024 by haileyschoelkopf)
#837 · Should num_fewshot be type list? [feature request] (opened Sep 6, 2023 by Wehzie)
#1006 · Stability Upstream translated task [feature request] (opened Nov 20, 2023 by StellaAthena)
#1078 · Add task variants replicating Llama 1 / 2 evaluation numbers [feature request] (opened Dec 7, 2023 by haileyschoelkopf)
#1192 · Organize / Cleanup Logging + Levels [documentation, feature request] (opened Dec 21, 2023 by haileyschoelkopf)
#1231 · CoQA's implementation only predicts the last answer of each text [bug, good first issue] (opened Jan 1, 2024 by glerzing)
#1362 · Is there a way to use "generate_until" to evaluate on the ceval or cmmlu datasets? With a chat model, a prompt template is added so the model answers in free form, and the answer choice (A, B, C, D) may not be the first generated token. (opened Jan 27, 2024 by noforit)
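A common workaround for the situation described in #1362 is to post-process the generated text and pull out the first standalone choice letter rather than relying on the first token. A minimal sketch (hypothetical helper, not part of the harness's API):

```python
import re
from typing import Optional

def extract_choice(response: str, choices: str = "ABCD") -> Optional[str]:
    """Return the first standalone choice letter found in a free-form answer.

    Handles responses like "The answer is B." or "(C) is correct", where the
    letter is not the first generated token. Returns None if no letter matches.
    """
    # \b...\b ensures the letter stands alone rather than inside a word.
    match = re.search(rf"\b([{choices}])\b", response)
    return match.group(1) if match else None

print(extract_choice("The answer is B, because ..."))  # B
```

A filter of this shape can be wired into a generation-based task's answer-extraction step; the regex would need tuning for answer formats the target model actually produces (e.g. Chinese punctuation around the letter for ceval/cmmlu prompts).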