Issues: EleutherAI/lm-evaluation-harness
Add More Tests
Labels: feature request (A feature that isn't implemented yet.)
#1827 opened May 12, 2024 by haileyschoelkopf
Llama2-hf-q40.gguf model got very poor results on lambada_openai tasks, but was fine on other tasks
#1866 opened May 21, 2024 by intellinjun
--device cuda:3 not honored when using --model vllm
Labels: bug (Something isn't working.), documentation (Improvements or additions to documentation.)
#1846 opened May 15, 2024 by LGLG42
AssertionError: aggregation named 'mean' conflicts with existing registered aggregation!
#1839 opened May 14, 2024 by hunter2009pf
Evaluation results of llama2 with lm-evaluation-harness using wikitext-2
#1833 opened May 13, 2024 by l2002924700
Using Language Models as Evaluators
Labels: feature request
#1831 opened May 13, 2024 by lintangsutawika
Evaluating gsm8k from a local dataset folder fails with "ValueError: BuilderConfig 'main' not found."
#1829 opened May 12, 2024 by Jp-17
TypeError: 'NoneType' object is not iterable when using cache and loglikelihood_rolling
#1821 opened May 10, 2024 by mdocekal
Task description newline characters removed by Jinja templating, affecting generated requests and performance
#1817 opened May 9, 2024 by ma0li
openai.InternalServerError: the model generated invalid Unicode output
#1783 opened May 4, 2024 by djstrong