Issues: EleutherAI/lm-evaluation-harness
#2035 "The problem of generate responses with my own trained model" by marvelcell, closed Jun 29, 2024
#2021 "YAML config was updated, but the project still remains the same as before" by 2018211801, closed Jun 27, 2024
#2018 "Does it support Triton server?" [asking questions: for clarification / support on library usage] by AndyZZt, closed Jun 25, 2024
#1980 "How to enable trust_remote_code when encountered programmatically via get_task_dict?" by Jack-Khuu, closed Jun 18, 2024
#1978 "Add a way to instantiate from HF.AutoModel (again)" by dmitrii-palisaderesearch, closed Jun 19, 2024
#1966 "TemplateLM#_encode_pair() only works for HF transformers auto-models" by Birch-san, closed Jun 14, 2024
#1957 "Cannot load model 'local-chat-completions' and 'local-completions'" by awesom112, closed Jun 12, 2024
#1953 "Keep getting error: 'VLLM' object has no attribute 'AUTO_MODEL_CLASS'" by andrew0411, closed Jun 12, 2024
#1935 "Parallel GPU evaluation using simple_evaluate /evaluate functions? #1934" by PalaashAgrawal, closed Jun 7, 2024
#1934 "Parallel GPU evaluation using simple_evaluate /evaluate functions?" by Naitik1502, closed Jun 7, 2024
#1932 "--trust_remote_code does it actually do anything?" [bug: something isn't working] by devzzzero, closed Jun 19, 2024