openai.BadRequestError when running lm_eval with piqa task using vLLM's OpenAI compatible server #1735
Comments
Hi! If you change to logprobs=5, does this run correctly? And do the scores appear similar to what is expected / what your model run via HF reports?
Hi, thanks for the response. In my environment, replacing
vLLM's OpenAI compatible server has
Description
An error occurs when running lm_eval with the piqa task using vLLM's OpenAI compatible server as follows:
Steps to Reproduce:
Additional Information:
According to the OpenAI docs, the completions endpoint accepts at most 5 for the logprobs parameter. However, lm_eval specifies more than 5 logprobs, e.g.:
lm-evaluation-harness/lm_eval/models/openai_completions.py, line 218 (commit 3196e90)
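One way to avoid the BadRequestError, sketched below under the assumption that the server enforces the OpenAI completions limit of 5 logprobs: clamp the requested value client-side before building the request. The function name and structure here are illustrative, not the harness's actual code.

```python
def clamp_logprobs(requested: int, api_max: int = 5) -> int:
    """Return a logprobs value the completions API will accept.

    The OpenAI completions endpoint rejects requests with logprobs > 5,
    so any larger request is reduced to the documented maximum.
    """
    return min(requested, api_max)


# A request for 10 logprobs (as in the harness) is clamped to 5,
# while values already within the limit pass through unchanged.
print(clamp_logprobs(10))  # → 5
print(clamp_logprobs(3))   # → 3
```

Note that clamping changes which continuations' logprobs are visible to the harness, so scores should be sanity-checked against an HF run, as the maintainer suggests above.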