OpenAI completions model not using OpenAI Completion API properly to extract LogProbs #1967
Comments
Is this solved at #1919 or is it some other issue?
It is a different issue.
chimezie added a commit to chimezie/lm-evaluation-harness-mlx that referenced this issue on Jun 14, 2024
The get_results method in lm_eval/models/openai_completions.py does not use the OpenAI Completion API correctly when extracting log probabilities.
Below is the API definition for LogProbs (from openai):
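The snippet itself was not captured in this page. As a reference, here is a sketch of the Logprobs type as it appears in the openai Python package's type hints, written as a plain dataclass for brevity (field names follow the published hints; treat the exact class layout as approximate):

```python
from dataclasses import dataclass
from typing import Dict, List, Optional

# Sketch of openai.types.completion_choice.Logprobs, per the openai
# package's type hints. All fields are parallel lists indexed by token
# position in the completion.
@dataclass
class Logprobs:
    text_offset: Optional[List[int]] = None        # character offset of each token
    token_logprobs: Optional[List[float]] = None   # log probability of each token
    tokens: Optional[List[str]] = None             # the token strings themselves
    top_logprobs: Optional[List[Dict[str, float]]] = None  # top alternatives per position
```

Note that `token_logprobs` and `tokens` are separate parallel lists: the float at index i in `token_logprobs` belongs to the string at index i in `tokens`.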
However, get_results in that module is defined this way:
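The original snippet is also missing from this page; based on the issue's description, the problematic pattern looks roughly like the following (this is a reconstruction for illustration, not the verbatim harness code):

```python
from types import SimpleNamespace

def get_result(response, ctxlen):
    # Reconstruction of the pattern described in the issue: elements of
    # token_logprobs (floats) are reused as if they were token strings.
    is_greedy = True
    logprobs = response.logprobs.token_logprobs
    continuation_logprobs = sum(logprobs[ctxlen:])  # summing floats: correct use
    for i in range(ctxlen, len(logprobs)):
        token = logprobs[i]  # BUG: this is a float log prob, not a token string
        top_tokens = response.logprobs.top_logprobs[i]
        top_token = max(top_tokens.keys(), key=lambda k: top_tokens[k])
        if top_token != token:  # str vs. float: never equal, so is_greedy is always False
            is_greedy = False
            break
    return continuation_logprobs, is_greedy

# Demo with a hand-built response: even a perfectly greedy completion is
# misclassified, because a float is compared against a token string.
_lp = SimpleNamespace(
    tokens=["Hello", " world"],
    token_logprobs=[-0.05, -0.10],
    top_logprobs=[{"Hello": -0.05}, {" world": -0.10}],
)
logprob_sum, is_greedy = get_result(SimpleNamespace(logprobs=_lp), 0)
```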
It appears the method treats the items of response.logprobs.token_logprobs as both log probabilities (it sums them into the continuation_logprobs variable) and tokens (it assigns individual items to a variable named token).
Per the OpenAI type hints, token_logprobs should be a list of floats, and the token corresponding to each entry is the string at the same index in response.logprobs.tokens (a list of strings).
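A corrected sketch of the extraction, pairing each log probability with the token string at the same index in the parallel tokens list (again a reconstruction under the issue's description, not the verbatim fix):

```python
from types import SimpleNamespace

def get_result(response, ctxlen):
    """Sum the continuation log probs and check greediness, pairing each
    float in token_logprobs with the token string at the same index."""
    lp = response.logprobs
    continuation_logprobs = sum(lp.token_logprobs[ctxlen:])
    is_greedy = True
    for i in range(ctxlen, len(lp.token_logprobs)):
        token = lp.tokens[i]  # token string from the parallel list
        top_tokens = lp.top_logprobs[i]
        top_token = max(top_tokens, key=top_tokens.get)
        if top_token != token:
            is_greedy = False
            break
    return continuation_logprobs, is_greedy

# Demo: the same greedy response is now recognized as greedy.
_lp = SimpleNamespace(
    tokens=["Hello", " world"],
    token_logprobs=[-0.05, -0.10],
    top_logprobs=[{"Hello": -0.05}, {" world": -0.10}],
)
fixed_sum, fixed_greedy = get_result(SimpleNamespace(logprobs=_lp), 0)
```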