
OpenaiCompletionsLM invokes the completions API with max_tokens set to 0 #1903

Open
chimezie opened this issue May 29, 2024 · 1 comment

As per the title, the completions API is invoked with max_tokens = 0. A server that honors this value will generate nothing: the API documentation says max_tokens defaults to 16, and there is no indication that a value of 0 has any special meaning.

It seems the value of self._max_gen_toks (via the max_gen_toks property) should be used instead.
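A minimal sketch of the suggested fix, assuming a helper that builds the request keyword arguments. The names build_completion_kwargs and DEFAULT_MAX_GEN_TOKS are illustrative, not the harness's actual code; the point is simply that the model's max_gen_toks value, not a hard-coded 0, should reach the API:

```python
# Hypothetical sketch: build the kwargs for a completions request, forwarding
# the model's max_gen_toks instead of the hard-coded 0 described in this issue.
DEFAULT_MAX_GEN_TOKS = 256  # assumed default; the real value lives in the harness config


def build_completion_kwargs(prompt, max_gen_toks=DEFAULT_MAX_GEN_TOKS, **extra):
    """Return keyword arguments for a completions.create() call.

    A max_tokens of 0 would (per the OpenAI docs) either be rejected by the
    server or produce no output, so we validate and forward max_gen_toks.
    """
    if max_gen_toks < 1:
        raise ValueError("max_tokens must be at least 1")
    return {"prompt": prompt, "max_tokens": max_gen_toks, **extra}
```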


1m0d commented Jun 4, 2024

I have the same issue when using the local completions model (vLLM with an OpenAI-compatible server); I get the following error:
openai.BadRequestError: Error code: 400 - {'object': 'error', 'message': 'max_tokens must be at least 1, got 0.', 'type': 'BadRequestError', 'param': None, 'code': 400}

Additionally, the logprobs parameter is set to 10 even though the maximum value as per the OpenAI documentation is 5.


https://platform.openai.com/docs/api-reference/completions/create#completions-create-logprobs
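A minimal sketch of a guard for this second problem, assuming a small helper before the request is issued. The name clamp_logprobs is illustrative, not part of the harness:

```python
# Hypothetical sketch: cap a requested logprobs value at the documented
# OpenAI completions maximum of 5 before issuing the request.
OPENAI_MAX_LOGPROBS = 5


def clamp_logprobs(requested: int) -> int:
    """Return a logprobs value the completions endpoint will accept."""
    return min(requested, OPENAI_MAX_LOGPROBS)
```

With this guard, the harness's current value of 10 would be sent as 5, while smaller requests pass through unchanged.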

chimezie added a commit to chimezie/lm-evaluation-harness-mlx that referenced this issue Jun 14, 2024