Replies: 2 comments
-
@apepkuss You can just ask the model to generate n responses.
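That suggestion amounts to looping over a single-completion call. A minimal sketch, where `complete` is a hypothetical stand-in for whatever single-completion call your client exposes (not llama.cpp's actual API):

```python
# Emulate OpenAI's `n` by issuing the same request n times.
def collect_n_choices(complete, prompt, n):
    """Call `complete(prompt)` n times and gather the results."""
    return [complete(prompt) for _ in range(n)]

# Stub completion function, for illustration only.
def fake_complete(prompt):
    return f"response to: {prompt}"

choices = collect_n_choices(fake_complete, "Hello", 3)
print(len(choices))  # 3
```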
-
We don't support that for now, but you can get the same result by passing an array of prompts that repeats the same prompt; that is equivalent to requesting n choices. Remember to set a high temperature so the completions differ.
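The array-of-prompts workaround can be sketched as building one batched request. The field names below (`prompt`, `temperature`) are assumptions about the request schema, so check your server's actual API:

```python
# Emulate n choices by repeating the same prompt in an array.
def make_batched_request(prompt, n, temperature=1.2):
    # A high temperature keeps the n completions from being identical.
    return {
        "prompt": [prompt] * n,  # same prompt, n times
        "temperature": temperature,
    }

req = make_batched_request("Tell me a joke", 4)
print(req["prompt"])
```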
-
Hi guys,
OpenAI defines a param `n` in the chat completion request, which specifies how many chat completion choices to generate for each input message. Does llama.cpp provide support for multiple chat completions? Thanks a lot!
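For reference, this is roughly how `n` appears in an OpenAI chat completion request body (payload shape only; the client call and response handling are omitted):

```python
# Build an OpenAI-style chat completion payload requesting n choices.
def chat_request(message, n):
    return {
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": message}],
        "n": n,  # number of choices returned in response["choices"]
    }

payload = chat_request("Hi there", 3)
print(payload["n"])  # 3
```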