Add support for llama.cpp llm evaluator #7718
Is your feature request related to a problem? Please describe.
At present, Haystack's evaluators that extend LLMEvaluator support only OpenAI. I would like llama.cpp support to be added so that evaluation can run locally, offline, and free of API costs.
Describe the solution you'd like
Now that LlamaCppChatGenerator is implemented (deepset-ai/haystack-core-integrations#723), it is possible to constrain the model's output to JSON. We can therefore split the evaluator's prompt into ChatMessages (a system message for the instructions and examples, a user message for the input tuples) and have the result returned in the expected JSON format.
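A minimal sketch of that flow is below, assuming the import path and constructor of the llama.cpp integration as of #723; the GGUF model path, the prompt wording, and the `{"score": ...}` reply schema are illustrative assumptions, not part of the integration:

```python
import json

from haystack.dataclasses import ChatMessage
from haystack_integrations.components.generators.llama_cpp import LlamaCppChatGenerator

# The system message carries the evaluator's instructions and examples;
# each user message carries one input tuple to be judged.
messages = [
    ChatMessage.from_system(
        "You are an evaluator. Given a predicted answer and a ground-truth "
        'answer, respond only with JSON of the form {"score": 0} or {"score": 1}.'
    ),
    ChatMessage.from_user(
        json.dumps({"predicted_answers": "Berlin", "ground_truth_answers": "Berlin"})
    ),
]

generator = LlamaCppChatGenerator(
    model="models/openchat-3.5.Q4_K_M.gguf",  # placeholder local GGUF path
    n_ctx=4096,
    generation_kwargs={
        # llama.cpp can constrain decoding to valid JSON
        "response_format": {"type": "json_object"},
        "temperature": 0.0,
    },
)
generator.warm_up()

reply = generator.run(messages=messages)["replies"][0]
# `.text` on recent Haystack releases; older versions expose `.content`
score = json.loads(reply.text)["score"]
```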
I have a WIP implementation, and I would like feedback on how to integrate llama.cpp into the existing evaluators. At the moment I call the chat generator manually inside llm_evaluator to confirm the approach works, with the model and generation kwargs hard-coded (much like the sketch above, but fixed in place rather than configurable).
I have a couple of ideas for how to resolve this. One is to pass a configured chat generator instance to the evaluator (the maintainer reply below responds to this option; see the sketch after this paragraph). Another idea, though I don't think it would be ideal, is to have entirely separate evaluator components for llama.cpp.
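For concreteness, the generator-instance idea might look roughly like the following; `GeneratorBackedEvaluator` and its constructor signature are hypothetical, not Haystack's actual LLMEvaluator API:

```python
import json
from typing import Any, Dict, List

from haystack.dataclasses import ChatMessage


class GeneratorBackedEvaluator:
    """Hypothetical evaluator that accepts any chat generator instance."""

    def __init__(self, instructions: str, chat_generator: Any):
        # Anything with run(messages=...) -> {"replies": [ChatMessage, ...]}
        # would fit: OpenAIChatGenerator, LlamaCppChatGenerator, etc.
        self.instructions = instructions
        self.chat_generator = chat_generator

    def run(self, inputs: List[Dict[str, str]]) -> Dict[str, List[Dict[str, Any]]]:
        results = []
        for item in inputs:
            messages = [
                ChatMessage.from_system(self.instructions),
                ChatMessage.from_user(json.dumps(item)),
            ]
            reply = self.chat_generator.run(messages=messages)["replies"][0]
            results.append(json.loads(reply.text))
        return {"results": results}
```

The appeal of this shape is that the evaluator stays agnostic about the backend; the trade-off, as the reply below notes, is that it changes the evaluator's public interface rather than extending the existing one.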
If I can get some feedback, I can submit a PR with revised changes within a couple of days.
Describe alternatives you've considered
N/A
Additional context
N/A
Comments

Thanks for your help with this! Instead of passing generator instances, we can just expand the current approach in the following manner: