
[Evals] Add choice of completion fn to args for modelgraded evals #709

Merged: 2 commits merged into main from andrew/completionfn-modelgraded on Apr 17, 2023

Conversation

andrew-openai (Contributor)

We want to support setting the completion function used as the evaluator in modelgraded evals.

@@ -9,6 +9,7 @@ joke-animals-vs-fruits.dev.v0:
     samples_jsonl: test_multiio/battles/joke_animals_vs_fruits.jsonl
     eval_type: cot_classify
     modelgraded_spec: battle
+    eval_completion_fn: gpt-3.5-turbo
Contributor:

nit: This isn't needed since it's already the default?

andrew-openai (Contributor, Author):

I added it to demonstrate how this would be done in practice.
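For context, a full registry entry using the new argument might look like the sketch below. Only the three args shown in the diff and the eval_completion_fn line come from this PR; the id/metrics block and the class path are assumptions based on how other modelgraded test evals in the repo are typically laid out.

joke-animals-vs-fruits:
  id: joke-animals-vs-fruits.dev.v0
  metrics: [accuracy]
joke-animals-vs-fruits.dev.v0:
  class: evals.elsuite.modelgraded.classify:ModelBasedClassify
  args:
    samples_jsonl: test_multiio/battles/joke_animals_vs_fruits.jsonl
    eval_type: cot_classify
    modelgraded_spec: battle
    # Added by this PR: selects which completion fn grades the outputs.
    # Per the review comment above, gpt-3.5-turbo is already the default,
    # so it is spelled out here only to show the argument in use.
    eval_completion_fn: gpt-3.5-turbo

The completion fn being evaluated is still chosen separately at run time; eval_completion_fn only controls which model acts as the grader.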

andrew-openai merged commit db79fbb into main on Apr 17, 2023
andrew-openai deleted the andrew/completionfn-modelgraded branch on Apr 18, 2023 at 00:23
Linmj-Judy pushed a commit to TablewareBox/evals that referenced this pull request on Feb 27, 2024: …enai#709)