Replace ModelSpecs with CompletionFn #594

Merged: 9 commits into evals_refactor on Apr 6, 2023

Conversation

@jwang47 jwang47 commented Apr 5, 2023

Replace ModelSpec with CompletionFn and allow users to specify CompletionFn instances from the CLI.

Testing done:

oaievalset dummy test --max_samples 1
oaievalset gpt-3.5-turbo test --max_samples 1
oaievalset testing test --max_samples 1

@jwang47 jwang47 changed the base branch from main to evals_refactor April 5, 2023 23:15
@jwang47 jwang47 force-pushed the alvin/evals_refactor_modelspecs branch 4 times, most recently from bc4ceda to 481c456 on April 6, 2023 00:13
@jwang47 jwang47 force-pushed the alvin/evals_refactor_modelspecs branch from 481c456 to 59b85aa on April 6, 2023 00:28
@jwang47 jwang47 changed the title alvin/evals refactor modelspecs Replace ModelSpecs with CompletionFn Apr 6, 2023
@@ -26,10 +24,8 @@ def _purple(str):

 def get_parser() -> argparse.ArgumentParser:
     parser = argparse.ArgumentParser(description="Run evals through the API")
-    parser.add_argument("model", type=str, help="Name of a completion model.")
+    parser.add_argument("completion_fn", type=str, help="One or more CompletionFn URLs, separated by commas (,). The format of a CompletionFn URL can be two forms: 1) an OpenAI API model followed by query parameters (e.g. `gpt-3.5-turbo?api_key=..`) or 2) a path to a Python class followed by query parameters (e.g. `evals.api:OpenAICompletionFn?model=text-davinci-003`).")
Contributor:

I tried multiple completion_fns, but it doesn't seem to work, e.g.

oaieval text-davinci-003,gpt-3.5-turbo pattern_identification --max_samples 1

errors with

AssertionError: Got type <class 'list'>, with val <class 'dict'> for prompt, expected str or list[int] or list[str]

Contributor:

Looking at the error, I realize it's independent of the multiple completion_fns; it's because OpenAICompletionFn does not like the chat prompt. I think we should change this assert to match the one in OpenAIChatCompletionFn, because to_openai_create_prompt should be able to handle converting the chat-format prompt into strings.

Contributor (Author):

Changed to the same assert as OpenAIChatCompletionFn.
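For context, here is a rough sketch of the kind of conversion being discussed: flattening a chat-format prompt into a plain string so a non-chat completion model can accept it. This is illustrative only and is not the actual to_openai_create_prompt implementation; the function name and the message-joining format are assumptions.

```python
def chat_prompt_to_text(prompt):
    """Illustrative sketch: flatten a chat-format prompt into a single string.

    Assumes a chat prompt is a list of {"role": ..., "content": ...} dicts;
    the real conversion in evals may format messages differently.
    """
    if isinstance(prompt, str):
        return prompt
    # Chat format: a list of message dicts.
    assert isinstance(prompt, list) and all(isinstance(m, dict) for m in prompt), (
        f"Got type {type(prompt)} for prompt, expected str or list[dict]"
    )
    return "\n".join(f"{m.get('role', 'user')}: {m.get('content', '')}" for m in prompt)
```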


 def eval_sample(self, sample: Any, *_):
     prompt = sample["input"]
-    result = self._completion_fn(
+    result = self.completion_fn(
         prompt=prompt,
         max_tokens=self.max_tokens,
Contributor:

Similarly, max_tokens needs to be moved, since it is not relevant to all CompletionFns.


 def eval_sample(self, test_sample, rng):
     del rng
     prompt, correct_answers = test_sample["input"], test_sample["ideal"]
-    result = self._completion_fn(
+    result = self.completion_fn(
         prompt=prompt,
         temperature=0.0,  # Q: why are these hardcoded?
Contributor:

Also these need to be moved
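One possible direction, sketched below with hypothetical names: bind sampling arguments such as max_tokens and temperature to the CompletionFn when it is constructed, so eval_sample can call completion_fn(prompt=prompt) without hardcoding backend-specific parameters. This wrapper is an assumption for illustration, not part of this PR.

```python
class FixedParamsCompletionFn:
    """Hypothetical wrapper: carries default sampling kwargs so evals don't have to."""

    def __init__(self, inner_fn, **sampling_kwargs):
        # inner_fn is any CompletionFn-style callable; sampling_kwargs holds
        # defaults such as max_tokens=16 or temperature=0.0.
        self.inner_fn = inner_fn
        self.sampling_kwargs = sampling_kwargs

    def __call__(self, prompt, **kwargs):
        # Per-call kwargs from an eval (if any) override the bound defaults.
        return self.inner_fn(prompt=prompt, **{**self.sampling_kwargs, **kwargs})
```

With something along these lines, an eval's eval_sample would only need self.completion_fn(prompt=prompt), and parameters that are not relevant to every CompletionFn would stay out of the eval code.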

evals/registry.py: review comment (outdated, resolved)
@andrew-openai (Contributor):

LGTM, some minor comments.

I am also curious about where we expect most people to write their implementations of CompletionFns.
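As one possible answer to where such implementations might live, here is a minimal sketch of a user-defined CompletionFn, assuming the protocol is roughly "a callable that takes a prompt plus kwargs and returns an object exposing get_completions() -> list[str]". The class names are hypothetical, and the exact interface may differ from what this PR lands.

```python
class EchoCompletionResult:
    def __init__(self, completion: str):
        self.completion = completion

    def get_completions(self) -> list[str]:
        # Evals would read sampled text from the result via get_completions().
        return [self.completion]


class EchoCompletionFn:
    """Toy CompletionFn that simply echoes the prompt back."""

    def __call__(self, prompt, **kwargs) -> EchoCompletionResult:
        # `prompt` may be a plain string or a chat-style list of
        # {"role": ..., "content": ...} dicts, so normalize to text first.
        if isinstance(prompt, list):
            prompt = "\n".join(m.get("content", "") for m in prompt)
        return EchoCompletionResult(prompt)
```

Registered under an importable module path, something like this could presumably then be referenced from the CLI via the `module:Class?params` URL form described above.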

@jwang47 jwang47 force-pushed the alvin/evals_refactor_modelspecs branch from 8d7c964 to 9ec3693 on April 6, 2023 20:23
@jwang47 jwang47 force-pushed the alvin/evals_refactor_modelspecs branch from 9ec3693 to 99ca1b3 on April 6, 2023 21:05
@andrew-openai (Contributor):

Some issues we found:

- I believe translate.py needs to be ported (remove modelspec)
- I think there are some issues with --no-local-run

@jwang47 jwang47 force-pushed the alvin/evals_refactor_modelspecs branch from 842dd17 to ba8708d on April 6, 2023 21:19
jwang47 and others added 6 commits April 6, 2023 14:46
# Thank you for contributing an eval! ♥️

🚨 Please make sure your PR follows these guidelines; __failure to follow
the guidelines below will result in the PR being closed automatically__.
Note that even if the criteria are met, that does not guarantee that the
PR will be merged or that GPT-4 access will be granted. 🚨

__PLEASE READ THIS__:

In order for a PR to be merged, the eval must fail on GPT-4. We are aware
that users do not currently have access, so you will not be able to tell
whether the eval fails or not. Please run your eval with GPT-3.5-Turbo,
but keep in mind that when we run the eval, if GPT-4 scores higher than
90% on it, we will likely reject the submission, since GPT-4 is already
capable of completing the task.

We plan to roll out a way for users submitting evals to see the eval's
performance on GPT-4 soon. Stay tuned! Until then, you will not be able
to see how the eval performs on GPT-4. We encourage partial PRs with
~5-10 examples that we can then run the evals on and share the results
with you, so you know how your eval does with GPT-4 before writing all
100 examples.

## Eval details 📑
### Eval name
[Insert Eval name here]

### Eval description

[Insert a short description of what your eval does here]

### What makes this a useful eval?

[Insert why this eval is worth including and any additional context]

## Criteria for a good eval ✅

Below are some of the criteria we look for in a good eval. In general,
we are seeking cases where the model does not do a good job despite
being capable of generating a good response (note that there are some
things large language models cannot do, so those would not make good
evals).

Your eval should be:

- [ ] Thematically consistent: The eval should be thematically
consistent. We'd like to see a number of prompts all demonstrating some
particular failure mode. For example, we can create an eval on cases
where the model fails to reason about the physical world.
- [ ] Contains failures where a human can do the task, but either GPT-4
or GPT-3.5-Turbo cannot.
- [ ] Includes good signal around what is the right behavior. This means
either a correct answer for `Basic` evals or the `Fact` Model-graded
eval, or an exhaustive rubric for evaluating answers for the `Criteria`
Model-graded eval.
- [ ] Includes at least 100 high-quality examples (it is okay to
contribute only 5-10 meaningful examples and have us test them with GPT-4
before adding all 100)

If there is anything else that makes your eval worth including, please
document it below.

### Unique eval value

> Insert what makes your eval high quality that was not mentioned above.
(Not required)

## Eval structure 🏗️

Your eval should
- [ ] Check that your data is in `evals/registry/data/{name}`
- [ ] Check that your yaml is registered at
`evals/registry/evals/{name}.yaml`
- [ ] Ensure you have the right to use the data you submit via this eval

(For now, we will only be approving evals that use one of the existing
eval classes. You may still write custom eval classes for your own
cases, and we may consider merging them in the future.)

## Final checklist 👀

### Submission agreement

By contributing to Evals, you are agreeing to make your evaluation logic
and data available under the same MIT license as this repository. You must have
adequate rights to upload any data used in an Eval. OpenAI reserves the
right to use this data in future service improvements to our product.
Contributions to OpenAI Evals will be subject to our usual Usage
Policies (https://platform.openai.com/docs/usage-policies).

- [ ] I agree that my submission will be made available under an MIT
license and complies with OpenAI's usage policies.

### Email address validation

If your submission is accepted, we will be granting GPT-4 access to a
limited number of contributors. Access will be given to the email
address associated with the merged pull request.

- [ ] I acknowledge that GPT-4 access will only be granted, if
applicable, to the email address used for my merged pull request.

### Limited availability acknowledgement

We know that you might be excited to contribute to OpenAI's mission,
help improve our models, and gain access to GPT-4. However, due to the
requirements mentioned above and high volume of submissions, we will not
be able to accept all submissions and thus not grant everyone who opens
a PR GPT-4 access. We know this is disappointing, but we hope to set the
right expectation before you open this PR.

- [ ] I understand that opening a PR, even if it meets the requirements
above, does not guarantee the PR will be merged nor GPT-4 access
granted.

### Submit eval

- [ ] I have filled out all required fields in the evals PR form
- [ ] (Ignore if not submitting code) I have run `pip install
pre-commit; pre-commit install` and have verified that `black`, `isort`,
and `autoflake` are running when I commit and push

Failure to fill out all required fields will result in the PR being
closed.

### Eval JSON data 

Since we are using Git LFS, we ask eval submitters to include as many
eval samples as possible (at least 5) from their contribution here:

<details>
  <summary>View evals in JSON</summary>

  ### Eval
  ```jsonl
  INSERT_EVAL_HERE
  ```
</details>

---------

Co-authored-by: Alvin Wang <[email protected]>
@jwang47 jwang47 merged commit 7266049 into evals_refactor Apr 6, 2023
@jwang47 jwang47 deleted the alvin/evals_refactor_modelspecs branch April 6, 2023 23:11
@jwang47 jwang47 mentioned this pull request Apr 6, 2023
jwang47 added a commit that referenced this pull request Apr 6, 2023
- [evals] Refactor evals package to expose `completion_fn`.
- Add `record_raw_samples`
- Andrew/evals refactor (#579)
- update manifest and pyproject to support fetching data on pip install
(#592)
- we need to still use the interop for string/list[dicts] for
modelgraded evals
- refactor simple evals to not use result.prompt (#593)
- Clean up duplicate recordings
- Replace ModelSpecs with CompletionFn (#594)
- Add --registry_path CLI arg
