[evals] Refactor evals package to expose `completion_fn` · #515 · Merged

Commits (23):
- d87a056 · [evals] Refactor evals package to expose `completion_fn`. (hwchung27)
- d9c1395 · Add `record_raw_samples` (hwchung27)
- a1c6207 · Andrew/evals refactor (#579) (andrew-openai)
- deb29d3 · update manifest and pyproject to support fetching data on pip install… (andrew-openai)
- 9b1c350 · we need to still use the interop for string/list[dicts] for modelgrad… (andrew-openai)
- c470d52 · refactor simple evals to not use result.prompt (#593) (andrew-openai)
- b691cfa · Clean up duplicate recordings (hwchung27)
- 7266049 · Replace ModelSpecs with CompletionFn (#594) (jwang47)
- b2a45cf · Add --registry_path CLI arg (#601) (jwang47)
- 924d2d4 · Andrew/langchain llms (#602) (andrew-openai)
- 4401cce · rm sample freeform, some docs (#603) (andrew-openai)
- 013d636 · Update completion-fn-protocol.md (andrew-openai)
- 08062bc · some documentation cleanup (joe-at-openai)
- 3367006 · some documentation cleanup (joe-at-openai)
- 5e71a76 · some documentation cleanup (joe-at-openai)
- e621b6f · inner monologue example (#610) (andrew-openai)
- 49d17ed · Update README.md (andrew-openai)
- 1bfba77 · Update run-evals.md (andrew-openai)
- b018aff · cleanup (andrew-openai)
- 5222f2c · Merge branch 'main' into evals_refactor_merge_main (andrew-openai)
- 9db703d · get oaieval to run (andrew-openai)
- 02bc2cb · address comments (andrew-openai)
- 50114a5 · bump version (andrew-openai)
Changes from commit 4401cce5dc3475f654ce9e3c5baa285a231edc56 · rm sample freeform, some docs (#603)

Commit message:

# Thank you for contributing an eval! ♥️

🚨 Please make sure your PR follows these guidelines; __failure to follow the guidelines below will result in the PR being closed automatically__. Note that even if the criteria are met, that does not guarantee the PR will be merged nor GPT-4 access granted. 🚨

__PLEASE READ THIS__: In order for a PR to be merged, it must fail on GPT-4. We are aware that right now, users do not have access, so you will not be able to tell if the eval fails or not. Please run your eval with GPT-3.5-Turbo, but keep in mind that, as we run the eval, if GPT-4 scores higher than 90% on it, we will likely reject the PR, since GPT-4 is already capable of completing the task. We plan to roll out a way for users submitting evals to see the eval performance on GPT-4 soon. Stay tuned! Until then, you will not be able to see the eval performance on GPT-4. We encourage partial PRs with ~5-10 examples that we can then run the evals on and share the results with you, so you know how your eval does with GPT-4 before writing all 100 examples.

## Eval details 📑

### Eval name
[Insert Eval name here]

### Eval description
[Insert a short description of what your eval does here]

### What makes this a useful eval?
[Insert why this eval is worth including and any additional context]

## Criteria for a good eval ✅

Below are some of the criteria we look for in a good eval. In general, we are seeking cases where the model does not do a good job despite being capable of generating a good response (note that there are some things large language models cannot do, so those would not make good evals).

Your eval should be:

- [ ] Thematically consistent: The eval should be thematically consistent. We'd like to see a number of prompts all demonstrating some particular failure mode. For example, we can create an eval on cases where the model fails to reason about the physical world.
- [ ] Contains failures where a human can do the task, but either GPT-4 or GPT-3.5-Turbo could not.
- [ ] Includes good signal around what is the right behavior. This means either a correct answer for `Basic` evals or the `Fact` Model-graded eval, or an exhaustive rubric for evaluating answers for the `Criteria` Model-graded eval.
- [ ] Includes at least 100 high-quality examples (it is okay to only contribute 5-10 meaningful examples and have us test them with GPT-4 before adding all 100).

If there is anything else that makes your eval worth including, please document it below.

### Unique eval value
> Insert what makes your eval high quality that was not mentioned above. (Not required)

## Eval structure 🏗️

Your eval should:

- [ ] Check that your data is in `evals/registry/data/{name}`
- [ ] Check that your yaml is registered at `evals/registry/evals/{name}.yaml`
- [ ] Ensure you have the right to use the data you submit via this eval

(For now, we will only be approving evals that use one of the existing eval classes. You may still write custom eval classes for your own cases, and we may consider merging them in the future.)

## Final checklist 👀

### Submission agreement
By contributing to Evals, you are agreeing to make your evaluation logic and data available under the same MIT license as this repository. You must have adequate rights to upload any data used in an Eval. OpenAI reserves the right to use this data in future service improvements to our product. Contributions to OpenAI Evals will be subject to our usual Usage Policies (https://platform.openai.com/docs/usage-policies).

- [ ] I agree that my submission will be made available under an MIT license and complies with OpenAI's usage policies.

### Email address validation
If your submission is accepted, we will be granting GPT-4 access to a limited number of contributors. Access will be given to the email address associated with the merged pull request.

- [ ] I acknowledge that GPT-4 access will only be granted, if applicable, to the email address used for my merged pull request.

### Limited availability acknowledgement
We know that you might be excited to contribute to OpenAI's mission, help improve our models, and gain access to GPT-4. However, due to the requirements mentioned above and the high volume of submissions, we will not be able to accept all submissions and thus not grant everyone who opens a PR GPT-4 access. We know this is disappointing, but we hope to set the right expectation before you open this PR.

- [ ] I understand that opening a PR, even if it meets the requirements above, does not guarantee the PR will be merged nor GPT-4 access granted.

### Submit eval
- [ ] I have filled out all required fields in the evals PR form
- [ ] (Ignore if not submitting code) I have run `pip install pre-commit; pre-commit install` and have verified that `black`, `isort`, and `autoflake` are running when I commit and push

Failure to fill out all required fields will result in the PR being closed.

### Eval JSON data
Since we are using Git LFS, we are asking eval submitters to add in as many Eval Samples (at least 5) from their contribution here:

<details>
<summary>View evals in JSON</summary>

### Eval
```jsonl
INSERT_EVAL_HERE
```
</details>
**completion-fn-protocol.md** (new file):
### The Completion Function Protocol

Here are the interfaces needed to implement the completion function protocol. Any implementation of this interface can be used inside `oaieval`.

#### CompletionFn
Completion functions should implement the `CompletionFn` interface:
```python
class CompletionFn(Protocol):
    def __call__(
        self,
        prompt: Union[str, list[dict[str, str]]],
        **kwargs,
    ) -> CompletionResult:
```

We take a `prompt` representing a single sample from an eval. These prompts can be represented as either a text string or a list of messages in [OpenAI Chat format](https://platform.openai.com/docs/guides/chat/introduction). To work with the existing evals, completion function implementations need to handle both types of input, but we provide helper functionality to convert Chat-formatted messages into a text string if that is the preferred input for your program:
```python
from evals.prompt.base import CompletionPrompt

# chat_prompt: list[dict[str, str]] -> text_prompt: str
text_prompt = CompletionPrompt(chat_prompt).to_formatted_prompt()
```

#### CompletionResult
The completion function should return an object implementing the `CompletionResult` interface:
```python
class CompletionResult(ABC):
    @abstractmethod
    def get_completions(self) -> list[str]:
        pass
```
The `get_completions` method returns a list of string completions. Each element should be considered a unique completion (in most cases this will be a list of length 1).
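To make the two interfaces concrete, here is a minimal sketch of an implementation that simply echoes the prompt back (`EchoCompletionFn` and `EchoCompletionResult` are illustrative names, not part of the package):
```python
from typing import Union

from evals.prompt.base import CompletionPrompt


class EchoCompletionResult:
    """Implements the CompletionResult interface for a single completion."""

    def __init__(self, response: str):
        self.response = response

    def get_completions(self) -> list[str]:
        # One element per completion; here there is always exactly one.
        return [self.response]


class EchoCompletionFn:
    """Implements the CompletionFn protocol by echoing the prompt back."""

    def __call__(
        self,
        prompt: Union[str, list[dict[str, str]]],
        **kwargs,
    ) -> EchoCompletionResult:
        if not isinstance(prompt, str):
            # Flatten Chat-format messages into a single text prompt.
            prompt = CompletionPrompt(prompt).to_formatted_prompt()
        return EchoCompletionResult(prompt)
```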
#### Using your CompletionFn
This is all that's needed to implement a completion function that works with our existing evals, allowing you to more easily evaluate your end-to-end logic on tasks.

See [completion-fns.md](completion-fns.md) for how to register and use your completion function with `oaieval`.
**completion-fns.md** (new file):
# Completion Functions

## What are completion functions
In [run-evals.md](run-evals.md), we learned how to make calls to `oaieval` to run an eval against a completion function. Completion functions are generalizations of model completions, where a "completion" is some text output that serves as our answer to the prompt. For example, if "Who played the girl elf in the hobbit?" is our prompt, the correct completion is "Evangeline Lilly". While we can just test a model directly to see if it generates "Evangeline Lilly", we can imagine doing numerous other operations under the hood to improve our ability to answer this question, like giving the model access to a browser to look up the answer before responding. Making it easy to implement this kind of under-the-hood operation is the motivation behind building completion functions.

## How to implement completion functions
A completion function needs to implement some interfaces that make it usable within Evals. At its core, it standardizes the input to be a text string or a [Chat conversation](https://platform.openai.com/docs/guides/chat), and the output to be a list of text strings. Implementing this interface will allow you to run your completion function against any eval in Evals.

The exact interfaces needed are described in detail in [completion-fn-protocol.md](completion-fn-protocol.md).

We include some example implementations inside `evals/completion_fns`. For example, the [`LangChainLLMCompletionFn`](../evals/completion_fns/langchain_llm.py) implements a way to generate completions from [LangChain LLMs](https://python.langchain.com/en/latest/modules/models/llms/getting_started.html). We can then use these completion functions with `oaieval`:
```
oaieval langchain/llm/flan-t5-xl test-match
```
## Registering Completion Functions
Once you have written a completion function, we need to make the class visible to the `oaieval` CLI. Similar to how we register our evals, we register completion functions inside `evals/registry/completion_fns` as `yaml` files. Here is the registration for our LangChain LLM completion function:
```yaml
langchain/llm/flan-t5-xl:
  class: evals.completion_fns.langchain_llm:LangChainLLMCompletionFn
  args:
    llm: HuggingFaceHub
    llm_kwargs:
      repo_id: google/flan-t5-xl
```
Here is how it breaks down:
- `langchain/llm/flan-t5-xl`: the top-level key that will be used to access this completion function with `oaieval`.
- `class`: the path to your implementation of the completion function protocol. This class needs to be importable within your Python environment.
- `args`: arguments that are passed to your completion function when it is instantiated.
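Conceptually, loading this registry entry amounts to instantiating the registered class with the `args` mapping as keyword arguments; a sketch of the mechanism, not the registry's actual code:
```python
from evals.completion_fns.langchain_llm import LangChainLLMCompletionFn

# What the registry effectively does with the yaml entry above:
# the key selects the class, and `args` becomes constructor kwargs.
completion_fn = LangChainLLMCompletionFn(
    llm="HuggingFaceHub",
    llm_kwargs={"repo_id": "google/flan-t5-xl"},
)
```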
### Developing Completion Functions outside of Evals
It is possible to register completion functions without directly modifying the registry or code inside Evals by using the `--registry_path` argument. As an example, let's say I want to use `MyCompletionFn` located inside `~/my_project/`:
```
my_project
├── my_completion_fn.py
└── completion_fns
    └── my_completion_fn.yaml
```
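As a sketch, `my_completion_fn.py` could hold a minimal implementation of the protocol from [completion-fn-protocol.md](completion-fn-protocol.md); the canned response here is only a placeholder:
```python
from typing import Union


class MyCompletionResult:
    def __init__(self, response: str):
        self.response = response

    def get_completions(self) -> list[str]:
        return [self.response]


class MyCompletionFn:
    def __call__(
        self,
        prompt: Union[str, list[dict[str, str]]],
        **kwargs,
    ) -> MyCompletionResult:
        # Replace this canned answer with your end-to-end logic
        # (model calls, retrieval, tool use, etc.).
        return MyCompletionResult("This is my completion!")
```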
If `my_project` is importable within the Python environment (accessible via `PYTHONPATH`), we can structure `my_completion_fn.yaml` as:
```yaml
my_completion_fn:
  class: my_project.my_completion_fn:MyCompletionFn
```
Then, we can make calls to `oaieval` using:
```
oaieval my_completion_fn test-match --registry_path ~/my_project
```
**evals/__init__.py**:
```diff
@@ -1,11 +1,8 @@
-from .api import (
-    CompletionFn,
-    CompletionResult,
+from .api import CompletionFn, CompletionResult, DummyCompletionFn, record_and_check_match
+from .completion_fns.openai import (
     OpenAIChatCompletionFn,
     OpenAICompletionFn,
     OpenAICompletionResult,
-    record_and_check_match,
-    sample_freeform,
 )
 from .data import get_csv, get_json, get_jsonl, get_jsonls, get_lines, iter_jsonls
 from .eval import Eval
```
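The names re-exported above can now be imported from the package root. A small usage sketch, assuming (as the name suggests) that `DummyCompletionFn` can be constructed without arguments; check `evals/api.py` for the actual signature:
```python
from evals import CompletionFn, DummyCompletionFn

# DummyCompletionFn satisfies the CompletionFn protocol, so it can be
# used to smoke-test an eval's plumbing without calling a real model.
fn: CompletionFn = DummyCompletionFn()  # assumed no-arg constructor
result = fn("Who played the girl elf in the hobbit?")
print(result.get_completions())  # a list[str], usually of length 1
```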
Review comment: "typo: availableon"