Docs typos (openai#1415)
This fixes a small typo in docs.

## Final checklist 👀

### Submission agreement

By contributing to Evals, you are agreeing to make your evaluation logic
and data available under the same MIT license as this repository. You must have
adequate rights to upload any data used in an Eval. OpenAI reserves the
right to use this data in future service improvements to our product.
Contributions to OpenAI Evals will be subject to our usual Usage
Policies (<https://platform.openai.com/docs/usage-policies>).

- [x] I agree that my submission will be made available under an MIT
license and complies with OpenAI's usage policies.

### Email address validation

If your submission is accepted, we will be granting GPT-4 access to a
limited number of contributors. Access will be given to the email
address associated with the commits on the merged pull request.

- [x] I acknowledge that GPT-4 access will only be granted, if
applicable, to the email address used for my merged pull request.

### Limited availability acknowledgment

We know that you might be excited to contribute to OpenAI's mission,
help improve our models, and gain access to GPT-4. However, due to the
requirements mentioned above and the high volume of submissions, we will
not be able to accept all submissions, and thus will not be able to grant
GPT-4 access to everyone who opens a PR. We know this is disappointing, but
we hope to
set the right expectation before you open this PR.

- [x] I understand that opening a PR, even if it meets the requirements
above, does not guarantee that the PR will be merged or that GPT-4 access
will be granted.

### Submit eval

- [x] I have filled out all required fields of this form
- [x] I have used **Git LFS** for the Eval JSON data
- [x] (Ignore if not submitting code) I have run `pip install
pre-commit; pre-commit install` and have verified that `mypy`, `black`,
`isort`, `autoflake` and `ruff` are running when I commit and push

Failure to fill out all required fields will result in the PR being
closed.
krychu authored Dec 10, 2023
1 parent 3761fa0 commit efd6817
Showing 2 changed files with 3 additions and 3 deletions.
2 changes: 1 addition & 1 deletion docs/completion-fns.md
@@ -25,7 +25,7 @@ langchain/llm/flan-t5-xl:
```
Here is how it breaks down
`langchain/llm/flan-t5-xl`: This is the top level key that will be used to access this completion function with `oaieval`.
-`class`: This is the path to your implementation of the completion function protocol. This class needs to importable within your python environment.
+`class`: This is the path to your implementation of the completion function protocol. This class needs to be importable within your python environment.
`args`: These are arguments that are passed to your completion function when it is instantiated.
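
(For reference, the registry entry that this breakdown describes would look roughly like the sketch below; the exact `class` path and `args` values are assumptions here, since the hunk only shows the closing fence of the YAML block.)

```yaml
# Rough sketch of a completion-function registration; class path and args are assumed.
langchain/llm/flan-t5-xl:
  class: evals.completion_fns.langchain_llm:LangChainLLMCompletionFn
  args:
    llm: HuggingFaceHub
    llm_kwargs:
      repo_id: google/flan-t5-xl
```

The top-level key is then what gets passed to `oaieval`, e.g. `oaieval langchain/llm/flan-t5-xl <eval-name>` (eval name left as a placeholder).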


4 changes: 2 additions & 2 deletions docs/custom-eval.md
@@ -72,7 +72,7 @@ Generally, most `run` methods will follow the same pattern shown here: loading t
This method does the following:
1. Generate a prompt that contains the task statement, a few examples, and the test question.
2. Generate a completion from the model.
-2. Check if the generated answer is correct.
+3. Check if the generated answer is correct.
"""
stuffing = rng.sample(self.train_samples, self.train_samples_per_prompt)

@@ -93,7 +93,7 @@ Generally, most `run` methods will follow the same pattern shown here: loading t
result = self.completion_fn(prompt=prompt, temperature=0.0, max_tokens=1)
sampled = result.get_completions()[0]

-evals.record_and_check_match(prompt=prompt, sampled=sampled, expected=sample["answer"])
+evals.record_and_check_match(prompt=prompt, sampled=sampled, expected=test_sample["answer"])
```
You'll notice that `eval_sample` doesn't take the `recorder` as an argument. This is because `eval_all_samples` sets it to be the default recorder before calling `eval_sample`, and the recording utilities defined in `evals/record.py` use the default recorder. In this example, the `eval_sample` method passes off a lot of the heavy lifting to the `evals.check_sampled_text` utility function, which is defined in `evals/api.py`. This utility function queries the model, defined by `self.model_spec`, with the given `prompt` and checks to see if the result matches the `expected` answer (or one of them, if given a list). It then records these matches (or non matches) using the default recorder.
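
For context on where the corrected `test_sample["answer"]` line sits, here is a minimal sketch of an `eval_sample` method along the lines the docstring above describes. The prompt construction and the `problem` field name are illustrative assumptions; `self.completion_fn`, `get_completions`, and `evals.record_and_check_match` are used exactly as in the hunks shown.

```python
import random

import evals


class ArithmeticSketch(evals.Eval):  # hypothetical class name, for illustration only
    def eval_sample(self, test_sample: dict, rng: random.Random):
        # 1. Build a prompt from a few training examples plus the test question.
        #    self.train_samples is assumed to have been loaded earlier (e.g. in run());
        #    the "problem" field name is an assumption, "answer" matches the diff above.
        stuffing = rng.sample(self.train_samples, self.train_samples_per_prompt)
        prompt = "Solve the following problems.\n"
        for sample in stuffing:
            prompt += f"Q: {sample['problem']}\nA: {sample['answer']}\n"
        prompt += f"Q: {test_sample['problem']}\nA:"

        # 2. Generate a completion from the configured completion function.
        result = self.completion_fn(prompt=prompt, temperature=0.0, max_tokens=1)
        sampled = result.get_completions()[0]

        # 3. Check the generated answer and record the (mis)match with the default recorder.
        evals.record_and_check_match(
            prompt=prompt, sampled=sampled, expected=test_sample["answer"]
        )
```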

