Issues: openai/evals

Issues list

Accuracy Score
#1328 opened Aug 3, 2023 by jeyarajcs
Find claims from research paper
#1338 opened Aug 21, 2023 by ghost
Evaluate the cost of running tests
#1350 opened Sep 15, 2023 by onjas-buidl
Context window of completion functions not accounted for [bug]
#1377 opened Oct 13, 2023 by pskl
OpenAIChatCompletionFn __init__ should accept **kwargs [bug]
#1493 opened Mar 15, 2024 by ezraporter (see the sketch after this list)
Setting completion function args via CLI does not work [bug]
#1504 opened Mar 27, 2024 by LoryPack
Music evals [Idea for Eval]
#138 opened Mar 15, 2023 by bhack
Add BigBench Tasks for evaluation [Idea for Eval]
#153 opened Mar 15, 2023 by Muhtasham
Create an evaluation that measures a model's ability to remember specifics about texts in its dataset? [Idea for Eval]
#383 opened Mar 21, 2023 by mrconter1
pip install evals throws AssertionError [bug]
#918 opened May 4, 2023 by CholoTook
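
Issue #1493 above asks that OpenAIChatCompletionFn's __init__ accept **kwargs, and #1504 reports that completion-function args set via the CLI are not applied. The snippet below is a minimal, hypothetical sketch of the pattern #1493 requests, not the repository's actual implementation; the class name ChatCompletionFnSketch and all parameter names are illustrative assumptions.

```python
# Hedged sketch (not the library's actual code): a completion-function wrapper
# whose __init__ accepts **kwargs, so extra caller-supplied options are stored
# and forwarded instead of raising TypeError. All names here are illustrative.

class ChatCompletionFnSketch:
    def __init__(self, model: str = "gpt-3.5-turbo", **kwargs):
        # Keep unrecognized keyword arguments so registry/CLI-supplied options
        # can flow through to the eventual API request.
        self.model = model
        self.extra_options = kwargs

    def __call__(self, prompt: str, **call_kwargs):
        # Merge per-call options over those captured at construction time.
        request_options = {**self.extra_options, **call_kwargs}
        # A real implementation would call the chat completions API here;
        # this sketch just returns the merged options for inspection.
        return {"model": self.model, "prompt": prompt, "options": request_options}


# Example: options unknown to __init__ are accepted and forwarded.
fn = ChatCompletionFnSketch(model="gpt-4", temperature=0.2, max_tokens=64)
print(fn("Say hello")["options"])  # {'temperature': 0.2, 'max_tokens': 64}
```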