
Upgrade openai to >=1.0.0 #1420

Merged: 11 commits merged into main on Dec 5, 2023

Conversation

@etr2460 (Collaborator) commented Nov 28, 2023

Migrates evals to the new version of openai-python. Ran the migration script, and then manually fixed issues with running tests/evals

Test Plan:

  • unit tests
  • run `python -m evals.cli.oaievalset gpt-3.5-turbo test`
  • test make_me_pay (uses solvers)
  • run `python -m evals.cli.oaieval langchain/chains/llm_math bigrams --max_samples 20 --dry-run`
  • run the retrieval-completionfn example
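For context on what the migration script changes: the core difference in openai>=1.0.0 is that module-level calls are replaced by methods on a client object. A minimal sketch of the call-site rewrite (the model and prompt here are illustrative, not taken from the evals code):

```python
from openai import OpenAI

# openai < 1.0.0 (old style, removed in v1):
#   import openai
#   resp = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=msgs)

# openai >= 1.0.0: instantiate a client, then call namespaced methods.
client = OpenAI()  # reads OPENAI_API_KEY from the environment
resp = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Say hello."}],
)
print(resp.choices[0].message.content)
```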

@mmtmn (Contributor) commented Dec 2, 2023

This is a really great PR for seeing the changes in the new API release; thank you for this.

@etr2460 marked this pull request as ready for review on December 5, 2023 at 00:27
Comment on lines -272 to -273
    if hasattr(openai.error, "set_display_cause"):  # type: ignore
        openai.error.set_display_cause()  # type: ignore
etr2460 (Collaborator, Author):

this seems to have been removed in the new version
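
For context: the openai.error module is gone in openai>=1.0.0; exceptions now live at the top level, and status errors carry the underlying response directly, so there is no display-cause toggle left to call. A rough sketch of inspecting a failure under the new API (the empty messages list is just a way to provoke a 400):

```python
import openai
from openai import OpenAI

client = OpenAI()
try:
    client.chat.completions.create(model="gpt-3.5-turbo", messages=[])
except openai.APIStatusError as e:
    # In openai>=1.0.0, 4xx/5xx responses raise subclasses of APIStatusError,
    # which expose the status code and parsed error body on the exception.
    print(e.status_code, e.body)
```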

@@ -17,7 +17,7 @@
 )

-@backoff.on_exception(backoff.constant, InvalidRequestError, max_tries=3)
+@backoff.on_exception(backoff.constant, BadRequestError, max_tries=3)
Contributor:

This seems like it would replicate the old behavior, but the old behavior doesn't seem correct, since it's retrying on a bad request, which I'd imagine would give the same result on subsequent tries (?)

etr2460 (Collaborator, Author):

Yeah, I agree. I was erring on the side of matching previous behavior vs. fixing anything currently broken.

That said, I can update this to InternalServerError.
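
Assuming the decorator keeps the same constant-wait strategy, the resolved version of this line would presumably look something like the following (the wrapped function is illustrative):

```python
import backoff
import openai

# Retry transient 5xx responses instead of client-side 400s;
# InternalServerError is the openai>=1.0.0 class discussed above.
@backoff.on_exception(backoff.constant, openai.InternalServerError, max_tries=3)
def call_with_retries(client, **kwargs):
    return client.chat.completions.create(**kwargs)
```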

    openai.APIError,
    openai.RateLimitError,
    openai.APIConnectionError,
    openai.Timeout,
Contributor:

Could we have someone who's familiar with how the new errors map to the old ones review this part? I think some 4xx errors shouldn't be retried (e.g. 401), and APIError is raised on both 4xx and 5xx (?)

etr2460 (Collaborator, Author):

APIError, RateLimitError, and APIConnectionError all map 1:1 here.

I think Timeout should actually be replaced with APITimeoutError.

ServiceUnavailableError no longer exists, but I'll replace it with InternalServerError as mentioned above.

etr2460 (Collaborator, Author):

I've removed APIError, as that's a grandparent exception to pretty much all errors from the API. All the others seem reasonable to keep.
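
Putting the thread together, the retryable set after these changes would look roughly like the sketch below (the wrapper name and wait strategy are illustrative; the actual evals helper may differ):

```python
import backoff
import openai

# Per this thread: APIError is dropped (it is an ancestor of nearly every
# API exception), Timeout becomes APITimeoutError, and
# ServiceUnavailableError becomes InternalServerError.
RETRYABLE_ERRORS = (
    openai.RateLimitError,       # 429: retry after backing off
    openai.APIConnectionError,   # network-level failures
    openai.APITimeoutError,      # request timed out
    openai.InternalServerError,  # 5xx: transient server-side errors
)

@backoff.on_exception(backoff.expo, RETRYABLE_ERRORS, max_tries=5)
def completion_with_retries(client, **kwargs):
    return client.chat.completions.create(**kwargs)
```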

@etr2460 merged commit 58ac0ff into main on Dec 5, 2023
2 checks passed
jacobbieker pushed a commit to withmartian/-ARCHIVED--router-evals that referenced this pull request Jan 9, 2024
Linmj-Judy pushed a commit to TablewareBox/evals that referenced this pull request Feb 27, 2024