Upgrade openai to >=1.0.0 #1420
Conversation
This is a really great PR to see the changes of the new API release, thank you for this.

```python
if hasattr(openai.error, "set_display_cause"):  # type: ignore
    openai.error.set_display_cause()  # type: ignore
```
this seems to have been removed in the new version
```diff
@@ -17,7 +17,7 @@
-@backoff.on_exception(backoff.constant, InvalidRequestError, max_tries=3)
+@backoff.on_exception(backoff.constant, BadRequestError, max_tries=3)
```
this seems like it would replicate the old behavior, but the old behavior doesn't seem correct since it's retrying on a bad request which i'd imagine should give the same result on subsequent tries (?)
Yeah, i agree. I was erring on the side of matching previous behavior vs. fixing anything currently broken.
That said, i can update this to InternalServerError
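The distinction being raised here — a 4xx like `BadRequestError` is deterministic and will fail the same way on every retry, while a server-side error is transient and worth retrying — can be sketched with a stdlib-only stand-in for `backoff.on_exception`. Everything in this snippet (`TransientServerError`, `retry_constant`) is hypothetical scaffolding for illustration, not code from this PR:

```python
import time

class TransientServerError(Exception):
    """Hypothetical stand-in for a retryable 5xx error (e.g. InternalServerError)."""

def retry_constant(fn, exceptions, max_tries=3, interval=0.0):
    # Minimal stand-in for @backoff.on_exception(backoff.constant, ..., max_tries=3):
    # retry only the listed exception types, with a constant wait between tries.
    for attempt in range(1, max_tries + 1):
        try:
            return fn()
        except exceptions:
            if attempt == max_tries:
                raise  # out of retries; propagate the last error
            time.sleep(interval)

# A flaky call that fails twice with a transient error, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TransientServerError("503")
    return "ok"

result = retry_constant(flaky, (TransientServerError,), max_tries=3)
```

Retrying a deterministic client error in the same loop would simply exhaust `max_tries` and re-raise, which is why retrying only server-side errors makes sense.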
evals/utils/api_utils.py
```python
openai.APIError,
openai.RateLimitError,
openai.APIConnectionError,
openai.Timeout,
```
could we have someone who's familiar with how the new errors map to the old ones review this part? i think some 4xx shouldn't be retried (e.g. 401), and APIError is raised on both 4xx and 5xx (?)
APIError, RateLimitError, and APIConnectionError all map 1:1 here.
I think Timeout should actually be replaced with APITimeoutError. ServiceUnavailableError no longer exists, but i'll replace it with InternalServerError as mentioned above.
I've removed APIError as that's a grandparent exception to pretty much all errors from the API. All the others seem reasonable to keep
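Pulling the thread together, the old-to-new error-class renames the commenters describe can be summarized in a small table. The `OLD_TO_NEW` dict name is just for this sketch; the mappings themselves are the ones stated in the comments above:

```python
# Old openai-python (<1.0.0) error class -> new (>=1.0.0) equivalent,
# as discussed in this review thread.
OLD_TO_NEW = {
    "InvalidRequestError": "BadRequestError",        # 1:1 rename
    "APIError": "APIError",                          # kept, but now a broad parent class
    "RateLimitError": "RateLimitError",              # unchanged
    "APIConnectionError": "APIConnectionError",      # unchanged
    "Timeout": "APITimeoutError",                    # renamed
    "ServiceUnavailableError": "InternalServerError" # removed; closest replacement
}
```

Note that per the last comment, `APIError` was dropped from the retry list entirely, since it is an ancestor of nearly every API error and would retry non-transient failures too.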
Migrates evals to the new version of openai-python. Ran the migration script, and then manually fixed issues with running tests/evals.

Test Plan:
- unit tests
- run `python -m evals.cli.oaievalset gpt-3.5-turbo test`
- test make_me_pay (uses solvers)
- run `python -m evals.cli.oaieval langchain/chains/llm_math bigrams --max_samples 20 --dry-run`
- run the retrieval-completionfn example