Add streaming and conversation support to gemini #1995
Conversation
Hello H Lohaus,
Thank you for your contribution to the project. Here's the review for your pull request titled "Add streaming and conversation support to gemini":
Pull Request Review
General Observations:
- The pull request lacks a description, which is usually helpful for providing context and the intent behind the changes. It's always good practice to include a brief summary of the changes and their purpose.
Code Review:
- The addition of `BaseConversation` to `base_provider.py` and `Gemini.py` seems to be consistent with the aim of adding conversation support.
- The update to `REQUEST_BL_PARAM` in `Gemini.py` reflects a change in the backend service version, which is good for keeping the code up to date.
- The introduction of `_snlm0e` and `_sid` as class variables in `Gemini.py` is noted. Ensure that these are securely handled and are not exposed to potential security risks.
- The modifications in `async_client.py` and `client.py` to handle instances of `BaseConversation` are appropriate and follow the existing pattern of handling special types.
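For context, a minimal sketch of what such a conversation marker type could look like, and how client code might filter it out of a chunk stream. The real `BaseConversation` in `base_provider.py` may carry different fields; the `conversation_id` attribute and the `split_chunks` helper below are illustrative, not the project's actual API:

```python
from dataclasses import dataclass

# Hypothetical stand-in for the BaseConversation marker type added in
# base_provider.py; the real class may define different fields.
@dataclass
class BaseConversation:
    conversation_id: str = ""

def split_chunks(chunks):
    """Separate conversation markers from text chunks in a response stream."""
    text_parts, conversations = [], []
    for chunk in chunks:
        if isinstance(chunk, BaseConversation):
            conversations.append(chunk)
        else:
            text_parts.append(str(chunk))
    return "".join(text_parts), conversations
```

This mirrors the pattern used for other special types in the stream: the marker object is detected with `isinstance` and routed separately instead of being concatenated into the text content.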
Suggestions:
- Consider adding error handling for the new conversation functionality to manage potential exceptions or invalid states.
- Unit tests for the new features would be beneficial to ensure that the new functionality works as expected and to prevent future regressions.
- Documentation for the new classes and methods would be helpful for maintainers and other contributors to understand the usage and purpose of the new features.
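As an illustration of the unit-test suggestion, a test along these lines could verify that conversation objects pass through the stream untouched. The `BaseConversation` stub and the simplified `iter_response` loop below are sketches modeled on the diff in this PR, not the project's real test fixtures:

```python
# Stub standing in for the real BaseConversation; illustrative only.
class BaseConversation:
    def __init__(self, conversation_id):
        self.conversation_id = conversation_id

def iter_response(chunks, max_tokens=None):
    """Simplified model of the streaming loop under review."""
    count = 0
    content = ""
    for chunk in chunks:
        # Conversation markers are yielded as-is, not counted as tokens.
        if isinstance(chunk, BaseConversation):
            yield chunk
            continue
        content += str(chunk)
        count += 1
        if max_tokens is not None and count >= max_tokens:
            break
    yield content

def test_conversation_passthrough():
    conv = BaseConversation("abc")
    results = list(iter_response([conv, "Hello", " world"]))
    assert results[0] is conv          # marker passed through unchanged
    assert results[-1] == "Hello world"  # text chunks still accumulated
```

A test like this could live alongside the existing tests and be run with `pytest`; the key property it pins down is that adding conversation support does not alter the accumulated text output.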
Overall, the changes align with the goal of adding streaming and conversation support. After addressing the above points, the pull request will be in good shape for merging.
Best regards,
g4f Copilot
```diff
@@ -42,6 +43,9 @@
             if isinstance(chunk, FinishReason):
                 finish_reason = chunk.reason
                 break
+            elif isinstance(chunk, BaseConversation):
+                yield chunk
+                continue
             content += str(chunk)
             count += 1
             if max_tokens is not None and count >= max_tokens:
```
The condition `if max_tokens is not None and count >= max_tokens:` could lead to an off-by-one error. The `>=` operator will cause the loop to exit when `count` is equal to `max_tokens`, potentially yielding one less token than intended. If the goal is to yield up to `max_tokens`, use `>` instead.
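Whether this is actually off by one depends on where the increment sits relative to the check; a small standalone model of the loop (simplified from the hunk in this PR, with a `strict` flag added purely for comparison) lets both operators be checked directly:

```python
def collect(chunks, max_tokens, strict=True):
    """Model of the counting loop: append a chunk, bump the counter,
    then test the limit with >= (strict=True) or > (strict=False)."""
    content, count = "", 0
    for chunk in chunks:
        content += str(chunk)
        count += 1
        limit_hit = count >= max_tokens if strict else count > max_tokens
        if limit_hit:
            break
    return content, count

chunks = ["a", "b", "c", "d"]
print(collect(chunks, 3, strict=True))   # ('abc', 3)
print(collect(chunks, 3, strict=False))  # ('abcd', 4)
```

Because `count` is incremented before the comparison, `>=` stops this model after exactly `max_tokens` chunks, while `>` admits one extra chunk; running a reproduction like this against the real loop is a quick way to settle which behavior the code actually has.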
No description provided.