
Add streaming and conversation support to gemini #1995

Merged: 2 commits, May 22, 2024

Conversation

hlohaus (Collaborator) commented May 21, 2024

No description provided.

github-actions bot left a comment


Hello H Lohaus,

Thank you for your contribution to the project. Here's the review for your pull request titled "Add streaming and conversation support to gemini":


Pull Request Review

General Observations:

  • The pull request lacks a description. A brief summary of the changes and their purpose is good practice and gives reviewers helpful context.

Code Review:

  • The addition of BaseConversation to base_provider.py and Gemini.py seems to be consistent with the aim of adding conversation support.
  • The update to REQUEST_BL_PARAM in Gemini.py reflects a change in the backend service version, which is good for keeping the code up-to-date.
  • The introduction of _snlm0e and _sid as class variables in Gemini.py is noted. Ensure that these are handled securely and not inadvertently exposed.
  • The modifications in async_client.py and client.py to handle instances of BaseConversation are appropriate and follow the existing pattern of handling special types (a sketch of the pattern follows this list).
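
To make the last point concrete, here is a minimal sketch of the pass-through pattern. The function below is a simplified stand-in for the real client code, not the actual implementation; only the name BaseConversation is taken from the pull request:

```python
class BaseConversation:
    """Marker base class for provider conversation state."""

def iter_response(chunks):
    """Simplified stand-in for the client's response-iteration loop."""
    content = ""
    for chunk in chunks:
        # Conversation objects are yielded through to the caller
        # instead of being concatenated into the text content.
        if isinstance(chunk, BaseConversation):
            yield chunk
            continue
        content += str(chunk)
    yield content
```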

Suggestions:

  • Consider adding error handling for the new conversation functionality to manage potential exceptions or invalid states.
  • Unit tests for the new features would help ensure the functionality works as expected and prevent future regressions (a sketch follows this list).
  • Documentation for the new classes and methods would be helpful for maintainers and other contributors to understand the usage and purpose of the new features.
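
As a starting point for the unit-test suggestion, the sketch below asserts that a conversation object is passed through unchanged. The helper duplicates the simplified loop from the previous sketch; both are illustrative only and not part of the g4f test suite:

```python
import unittest

class BaseConversation:
    """Stand-in for the class added in base_provider.py."""

def iter_response(chunks):
    """Simplified response loop: pass conversations through, accumulate text."""
    content = ""
    for chunk in chunks:
        if isinstance(chunk, BaseConversation):
            yield chunk
            continue
        content += str(chunk)
    yield content

class TestConversationPassThrough(unittest.TestCase):
    def test_conversation_is_yielded_not_concatenated(self):
        conversation = BaseConversation()
        results = list(iter_response(["Hello, ", conversation, "world"]))
        # The conversation object must come through as-is ...
        self.assertIs(results[0], conversation)
        # ... and must not leak into the accumulated text.
        self.assertEqual(results[-1], "Hello, world")

if __name__ == "__main__":
    unittest.main()
```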

Overall, the changes align with the goal of adding streaming and conversation support. After addressing the above points, the pull request will be in good shape for merging.

Best regards,
g4f Copilot

Resolved review threads:

  • g4f/Provider/base_provider.py (2 threads)
  • g4f/Provider/needs_auth/Gemini.py (5 threads)
  • g4f/client/async_client.py (2 threads)
```diff
@@ -42,6 +43,9 @@
         if isinstance(chunk, FinishReason):
             finish_reason = chunk.reason
             break
+        elif isinstance(chunk, BaseConversation):
+            yield chunk
+            continue
         content += str(chunk)
         count += 1
         if max_tokens is not None and count >= max_tokens:
```


The condition `if max_tokens is not None and count >= max_tokens:` could lead to an off-by-one error. The `>=` operator will cause the loop to exit when `count` is equal to `max_tokens`, potentially yielding one less token than intended. If the goal is to yield up to `max_tokens` tokens, use `>` instead.
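
To make the boundary behavior concrete, here is a minimal trace. It assumes the loop breaks immediately after the condition, since the diff above is truncated at that line; the real code may differ:

```python
max_tokens = 2
count = 0
content = ""
for chunk in ["a", "b", "c"]:
    content += str(chunk)
    count += 1
    if max_tokens is not None and count >= max_tokens:
        break  # assumed body; the diff ends at the condition

# With >=, the loop stops after accumulating exactly max_tokens chunks.
assert content == "ab"
# With >, the break would fire one chunk later (content == "abc"),
# so the right operator depends on the intended semantics of the cap.
```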

hlohaus merged commit 6830dfc into xtekky:main on May 22, 2024. 1 check passed.