
Add keep_alive from 0.1.23, use Options #22

Merged
merged 1 commit on Mar 27, 2024
Conversation

@kescherCode commented on Feb 6, 2024

Ollama 0.1.23 added the ability to define via the API how long a model should stay loaded. This PR adds support for that, fixing #21.
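
For reference, keep_alive travels in the request body. Below is a minimal sketch of the underlying REST call using plain HttpClient rather than this library's own types; the endpoint and field names follow the Ollama API docs, while the model name and duration are placeholders.

```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Text.Json;

// Plain-HttpClient sketch of the request this feature enables; not the library's actual types.
var http = new HttpClient { BaseAddress = new Uri("http://localhost:11434/") };

var payload = JsonSerializer.Serialize(new
{
    model = "llama2",                // placeholder model name
    prompt = "Why is the sky blue?",
    stream = false,
    // keep_alive controls how long the model stays loaded after the request,
    // e.g. "5m"; a negative value keeps it loaded indefinitely, 0 unloads it right away.
    keep_alive = "5m"
});

var response = await http.PostAsync("api/generate",
    new StringContent(payload, Encoding.UTF8, "application/json"));
Console.WriteLine(await response.Content.ReadAsStringAsync());
```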

Since options is shared between the generate and chat endpoints (not just chat), I refactored it into a RequestOptions type used by both GenerateCompletion and ChatRequest. This also fixes #20.
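
As a rough sketch of the shape this refactoring implies (the class and property layout here are illustrative, not necessarily the exact types in this PR): both request classes carry the same strongly typed options object, which serializes as a JSON object under options instead of a string.

```csharp
using System.Text.Json.Serialization;

// Illustrative only: the property names mirror documented Ollama options,
// but the exact class layout in the library may differ.
public class RequestOptions
{
    [JsonPropertyName("temperature")]
    public float? Temperature { get; set; }

    [JsonPropertyName("num_ctx")]
    public int? NumCtx { get; set; }
}

public class GenerateCompletionRequest
{
    [JsonPropertyName("options")]
    public RequestOptions? Options { get; set; }
}

public class ChatRequest
{
    [JsonPropertyName("options")]
    public RequestOptions? Options { get; set; }
}
```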

The format parameter on ChatRequest and GenerateCompletion has also been implemented, addressing #19.
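
For context, the Ollama API documents "json" as the accepted value for format, which constrains the model's output to valid JSON. A small anonymous-object sketch of where the field sits in a chat request (the model name is a placeholder):

```csharp
// Anonymous-object sketch; "llama2" is a placeholder.
var chatPayload = new
{
    model = "llama2",
    messages = new[] { new { role = "user", content = "List three colors as JSON." } },
    format = "json",   // constrains the response to valid JSON
    stream = false
};
```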

The Uris have been made relative, fixing #24.
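
The reason relative Uris matter: HttpClient resolves them against BaseAddress, so a base address that includes a sub-path (for example behind a reverse proxy) is preserved. The host and sub-path below are made up.

```csharp
using System;
using System.Net.Http;

// A base address with a sub-path; note the trailing slash, which HttpClient needs
// in order to append relative paths correctly.
var http = new HttpClient { BaseAddress = new Uri("https://example.com/ollama/") };

// Relative Uri: resolves to https://example.com/ollama/api/tags
var response = await http.GetAsync("api/tags");

// A leading slash ("/api/tags") would instead resolve to https://example.com/api/tags,
// silently dropping the /ollama/ prefix.
```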

… and ChatRequest. Add Format to ChatRequest and GenerateCompletion. Make request Uris relative
@awaescher (Owner) commented

Man, I must have totally overlooked this gem of a PR! Thanks a lot for your contribution, very welcome!

Development

Successfully merging this pull request may close these issues.

- keep_alive support in Options
- Options are still a string instead of a JSON object