Ollama support #18
Do they have a unified output format? Any API references?
Wouldn't it make more sense to just use the OpenAI API spec? That adds support for many inference servers, including Ollama (didn't they also just adopt the OpenAI API spec?).
@randoentity Wow, after seeing your comment I tried searching for that... and surprisingly it does (link)
I would recommend against using the OpenAI API layer, as it's still WIP and missing many features.
There aren't many features we need right now. If chat completion works, I think that's good enough for the moment.
OK, chat completion works, so as long as remote models can be listed with the regular Ollama API, I don't see an issue. It would be nice to have vision etc. in the future, though.
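To make the two pieces discussed above concrete, here is a minimal sketch of how a client might combine Ollama's OpenAI-compatible chat-completion endpoint with the native API for listing models. The base URL assumes Ollama's default port (11434); the helper names and the `llama3` model are illustrative, not part of this project.

```python
import json

# Assumed default address of a self-hosted Ollama server.
OLLAMA_BASE = "http://localhost:11434"

def chat_request_body(model, messages):
    """Build the JSON body for the OpenAI-compatible endpoint,
    POST {OLLAMA_BASE}/v1/chat/completions."""
    return {"model": model, "messages": messages, "stream": False}

def parse_model_names(tags_response):
    """Extract model names from the native Ollama GET /api/tags reply,
    whose shape is {"models": [{"name": ...}, ...]}."""
    return [m["name"] for m in tags_response.get("models", [])]

# Example request body for a chat completion (not sent here).
body = chat_request_body("llama3", [{"role": "user", "content": "Hello"}])
print(json.dumps(body))

# Example of listing remote models from an /api/tags response.
print(parse_model_names({"models": [{"name": "llama3:latest"}]}))
```

The actual HTTP calls (e.g. via `requests` or `httpx`) are omitted so the sketch stays self-contained; only the payload shapes matter for the point being made in the thread.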
Support for self-hosted Ollama servers would be great