[Request] Embedded Ollama within lobechat #2137
Thank you for raising an issue. We will investigate the matter and get back to you as soon as possible.
Ollama requires a large amount of GPU memory to run, while LobeChat is a pure front-end chatbot; if anything, the two should be deployed separately. For a purely local deployment, Ollama should not run inside Docker at all but should use its official launcher.
I think a possible approach is to provide a Docker Compose orchestration file.
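A Compose file along those lines could look like the sketch below. This is only an illustration, not an official file: the service names, the `3210` LobeChat port, and the `OLLAMA_PROXY_URL` variable are taken from typical LobeChat setups and should be checked against the current LobeChat documentation.

```yaml
services:
  ollama:
    image: ollama/ollama
    ports:
      - "11434:11434"        # Ollama's default API port
    volumes:
      - ollama_data:/root/.ollama   # persist downloaded models

  lobe-chat:
    image: lobehub/lobe-chat
    ports:
      - "3210:3210"
    environment:
      # LobeChat resolves Ollama via the service name on the Compose network
      - OLLAMA_PROXY_URL=http://ollama:11434
    depends_on:
      - ollama

volumes:
  ollama_data:
```

With this file, `docker compose up -d` brings up both containers on a shared network, so LobeChat can reach Ollama without the host's GPU setup leaking into the front-end image.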
🥰 Feature Description
Currently, using Ollama models requires installing Ollama separately: https://lobehub.com/docs/usage/providers/ollama
🧐 Proposed Solution
Embed the Ollama model runtime within the LobeChat Docker image, or provide instructions on how to install Ollama as a separate Docker image.
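For the second option, a separate-image setup can be sketched with the commands below. The `docker run` invocation for Ollama follows its official image; the `OLLAMA_PROXY_URL` variable and `host.docker.internal` hostname are assumptions based on common LobeChat-on-Docker configurations and may need adjusting for Linux hosts.

```shell
# Run Ollama in its own container (official image),
# persisting models in a named volume
docker run -d --name ollama \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  ollama/ollama

# Pull a model inside the running container
docker exec ollama ollama pull llama3

# Point LobeChat at the Ollama container
docker run -d --name lobe-chat \
  -p 3210:3210 \
  -e OLLAMA_PROXY_URL=http://host.docker.internal:11434 \
  lobehub/lobe-chat
```

This keeps the GPU-heavy runtime out of the front-end image, which matches the maintainers' point above about deploying the two separately.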
📝 Additional Information
No response