
[Bug] Invalid Ollama configuration, please check Ollama configuration and try again #3047

Closed
richardstevenhack opened this issue Jun 26, 2024 · 6 comments
Labels
🐛 Bug Something isn't working | 缺陷

Comments

@richardstevenhack

📦 Environment

Docker

📌 Version

0.162.17

💻 Operating System

Other Linux

🌐 Browser

Firefox

🐛 Bug Description

  1. Running ollama locally from the command line on openSUSE Tumbleweed.
  2. Running Lobe Chat new install from Docker using the standard Lobe Chat Docker run command works fine.
  3. Ollama is set up in Lobe Chat with the address http://172.17.0.11434/v1. I am using that IP because, reportedly, the Docker "internal host" option does not work on Linux. I also tried the standard Ollama address 127.0.0.1:11434, which did not pass the Connection Check. The address above does pass the Connection Check (a quick reachability check is sketched after this list).
  4. However, when I select the downloaded (from Ollama) Qwen2 LLM in the selection dropdown and type "hi", I get this:
    Invalid Ollama configuration, please check Ollama configuration and try again
  5. In addition, under that message is a prompt to:
    Use custom Ollama API Key
    Enter your Ollama API Key to start the session
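
For context, a quick way to check whether Ollama is actually reachable at these addresses is a plain curl against the port; Ollama answers "Ollama is running" at its root path. A minimal sketch, assuming the default port 11434:

curl http://127.0.0.1:11434     # from the host; should print "Ollama is running"
curl http://172.17.0.1:11434    # via the Docker bridge gateway; fails if Ollama only binds 127.0.0.1

Note that Ollama binds only to 127.0.0.1 by default, so the bridge-gateway address works only when Ollama is started with something like OLLAMA_HOST=0.0.0.0; that difference can explain inconsistent Connection Check results.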

📷 Recurrence Steps

  1. Run Ollama from the command line with: ollama run qwen2
  2. Run Lobe Chat using the standard Docker command for Ollama:
    docker run -d -p 3210:3210 -e OLLAMA_PROXY_URL=http://host.docker.internal:11434/v1 lobehub/lobe-chat
  3. Substitute http://172.17.0.11434/v1 for "http://host.docker.internal:11434/v1"; the former is the address that works with Docker on Linux (a sketch for confirming this gateway address follows the list).
  4. Select LLM model from the dropdown.
  5. Type "hi"
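
As a side note (not part of the original steps): the 172.17.0.1 bridge-gateway address can differ between Docker installations, so it is worth confirming it before substituting it into OLLAMA_PROXY_URL. A minimal sketch:

ip -4 addr show docker0     # host-side address of Docker's default bridge
docker network inspect bridge --format '{{range .IPAM.Config}}{{.Gateway}}{{end}}'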

🚦 Expected Behavior

Expect LLM model to respond to the prompt.

📝 Additional Information

I've also tried using http://127.0.0.1:11434/v1. This does not pass the Connection Check, with details as:
{ "host": "http://127.0.0.1:11434/v1", "message": "please check whether your ollama service is available or set the CORS rules", "provider": "ollama" }

I have not set any "CORS Rules" because Ollama is running directly on the local host, and other front ends such as MSTY (which does not run from Docker) can connect to it fine at the above address. So I suspect this is some problem related to Lobe Chat interacting with Docker networking.
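
For what it's worth, the "set the CORS rules" part of that message corresponds to Ollama's documented OLLAMA_ORIGINS variable, and reaching Ollama from a container on the default bridge also requires it to listen on more than 127.0.0.1 (OLLAMA_HOST). A sketch of both settings; the systemd variant assumes the stock ollama.service unit:

# one-off, when starting Ollama from the command line
OLLAMA_HOST=0.0.0.0 OLLAMA_ORIGINS=* ollama serve

# or, if Ollama runs as a systemd service
sudo systemctl edit ollama.service    # add Environment="OLLAMA_HOST=0.0.0.0" and
                                      # Environment="OLLAMA_ORIGINS=*" under [Service]
sudo systemctl restart ollama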

@richardstevenhack richardstevenhack added the 🐛 Bug Something isn't working | 缺陷 label Jun 26, 2024
@lobehubbot
Member

👀 @richardstevenhack

Thank you for raising an issue. We will investigate into the matter and get back to you as soon as possible.
Please make sure you have given us as much context as possible.

@richardstevenhack
Author

I just noticed that I screwed up the URL in the bug report. The actual IP in use is:
http://172.17.0.1:11434/

This does not pass the Connection Check with error:
Ollama service is unavailable. Please check if Ollama is running properly or if the cross-origin configuration of Ollama is set correctly.

The error message returned when trying to chat is now:
Error requesting Ollama service, please troubleshoot or retry based on the following information:

{
  "error": {
    "message": "NetworkError when attempting to fetch resource.",
    "name": "TypeError"
  },
  "provider": "ollama"
}

Now I use http://127.0.0.1:11434 and the Connection Check passes.

And now it works... Sigh... I don't know what the hell happened.

Consider this closed.

@richardstevenhack
Author

And now it doesn't work again... While trying to debug ragapp, I went back to see how I had gotten Lobe Chat working the other day, and now it doesn't work: the Connection Check doesn't pass.

Doing some research on Google, I see that almost no one can get Docker to connect to Ollama. It appears to be an utterly hit and miss proposition.

Open-WebUI suggested this:

"If you're experiencing connection issues, it’s often due to the WebUI docker container not being able to reach the Ollama server at 127.0.0.1:11434 (host.docker.internal:11434) inside the container. Use the --network=host flag in your docker command to resolve this. Note that the port changes from 3000 to 8080, resulting in the link: http://localhost:8080.

Example Docker Command:

docker run -d --network=host -v open-webui:/app/backend/data -e OLLAMA_BASE_URL=http://127.0.0.1:11434 --name open-webui --restart always ghcr.io/open-webui/open-webui:main"

I tried that (using Lobe Chat, of course) and it didn't work. I used "--network=host" and "OLLAMA_BASE_URL" and it doesn't work.

I hate Docker. It's almost impossible to make it work compared to a Flatpak or an AppImage. I have had at least three Ollama GUIs packaged with Docker fail to work; the several I have as Flatpaks or AppImages all work fine.
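
One detail worth flagging (my observation, not something from Open-WebUI): OLLAMA_BASE_URL is Open-WebUI's variable name, while the command earlier in this issue uses OLLAMA_PROXY_URL for lobe-chat, so the OLLAMA_BASE_URL setting may simply be ignored by the Lobe Chat container. With --network=host the container shares the host's loopback, so a quick sanity check, assuming the default port:

curl http://127.0.0.1:11434             # expect "Ollama is running"
curl http://127.0.0.1:11434/api/tags    # lists the locally pulled models as JSON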

@richardstevenhack
Author

I just solved the problem, as I described in bug 3064. I'm reproducing my comment from there here:

I just had a similar problem with the Ragapp utility: it wouldn't connect to Ollama via Docker. I solved that problem a few minutes ago by doing this:

Based on an article on the Open-WebUI GitHub, they suggest:

"If you're experiencing connection issues, it’s often due to the WebUI docker container not being able to reach the Ollama server at 127.0.0.1:11434 (host.docker.internal:11434) inside the container. Use the --network=host flag in your docker command to resolve this. Note that the port changes from 3000 to 8080, resulting in the link: http://localhost:8080/.

Example Docker Command:

docker run -d --network=host -v open-webui:/app/backend/data -e OLLAMA_BASE_URL=http://127.0.0.1:11434/ --name open-webui --restart always ghcr.io/open-webui/open-webui:main"

What this does for Open-WebUI, according to the Docker documentation, is:

--network=host sets the Docker network to the host network (supposedly this only works on Linux).
Then the Ollama Base URL is set to Ollama's actual host URL.
When this is done the port assignments are ignored, so what matters is the port the app itself is listening on. That appears to work fine with the port 8000 used by the Ragapp. You get a warning message from Docker that the port mappings are ignored.

So I executed this command for Lobe Chat:
docker run -d --network=host -p 3210:3210 -e OLLAMA_BASE_URL=http://127.0.0.1:11434/ lobehub/lobe-chat

Then I entered the interface at port 3210, selected Ollama as the provider, and left the interface proxy address BLANK. The check passed; I went to Just Chat, selected qwen2, entered "Hi" as the message, and got a response from Qwen2.
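
For anyone reproducing this: with --network=host Docker ignores the -p 3210:3210 mapping (it prints a warning saying so), and the interface is reachable on 3210 only because that is the port the app itself listens on. A quick check from the host, as a sketch:

curl -I http://127.0.0.1:3210    # should return an HTTP response from Lobe Chat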

I don't know if connecting Docker to the local host network is a security risk, but since I'm running on my own machine with the usual firewalls and the like, I don't really care. Your mileage may vary.

I'll also note that I'm running on openSUSE Tumbleweed Linux. For Windows or another Linux, your mileage may vary.

@richardstevenhack
Author

I consider this bug closed.

@lobehubbot
Member

@richardstevenhack

This issue is closed. If you have any questions, you can comment and reply.
