[BUG] Ollama long running generate error #63
Comments
Is this different from #55?
Yes and no. The problem in #55 was that the /api/models call timed out, but the /api/chat call has the same problem when using Ollama. Anything longer than just a "Test" takes too long to respond and results in the error I described here.
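For reference, a minimal sketch of a streamed /api/chat call (the host URL and model name are placeholders, not the project's actual code): with stream: true, which is Ollama's default, chunks arrive token by token, whereas stream: false blocks until the whole generation is finished, which is what pushes longer prompts past the platform timeout.

```ts
// Hypothetical example; OLLAMA_URL and "llama2" are placeholders.
const OLLAMA_URL = "https://ollama.example.com";

async function chat(prompt: string): Promise<string> {
  const res = await fetch(`${OLLAMA_URL}/api/chat`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama2",
      messages: [{ role: "user", content: prompt }],
      stream: true, // default; stream: false waits for the full generation
    }),
  });

  // Ollama streams newline-delimited JSON objects; accumulate the
  // assistant text as chunks arrive.
  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  let buf = "";
  let text = "";
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    buf += decoder.decode(value, { stream: true });
    const lines = buf.split("\n");
    buf = lines.pop() ?? ""; // keep any partial line for the next chunk
    for (const line of lines) {
      if (!line.trim()) continue;
      const chunk = JSON.parse(line);
      text += chunk.message?.content ?? "";
    }
  }
  return text;
}
```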
Have you been able to narrow down the cause of this?
Not yet. I will try to set up a more powerful server with a GPU; maybe that will make Ollama more reliable.
I just saw this; I believe it's fixed now by #81. Please give it a try, it should be working now.
Describe the bug
When using an Ollama server behind a Vercel deployment, I'm running into timeouts.
Screenshots
Error: Your function was stopped as it did not return an initial response within 25s
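This is Vercel's limit for streaming functions: the response must start within 25 seconds, although it may keep streaming after that. A minimal sketch of one workaround, assuming a Next.js route handler (the upstream URL and credentials are placeholders, and this is not necessarily what #81 does): pass Ollama's stream straight through instead of buffering it, so the first bytes leave well before the limit.

```ts
// app/api/chat/route.ts — hypothetical handler that pipes Ollama's
// streamed response through to the client without buffering it.
export const runtime = "edge";

export async function POST(req: Request): Promise<Response> {
  const { messages } = await req.json();

  const upstream = await fetch("https://ollama.example.com/api/chat", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      // HTTP Basic Auth for the proxy's access list (placeholder creds).
      Authorization: `Basic ${btoa("user:password")}`,
    },
    body: JSON.stringify({ model: "llama2", messages, stream: true }),
  });

  // Relay each Ollama chunk as soon as it arrives upstream.
  return new Response(upstream.body, {
    status: upstream.status,
    headers: { "Content-Type": "application/x-ndjson" },
  });
}
```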
Additional context
The Ollama server runs on Hetzner Cloud with Nginx Proxy Manager in front, which proxies it over HTTPS and enables Access Lists for HTTP Basic Auth.
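Note that the proxy itself can also cut off long generations: nginx's default proxy_read_timeout is 60 s, and response buffering delays streamed chunks. A sketch of directives that could go into the proxy host's custom/Advanced config in Nginx Proxy Manager (the values are illustrative):

```nginx
# Allow long-running generate/chat requests through the proxy.
proxy_read_timeout  300s;
proxy_send_timeout  300s;
# Pass streamed chunks through immediately instead of buffering them.
proxy_buffering     off;
```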