[Bug] response.undefined #2936
Thank you for raising an issue. We will investigate the matter and get back to you as soon as possible.
Is it an official model?
Yes, Gemini uses Google's official API, and Qwen uses Alibaba Cloud's official API.
👋 @{{ author }}
This issue is closed. If you have any questions, you can comment and reply.
I have the same issue.

Update: I found the cause. I noticed that responses looked fine in the network tab of the developer tools, but the error was triggered at around the 30-second mark of the request. So I started searching for components in my setup with a default timeout of 30 seconds. In my case, the GCP backend service attached to the load balancer (which has all the lobe-chat instances in its pool) has a default timeout of 30 seconds. I had to raise the backend timeout.

I still think this is a bug: lobe-chat should not discard the response generated so far and should leave it in the chat, even if the connection is terminated abruptly.
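For reference, the backend-service timeout on a GCP load balancer can be raised with the `gcloud` CLI. This is a minimal sketch; the backend service name `lobe-chat-backend` and the 300-second value are placeholders, not the commenter's actual configuration:

```shell
# Hypothetical backend service name; substitute your own.
# Raise the response timeout from the 30 s default so long
# streaming completions are not cut off by the load balancer.
gcloud compute backend-services update lobe-chat-backend \
  --global \
  --timeout=300
```

Use `--region=REGION` instead of `--global` for a regional backend service. The timeout should comfortably exceed the longest expected streaming response.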
📦 Deployment environment
Vercel
📌 Software version
v1.0.11
💻 System environment
Windows
🌐 Browser
Chrome
🐛 Problem description
During conversation generation, some models (Gemini 1.5 Pro, Qwen2-72b-instruct, Qwen-Max-longContext, etc.) suddenly produce "response.undefined". In testing, this has recently become very frequent, occurring almost every time.
📷 Reproduction steps
Simply enter the API key for the corresponding model and start a conversation.
🚦 Expected result
No response
📝 Additional information
No response