
[Bug] response.undefined #2936

Open
fomalhaut1998 opened this issue Jun 19, 2024 · 8 comments
Labels
🐛 Bug Something isn't working | 缺陷

Comments

@fomalhaut1998

📦 Deployment environment

Vercel

📌 Software version

v1.0.11

💻 System environment

Windows

🌐 Browser

Chrome

🐛 Problem description

[screenshot]
For some models (Gemini 1.5 Pro, Qwen2-72b-instruct, Qwen-Max-longContext, etc.), a "response.undefined" error suddenly appears while a reply is being generated. In my testing, this has become very frequent recently, occurring almost every time.

📷 Steps to reproduce

Just enter the API key for the corresponding model and start a conversation.

🚦 Expected result

No response

📝 Additional information

No response

@fomalhaut1998 fomalhaut1998 added the 🐛 Bug Something isn't working | 缺陷 label Jun 19, 2024
@lobehubbot
Member

👀 @fomalhaut1998

Thank you for raising an issue. We will investigate the matter and get back to you as soon as possible.
Please make sure you have given us as much context as possible.

@arvinxx
Contributor

arvinxx commented Jun 19, 2024

Are you using the official models?


@arvinxx arvinxx added the 🤔 Need Reproduce Further information is requested | 需要更多信息复现 label Jun 20, 2024
@fomalhaut1998
Author

Are you using the official models?

Yes, Gemini uses Google's official API, and Qwen uses Alibaba Cloud's official API.


@lobehubbot
Member

👋 @{{ author }}

Since the issue was labeled with 🤔 Need Reproduce but received no response for 3 days, it will be closed. If you have any questions, you can comment and reply.

@lobehubbot lobehubbot closed this as not planned Won't fix, can't repro, duplicate, stale Jun 24, 2024
@lobehubbot
Member

@fomalhaut1998

This issue is closed. If you have any questions, you can comment and reply.

@arvinxx arvinxx removed the 🤔 Need Reproduce Further information is requested | 需要更多信息复现 label Jun 24, 2024
@arvinxx arvinxx reopened this Jun 24, 2024
@bt-nia

bt-nia commented Jun 25, 2024

I have the same issue.
I see this with an Azure deployment. The response streams in, but at a certain point the whole response disappears and the error is shown.

Update: I found the cause. I noticed that responses looked fine in the network tab of the developer tools, but the error was triggered around the 30-second mark of the request. So I started searching for components in my setup that have a default timeout of 30 seconds. In my case, the GCP backend service attached to the load balancer (which has all the lobe-chat instances in its pool) uses the default timeout of 30 seconds. I had to raise the backend timeout like so:
gcloud compute backend-services update $BACKEND_NAME --project=$PROJECT_NAME --timeout=80
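If you want to confirm the current value first, gcloud can print it (depending on your setup you may also need --global or --region on this command; this is just how I would check it, not anything specific to lobe-chat):
gcloud compute backend-services describe $BACKEND_NAME --project=$PROJECT_NAME --format="get(timeoutSec)"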

I still think this is a bug: even if the connection is terminated abruptly, lobe-chat should not discard the response generated so far; it should leave the partial response in the chat.
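
As a rough illustration of what I mean (a TypeScript sketch of the behavior I would expect, not lobe-chat's actual streaming code; the function name is made up):

// Keep whatever has already streamed in, even if the connection drops.
async function readStreamKeepingPartial(
  response: Response,
  onUpdate: (text: string) => void,
): Promise<{ text: string; aborted: boolean }> {
  const reader = response.body!.getReader();
  const decoder = new TextDecoder();
  let text = '';
  try {
    while (true) {
      const { done, value } = await reader.read();
      if (done) break;
      text += decoder.decode(value, { stream: true });
      onUpdate(text); // render the partial content as it arrives
    }
    return { text, aborted: false };
  } catch {
    // The connection was cut (e.g. by a 30 s load-balancer timeout):
    // surface an error, but return the partial text instead of dropping it.
    return { text, aborted: true };
  }
}

The point is only that the catch path still returns the accumulated text, so the UI can show an error marker next to the partial answer instead of replacing everything with "response.undefined".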
