
Too many <MID> <PRE> <SUF> tokens in inline responses when loading a custom LLM model with llm-vscode-server #104

Open
bonuschild opened this issue Nov 6, 2023 · 4 comments

@bonuschild

Environment

Phenomenon

Then I set the endpoint to http://localhost:8000/generate and the plugin works, but the responses contain <MID>, <PRE>, <SUF> and other symbols, which means the code completions no longer work well.
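
For reference, this looks like the model's fill-in-the-middle control tokens leaking into the completion text. A minimal post-processing sketch that strips them from the generated text before the server returns it (the token strings below are assumptions based on common FIM models such as CodeLlama and StarCoder; check your model's tokenizer config for the exact ones):

```python
# Hypothetical cleanup step for the text returned by the /generate endpoint.
# The token strings are assumptions -- CodeLlama uses <PRE>/<SUF>/<MID>/<EOT>,
# StarCoder uses <fim_prefix>/<fim_suffix>/<fim_middle>/<|endoftext|>.
FIM_TOKENS = [
    "<PRE>", "<SUF>", "<MID>", "<EOT>",
    "<fim_prefix>", "<fim_suffix>", "<fim_middle>", "<|endoftext|>",
]

def clean_completion(text: str) -> str:
    """Remove FIM control tokens so they never reach the editor."""
    for tok in FIM_TOKENS:
        text = text.replace(tok, "")
    return text.strip()
```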

Question

  • Is something wrong that causes this error?
  • Are there any detailed tutorials that show developers how to deploy a custom model that works fully with llm-vscode?

Thanks for reading and thinking!

@thanhnew2001

Hello, I got a similar error, i.e. an empty response even though the debug output constantly logs the input:
#105

I wonder if there is a way to debug this so that we know what exactly happens?
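
One quick way to see what the server actually returns, independent of the extension, is to POST to the endpoint directly and inspect the raw JSON. A minimal sketch (the payload shape is an assumption; adjust "inputs"/"parameters" to whatever your llm-vscode-server build expects):

```python
# Hit the local endpoint directly and print the raw response, so you can see
# whether <PRE>/<SUF>/<MID> tokens are already present in the server output.
import json
import requests  # assumed available: pip install requests

resp = requests.post(
    "http://localhost:8000/generate",
    json={
        "inputs": "<PRE> def add(a, b): <SUF> <MID>",  # hypothetical FIM prompt
        "parameters": {"max_new_tokens": 32},          # hypothetical parameter block
    },
    timeout=30,
)
print(resp.status_code)
print(json.dumps(resp.json(), indent=2))
```

If the raw output already contains the control tokens, the issue is on the server/model side rather than in the extension.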


github-actions bot commented Dec 7, 2023

This issue is stale because it has been open for 30 days with no activity.

@github-actions github-actions bot added the stale label Dec 7, 2023
@LouiFi
Copy link

LouiFi commented Feb 11, 2024

Have you tried removing this setting from the extension settings?

[screenshot of the setting]

@github-actions github-actions bot removed the stale label Feb 11, 2024
github-actions bot commented Mar 13, 2024

This issue is stale because it has been open for 30 days with no activity.

@github-actions github-actions bot added the stale label Mar 13, 2024