Issues: huggingface/llm-vscode
#145 · There will be two inference responses after the user stops editing · opened Oct 21, 2024 by 10901008-RoryHuang
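#145 describes duplicate completion requests firing once the user pauses. A common mitigation in editor extensions is to debounce the request trigger so rapid edit events collapse into one inference call. The sketch below is a generic illustration under that assumption, not the extension's actual code; the 300 ms window is an arbitrary choice.

```typescript
// Generic debounce helper: only the last call within the delay window fires.
function debounce<T extends (...args: any[]) => void>(fn: T, delayMs: number) {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: Parameters<T>) => {
    if (timer !== undefined) {
      clearTimeout(timer); // cancel the earlier, now-stale trigger
    }
    timer = setTimeout(() => fn(...args), delayMs);
  };
}

// Hypothetical usage: wrap the completion request so two edits in quick
// succession produce a single request instead of two.
const requestCompletion = debounce((text: string) => {
  console.log(`requesting completion for: ${text}`);
}, 300);
```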
#143 · What capabilities does this extension add on top of other OSS extensions? [stale] · opened Jun 10, 2024 by DanielAdari
#127 · Add support for properly interpreting context.selectedCompletionInfo [stale] · opened Jan 29, 2024 by spew
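For context on #127: in the VS Code API, context.selectedCompletionInfo tells an InlineCompletionItemProvider which item is currently highlighted in the suggest widget, and inline completions that don't extend that item's text are not shown. A minimal sketch of honoring it follows; generateCompletion is a hypothetical placeholder for the model call, not part of the extension.

```typescript
import * as vscode from 'vscode';

const provider: vscode.InlineCompletionItemProvider = {
  provideInlineCompletionItems(document, position, context, token) {
    const info = context.selectedCompletionInfo;
    // Hypothetical stand-in for the actual inference request.
    const modelText = generateCompletion(document.getText(), position);
    if (info && !modelText.startsWith(info.text)) {
      // The suggestion doesn't extend the selected suggest item, so VS Code
      // would hide it anyway; returning nothing avoids a misleading preview.
      return [];
    }
    return [new vscode.InlineCompletionItem(modelText)];
  },
};

function generateCompletion(source: string, position: vscode.Position): string {
  return 'exampleCompletion'; // placeholder for an actual model response
}
```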
#106 · [Feature request] Adding a checker to see if a custom endpoint is working properly [stale] · opened Nov 8, 2023 by remyleone
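Pending a built-in checker like the one #106 requests, a reachability probe can be scripted by hand. The sketch below assumes a TGI-style POST /generate route that accepts {"inputs": ...}; the URL, route, and payload shape are assumptions to adjust for whatever backend is actually running.

```typescript
// Probe a self-hosted inference endpoint: send a one-token generation
// request and report whether it answers with an OK status.
async function checkEndpoint(baseUrl: string): Promise<boolean> {
  try {
    const res = await fetch(`${baseUrl}/generate`, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        inputs: 'def hello():',
        parameters: { max_new_tokens: 1 },
      }),
    });
    return res.ok;
  } catch {
    return false; // network error: endpoint unreachable
  }
}

checkEndpoint('http://localhost:8080').then((ok) =>
  console.log(ok ? 'endpoint responding' : 'endpoint not reachable'),
);
```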
#104 · Gives too many <MID> <PRE> <SUF> tokens in inline responses when loading a custom LLM model with llm-vscode-server [stale] · opened Nov 6, 2023 by bonuschild
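Marker tokens echoed back in completions, as in #104, are typical of a fill-in-the-middle template mismatch: if both the client and the serving side wrap the prompt, the model sees the markers twice and tends to reproduce them. As illustration, a Code Llama-style FIM prompt is laid out as below; the exact marker spellings are model-specific assumptions (StarCoder, for instance, uses <fim_prefix>, <fim_suffix>, <fim_middle>).

```typescript
// Builds a Code Llama-style fill-in-the-middle prompt: the model is asked to
// generate the text that belongs between prefix and suffix. If the server
// adds these markers as well, they appear twice and leak into the output.
function buildFimPrompt(prefix: string, suffix: string): string {
  return `<PRE> ${prefix} <SUF>${suffix} <MID>`;
}

console.log(buildFimPrompt('def add(a, b):\n    return ', '\n\nprint(add(1, 2))'));
```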
#100 · How to generate the response from a locally hosted endpoint in VS Code? [stale] · opened Oct 25, 2023 by dkaus1
#99 · Error decoding response body: expected value at line 1 column 1 [stale] · opened Oct 24, 2023 by jalalirs
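The error in #99, "expected value at line 1 column 1", is what a JSON decoder reports when the response body is not JSON at all, often an HTML error page or a plain-text proxy message. A quick way to see what the endpoint really returned is to read the raw body before parsing; a generic sketch, with URL and payload as placeholders:

```typescript
// Fetch the raw response text first, then attempt JSON parsing, so a
// non-JSON body is printed instead of being swallowed by the decoder.
async function debugResponse(url: string): Promise<void> {
  const res = await fetch(url, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ inputs: 'hello' }),
  });
  const raw = await res.text();
  try {
    console.log('parsed JSON:', JSON.parse(raw));
  } catch {
    // This is the situation behind "expected value at line 1 column 1":
    // the body's first character is not the start of a JSON value.
    console.error(`status ${res.status}, non-JSON body:\n${raw.slice(0, 500)}`);
  }
}
```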
#96 · Is it possible to run "Code attribution" against a self-hosted API? [stale] · opened Oct 19, 2023 by LLukas22
#16 · For Jupyter Notebooks: send other cells to the inference endpoint [stale] · opened May 7, 2023 by arjunguha