Add support for properly interpreting context.selectedCompletionInfo #127
I opened an issue at https://github.com/huggingface/llm-ls/issues/66, as I expect that project will need to make some changes to fix this, but I was instructed to open an issue here.
When vscode shows a popup completion item (i.e. what they used to call IntelliSense: a regular language construct or function that vscode knows about), any inline completion is supposed to start with that completion item. That is to say, the completion item should be appended to the end of the prefix. Take the following Python example.

So imagine the developer is typing the `.` in the line `obj = json.`. vscode will pop up possible completions for `json`, and likely the method `loads` will be the top completion. The prefix that is sent to the LLM should use a value of `obj = json.loads` for that line. The suffix that comes after should also be included as normal. vscode will ignore any suggestion that does not start with `json.loads`, so it should always be included.
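In provider terms, the prompt prefix should be built as if the popup item had already been accepted. Here is a minimal sketch of how that could look inside a `vscode.InlineCompletionItemProvider`, using `context.selectedCompletionInfo` from the `vscode.InlineCompletionContext`; the helper name `buildPrefix` is hypothetical, and the real llm-vscode/llm-ls code may window the prefix differently:

```typescript
import * as vscode from 'vscode';

// Hypothetical helper: build the prompt prefix for the LLM request.
// `context` is the vscode.InlineCompletionContext that vscode passes to
// provideInlineCompletionItems.
function buildPrefix(
  document: vscode.TextDocument,
  position: vscode.Position,
  context: vscode.InlineCompletionContext
): string {
  const selected = context.selectedCompletionInfo;
  const docStart = new vscode.Position(0, 0);
  if (selected === undefined) {
    // No completion popup is showing: the prefix is everything before the cursor.
    return document.getText(new vscode.Range(docStart, position));
  }
  // Accepting the popup item would replace `selected.range` with
  // `selected.text` (e.g. the partially typed word with "loads"), so build
  // the prefix as if that replacement had already happened: the LLM then
  // sees `obj = json.loads` instead of `obj = json.`.
  const beforePopup = document.getText(new vscode.Range(docStart, selected.range.start));
  return beforePopup + selected.text;
}
```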
The range returned for the `vscode.InlineCompletionItem` should be properly adjusted for this as well.
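And a minimal sketch of the corresponding adjustment to the returned item, again with hypothetical names (`makeInlineCompletionItem`, and `generated` standing in for the backend's output). The two key points are prepending `selected.text` to the suggestion and widening the range so it starts at `selected.range.start`:

```typescript
import * as vscode from 'vscode';

// Hypothetical helper: wrap the backend's generated text in an
// InlineCompletionItem whose text and range satisfy vscode's filtering.
function makeInlineCompletionItem(
  position: vscode.Position,
  context: vscode.InlineCompletionContext,
  generated: string
): vscode.InlineCompletionItem {
  const selected = context.selectedCompletionInfo;
  if (selected === undefined) {
    // No popup: insert the generated text at the cursor.
    return new vscode.InlineCompletionItem(generated, new vscode.Range(position, position));
  }
  // vscode drops inline suggestions that do not start with the popup item's
  // text, so prepend it, and start the replaced range where the popup's own
  // replacement range starts (covering the partially typed word).
  return new vscode.InlineCompletionItem(
    selected.text + generated,
    new vscode.Range(selected.range.start, position)
  );
}
```

In the `obj = json.loads` example, the item's text would start with `loads` and its range would cover whatever the developer had typed after the `.`, which is exactly what vscode checks before showing the suggestion.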
Comments

@McPatate I opened this issue here -- however, I think llm-ls will have to change something to properly support this, as my understanding is that llm-ls selects the code from the document that will be sent to TGI (the inference backend).

This issue is stale because it has been open for 30 days with no activity.