Is it still intended and possible to use a custom endpoint in 0.2.0? #134
Comments
Same question. I could use my own custom endpoint in the previous version, but it no longer works in the newest 0.2.0 version with the same configuration.
This issue is stale because it has been open for 30 days with no activity.
I have the same question.
This issue is stale because it has been open for 30 days with no activity.
Sorry for the late response. What do you mean by custom endpoint? Could you share your configuration?
@McPatate In the previous version of llm-vscode, I could use my own locally deployed model. For example, I deployed my model at http://192.168.1.73:8192/generate, and I could put that address in the Model ID or Endpoint setting to get responses. But in the newest version, 0.2.0, it no longer works. Many thanks.
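For reference, a sketch of the kind of `settings.json` entry being described. The setting name `llm.modelIdOrEndpoint` is an assumption based on older llm-vscode releases, where a single setting accepted either a Hugging Face Hub model ID or a full URL; it may not match the option names in 0.2.0, so check the extension's README for your version:

```jsonc
// VS Code settings.json (JSONC, so comments are allowed)
{
  // Assumed pre-0.2.0 setting: accepts a Hub model ID or a full endpoint URL.
  // Here it points at the locally deployed model from the comment above.
  "llm.modelIdOrEndpoint": "http://192.168.1.73:8192/generate"
}
```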
Could you share your configuration settings, any logs or anything else that could help understand what is going on? Have you tried updating to the latest version of llm-vscode?
This issue is stale because it has been open for 30 days with no activity.
I'll close this issue, feel free to re-open if you are still facing an issue.