
FEATURE: A "load" command for loading the embedding model #15

Open
Propheticus opened this issue May 5, 2024 · 2 comments
Labels: enhancement (New feature or request)

Comments

@Propheticus

Propheticus commented May 5, 2024

I've made a quick .bat file to automatically start the LM Studio server, load the model, and then start AnythingLLM.
This works with one caveat: I can't load the embedding model in LM Studio this way.

@echo off
:: Start the LM Studio local API server
lms server start --cors
:: Load a model
lms load Meta-Llama-3-8B-Instruct-Q8_0.gguf --gpu max --identifier Llama-3-8B-Q8-8K
:: Start AnythingLLM
cmd /c start "" "%localappdata%\Programs\anythingllm-desktop\AnythingLLM.exe"
exit

It would be nice if a future version of lms could load the embedding model as well.
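As a workaround while `lms` cannot load embedding models, the script above could at least verify what the server is serving before launching AnythingLLM, since LM Studio's local server exposes an OpenAI-compatible `GET /v1/models` endpoint (default `http://localhost:1234`). A minimal sketch, assuming that endpoint's usual response shape; the model names and the sample payload below are illustrative assumptions, not output from a real server:

```python
import json

# Illustrative stand-in for the JSON returned by LM Studio's
# OpenAI-compatible GET /v1/models endpoint (model ids are assumptions).
SAMPLE_RESPONSE = json.loads("""
{
  "data": [
    {"id": "Meta-Llama-3-8B-Instruct-Q8_0.gguf", "object": "model"},
    {"id": "nomic-embed-text-v1.5.Q8_0.gguf", "object": "model"}
  ]
}
""")

def model_ids(response: dict) -> list[str]:
    """Return the ids of all models the server reports."""
    return [m["id"] for m in response.get("data", [])]

def is_loaded(response: dict, name: str) -> bool:
    """Check whether a model with the given id is currently served."""
    return name in model_ids(response)

# In a real script you would fetch the JSON from the running server
# (e.g. with urllib.request) instead of using SAMPLE_RESPONSE.
print(is_loaded(SAMPLE_RESPONSE, "nomic-embed-text-v1.5.Q8_0.gguf"))
```

A batch file could call a script like this and abort the AnythingLLM launch if the expected embedding model is missing, rather than failing later inside AnythingLLM.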

@Propheticus
Author

For reference, the current result is that the list command sees the model, but the load command cannot find it:
[screenshot]

@ryan-the-crayon added the enhancement (New feature or request) label May 6, 2024
@ryan-the-crayon
Contributor

We currently do not support loading or unloading non-LLMs with lms. We will definitely add support for it in the future; I will update here once it is supported.


2 participants