Feature Request: Host LM Studio on server and Multi-GPU Support #31
Comments
This sounds like a fantastic idea! ⭐ I'm using RunPod for my models.
I did look into making a Docker container, but I don't really have time to explore that right now.
Well, if we can get LM Studio running on Ubuntu, I'm open to ideas on how we can deploy it.
The last time I tried to install the LM Studio AppFile in Docker, I ran into an issue I'm not sure how to resolve.
Feature Request 1: Host LM Studio on Ubuntu Server
I would like to request the ability to host LM Studio on a server (not just the local API). This feature would allow users to access the application over HTTP, with all components running directly on the server.
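As a stopgap until native server hosting exists, LM Studio's local API is OpenAI-compatible, so a remote machine can already talk to it over plain HTTP once the listener is reachable on the network. A minimal client sketch; the host address, port, and model name below are assumptions, not values from this issue:

```python
import json
import urllib.request

def build_request(host, model, prompt):
    """Build an OpenAI-style chat completion request for LM Studio's
    local API. The host and model name are placeholders."""
    url = f"http://{host}/v1/chat/completions"
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }
    return url, json.dumps(payload).encode("utf-8")

def ask(host, model, prompt):
    """POST the request and return the assistant's reply text."""
    url, body = build_request(host, model, prompt)
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]

# Hypothetical LAN address of the machine running LM Studio:
# print(ask("192.168.1.50:1234", "local-model", "Hello"))
```

This only covers remote access to the API, not the full application UI the request asks for, but it may unblock headless use in the meantime.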
Feature Request 2: Multi-GPU Support with LLM Split
Support for multiple GPUs with an LLM split feature would enhance performance and efficiency when handling large models or heavy workloads. I have a few AMD 580 8GB cards, and it would be great to use them for this.
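For the split itself, llama.cpp-family runtimes (which LM Studio builds on) typically accept a list of per-GPU proportions, and weighting each GPU by its VRAM is a reasonable default for mixed cards. A small sketch of that weighting; treating the 8 GB figure above as the per-card capacity is an assumption:

```python
def split_fractions(vram_gb):
    """Proportion of model layers/tensors to place on each GPU,
    weighted by that GPU's VRAM. Input order matches GPU index order."""
    total = sum(vram_gb)
    if total <= 0:
        raise ValueError("need at least one GPU with VRAM")
    return [v / total for v in vram_gb]

# Three hypothetical identical 8 GB cards split evenly:
print(split_fractions([8, 8, 8]))
# A mixed setup weights each card proportionally:
print(split_fractions([8, 8, 4]))
```

Identical cards reduce to an even split, so for a uniform AMD 580 setup the proportions would simply be equal.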