Question regarding local deployment of large-scale models #26

Open
jialinhappy opened this issue Apr 23, 2024 · 1 comment
@jialinhappy

Hello, I noticed that your interface only supports llama- or gpt-based models. If I want to deploy local variants of llama, such as qwen or yi, how can I use them?

@GasolSun36
Collaborator

Hi,
You can refer to those models' official implementation code to complete this task; it should be relatively straightforward. We only implemented the llama and gpt versions.
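Since this repository's actual interface is not shown in the thread, the following is only a hypothetical sketch of how one might wrap a local Hugging Face checkpoint (e.g. qwen or yi) behind a single `generate(prompt) -> str` call, so that swapping the backend model is just a change of model path. All class and function names here are illustrative, not taken from the project's code.

```python
class LocalLLM:
    """Minimal adapter: any callable(prompt) -> str becomes the backend.

    Hypothetical interface, not the repository's actual API.
    """

    def __init__(self, generate_fn):
        self._generate = generate_fn  # callable(prompt: str) -> str

    def run(self, prompt: str) -> str:
        return self._generate(prompt)


def make_hf_generate(model_path: str, max_new_tokens: int = 256):
    """Build a generate function from a local checkpoint (qwen, yi, ...).

    Requires `transformers`; sketched here for illustration only.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(
        model_path, device_map="auto", trust_remote_code=True
    )

    def generate(prompt: str) -> str:
        inputs = tok(prompt, return_tensors="pt").to(model.device)
        out = model.generate(**inputs, max_new_tokens=max_new_tokens)
        # Decode only the newly generated tokens, not the prompt.
        new_tokens = out[0][inputs["input_ids"].shape[1]:]
        return tok.decode(new_tokens, skip_special_tokens=True)

    return generate


if __name__ == "__main__":
    # Stub backend so the adapter can be exercised without downloading weights;
    # in real use: LocalLLM(make_hf_generate("/path/to/qwen-checkpoint"))
    llm = LocalLLM(lambda p: f"echo: {p}")
    print(llm.run("hello"))
```

The key point of this shape is that the rest of the pipeline only ever calls `run()`, so adding a new local model means writing one backend factory rather than touching call sites.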
