
use local model api #1479

Open
lemon-little opened this issue Feb 26, 2024 · 1 comment
@lemon-little

When I use the following command, I am not able to get any response:
(screenshot of the command)

My local model name is gpt-3.5-turbo and the URL is https://0.0.0.0:6006/v1.

But when I run Python code like this:

import openai
openai.api_base = "https://0.0.0.0:6006/v1"  # local IP
# openai.api_base = "https://u17621-9cba-0e882657.westc.gpuhub.com:8443/v1"  # remote-access IP
openai.api_key = "none"

# # Request with a streaming reply
# for chunk in openai.ChatCompletion.create(
#     model="gpt-3.5-turbo",
#     messages=[
#         {"role": "user", "content": "Hello"}
#     ],
#     stream=True
#     # custom stopwords are not yet supported for streaming output
# ):
#     if hasattr(chunk.choices[0].delta, "content"):
#         print(chunk.choices[0].delta.content, end="", flush=True)

# Request without a streaming reply
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "Hello"}
    ],
    stream=False
)
# print(response.choices[0].message.content)
print(response)

I can get an answer.

@faizanahemad

Can you first check the model you are hosting with a curl call, like the one below?

curl -X POST https://0.0.0.0:6006/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{"messages":[{"role":"system","content":"You are a helpful assistant. Please respond to the user request."}, {"role":"user","content":"What is 2 + 3 multiplied by 6."}], "max_tokens": 32, "stop":["<|user|>", "User:"], "temperature":0.0}'
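If you prefer to run the same check from Python, here is a minimal sketch using only the standard library. It sends the exact payload from the curl command above; the base URL https://0.0.0.0:6006/v1 is taken from your setup and is an assumption — substitute your actual host and port.

```python
# Stdlib-only sketch of the curl health check above.
# BASE_URL is an assumption taken from the issue; adjust to your server.
import json
import urllib.request

BASE_URL = "https://0.0.0.0:6006/v1"

# Same JSON body as the curl -d argument above.
payload = {
    "messages": [
        {"role": "system", "content": "You are a helpful assistant. Please respond to the user request."},
        {"role": "user", "content": "What is 2 + 3 multiplied by 6."},
    ],
    "max_tokens": 32,
    "stop": ["<|user|>", "User:"],
    "temperature": 0.0,
}

def check_endpoint(base_url: str = BASE_URL) -> dict:
    """POST the payload to /chat/completions and return the parsed JSON reply."""
    req = urllib.request.Request(
        base_url + "/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))

# check_endpoint()  # uncomment once your local server is running
```

If this returns a normal chat-completions JSON object, the server side is fine and the problem is in how lm-eval is pointed at it.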

If that works, then run lm-eval as:

OPENAI_API_KEY="key" lm_eval --model local-chat-completions --model_args model=MyModel,base_url=https://0.0.0.0:6006/ --tasks gsm8k

If the curl doesn't work, then it's an issue with your local model hosting.
