[Bug]: custom_llm_provider not honored in litellm.py #13814
Comments
I'm not an expert in litellm, but it sounds like you found the issue 😅 I welcome a PR!
I am new to litellm and I'm a bit scared to edit this code, as I found some code that works for the wrong reasons.
litellm.validate_environment() does not return an API key and never returns None, so validate_litellm_api_key will never fail. It should probably just use validate_openai_api_key from openai_utils.py instead.
@krrishdholakia - you implemented #7600. Do you see any issue with these changes?
Yeah, this would only check if the OpenAI API key is in the environment - but a user could be trying to call anthropic/ai21/etc. What is the issue you're trying to solve? @primerano
Hey @krrishdholakia, unrelated to my original issue, I noticed that in validate_litellm_api_key, litellm.validate_environment() returns a dictionary object, so checking whether the returned api_key is None will always be false. Also, validate_litellm_api_key has an unused api_type parameter. I was thinking that litellm always uses OPENAI_API_KEY to pass the API key, so using llama_index/llama-index-legacy/llama_index/legacy/llms/openai_utils.py would work. I am new to litellm, so I may be wrong here. :-)
My original problem, which I also fixed in the init, was that custom_llm_provider was not being set in additional_kwargs, and it caused my API call to fail. See the description area of this issue for the details of what I saw there. Thanks!
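For illustration, a rough sketch of what I mean (not the actual llama_index source; the exact return shape may vary by litellm version):

```python
# litellm.validate_environment() reports which env vars are set; it returns a
# dict and never None, so an "if api_key is None" check on its return value
# can never trigger.
import litellm

result = litellm.validate_environment(model="gpt-3.5-turbo")
print(result)          # e.g. {"keys_in_environment": False, "missing_keys": ["OPENAI_API_KEY"]}
print(result is None)  # always False, so the validation never fails
```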
Trying this from home with local Ollama. This worked. Maybe I just need to add openai/ as a prefix to the model?
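E.g. something along these lines (just a sketch; the endpoint and key are placeholders). My understanding is that the openai/ prefix tells litellm to treat the target as an OpenAI-compatible endpoint at api_base instead of inferring the provider from the model name:

```python
# Sketch: calling an OpenAI-compatible server (such as a LiteLLM proxy)
# through litellm by using the "openai/" provider prefix.
import litellm

response = litellm.completion(
    model="openai/mistralai/Mistral-7B-Instruct-v0.2",  # provider prefix + served model
    api_base="http://localhost:4000",                    # placeholder endpoint
    api_key="sk-anything",                               # placeholder key
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```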
@krrishdholakia, I guess part of my problem is that I was thinking I should use the litellm library to talk to the LiteLLM proxy. But in reality…
The problem is that in LlamaIndex the OpenAI library will reject models that are not natively part of OpenAI (because it needs to look up the context size of the model).
I think this is why I initially switched to using the litellm library. What is the right approach to interact with a LiteLLM proxy through LlamaIndex?
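(One option, an assumption on my part rather than anything confirmed in this thread: LlamaIndex's OpenAILike wrapper talks to any OpenAI-compatible endpoint and takes the context window as an explicit parameter instead of looking it up from OpenAI's model table. A hypothetical sketch with placeholder model, URL, and key:)

```python
# Hypothetical sketch: pointing LlamaIndex's OpenAILike wrapper at a LiteLLM
# proxy. All values below are placeholders, not settings from this issue.
from llama_index.llms.openai_like import OpenAILike

llm = OpenAILike(
    model="mistralai/Mistral-7B-Instruct-v0.2",
    api_base="http://localhost:4000",  # LiteLLM proxy endpoint (placeholder)
    api_key="sk-anything",             # placeholder key
    context_window=32768,              # supplied explicitly, not looked up
    is_chat_model=True,
)
print(llm.complete("Hello"))
```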
Bug Description
I am running LiteLLM as a front end to Amazon Bedrock, and I have the following code:
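(The original snippet is not preserved in this copy of the issue; below is a minimal sketch of the setup being described, using LlamaIndex's LiteLLM wrapper with a placeholder proxy URL and key:)

```python
# Sketch of the assumed setup, not the reporter's exact code. The model name
# comes from the error message below; URL and key are placeholders.
from llama_index.llms.litellm import LiteLLM

llm = LiteLLM(
    model="mistralai/Mistral-7B-Instruct-v0.2",
    api_base="http://localhost:4000",  # LiteLLM proxy endpoint (placeholder)
    api_key="sk-anything",             # placeholder key
)
```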
litellm.py correctly checks the configured custom_llm_provider for verification purposes here, but it is not added to additional_kwargs, so I need to set it manually for my calls to work:
llm.additional_kwargs['custom_llm_provider'] = 'openai'
Without this setting I get
litellm.exceptions.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=mistralai/Mistral-7B-Instruct-v0.2
Should litellm.py set additional_kwargs["custom_llm_provider"]? Or am I just calling this all wrong? ;-)
Version
0.10.38
Steps to Reproduce
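(Not preserved in the original report; below is a hedged reconstruction from the description above, with the same placeholder endpoint and key:)

```python
# Assumed reproduction pieced together from the description, not the
# reporter's exact steps.
from llama_index.llms.litellm import LiteLLM

llm = LiteLLM(
    model="mistralai/Mistral-7B-Instruct-v0.2",
    api_base="http://localhost:4000",  # placeholder
    api_key="sk-anything",             # placeholder
)

# This call raises BadRequestError because custom_llm_provider is never set:
# llm.complete("Hello")

# Manual workaround described above:
llm.additional_kwargs["custom_llm_provider"] = "openai"
print(llm.complete("Hello"))
```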
Relevant Logs/Tracebacks
litellm.exceptions.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=mistralai/Mistral-7B-Instruct-v0.2