Issues: BerriAI/litellm
[Bug]: Fix multi-instance key rpm limit issue (bug) #4148, opened Jun 12, 2024 by krrishdholakia
[Feature]: Azure gpt-4o support (enhancement) #4147, opened Jun 12, 2024 by dcieslak19973
[Bug]: S3 Cache Misses are throwing sentry errors constantly (bug) #4146, opened Jun 12, 2024 by Manouchehri
[Feature]: Rate limit per model per key (enhancement) #4144, opened Jun 12, 2024 by krrishdholakia
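The per-model-per-key limiting requested in #4144 can be pictured as a sliding-window counter keyed on the (api_key, model) pair rather than on the key alone. The sketch below is a hypothetical illustration of that idea, not litellm's implementation; `PerModelRateLimiter` and the fixed 60-second window are invented for this example.

```python
import time
from collections import defaultdict, deque

class PerModelRateLimiter:
    """Hypothetical sketch: enforce requests-per-minute separately
    for each (api_key, model) pair, not just per api key."""

    def __init__(self, limits):
        # limits: {(api_key, model): max requests per 60-second window}
        self.limits = limits
        self.windows = defaultdict(deque)  # (api_key, model) -> timestamps

    def allow(self, api_key, model, now=None):
        now = time.monotonic() if now is None else now
        ident = (api_key, model)
        window = self.windows[ident]
        # Evict timestamps that fell out of the 60-second window.
        while window and now - window[0] >= 60:
            window.popleft()
        if len(window) >= self.limits.get(ident, float("inf")):
            return False  # this (key, model) pair is at its limit
        window.append(now)
        return True

limiter = PerModelRateLimiter({("key-1", "gpt-4o"): 2})
print(limiter.allow("key-1", "gpt-4o", now=0.0))   # True
print(limiter.allow("key-1", "gpt-4o", now=1.0))   # True
print(limiter.allow("key-1", "gpt-4o", now=2.0))   # False: limit of 2 hit
print(limiter.allow("key-1", "gpt-3.5", now=2.0))  # True: separate bucket
```

Keying the deque on the pair, rather than nesting per-model counters under each key, keeps eviction and lookup O(1) per request.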
[Bug]: "GET /v2/model/info HTTP/1.1" 500 Internal Server Error (bug) #4143, opened Jun 12, 2024 by Lh111d
[Bug]: Langfuse prompt Object of type TextPromptClient is not JSON serializable (bug) #4140, opened Jun 12, 2024 by ivanviragine
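The serialization failure in #4140 is the classic "object of type X is not JSON serializable" TypeError from `json.dumps`. A generic workaround pattern, sketched below under stated assumptions: `safe_default` and `FakePromptClient` are illustrative stand-ins, not litellm or langfuse code, and the real fix may differ.

```python
import json

def safe_default(obj):
    """Fallback for objects json.dumps can't serialize natively.

    Tries common escape hatches (model_dump/dict/to_dict, then
    __dict__), and stringifies as a last resort, so a logging
    payload never raises TypeError mid-request."""
    for attr in ("model_dump", "dict", "to_dict"):
        method = getattr(obj, attr, None)
        if callable(method):
            return method()
    if hasattr(obj, "__dict__"):
        return vars(obj)
    return str(obj)

class FakePromptClient:
    """Stand-in for an SDK object like langfuse's TextPromptClient."""
    def __init__(self, name, prompt):
        self.name = name
        self.prompt = prompt

payload = {"metadata": {"prompt": FakePromptClient("greet", "Hello {{name}}")}}
print(json.dumps(payload, default=safe_default))
```

Passing `default=` at each call site (or subclassing `json.JSONEncoder`) keeps the workaround local to the logging path instead of mutating the SDK object.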
[Bug]: Tutorial hyperlink error about 'https://litellm.vercel.app/docs/index' (bug) #4135, opened Jun 11, 2024 by CellCS
[Bug]: Mapping error for Azure TTS model (bug) #4127, opened Jun 11, 2024 by lalineanegra
[Bug]: Key RPM not working as expected (bug) #4125, opened Jun 11, 2024 by ishaan-jaff
[Feature]: Dynamic tpm quota (multiple projects) (enhancement) #4124, opened Jun 11, 2024 by krrishdholakia
[Feature]: Know if cache was hit or not in the modelResponse (also in langfuse) (enhancement) #4109, opened Jun 11, 2024 by rahulgoel
[Bug]: Bedrock Error: ServiceUnavailableError: BedrockException - 'text' (bug) #4098, opened Jun 10, 2024 by xandernewton
[Feature]: Handle backend api disconnect errors (enhancement) #4097, opened Jun 10, 2024 by krrishdholakia
[Feature]: callback when model RPM/TPM limits are reached (enhancement) #4096, opened Jun 10, 2024 by krrishdholakia
[Feature]: Admin UI - having the ability to remove users from teams (enhancement) #4094, opened Jun 10, 2024 by ishaan-jaff
[Bug]: Litellm can't log traces into langfuse when using ChatOpenAI.invoke sequentially (bug) #4093, opened Jun 10, 2024 by databill86
[Feature]: Support https://... pdf files for vertex ai (enhancement) #4079, opened Jun 8, 2024 by letmefocus
[Bug]: OpenAI Proxy Server: API key cannot use free models when key/team budget is exceeded (bug) #4069, opened Jun 7, 2024 by awschmeder
[Feature]: set token-level timeouts (streaming + fallbacks) (enhancement) #4050, opened Jun 6, 2024 by krrishdholakia
[Bug]: Workflow failed: auto_update_price_and_context_window (bug) #4044, opened Jun 6, 2024 by paneru-rajan
[Feature]: Support '1-month' concept for budget api (enhancement) #4042, opened Jun 6, 2024 by krrishdholakia
[Bug]: utils.trim_messages should have consistent tuple return value when return_response_tokens is True (bug) #4041, opened Jun 6, 2024 by cwang
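Issue #4041 asks that `utils.trim_messages` return the same shape on every code path when `return_response_tokens` is True. A minimal sketch of that contract: the token estimator and trimming policy below are invented for illustration and are not litellm's implementation.

```python
def trim_messages(messages, max_tokens, return_response_tokens=False):
    """Illustrative sketch, not litellm's code: drop the oldest
    non-system messages until the estimated token count fits, and
    always return a (messages, remaining_tokens) tuple when
    return_response_tokens is True, plain messages otherwise."""

    def estimate_tokens(msg):
        # Crude stand-in estimator: roughly 1 token per 4 characters.
        return max(1, len(msg.get("content", "")) // 4)

    trimmed = list(messages)
    while trimmed and sum(estimate_tokens(m) for m in trimmed) > max_tokens:
        # Preserve system messages; evict the oldest other message.
        idx = next((i for i, m in enumerate(trimmed) if m["role"] != "system"), None)
        if idx is None:
            break
        trimmed.pop(idx)

    if return_response_tokens:
        remaining = max_tokens - sum(estimate_tokens(m) for m in trimmed)
        return trimmed, remaining  # consistent tuple on every path
    return trimmed

msgs = [
    {"role": "system", "content": "You are terse."},
    {"role": "user", "content": "x" * 400},
    {"role": "user", "content": "short question"},
]
result, budget = trim_messages(msgs, max_tokens=20, return_response_tokens=True)
print(len(result), budget)  # -> 2 14
```

The point is the single return statement per flag value: callers can always unpack a two-tuple when they pass `return_response_tokens=True`, regardless of whether trimming occurred.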
[Feature]: Don't require 'master_key' if jwt_auth is enabled (enhancement) #4040, opened Jun 6, 2024 by krrishdholakia