Merge pull request BerriAI#4703 from BerriAI/litellm_only_use_internal_use_cache

[Fix Memory Usage] - only use per request tracking if slack alerting is being used
ishaan-jaff committed Jul 14, 2024
2 parents 7850814 + 69f74c1 commit ed5114c
Showing 1 changed file with 4 additions and 0 deletions.
4 changes: 4 additions & 0 deletions litellm/proxy/utils.py
@@ -280,6 +280,10 @@ def _init_litellm_callbacks(self, llm_router: Optional[litellm.Router] = None):
     async def update_request_status(
         self, litellm_call_id: str, status: Literal["success", "fail"]
     ):
+        # only use this if slack alerting is being used
+        if self.alerting is None:
+            return
+
         await self.internal_usage_cache.async_set_cache(
             key="request_status:{}".format(litellm_call_id),
             value=status,
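The fix is a simple early-return guard: when `self.alerting` is unset, the per-request status write to the internal cache is skipped entirely, so the cache never accumulates a `request_status:*` entry per call. Below is a minimal, self-contained sketch of that pattern, with a hypothetical `InMemoryCache` standing in for litellm's actual internal usage cache (the real class and its configuration are not shown in this diff):

```python
import asyncio
from typing import Literal, Optional


class InMemoryCache:
    """Hypothetical stand-in for litellm's internal usage cache."""

    def __init__(self) -> None:
        self._store: dict = {}

    async def async_set_cache(self, key: str, value: str) -> None:
        self._store[key] = value


class ProxyLogging:
    """Sketch of the guard this commit adds: per-request status is
    only written to the cache when alerting (e.g. slack) is configured."""

    def __init__(self, alerting: Optional[list] = None) -> None:
        self.alerting = alerting  # e.g. ["slack"], or None when disabled
        self.internal_usage_cache = InMemoryCache()

    async def update_request_status(
        self, litellm_call_id: str, status: Literal["success", "fail"]
    ) -> None:
        # only use this if slack alerting is being used
        if self.alerting is None:
            return  # skip the write -> no per-request memory growth

        await self.internal_usage_cache.async_set_cache(
            key="request_status:{}".format(litellm_call_id),
            value=status,
        )


if __name__ == "__main__":
    # With alerting disabled, nothing is cached.
    off = ProxyLogging(alerting=None)
    asyncio.run(off.update_request_status("call-1", "success"))
    print(off.internal_usage_cache._store)  # {}

    # With alerting enabled, the status is tracked per request.
    on = ProxyLogging(alerting=["slack"])
    asyncio.run(on.update_request_status("call-1", "success"))
    print(on.internal_usage_cache._store)  # {'request_status:call-1': 'success'}
```

The design choice is to gate the write at the producer rather than evicting entries later: if no consumer (slack alerting) will ever read the status, the cheapest fix for memory usage is to never store it.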
