"Backing off" rate limiting error message reports incorrect model kwargs #1079
Changing the class from:

```python
class get_count(dspy.Module):
    def __init__(self):
        super().__init__()
        self.prog = dspy.ChainOfThought(PredictCount, n=1, temperature=0.0)

    def forward(self, title, text):
        return self.prog(title=title, text=text)

program_count = get_count()
```

to:

```python
class get_count(dspy.Module):
    def __init__(self):
        super().__init__()
        self.prog = dspy.ChainOfThought(PredictCount)

    def forward(self, title, text):
        return self.prog(title=title, text=text)

program_count = get_count()
```

seems to have changed the issue. Now when I hit the rate limit, I get:
Still misreports the model kwargs. Is it using the correct ones?
Even separately adding the kwargs:

```python
...
self.prog = dspy.ChainOfThought(PredictArticleEventCount, n=3)
...
```

or:

```python
self.prog = dspy.ChainOfThought(PredictArticleEventCount, temperature=0.0)
```

When it hits a rate limit:
The temperature value is expected; it gets set when bootstrapping (see `dspy/dspy/teleprompt/bootstrap.py`, line 175, commit 8e01bee).
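The point above would explain the unexpected kwargs. A minimal sketch of the idea, assuming (this is not the actual dspy source, and `bootstrap_round_kwargs` is a hypothetical name) that the bootstrap optimizer overrides the temperature per bootstrapping round:

```python
# Hedged sketch: how a bootstrap optimizer might override model kwargs
# per round, so the values seen at call time differ from the ones you
# configured on the lm. The 0.7 + 0.001 * round_idx formula is an
# assumption for illustration, not a quote of bootstrap.py.
def bootstrap_round_kwargs(base_kwargs, round_idx):
    kwargs = dict(base_kwargs)  # copy so the user's lm config is untouched
    # A higher temperature, nudged slightly per round, diversifies the
    # demos generated across bootstrapping rounds.
    kwargs["temperature"] = 0.7 + 0.001 * round_idx
    return kwargs
```

Under this assumption, an lm configured with `temperature=0.0` would still be called with `temperature=0.7` during round 0 of bootstrapping.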
It's not obvious to me that forcing the temperature to be zero would make sense when using the bootstrap optimizers.
I took "bootstrap" to mean that my provided labeled training examples would be randomly sampled with replacement similar to how bootstrapping is used in Random Forests, or other "bagging" (bootstrap aggregating) models. I did not understand that "bootstrap" in this library was redefined to mean allowing the LLM more imaginative creativity when creating entirely new demos to train on. |
The way it works is that it samples from your training examples and generates outputs/labels for some of those samples, which are then used together with samples whose output/label is the one you provided. So it's a mix of both. It is not generating completely new demos, although my previous comment made it sound like that.
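The sampling described above can be sketched roughly as follows. This is an illustration of the mixing idea only, with assumed names (`build_demos`, `generate_label`, `max_bootstrapped`), not dspy's actual implementation:

```python
import random

# Hedged sketch: bootstrapped demos mix (a) examples whose labels a
# teacher model generates with (b) examples that keep the gold labels
# you provided in the trainset.
def build_demos(trainset, generate_label, max_bootstrapped=2, seed=0):
    rng = random.Random(seed)
    sampled = rng.sample(trainset, k=len(trainset))  # shuffle the trainset
    demos = []
    for example in sampled[:max_bootstrapped]:
        # Teacher-generated label for a subset of the sampled examples.
        demos.append({"input": example["input"],
                      "label": generate_label(example["input"])})
    for example in sampled[max_bootstrapped:]:
        # Remaining examples keep the label you provided.
        demos.append(example)
    return demos
```

The resulting demo set is the "mix of both" described in the comment: some labels are model-generated, the rest are yours.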
When running an optimizer and hitting a rate limit, the "Backing off" message reports incorrect model kwargs: `n` and `temperature`.

Setting up the `lm`:

and the optimizer:

show the `lm` kwargs.

Run the optimizer:

When I hit a rate limit, I get:

Why is it reporting a different `n` and `temperature`? Is the optimizer using these instead? How do I change them if it is?
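One likely explanation for the mismatch, sketched below under assumptions (the wrapper and its names are hypothetical, written in the style of the `backoff` library rather than copied from dspy): the "Backing off" message logs the kwargs of the failing call itself, so if the optimizer invokes the lm with its own internal `n`/`temperature`, those overridden values appear in the log instead of your configured ones.

```python
import time

# Hedged sketch of a "Backing off" retry wrapper. RuntimeError stands in
# for a provider rate-limit error. The log line echoes the kwargs of the
# call being retried, not the kwargs you set when configuring the lm.
def with_backoff(fn, max_tries=3, base_delay=0.01, log=print):
    def wrapper(**kwargs):
        for attempt in range(max_tries):
            try:
                return fn(**kwargs)
            except RuntimeError:
                delay = base_delay * (2 ** attempt)  # exponential backoff
                log(f"Backing off {delay:.2f}s after rate limit; kwargs={kwargs}")
                time.sleep(delay)
        raise RuntimeError("rate limit: retries exhausted")
    return wrapper
```

If something upstream (for example, a bootstrapping round) rewrites the kwargs before the call, this logging style reports the rewritten values, which would match the behavior described in this issue.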