understanding context length behaviors #1642
Comments
Hi! The current (intended) behavior is simply to left-truncate inputs so they never exceed the model's max length.
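As a minimal sketch (not the harness's actual code), left-truncation just keeps the most recent tokens, dropping the oldest context once the prompt is too long:

```python
def left_truncate(token_ids, max_length):
    """Keep only the last `max_length` tokens (hypothetical helper;
    the real harness does this on tokenized inputs inside HFLM)."""
    if len(token_ids) <= max_length:
        return token_ids
    # Drop tokens from the left, i.e. the beginning of the prompt.
    return token_ids[-max_length:]


# Example: a 10-token prompt truncated to a 4-token context window
print(left_truncate(list(range(10)), 4))  # → [6, 7, 8, 9]
```

Note that because the cut happens from the left, it is the task description and the earliest few-shot examples that get dropped first.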
(PS: this is all described for HFLM, but the other local model implementations are meant to match HF as closely as possible in behaviors like this.) There aren't currently any optimizations for intelligently truncating beyond this: because LMs' tokenizers are not currently exposed to the tasks or to the construction of string inputs, it's a pain to figure out beforehand what would be truncated and, e.g., provide only the maximum number of shots that fit while keeping the prefixed task description and few-shot format. We'd like to improve this behavior in future, or at minimum make it clear via logging when requests are being truncated! Hope this helps!
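The smarter behavior described above could be sketched roughly as follows. This is a hypothetical approach, not something implemented in the harness: greedily include few-shot examples only while they fit, always preserving the task description and the evaluated document.

```python
def fit_fewshot(description_ids, shot_id_lists, doc_ids, max_length):
    """Hypothetical: build a prompt that keeps the task description and the
    evaluated document intact, adding few-shot examples only while they fit
    within the model's max length."""
    # Budget left over after the mandatory description and document.
    budget = max_length - len(description_ids) - len(doc_ids)
    kept = []
    for shot in shot_id_lists:
        if len(shot) > budget:
            break  # this shot (and later ones) would overflow the context
        kept.append(shot)
        budget -= len(shot)
    prompt = list(description_ids)
    for shot in kept:
        prompt += shot
    prompt += doc_ids
    return prompt


# Example with toy token ids: description of 1 token, three 2-token shots,
# a 2-token document, and a 7-token context — only two shots fit.
prompt = fit_fewshot([0], [[1, 1], [2, 2], [3, 3]], [9, 9], max_length=7)
print(prompt)  # → [0, 1, 1, 2, 2, 9, 9]
```

The tricky part, as noted above, is that doing this properly requires the tokenizer at prompt-construction time, which the task/request pipeline doesn't currently expose.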
Hi, I have a quick question. What is the behavior of the harness if the input examples exceed the model's sequence length in long-document tasks (and how do few-shot examples influence this)? Thank you!