"Please select a token to use as pad_token" error for alpaca-lora-7b model #434
Thanks for opening an issue! Re: Alpaca, you can fix this error by setting the pad token right after the tokenizer is initialized; that should fix it. I can consider how we want to allow users to pass this via the command line. As for the latter logging message, I believe the same line as above will silence it. You'll get the same results either way, though.
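The exact snippet was lost in this copy of the thread, but the common Transformers fix it describes is to reuse the EOS token as the pad token right after loading the tokenizer. A minimal sketch, assuming a Hugging Face-style tokenizer object; the `StubTokenizer` class and its `</s>` EOS value are illustrative stand-ins so the example runs without downloading a model:

```python
class StubTokenizer:
    """Stand-in for a tokenizer that ships without a pad token,
    as LLaMA/Alpaca tokenizers commonly do."""
    def __init__(self):
        self.eos_token = "</s>"  # assumption: </s> as EOS, typical for LLaMA
        self.pad_token = None


def ensure_pad_token(tokenizer):
    """Reuse EOS as the pad token when none is defined.

    Padded positions are masked out during scoring, so (as noted above)
    results are the same either way.
    """
    if tokenizer.pad_token is None:
        tokenizer.pad_token = tokenizer.eos_token
    return tokenizer


tok = ensure_pad_token(StubTokenizer())
print(tok.pad_token)  # </s>
```

With the real library, the same two lines would go immediately after `AutoTokenizer.from_pretrained(...)`.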
Thanks for looking into this! I tried to add the code you suggested to that file. In fact, I found this code is already there.
I did a bit of digging. The problem is that the Alpaca model ships without a pad token set. Instead of doing what you suggested, I substituted line 283 (to my knowledge that is the EOS token for Alpaca, but in any case using either value behaves the same). This change caused another error at line 407, so I replaced that line as well.
Now I can run evaluation on Alpaca (and Dolly) and get results; I'm not sure whether they are correct, though. The issue of the missing pad token for the Alpaca model might be related to huggingface/transformers#22312. In any case, it would be good if this were handled in the harness itself.
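For context on why a missing pad token breaks evaluation at all: batched scoring has to right-pad sequences to a common length and mask the padding out, so *some* pad id is required even though padded positions never affect the scores. A self-contained sketch of that mechanics (no real tokenizer involved; the token ids are made up for illustration):

```python
def pad_batch(sequences, pad_id):
    """Right-pad token-id sequences to the batch's longest length.

    Returns (input_ids, attention_mask); mask is 0 on padded positions,
    which is why reusing EOS as pad does not change results.
    """
    longest = max(len(seq) for seq in sequences)
    input_ids, attention_mask = [], []
    for seq in sequences:
        n_pad = longest - len(seq)
        input_ids.append(seq + [pad_id] * n_pad)
        attention_mask.append([1] * len(seq) + [0] * n_pad)
    return input_ids, attention_mask


# LLaMA-family models conventionally use id 2 for </s>; treated here
# as an assumption, not a fact about this particular Alpaca upload.
ids, mask = pad_batch([[5, 6, 7], [8]], pad_id=2)
print(ids)   # [[5, 6, 7], [8, 2, 2]]
print(mask)  # [[1, 1, 1], [1, 0, 0]]
```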
Thanks for looking into this — I will fix the issue at line 407! I'll need to do some more digging on the Alpaca issue, though, if it's a case where only this Alpaca upload does not work when doing the …
I don't think so …
I have the same issue.
When I run it on the alpaca-lora-7b model this way:

python main.py --model hf-causal-experimental --model_args pretrained=chainyo/alpaca-lora-7b --tasks qasper --device cuda:4

I get an error. Same problem when I try Alpaca with the squad2 dataset. Note that this dataset works fine with the Dolly model; I tested it with dolly-v2-12b:

python main.py --model hf-causal-experimental --model_args pretrained=databricks/dolly-v2-12b --tasks qasper --device cuda:4

That run gives tons of repeated messages, but I do get the metrics at the end in the table.
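Those repeated messages are most likely `generate()`'s per-call fallback: when no `pad_token_id` is supplied, transformers substitutes `eos_token_id` and logs a notice every call, which is why the run still finishes with valid metrics. A toy stand-in for that fallback logic (not the library's actual code) showing why passing `pad_token_id` explicitly would silence the log:

```python
def resolve_pad_token_id(pad_token_id, eos_token_id):
    """Mimic, in spirit, the fallback transformers applies inside generate().

    Returns (effective_pad_id, would_warn). The real library logs a
    'Setting pad_token_id to eos_token_id ...' style message once per call,
    which is why the warning repeats for every generated sample.
    """
    if pad_token_id is None:
        return eos_token_id, True   # fallback used -> notice is logged
    return pad_token_id, False      # explicit pad id -> no notice


# Not passing a pad id triggers the per-call notice; passing one does not.
# (The id 2 is the usual LLaMA </s> id, used here only as an assumption.)
print(resolve_pad_token_id(None, 2))  # (2, True)
print(resolve_pad_token_id(2, 2))     # (2, False)
```

In practice this means supplying `pad_token_id=tokenizer.eos_token_id` in the generation kwargs (or setting `tokenizer.pad_token` as suggested earlier in the thread) should quiet the repeated output without changing the metrics.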