Pythia ckpts: vocab_size in HF config (50304 for the 70m ckpt) and tokenizer.json (50257) are mismatched #65
Hi! I believe this is a bug in the evaluation harness. If you replace the following function (https://github.com/EleutherAI/lm-evaluation-harness/blob/f9eca2c8160be8c20ecc956b7ff545f880160d0e/lm_eval/models/gpt2.py#L121) with a corrected version, all should work! cc @jon-tow
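Judging from the RuntimeError in the traceback below (a token id of 50276 indexing a logits dimension of size 50257), the culprit is plausibly a hardcoded `[:, :, :50257]` logits slice in `_model_call`, and the fix would be to return the unsliced logits. A sketch under that assumption, not necessarily the exact patch that landed:

```python
# Sketch only: assumes _model_call at gpt2.py#L121 previously truncated
# logits to GPT-2's 50257-token vocabulary. Keeping the full logits width
# lets models with larger (padded) vocabularies, such as Pythia, work.
import torch

def _model_call(self, inps):
    """
    inps: a torch tensor of shape [batch, sequence]

    returns: a torch tensor of shape [batch, sequence, vocab] with the
    logits returned from the model
    """
    with torch.no_grad():
        return self.gpt2(inps)[0]  # no [:, :, :50257] slice
```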
The eval harness has been patched, so this should work fine now.
I modified https://github.com/EleutherAI/lm-evaluation-harness/blob/f9eca2c8160be8c20ecc956b7ff545f880160d0e/lm_eval/models/gpt2.py#L50 to add transformers.GPTNeoXTokenizerFast, as sketched below.
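For illustration, the edit plausibly looks like the following, assuming line 50 holds the tokenizer-type assertion in the gpt2.py model constructor (the surrounding code is a hypothetical reconstruction, not a verbatim copy; `self.tokenizer` is assumed to be in scope):

```python
# Hypothetical reconstruction of the tokenizer check around gpt2.py#L50,
# with GPTNeoXTokenizerFast added so Pythia checkpoints pass the assert.
import transformers

assert isinstance(
    self.tokenizer,
    (
        transformers.GPT2Tokenizer,
        transformers.GPT2TokenizerFast,
        transformers.T5Tokenizer,
        transformers.T5TokenizerFast,
        transformers.GPTNeoXTokenizerFast,  # added for Pythia / GPT-NeoX models
    ),
), "this tokenizer has not been checked for compatibility yet!"
```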
The command is:
python main.py --model gpt2 --model_args pretrained=/work/lm-evaluation-harness/ckpts/pythia-70m/step143000/models--EleutherAI--pythia-70m/snapshots/1c607732430c35e6387a86528d857887e87cae1f --tasks lambada_openai,hellaswag --device 1
The traceback is:
Running loglikelihood requests
0%| | 8/45296 [00:01<1:46:22, 7.10it/s]
Traceback (most recent call last):
  File "/work/lm-evaluation-harness/main.py", line 108, in <module>
    main()
  File "/work/lm-evaluation-harness/main.py", line 79, in main
    results = evaluator.simple_evaluate(
  File "/work/lm-evaluation-harness/lm_eval/utils.py", line 161, in _wrapper
    return fn(*args, **kwargs)
  File "/work/lm-evaluation-harness/lm_eval/evaluator.py", line 86, in simple_evaluate
    results = evaluate(
  File "/work/lm-evaluation-harness/lm_eval/utils.py", line 161, in _wrapper
    return fn(*args, **kwargs)
  File "/work/lm-evaluation-harness/lm_eval/evaluator.py", line 247, in evaluate
    resps = getattr(lm, reqtype)([req.args for req in reqs])
  File "/work/lm-evaluation-harness/lm_eval/base.py", line 820, in fn
    rem_res = getattr(self.lm, attr)(remaining_reqs)
  File "/work/lm-evaluation-harness/lm_eval/base.py", line 185, in loglikelihood
    return self._loglikelihood_tokens(new_reqs)
  File "/work/lm-evaluation-harness/lm_eval/base.py", line 317, in _loglikelihood_tokens
    logits = torch.gather(logits, 2, cont_toks.unsqueeze(-1)).squeeze(
RuntimeError: index 50276 is out of bounds for dimension 2 with size 50257
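For context, the failure is easy to reproduce in isolation: the unpatched harness truncated logits to GPT-2's 50257-entry vocabulary, while the GPT-NeoX tokenizer emits token ids as high as 50276, so the gather at base.py#L317 indexes past the last dimension. (The 50304 in the HF config is the model's embedding width, padded beyond the tokenizer's vocabulary, so keeping the full logits width is safe; the extra ids are simply never produced.) A minimal standalone sketch with synthetic tensors, not the harness's actual data:

```python
import torch

# Logits truncated to GPT-2's vocabulary, as the unpatched harness did.
batch, seq, vocab = 1, 4, 50257
logits = torch.randn(batch, seq, vocab)

# GPT-NeoX / Pythia continuation tokens can carry ids above 50256.
cont_toks = torch.tensor([[50276, 11, 22, 33]])

# Reproduces: RuntimeError: index 50276 is out of bounds for dimension 2
# with size 50257
logits = torch.gather(logits, 2, cont_toks.unsqueeze(-1)).squeeze(-1)
```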