
don't resize embeddings if it's already large enough #577

Merged (2 commits into main, Sep 15, 2023)
Conversation

winglian (Collaborator)

For example, we don't actually need to resize phi, because the embedding matrix is already much larger than the tokenizer length:

>>> from transformers import AutoModelForCausalLM, AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-1_5", trust_remote_code=True)
>>> len(tokenizer)
50295
>>> model = AutoModelForCausalLM.from_pretrained("microsoft/phi-1_5", trust_remote_code=True)
>>> model.get_input_embeddings().num_embeddings
51200
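In code, the check described above amounts to comparing len(tokenizer) against the embedding matrix size before resizing. A minimal sketch, not the exact axolotl change; the variable names are illustrative:

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "microsoft/phi-1_5"  # example model from the PR description
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True)

embeddings_len = model.get_input_embeddings().num_embeddings
if len(tokenizer) > embeddings_len:
    # Tokenizer has more tokens than the embedding matrix can hold: resize.
    model.resize_token_embeddings(len(tokenizer))
# Otherwise (51200 >= 50295 for phi-1_5) leave the embeddings untouched.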

@NanoCode012 (Collaborator)

As I remember, model.resize_token_embeddings implicitly calls model.tie_weights. By skipping the resize, we're no longer tying the weights. I'm not sure whether this still matters; it showed up as a warning message while fine-tuning a month ago.

It is also odd that the tokenizer's length and the model's embedding matrix have different sizes.
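On the tie_weights point: since resize_token_embeddings calls tie_weights as part of the resize, skipping the resize also skips the re-tying. A minimal sketch of handling both cases, using a hypothetical helper name rather than the exact code from the second commit:

from transformers import PreTrainedModel, PreTrainedTokenizerBase

def maybe_resize_embeddings(model: PreTrainedModel, tokenizer: PreTrainedTokenizerBase) -> None:
    # Hypothetical helper: resize only when the tokenizer outgrew the embeddings.
    if len(tokenizer) > model.get_input_embeddings().num_embeddings:
        model.resize_token_embeddings(len(tokenizer))  # re-ties weights internally
    else:
        # No resize happens, so call tie_weights() explicitly to keep the input
        # and output embeddings shared for models that tie them.
        model.tie_weights()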

winglian merged commit 3607882 into main on Sep 15, 2023
6 checks passed
winglian deleted the embeddings-sz branch on September 15, 2023 at 19:47
mkeoliya pushed a commit to mkeoliya/axolotl that referenced this pull request on Dec 15, 2023:

…ollective#577)

* don't resize embeddings if it's already large enough
* make sure to tie weights, even if we aren't resizing