Is KeyBERT going to support LLaMA? #172
Comments
Thanks for sharing. The model that you gave KeyBERT is meant for creating embeddings, not for performing the keyword search itself. It should be possible to integrate it within KeyBERT, but since its procedure is quite different from how KeyBERT works, many parameters would not have an effect.
Hi, I received an error once I changed the model to decapoda-research/llama-7b-hf. Is this error coming from sentence-transformers?

ValueError: Asking to pad but the tokenizer does not have a padding token. Please select a token to use as pad_token (tokenizer.pad_token = tokenizer.eos_token e.g.) or add a new pad token via tokenizer.add_special_tokens({'pad_token': '[PAD]'}).
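The error message itself points at the workaround: the LLaMA tokenizer ships without a padding token, so any call that pads a batch raises this ValueError until one is assigned. A minimal sketch of the fix, using gpt2 here as a small stand-in for a tokenizer that also lacks a pad token (the same assignment applies to decapoda-research/llama-7b-hf):

```python
from transformers import AutoTokenizer

# gpt2 stands in for llama-7b-hf: both tokenizers lack a pad token by default.
tokenizer = AutoTokenizer.from_pretrained("gpt2")

if tokenizer.pad_token is None:
    # Reuse the end-of-sequence token for padding, as the error suggests.
    tokenizer.pad_token = tokenizer.eos_token

# Padding a batch of unequal-length texts now succeeds instead of raising.
batch = tokenizer(["short text", "a somewhat longer piece of text"], padding=True)
print(len(batch["input_ids"][0]) == len(batch["input_ids"][1]))
```

The alternative mentioned in the traceback, tokenizer.add_special_tokens({'pad_token': '[PAD]'}), adds a brand-new token instead, but then the model's embedding matrix must be resized to match the enlarged vocabulary.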