Making this library more like Hugging Face #25
Comments
For reference, I am currently using this wrapper to run the model: https://github.com/oobabooga/text-generation-webui/blob/main/modules/RWKV.py

Thanks, based on your suggestions I have added a RWKVTokenizer wrapper: oobabooga/text-generation-webui@e91f4bc
I have expressed my interest in having RWKV officially implemented in Hugging Face in huggingface/transformers#17230.
Meanwhile, I have a distilled set of suggestions for how this library could be made more familiar to people who are already used to `transformers` and `AutoModelForCausalLM`. Maybe some of these are already possible in the current version of `rwkv`. If so, I would be grateful if you could let me know how.

1. Being able to load the tokenizer explicitly
with something like a standalone tokenizer object, and then use it to encode and decode prompts. Having the ability to count the number of tokens in a given prompt is very useful.
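A sketch of what that interface could look like. The class name `RWKVTokenizer` and its methods are hypothetical here, chosen to mirror the Hugging Face tokenizer API rather than any existing `rwkv` API, and a toy byte-level vocabulary stands in for the real tokenizer file:

```python
# Hypothetical tokenizer interface mirroring Hugging Face's encode/decode.
# The byte-level vocabulary is a placeholder so the sketch is runnable;
# the real library would load its own vocabulary file instead.

class RWKVTokenizer:
    def __init__(self):
        # Stand-in vocabulary: token id == byte value.
        self.vocab_size = 256

    def encode(self, text: str) -> list[int]:
        """Turn a prompt into a list of token ids."""
        return list(text.encode("utf-8"))

    def decode(self, input_ids: list[int]) -> str:
        """Turn token ids back into text."""
        return bytes(input_ids).decode("utf-8")

tokenizer = RWKVTokenizer()
input_ids = tokenizer.encode("Hello RWKV")
print(len(input_ids))               # -> 10, the prompt's token count
print(tokenizer.decode(input_ids))  # -> "Hello RWKV"
```

With an interface like this, counting the tokens in a prompt is just `len(tokenizer.encode(prompt))`.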
2. Generating text with input_ids as input rather than a string

Something like passing a list of token ids directly to the generation call instead of raw text.
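A minimal sketch of that calling convention, assuming an HF-style `generate()` signature (which is the suggestion, not the library's current API). The `model_forward` function below is a stand-in for the real RWKV forward pass, returning flat logits just so the example executes:

```python
# Hypothetical generation entry point taking token ids, not a string.

def model_forward(input_ids, state=None):
    # Placeholder: a real implementation would return next-token logits
    # and the updated recurrent state.
    logits = [0.0] * 256
    return logits, state

def generate(input_ids, max_new_tokens=8):
    """Extend a list of token ids, HF-style, instead of taking a string."""
    ids = list(input_ids)
    state = None
    for _ in range(max_new_tokens):
        logits, state = model_forward(ids, state)
        # Greedy choice keeps the sketch deterministic; real code would
        # sample with temperature / top_p.
        next_id = max(range(len(logits)), key=logits.__getitem__)
        ids.append(next_id)
    return ids

out = generate([72, 101, 108, 108, 111], max_new_tokens=3)
print(out)  # the original ids followed by 3 generated ids
```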
3. Generation parameters

Many parameters are available for `model.generate()` in HF, but it seems to me that the absolutely essential ones that everyone uses are the common sampling parameters such as `temperature`, `top_p`, and `repetition_penalty`.

I am aware that `alpha_frequency` and `alpha_presence` are implemented, but these parameters are not usually found in presets that people have already come up with while working with other models. For this reason, having `repetition_penalty` would be valuable.
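To make the distinction concrete, here is a sketch of the two penalty styles side by side: the additive OpenAI-style `alpha_frequency`/`alpha_presence` penalties the library already has, and the multiplicative HF-style `repetition_penalty` this issue asks for. The function names are illustrative, not part of either library's API, and plain lists of floats stand in for logits:

```python
from collections import Counter

def apply_alpha_penalties(logits, generated_ids, alpha_frequency, alpha_presence):
    """Additive penalty: logit -= count * alpha_frequency + alpha_presence."""
    counts = Counter(generated_ids)
    out = list(logits)
    for tok, n in counts.items():
        out[tok] -= n * alpha_frequency + alpha_presence
    return out

def apply_repetition_penalty(logits, generated_ids, penalty):
    """HF-style penalty: divide positive logits, multiply negative ones."""
    out = list(logits)
    for tok in set(generated_ids):
        out[tok] = out[tok] / penalty if out[tok] > 0 else out[tok] * penalty
    return out

logits = [2.0, -1.0, 0.5]
seen = [0, 0, 1]  # token 0 appeared twice, token 1 once
print(apply_alpha_penalties(logits, seen, 0.2, 0.2))
print(apply_repetition_penalty(logits, seen, 2.0))  # -> [1.0, -2.0, 0.5]
```

Supporting both would let people reuse existing `transformers` presets unchanged while keeping the current behavior as the default.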