Inquiry Regarding Documentation #1015
A lot of people come to this repo trying to use it for inference or parameter-efficient finetuning, and that statement is there primarily to discourage those users. If you wish to finetune, or continue pretraining, a large model originally trained with this library (or one of the few we support importing, such as LLaMA) on tens or hundreds of billions of tokens, this is an appropriate library to use. From the perspective of the median person interested in small-scale finetuning, though, it is very much overcomplicated.
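For what it's worth, here is a minimal sketch of what continued pretraining looks like in gpt-neox config terms. All file names and paths below are hypothetical, and exact option names and tool locations can differ between gpt-neox versions, so check the `configs/` directory of your checkout. The key piece is `finetune: true`, which (as I understand it) loads the model weights from `load` but skips the optimizer/rng state and resets the iteration counter.

```yaml
# continue_pretrain.yml -- hypothetical overrides, merged with your base
# model/hardware configs on the command line. Paths are placeholders.
{
  # Load the existing (trained-to-2021) checkpoint. "finetune" loads the
  # model weights but not optimizer state, which is what you want when
  # continuing pretraining on new data.
  "finetune": true,
  "load": "/checkpoints/base-model",
  "save": "/checkpoints/base-model-2022-2023",

  # 2022-2023 text, pre-tokenized with the repo's preprocess_data tool
  # (tools/preprocess_data.py in older versions; the location varies).
  "train-data-paths": ["/data/2022_2023_train_text_document"],
  "valid-data-paths": ["/data/2022_2023_val_text_document"],
  "test-data-paths": ["/data/2022_2023_test_text_document"],

  "train-iters": 10000,  # size this to your token budget

  "optimizer": {
    "type": "Adam",
    "params": {
      "lr": 1.0e-5  # usually well below the original pretraining peak LR
    }
  }
}
```

It would be launched the same way as any other run, e.g. `python ./deepy.py train.py continue_pretrain.yml base_model.yml`, where `base_model.yml` holds the original architecture settings.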
I appreciate the explanation you provided.
I am looking to further train a gpt-neox-based model on data from 2022 and 2023; the model was originally trained on data up to 2021.
However, the gpt-neox documentation says the library is not recommended unless you are starting from scratch. Could you explain why that statement is there?
Thanks!