Are there any official guidelines for resuming training from a specific checkpoint?
Looking at the gpt-neox repository, I guess we need to set the `load` parameter in the config.
But I assume there is no 1:1 mapping between data chunks and checkpoints, since there are 133 data splits and 143,000 steps.
Are there any existing resources to ensure our setup faithfully reproduces your training?
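For concreteness, here is a minimal sketch of what I assume the resume config would look like, based on the standard gpt-neox checkpointing options (`load`, `finetune`, `train-iters`); the path and the exact resume behavior are my assumptions, not settings confirmed by the maintainers:

```yaml
{
  # Directory containing the downloaded checkpoint (placeholder path);
  # gpt-neox should resume from the most recent step it finds there.
  "load": "/path/to/checkpoints",

  # Keep this false so the optimizer state and iteration counter are
  # restored, rather than starting a fresh finetuning run at step 0.
  "finetune": false,

  # Total step budget of the original run (143,000 here), so the resumed
  # run stops at the same point.
  "train-iters": 143000
}
```

The part this config does not express is which of the 133 data splits the resumed step falls into, which is the mapping asked about above.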
I solved this by manually inspecting things. I will try to provide some reproducible instructions soon!