Allow specifying arbitrary step numbers for saving extra checkpoints #737
Conversation
Maybe it would make sense to remove the save-iterations argument entirely? Although arbitrary checkpointing is a good thing to support, in practice I anticipate log-space and linear-space checkpointing to be by far the most desired. If we can expose an easy-to-use interface for those two, with the ability to do arbitrary with some work, that might be ideal? |
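To make the two anticipated schedules concrete, here is a minimal sketch (illustrative only, not gpt-neox's actual implementation) of how log-space and linear-space checkpoint schedules could each be expressed as an explicit list of step numbers:

```python
# Sketch: generating checkpoint step lists for the two common schedules.
# Function names and signatures are hypothetical, for illustration.

def linear_space_steps(total_iters, interval):
    """Linear-space: checkpoint every `interval` steps."""
    return list(range(interval, total_iters + 1, interval))

def log_space_steps(total_iters, base=2):
    """Log-space: checkpoint at powers of `base`, dense early in training."""
    steps, step = [], 1
    while step <= total_iters:
        steps.append(step)
        step *= base
    return steps

print(linear_space_steps(100, 25))  # [25, 50, 75, 100]
print(log_space_steps(100))         # [1, 2, 4, 8, 16, 32, 64]
```

Either list could then be passed to an arbitrary-steps argument, so exposing convenient helpers for these two cases need not preclude fully arbitrary checkpointing.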
This looks good to me. @StellaAthena -- Please also review this since it's a big change. |
I have tested this code, discussed the PR at length with @haileyschoelkopf on Discord, and am happy with it being merged.
This PR adds a NeoX arg `extra_save_iters`, which takes a list of ints. When the training iteration equals an element of the list, a checkpoint is saved at that step. Tested and this works, including for iteration 0.
Note that it will interfere with `keep_n_last_checkpoints`: only the last N checkpoints will be kept, thus removing any "extra" saved checkpoints that may have been desired. Also, NeoX already saves a checkpoint at the last step of training even if that step isn't an even multiple of the save interval. This change is intended to supersede the "log-spaced checkpointing" we discussed a while back.
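The save decision described above can be sketched as follows (names are illustrative and not gpt-neox's actual code): a checkpoint is written when the iteration hits the regular save interval or appears in `extra_save_iters`.

```python
# Sketch of the described behavior: save at multiples of the regular
# interval OR at any step listed in extra_save_iters.
# should_save_checkpoint is a hypothetical helper, not a NeoX function.

def should_save_checkpoint(iteration, save_interval, extra_save_iters):
    at_interval = save_interval is not None and iteration % save_interval == 0
    return at_interval or iteration in set(extra_save_iters)

# Example: save every 1000 steps, plus extras at iterations 0, 10, and 100.
saved = [i for i in range(0, 2001)
         if should_save_checkpoint(i, 1000, [0, 10, 100])]
print(saved)  # [0, 10, 100, 1000, 2000]
```

Note this sketch does not model the `keep_n_last_checkpoints` cleanup pass, which is exactly where the interference arises: the extra checkpoints are saved but may later be deleted as old checkpoints are pruned.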
cc @Quentin-Anthony @StellaAthena