This repository has been archived by the owner on Mar 17, 2021. It is now read-only.
For many models trained with stochastic gradient descent, it is often beneficial to maintain a moving average of each model variable as the model evolves over training iterations. At inference time, the moving-average version of the model is used instead of the model exactly as it was at the last training iteration. This often gives better inference performance and should be implemented in the application driver (this will likely break the current variable-restoring functions).
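A minimal sketch of the technique described above: keep a shadow copy of each variable, updated as an exponential moving average (EMA) after every training step, and read the shadow values at inference time. The decay value, the `EMA` class, and the plain-dict representation of model variables are illustrative assumptions for this issue; in TensorFlow the same idea is provided by `tf.train.ExponentialMovingAverage`.

```python
class EMA:
    """Maintain shadow copies of model variables.

    After each update: shadow = decay * shadow + (1 - decay) * value.
    """

    def __init__(self, decay=0.999):
        self.decay = decay
        self.shadow = {}

    def update(self, variables):
        """Call once per training iteration with the current variable values."""
        for name, value in variables.items():
            if name not in self.shadow:
                # Initialise the shadow with the first observed value.
                self.shadow[name] = value
            else:
                self.shadow[name] = (self.decay * self.shadow[name]
                                     + (1.0 - self.decay) * value)

    def averaged(self):
        """Return the moving-average variables to use at inference time."""
        return dict(self.shadow)


# Hypothetical usage: update after each step, then restore the averaged
# values (rather than the final-iteration values) for inference.
ema = EMA(decay=0.5)
ema.update({"w": 1.0})   # shadow initialised to 1.0
ema.update({"w": 3.0})   # shadow becomes 0.5 * 1.0 + 0.5 * 3.0 = 2.0
```

Note that restoring these shadow values in place of the trained variables is exactly the step that would interact with (and likely break) the current variable-restoring functions mentioned above.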
--originally posted at https://cmiclab.cs.ucl.ac.uk/CMIC/NiftyNet/issues/141