Fix issue of zero gradients w.r.t. RNN bias when num_layers > 1 #17872
Description
Patch for issue #17818. The rnn operator produces zero gradients for the bias when num_layers > 1. The cause is a mistake in calculating the shift of the bias pointer: we advanced it by the size of the fused bias (i2h_bias + h2h_bias), but MXNet supplies the two bias vectors separately (i2h_bias, h2h_bias), so the actual per-layer stride is twice the fused size. As a result, the bias gradients for every layer after the first were written to the wrong offset and the expected slots stayed zero.
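A minimal sketch of the off-by-a-factor-of-two pointer arithmetic described above (not the actual MXNet source; names such as AccumulateBiasGrad, bias_grad, and single_bias_size are hypothetical stand-ins for the operator's real variables):

```cpp
#include <cstddef>

// Sketch: walk the bias-gradient buffer layer by layer. MXNet lays out two
// bias vectors per layer, (i2h_bias, h2h_bias), each of single_bias_size
// elements, so the correct per-layer stride is 2 * single_bias_size.
void AccumulateBiasGrad(float* bias_grad,
                        std::size_t num_layers,
                        std::size_t single_bias_size) {
  for (std::size_t layer = 0; layer < num_layers; ++layer) {
    // Buggy version advanced by the fused-bias size only:
    //   float* layer_grad = bias_grad + layer * single_bias_size;
    // which lands on the wrong offset for every layer > 0, leaving the
    // expected gradient slots filled with zeros.
    float* layer_grad = bias_grad + layer * 2 * single_bias_size;
    // ... write i2h_bias gradients at layer_grad
    // ... write h2h_bias gradients at layer_grad + single_bias_size
  }
}
```

With the corrected stride, layer 0 is unaffected (offset 0 either way), which is consistent with the bug only surfacing when num_layers > 1.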
Checklist
Changes
@ciyongch @pengzhao-intel @TaoLv