GPT-2 implementation problem #334
This is an archived repository. I don't think this is the best place to ask. Good luck!
"Hi, I am reading the GPT-2 paper and have a question about the following phrase related to implementation:
'A modified initialization method is used to account for the accumulation on the residual path with model depth. We scale the weights of residual layers at initialization by a factor of 1/√N, where N is the number of residual layers.'
My question is this: we normalize after accumulation (addition, then normalization), so why do we also need to scale the weights? Isn't the normalization already there to reduce the impact of accumulation?"
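A minimal numerical sketch (not the original GPT-2 code) of what the quoted passage is getting at, assuming each residual branch contributes roughly unit-variance output at initialization: without scaling, the variance of the residual stream grows linearly with the number of residual layers N, while scaling each branch by 1/√N keeps it roughly constant.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 48    # hypothetical number of residual layers (e.g. 2 per block x 24 blocks)
d = 1024  # hypothetical model width

def final_std(scale):
    # Model the residual stream as x := x + scale * f(x), where each
    # branch output is approximated by unit-variance Gaussian noise.
    x = rng.standard_normal(d)
    for _ in range(N):
        x = x + scale * rng.standard_normal(d)
    return x.std()

unscaled = final_std(1.0)            # variance ~ 1 + N, so std ~ sqrt(N + 1)
scaled = final_std(1.0 / np.sqrt(N)) # variance ~ 1 + N * (1/N) = 2
print(unscaled, scaled)
```

This illustrates why the scaling is applied even though layer norm follows: the norm rescales activations, but the relative contribution of each branch to the stream at initialization is what the 1/√N factor controls.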