
loss stuck in overflow for RPE position embedding together with sparse attention #292

Open · sweinbach opened this issue on May 4, 2021 · 1 comment
Labels: bug (Something isn't working)

Comments

@sweinbach (Contributor)

Describe the bug
The loss for the RPE position embedding does not go down. The fp16 dynamic loss scale is pinned at its minimum of 1, and every optimizer step is skipped due to gradient overflow:

[2021-05-04 15:45:14,710] [INFO] [unfused_optimizer.py:246:_update_scale] Grad overflow on iteration: 50
[2021-05-04 15:45:14,710] [INFO] [unfused_optimizer.py:247:_update_scale] Reducing dynamic loss scale from 1 to 1
[2021-05-04 15:45:14,710] [INFO] [unfused_optimizer.py:171:step] [deepspeed] fp16 dynamic loss scale overflow! Skipping step. Attempted loss scale: 1, reducing to 1
[2021-05-04 15:45:15,204] [INFO] [unfused_optimizer.py:246:_update_scale] Grad overflow on iteration: 51
[2021-05-04 15:45:15,204] [INFO] [unfused_optimizer.py:247:_update_scale] Reducing dynamic loss scale from 1 to 1
[2021-05-04 15:45:15,204] [INFO] [unfused_optimizer.py:171:step] [deepspeed] fp16 dynamic loss scale overflow! Skipping step. Attempted loss scale: 1, reducing to 1
[2021-05-04 15:45:15,698] [INFO] [unfused_optimizer.py:246:_update_scale] Grad overflow on iteration: 52
[2021-05-04 15:45:15,698] [INFO] [unfused_optimizer.py:247:_update_scale] Reducing dynamic loss scale from 1 to 1
[2021-05-04 15:45:15,699] [INFO] [unfused_optimizer.py:171:step] [deepspeed] fp16 dynamic loss scale overflow! Skipping step. Attempted loss scale: 1, reducing to 1
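These messages come from DeepSpeed's fp16 dynamic loss scaler. A minimal sketch of the scaling logic (a simplified illustration, not DeepSpeed's actual implementation) shows why the scale is "reduced" from 1 to 1: the scaler has already reached its floor, so gradients that overflow even at scale 1 cause every step to be skipped and the loss never moves.

# Minimal sketch of dynamic fp16 loss scaling (simplified illustration,
# not DeepSpeed's actual code).
class DynamicLossScaler:
    def __init__(self, init_scale=2**16, scale_factor=2.0, min_scale=1.0):
        self.cur_scale = init_scale
        self.scale_factor = scale_factor
        self.min_scale = min_scale

    def update_scale(self, overflow: bool) -> bool:
        """Update the scale after a step; return True if the step was skipped."""
        if overflow:
            # Halve the scale on overflow, but never drop below min_scale.
            # Once cur_scale == min_scale == 1, this "reduces" 1 to 1 on
            # every overflowing iteration, and the optimizer step is
            # skipped each time -- exactly the pattern in the log above.
            self.cur_scale = max(self.cur_scale / self.scale_factor, self.min_scale)
            return True
        return False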

To Reproduce
Steps to reproduce the behavior:

  1. Use small.yml and sparse.yml
  2. Change the position embedding to "rpe"
  3. Train using deepy.py (a sketch of the config change and launch command follows)
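For concreteness, here is a sketch of the change under stated assumptions: the config key name ("pos-emb") and the training entry point (pretrain_gpt2.py) are guesses based on the GPT-NeoX layout of that era and should be checked against your checkout.

# small.yml (excerpt) -- "pos-emb" is an assumed key name
"pos-emb": "rpe"

# Launch (exact script name and flags are assumptions; adjust to your checkout)
./deepy.py pretrain_gpt2.py small.yml sparse.yml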
sweinbach added the bug label on May 4, 2021
@StellaAthena (Member)

Note that this doesn't occur when running with dense attention, as shown here.
