Weights are not updating #39
Hi! My weights don't seem to be changing.

I printed out the grad for each of the layers and it comes back as None; the grad of the hidden layer is None as well.

In the code, the only thing I changed in the reinforcement file was rl_loss -= (log_probs[0, top_i].cpu().detach().numpy()*discounted_reward), and then I tried rl_loss -= (log_probs[0, top_i].item()*discounted_reward) due to an issue with cuda device = 0.

Would you know any reason why grad would come up as None for all of the layers? I believe this is why optimizer.step() is not working and the weights are not updating.
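A quick way to run this check (a sketch; `model` stands in for whatever network the optimizer is updating):

```python
# After loss.backward(), every trainable parameter should have a .grad tensor;
# a grad of None means backprop never reached that parameter.
for name, param in model.named_parameters():
    print(name, param.grad)
```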
Hi @vmarar, calling .cpu().detach().numpy() on log_probs detaches it from the computational graph, so no gradient can flow back through that term of the loss.
Would this be the case using rl_loss -= (log_probs[0, top_i].item()*discounted_reward) as well? Would there still be a detachment? I currently have CUDA_VISIBLE_DEVICES set to 0. Is there a way to bypass this error? Thank you!
Yes, this is exactly what happens with .item() as well: it returns a plain Python number, so the value is also detached from the computational graph.
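A minimal, self-contained illustration of the difference (a standalone sketch; the names are illustrative, not the repo's code):

```python
import torch

log_probs = torch.randn(1, 10, requires_grad=True).log_softmax(dim=1)
top_i = 3

kept = log_probs[0, top_i]       # still a tensor with gradient history
print(kept.grad_fn is not None)  # True: a loss built from this can backprop

as_float = log_probs[0, top_i].item()                  # plain Python float
as_numpy = log_probs[0, top_i].cpu().detach().numpy()  # NumPy value
# Neither value carries any autograd history, so a loss accumulated from
# them is invisible to backward().
```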
Which line triggers this error? Could you please provide a traceback?
I tried converting from NumPy back to a tensor and then moving it to cuda:0. There is no error, but the model weights are still not changing.
Yes, that wouldn't help, because you are first detaching the loss and then creating a new node that is not connected to the existing computational graph. In this scenario, the loss is not backpropagated, and that's why the weights are not changing. Let me investigate the issue. Could you please provide your environment details? The versions of PyTorch and the NVIDIA driver, and which GPU you are using.
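This failure mode can be reproduced in a few lines (toy layer in place of the actual model):

```python
import torch

layer = torch.nn.Linear(4, 1)        # stand-in for the policy network
out = layer(torch.randn(1, 4))

detached = out.item()                              # graph ends here
loss = torch.tensor(detached, requires_grad=True)  # new leaf, not connected

loss.backward()           # runs without error...
print(layer.weight.grad)  # ...but prints None: no gradient reached the layer
```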
Here are the env details:

Details:

Thanks for your help!
After trying some solutions, I found that converting the NumPy output to a tensor output within my scoring script, AND addressing the warning about using a source tensor to create a copy on the rl_loss = torch.tensor(rl_loss, requires_grad=True) line (I simply commented this line out), makes the weights backpropagate.

Is there any reason it could be wrong to comment out that line, i.e. not create a new tensor from the existing rl_loss tensor, and instead backpropagate the gradient from rl_loss = rl_loss/n_batch? It's working, but I want to make sure that I am not backpropagating useless values.

Is it possible that, by using a script in lieu of a predictive model, autograd only sees the output of the script, which is in the form of a tensor, and therefore there are no breaks in the computational graph because it's only dealing with the output?
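For reference, the working pattern described here looks roughly like the sketch below (a stand-in network, with a constant reward in place of the external scoring script):

```python
import torch
import torch.nn.functional as F

policy = torch.nn.Linear(8, 5)                 # stand-in policy network
optimizer = torch.optim.Adam(policy.parameters())

n_batch = 4
rl_loss = 0.0
for _ in range(n_batch):
    log_probs = F.log_softmax(policy(torch.randn(1, 8)), dim=1)
    top_i = torch.multinomial(log_probs.exp(), 1).item()
    discounted_reward = 1.0   # plain float, e.g. from a scoring script
    # Keep log_probs as a tensor so the graph stays connected:
    rl_loss = rl_loss - log_probs[0, top_i] * discounted_reward

rl_loss = rl_loss / n_batch   # no torch.tensor(...) re-wrap needed
optimizer.zero_grad()
rl_loss.backward()
optimizer.step()
print(policy.weight.grad is not None)  # True: weights received gradients
```

Since the reward enters as a plain Python float, it acts as a constant factor in the graph; only log_probs needs gradient history, so dividing the accumulated tensor by n_batch and calling backward() directly keeps the graph intact.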