
nan value #6

Open
jayandral opened this issue Aug 2, 2019 · 4 comments

Comments

@jayandral

While trying to replicate AdaCos, we find B_avg tending to inf. Can you help me with this?

m = 0.5
B_avg value before inf = 8.3499e+35

Thanks

@4uiiurz1
Owner

4uiiurz1 commented Aug 3, 2019

Are you training with your own dataset?
Can you give me more details?

@jayandral
Author

We are training with VGGFace2 dataset.

@ahmdtaha

ahmdtaha commented Sep 8, 2020

I experienced this issue. It seems related to this other issue.

My fix is to change the optimizer from

# original: only the backbone's parameters are passed to the optimizer
optimizer = optim.SGD(filter(lambda p: p.requires_grad, model.parameters()),
                      lr=args.lr, momentum=args.momentum, weight_decay=args.weight_decay)

to

from itertools import chain

# chain in metric_fc.parameters() so the AdaCos head's weight matrix W is updated too
optimizer = optim.SGD(filter(lambda p: p.requires_grad, chain(model.parameters(), metric_fc.parameters())),
                      lr=args.lr, momentum=args.momentum, weight_decay=args.weight_decay)
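
For reference, the same fix can be written with parameter groups. This is just a sketch assuming the model, metric_fc, and args names used in this repo's train.py; it behaves the same as the chain() version above:

import torch.optim as optim

# one parameter group per module; both groups share the same hyperparameters,
# so this is equivalent to chaining the two parameter iterators
optimizer = optim.SGD(
    [
        {'params': filter(lambda p: p.requires_grad, model.parameters())},
        {'params': filter(lambda p: p.requires_grad, metric_fc.parameters())},
    ],
    lr=args.lr, momentum=args.momentum, weight_decay=args.weight_decay)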

@ahmdtaha

I found another problem that causes nan values.

The scale variable s should be updated during training only, i.e., using the training split.
However, it is updated every time the forward method is called, so it is currently updated using both the training and testing splits. I found that this raises nan values frequently.
So I changed

with torch.no_grad():
    ........
    self.s = torch.log(B_avg) / torch.cos(torch.min(math.pi/4 * torch.ones_like(theta_med), theta_med))

to

if self.training:
    with torch.no_grad():
        ........
        self.s = torch.log(B_avg) / torch.cos(torch.min(math.pi/4 * torch.ones_like(theta_med), theta_med))

self.training is already defined inside AdaCos because it is an nn.Module, so there is no need to define this variable yourself.
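
To show where the guard sits, here is a minimal sketch of the AdaCos head with the fix in context. It follows the snippets above and the AdaCos paper's update rule; names match this repo's metrics.py, but treat it as an illustration rather than the exact implementation:

import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaCos(nn.Module):
    def __init__(self, num_features, num_classes):
        super().__init__()
        # fixed initial scale from the paper: sqrt(2) * log(C - 1)
        self.s = math.sqrt(2) * math.log(num_classes - 1)
        self.W = nn.Parameter(torch.FloatTensor(num_classes, num_features))
        nn.init.xavier_uniform_(self.W)

    def forward(self, input, label):
        # cosine similarity between normalized features and class weights
        logits = F.linear(F.normalize(input), F.normalize(self.W))
        theta = torch.acos(torch.clamp(logits, -1.0 + 1e-7, 1.0 - 1e-7))
        one_hot = torch.zeros_like(logits)
        one_hot.scatter_(1, label.view(-1, 1).long(), 1)
        if self.training:  # update s on the training split only
            with torch.no_grad():
                # B_avg: average of exp(s * cos(theta)) over non-target classes
                B_avg = torch.where(one_hot < 1,
                                    torch.exp(self.s * logits),
                                    torch.zeros_like(logits))
                B_avg = torch.sum(B_avg) / input.size(0)
                theta_med = torch.median(theta[one_hot == 1])
                self.s = torch.log(B_avg) / torch.cos(
                    torch.min(math.pi / 4 * torch.ones_like(theta_med), theta_med))
        return self.s * logits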
