Loss becomes NaN after some time training #35
Comments
Wow... I also ran into the same problem during optimization. Initially I thought it was an error on my training machine.
I also encountered this problem when training on my own scene; the loss can become NaN after several iterations in the fine stage. There are also cases where "RuntimeError: numel: integer multiplication overflow" occurs during fine-stage training. I am not sure whether it has a similar cause.
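A minimal guard one could add to the training loop to stop a NaN loss from corrupting the model (a sketch, not this repo's code; `loss`, `optimizer`, and `iteration` are placeholders for whatever the actual loop uses):

```python
import torch

def safe_step(loss: torch.Tensor, optimizer: torch.optim.Optimizer, iteration: int) -> bool:
    """Backpropagate and step, but skip the update when the loss is NaN/Inf.

    Hypothetical helper: call it where the training loop currently does
    loss.backward(); optimizer.step().
    """
    if not torch.isfinite(loss):
        # A NaN/Inf loss would poison every parameter it touches through
        # backprop; discard this iteration instead of stepping.
        print(f"[iter {iteration}] non-finite loss {loss.item()}, skipping step")
        optimizer.zero_grad(set_to_none=True)
        return False
    loss.backward()
    optimizer.step()
    optimizer.zero_grad(set_to_none=True)
    return True
```

To locate the exact operation that first produces the NaN, `torch.autograd.set_detect_anomaly(True)` can be enabled before training; it is slow but prints the offending op in the backward trace.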
My guess is that the scene's bounding box is very large, which causes the error during backpropagation through the Gaussian deformation field network.
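If an oversized bounding box is indeed the culprit, one common workaround (a sketch under that assumption, not this repo's code; `aabb_min`/`aabb_max` stand for however the pipeline computes the scene's bounding-box corners) is to normalize point coordinates into a fixed range before they enter the deformation network:

```python
import torch

def normalize_to_unit_box(xyz: torch.Tensor,
                          aabb_min: torch.Tensor,
                          aabb_max: torch.Tensor) -> torch.Tensor:
    """Map world coordinates into [-1, 1]^3 before querying the deformation field.

    Keeping network inputs in a small, fixed range avoids the huge
    activations (and overflowing grid-index arithmetic) that a very
    large scene bounding box can produce.
    """
    center = (aabb_min + aabb_max) / 2
    half_extent = (aabb_max - aabb_min) / 2
    return (xyz - center) / half_extent.clamp(min=1e-6)
```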
Is there any solution to this problem?
In my test, set
However, it seems that performance might be significantly affected by this approach. Are there any other solutions?
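Two generic stability mitigations that are often tried in this situation (sketches assuming a standard PyTorch training loop; `model` and `optimizer` are placeholders, and neither is a confirmed fix for this repo):

```python
import torch

def clip_gradients(model: torch.nn.Module, max_norm: float = 1.0) -> None:
    # Cap the total gradient norm so a single bad iteration cannot blow
    # up the deformation-field weights; call between loss.backward()
    # and optimizer.step().
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=max_norm)

def lower_lr(optimizer: torch.optim.Optimizer, factor: float = 0.1) -> None:
    # Reduce the learning rate (e.g. 10x) for the fine stage; training
    # is slower but less prone to diverging into NaN.
    for group in optimizer.param_groups:
        group["lr"] *= factor
```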
Why do I always have to rerun training because the loss becomes NaN? I can't even finish a single run.
And the rendered image is just a white background.