Question about the training settings on DTU dataset #67
Comments
Hi, did you produce this result using our full evaluation scripts?
Thanks for your quick reply! I used the same settings as in full_eval.py, which are "--quiet --test_iterations -1 --depth_ratio 1.0 --lambda_dist 1000". The render settings are also from full_eval.py: "--quiet --skip_train --depth_ratio 1.0 --num_cluster 1". I will go through the detailed parameters I defined. The article also mentions that the distortion loss is set to 100 for outdoor scenes, but you set it to the default of 0 because it might cause rendering blur on others' custom datasets. So, to replicate the results on the MipNeRF 360 dataset from the original paper, do I need to set the distortion loss back to 100 as well? Again, thanks for your fantastic work!
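For reference, the flags quoted above can be assembled into commands like the following. This is a sketch only: the `train.py` / `render.py` entry points, the `-s`/`-m` arguments, and the dataset paths are assumptions based on the repository's usual layout, not taken from this thread.

```shell
# Hypothetical paths; replace with your own dataset and output locations.
SCENE=data/dtu/scan24
OUT=output/dtu/scan24

# Training with the full_eval.py settings discussed above
# (strong distortion loss, as used for bounded/indoor scenes).
python train.py -s "$SCENE" -m "$OUT" \
    --quiet --test_iterations -1 \
    --depth_ratio 1.0 --lambda_dist 1000

# Rendering and mesh extraction with the matching depth ratio.
python render.py -m "$OUT" \
    --quiet --skip_train --depth_ratio 1.0 --num_cluster 1
```

Note that `--depth_ratio` should match between training and rendering, since it controls which depth (mean vs. median) the regularization and mesh extraction operate on.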
From my experiments, the distortion loss has little effect on the MipNeRF360 dataset because its scenes have fewer illumination changes. You can report the performance with the default parameters, with the distortion loss added, or even with all regularizations removed, depending on your experimental setting.
Thank you very much for your help and timely reply!
Awesome! |
Hi,
Following the settings in your excellent article and the GOF article, we set --lambda_dist to 1000 on the DTU dataset (since we consider it an indoor scene), but the mesh-extraction results differ from those in your article. Could you provide instructions for setting the correct training parameters?