Question about the training setting on DTU dataset #67

Open
ZhuoxiaoLi opened this issue Jun 7, 2024 · 7 comments

Comments

@ZhuoxiaoLi

Hi,

Following the settings in your excellent article and the GOF article, we set --lambda_dist to 1000 on the DTU dataset (since we consider it an indoor scene), but the mesh extraction results differ from those in your article. Could you provide instructions for setting the correct training parameters?

[screenshots: our extracted DTU meshes, which differ from the paper's results]

@hbb1
Owner

hbb1 commented Jun 7, 2024

Hi, did you produce this result using our full evaluation scripts?

@hbb1
Owner

hbb1 commented Jun 7, 2024

I checked this; it looks like the following:

[screenshot: hbb1's mesh extraction result]

@ZhuoxiaoLi
Author

Thanks for your quick reply!!!

I used the same settings as in full_eval.py, which are "--quiet --test_iterations -1 --depth_ratio 1.0 --lambda_dist 1000". The render settings are also from full_eval.py: "--quiet --skip_train --depth_ratio 1.0 --num_cluster 1". I will go through the detailed parameters I defined.
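
For reference, here is roughly how I invoke the two steps end to end, in the style of full_eval.py. The scan name, dataset paths, and the -s/-m arguments are placeholders I assume train.py and render.py take; only the remaining flags are the ones quoted above.

```python
# Sketch of the DTU train/render pipeline in the style of full_eval.py.
# Scan name, paths, and the -s/-m arguments are assumptions; the other
# flags are the ones quoted above.
import os

scene = "scan24"                      # hypothetical DTU scan
source = f"data/dtu/{scene}"          # hypothetical dataset location
model = f"output/dtu/{scene}"

# Training: bounded DTU scenes use depth_ratio 1.0 and a strong
# distortion weight of 1000.
os.system(f"python train.py -s {source} -m {model} "
          f"--quiet --test_iterations -1 --depth_ratio 1.0 --lambda_dist 1000")

# Rendering / mesh extraction with the matching bounded-scene settings.
os.system(f"python render.py -s {source} -m {model} "
          f"--quiet --skip_train --depth_ratio 1.0 --num_cluster 1")
```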

Also, as mentioned in the article, the distortion loss is set to 100 for outdoor scenes, but the repository default is 0 because it might cause rendering blur on users' custom datasets. So, to replicate the results on the MipNeRF 360 dataset from the original paper, do I need to set the distortion loss to 100 as well?

Again, thanks for your fantastic work!

@hbb1
Owner

hbb1 commented Jun 7, 2024

From my experiments, the distortion loss has little effect on the MipNeRF 360 dataset, because its scenes have fewer illumination changes. You can report the performance with the default parameters, with the distortion loss added, or even with all regularizations removed, depending on your experimental setting.
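
Concretely, the three options would look something like the sketch below, assuming the same flag conventions as the DTU commands above (the scene path is a placeholder, and --lambda_normal is only my shorthand for turning off the normal regularization):

```python
# Hypothetical sketch of the three reporting options described above.
# Flag names mirror the DTU commands; the exact values and the
# --lambda_normal flag are assumptions, not an official configuration.
base = "python train.py -s data/m360/garden -m output/m360/garden --quiet"

variants = {
    "default": base,                                  # repo defaults (lambda_dist 0)
    "with_distortion": base + " --lambda_dist 100",   # paper's outdoor setting
    "no_regularization": base + " --lambda_dist 0 --lambda_normal 0",
}
```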

@ZhuoxiaoLi
Author

Thank you very much for your help and timely reply!

@ZhuoxiaoLi
Author

Hi,
I recently deployed 2DGS for large-scale scene reconstruction with almost no modifications (only borrowing VastGaussian's partitioned training strategy). The extracted mesh is excellent!

[image: large-scale reconstruction mesh]
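
In case it helps others, the partitioning idea is roughly the following sketch (the Camera type and the cell/overlap parameters are illustrative placeholders, not my exact implementation):

```python
# Illustrative sketch of a VastGaussian-style spatial partition: split the
# cameras into overlapping ground-plane cells and train one 2DGS model per
# cell. The Camera type, cell_size, and overlap are placeholder assumptions.
from collections import defaultdict

def partition_cameras(cameras, cell_size=50.0, overlap=10.0):
    """Group cameras into overlapping (x, z) grid cells."""
    cells = defaultdict(list)
    for cam in cameras:                  # cam.position = (x, y, z) in world space
        x, _, z = cam.position
        # A camera belongs to every cell whose bounds, expanded by `overlap`,
        # contain it, so neighboring cells share their boundary views.
        lo_x, hi_x = int((x - overlap) // cell_size), int((x + overlap) // cell_size)
        lo_z, hi_z = int((z - overlap) // cell_size), int((z + overlap) // cell_size)
        for ix in range(lo_x, hi_x + 1):
            for iz in range(lo_z, hi_z + 1):
                cells[(ix, iz)].append(cam)
    return cells  # each cell's camera list trains an independent 2DGS model
```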

@hbb1
Owner

hbb1 commented Jun 29, 2024

Awesome!
