Hi, thanks for your great work. It really works well on my own dataset with your pretrained model.
Here I still have some questions.
The readme provides preprocess.py for the CO3Dv2 dataset, and I have some questions about the preprocessed data.
What are the units of the depth maps? I see that during training you divide the depth by 65535 in co3d.py. Does the result represent meters after that division?
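For reference, the decoding step I am asking about looks roughly like this (a sketch of my reading of co3d.py; the function name and the `scale` argument are my own placeholders, only the 65535 divisor is from the code):

```python
import numpy as np

def decode_depth(depth_u16, scale=1.0):
    """Convert a raw 16-bit depth image to float depth.

    depth_u16: uint16 array loaded from the preprocessed depth PNG.
    scale: hypothetical per-frame rescaling factor; my question is
    whether the value after the /65535 division (times any such
    scale) is in meters.
    """
    return depth_u16.astype(np.float32) / 65535.0 * scale

# Hypothetical example values spanning the uint16 range:
raw = np.array([[0, 32768, 65535]], dtype=np.uint16)
d = decode_depth(raw)
```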
And is the $T$ vector in your preprocessed output in meters?
Are the camera extrinsics $R$ camera-to-world (C2W)?
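To make the convention I mean explicit: if $R$, $T$ are C2W, then $X_{world} = R X_{cam} + T$, and the W2C form would be the inverse. A minimal sketch of that conversion (my own helper, not from your code):

```python
import numpy as np

def c2w_to_w2c(R, t):
    """Invert a camera-to-world pose (R, t) into world-to-camera.

    For a rotation matrix, the inverse is the transpose, so:
      X_cam = R.T @ (X_world - t) = R.T @ X_world + (-R.T @ t)
    """
    return R.T, -R.T @ t

# Example with a simple rotation about the z-axis:
theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([1.0, 2.0, 3.0])
R_inv, t_inv = c2w_to_w2c(R, t)
```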
Is 'selected_seqs_test.json' the list of RGB frame indices used to form the image pairs?
The provided pretrained model works well on my own dataset. However, the global alignment step is time-consuming.
I find that the training loss is essentially computed over image pairs, with no multi-view term, which means global alignment is a post-processing step and is not part of training. Is my understanding correct?
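To check my understanding, here is the pairwise loss structure I believe I see in the training code, sketched with hypothetical placeholders (`model`, `regression_loss`, and the pair format are mine, not your actual API):

```python
def train_step(model, pairs, regression_loss):
    """Accumulate a loss over independent image pairs.

    Each pair is (img1, img2, gt1, gt2); the model predicts per-pair
    outputs only, so no multi-view/global term appears here. This is
    only an illustration of the structure I am asking about.
    """
    total = 0.0
    for img1, img2, gt1, gt2 in pairs:
        pred1, pred2 = model(img1, img2)
        total += regression_loss(pred1, gt1) + regression_loss(pred2, gt2)
    return total / len(pairs)
```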
In my situation, the training-set input views have the same poses as the test set. Would it be feasible to fine-tune your provided model on my own training set so that global alignment can be skipped at inference time? Could you please give me some advice?
Best regards,
VillardX