Fidelity and MMD metrics #106
Hi, sorry for the late reply.
We made a mistake: we directly cited the metric from "ASHF-Net: Adaptive Sampling and Hierarchical Folding Network for Robust Point Cloud Completion" in our CVPR 2021 submission, and when we resubmitted the paper for ICCV 2022, this mistake was overlooked.
Yes, please refer to https://github.com/yuxumin/PoinTr/blob/master/KITTI_metric.py for the detailed calculation process for these two metrics.
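As I understand the thread, Fidelity is the one-directional (asymmetric) Chamfer distance from the partial input to the completed cloud, and MMD is the Chamfer distance between the prediction and the closest car model in the reference test set. A minimal NumPy sketch of that reading (illustrative only, not the actual PoinTr code; function names are my own):

```python
import numpy as np

def one_directional_chamfer(src, dst):
    """Mean squared distance from each point in src to its nearest neighbor in dst."""
    # Pairwise squared distances, shape (|src|, |dst|), via broadcasting.
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    return d2.min(axis=1).mean()

def fidelity(partial_input, prediction):
    """Asymmetric CD: how well each input point is preserved in the prediction.
    Exactly zero when the prediction contains the input verbatim, e.g. when
    the network concatenates the partial input into its output."""
    return one_directional_chamfer(partial_input, prediction)

def mmd(prediction, reference_clouds):
    """Minimal Matching Distance: symmetric CD to the closest reference model."""
    cds = [one_directional_chamfer(prediction, ref) + one_directional_chamfer(ref, prediction)
           for ref in reference_clouds]
    return min(cds)
```

Under this reading, a model that concatenates its input into the output would score Fidelity = 0, which is what the question below about PF-Net's nonzero value hinges on.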
Thank you very much for your reply.
Hi, I'm sorry to bother you again.
@yuxumin Hi, bro. I am still waiting for your reply.
I missed this issue; sorry for the late reply.
The only difference is the bug in the dataloader, as you mentioned.
But after comparing the code line by line, I found only a difference in the naming of the function that upsamples the input points. Could you point out specifically where the mentioned bug differs between these two versions? Sorry to trouble you.
See hzxie/GRNet#27.
The PCN.pth was trained on the GRNet codebase. The PoinTr codebase was created after the paper was accepted by ICCV, so you cannot find this bug or its modification history here :)
I understand what you mean about the bug now. So the current difference between them is the way the inputs are upsampled; and is the 7.26 based on PCNv2 or PCN?
That's right. PCNv2 is for SnowflakeNet ... I find SnowflakeNet does not perform well with the default PCN dataset.
Hi, I also trained PoinTr on the PCN dataset, but the L1 performance is still around 7.79 and cannot reach 7.26. Have you solved this issue?
Thanks for your good work! I look forward to your answers to some questions about the metrics.
Both PF-Net and PoinTr concatenate the input and the prediction as the output, so it looks like an asymmetric Chamfer distance is used when calculating Fidelity; but why is PF-Net's value on that metric greater than 0?
When calculating MMD, do you use the symmetric CD, and do you select the object from the PCN Cars test dataset with the minimal CD to the prediction?
Could you please describe the calculation of the Fidelity and MMD metrics in more detail?