fabricecarles changed the title from "normalization at inference time" to "normalization at inference time vs training time" on Sep 21, 2023
I have proposed a solution to the training process in issue #142; feel free to have a look and let me know if it works for you. My training accuracy has improved since applying this change, and I am getting better results than the paper in fewer than 250 epochs.
Totally agree @kaali-billi, the current implementation is not correct. I also made some changes to train with normalization based only on the input data rather than the ground truth. I found that results are far better in the real-world scenario (i.e., when you run inference without any ground truth).
I think the paper should mention this, and the authors need to fix the code and recompute the accuracy benchmarks. @yuxumin, could you share your point of view, please?
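To make the suggested fix concrete, here is a minimal sketch of a training-time normalization derived from the partial input only, with the same transform applied to the ground truth so the pair stays aligned. The function name `norm_from_partial` and the crop are hypothetical illustrations, not code from the PoinTr repository:

```python
import numpy as np

def norm_from_partial(partial, gt):
    """Hypothetical training-time normalization computed from the partial
    input only (the only thing available at real inference time); the same
    centroid/scale is then applied to the ground truth to keep them aligned."""
    centroid = partial.mean(axis=0)
    scale = np.max(np.linalg.norm(partial - centroid, axis=1))
    return (partial - centroid) / scale, (gt - centroid) / scale

rng = np.random.default_rng(0)
gt = rng.normal(size=(8192, 3))   # stand-in for a complete ground-truth cloud
partial = gt[gt[:, 2] > 0.0]      # crude crop simulating occlusion

p, g = norm_from_partial(partial, gt)
# The partial input now sits in a canonical frame (max radius ~ 1.0),
# and the ground truth is expressed in that same frame.
print(np.linalg.norm(p, axis=1).max())
```

Training this way means the network only ever sees inputs normalized exactly as they would be in deployment, at the cost of the ground truth no longer being in its own canonical frame.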
Thank you for this second excellent paper on PoinTr.
As I was examining your code, a question arose concerning the normalization process during training compared to inference time.
During training, you generate a partial point cloud online after applying normalization, as seen at https://github.com/yuxumin/PoinTr/blob/master/datasets/ShapeNet55Dataset.py#L45.
This implies that the centroid and min-max values are calculated prior to cropping and resampling, which seems reasonable.
However, during inference, the centroid and min-max values are calculated on the partially cropped point cloud. From my perspective, this suggests that the shape isn't placed at the origin and scaled according to the training-time procedure.
https://github.com/yuxumin/PoinTr/blob/master/tools/inference.py#L60
Have you considered training and performing inference with point cloud normalization consistently based on the partial point cloud? In other words, is it possible to improve results by directly applying the real-world inference procedure during training?
Thank you for your time and insights.
Best regards