More details on training dataset #9
Comments
Hi @haiphamcse, that's a good question. We use the 6FPS variant, which already leads to 286'228 images.
You are welcome. That's a good question. We will provide PCOD on 3DPW and AGORA in an updated version of our paper on arXiv. Thanks for pointing this out.
Hi there, thank you for the quick response. May I ask whether the code for reproducing Multi-HMR's quantitative results (especially human depth estimation) is coming out soon? Our team is currently developing methods based on Multi-HMR but is having a hard time reproducing the results.
Hi @haiphamcse , we do not plan to release the evaluation code for the moment. It is possible that you get bad performance because you do not provide the ground-truth camera parameters (focal length, principal point). In our demo code we set these parameters to default values; please update them here.
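For reference, the intrinsics override could look like the sketch below. The function name and the default-focal heuristic are assumptions for illustration, not the repo's exact code; for quantitative evaluation, both the focal length and the principal point should come from the dataset annotations.

```python
import numpy as np

def build_intrinsics(img_w, img_h, focal=None, princpt=None):
    """Build a 3x3 pinhole intrinsics matrix K.

    By default the principal point is the image center and the focal length
    falls back to a diagonal-based heuristic (an assumption for illustration);
    for evaluation, pass the dataset's ground-truth values instead.
    """
    if princpt is None:
        princpt = (img_w / 2.0, img_h / 2.0)
    if focal is None:
        focal = float(np.hypot(img_w, img_h))  # heuristic default, not GT
    K = np.eye(3)
    K[0, 0] = K[1, 1] = focal
    K[0, 2], K[1, 2] = princpt
    return K
```

Calling `build_intrinsics(w, h, focal=gt_focal, princpt=gt_princpt)` with the dataset's values is what makes the metric depths comparable to the ground truth.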
Hi there, sorry for the late reply. We are still (struggling and) adapting Multi-HMR to existing benchmarks (3DPW, AGORA, etc.). One of the problems we encountered was how to obtain the ground-truth head boxes used to build the human queries. Here we benchmarked Multi-HMR with the provided checkpoint on 3DPW using GT primary keypoints and obtained the following results (which are far from the reported ones). P.S.: we did not input the focal length into Multi-HMR.
Hi, since Multi-HMR takes the entire image as input, you do not need the GT primary keypoints, unlike single-person methods. Just feed the image to Multi-HMR and it will output a list of detected persons with their associated vertices.
Hi, how do you evaluate the SMPL pose on 3DPW? Since the Multi-HMR model predicts the SMPL-X pose, I guess that you convert the SMPL-X mesh to an SMPL mesh like this here, or do you use another method? Thank you
Good question @nguyenquivinhquang indeed we followed this strategy. |
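The conversion referenced above boils down to a linear vertex mapping. A minimal sketch, assuming a (6890, 10475) barycentric weight matrix such as the one distributed with the official SMPL-X model-transfer assets (its loading is not shown and the exact file is an assumption):

```python
import numpy as np

def smplx_to_smpl_vertices(smplx_verts, mapping):
    """Map SMPL-X vertices to SMPL topology with a barycentric weight matrix.

    smplx_verts: (10475, 3) SMPL-X vertices.
    mapping:     (6890, 10475) row-stochastic weight matrix (assumed to come
                 from the official SMPL-X model-transfer files).
    Returns (6890, 3) vertices in SMPL topology, from which SMPL joints can
    then be regressed with the usual joint regressor.
    """
    return mapping @ smplx_verts
```

Since each SMPL vertex is a convex combination of SMPL-X vertices, the mapping is a single matrix product per mesh.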
Hi @fabienbaradel, I noticed that you're using this code for matching the predicted and ground-truth poses. However, I observed that this function assigns the fixed value missing_punish as the error for any ground-truth pose that doesn't correspond to a predicted pose. Have you utilized missing_punish? If so, could you please explain why it's set to 150 and whether any related paper mentions this approach?
Hi @nguyenquivinhquang , yes we followed this evaluation protocol for CMU Panoptic, similar to what ROMP/BEV are doing. I think this way of taking missed detections into account comes from "Coherent Reconstruction of Multiple Humans from a Single Image", similar to what "Monocular 3D Pose and Shape Estimation of Multiple People in Natural Scenes - The Importance of Multiple Scene Constraints" did in the initial paper.
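In code, that protocol can be sketched roughly as follows. This is a hypothetical re-implementation (Hungarian matching on pairwise MPJPE, with a fixed 150 mm penalty per missed ground-truth person); the repo's actual matching criterion may differ in detail.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def mpjpe_with_missing(gt_joints, pred_joints, missing_punish=150.0):
    """Per-image MPJPE (mm) that penalizes missed detections.

    gt_joints:   (G, J, 3) ground-truth 3D joints, in mm.
    pred_joints: (P, J, 3) predicted 3D joints, in mm.
    Each GT person is matched to at most one prediction via Hungarian
    matching on pairwise MPJPE; every unmatched GT person contributes a
    fixed error of `missing_punish` mm (an assumption of this sketch).
    """
    G, P = len(gt_joints), len(pred_joints)
    if P == 0:
        return missing_punish
    cost = np.zeros((G, P))
    for g in range(G):
        for p in range(P):
            # MPJPE between GT person g and predicted person p
            cost[g, p] = np.linalg.norm(gt_joints[g] - pred_joints[p], axis=-1).mean()
    rows, cols = linear_sum_assignment(cost)
    errors = [cost[r, c] for r, c in zip(rows, cols)]
    errors += [missing_punish] * (G - len(rows))  # missed detections
    return float(np.mean(errors))
```

With this convention, a method that misses half the people in a scene is capped at an average penalty of 75 mm even if its matched poses are perfect, which is why the "w/ punish" and "wo. punish" numbers can differ so much.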
Hi @fabienbaradel, I tried implementing the evaluation code based on your suggestion. However, the result on the 3DPW test set I reproduced is as follows: I also observed that the person's height from the model is lower than the person's height from the ground truth. I hope that you can release the validation code for 3DPW or provide some suggestions for improving the accuracy. |
Hi @nguyenquivinhquang , thanks for sharing the table. Please take into account that the results in the paper are obtained after fine-tuning on the 3DPW training data. The first line that you share (Multi-HMR (pretrained) wo.punish) already gives decent results: the PA-MPJPE is good and corresponds to what we get w/o finetuning on 3DPW-train. However, the MPJPE and PVE are a bit higher than what we get.
Hi @nguyenquivinhquang @haiphamcse , I just want to let you know that today we are releasing the training and evaluation code of Multi-HMR 😄 |
Thank you for releasing the training and evaluation code. Also, congratulations on having your paper accepted to ECCV 😄 |
Thank you @nguyenquivinhquang 😃 |
Hi there, could you release the dataloaders for the MuPoTS and CMU datasets? Moreover, I tried to run the evaluation code with the 672L checkpoint, and the result was quite different from the number you reported in the paper, with PA-MPJPE being 61.9. Therefore, could you release the PA-MPJPE evaluation code as well? Thank you.
Hi @nguyenquivinhquang , we do not plan to release the dataloaders for MuPoTS and CMU for the moment, maybe in the near future; I will let you know. Which joints are you using to compute these metrics? You should not take the SMPL joints but rather the H36M joints; the error may come from there. I will try to add these metrics and the associated PA variant soon. We will soon update the paper on arXiv with updated numbers, especially for the universal model that we are releasing, stay tuned.
Hi @nguyenquivinhquang , I made a few edits to add MPJPE and PA-MPJPE computed on the H36M-14 joints.
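For anyone reproducing this: PA-MPJPE is simply MPJPE after a similarity (Procrustes) alignment of the predicted joints to the ground truth. A self-contained sketch is below; the H36M-14 joint regressor that produces the (14, 3) inputs is dataset-specific and not shown.

```python
import numpy as np

def pa_mpjpe(gt, pred):
    """MPJPE after similarity (Procrustes) alignment of pred onto gt.

    gt, pred: (J, 3) 3D joints, e.g. the 14 H36M joints regressed from
    the mesh. Solves for the scale, rotation, and translation that best
    align pred to gt, then returns the mean per-joint error.
    """
    mu_g, mu_p = gt.mean(0), pred.mean(0)
    Xg, Xp = gt - mu_g, pred - mu_p          # center both point sets
    U, S, Vt = np.linalg.svd(Xp.T @ Xg)      # SVD of the cross-covariance
    R = U @ Vt                               # optimal rotation (row vectors)
    if np.linalg.det(R) < 0:                 # avoid an improper rotation
        Vt[-1] *= -1
        S[-1] *= -1
        R = U @ Vt
    scale = S.sum() / (Xp ** 2).sum()        # optimal isotropic scale
    aligned = scale * (Xp @ R) + mu_g        # apply the similarity transform
    return float(np.linalg.norm(aligned - gt, axis=-1).mean())
```

Because the alignment removes any global scale, rotation, and translation, PA-MPJPE is blind to errors like the overall person height being off, which is consistent with PA-MPJPE matching the paper while MPJPE and PVE lag behind.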
Sorry for the late response; I have rerun and got the result as close as in the paper. Thank you very much 😄 |
great news! 😄 |
Hi there, I tried to run the evaluation code with your checkpoint multiHMR_896_L, but the result was different from what I expected. Compared to the universal model reported in the paper, the MPJPE metric seems to differ. Could you help me understand why this is happening?
Hi @DavidBlack-cmu , PA-MPJPE and PVE are in the same range, but indeed the MPJPE is different. Sorry about that, I will update the paper with new numbers.
Hi there, loved your work! I want to ask a bit more about your training dataset. From the paper it seems that Multi-HMR uses AGORA and BEDLAM. However, did you train Multi-HMR with the 30FPS BEDLAM variant or only the 6FPS one?