Predicted labels #46

Hello,
@aghand0ur and I used your code to train on a custom dataset (20 classes), and everything went fine. I modified the `evaluate` function to suit this specific task. When testing on my test dataset (converted to COCO format), the COCO results are really low, although visualizing samples showed impressive results. I printed out the labels being predicted during evaluation: it never returns the correct prediction, while the bounding boxes are quite good. I placed the label_list containing the categories in cfg_odvg.py.
Any idea/tips on where the source of the problem could be?

Comments
I'm not sure, but if everything is configured correctly, there may be something wrong with the code. If you have any new findings or logs, can you provide them so we can analyze the specific problem?
I encountered the same problem.
@Qia98, when evaluating, are the predicted labels correct compared to the GT, or at least plausible? Are you encountering the same issue too?
@longzw1997 Any suggestions? It looks like there may be a small problem, but I don't know where it is.
It looks like the code did not import the correct class names during evaluation. Have `label_list` and `use_coco_eval = False` in cfg_odvg.py been modified?
Yes.
So are the evaluation results normal now?
No, I modified them from the beginning, but it's still the same issue.
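For reference, a hedged sketch of what the two settings asked about above might look like in cfg_odvg.py; the field names come from this thread, while the class names below are placeholders, not the actual dataset's categories:

```python
# Hypothetical excerpt from cfg_odvg.py for a 20-class custom dataset.
# The class names here are placeholders -- the entries must match the
# category ids/order used in your annotations.
use_coco_eval = False  # evaluate against label_list instead of the COCO classes
label_list = [
    "person", "car", "truck",
    # ... one entry per category, 20 in total
]
```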
I find that the COCO evaluation result is the same whether I use groundingdino_swint_ogc.pth or groundingdino_swinb_cogcoor.pth.
@SamXiaosheng, I think it depends on the dataset you're using. If your dataset contains referring expressions, you will find that groundingdino_swinb performs better, because it was trained on RefCOCO while the swint variant wasn't. Check this table.
@Hasanmog @longzw1997 @BIGBALLON I suspected there was something wrong in the evaluation code used during training, so I rewrote an evaluation script in COCO format and ran it in the official code base (I trained with this repo to obtain the weights, then evaluated them with my own evaluate function based on the official code). The mAP was very high (about 0.90 mAP@0.5 and 0.70 mAP@0.5:0.95), but in the training log mAP@0.5 was no more than 0.1.
So I suspect that in this code base the output of the model is correct, but the _res_labels used to calculate the mAP are incorrect; the problem may arise in the process of converting the model output into _res_labels in the JSON results file.
Hi @Qia98, I agree with your viewpoint, and if you find the time, please feel free to create a pull request to address this issue. 😄
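As an aside, here is a minimal sketch of the kind of standalone COCO-format evaluation @Qia98 describes, using pycocotools; the file paths and the layout of results.json are assumptions for illustration, not the author's actual script:

```python
# Sketch: evaluate saved detections against COCO-format ground truth.
# Assumed files: annotations/instances_test.json (GT) and results.json
# with entries like {"image_id": int, "category_id": int,
#                    "bbox": [x, y, w, h], "score": float}.
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO("annotations/instances_test.json")  # ground truth
coco_dt = coco_gt.loadRes("results.json")          # model detections

evaluator = COCOeval(coco_gt, coco_dt, iouType="bbox")
evaluator.evaluate()
evaluator.accumulate()
evaluator.summarize()  # prints mAP@0.5:0.95, mAP@0.5, etc.
```

If the mAP from a script like this is high while the training log's mAP is near zero, that points to the in-repo conversion to _res_labels rather than the model itself.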
Hi, I also encountered the same problem. When I visualized the detection results, I found that the locations of the bounding boxes are correct, but the categories are usually incorrect. Is there something wrong with BERT, or is it because of other reasons?
I debugged the evaluation function, and I found the issue may be due to post-processing; see models/GroundingDINO/groundingdino.py (class PostProcess).
@junfengcao feel free to create a pull request 😄
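To illustrate why post-processing is a plausible culprit: GroundingDINO scores each query against text tokens, so a PostProcess-style step has to map the best-scoring token span back to a category id. The sketch below is a hedged illustration of that mapping, with all names and shapes made up for the example rather than taken from the repo's code; if the span-to-category map is misaligned, boxes stay correct while labels come out scrambled, which matches the symptom in this thread:

```python
import torch

num_queries, num_tokens, num_classes = 900, 10, 3
logits = torch.rand(num_queries, num_tokens)  # query-vs-token similarity scores

# pos_map[c, t] is True if token t belongs to the phrase naming class c.
# If these spans don't line up with the tokenized prompt, labels go wrong.
pos_map = torch.zeros(num_classes, num_tokens, dtype=torch.bool)
pos_map[0, 1:3] = True  # e.g. class 0's phrase spans tokens 1-2
pos_map[1, 4:6] = True
pos_map[2, 7:9] = True

# Per-class score for each query = max similarity over that class's token span.
class_scores = torch.stack(
    [logits[:, pos_map[c]].max(dim=1).values for c in range(num_classes)],
    dim=1,
)  # shape: (num_queries, num_classes)

scores, labels = class_scores.max(dim=1)  # predicted category id per query
```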
May I ask whether anyone has solved this problem? I've encountered the same issue and have ruled out a number of possible causes, but haven't solved it.