Code for user-initialized setting #23

Closed
Flowerfan opened this issue Oct 24, 2020 · 6 comments

@Flowerfan

Hi Achal,

The paper reports the results for user-initialized methods. Could you please release related code?

@achalddave
Collaborator

Yes, of course! I've been meaning to do this for a while and kept putting it off. I've quickly pushed my code for running single-object trackers based on the pysot repo, with documentation for the steps here: https://github.com/TAO-Dataset/tao/blob/master/docs/trackers.md#single-object-trackers

To use your own tracker, you can do the following:

  1. Implement your tracker by inheriting from the base Tracker class. You can look at how the PysotTracker is implemented here.
  2. Update init_tracker and add an option for your tracker to register it.
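The two steps above might look roughly like the following. This is a minimal sketch: the actual Tracker interface in the tao repo may differ, so the method names (init, track) and the (box, score) return shape here are illustrative assumptions, not the repo's real API.

```python
# Hypothetical sketch of plugging a custom single-object tracker into
# the tao harness. The base-class interface is an assumption.

class Tracker:
    """Stand-in for the repo's base Tracker class (assumed interface)."""

    def init(self, image, box):
        """Initialize the tracker with the first frame and a box."""
        raise NotImplementedError

    def track(self, image):
        """Return (box, score) for the current frame."""
        raise NotImplementedError


class MyTracker(Tracker):
    """Toy tracker: reports the init box with a fixed confidence."""

    def init(self, image, box):
        # box: (x, y, w, h) for the object in the first frame.
        self.box = box

    def track(self, image):
        # A real tracker would update self.box from the image content.
        return self.box, 0.9


# Minimal usage, mirroring how the harness would drive a tracker.
tracker = MyTracker()
tracker.init(None, (10, 20, 30, 40))
box, score = tracker.track(None)
```

Step 2 would then map a new option string to constructing `MyTracker`, alongside the existing PysotTracker case.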

I did this fairly hastily so please let me know if you run into any issues!

@Flowerfan
Author

Hi, @achalddave

Thank you for your great work!

I have tested the SiamRPN++ model, using the first frame as the init frame, following the instructions. However, I only get 25.34% mAP on the TAO validation set (988 videos), which is much lower than the 29.7% reported in Table 5. The model I downloaded is siamrpn_r50_l234_dwxcorr from the pysot model zoo.
Are there any hyperparameters I should tune to reproduce the result?

achalddave reopened this on Jan 28, 2021
@achalddave
Collaborator

That is strange. We did not tune hyperparameters, to my recollection. I'll look into this soon and get back to you. If possible, could you send me your results file, either here or over email?

@Flowerfan
Author

Flowerfan commented Jan 29, 2021

> That is strange. We did not tune hyperparameters, to my recollection. I'll look into this soon and get back to you. If possible, could you send me your results file, either here or over email?

I have attached both the training and validation results to Google Drive here. I found that the instruction script uses train.json for evaluation, so I tested the model on the training data; the mAP is 28.4%, which is much closer to the 29.7%. Please check them when you are available. Thank you.

@achalddave
Collaborator

achalddave commented Jan 31, 2021

Ah, my bad, there is one key hyperparameter that requires tuning: a confidence threshold. Single object trackers report boxes for all frames, even if they are underconfident in some frames. Thus, you need to specify a confidence threshold, and remove predictions for frames with low confidences.
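The post-processing amounts to something like the following sketch. The "score" field name is an assumption based on COCO-style result files; the actual filtering in the repo happens inside the evaluation script via the THRESHOLD config option.

```python
# Drop low-confidence tracker outputs before evaluation.
# Field name "score" is assumed (COCO-style detection results).

def filter_by_score(predictions, threshold=0.7):
    """Keep only predictions whose confidence meets the threshold."""
    return [p for p in predictions if p["score"] >= threshold]


preds = [
    {"image_id": 1, "score": 0.95},
    {"image_id": 2, "score": 0.40},  # underconfident frame, removed
    {"image_id": 3, "score": 0.72},
]
kept = filter_by_score(preds, threshold=0.7)
```

Without this step, every frame's box counts against precision, which explains the gap you observed.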

To do this, we tuned the score threshold on the training set, as described in Table 16 of the supplementary:
[Screenshot of Table 16 from the supplementary: AP as a function of the score threshold]

Your result of 28.4 on the training set is pretty close to my result with a score threshold of 0 (28.6 AP). Can you try evaluating on the validation set with a score threshold of 0.7, the threshold we found to be optimal on the train set (see the table above)? Once your results match, I will update the docs as well.
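Picking the threshold is a simple sweep on the training set, along these lines. Here `evaluate_map` is a hypothetical stand-in for running the repo's evaluation script on the filtered predictions; the real tuning used TAO's train.json.

```python
# Sketch of a score-threshold sweep on the training set.
# evaluate_map is a hypothetical callable standing in for the
# repo's evaluation; "score" field name is assumed.

def pick_threshold(predictions, evaluate_map,
                   thresholds=(0.0, 0.5, 0.6, 0.7, 0.8, 0.9)):
    """Return (threshold, mAP) maximizing mAP over the sweep."""
    best_t, best_map = None, -1.0
    for t in thresholds:
        kept = [p for p in predictions if p["score"] >= t]
        m = evaluate_map(kept)
        if m > best_map:
            best_t, best_map = t, m
    return best_t, best_map


# Toy demonstration with a fake evaluator that peaks when the two
# highest-confidence boxes remain.
preds = [{"score": s} for s in (0.2, 0.65, 0.75, 0.95)]

def fake_map(kept):
    return {4: 0.1, 3: 0.2, 2: 0.5, 1: 0.3}.get(len(kept), 0.0)

best_t, best_map = pick_threshold(preds, fake_map)
```

The selected threshold is then fixed and applied unchanged on the validation set, as in the command below.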

EDIT: I just evaluated your validation results using the following command with a score threshold of 0.7, and was able to replicate results close to those in the paper (29.3 vs. 29.7). I'll update the docs with this command!

    python scripts/evaluation/evaluate.py \
        /path/to/validation/annotations.json ~/FlowerFan_val_results_siamrpnpp.json \
        --config-updates THRESHOLD 0.7 SINGLE_OBJECT.ENABLED True
    [...]
    AP,AP-short,AP-med,AP-long,AR,AR-short,AR-med,AR-long,path
    29.26,19.14,14.80,32.49,30.13,19.35,15.36,33.41,

@Flowerfan
Author

I got the same performance with the threshold. It is very much appreciated.
