
Discussion: how does GFL compare to FL in terms of false positive rate? #20

Open
connorchen opened this issue Dec 27, 2020 · 3 comments

@connorchen

As the title says: GFL may bring a higher AP, but what about the false positive rate?
I trained NanoDet on my own dataset. The AP is indeed much higher than YOLOv3's, but the false positive rate is also high. My guess is that this is related to GFL's confidence representation. Could you shed some light on this? Thanks.

@implus
Owner

implus commented Dec 28, 2020

In principle, AP is computed by ranking detections by score: the more correct detections that end up near the top of the ranking, the higher the AP. So a higher AP shouldn't by itself cause a higher false positive rate, which is a bit odd... Do you perhaps need to adjust the score threshold you cut off at?
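
To make the ranking-versus-threshold point concrete, here is a minimal sketch (not code from this repository) of how AP over the full score ranking differs from precision/recall at a fixed cut-off; the detection scores and match flags below are made up purely for illustration:

```python
import numpy as np

def average_precision(scores, is_tp, num_gt):
    """COCO-style 101-point AP computed over the full score ranking."""
    order = np.argsort(-scores)
    tp_cum = np.cumsum(is_tp[order])
    fp_cum = np.cumsum(~is_tp[order])
    recall = tp_cum / max(num_gt, 1)
    precision = tp_cum / np.maximum(tp_cum + fp_cum, 1)
    ap = 0.0
    for r in np.linspace(0.0, 1.0, 101):
        mask = recall >= r
        ap += precision[mask].max() if mask.any() else 0.0
    return ap / 101

def pr_at_threshold(scores, is_tp, num_gt, thr):
    """Precision/recall when only detections with score >= thr are kept."""
    keep = scores >= thr
    tp = int(is_tp[keep].sum())
    fp = int((~is_tp[keep]).sum())
    return tp / max(tp + fp, 1), tp / max(num_gt, 1)

# Hypothetical detections: confidence scores plus whether each one matched a ground-truth box.
scores = np.array([0.95, 0.90, 0.60, 0.55, 0.40, 0.35])
is_tp  = np.array([True, True, False, True, False, False])

print(average_precision(scores, is_tp, num_gt=4))          # AP over the full ranking
print(pr_at_threshold(scores, is_tp, num_gt=4, thr=0.3))   # low cut-off: higher recall, more false positives
print(pr_at_threshold(scores, is_tp, num_gt=4, thr=0.7))   # high cut-off: fewer false positives, lower recall
```

The ranking that drives AP is unaffected by the cut-off, while the false positives actually seen at inference depend entirely on where that cut-off sits, which is why adjusting the threshold is the first thing to try.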

@connorchen
Author

Hmm, I took another look at the PR curve. The extra AP probably comes from recall bought at the cost of precision. I tried raising the threshold: the false positive rate went down, but so did the AP.

@connorchen
Author

connorchen commented Dec 29, 2020

One more thing about the joint classification-confidence representation: it is an elegant all-in-one formulation, but it depends too heavily on IoU. If the IoU estimate is biased, it affects both the class decision and the confidence estimate, so it is not very robust. This is just my personal understanding; please correct me if I am wrong anywhere.
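
For readers following along, here is a minimal sketch of the joint classification-quality target being discussed (the Quality Focal Loss form of GFL), assuming sigmoid-based per-class scores; the tensor shapes and the beta value are illustrative, not taken from this repository's code:

```python
import torch
import torch.nn.functional as F

def quality_focal_loss(pred_logits, iou_targets, beta=2.0):
    """Classification logits are supervised with the IoU between each
    positive's predicted box and its matched ground truth, so a single
    output jointly encodes class membership and localization quality.
    pred_logits: (N, num_classes) raw logits.
    iou_targets: (N, num_classes) soft targets -- zero everywhere except
                 the matched class of positives, where the value is the IoU.
    """
    sigma = pred_logits.sigmoid()
    # Binary cross-entropy against the soft IoU target ...
    bce = F.binary_cross_entropy_with_logits(pred_logits, iou_targets, reduction="none")
    # ... modulated by how far the prediction is from that target (the focal part).
    return ((iou_targets - sigma).abs().pow(beta) * bce).sum()
```

Because the training target itself is an IoU, any bias in that IoU ends up baked into both the class score and the confidence read off at inference, which is the robustness concern raised above.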
