
request for the data in Image Captioning experiment #22

Closed
ChrisRBXiong opened this issue Nov 21, 2019 · 9 comments

@ChrisRBXiong

Hi, would you mind releasing the data used in the Image Captioning experiment (human judgments of the twelve submission entries from the COCO 2015 Captioning Challenge) from your paper? Thanks a lot!

@felixgwu
Collaborator

Hi @ChrisRBXiong,

We follow the setup of Learning to Evaluate Image Captioning.
The human scores are from the COCO leaderboard. You'll see the M1 & M2 human scores if you click the "Challenge2015" button.
You may contact the authors if you have additional questions regarding their setting.

Best,
Felix

@GaryYufei

Hi, just following up here. Any idea how to get those system outputs?

@felixgwu
Collaborator

felixgwu commented Jun 5, 2020

Hi @GaryYufei,

Please contact the authors of Learning to Evaluate Image Captioning for the data set and experimental setup.

Best,
Felix

@GaryYufei

GaryYufei commented Jun 5, 2020 via email

@GaryYufei

I am sorry @felixgwu, but I got no reply from the authors of "Learning to Evaluate Image Captioning". I think I understand the full setup from that paper; the only missing data is the system outputs on the COCO 2014 val set (I think it should be here, but the link seems to be down). Any pointers?

@felixgwu
Collaborator

felixgwu commented Jun 8, 2020

Hi @GaryYufei, here is a link to the val set outputs: https://drive.google.com/file/d/1HUrcgLXTNUY9ZbJ6eegb2V92Hfh17j9e
I hope this helps.
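
For anyone who wants to fetch the shared file programmatically rather than through the browser, here is a minimal sketch. It assumes the `gdown` package (`pip install gdown`); the file ID is taken from the Drive link above, and the local filename is whatever Drive reports for the file.

```python
# Minimal sketch: download the shared val-set outputs from Google Drive.
# Assumes `pip install gdown`; file ID is taken from the link above.
import gdown

file_id = "1HUrcgLXTNUY9ZbJ6eegb2V92Hfh17j9e"
url = f"https://drive.google.com/uc?id={file_id}"

# With no explicit output path, gdown saves under the filename reported by Drive.
gdown.download(url, quiet=False)
```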

@GaryYufei

Cool. That's exactly what I wanted! Thank you very much.

@PierreColombo

@GaryYufei were you able to reproduce the results?
I got 40k captions, whereas the MoverScore paper mentions 5k examples.

@yavuzdrmzksr

Hi @felixgwu, I downloaded the val set outputs from the link you provided (https://drive.google.com/file/d/1HUrcgLXTNUY9ZbJ6eegb2V92Hfh17j9e). As far as I can see, the "junhua.mao" and "mRNN_share.JMao" submissions have the same outputs. Are those submissions correct? In Figure 7 of the "Learning to Evaluate Image Captioning" paper, they report different results for "m-RNN" and "m-RNN (Baidu/UCLA)".
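
As a quick way to check that overlap, here is a hedged sketch. It assumes each submission is a COCO-style results JSON (a list of `{"image_id": ..., "caption": ...}` entries); the file names below are placeholders for the two submissions in question.

```python
# Hedged sketch: check whether two submissions produce identical captions.
# Assumes COCO-style results JSON: a list of {"image_id": ..., "caption": ...}.
# File names are placeholders, not the actual names in the shared archive.
import json

def load_captions(path):
    with open(path) as f:
        return {entry["image_id"]: entry["caption"] for entry in json.load(f)}

a = load_captions("junhua.mao.json")       # placeholder name
b = load_captions("mRNN_share.JMao.json")  # placeholder name

shared = set(a) & set(b)
identical = sum(a[i] == b[i] for i in shared)
print(f"{identical}/{len(shared)} captions identical on shared images")
```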
