Implement the HeadQA evaluation #127

Closed
leogao2 opened this issue Feb 6, 2021 · 0 comments

leogao2 (Contributor) commented Feb 6, 2021

We present HEAD-QA, a multi-choice question answering testbed to encourage research on complex reasoning. The questions come from exams to access a specialized position in the Spanish healthcare system, and are challenging even for highly specialized humans. We then consider monolingual (Spanish) and cross-lingual (to English) experiments with information retrieval and neural techniques. We show that: (i) HEAD-QA challenges current methods, and (ii) the results lag well behind human performance, demonstrating its usefulness as a benchmark for future work.

https://arxiv.org/abs/1906.04701
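For context, a multiple-choice benchmark like this is typically scored by ranking each answer option under the model and counting how often the top-ranked option is the gold one. Below is a minimal sketch of that approach, not the harness's actual implementation: it assumes HEAD-QA can be loaded from the Hugging Face hub (e.g. `datasets.load_dataset("head_qa", "en")`) with a question text, a list of answer options, and the id of the correct answer, and that a `score_fn(context, continuation)` callable returning a log-likelihood is available. The field names `qtext`, `answers`, `aid`, `atext`, and `ra` are assumptions.

```python
# Sketch of a HEAD-QA-style multiple-choice evaluation loop.
# Assumptions (not taken from this issue): each example has a question
# ("qtext"), candidate answers ("answers": list of {"aid", "atext"}),
# and the id of the right answer ("ra"); score_fn(context, continuation)
# returns a log-likelihood for the continuation given the context,
# e.g. computed by a language model.

from typing import Callable, Iterable


def headqa_accuracy(examples: Iterable[dict],
                    score_fn: Callable[[str, str], float]) -> float:
    """Pick the answer option with the highest log-likelihood; return accuracy."""
    correct = 0
    total = 0
    for ex in examples:
        context = f"Question: {ex['qtext']}\nAnswer:"
        options = ex["answers"]  # assumed: list of {"aid": int, "atext": str}
        scores = [score_fn(context, " " + opt["atext"]) for opt in options]
        predicted_aid = options[scores.index(max(scores))]["aid"]
        correct += int(predicted_aid == ex["ra"])  # "ra" assumed to be the gold aid
        total += 1
    return correct / total


if __name__ == "__main__":
    # Toy run with a dummy scorer, just to show the expected shapes.
    dummy = [{"qtext": "Which organ pumps blood?",
              "answers": [{"aid": 1, "atext": "The heart"},
                          {"aid": 2, "atext": "The liver and kidneys"}],
              "ra": 1}]
    print(headqa_accuracy(dummy, lambda ctx, cont: -len(cont)))
```

The same loop would be run separately for the Spanish and English (cross-lingual) configurations described in the paper.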

leogao2 added this to "To do, Evaluations to Implement" in Implementing Evaluations via automation on Feb 6, 2021
leogao2 mentioned this issue on Feb 6, 2021
leogao2 moved this from "To do, Evaluations to Implement" to "In Progress" in Implementing Evaluations on Feb 8, 2021
leogao2 closed this as completed on Feb 13, 2021
Implementing Evaluations automation moved this from "In Progress" to "Done, evaluations" on Feb 13, 2021
leogao2 assigned jon-tow and unassigned jon-tow on Feb 13, 2021