Add switch to QA pred head for ranking by confidence scores #836

Merged 3 commits on Aug 18, 2021
Changes from 1 commit
Explain different kinds of scores in doc string of QAPredictionHead
julian-risch committed Aug 18, 2021
commit 83122bfd1958dd8bc1bd454676ab99e64596fc2c
12 changes: 11 additions & 1 deletion farm/modeling/prediction_head.py
@@ -931,6 +931,16 @@ def forward(self, X):
class QuestionAnsweringHead(PredictionHead):
"""
A question answering head predicts the start and end of the answer on the token level.

In addition, it gives a score for the prediction so that multiple answers can be ranked.
There are three different kinds of scores available:
1) (standard) score: the sum of the logits of the start and end indices. This score is unbounded because the logits are unbounded.
   It is the default for ranking answers.
2) confidence score: also based on the logits of the start and end indices, but scaled to the interval 0 to 1 and incorporating no_answer.
   It can be used for ranking by setting use_confidence_scores_for_ranking to True.
3) calibrated confidence score: same as 2), but the logits are divided by a learned temperature_for_confidence parameter
   so that the confidence scores are closer to the model's achieved accuracy. It can be used for ranking by setting
   use_confidence_scores_for_ranking to True and temperature_for_confidence != 1.0. See examples/question_answering_confidence.py for more details.
"""

def __init__(self, layer_dims=[768,2],
@@ -964,7 +974,7 @@ def __init__(self, layer_dims=[768,2],
:type duplicate_filtering: int
:param temperature_for_confidence: The divisor that is used to scale logits to calibrate confidence scores
:type temperature_for_confidence: float
:param use_confidence_scores_for_ranking: Whether to sort answers by confidence score (normalized between 0 and 1) or by standard score (unbounded)
:param use_confidence_scores_for_ranking: Whether to sort answers by confidence score (normalized between 0 and 1) or by standard score (unbounded) (default).
:type use_confidence_scores_for_ranking: bool
"""
super(QuestionAnsweringHead, self).__init__()
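
To make the three kinds of scores described in the new docstring more concrete, here is a minimal, illustrative sketch. The module path, class name, and the parameters use_confidence_scores_for_ranking and temperature_for_confidence are taken from the diff above; the concrete logit values, the temperature of 1.35, the no_answer handling, and the sigmoid-style squashing are assumptions for illustration only and do not reproduce FARM's exact scoring code (see examples/question_answering_confidence.py in the repository for the real workflow).

import math

from farm.modeling.prediction_head import QuestionAnsweringHead

# Head configured to rank answers by calibrated confidence scores rather than
# by the default unbounded standard score. The parameter names come from the
# docstring in the diff; the temperature value 1.35 is an arbitrary example.
qa_head = QuestionAnsweringHead(
    layer_dims=[768, 2],
    use_confidence_scores_for_ranking=True,  # switch added in this PR
    temperature_for_confidence=1.35,         # != 1.0 -> calibrated confidence
)

# Conceptual comparison of the three scores for one candidate answer span.
start_logit, end_logit = 4.2, 3.7   # unbounded logits for the start/end tokens
no_answer_logit = 0.5               # hypothetical no_answer logit
temperature = 1.35                  # learned temperature_for_confidence


def squash(x: float) -> float:
    # Map an unbounded value to (0, 1); stands in for FARM's normalization.
    return 1.0 / (1.0 + math.exp(-x))


# 1) standard score: sum of the start and end logits, unbounded.
standard_score = start_logit + end_logit

# 2) confidence score: same logits, scaled to (0, 1) and incorporating no_answer.
confidence_score = squash((start_logit + end_logit) - no_answer_logit)

# 3) calibrated confidence score: as 2), but the logits are divided by the
#    learned temperature before scaling, so confidences track accuracy better.
calibrated_confidence = squash(((start_logit + end_logit) - no_answer_logit) / temperature)

print(f"standard={standard_score:.2f}, "
      f"confidence={confidence_score:.3f}, "
      f"calibrated={calibrated_confidence:.3f}")

Per the docstring, with use_confidence_scores_for_ranking=True and temperature_for_confidence left at 1.0, ranking uses the uncalibrated confidence score; with a temperature different from 1.0 (e.g. learned during calibration), the same switch yields the calibrated variant.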