Support for top-k metrics with num_return_sequences>1 #1117
Hi! We don't currently support this. Could you explain exactly what "top-K evaluation" means in this context? If you'd be interested in opening a PR to help support this or streamline evaluation in the library, that would be welcome!
Thanks for the quick reply! I'm aiming to evaluate a top-k exact-match metric: check whether there's a ground-truth hit among the k lowest-perplexity candidates generated with beam search and num_return_sequences=k. In that case, I'll see if I can work on incorporating this feature and open a PR afterwards.
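For concreteness, the metric described above could be sketched roughly as follows. This is an illustrative standalone snippet, not lm-eval code: the `normalize` helper, the function name, and the example strings are all hypothetical, and the candidates are assumed to arrive already ranked best-first (e.g. by beam score or perplexity, as `num_return_sequences` with beam search returns them).

```python
def normalize(text: str) -> str:
    # Light normalization (illustrative): lowercase and collapse whitespace.
    return " ".join(text.strip().lower().split())

def top_k_exact_match(candidates: list[str], target: str, k: int) -> int:
    # Score 1 if any of the top-k candidates exactly matches the
    # ground truth after normalization, else 0. Candidates are assumed
    # to be sorted best-first.
    return int(any(normalize(c) == normalize(target) for c in candidates[:k]))

# Example: the hit sits at rank 2, so top-1 misses but top-3 hits.
cands = ["Paris, France", "Paris", "Lyon"]
print(top_k_exact_match(cands, "paris", k=1))  # 0
print(top_k_exact_match(cands, "paris", k=3))  # 1
```

With Hugging Face transformers, the k candidates themselves would typically come from `model.generate(..., num_beams=k, num_return_sequences=k)`, after which a metric like this would be applied per example.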
Hi, sorry to disturb you, just wondering: have you worked this out?
Hi, I wonder if lm-eval supports top-k evaluation when generating with beam search and num_return_sequences>1?