
Support writing out predictions #32

Closed
zphang opened this issue Sep 17, 2020 · 0 comments
Labels
feature request: A feature that isn't implemented yet.

Comments

@zphang (Contributor) commented Sep 17, 2020

We should refactor the prompt-creation code to write out predictions. The exact format matters less than actually having them.
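
A minimal sketch of what this could look like, assuming a list of per-document predictions is available after evaluation; the `write_out_predictions` helper and the `(doc_id, prompt, prediction)` structure below are hypothetical illustrations, not an existing harness API:

```python
# Hypothetical sketch: persist per-document predictions as JSON Lines.
# `predictions` is an assumed list of (doc_id, prompt, prediction)
# tuples, not an actual lm-evaluation-harness data structure.
import json

def write_out_predictions(predictions, path="predictions.jsonl"):
    """Write one JSON object per line for later inspection."""
    with open(path, "w", encoding="utf-8") as f:
        for doc_id, prompt, prediction in predictions:
            record = {
                "doc_id": doc_id,
                "prompt": prompt,
                "prediction": prediction,
            }
            f.write(json.dumps(record) + "\n")

# Example usage with dummy data:
# write_out_predictions([(0, "Q: 2+2?", "4")])
```

JSON Lines is a natural fit here: each document's prediction is appended independently and the file can be inspected with standard tools.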

@StellaAthena added the feature request label on Oct 23, 2020
@StellaAthena added this to To do in Implementing Evaluations via automation on Oct 23, 2020
@StellaAthena added this to To do in New Features on Oct 23, 2020
New Features automation moved this from To do to Done on Jan 4, 2021
StellaAthena pushed a commit that referenced this issue on Apr 29, 2022:
Remove stopping_criteria and set max_generation_length to 64

qmdnls pushed a commit to qmdnls/lm-evaluation-harness that referenced this issue on Aug 17, 2023:
…m_xsum
Remove stopping_criteria and set max_generation_length to 64

LZY-the-boys pushed a commit to LZY-the-boys/lm-evaluation-harness-fast that referenced this issue on Sep 12, 2023:
…m_xsum
Remove stopping_criteria and set max_generation_length to 64