This repository has been archived by the owner on Dec 16, 2022. It is now read-only.
Wouldn't it make sense for NoOpTrainer to return training and validation (and "best_validation") metrics (but not loss)?
For example, if I train several models and consume the TrainerBase API, I'd expect to get the training and validation metrics returned by the train method (e.g., accuracy, but not "loss", since loss may not be meaningful), seamlessly, regardless of whether it's a Trainer or a NoOpTrainer implementation.
An alternative would be to always call evaluate on every dataset I want metrics for. But I think it's more practical to get the metrics of a no-op trainer right away, without calling evaluate.
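To make the proposal concrete, here is a minimal sketch of what the behavior could look like. This is not the actual allennlp API: the `evaluate` method, the `compute_metrics` flag, and the class shape are all illustrative stand-ins for however the real NoOpTrainer would be wired up.

```python
class NoOpTrainer:
    """Trainer that skips optimization but can still report metrics."""

    def __init__(self, model, train_data, validation_data, compute_metrics=True):
        self.model = model
        self.train_data = train_data
        self.validation_data = validation_data
        # Flag (as requested) to skip metric computation when it isn't needed.
        self.compute_metrics = compute_metrics

    def train(self):
        metrics = {}
        if self.compute_metrics:
            # `self.model.evaluate` is a hypothetical stand-in for however the
            # model computes metrics (e.g., accuracy) on a dataset. "loss" is
            # deliberately dropped, since it may not be meaningful here.
            for prefix, data in (("training", self.train_data),
                                 ("validation", self.validation_data)):
                for name, value in self.model.evaluate(data).items():
                    if name != "loss":
                        metrics[f"{prefix}_{name}"] = value
            # Mirror validation metrics as "best_validation_*", matching the
            # keys a regular Trainer reports.
            for name in list(metrics):
                if name.startswith("validation_"):
                    metrics["best_" + name] = metrics[name]
        return metrics
```

With `compute_metrics=False`, `train()` would return an empty dict, preserving the current no-op behavior.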
Thoughts? I'm willing to send a PR if you folks think it's a good idea.
Thanks for the offer, @bryant1410! I've mostly been using the NoOpTrainer and then evaluating separately, but we think what you've outlined makes sense. We'd be happy to review your PR. One initial request: could you hide this behavior behind a flag, for cases where it isn't needed? Thanks!