This repository has been archived by the owner on Mar 21, 2024. It is now read-only.
Some documentation is lacking / inaccurate for evaluating against pre-trained models.
What documentation should be provided?
The evaluation code uses the config derived from the `--model` parameter, not the config provided by `model_id`. This should be clarified.
The dataset used for evaluation needs to contain at least 3 subjects, so that there is at least 1 each for training, validation, and testing. This is quite cumbersome, because our users probably want to evaluate on all their data. Documentation should be clearer on how to do this (inference service / other workarounds?).
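A minimal sketch of the constraint described above, assuming a subject-level split is always built even at evaluation time. The function name `split_subjects` and the split policy (first two unique subjects go to validation and test) are illustrative, not the project's actual API:

```python
# Illustrative sketch (not the project's actual splitting code): why an
# evaluation dataset needs at least 3 subjects when the loader always
# builds train/validation/test splits at the subject level.

def split_subjects(subject_ids):
    """Assign one subject each to validation and test, the rest to train.

    Raises ValueError when fewer than 3 unique subjects are available,
    mirroring the constraint described above.
    """
    unique = sorted(set(subject_ids))
    if len(unique) < 3:
        raise ValueError(
            f"Need at least 3 subjects for train/val/test, got {len(unique)}"
        )
    # Hypothetical policy: first two unique subjects become val and test.
    val, test = unique[0], unique[1]
    train = unique[2:]
    return {"train": train, "val": [val], "test": [test]}
```

With a policy like this, a user who wants to evaluate on *all* their data cannot: at least two subjects are always diverted to the other splits, which is why an inference service or another workaround is needed.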
Clarify that the evaluation dataset needs to have the same structures as the model, or the checks will fail.
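A hypothetical pre-flight check for the structure mismatch described above. The function `check_structures` and its error message are illustrative only, assuming the model's expected structure names and the dataset's labeled structures are both available as simple name lists:

```python
# Hypothetical pre-flight check (names are illustrative, not the project's
# API): verify that the evaluation dataset labels every structure the model
# was trained on, so a mismatch fails early with an explicit message.

def check_structures(model_structures, dataset_structures):
    """Raise ValueError listing any structures the dataset is missing."""
    missing = set(model_structures) - set(dataset_structures)
    if missing:
        raise ValueError(
            "Evaluation dataset is missing structures the model expects: "
            + ", ".join(sorted(missing))
        )
```

Surfacing the missing structure names up front would make the documented requirement self-explanatory instead of a generic check failure.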
AB#8800