This repository has been archived by the owner on Mar 17, 2021. It is now read-only.
Documentation lacking
While I was able to understand the training and inference actions from the configuration file documentation, the evaluation action was less clear. For starters, it is not mentioned in the overview.
Intuitively, I'd expect the evaluation to either
1. run inference as configured (maybe without creating inference output), or
2. read the output from a prior run of inference, where such output is found according to the inference config section,
before evaluating against the ground-truth data set of the custom application section.
Issue
When running the classification application (which, by the way, is not listed in the config doc), however, the evaluation reports perfect scores in save_csv_dir. I assume this SO question describes the same problem.
This is because the comparison against labels (at least in the classification application) defaults to labels itself when inferred is not found, as implemented in add_inferred_output_like.
If I simply define inferred to point to the inferred.csv written by the prior inference run, it works as expected (behaviour 2. above).
So I infer that inferred is not correctly inferred when running evaluation 😇, i.e. save_seg_dir is not respected.
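For concreteness, the workaround can be sketched as a config fragment. The section and key names below (the [inferred] input section with a csv_file key, and the paths) are assumptions based on my setup, not an authoritative reference:

```ini
; Hypothetical fragment: point the evaluation's "inferred" input
; source explicitly at the CSV written by the earlier inference run,
; instead of letting it silently fall back to the labels.
[inferred]
csv_file = ./output/inferred.csv   ; path written by the prior inference run (assumed)

[EVALUATION]
save_csv_dir = ./output/evaluation
```

With this explicit section in place, evaluation compares predictions against the ground truth rather than the labels against themselves.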
About comparing to label instead
I guess that is a good idea for testing / dry runs, but should it default to that silently? When inferred isn't found, I'd expect at least a log entry alerting me to the fallback. As it stands, the only visible sign is that an inferred.csv pointing to the label files is written again to the correct save_seg_dir, actually overwriting the correct one from the prior inference run.
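To illustrate the behaviour I'd expect, here is a minimal sketch of the fallback with an explicit warning. The function and parameter names are hypothetical and do not reflect the project's actual API; it only demonstrates the logging I'm asking for:

```python
import logging
import os

logger = logging.getLogger(__name__)

def resolve_inferred_source(inferred_csv, labels_csv):
    """Pick the CSV to evaluate against; warn loudly when falling
    back to the ground-truth labels (which yields perfect scores)."""
    if inferred_csv and os.path.exists(inferred_csv):
        return inferred_csv
    # The current behaviour is a silent default to the labels; at
    # minimum, a log entry like this would alert the user.
    logger.warning(
        "inferred output %s not found; defaulting to labels %s -- "
        "evaluation scores will be trivially perfect",
        inferred_csv, labels_csv)
    return labels_csv
```

A dry run would still work as before, but the warning would have made the perfect-scores symptom immediately explicable.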