This repository has been archived by the owner on Mar 17, 2021. It is now read-only.

Evaluation doesn't consider save_seg_dir of Inference and compares labels to labels #289

Open
C-nit opened this issue Nov 20, 2018 · 1 comment



C-nit commented Nov 20, 2018

Documentation lacking

While I was able to understand the training and inference actions from the configuration file documentation, the evaluation action was less clear. For starters, it isn't mentioned in the overview.

Intuitively, I'd expect the evaluation to either

  1. run inference as configured (maybe without creating inference output)
  2. or read the output from a prior run of inference, where such output is found according to the inference config section

before evaluating against the ground truth data set of the custom application section.
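For reference, the evaluation action is driven by an [EVALUATION] section along these lines (option values here are illustrative, not from my actual config):

```ini
[EVALUATION]
; directory where evaluation results are written
save_csv_dir = ./model_dir/eval
; metrics to compute
evaluations = accuracy
```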

Issue

When running the classification application (which, incidentally, is not listed in the config doc), however, the evaluation reports perfect scores in save_csv_dir. I assume this SO question describes the same problem.

This is because the comparison against labels (at least in the case of the classification application) defaults to labels when inferred is not found, as implemented in add_inferred_output_like.
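As I read it, the fallback behaves roughly like the sketch below (a simplified illustration of the pattern, not NiftyNet's actual code; the function signature and dict layout are my assumptions). The warning call shows what I'd expect the silent branch to log:

```python
import logging


def add_inferred_output_like(data_param, source_name):
    """Sketch: if no 'inferred' section is configured, reuse the
    ground-truth section so that evaluation has something to read."""
    if 'inferred' not in data_param:
        # This is the silent fallback: evaluation ends up comparing
        # labels to labels and reports perfect scores. A warning here
        # would make the behaviour visible to the user:
        logging.warning(
            "no 'inferred' section found; falling back to '%s' "
            "(evaluation will compare ground truth to itself)",
            source_name)
        data_param['inferred'] = dict(data_param[source_name])
    return data_param


# Example: a config with only a ground-truth section defined
config = {'label': {'csv_file': 'labels.csv'}}
config = add_inferred_output_like(config, 'label')
# 'inferred' now points at the label CSV, hence the perfect scores
```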

If I simply define inferred to point to the inferred.csv written by the prior inference run

[inferred]
csv_file = model_dir/save_seg_dir/inferred.csv

it works as expected (behaviour 2 above).
So I infer that inferred is not correctly inferred when running evaluation 😇, i.e. save_seg_dir is not respected.

About comparing to label instead

I guess that is a good idea for testing / dry runs, but should it default to that silently? When inferred isn't found, I'd expect at least a log entry alerting me to the fallback. As it stands, the only visible effect is that an inferred.csv pointing to the label files is written again to the correct save_seg_dir, actually overwriting the correct one from the prior inference run.

@lixin14222

Hello, when I configure the Inference section during the training process, there is no csv_file output. Why is this? Can you tell me how to solve the problem?
