This repository has been archived by the owner on Mar 21, 2024. It is now read-only.

Support for running inference independently from running training #452

Open
asantamariapang opened this issue May 4, 2021 · 0 comments


asantamariapang commented May 4, 2021

Currently, to run inference/evaluation on a test dataset:

  1. You need to provide a run recovery ID
  2. You need to set `--train=False`
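
For illustration, the current workflow might be launched roughly as in the sketch below; the runner path, model name, and exact flag spellings are assumptions based on the two points above, not taken verbatim from this issue:

```python
# Hypothetical sketch of the current inference workflow described above.
# The runner path, model name, and flag names are assumptions, not confirmed here.
import subprocess

run_recovery_id = "MyExperiment:MyRunId"  # placeholder for an existing training run
subprocess.run(
    [
        "python", "InnerEye/ML/runner.py",
        "--model=MyModel",                       # hypothetical model config name
        "--train=False",                         # point 2: disable training
        f"--run_recovery_id={run_recovery_id}",  # point 1: recover weights from a prior run
    ],
    check=True,
)
```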

The new support will run inference on a new dataset so that:

  1. The test dataset will not be split (train/val/test)
  2. The model runs on the whole test dataset (no split)
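
A minimal sketch of the proposed behaviour, assuming a generic pandas dataset; the function and argument names are illustrative only and do not correspond to existing InnerEye code:

```python
# Illustrative only: the proposed "no split" behaviour for inference-only runs.
from typing import Dict

import pandas as pd


def make_dataset_splits(dataset: pd.DataFrame,
                        inference_only: bool) -> Dict[str, pd.DataFrame]:
    """Return train/val/test splits, or the whole dataset as the test split
    when running inference independently from training."""
    if inference_only:
        # Points 1 and 2 above: no train/val/test split; the model is
        # evaluated on the entire dataset.
        empty = dataset.iloc[0:0]
        return {"train": empty, "val": empty, "test": dataset}
    # Existing behaviour (illustrative proportions): split into train/val/test.
    n = len(dataset)
    train_end, val_end = int(0.6 * n), int(0.8 * n)
    return {
        "train": dataset.iloc[:train_end],
        "val": dataset.iloc[train_end:val_end],
        "test": dataset.iloc[val_end:],
    }
```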

AB#3997

asantamariapang added the `good first issue` (Good for newcomers), `architecture` (Anything around possible extensions or re-structuring of code), and `workflow` (Improvements to workflows (recovery, speed)) labels on May 4, 2021
asantamariapang self-assigned this on May 4, 2021
asantamariapang added this to Planned in InnerEye via automation on May 4, 2021
asantamariapang added the `workflow` (Improvements to workflows (recovery, speed)) label and removed the `good first issue` (Good for newcomers) and `workflow` labels on May 4, 2021