This repository has been archived by the owner on Mar 21, 2024. It is now read-only.
Support for running inference independently from running training #452
Labels:
- architecture: anything around possible extensions or re-structuring of code
- workflow: improvements to workflows (recovery, speed)
Currently, to run inference/evaluation on a test dataset:
The new support will run inference on a new dataset so that:
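In rough terms, the decoupled workflow would let a completed training run write a checkpoint once, and let inference consume that checkpoint on any new dataset later, with no training loop involved. A minimal sketch of that shape (the `save_checkpoint`/`run_inference` names, the JSON checkpoint format, and the toy linear model are illustrative assumptions, not this repository's actual API):

```python
import json
import tempfile
from pathlib import Path


def save_checkpoint(path: Path, weights: dict) -> None:
    """Stand-in for the artifact a completed training run writes."""
    path.write_text(json.dumps(weights))


def run_inference(checkpoint_path: Path, dataset: list) -> list:
    """Load trained weights and score a new dataset, independently of training."""
    weights = json.loads(checkpoint_path.read_text())
    w, b = weights["w"], weights["b"]
    return [w * x + b for x in dataset]  # toy linear "model"


if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as tmp:
        ckpt = Path(tmp) / "model.json"
        save_checkpoint(ckpt, {"w": 2.0, "b": 1.0})   # done once, by training
        preds = run_inference(ckpt, [0.0, 1.0, 2.0])  # done any time later, on new data
        print(preds)  # -> [1.0, 3.0, 5.0]
```

The point of the split is that `run_inference` depends only on the checkpoint file and the input data, so it can be invoked on its own without re-entering the training pipeline.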
AB#3997