Tiatoolbox has several pre-trained models helpful for data processing. However, models differ in how they handle input and output, making them confusing to use (especially when customizing):
- Activation functions are applied at different steps in different models: sometimes in the `forward` method (e.g. `CNNModel`), while in other models `forward` returns a raw layer output and the transformation is applied in `infer_batch` (e.g. `UNetModel`).
- Moreover, activation functions are hardcoded. To customize one, you cannot simply change an attribute; you must override a whole method (a different one for each model).
- Data normalization is scattered across methods: `HoVerNet` applies it in `forward`, `UNetModel` in `_transform`, `MicroNet` in `preproc`, and vanilla models rely on the user to do it.
- Data preprocessing also lacks consistency. Even though it should happen in the `preproc_func`/`_preproc` functions, `UNetModel` uses its own `_transform`, unrelated to the standard methods, yet its behavior could be implemented in `_preproc`.
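To illustrate the customization pain point, here is a minimal sketch with toy classes (`CNNLikeModel` and `SigmoidVariant` are hypothetical stand-ins, not actual tiatoolbox code): when the activation is baked into a method rather than stored as an attribute, swapping it out forces the user to re-implement the whole method.

```python
import torch
import torch.nn as nn


class CNNLikeModel(nn.Module):
    """Toy model with a hardcoded activation, mirroring the current design."""

    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

    def forward(self, batch):
        # Softmax is baked into forward; there is no attribute to swap.
        return torch.softmax(self.fc(batch), dim=1)


class SigmoidVariant(CNNLikeModel):
    """Changing only the activation still forces re-implementing forward."""

    def forward(self, batch):
        return torch.sigmoid(self.fc(batch))
```

With an `activation` attribute instead, `SigmoidVariant` would be a one-line constructor change.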
## What to do
Refactoring the code will significantly improve readability:
- Decompose the pipeline into small, granular methods in `ModelABC`: one method for normalization, the activation function as an attribute, etc.
- Explain the `ModelABC` methods in their documentation: does `infer_batch` rely on `postproc_func`? Can `infer_batch` be used for training? How?
- Reorganize the custom models' methods to match the new `ModelABC` structure.
- Add a new page to the documentation explaining the Tiatoolbox models pipeline: how is it related to the PyTorch pipeline? How to evaluate a model? How to train a model?
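The proposed decomposition could look roughly like the following sketch. This is illustrative only: the base class here is a simplified stand-in for the real `ModelABC`, and the `normalize` method name and constructor signature are assumptions, not tiatoolbox API.

```python
import torch
import torch.nn as nn


class ModelABC(nn.Module):
    """Sketch of a base model with granular, overridable pipeline steps."""

    def __init__(self, activation=None):
        super().__init__()
        # Activation stored as an attribute: customizing it no longer
        # requires overriding a whole method.
        self.activation = activation if activation is not None else nn.Identity()

    def normalize(self, batch):
        # Single, well-defined place for input normalization.
        return batch.float() / 255.0

    def forward(self, batch):
        # Subclasses return raw layer output; activation is applied separately.
        raise NotImplementedError

    def infer_batch(self, batch):
        # Inference-only path: normalization -> forward -> activation.
        self.eval()
        with torch.no_grad():
            logits = self.forward(self.normalize(batch))
            return self.activation(logits)


class TinyModel(ModelABC):
    """Example subclass: only forward and the activation choice differ."""

    def __init__(self):
        super().__init__(activation=nn.Softmax(dim=1))
        self.fc = nn.Linear(4, 2)

    def forward(self, batch):
        return self.fc(batch)
```

With this structure, swapping the activation or the normalization is a local change, and `infer_batch` behaves the same way for every model.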
John-P changed the title *Models inconsistency* → *Models Inconsistency* on Mar 16, 2023.