This repository has been archived by the owner on Mar 17, 2021. It is now read-only.
The spatial_window_size should match the dimensionality of the data you're working with, so for 2D it would be (h, w). The unet_histology demo illustrates this using the 2D unet implementation.
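Concretely, the reply above means no manual slicing is needed: giving spatial_window_size two integers selects 2-D sampling, while a trailing 1 keeps it a 3-D tuple of single-slice windows. A minimal configuration sketch, assuming an image-source section named [ct] and standard parameter names from the config spec (the paths and filename filter are placeholders, not from a working demo):

```ini
# Hedged sketch of an input-source section -- only spatial_window_size
# is the parameter under discussion; the other values are illustrative.
[ct]
path_to_search = ./data/ct
filename_contains = scan
# two integers -> 2-D windows of shape (h, w);
# writing 64, 64, 1 instead would request single-slice 3-D windows
spatial_window_size = 64, 64
interp_order = 3
```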
Hi, Niftynet team,
I'm trying to train a 2D unet with every single slice of a 3D CT scan.
According to your instruction,
https://niftynet.readthedocs.io/en/dev/window_sizes.html
"setting spatial_window_size = (h, w, 1) will generate a 2.5D windows, "
but in the instruction of configuration file
https://niftynet.readthedocs.io/en/dev/config_spec.html
"Array of three integers specifies the input window size. Setting it to single slice, e.g., spatial_window_size=64, 64, 1, yields a 2-D slice window."
So which is correct? Or should I chop the 3D scan into 2D images myself?
Thank you!
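For reference, the manual route asked about here (chopping the 3-D scan into 2-D slices yourself) is a one-liner in NumPy; the array below is a random stand-in for a loaded CT volume, assuming the usual H x W x D slice layout:

```python
import numpy as np

# Stand-in for a 3-D CT volume (height, width, number of axial slices)
volume = np.random.rand(64, 64, 30)

# Split along the last axis into a list of 2-D slices
slices = [volume[:, :, k] for k in range(volume.shape[-1])]

assert len(slices) == 30
assert slices[0].shape == (64, 64)
```

As the reply notes, this step is unnecessary when the window size itself is 2-D.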