Cannot perform evaluation over PROMISE12 dataset #332
I am facing the same issue with a 3D U-Net for two-class segmentation. Could someone help? Thanks!
Here is my configuration file:

############################ input configuration sections
[seg]
[label]
############################## system configuration sections
[NETWORK]
# volume level preprocessing
volume_padding_size = 44
[TRAINING]
[INFERENCE]
[EVALUATION]
############################ custom configuration sections

Here is the error:

File "/anaconda3/envs/NiftyNet/lib/python3.6/site-packages/niftynet/evaluation/segmentation_evaluations.py", line 160, in metric_from_binarized
Hello, any update on this issue?
Hello, I have trained my segmentation network over the PROMISE12 dataset using dense_vnet, but when I try to evaluate my inferred volumes against the PROMISE12 ground truth, I get this error:
File "C:\Users\NiftyNet\niftynet\evaluation\segmentation_evaluations.py", line 160, in metric_from_binarized
conf_mat = np.array([[np.sum(land(lnot(seg), lnot(ref))),
ValueError: operands could not be broadcast together with shapes (320,320,20,1,1) (512,512,54,1,1)
I understand that this is because the PROMISE12 label files have (512,512,54,1,1) dimensions while my inferred volumes have (320,320,20,1,1). Shouldn't the evaluation resize them automatically? It is also strange to me that the inferred volumes come out as (320,320,20,1,1) when I specified spatial_window_size = (64,64,64) in my config.ini file. Thanks for the help!
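The failure can be reproduced directly in NumPy. This is just a sketch of the broadcast error using the same `land`/`lnot` aliases that appear in the traceback (in NiftyNet these are `np.logical_and` and `np.logical_not`); it is not NiftyNet's own code:

```python
import numpy as np

land, lnot = np.logical_and, np.logical_not

seg = np.zeros((320, 320, 20, 1, 1), dtype=bool)  # inferred volume
ref = np.zeros((512, 512, 54, 1, 1), dtype=bool)  # PROMISE12 label volume

try:
    # same expression shape-wise as the confusion-matrix term in the traceback
    land(lnot(seg), lnot(ref))
except ValueError as err:
    print(err)  # operands could not be broadcast together ...
```

NumPy only broadcasts axes of size 1 or equal size, so the evaluation cannot silently reconcile two volumes on different grids; they must match voxel-for-voxel before the confusion matrix is computed.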
PS: here is my config file:
[seg]
path_to_search = C:/Users/INFERENCE/5000
filename_contains = _niftynet_out
filename_not_contains =
spatial_window_size = (64,64,64)
axcodes=(S,L,P)
interp_order = 3
[label]
path_to_search = C:/Users/Promise12_LABELS
filename_contains = Case, _segmentation
filename_not_contains =
spatial_window_size = (64,64,64)
axcodes=(S,L,P)
interp_order = 0
[SYSTEM]
cuda_devices = ""
num_threads = 2
num_gpus = 1
model_dir = C:/Users/models
[NETWORK]
name = vnet
activation_function = prelu
batch_size = 1
volume_padding_size = 0
histogram_ref_file = ../standardisation_models.txt
norm_type = percentile
cutoff = (0.01, 0.99)
normalisation = True
whitening = True
normalise_foreground_only=True
foreground_type = otsu_plus
multimod_foreground_type = and
window_sampling = resize
queue_length = 8
[INFERENCE]
save_seg_dir = ./INFERENCE/5000
output_interp_order = 3
spatial_window_size = (64,64,64)
inference_iter = 5000
[EVALUATION]
save_csv_dir = C:/Users/models
evaluations = dice,jaccard
[SEGMENTATION]
inferred = seg
label = label
output_prob = False
num_classes = 2
label_normalisation = True
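One possible workaround until the grids match is to resample the inferred segmentation onto the reference grid before evaluation. Below is a minimal nearest-neighbour sketch in plain NumPy; `nn_resample` is a hypothetical helper, not part of the NiftyNet API, and a real pipeline should use a resampler that respects the image affines (e.g. via nibabel or SimpleITK):

```python
import numpy as np

def nn_resample(seg, ref_shape):
    """Nearest-neighbour resample of a label volume to ref_shape.

    Nearest-neighbour lookup (the analogue of interp_order = 0) is used
    so that discrete label values are preserved.  Hypothetical helper
    for illustration only; it ignores the NIfTI affine.
    """
    # for each axis, map every output voxel to its nearest source voxel
    idx = [np.round(np.linspace(0, s - 1, r)).astype(int)
           for s, r in zip(seg.shape, ref_shape)]
    return seg[np.ix_(*idx)]

seg = np.zeros((320, 320, 20), dtype=np.uint8)   # inferred volume
out = nn_resample(seg, (512, 512, 54))           # match PROMISE12 labels
print(out.shape)  # (512, 512, 54)
```

With both volumes on the (512,512,54) grid, the Dice and Jaccard evaluations no longer hit the broadcast error, although resampling the reference labels down to the inference grid (or re-running inference on the native grid) may be preferable to avoid interpolation artefacts in the metric.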