This repository has been archived by the owner on Mar 17, 2021. It is now read-only.

Vnet 3D segmentation: Train loss remains 0.99 all the time #275

Open
qianjiangcn opened this issue Nov 2, 2018 · 2 comments

Comments

@qianjiangcn

Hi,
I am trying to train a 3D segmentation network using the Vnet config file from the GitHub repo, but it seems the network is not training at all: the loss stays at ~0.99 the whole time. Why would this happen, and am I missing anything?
Here is the config.ini I use.
[T1]
path_to_search = ../data
filename_contains = T1
filename_not_contains =
spatial_window_size = (32, 32, 32)
pixdim = (1.0, 1.0, 1.0)
axcodes=(A, R, S)
interp_order = 3

[parcellation]
path_to_search = ../data
filename_contains = Mask
filename_not_contains =
spatial_window_size = (32, 32, 32)
pixdim = (1.0, 1.0, 1.0)
axcodes=(A, R, S)
interp_order = 0

############################## system configuration sections
[SYSTEM]
cuda_devices = ""
num_threads = 2
num_gpus = 1
model_dir = ./models/model_vnet

[NETWORK]
name = vnet
activation_function = prelu
batch_size = 1
decay = 0
reg_type = L2

# volume level preprocessing

volume_padding_size = 21

# histogram normalisation

histogram_ref_file = ./example_volumes/monomodal_parcellation/standardisation_models.txt
norm_type = percentile
cutoff = (0.01, 0.99)
normalisation = True
whitening = True
normalise_foreground_only=True
foreground_type = otsu_plus
multimod_foreground_type = and

queue_length = 1
window_sampling = uniform

[TRAINING]
sample_per_volume = 32
rotation_angle = (-10.0, 10.0)
scaling_percentage = (-10.0, 10.0)
lr = 0.0001
loss_type = Dice
starting_iter = 0
save_every_n = 5
max_iter = 10000
max_checkpoints = 20

[INFERENCE]
border = (5, 5, 5)
#inference_iter = 10
save_seg_dir = ./output/vnet
output_interp_order = 0
spatial_window_size = (0, 0, 3)

############################ custom configuration sections
[SEGMENTATION]
image = T1
label = parcellation
output_prob = False
num_classes = 160
label_normalisation = True

The training log looks like:

INFO:niftynet: training iter 1319, loss=1.0 (0.137058s)
INFO:niftynet: training iter 1320, loss=0.9937499761581421 (0.134881s)
INFO:niftynet: iter 1320 saved: /ifshome/qjiang/Downloads/vnet/models/model_vnet/models/model.ckpt
INFO:niftynet: training iter 1321, loss=0.9937499761581421 (0.146447s)
INFO:niftynet: training iter 1322, loss=0.9941431879997253 (0.156982s)
INFO:niftynet: training iter 1323, loss=0.9937499761581421 (0.149890s)
INFO:niftynet: training iter 1324, loss=0.9937499761581421 (0.155439s)
INFO:niftynet: training iter 1325, loss=0.9937499761581421 (0.148608s)
INFO:niftynet: iter 1325 saved: /ifshome/qjiang/Downloads/vnet/models/model_vnet/models/model.ckpt
INFO:niftynet: training iter 1326, loss=0.993877649307251 (0.139796s)
INFO:niftynet: training iter 1327, loss=0.9937499761581421 (0.133824s)
INFO:niftynet: training iter 1328, loss=0.9941365122795105 (0.139889s)
INFO:niftynet: training iter 1329, loss=0.9937499761581421 (0.142670s)
INFO:niftynet: training iter 1330, loss=0.9939338564872742 (0.141735s)
INFO:niftynet: iter 1330 saved: /ifshome/qjiang/Downloads/vnet/models/model_vnet/models/model.ckpt

Thank you very much!

@ericspod
Collaborator

This may be related to #208. If your segmentation structures are really tiny, then the background can dominate the Dice calculation. Read through the discussion there to see if there are any relevant solutions.
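A standalone NumPy sketch of that effect (not NiftyNet's actual Dice implementation, whose smoothing details may differ): with a per-class soft Dice averaged over `num_classes = 160` classes, a network that collapses to predicting pure background scores ~1 on background and ~0 on everything else, giving a loss of about 1 - 1/160 = 0.99375, which matches the logged value 0.9937499... almost exactly.

```python
import numpy as np

def soft_dice_loss(probs, one_hot, eps=1e-5):
    # Soft Dice per class (epsilon in the denominator only),
    # averaged over classes; loss = 1 - mean Dice.
    intersect = (probs * one_hot).sum(axis=0)
    denom = probs.sum(axis=0) + one_hot.sum(axis=0)
    dice_per_class = 2.0 * intersect / (denom + eps)
    return 1.0 - dice_per_class.mean()

n_voxels, n_classes = 32 * 32 * 32, 160
labels = np.zeros(n_voxels, dtype=int)  # almost everything is background
labels[:10] = 1                         # a tiny foreground structure
one_hot = np.eye(n_classes)[labels]

# A network that has collapsed to predicting pure background:
probs = np.zeros((n_voxels, n_classes))
probs[:, 0] = 1.0

print(soft_dice_loss(probs, one_hot))  # ~0.99375 == 1 - 1/160
```

If the logged loss sits at exactly 1 - 1/num_classes, that is a strong hint the network is outputting only the background class.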

@anmol4210

Try changing the loss function from Dice to GDSC.
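GDSC here refers to the generalised Dice loss (Sudre et al. 2017), which weights each class by the inverse square of its reference volume so tiny structures are not averaged away by the background; in the config that would mean setting `loss_type = GDSC` under `[TRAINING]`. A minimal NumPy sketch of the idea (not NiftyNet's implementation; the guard for empty classes is my own choice):

```python
import numpy as np

def generalised_dice_loss(probs, one_hot, eps=1e-5):
    # Generalised Dice (Sudre et al. 2017): weight each class by
    # 1 / (reference volume)^2 so small structures count as much as
    # large ones. Empty classes are guarded to avoid division by zero.
    ref_vol = one_hot.sum(axis=0)
    weights = 1.0 / np.maximum(ref_vol, 1.0) ** 2
    intersect = (probs * one_hot).sum(axis=0)
    denom = probs.sum(axis=0) + one_hot.sum(axis=0)
    return 1.0 - 2.0 * (weights * intersect).sum() / ((weights * denom).sum() + eps)

n_voxels, n_classes = 32 * 32 * 32, 160
labels = np.zeros(n_voxels, dtype=int)
labels[:10] = 1                         # tiny foreground, as in the issue
one_hot = np.eye(n_classes)[labels]

perfect = one_hot.copy()                # ideal prediction -> loss ~ 0
collapsed = np.zeros_like(one_hot)      # background-only prediction
collapsed[:, 0] = 1.0

print(generalised_dice_loss(perfect, one_hot))    # ~ 0
print(generalised_dice_loss(collapsed, one_hot))  # ~ 1
```

Because of the inverse-volume weighting, the gradient of this loss is driven by the tiny foreground class rather than being diluted across 160 mostly-background classes, which is why it can help when plain Dice plateaus near 1 - 1/num_classes.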
