Any training tutorials? #584

Open
Pascal-bucketbreaker opened this issue Jan 23, 2024 · 4 comments

Comments

@Pascal-bucketbreaker

Task (what are you trying to do/register?)

I am trying to train my own model on 1000*1000 images, but the model doesn't actually learn anything.
The loss went down to 0.02 and converged, but the 'moved' image is exactly the same as the 'moving' image.

What have you tried

I changed the size of the model,
from [(32, 32, 32, 32), (32, 32, 32, 32, 32, 16)]
to
[(256,)*4, (256,)*8].

It made very little difference, if any.
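
Roughly, the model is built like this (a sketch following the tutorial's VxmDense usage, not my exact script; the 1024 size is an assumption, since a 4-level U-Net needs spatial dims divisible by 16):

import voxelmorph as vxm

# encoder / decoder feature counts per U-Net level (the original small configuration)
enc_nf = [32, 32, 32, 32]
dec_nf = [32, 32, 32, 32, 32, 16]

# 2D dense registration network; 1000x1000 inputs are assumed to be padded/resized
# to 1024x1024 so each of the 4 pooling steps halves the size evenly
model = vxm.networks.VxmDense(
    inshape=(1024, 1024),
    nb_unet_features=[enc_nf, dec_nf],
    int_steps=0,
)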

Details of experiments

Please carefully specify details about your experiments. If you are training, what is the setup? What loss are you using? What does the convergence look like? If you are registering, please show example inputs and outputs, etc.

I just copied the code from https://colab.research.google.com/drive/1zaDnAJGUokS0knqWttuTgrRJMb6zxukI?usp=sharing
and changed only the input size and the model size, nothing else.

P.S. Another approach is to resize the warp field so it can be applied to the 1024*1024 image.
However, when I use the code

warp_model = vxm.networks.Transform(in_shape, interp_method='nearest')
registered[max_idx] = cv.resize(images[max_idx], (256, 256))
warped_seg = warp_model.predict([registered[i], warp])

it keeps raising the error
Data cardinality is ambiguous: x sizes: 256, 1 Make sure all arrays contain the same number of samples.

@adalca
Collaborator

adalca commented Jan 23, 2024

Well https://tutorial.voxelmorph.net shows training mechanisms.

I wouldn't use a model with [(256,)*4 (256,)*8] -- with that many features models tend to be much harder to train. I would start with a small model and see how far that gets you.
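
In case it helps, a minimal 2D training sketch in the spirit of the tutorial (the loss choice and weights here are just examples, not a recommendation for your data):

import tensorflow as tf
import voxelmorph as vxm

# small model: 256x256 inputs, modest encoder/decoder feature counts
model = vxm.networks.VxmDense((256, 256), nb_unet_features=[[32]*4, [32, 32, 32, 32, 32, 16]], int_steps=0)

# image similarity on the moved image + smoothness penalty on the predicted warp
losses = [vxm.losses.MSE().loss, vxm.losses.Grad('l2').loss]
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4), loss=losses, loss_weights=[1, 0.05])

# the generator should yield ([moving, fixed], [fixed, zero_warp]) batches,
# e.g. the vxm_data_generator from the tutorial notebook
# model.fit(train_generator, epochs=..., steps_per_epoch=...)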

P.S. Another approach is to resize the warp field so it can be applied to the 1024*1024 image.
Aside from resizing, you also need to multiply the warp by the resize factor, otherwise you'll be moving in the wrong units.
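
Something along these lines (a rough sketch with scipy, untested here; rescale_warp is just an illustrative helper name):

import numpy as np
from scipy.ndimage import zoom

def rescale_warp(warp, factor):
    # warp: (H, W, 2) displacement field, in pixels at the small resolution
    # interpolate the field onto the larger grid (leave the channel axis alone) ...
    warp_up = zoom(warp, (factor, factor, 1), order=1)
    # ... and convert the displacement values into the new, larger pixel units
    return warp_up * factor

# e.g. going from a 256x256 warp to 1024x1024 (strip any batch dimension first)
# big_warp = rescale_warp(small_warp, 1024 / 256)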

Data cardinality is ambiguous: x sizes: 256, 1 Make sure all arrays contain the same number of samples.
What is the shape of registered[i] and of warp?

Also, how are your images normalized?

@Pascal-bucketbreaker
Copy link
Author

The image registered[i] is an 8-bit grayscale image.
warp_model = vxm.networks.Transform((256,256), interp_method='nearest')
If I use warped_seg = warp_model.predict((registered[i], warp)), it raises this error:
TypeError: int() argument must be a string, a bytes-like object or a number, not 'NoneType'

In another case, if I use

tf_input = conform(registered[i] / 255)
warped_seg = warp_model.predict((tf_input, warp))

I get the same error:
TypeError: int() argument must be a string, a bytes-like object or a number, not 'NoneType'

And when I run

print(np.shape(registered[i]))
print(np.shape(tf_input))

I get

(256, 256)
(1, 256, 256, 1)
So I can't get the correct result under any circumstances.

@adalca
Collaborator

adalca commented Jan 24, 2024

I don't think you checked the shape of the warp in the code above?

The input image to the model should be (1, 256, 256, 1), due to how TensorFlow handles batch and channel dimensions;
the warp should be (1, 256, 256, 2).

If both of those have the right shape and are floats, I think it should work. Doing this in a fresh Colab works fine:

!pip install voxelmorph
import voxelmorph as vxm
import numpy as np

# dummy inputs with the expected shapes: (batch, H, W, channels) and (batch, H, W, ndims)
vol = np.random.random((1, 256, 256, 1))
warp = np.random.random((1, 256, 256, 2))

# spatial transformer network that applies the warp to the volume
warp_model = vxm.networks.Transform((256, 256))
vol_moved = warp_model((vol, warp))
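
For your 2D grayscale case, something like this should get the shapes right (a sketch; it assumes your warp array is (256, 256, 2) and that registered[i] is the (256, 256) uint8 image from above):

import numpy as np
import voxelmorph as vxm

warp_model = vxm.networks.Transform((256, 256), interp_method='nearest')

# add batch and channel axes, and cast to float in [0, 1]
img = (registered[i].astype('float32') / 255.0)[np.newaxis, ..., np.newaxis]   # -> (1, 256, 256, 1)

# assumed (256, 256, 2) displacement field; add the batch axis
warp_in = warp[np.newaxis, ...].astype('float32')                              # -> (1, 256, 256, 2)

warped_seg = warp_model.predict((img, warp_in))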

@Pascal-bucketbreaker
Author

Yes, at least the code above runs.
I'll try both methods later.
