
Add high-level brain MRI transforms for preprocessing #428

Closed
fepegar opened this issue Jan 26, 2021 · 21 comments
Labels
enhancement New feature or request

Comments

@fepegar
Owner

fepegar commented Jan 26, 2021

🚀 Feature

Add some high-level brain MRI transforms for preprocessing.

Motivation

Arguably, there are three preprocessing steps performed in most multimodal neuroimaging studies:

  1. Coregistration of the different modalities
  2. Brain segmentation/skull-stripping
  3. Registration to a reference space, typically MNI

I typically use NiftyReg for registration and ROBEX for brain extraction. As discussed with @ReubenDo, other modern, robust and more Python-friendly tools that could be used for this are SimpleElastix or ANTsPy for registration and HD-BET for brain extraction, similar to what's done in MRIPreprocessor.

A related transform that might be useful is a defacing transform. The simplest way to do this is using a mask defined in the MNI space, so it is closely related to the MNI registration.
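Since the defacing mask lives in MNI space, the transform itself reduces to voxelwise masking once the image has been registered. A minimal sketch, assuming the mask and image already share the same voxel grid (the `deface` name is hypothetical, not an existing TorchIO transform):

```python
import numpy as np

def deface(image, face_mask):
    # Zero out the voxels flagged by a binary face mask defined in
    # MNI space; assumes the image was already registered to MNI,
    # so mask and image share the same grid.
    return np.where(face_mask, 0, image)
```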

Pitch

Add new preprocessing transforms, not meant to be used during training. Some ideas: BrainExtraction, ToMNI, Coregister. The requirements would be optional, raising an error with an informative message asking the user to install them using pip.
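The optional-requirement idea could look something like the sketch below (the `require` helper is hypothetical, not an existing TorchIO API):

```python
import importlib

def require(package_name, pip_name=None):
    # Import an optional dependency, or fail with a message telling
    # the user exactly what to `pip install`.
    pip_name = pip_name or package_name
    try:
        return importlib.import_module(package_name)
    except ImportError as error:
        message = (
            f'{package_name} is required for this transform. '
            f'Install it with "pip install {pip_name}"'
        )
        raise ImportError(message) from error
```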

@romainVala, do you think this would be useful?

@fepegar fepegar added the enhancement New feature or request label Jan 26, 2021
@sarthakpati
Contributor

sarthakpati commented Mar 13, 2021

I would also recommend adding a ToSRI transform, as the SRI template is also heavily used, especially in BraTS (and related studies).

Additionally, perhaps another option for the brain extractor would be useful?

@romainVala
Contributor

Hi Fernando,

Sorry if I missed other questions; I hadn't checked my TorchIO email for a while (and since I receive all the emails, I did not notice the one addressed directly to me).

I agree those are very common steps in neuroimaging studies, but I am not sure they are used that much in deep learning strategies.
One often wishes to have a network that works whether or not the brain is extracted and that is robust to misalignment errors, so I would avoid using these during training (and there would be an execution-time issue if they were used in training).

But you point out that this is not meant for training... so what is the use case you have in mind?

@romainVala
Contributor

BUT there are cases where there may be no other choice.
For instance, with multimodal input, you may need to coregister the input volumes.
Recently I got some strange results with NiftyReg and finally went with FSL's flirt utility, but I have heard several times about SimpleElastix, which is widely used in communities other than MR.
It seems to be very fast, so I would be interested in testing it.

The main advantage of including it in TorchIO is that it would automatically handle writing to file when needed (for NiftyReg), but it would be nice to have a solution with memory mapping (maybe possible with SimpleElastix?).

I recently ran into an issue where I needed to do a coregistration: a motion-corrupted volume has a global displacement relative to the original volume, and I am still unable to predict it (despite the Shaw proposition).

@fepegar
Owner Author

fepegar commented Mar 15, 2021

These would be utilities for preprocessing before training, for convenience. Each transform could have different backends, to use the user's preferred method.

Here is some code I adapted from MRIPreprocessor:

https://github.com/fepegar/resseg/blob/06b28889d65780eadc07a405c695598dcb028bbb/resseg/cli/resseg_mni.py#L19-L39

@romainVala
Contributor

I have some use cases too where I need to register two images.
For now I just build the FSL flirt command line and then use Python's subprocess.run,
which does the job very quickly.
I could also have used fslpy, which wraps some FSL functions in Python (similar to ANTs).

But what is the goal next: to make a proper TorchIO transform (your example is just a main function...)?
Should we make one transform for each tool (ANTs, FSL, ...) or a generic one?
How do we deal with output directories? Such command-line tools often need the data on disk, which means we need to write the data (if it comes from a TorchIO transform too), read the result from disk, and get it back into the TorchIO subject.
How deep do we go with all the parameters: is there a way to keep it flexible?

It seems a good idea to me, but I am afraid it will require a lot of development...
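One way to handle the output-directory question is to round-trip through a temporary directory: write the images to disk, run the command-line tool, and read the results back before the directory is cleaned up. A minimal sketch around FSL's flirt (the helper names are hypothetical, and the wrapping into a TorchIO subject is elided):

```python
import subprocess
import tempfile
from pathlib import Path

def build_flirt_command(fixed, moving, out, matrix):
    # Assemble an FSL FLIRT call for a rigid (6 degrees of freedom)
    # registration of `moving` onto `fixed`.
    return [
        'flirt',
        '-ref', str(fixed),
        '-in', str(moving),
        '-out', str(out),
        '-omat', str(matrix),
        '-dof', '6',
    ]

def register_rigid(fixed_path, moving_path):
    # Run the registration inside a temporary directory so the
    # intermediate files never touch the working directory.
    with tempfile.TemporaryDirectory() as tmp:
        out = Path(tmp) / 'registered.nii.gz'
        matrix = Path(tmp) / 'transform.mat'
        command = build_flirt_command(fixed_path, moving_path, out, matrix)
        subprocess.run(command, check=True)
        # Read the results back before the directory is deleted
        return out.read_bytes(), matrix.read_text()
```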

@fepegar
Owner Author

fepegar commented May 20, 2021

I think a generic registration tool is too much. Why would you use TorchIO for that? I was just thinking of two very specific use cases: 1) registering a brain MRI to the MNI space and 2) segmenting the brain from a brain MRI.

@romainVala
Contributor

romainVala commented May 20, 2021

For the same reason as you...
Even if you stay with your specific cases, there is no perfect tool for the two tasks, so since they will fail sometimes, I would like to be able to test other tools, or have more control over the parameters...

And about registration: I need to register the motion-corrupted image with the original one, so I add it now within my TorchIO transform, but it looks like dirty code...

But even for just registration, it quickly gets difficult to apply the correct affine if you play with different tools (because of their different conventions), so we should start with one...

@fepegar
Owner Author

fepegar commented May 20, 2021

> For the same reason as you...

Well, I wouldn't use TorchIO as a generic registration tool.

> Even if you stay with your specific cases, there is no perfect tool for the two tasks, so since they will fail sometimes, I would like to be able to test other tools, or have more control over the parameters...

Sure, that could maybe be controlled with different backends contributed by users and custom kwargs for each one.

> And about registration: I need to register the motion-corrupted image with the original one, so I add it now within my TorchIO transform, but it looks like dirty code...

Aren't you adding the artifacts yourself? Why do you need registration?

@romainVala
Contributor

I agree TorchIO is not meant to be a generic registration tool, but since you have a use case, you will need some genericity. I think we agree on that (one just needs to define how generic it will be).
Once you have a coregistration to MNI, why not allow a coregistration to a given image? It is just an additional input parameter to change (a path to a template in your case, an 'image_key' of the TorchIO subject in mine).

When working with multichannel images (T1, T2, etc.), one may wish for all channels to be realigned before going into a deep network. It makes sense that the voxel values need to be realigned in order to gain information from the multicontrast inputs.
(I just reviewed Billot's synthetic and super-resolution work in NeuroImage, and this is indeed the only preprocessing needed.)

Motion corruption is another, maybe more specific, case: I cannot find a theoretical solution to perfectly demean the motion time course, and I do observe coregistration misalignment between the original and the artifacted volume. I already discussed this in #85. I tested the solution published by Shaw, but it is an approximation...
Since I want to learn the motion artifact (at the voxel level, or at the volume level with a global difference metric (L1, L2, ...)), I do not want to have a global displacement. (The regression task of my network is then to learn this metric difference, i.e. motion severity.)

@romainVala
Contributor

I was also used to NiftyReg and FSL, but it is annoying to have to write the files to disk...

SimpleElastix

It was worth the try: very good, fast, no file on disk, and SimpleITK-based!
It should be much easier to write a transform with it.

@romainVala
Contributor

Maybe I should open a new issue, but I'll do a PR if I can go further.

So I started with a basic rigid coregistration:

import SimpleITK as sitk

elastixImageFilter = sitk.ElastixImageFilter()
elastixImageFilter.SetFixedImage(sitk.ReadImage("fixedImage.nii"))
elastixImageFilter.SetMovingImage(sitk.ReadImage("movingImage.nii"))
elastixImageFilter.SetParameterMap(sitk.GetDefaultParameterMap("rigid"))
elastixImageFilter.Execute()

It works fine, but I cannot figure out how to derive the affine from it.
I am not used to the ITK formalism, but it may be simple... I just ask here in case you have a quick idea.

sitk.PrintParameterMap(elastixImageFilter.GetTransformParameterMap() ) 
ParameterMap 0: 
  (CenterOfRotationPoint 0.650002 17.500000 18.650000)
  (CompressResultImage "false")
  (ComputeZYX "false")
  (DefaultPixelValue 0.000000)
  (Direction 1.000000 0.000000 0.000000 0.000000 -1.000000 0.000000 0.000000 0.000000 1.000000)
  (FinalBSplineInterpolationOrder 3.000000)
  (FixedImageDimension 3.000000)
  (FixedInternalImagePixelType "float")
  (HowToCombineTransforms "Compose")
  (Index 0.000000 0.000000 0.000000)
  (InitialTransformParametersFileName "NoInitialTransform")
  (MovingImageDimension 3.000000)
  (MovingInternalImagePixelType "float")
  (NumberOfParameters 6.000000)
  (Origin -89.849998 126.000000 -71.849998)
  (ResampleInterpolator "FinalBSplineInterpolator")
  (Resampler "DefaultResampler")
  (ResultImageFormat "nii")
  (ResultImagePixelType "float")
  (Size 182.000000 218.000000 182.000000)
  (Spacing 1.000000 1.000000 1.000000)
  (Transform "EulerTransform")
  (TransformParameters 0.000018 -0.000087 0.000036 9.998980 10.012400 -10.002100)
  (UseDirectionCosines "true")

Any idea how to derive the affine matrix from this (with the correct convention)?
The six Euler parameters (angles and translations) are there:
TransformParameters 0.000018 -0.000087 0.000036 9.998980 10.012400 -10.002100

@fepegar
Owner Author

fepegar commented Oct 15, 2021

I've never used SimpleElastix, so I'm not sure. I shared an example with ANTsPy here, and it works fine: #428 (comment)

@romainVala
Contributor

Thanks for the ANTs example; it is exactly what I need, and it does the job.
From a small test (applying a random affine to the image and using ANTs or Elastix to coregister them),
Elastix can do it in 1.9 s and ANTs in 6.8 s (FSL flirt in 27 s, and with a non-perfect result!).

Since I will need it at training time, it is worth trying a little more.
OK, I will have to dig into SimpleITK to get it.

@fepegar
Owner Author

fepegar commented Oct 15, 2021

1.9 s? Wow!

You're going to perform registrations during training? Very brave :)

@romainVala
Contributor

Well, I still have the same problem of global displacement after motion simulation (I should submit an article soon on just that; the result is that the solution proposed by Shaw does not work, so I have to go with registration).

So I want to test whether adding a registration helps.
An extra 2 s should be OK.

@fepegar
Owner Author

fepegar commented Oct 15, 2021

Looking forward to those papers! TorchIO has received a couple of citations from ARAMIS, but not from the CENIR :)

@romainVala
Contributor

And @GReguig or I will also do a PR for a new motion transform (with coregistration, among other additions),
but there are still a few things to finish first...

@romainVala
Contributor

But to come back to SimpleElastix, I think the only thing I need is a Euler-to-affine function for the affine case (for the non-linear case, B-spline to affine, later...).

That is indeed what you do in Resample, and it is in the ITK world (could you make it a more explicit function?).

I have one here from SPM, so the NIfTI world...

I am trying to make both match (grrr).

@romainVala
Contributor

I think it is something like this:

import numpy as np
import SimpleITK as sitk

def euler_to_affine(euler_angles, translation, center):
    # Euler angles in radians
    rigid_euler = sitk.Euler3DTransform(
        center, euler_angles[0], euler_angles[1], euler_angles[2], translation
    )
    A1 = np.asarray(rigid_euler.GetMatrix()).reshape(3, 3)
    c1 = np.asarray(rigid_euler.GetCenter())
    t1 = np.asarray(rigid_euler.GetTranslation())
    affine = np.eye(4)
    affine[:3, :3] = A1
    # Translation of the full affine: t + c - R @ c
    affine[:3, 3] = t1 + c1 - A1 @ c1
    # aff = tio.io._from_itk_convention(affine)
    return affine

Note that I do not do the last conversion:

tio.io._from_itk_convention(affine)

This is because tio.Affine applies the affine in the SimpleITK convention (LPS), but I am not sure about it, so I opened an explicit question there: #693 (comment)
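For reference, the usual way to move a 4x4 affine between the ITK (LPS) and NIfTI (RAS) conventions is to flip the first two axes on both sides of the matrix. Whether this matches `tio.io._from_itk_convention` exactly is an assumption here, not verified against TorchIO's source:

```python
import numpy as np

# Negates the L/P axes (x and y); z and the homogeneous row are kept.
FLIP_XY = np.diag([-1.0, -1.0, 1.0, 1.0])

def lps_to_ras(affine_lps):
    # Conjugating by the flip negates the x/y rows and columns of the
    # rotation block and the x/y components of the translation.
    return FLIP_XY @ affine_lps @ FLIP_XY
```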

@ryancinsight

ryancinsight commented Mar 11, 2022

I personally highly recommend SimpleElastix:

Reading the affine from an image:

import numpy as np

def make_affine(image):
    # Build the 4x4 affine (in LPS convention) from a SimpleITK image:
    # scale each direction-cosine column by the voxel spacing
    affine = np.eye(4, dtype=np.float64)
    direction = np.array(image.GetDirection()).reshape(3, 3)
    affine[:3, :3] = direction * np.array(image.GetSpacing())
    affine[:3, 3] = image.GetOrigin()
    return affine

Reading the affine from an Elastix transform parameter file:

import numpy as np
import SimpleITK as sitk

def elastix_affine(transform_path):
    # Parse the Euler parameters and rotation center from the file
    with open(transform_path) as f:
        for line in f:
            if "(TransformParameters" in line:
                params = np.array(
                    line.split("ers ")[1].split(")")[0].split(" "), np.float64
                )
            if "(CenterOfRotationPoint" in line:
                center = np.array(
                    line.split("int ")[1].split(")")[0].split(" "), np.float64
                )
    transform = sitk.Euler3DTransform()
    transform.SetParameters((params[0], params[1], params[2], 0, 0, 0))
    rotation = np.array(transform.GetMatrix(), np.float64).reshape(3, 3, order="F")
    M = np.eye(4, dtype=np.float64)
    M[:3, :3] = rotation
    M[:3, 3] = center - rotation @ center - rotation @ params[3:6]
    return M

The overall process I use to apply it looks like this:

elastixImageFilter.SetFixedImage(CT)
elastixImageFilter.SetMovingImage(PRE)
elastixImageFilter.Execute()
sitk.WriteParameterFile(elastixImageFilter.GetTransformParameterMap()[0], f'{OUTREG}/REGPRE2CT.txt')
M = elastix_affine(f'{OUTREG}/REGPRE2CT.txt') @ make_affine(PRE)
PRE.SetDirection(tuple(M[:3, :3].ravel()))
PRE.SetOrigin(tuple(M[:3, 3]))

@fepegar
Owner Author

fepegar commented Feb 14, 2024

I think these are out of scope for this library.

@fepegar fepegar closed this as not planned Feb 14, 2024
4 participants