
BIDS conversion of anatomical scans #693

Open · eort opened this issue Feb 4, 2021 · 12 comments
@eort (Contributor) commented Feb 4, 2021

Hi,

I was trying to use mne-bids to BIDS-format anatomical scans and integrate them with the MEG part of the data set. In the process, I got a bit confused. My idea was to first create the BIDS dataset and later deal with the co-registration of the head, MEG, and MRI spaces. So, basically, I expected to be able to produce a NIfTI file and associated sidecars (scans.tsv and a sub-XX_T1w.json), based on the header info. However, mne-bids seems to produce only the NIfTI without any sidecars, unless I provide transform and/or landmark information, right?

I guess that makes sense from an MEG/MRI integration point of view, but isn't it somewhat restrictive to prevent all sidecars from being produced just because some information is missing? I mean, all the MRI-related scanning parameters, as well as the scans.tsv, should be independent of co-registration and could be processed regardless of whether transform info is present. In contrast, from your MRI example, I get the impression that the only sidecars produced in the MRI-scan conversion are related to the co-registration, but maybe that is just for the sake of illustration.

Aside from that, I am also unsure whether this approach is completely in line with the BIDS derivative vs. raw principle. Isn't co-registration a preprocessing step that is done on raw data, so that its products (transform files, etc.) should be derivatives? Admittedly, I have only a rough idea of the process of co-registration, so maybe I am talking nonsense.

Finally, I was also wondering whether it would make sense to provide support for reading DICOM scans. Right now, I have to convert a DICOM to a NIfTI before I can use it with mne_bids. That extra step is a little awkward in the process. The API of write_anat says that the source image "Can be in any format readable by nibabel". DICOMs cannot be loaded directly with nibabel, but need some special treatment (see here). Not sure whether this can be integrated into the mne-bids framework?
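
For context, the extra conversion step looks roughly like the following. This is only a sketch of the kind of DICOM-to-NIfTI conversion meant above; the paths are hypothetical, dcm2niix is the more robust tool in practice, and the identity affine is a placeholder:

```python
# Hypothetical sketch: stacking a DICOM series into a NIfTI with
# pydicom + nibabel before handing it to mne_bids.write_anat.
from pathlib import Path

import numpy as np
import nibabel as nib
import pydicom

dicom_dir = Path("sub-01_dicoms")  # hypothetical input folder
slices = [pydicom.dcmread(f) for f in sorted(dicom_dir.glob("*.dcm"))]
# Sort slices along the scanner z-axis before stacking.
slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))
volume = np.stack([s.pixel_array for s in slices], axis=-1)

# A real affine must be derived from ImageOrientationPatient,
# ImagePositionPatient, and PixelSpacing; np.eye(4) is a placeholder.
img = nib.Nifti1Image(volume.astype(np.int16), affine=np.eye(4))
nib.save(img, "sub-01_T1w.nii.gz")
```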

@sappelhoff (Member) commented

Just noticed that perhaps #617 is related

@eort changed the title from "BIDS converstion of anatomical scans" to "BIDS conversion of anatomical scans" on Feb 12, 2021
@eort (Contributor, Author) commented Mar 22, 2021

> However, mne-bids seems to produce only the NIfTI without any sidecars, unless I provide transform and/or landmark information, right?

Running the conversion with example data, I realized that the only information being written to the T1w.json is the coordinates of the anatomical landmarks. So it makes sense that, when the trans file is left out, no JSON sidecar is written (as it would be empty anyway).
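
For reference, the sidecar written in that case contains just the BIDS AnatomicalLandmarkCoordinates field, roughly like this (the voxel coordinates here are made up):

```json
{
  "AnatomicalLandmarkCoordinates": {
    "LPA": [66.1, 135.4, 95.0],
    "NAS": [127.3, 212.8, 117.2],
    "RPA": [188.6, 135.0, 94.7]
  }
}
```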

> Right now, I have to convert a DICOM to a NIfTI before I can use it with mne_bids

By now I have also realized that I need to run FreeSurfer anyway to get the surfaces and be able to produce the trans.fif file. So, my idea of what this write_anat function is supposed to do was somewhat off.

After becoming a bit more familiar with the matter, I see the following things that could/should be done in the context of BIDSifying anatomical data.

  • add the anatomical recording to the scans.tsv. I'm close to saying this is a must. At least, I don't see a reason not to. If a recording is added to a BIDS dataset, and a scans.tsv is present, it should also be listed there, no?
  • extract more metadata from the anatomy and write it to the sidecar. Normally, the sidecars for anatomical images include a lot more metadata about the recording (see the example below). If you choose to add an implementation for this, the question is whether to use existing BIDS tools, such as heudiconv, or to extract the necessary information yourself. I haven't looked into it, but I would say this information should be present in the NIfTI header and should be accessible with nibabel.

I believe I have read somewhere that writing derivatives is not a priority right now, but if this changes one day, it would make sense to add the FreeSurfer output folder as a derivative to the dataset, particularly as files in these folders are needed for some further analyses in source space. In any case, adding DICOM support to mne-bids is most likely overkill and not worth the effort.

{
  "AcquisitionMatrixPE": 256,
  "AcquisitionNumber": 1,
  "AcquisitionTime": "17:42:44.577500",
  "BaseResolution": 256,
  "BodyPartExamined": "BRAIN",
  "ConsistencyInfo": "N4_VE11C_LATEST_20160120",
  "ConversionSoftware": "dcm2niix",
  "ConversionSoftwareVersion": "v1.0.20201224",
  "DeviceSerialNumber": "166064",
  "DwellTime": 7.8e-06,
  "EchoTime": 0.00245,
  "FlipAngle": 9,
  "ImageOrientationPatientDICOM": [-0.0749788, 0.997185, 0, 0, 0, -1],
  "ImageType": [
    "ORIGINAL",
    "PRIMARY",
    "M",
    "NORM",
    "DIS3D",
    "DIS2D"
],
  "ImagingFrequency": 123.259,
  "InPlanePhaseEncodingDirectionDICOM": "ROW",
  "InstitutionAddress": "XXX",
  "InstitutionName": "XXX",
  "InstitutionalDepartmentName": "Department",
  "InversionTime": 0.9,
  "MRAcquisitionType": "3D",
  "MagneticFieldStrength": 3,
  "Manufacturer": "Siemens",
  "ManufacturersModelName": "Prisma",
  "Modality": "MR",
  "ParallelReductionFactorInPlane": 2,
  "PartialFourier": 1,
  "PatientPosition": "HFS",
  "PercentPhaseFOV": 100,
  "PercentSampling": 100,
  "PhaseEncodingSteps": 255,
  "PhaseResolution": 1,
  "PixelBandwidth": 250,
  "ProcedureStepDescription": "MR  Sch\u00e4del",
  "ProtocolName": "t1_mprage_tra_iso_neu",
  "PulseSequenceDetails": "%SiemensSeq%\\tfl",
  "ReceiveCoilActiveElements": "HE1-4;NE1,2",
  "ReceiveCoilName": "HeadNeck_20",
  "ReconMatrixPE": 256,
  "ReconstructionMethod": "\u00b7t\u00fd\u007f",
  "RefLinesPE": 24,
  "RepetitionTime": 2,
  "SAR": 0.0530676,
  "ScanOptions": "IR",
  "ScanningSequence": "GR\\IR",
  "SequenceName": "*tfl3d1_16",
  "SequenceVariant": "SK\\SP\\MP",
  "SeriesDescription": "t1_mprage_tra_iso_neu",
  "SeriesNumber": 2,
  "ShimSetting": [1012, -11573, -11477, 34, 1, -11, -25, -6],
  "SliceThickness": 1,
  "SoftwareVersions": "syngo MR E11",
  "StationName": "AWP166064",
  "TxRefAmp": 263.338}

@sappelhoff (Member) commented

+1 to both your suggestions (especially the first one, which should be relatively straightforward)

@adam2392 (Member) commented

Chiming in here, +1 for both suggestions as well.

Re adding more metadata from the NIfTI file: that would be awesome. From my limited experience working with heudiconv, no package robustly extracts the BIDS metadata from a NIfTI file (only from DICOMs), so if we could have that as part of write_anat, that would make that function very awesome!

@eort (Contributor, Author) commented Mar 23, 2021

Cool.

> (especially the first one, which should be relatively straightforward)

Yup, I think so, too. The only thing that will be interesting is extracting the recording date, which might be a nice exercise for then also extracting other information for the JSON sidecar.

> no package robustly extracts the BIDS metadata from a NIfTI file (only from DICOMs)

Okay, I'll have a look into the options. But would adding heudiconv as a dependency be an option, or would it be better to keep mne-bids as lightweight as possible?

@adam2392 (Member) commented

Re heudiconv: I don't think we need it as a dependency. I suppose, if possible, see how they extract metadata and what they extract, to see if we can just replicate that in write_anat by reading in the nifti.hdr?

I think there is some loss of info from .dicoms -> .nii, though, so it might not be possible to do it for all types of data...

@eort (Contributor, Author) commented Apr 12, 2021

Updating scans.tsv

Normally, the scans.tsv already exists when the anatomical scan is being read. So, I could write some code to read it, update it, and write it back to file again (as I already do with my own data). But I guess the cleaner way would be to extend

def update_sidecar_json(bids_path, entries, verbose=True):

to also be able to update .tsv files and not just .json files. So, would it make more sense to first invest some time into adding .tsv support to that function, instead of diving straight into the specific write_anat issue?
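
The read-update-write round trip mentioned above might look something like this. A sketch using pandas; the path is hypothetical, and it assumes the standard filename/acq_time columns of a scans.tsv:

```python
# Sketch: append the anatomical scan to an existing scans.tsv.
import pandas as pd

scans_path = "bids_root/sub-01/sub-01_scans.tsv"  # hypothetical path
scans = pd.read_csv(scans_path, sep="\t")

# Add a row for the T1w image; acq_time would come from the DICOM,
# since the NIfTI header does not store it (see below).
scans.loc[len(scans)] = ["anat/sub-01_T1w.nii.gz", "2021-02-04T17:42:44"]
scans.to_csv(scans_path, sep="\t", index=False, na_rep="n/a")
```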

@eort (Contributor, Author) commented Apr 12, 2021

> I think there is some loss of info from .dicoms -> .nii, though, so it might not be possible to do it for all types of data...

Indeed, the NIfTI header is much less comprehensive than the DICOM header. Most relevant to this issue, NIfTI headers don't seem to hold information on acquisition time and date. Therefore, there is not much that can be done here with write_anat alone. In order to add date/time to scans.tsv and populate the anat sidecar with metadata, we would need access to the original DICOM. From there, I can see the following scenarios:

1) process niftis and dicoms
Let the write_anat function do its thing with the NIfTIs, as it currently does. Add support for reading DICOM images and extracting relevant metadata, which can then be used to populate scans.tsv and the JSON sidecar.

pros:

  • we would not mess with actually producing the NIfTI, so there is no need to reinvent the "make-nifti" wheel
  • mne-bids would be able to produce "complete" data sets with respect to anatomical scans
  • with nibabel (using pydicom), extracting DICOM header information seems rather manageable (see the sketch after this list)

cons:

  • for my DICOM files (Siemens), extraction of header information was easy, but DICOMs seem to be quite a diverse datatype, and nibabel itself warns: UserWarning: The DICOM readers are highly experimental, unstable, and only work for Siemens time-series at the moment. Please use with caution. So, it might actually be tricky to make this work for the majority of users
  • users would need access to both image types, and they would have to match
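
A rough sketch of the header extraction mentioned in the pros, using pydicom directly; the file name is hypothetical, and the tag names are from the standard DICOM dictionary:

```python
# Sketch: pull sidecar-relevant fields from a single DICOM file.
import pydicom

ds = pydicom.dcmread("sub-01_dicoms/slice_0001.dcm")  # hypothetical file

fields = [
    "AcquisitionDate", "AcquisitionTime", "EchoTime", "RepetitionTime",
    "FlipAngle", "MagneticFieldStrength", "Manufacturer",
    "ManufacturerModelName", "PatientPosition",
]
# DICOM stores times in ms where BIDS sidecars expect seconds, so a
# real implementation would also have to convert units.
metadata = {name: getattr(ds, name, None) for name in fields}
print(metadata)
```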

2) only process dicoms
Let mne-bids handle the entire conversion process, i.e., not just extract the metadata for the sidecar, but also do the dicom2nifti conversion.

pros:

  • only one image needs to be passed to mne-bids
  • everything would be nice and neat

cons:

  • if the dicom2nifti conversion is done by an external package (e.g., heudiconv), that would be quite a big dependency
  • if the conversion is done by mne-bids itself, that would be a lot of work to do something that other packages are already doing (and probably doing much better)

3) only process niftis, and make use of pre-BIDSified anatomical images
Suggest that users do the BIDS conversion of the anatomical image with their favorite tool (e.g., heudiconv) and pass the path to the resulting BIDS directory to mne_bids, which then integrates the anat hierarchy with the mne-bids-made hierarchy and extends the scans.tsv and JSON sidecar with the missing information. (That's my current approach; see the sketch at the end of this comment.)

pros:

  • keeps things simple: no need to deal with dicoms at all
  • still, the result would be a "complete" dataset

cons:

  • requires quite a bit of explanation in the documentation (but that is probably true for all options)

4) leave everything as is and pretend this issue never existed
Instead, we could just inform/warn in the documentation that the information on the anatomical scan is incomplete and recommend that users do something about it (maybe giving some pointers on how to do it).

My preference is 3, 1, 4, 2, in that order. In any case, I think this issue deserves some additional information in the docs, maybe as an info/warning box in the tutorial on how to write anatomical images, and maybe some nibabel-inspired disclaimer that this is experimental.
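
To illustrate option 3, the merging step itself could be as simple as the following sketch; the paths are hypothetical, and mne-bids would still need to update scans.tsv and the sidecar afterwards:

```python
# Sketch: graft a heudiconv-produced anat/ folder into an mne-bids tree.
import shutil
from pathlib import Path

src = Path("heudiconv_out/sub-01/anat")  # hypothetical heudiconv output
dst = Path("bids_root/sub-01/anat")      # hypothetical mne-bids dataset
dst.parent.mkdir(parents=True, exist_ok=True)
shutil.copytree(src, dst, dirs_exist_ok=True)  # requires Python >= 3.8
```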

@adam2392 (Member) commented

Imo (not a strong opinion, though), 3) seems the most manageable and easiest solution. We should point to the relevant heudiconv documentation for going from:

  • dicoms -> BIDS imaging folders, or dicoms -> nifti images with sidecar jsons
  • take those BIDS imaging folders and use write_anat to add some additional things encoded in the raw?

However, if heudiconv goes from dicoms -> BIDS imaging folders... what's the point of write_anat? It seems the only workflow that enables at that point is the writing of landmarks?

@adam2392 (Member) commented

> to also be able to update .tsv files and not just .json files. So, would it make more sense to first invest some time into adding .tsv support to that function, instead of diving straight into the specific write_anat issue?

For reference: #634

updating tsv is a can of worms :p, but if we can do it, I am very excited.

@eort (Contributor, Author) commented Apr 12, 2021

> > to also be able to update .tsv files and not just .json files. So, would it make more sense to first invest some time into adding .tsv support to that function, instead of diving straight into the specific write_anat issue?
>
> For reference: #634
>
> updating tsv is a can of worms :p, but if we can do it, I am very excited.

haha, yeah, that will be fun. But well, depending on what we decide regarding the DICOM issue, it might not be necessary to mess with .tsv files (at least for this issue).

@eort (Contributor, Author) commented Apr 12, 2021

> Imo (not a strong opinion, though), 3) seems the most manageable and easiest solution. We should point to the relevant heudiconv documentation for going from:
>
> • dicoms -> BIDS imaging folders, or dicoms -> nifti images with sidecar jsons
> • take those BIDS imaging folders and use write_anat to add some additional things encoded in the raw?
>
> However, if heudiconv goes from dicoms -> BIDS imaging folders... what's the point of write_anat? It seems the only workflow that enables at that point is the writing of landmarks?

Right. Currently, all that write_anat seems to do is write the landmarks, rename the NIfTI image, and put it into the BIDS hierarchy. So, if we went for 3, only the landmark routine would be left; instead, two BIDS directories would have to be merged.
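
For reference, the landmark-writing workflow being discussed looks roughly like this in a recent mne-bids API. This is a sketch: the paths and the FreeSurfer subject name are hypothetical, and get_anat_landmarks may not exist in older mne-bids versions:

```python
# Sketch: write a T1w image plus anatomical landmarks with mne-bids.
import mne
from mne_bids import BIDSPath, get_anat_landmarks, write_anat

raw = mne.io.read_raw_fif("sub-01_meg.fif")  # hypothetical MEG recording
trans = mne.read_trans("sub-01-trans.fif")   # head<->MRI transform

# Map the fiducials from the MEG recording into T1 voxel coordinates.
landmarks = get_anat_landmarks(
    "sub-01_T1w.nii.gz", info=raw.info, trans=trans,
    fs_subject="sub-01", fs_subjects_dir="freesurfer/subjects",
)
t1w_path = BIDSPath(subject="01", root="bids_root", suffix="T1w")
write_anat("sub-01_T1w.nii.gz", bids_path=t1w_path,
           landmarks=landmarks, overwrite=True)
```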
