
Invalid transformations for some Phase1/Phase2 fieldmaps #83

Closed · ins0mniac2 opened this issue Jan 29, 2020 · 25 comments · Fixed by #88
Labels: bug (Something isn't working), GRE - phasediff

Comments

@ins0mniac2 commented Jan 29, 2020:

We encounter an error (log below) in phmap2rads that we have associated with a certain type of fieldmap acquisition, but we don't know why it fails. Specifically, when the BOLD scan and the fieldmap scan are both acquired axially there is no error, but when the fieldmap is acquired sagittally the error occurs. The error message below is followed by examples of fieldmaps that work and don't work.

========================

File: /DATA/fmriprep-1.5.5/fmriprep/sub-R/log/20200124-061243_5ea8ae31-a295-496d-a586-748a17830a8f/crash-20200124-062515-fmriprep-phmap2rads-2f64b28a-ad9b-497a-b4a0-1242888fad3c.txt

Working Directory: /DATA/fmriprep-1.5.5/work_sub-R/fmriprep_wf/single_subject_R_wf/func_preproc_ses_05_task_taskmotor_run_01_wf/sdc_estimate_wf/phdiff_wf/phmap2rads
Inputs:

    in_file: ['/DATA/fmriprep-1.5.5/bids/sub-R/ses-05/fmap/sub-R_ses-05_run-01_phasediff.nii.gz']

Traceback (most recent call last):
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/plugins/multiproc.py", line 336, in _send_procs_to_workers
    self.procs[jobid].run(updatehash=updatehash)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py", line 516, in run
    result = self._run_interface(execute=True)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py", line 1373, in _run_interface
    stop_first=str2bool(self.config["execution"]["stop_on_first_crash"]),
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py", line 1295, in _collate_results
    "Subnodes of node: %s failed:\n%s" % (self.name, "\n".join(msg))
Exception: Subnodes of node: phmap2rads failed:
Subnode 0 failed
Error: Traceback (most recent call last):
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/utils.py", line 94, in nodelist_runner
    result = node.run(updatehash=updatehash)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py", line 516, in run
    result = self._run_interface(execute=True)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py", line 635, in _run_interface
    return self._run_command(execute)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py", line 741, in _run_command
    result = self._interface.run(cwd=outdir)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/interfaces/base/core.py", line 395, in run
    runtime = self._run_interface(runtime)
  File "/usr/local/miniconda/lib/python3.7/site-packages/sdcflows/interfaces/fmap.py", line 266, in _run_interface
    newpath=runtime.cwd)
  File "/usr/local/miniconda/lib/python3.7/site-packages/sdcflows/interfaces/fmap.py", line 673, in au2rads
    data[data < 0] = - np.pi * data[data < 0] / data[data < 0].min()
  File "/usr/local/miniconda/lib/python3.7/site-packages/numpy/core/_methods.py", line 32, in _amin
    return umr_minimum(a, axis, None, out, keepdims, initial)
ValueError: zero-size array to reduction operation minimum which has no identity

When creating this crashfile, the results file corresponding
to the node could not be found.

===================
A fieldmap acquisition that works has the following output from c3d -info:
Image #1: dim = [144, 144, 59]; bb = {[172.569 172.569 -78.3001], [522.569 522.569 80.9999]}; vox = [2.43056, 2.43056, 2.7]; range = [-250, 249.881]; orient = LPI

A fieldmap acquisition that produces the error above has the following output from c3d -info:
Image #1: dim = [59, 144, 144]; bb = {[78.3001 172.569 -172.569], [237.6 522.569 177.431]}; vox = [2.7, 2.43056, 2.43056]; range = [-250, 249.881]; orient = LPI
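
For reference, the ValueError in the traceback is plain NumPy behavior: reducing an empty selection has no identity element. A minimal sketch with a toy array (not the actual pipeline data):

import numpy as np

data = np.array([0.0, 1.0, 2.0], dtype=np.float32)  # hypothetical map with no negative voxels
neg = data[data < 0]
print(neg.size)  # 0
# neg.min()  # raises: ValueError: zero-size array to reduction operation minimum which has no identity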

@ins0mniac2 (Author):
Thanks @effigies!

@effigies (Member) commented Feb 6, 2020:

@ins0mniac2 Are you able to share the fieldmap file (and JSON sidecar) in question? That would help. Also, please include the scanner vendor, if that's not part of the metadata.

@mattcieslak Would you or anybody from your team be able to have a look at this? Or do you have suggestions for another reviewer? Fieldmaps are not my wheelhouse, and I suspect the resolution will be faster with someone more familiar with them.

@effigies added the labels bug (Something isn't working) and GRE - phasediff on Feb 6, 2020
@mattcieslak (Collaborator):
@ins0mniac2 it would be great to see the actual data causing this error. I have an idea where the problem might be and would like to check it against your problematic data.

@mattcieslak (Collaborator):
If possible, one of the cases where it is working as expected would be good to see too.

@ins0mniac2 (Author):
@mattcieslak would it be sufficient to share just the phase difference image?

@ins0mniac2 (Author):
@mattcieslak: we will have to get permission to share data, so I was wondering whether looking at just the phase difference image will be enough, or whether you also need to see the magnitude image(s)? It's easier to get permission for the phase image as it's devoid of facial features.

@mattcieslak (Collaborator):
Could you share one of each phase map, one that works and one that doesn't? In the meantime, could you try reorienting the problematic phase map in the BIDS directory before running sdcflows? FSL expects all inputs to be in RPI (LAS+) orientation. Using AFNI it would be something like 3dresample -orient RPI ...
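
If AFNI isn't available, nibabel can do roughly the same reorientation. A sketch, assuming the LAS+ target mentioned above (the file names are hypothetical):

import nibabel as nb
from nibabel.orientations import axcodes2ornt, io_orientation, ornt_transform

img = nb.load('sub-R_ses-05_run-01_phasediff.nii.gz')
# Transform from the image's current axis ordering to LAS+ (AFNI's "RPI")
transform = ornt_transform(io_orientation(img.affine), axcodes2ornt(('L', 'A', 'S')))
img.as_reoriented(transform).to_filename('sub-R_ses-05_run-01_phasediff_las.nii.gz')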

@effigies (Member) commented Feb 12, 2020:

Turns out I can reproduce this issue with my own data. This is from a Philips Achieva 3T.

In [1]: import nibabel as nb                                                                        

In [2]: import numpy as np                                                                          

In [3]: from scipy.stats import mode                                                                

In [4]: im = nb.load('/data/bids/qnl/repeat_change/sub-01/fmap/sub-01_phase1.nii.gz')               

In [5]: data = im.get_fdata(dtype=np.float32)                                                       

In [6]: data.dtype                                                                                  
Out[6]: dtype('float32')

In [7]: data.min()                                                                                  
Out[7]: -3.1415992

In [8]: data.max()                                                                                  
Out[8]: 3.1404881

In [9]: data -= mode(data, axis=None)[0][0]                                                         

In [10]: data.min()                                                                                 
Out[10]: 0.0

In [11]: data.max()                                                                                 
Out[11]: 6.2820873
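
This session shows the problem: in this dataset the most frequent value is also the minimum, so subtracting the mode leaves no negative voxels. A toy sketch of how that feeds the crash (hypothetical values; the rescaling line is the one from the traceback above):

import numpy as np
from scipy.stats import mode

# Toy phase map whose mode equals its minimum, as with a clipped background plateau
data = np.array([-250., -250., -250., -10., 0., 100., 249.9], dtype=np.float32)
data -= mode(data, axis=None)[0][0]  # mode == min, so everything becomes >= 0
# The au2rads rescaling from the traceback,
#   data[data < 0] = -np.pi * data[data < 0] / data[data < 0].min()
# then fails because data[data < 0] is empty.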

@effigies changed the title from "SDC error for fieldmap-based correction" to "Invalid transformations for some Phase1/Phase2 fieldmaps" on Feb 13, 2020
@ins0mniac2 (Author):
Thanks guys. I'll wait for the fix, but the phase difference images dcm2niix creates when we get the error vs. when we do not differ in terms of the range of values in our data. They are pretty similar, e.g.:

data.min()
-250.0
data.max()
249.8779

@effigies (Member):
Okay. So the current proposed fix ought to work better than my initial fix.

Can you show a histogram of your data matrix? Just to give a better sense of whether this will work.

@ins0mniac2 (Author):
Attached are histograms of two phasediff images -- one works (the second one) and the other doesn't. They are similar, the major difference being an axial vs. sagittal acquisition as far as I can tell (the end of my original message has the output of c3d -info). I also realize my previous comment was confusing: I meant to say "the phase difference images dcm2niix creates when we get the error vs. when we do not DO NOT differ in terms of the range of values in our data".

[histogram of the failing phasediff image]

[histogram of the working phasediff image]

@effigies (Member):
My guess is that the mode of the first is the minimum and the mode of the second is ~0, and that's why one works and one doesn't.

But mainly what I was looking for was verifying that there is a mode at 0 and min ~ -max. As long as that's true, the fix should work.
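
A quick way to check both conditions on a loaded array (a sketch; the file name and tolerances are illustrative):

import nibabel as nb
import numpy as np
from scipy.stats import mode

data = nb.load('sub-01_phasediff.nii.gz').get_fdata(dtype=np.float32)  # hypothetical file
m = mode(data, axis=None)[0][0]
print('mode near 0:', abs(m) < 0.05 * data.max())
print('min ~ -max: ', np.isclose(data.min(), -data.max(), rtol=0.05))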

@ins0mniac2 (Author):
And you are right:

mode(dgood, axis=None)[0][0]
-0.061049707
mode(dbad, axis=None)[0][0]
-250.0
And to me it looks like an unfortunate indirect consequence of the sagittal acquisition. In the attached screenshot, the colormap goes from pink (-250) to blue (250), with white representing (near) zero. The sagittal acquisition has many more "non-zero" values from the neck area in every slice, preventing the mode from being ~0. That's my guess from looking at the images.
[screenshot: axial vs. sagittal phasediff maps, pink (-250) to blue (250), white near zero]

@dorianps:
@effigies will the bugfix go in 1.5.9 or 20.0.0? We have all the other data processed with 1.5.* so I am hoping I don't need to jump to 20.* just for these subjects.

Thank you.

@effigies (Member):
We're aiming for 20.0.0 on Monday, but I can try to make a 1.5.9 (I think that's what we're up to) backport. I'll be on vacation next week, so I can't commit to finishing anything I don't get lined up today until the week after. (Logging in, bumping versions, and checking CI is something I can do in my spare time, but serious fiddling is unlikely.)

@effigies (Member) commented Feb 14, 2020:

@dorianps I'm actually concerned that the current proposed fix will change the behavior for unaffected datasets enough that we can't ship it strictly as a bug fix.

Here's a compromise fix that can fit into 1.5.x (@oesteban, I would appreciate your input here):

  • We detect the case where mode == min, find the second mode, and use that (a sketch follows below).
  • Otherwise, keep the algorithm identical.
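
A minimal sketch of that detection (the function name and fallback logic are illustrative, not the actual patch):

import numpy as np

def mode_skipping_min(data):
    # Most frequent value; fall back to the second most frequent when the
    # mode coincides with the minimum (e.g., a clipped background plateau).
    vals, counts = np.unique(data, return_counts=True)
    order = np.argsort(counts)[::-1]  # most frequent first
    if vals[order[0]] == vals[0] and order.size > 1:  # np.unique sorts, so vals[0] is the min
        return vals[order[1]]  # second mode
    return vals[order[0]]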

The full fix will have to go in a minor release. (20.0.0 or 20.1.0, depending on whether this is ready to go for Monday.)

Also cc @mgxd about a potential breaking change and targeting 20.0.0 or 20.1.0.

@oesteban (Member):
The compromise solution is possibly the only solution, +1000.

Thanks a lot for getting this in good shape.

@ins0mniac2 (Author):
I don't know the order of computations, but to me the problem could be solved by applying a brain mask before the mode operation. Don't we have one computed from the magnitude image?

@ins0mniac2 (Author):

Otherwise we are at the mercy of the acquisition FOV and whatever junk is lurking there to affect the first/second mode of the data.

@oesteban (Member):
> I don't know the order of computations, but to me the problem could be solved by applying a brain mask before the mode operation. Don't we have one computed from the magnitude image?

I like @effigies' approach because it is really inexpensive (the more you can do without prior knowledge, such as a mask, the better).

What about a slightly more robust version of his proposal (sketched below):

  • Calculate the first mode, but mask out all intensities outside the range (0.05 * data.min(), 0.95 * data.max()).
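
A sketch of that variant (the function name is illustrative; the trim fractions are the ones proposed above):

import numpy as np
from scipy.stats import mode

def trimmed_mode(data, lo_frac=0.05, hi_frac=0.95):
    # Mode over values strictly inside (lo_frac * min, hi_frac * max),
    # excluding clipped plateaus at either extreme of the range.
    lo, hi = lo_frac * data.min(), hi_frac * data.max()
    inner = data[(data > lo) & (data < hi)]
    return mode(inner, axis=None)[0][0]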

@dorianps:
Cool, thanks for the fixes. Can someone ping this thread when the change is included in a new Docker tag, so I can try it on the failed datasets?

P.S. Hoping for a 1.5.9 or 1.6.0; I'm not sure 20.0 is fully in line with the rest of the data we have produced with 1.5.*.

@effigies (Member):
@dorianps Looks like we'll have a 1.5.9 tonight or tomorrow, mostly depending on how long tests take to run and whether I have insomnia.

@dorianps:
Just a follow-up: I ran v1.5.9 on the failed subjects and all phmap2rads errors are gone. I also checked the SDC and it looks good on those subjects.

I am now left with the last 3 cases with errors, which fail at the despike node. I reported that at #33 but have received no response.
