
Support dynamic distortion correction with DOCMA #36
Open
tsalo opened this issue Oct 3, 2019 · 9 comments · May be fixed by #438
Labels: Gathering feedback (We would like to hear from the community about this particular issue. What are your thoughts?)

Comments

@tsalo (Contributor) commented Oct 3, 2019

With complex-valued, multi-echo data (i.e., ME-EPI with both magnitude and phase information reconstructed), it is possible to estimate and correct for distortion on a volume-by-volume basis. I'm definitely not an expert in this area, but I've recently been reading a bit about it, and there is a method called DOCMA that uses the phase data from the first two echoes to compute a phase-difference map and perform distortion correction on each volume of a time series independently. The math definitely goes over my head, but the method looks roughly equivalent to splitting the time series into individual volumes and then performing standard distortion correction using the first two echoes as the field map. If that is the case, then I think this could be implemented fairly easily once #30 is merged.
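If I understand it right, the core computation per volume is just a phase-difference fieldmap. A minimal sketch of that idea (not DOCMA itself; the file names and echo times below are made up):

import nibabel as nib
import numpy as np

# Hypothetical inputs: 4D phase images (in radians) for the first two echoes.
TE1, TE2 = 0.012, 0.028  # echo times in seconds (placeholder values)
phase1 = nib.load("sub-01_echo-1_part-phase_bold.nii.gz")
phase2 = nib.load("sub-01_echo-2_part-phase_bold.nii.gz")

# Wrapped phase difference for every voxel and volume; np.angle maps the
# result into (-pi, pi], so proper unwrapping is still needed in practice.
dphi = np.angle(np.exp(1j * (phase2.get_fdata() - phase1.get_fdata())))

# Convert to a per-volume fieldmap in Hz: dphi = 2 * pi * df * (TE2 - TE1).
fmap_hz = dphi / (2 * np.pi * (TE2 - TE1))

nib.save(nib.Nifti1Image(fmap_hz, phase1.affine), "fieldmap_4d.nii.gz")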

Other methods for dynamic distortion correction that I've come across, including the UMPIRE multi-echo phase unwrapping method, seem to have special requirements, like uneven, highly specific echo times or alternating echo times for a single-echo protocol. DOCMA, on the other hand, seems to be applicable to any multi-echo sequence with phase information.

EDIT: After further investigation, I don't think the phasediff workflow will work as needed for dynamic distortion correction, because the skull-stripping performed in the workflow appears to be static (i.e., it treats 4D data as 3D and only produces one 3D mask). The whole procedure will probably require a splitting node wrapped around the workflow to separate all of the 4D datasets into 3D volumes before calculating and applying the field maps.

@tsalo (Contributor, Author) commented Dec 5, 2019

@oesteban In order to build a DOCMA workflow, I'm trying to make a node that splits 4D files into 3D ones, and I'd prefer a function that flexibly accepts any inputs so I don't need a separate node for each file I want to split.

def split_files(*, volume_idx, **kwargs):
    """
    Split the input 4D files and write out the corresponding set of 3D files.
    """
    import os.path as op
    from nilearn.image import index_img
    from nipype.utils.filemanip import split_filename

    out_files = []
    for in_file in kwargs.values():
        # Name each extracted volume after its source file plus a zero-padded index.
        _, base, _ = split_filename(in_file)
        out_file = op.abspath(base + "_{0:05d}.nii.gz".format(volume_idx))
        # Pull out the single 3D volume at volume_idx and write it to disk.
        img = index_img(in_file, volume_idx)
        img.to_filename(out_file)
        out_files.append(out_file)
    return out_files

With the above, I can do the following to make my splitting node:

import numpy as np
from nipype.interfaces import utility as niu
from nipype.pipeline import engine as pe

buffernode = pe.Node(niu.IdentityInterface(
    fields=['file1_4d', 'file2_4d', 'file3_4d', 'volume_idx']),
    iterables=[('volume_idx', np.arange(num_trs, dtype=int))],
    name='buffernode')

split_4d_to_3d = pe.Node(
    interface=niu.Function(['file1_4d', 'file2_4d', 'file3_4d', 'volume_idx'],
                           ['files1_3d', 'files2_3d', 'files3_3d'],
                           split_files),
    name='split_4d_to_3d')

However, split_files requires that the kwargs retain their order, which is only guaranteed in Python 3.6+ (PEP 468), so my question is: do you know of a good alternative that works on Python 3.5, or, if not, is it preferable to bump the requirements to Python 3.6+, or to have repetitive nodes where the function only splits one file and there's a separate node in the workflow for each file?
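One Python 3.5-compatible alternative I can think of (just a sketch along the same lines as the function above) is to pass the files as a single list input, so that kwargs ordering never comes into play:

def split_files_list(in_files, volume_idx):
    """
    Split each 4D file in a list and return the matching 3D volumes.

    Taking a list instead of **kwargs sidesteps keyword ordering entirely,
    so this also works on Python 3.5.
    """
    import os.path as op
    from nilearn.image import index_img
    from nipype.utils.filemanip import split_filename

    out_files = []
    for in_file in in_files:
        _, base, _ = split_filename(in_file)
        out_file = op.abspath(base + "_{0:05d}.nii.gz".format(volume_idx))
        index_img(in_file, volume_idx).to_filename(out_file)
        out_files.append(out_file)
    return out_files

split_4d_to_3d = pe.Node(
    niu.Function(['in_files', 'volume_idx'], ['out_files'], split_files_list),
    name='split_4d_to_3d')

The trade-off is that downstream connections then have to index into a single list output instead of named per-file outputs.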

@oesteban (Member) commented Dec 5, 2019

Maybe:

buffernode = pe.Node(niu.IdentityInterface(
    fields=['file1_4d', 'file2_4d', 'file3_4d', 'volume_idx']),
    iterables=[('volume_idx', np.arange(num_trs, dtype=int))],
    name='buffernode')

split_4d_to_3d = pe.MapNode(
    interface=niu.Function(['file', 'volume_idx'], ['file'], split_files),
    name='split_4d_to_3d', iterfield=['file', 'volume_idx'])

join_files = pe.JoinNode(...)
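
For that MapNode version, split_files would presumably need a per-file signature; something like this sketch (my reading of the suggestion above, not code from the thread):

def split_files(file, volume_idx):
    """Extract the 3D volume at volume_idx from a single 4D file."""
    import os.path as op
    from nilearn.image import index_img
    from nipype.utils.filemanip import split_filename

    _, base, _ = split_filename(file)
    out_file = op.abspath(base + "_{0:05d}.nii.gz".format(volume_idx))
    index_img(file, volume_idx).to_filename(out_file)
    return out_file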

@tsalo (Contributor, Author) commented Dec 11, 2019

@oesteban Okay, I have a workflow that runs successfully, although the unwarping results are nonsensical (see below). Should I open a draft PR so you can review the code directly, or would you prefer to continue the discussion here?

[Image: First volume's field map]

[Image: First volume unwarped]

@tsalo (Contributor, Author) commented Jul 24, 2020

I'm trying out ROMEO for the unwrapping, and possibly even B0-map generation, so hopefully I'll be able to move forward on this soon.

@oesteban added the Gathering feedback label Mar 5, 2021
@tsalo (Contributor, Author) commented Dec 6, 2023

@vanandrew has developed a very cool dynamic distortion correction method for complex-valued, multi-echo BOLD data called MEDIC (https://github.com/vanandrew/medic_analysis). There's a new preprint about it available here, and it's integrated into the Dosenbach lab's preprocessing pipeline (https://github.com/DosenbachGreene/processing_pipeline). I think it can be safely considered a superior alternative to DOCMA.

EDIT: One additional consideration is that dynamic distortion correction would probably be best applied before HMC (head motion correction), unlike static distortion correction.

@DVSneuro commented

Since there's the "gathering feedback" tag here, I just wanted to throw my two cents in and say that the framewise SDC from Andrew Van (@vanandrew) seems like it would be a great addition to fMRIPrep, but I'm not sure how it interacts with the Echo Time Difference check in the BIDS validator. In our initial tests we couldn't get this to run, since our echoes are about 17 ms apart and the BIDS validator appears to expect a maximum difference of 10 ms. More here: https://neurostars.org/t/multi-echo-fmaps-echo-1-and-2-difference-unreasonable/28846

Thanks!
David

@effigies (Member) commented

The BIDS validator check can be relaxed if there are false positives. The idea was just to warn people they almost certainly made a mistake during conversion or curation.

@tsalo (Contributor, Author) commented Mar 27, 2024

FWIW I don't think SDCFlows will work if you just take the first two echoes from one of your volumes and rename them as field map files (which is when the BIDS validator would notice your echo time difference, I believe). The phase data are probably highly wrapped, which I believe is what ROMEO is particularly good at handling, but ROMEO isn't what SDCFlows uses for unwrapping. I could definitely be wrong though.
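
A quick back-of-the-envelope illustration of the wrapping (the 100 Hz off-resonance value here is made up, but plausible near air-tissue interfaces): with a ~17 ms echo time difference, the accumulated phase easily exceeds 2*pi, so the measured difference aliases:

import numpy as np

delta_te = 0.017  # ~17 ms echo time difference (from the comments above)
df = 100.0        # hypothetical off-resonance in Hz

true_dphi = 2 * np.pi * df * delta_te        # ~10.7 rad accumulated between echoes
wrapped = np.angle(np.exp(1j * true_dphi))   # what the wrapped phase difference shows
print(true_dphi, wrapped)                    # ~10.68 rad vs ~-1.89 rad (aliased)

# Naively converting the wrapped value back to Hz gives a badly wrong fieldmap:
print(wrapped / (2 * np.pi * delta_te))      # ~ -17.6 Hz instead of 100 Hz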

@DVSneuro commented

Yes, there's definitely a ton of wrapping with the phase images. I'm not yet 100% sure what MEDIC is doing under the hood, but the output we got was quite reasonable at least in terms of how the images looked.

We had four echoes going into MEDIC, but it doesn't output a corresponding .json file with the metadata (and I was assuming fMRIPrep uses the .json file to infer parameters about the scan, like the echo time difference). In the case of a multi-echo sequence with four echoes, where MEDIC produces a great fmap image for each time point, I'm just not sure how fMRIPrep should handle the fmap. If we had a proper .json file for any of these fmaps (ideally the first one, though), it seems like fMRIPrep could quickly and easily have task- and run-specific fmaps that could be applied to each bold image. Of course, I realize that the major advance with Andrew's paper is doing this in a framewise fashion, but I know that will take more time.
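
Writing a minimal sidecar by hand might be enough to get unblocked. This is just a guess at what such a file would need (the field names come from the BIDS fieldmap conventions, but every value and the IntendedFor path are placeholders):

import json

# Hypothetical sidecar for a MEDIC-derived fieldmap; all values are placeholders.
sidecar = {
    "Units": "Hz",  # BIDS requires Units for the _fieldmap suffix
    "IntendedFor": "func/sub-01_task-rest_run-1_bold.nii.gz",
}
with open("sub-01_acq-medic_run-1_fieldmap.json", "w") as fobj:
    json.dump(sidecar, fobj, indent=2)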
