Official PyTorch implementation of: Ziad Al-Haj Hemidi, Christian Weihsbach, Mattias P. Heinrich, "IM-MoCo: Self-supervised MRI Motion Correction using Motion-Guided Implicit Neural Representations", MICCAI (2024)
Paper | Supplementary Material
In this work, we propose a self-supervised MRI motion correction method that leverages motion-guided implicit neural representations to learn motion patterns and correct motion artifacts in MRI scans. The pre-trained KLD-Net takes a motion-corrupted k-space as input and outputs a motion mask, which is post-processed into a list of movement groups (indicated by the different colors in the figure).
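As an illustration, the mask-to-groups post-processing could look like the NumPy sketch below. The function name `mask_to_groups` and the 1-D per-line label format are hypothetical assumptions for illustration, not the repo's actual code:

```python
import numpy as np

def mask_to_groups(motion_mask):
    """Group consecutive k-space lines that share the same motion state.

    motion_mask: 1-D integer array with one motion-state label per
    phase-encoding line (hypothetical format; the actual KLD-Net output
    may differ). Returns a list of (start, end) line ranges, one per
    movement group.
    """
    groups = []
    start = 0
    for i in range(1, len(motion_mask)):
        if motion_mask[i] != motion_mask[i - 1]:
            groups.append((start, i))  # close the previous group
            start = i
    groups.append((start, len(motion_mask)))  # final group
    return groups

# Example: three movement groups across 8 phase-encoding lines
print(mask_to_groups(np.array([0, 0, 1, 1, 1, 0, 0, 0])))
# → [(0, 2), (2, 5), (5, 8)]
```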
We assess the image quality of the motion correction against state-of-the-art methods for two motion scenarios.
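For reference, image quality in such comparisons is commonly quantified with PSNR. The sketch below uses the standard definition and is not necessarily the exact evaluation code used in the paper:

```python
import numpy as np

def psnr(reference, image, data_range=1.0):
    """Peak signal-to-noise ratio between a motion-free reference and a
    motion-corrected image (standard definition)."""
    mse = np.mean((reference - image) ** 2)
    if mse == 0:
        return np.inf  # identical images
    return 10 * np.log10(data_range ** 2 / mse)

ref = np.zeros((4, 4))
noisy = ref + 0.1          # uniform error of 0.1 → MSE = 0.01
print(round(psnr(ref, noisy), 2))  # → 20.0
```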
- Python
- PyTorch
- torchvision
- h5py
- numpy
See the requirements.txt file for the full list of dependencies.
Create a new conda environment and activate it:

```shell
mamba create -n immoco python=3.10
mamba activate immoco
```
Install the required packages:

```shell
pip install -r requirements.txt
```
For the hash-grid encoding, tiny-cuda-nn must also be installed in the activated environment:

```shell
pip install git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch
```
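To illustrate what the hash-grid encoding computes, here is a toy NumPy sketch of a multiresolution hash lookup. It is a conceptual assumption, not the tiny-cuda-nn implementation: it uses nearest-vertex lookup with fixed random features, whereas tiny-cuda-nn trilinearly interpolates learnable features on the GPU:

```python
import numpy as np

def hash_encode(coords, n_levels=4, features_per_level=2,
                log2_hashmap_size=14, base_res=16, growth=1.5, seed=0):
    """Toy multiresolution hash encoding (nearest-vertex lookup only).

    coords: (N, 2) array of coordinates normalized to [0, 1).
    Returns (N, n_levels * features_per_level) feature vectors.
    """
    rng = np.random.default_rng(seed)
    table_size = 2 ** log2_hashmap_size
    # One random (here: frozen) feature table per resolution level
    table = rng.normal(size=(n_levels, table_size, features_per_level))
    primes = np.array([1, 2654435761], dtype=np.uint64)  # spatial hash primes
    feats = []
    for level in range(n_levels):
        res = int(base_res * growth ** level)            # grid resolution
        grid = np.floor(coords * res).astype(np.uint64)  # nearest grid vertex
        idx = (grid * primes).sum(axis=-1) % table_size  # hash vertex → slot
        feats.append(table[level, idx.astype(np.int64)])
    return np.concatenate(feats, axis=-1)

xy = np.random.rand(8, 2)      # 8 normalized 2-D coordinates
print(hash_encode(xy).shape)   # → (8, 8), i.e. n_levels * features_per_level
```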
- Download the NYU fastMRI $T_2$-weighted brain dataset.
- Run the following script to prepare the datasets (TODO: 🚧 under construction 🚧):

```shell
python src/utils/prepareData.py
```
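A common way to synthesize motion-corrupted training data is to replace a subset of phase-encoding lines in k-space with lines from a shifted copy of the image. The sketch below shows this idea in NumPy; it is an illustrative assumption, not the repo's actual prepareData.py pipeline:

```python
import numpy as np

def simulate_motion(image, shift_px=3, corrupt_frac=0.4, seed=0):
    """Simulate rigid translational motion during acquisition.

    A fraction of phase-encoding lines is taken from the k-space of a
    shifted copy of the image, mimicking patient movement partway
    through the scan.
    """
    rng = np.random.default_rng(seed)
    ksp_still = np.fft.fft2(image)
    ksp_moved = np.fft.fft2(np.roll(image, shift_px, axis=0))
    corrupted = ksp_still.copy()
    n_lines = int(corrupt_frac * image.shape[0])
    lines = rng.choice(image.shape[0], n_lines, replace=False)
    corrupted[lines] = ksp_moved[lines]     # lines acquired after the motion
    return np.abs(np.fft.ifft2(corrupted))  # motion-artifacted magnitude image

img = np.zeros((64, 64))
img[24:40, 24:40] = 1.0       # toy square phantom
art = simulate_motion(img)
print(art.shape)              # → (64, 64)
```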
For training, please refer to the src/train folder and run, for example:

```shell
python src/train/train_kld_net.py
```
Pre-trained models can be downloaded using the download script:

```shell
python src/utils/download_pretrained_models.py
```
This will download the pre-trained models to the models directory.
Note: IM-MoCo (ours) has no pre-trained weights since it is an instance-based method.
To run tests, please run the scripts under src/test as follows:

```shell
python src/test/test_immoco.py
```
If you find this work helpful for your research, please cite the following paper:
```bibtex
@inproceedings{IM-MoCo,
  title={IM-MoCo: Self-supervised MRI Motion Correction using Motion-Guided Implicit Neural Representations},
  author={Ziad Al-Haj Hemidi and Christian Weihsbach and Mattias P. Heinrich},
  booktitle={MICCAI},
  year={2024}
}
```
This project is licensed under the MIT License - see the LICENSE file for details.