fix image links #3

Merged 4 commits on Apr 12, 2019
4 changes: 2 additions & 2 deletions _episodes/01-neuroimaging-fundamentals.md

## Types of MR Scans

![mr-scan-types](../fig/mr_scan_types.png){:class="img-responsive"}

For this tutorial, we'll be focusing on T1w and resting state fMRI scans.

| MINC | .mnc | Montreal Neurological Institute |
| NRRD | .nrrd | |

![dicom-to-nifti](../fig/dicom_to_nifti.png){:class="img-responsive"}

From the MRI scanner, images are initially collected in the DICOM format and can be converted to NIfTI using [dcm2niix](https://github.com/rordenlab/dcm2niix).

81 changes: 40 additions & 41 deletions _episodes/02-intro-nilearn.md

Nilearn is a functional neuroimaging analysis and visualization library that wraps a whole bunch of high-level operations (machine learning, statistical analysis, data cleaning, etc.) in easy-to-use commands. The neat thing about Nilearn is that it uses Nibabel under the hood, so everything you do in Nilearn can be represented as a set of operations on Nibabel objects. This has the important consequence of letting you perform high-level operations (like resampling) in Nilearn, drop down into Nibabel for more custom data processing, then jump back up to Nilearn for interactive image viewing. Pretty cool!

# Setting up

The first thing we'll do is import some Python modules that will allow us to use Nilearn:

First let's grab some data from where we downloaded our **FMRIPREP** outputs:

~~~
fmriprep_dir='../data/ds000030/derivatives/fmriprep/{subject}/{mod}/'
t1_dir = fmriprep_dir.format(subject='sub-10788', mod='anat')
func_dir = fmriprep_dir.format(subject='sub-10788', mod='func')
~~~
{: .language-python}
### Basic Image Operations

In this section we're going to deal with the following files:
1. `sub-10788_T1w_preproc.nii.gz` - the T1 image in native space
2. `sub-10788_T1w_brainmask.nii.gz` - a mask with 1's representing the brain, and 0's elsewhere

~~~
T1 = img.load_img(os.path.join(t1_dir, 'sub-10788_T1w_preproc.nii.gz'))
bm = img.load_img(os.path.join(t1_dir, 'sub-10788_T1w_brainmask.nii.gz'))
plot.plot_anat(T1)
~~~
{: .language-python}

![image-title-here](../fig/t1_img.png){:class="img-responsive"}

Try viewing the mask as well!

We can also do math on images; for example, inverting the intensities:

~~~
invert_img = img.math_img('-a', a=T1)
plot.plot_anat(invert_img)
~~~
{: .language-python}

![image-title-here](../fig/invert_img.png){:class="img-responsive"}

### Applying a Mask
Let's extend this idea of applying operations to each element of an image to multiple images. Instead of specifying just one image like the following:
`img.math_img('a', a=img_a)`

We can specify multiple images by tacking on additional variables:

`img.math_img('a+b', a=img_a, b=img_b)`

The key requirement when dealing with multiple images is that the *size* of the images must be the same. This is because we're dealing with **element-wise** operations: voxel (i,j,k) in `img_a` is paired with voxel (i,j,k) in `img_b` when performing operations, so every voxel in `img_a` must have a partner voxel in `img_b`; the sizes must match.

We can take advantage of this property when masking our data using multiplication. Masking works by multiplying a raw image (our `T1`) with a mask image (our `bm`). Wherever the mask has a value of 0, voxel (i,j,k) of the product is 0; wherever the mask has a value of 1, voxel (i,j,k) of the raw image passes through unchanged. Let's try this out in practice and see what the result is:
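The element-wise logic can be sketched with plain NumPy arrays (a toy three-voxel example, not the actual image data; Nilearn applies the same idea across the whole 3-D volume):

~~~
import numpy as np

raw = np.array([120.0, 95.0, 60.0])   # toy "intensity" values
mask = np.array([1.0, 0.0, 1.0])      # 1 = keep, 0 = zero out

masked = raw * mask                   # element-wise multiplication
# masked is now [120., 0., 60.]
~~~
{: .language-python}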

~~~
masked_T1 = img.math_img('a*b', a=T1, b=bm)
plot.plot_anat(masked_T1)
~~~
{: .language-python}

![image-title-here](../fig/masked_t1.png){:class="img-responsive"}

As you can see, areas where the mask image had a value of 1 were retained, and everything else was set to 0.

> ## Exercise #1
> Try applying the mask such that the brain is removed, but the rest of the head is intact!
>
> > ## Solution
> > ~~~
> > inverted_mask_t1 = img.math_img('a*(1-b)', a=T1, b=bm)
> > plot.plot_anat(inverted_mask_t1)
> > ~~~
> > {: .language-python}
> > ![image-title-here](../fig/inverted_mask_t1.png){:class="img-responsive"}
> {: .solution}
{: .challenge}

Recall from the previous lesson, where we used Nibabel to explore neuroimaging data:

Imagine that you have two images: one is a `256x256` JPEG, the other a `1024x1024` JPEG. If you load both into Paint, Photoshop, or any other editor, you can imagine that the first JPEG will show up much smaller than the second. To make the two images overlay each other perfectly, one thing you could do is resize them, for example by shrinking the larger high-resolution JPEG (1024x1024) down to match the smaller low-resolution one.

This JPEG problem is analogous to our situation! The T1 image has smaller voxels (higher resolution), and the functional image has larger voxels (lower resolution). Both images represent the same real object and so must be the same size (in mm). Therefore you need more T1 voxels than functional voxels to represent the same brain, and you have a mismatch in the dimensions! To fix this issue we need to **resize** (or, more accurately, **resample**) our images so that the dimensions match (same number of voxels).

> ## Resampling
> Resampling is a method of *interpolating* in between data-points. When we stretch an image
> we need to figure out what goes in the spaces that are created via stretching - this is what
> resampling does!
> Similarly, when we squish an image, we have to toss out some pixels - resampling
> in this context figures out how to replace values in an image to best represent
> what the original larger image would have looked like
{: .callout}
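One way to picture this is a 1-D sketch in plain NumPy (nearest-neighbour only; real resampling works on 3-D grids and offers smarter interpolation schemes):

~~~
import numpy as np

def resample_nearest(values, new_len):
    # For each new sample position, pick the index of the closest
    # original sample (round to the nearest integer index)
    idx = np.round(np.linspace(0, len(values) - 1, new_len)).astype(int)
    return values[idx]

small = np.array([1.0, 2.0, 3.0, 4.0])

stretched = resample_nearest(small, 8)  # fills gaps by repeating neighbours
squished = resample_nearest(small, 2)   # tosses out in-between samples
~~~
{: .language-python}

Stretching repeats neighbouring values to fill the new positions, while squishing keeps only a subset of the originals.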

Let's implement **resampling** so that our functional image (called EPI) matches our T1 image.

For this section, we'll use two new files:

~~~
mni_epi = os.path.join(func_dir,'sub-10788_task-rest_bold_space-MNI152NLin2009cAsym_preproc.nii.gz')
~~~
{: .language-python}

Where:
- `mni_T1` is the standardized T1 image
- `mni_epi` is the standardized EPI image

First let's load in our data so we can examine it in more detail; remember, Nilearn will load the image as a Nibabel object:

~~~
EPI dimensions (65, 77, 49, 152)
~~~
{: .output}

This confirms our theory that the T1 image has a lot more voxels than the EPI image. Note that the 4th dimension of the EPI image holds timepoints, which we can safely ignore for now.
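To make that concrete, the EPI shape tuple separates cleanly into spatial voxels and timepoints (plain Python arithmetic on the shape printed above):

~~~
epi_shape = (65, 77, 49, 152)  # (x, y, z, time), as printed above

n_voxels_per_volume = epi_shape[0] * epi_shape[1] * epi_shape[2]
n_timepoints = epi_shape[3]

print(n_voxels_per_volume)  # 245245 voxels in each 3D volume
print(n_timepoints)         # 152 volumes over the scan
~~~
{: .language-python}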

We can resample an image using nilearn's `img.resample_to_img` function, which has the following structure:

`img.resample_to_img(source_img,target_img,interpolation)`
- `source_img` the image you want to resample
- `target_img` the image you wish to *resample to*
- `interpolation` the method of interpolation
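The difference interpolation makes can be sketched in 1-D with plain NumPy (toy numbers, not the real images): a nearest-neighbour scheme snaps to the closest known sample, while a continuous scheme (approximated here by simple linear interpolation) blends neighbouring samples.

~~~
import numpy as np

known_x = np.array([0.0, 1.0, 2.0])
known_y = np.array([10.0, 20.0, 40.0])

query = 0.25  # a point in between the known samples

linear = np.interp(query, known_x, known_y)            # blends: 12.5
nearest = known_y[np.argmin(np.abs(known_x - query))]  # snaps: 10.0
~~~
{: .language-python}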

> ## Interpolation
{: .callout}

~~~
resamp_t1 = img.resample_to_img(source_img=mni_T1, target_img=mni_epi, interpolation='continuous')
print("Resampled T1 dimensions", resamp_t1.shape)
print("EPI dimensions", mni_epi_img.shape)
~~~
{: .language-python}

~~~
Resampled T1 dimensions (65, 77, 49)
EPI dimensions (65, 77, 49, 152)
~~~
{: .output}

![image-title-here](../fig/resamp_t1.png){:class="img-responsive"}

As you might notice, we now have a blockier version of our T1 image: we've reduced the resolution to match that of the EPI image.

> ## Challenge
> Using the **Native T1** and **Resting State in T1 space** do the following:
> 1. Resample the Native T1 to match the Resting State image
> 2. Replace the brain in the T1 image with the first frame of the resting state brain
>
>
> Some files you'll need
> ~~~
> ex_T1 = os.path.join(t1_dir,'sub-10788_T1w_preproc.nii.gz')
> ~~~
> {: .language-python}
> > ## Solution
> >
> > ~~~
> > #Resample
> > resamp_t1 = img.resample_to_img(source_img=ex_T1,target_img=ex_func,interpolation='continuous')
> >
> > #Step 2: We need to resample the mask as well!
> > resamp_bm = img.resample_to_img(source_img=ex_bm,target_img=resamp_t1,interpolation='nearest')
> >
> > #Step 3: Mask out the T1 image
> > removed_t1 = img.math_img('a*(1-b)',a=resamp_t1,b=resamp_bm)
> >
> > #Visualize the resampled and removed brain
> > plot.plot_anat(removed_t1)
> >
> > ~~~
> > {: .language-python}
> >
> > ![image-title-here](../fig/removed_t1.png){:class="img-responsive"}
> >
> > ~~~
> > #Load in the first frame of the resting state image
> > func_img = img.load_img(ex_func)
> > first_func_img = func_img.slicer[:,:,:,0]
> >
> > #Mask the functional image and visualize
> > masked_func = img.math_img('a*b', a=first_func_img, b=ex_func_bm)
> > plot.plot_img(masked_func)
> > ~~~
> > {: .language-python}
> >
> > ![image-title-here](../fig/masked_func.png){:class="img-responsive"}
> >
> > ~~~
> > #Now overlay the functional image on top of the anatomical image missing its brain
> > combined_img = img.math_img('a+b', a=removed_t1, b=masked_func)
> > plot.plot_anat(combined_img)
> > ~~~
> > {: .language-python}
> >
> > ![image-title-here](../fig/combined_img.png){:class="img-responsive"}
> >
> {: .solution}
{: .challenge}

{% include links.md %}
