Updated episode 3 #22

Merged
merged 2 commits on May 22, 2021
8 changes: 6 additions & 2 deletions _episodes/01-Image_Modalities.md
@@ -23,13 +23,17 @@ keypoints:
7. The tissue-specific differences in T1 and T2 relaxation times are what enable us to _see_ anatomy from image contrast. The final image contrast depends on when you _listen_ to the signal (design parameter: echo time (TE)) and how fast you repeat the _tilt-relax_ process, i.e. the RF pulse frequency (design parameter: repetition time (TR)).


### T1 and T2 relaxation
## T1 and T2 relaxation
Here we see signal from two different tissues as the nuclei are tilted and realigned.
The figure on the left shows a single nucleus (i.e. tiny magnet) being tilted away and then precessing back to the initial alignment along B<sub>0</sub>. The figure on the right shows the corresponding registered T1 and T2 signal profiles for two different "tissues". The difference in their signal intensities results in the image contrast.

![MR_relax](https://user-images.githubusercontent.com/7978607/112332334-08750c80-8c90-11eb-90fc-33956c037a1c.gif)
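The relaxation curves in the animation can be sketched numerically. The following is a toy illustration with assumed (not measured) T1/T2 constants, just to show how the signals from two tissues separate over time:

~~~
import numpy as np

# Assumed, illustrative T1/T2 constants (ms) for two "tissues"
tissues = {"white matter": (600.0, 80.0), "gray matter": (950.0, 100.0)}

t = np.linspace(0, 3000, 601)  # time after the RF pulse, in ms

signals = {}
for name, (T1, T2) in tissues.items():
    Mz = 1 - np.exp(-t / T1)   # T1 (longitudinal) recovery
    Mxy = np.exp(-t / T2)      # T2 (transverse) decay
    signals[name] = (Mz, Mxy)

# The gap between the two tissues' curves at the readout time is the
# signal difference that ends up as image contrast
wm_recovery = signals["white matter"][0][t == 600][0]
gm_recovery = signals["gray matter"][0][t == 600][0]
print(wm_recovery > gm_recovery)
~~~
{: .language-python}

Plotting `Mz` and `Mxy` against `t` reproduces the qualitative shape of the signal profiles shown above.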

### T1w, T2w, and PD acquisition
## Brain tissue comparison

![relax_tissue_contrast](../fig/episode_1/relax_tissue_contrast.png)

## T1w, T2w, and PD acquisition

| | TE short | TE ~ T2 of tissue of interest|
| :-------------: | :----------: | :-----------: |
22 changes: 16 additions & 6 deletions _episodes/02-Image_Preproc_Part1.md
@@ -30,7 +30,7 @@ variations in the sensitivity of the reception coil, and the interaction between
- [FSL FAST](https://fsl.fmrib.ox.ac.uk/fsl/fslwiki/FAST) (_Note: FSL FAST is a multi-purpose segmentation tool that includes bias field correction._)


> ## Bias field correction
> ### Bias field correction quiz
>
> What is the difference between bias field and image noise?
>
@@ -43,7 +43,7 @@ variations in the sensitivity of the reception coil, and the interaction between


### ANTs N4 correction

(a) Acquired T1w image. (b) Estimated bias field, which can then be used to “correct” the image. (c) Bias field viewed as a surface to show the low-frequency modulation.
![N4_bias](../fig/episode_2/N4_bias.jpeg)
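As a conceptual sketch (not the actual N4 algorithm), the bias field is typically modeled as a smooth multiplicative modulation of the true image; once the field is estimated, dividing it out recovers the underlying intensities. All values below are synthetic:

~~~
import numpy as np

# Synthetic "anatomy" and a smooth, low-frequency multiplicative bias field
x, y = np.meshgrid(np.linspace(-1, 1, 64), np.linspace(-1, 1, 64))
true_image = np.where(x**2 + y**2 < 0.5, 100.0, 20.0)
bias_field = 1.0 + 0.3 * np.exp(-(x**2 + y**2))

observed = true_image * bias_field       # what the scanner records

# With a good estimate of the field, correction is a division
corrected = observed / bias_field
print(np.allclose(corrected, true_image))  # True
~~~
{: .language-python}

The hard part N4 solves is estimating `bias_field` from `observed` alone, under the assumption that the field is spatially smooth.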

#### Side-note: [ANTs](http://stnava.github.io/ANTs/) is a software suite comprising several tools and image-processing algorithms. ANTs can be run independently, or we can import ANTs scripts in Python using the [nipype](https://nipype.readthedocs.io/en/latest/) library.
@@ -73,14 +73,15 @@ n4.cmdline


### Impact of correction (_source: [Despotović et al.](https://www.hindawi.com/journals/cmmm/2015/450341/)_)
The top figure panel shows original and bias field corrected MR image slices. The middle figure panel shows the difference in intensty histograms for the two image slices. And the bottom figure panel shows the impact on subsequent image processing task of bias correction.
The top figure panel shows original and bias field corrected MR image slices. The middle figure panel shows the difference in the intensity histograms for the two image slices. And the bottom figure panel shows the impact of bias correction on a subsequent image segmentation task.

![bias_correction](../fig/episode_2/Despotovic_bias_correction.png)


### Visualizing "before" and "after" (see [this notebook](../code/2_sMRI_image_cleanup.ipynb) for detailed example.)
~~~
from nipype.interfaces.ants.segmentation import BrainExtraction
import nibabel as nib
from nilearn import plotting
~~~
{: .language-python}
@@ -113,13 +114,22 @@ the cerebral cortex and subcortical structures, including the brain stem and cer
- [FSL brain extraction tool (BET)](https://fsl.fmrib.ox.ac.uk/fsl/fslwiki/BET)


- Note 1: At this point we are NOT trying to extract the brain sulci and gyri (i.e. cortical folds). We are just creating a simple brain mask for computational purposes, which need not capture the precise brain anatomy. Thus you may see some marrow and membrane included in the extracted brain.
- Note 2: The brainstem and spinal cord are continuous, so a rather arbitrary cut-off point is selected.
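One such computational purpose is quantification: given a binary brain mask, total brain volume is just the voxel count times the volume of one voxel. A toy sketch, with an assumed voxel size:

~~~
import numpy as np

# Toy binary brain mask; the 1 x 1 x 1.2 mm voxel size is an assumption
mask = np.zeros((10, 10, 10), dtype=bool)
mask[2:8, 2:8, 2:8] = True                 # 6 * 6 * 6 = 216 "brain" voxels

voxel_volume_mm3 = 1.0 * 1.0 * 1.2
brain_volume_mm3 = mask.sum() * voxel_volume_mm3
print(brain_volume_mm3)
~~~
{: .language-python}

For a real image, the voxel dimensions come from the image header rather than being hard-coded.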

#### Example brain extractions: pass / fail
| Pass | Fail |
| :-------------: | :-----------: |
| ![nilearn_brain_orig](../fig/episode_2/BET_Brain_mask_QC_pass.png) | ![nilearn_brain_extract](../fig/episode_2/BET_Brain_mask_QC_fail.png) |

_Source: FSL Introduction to Brain Extraction_

> ## Bias field correction
> ## Brain extraction quiz
>
> Apart from stripping off non-brain tissue, what can the brain mask be used for?
>
> > ## Solution
> > Brain mask offers information about total brain volume - which can be used for quality control (i.e. identifying algorithm failures) as well as for brain-specific correction (normalization) for downstream statistical models. Althugh intracranial volume is more commonly used for the latter purpose.
> > The brain mask offers information about total brain volume - which can be used for quality control (i.e. identifying algorithm failures) as well as for brain-specific correction (normalization) in downstream statistical models. Although intracranial volume is a better and more commonly used measure for the latter purpose.
> >
> >
> {: .solution}
25 changes: 16 additions & 9 deletions _episodes/03-Image_Preproc_Part2.md
@@ -3,18 +3,19 @@ title: "Image preprocessing with smriprep (Part 2: spatial normalization)"
teaching: 20
exercises: 10
questions:
- "What are reference coordinate systems"
- "What are 'templates', 'atlases'?"
- "What is spatial normalization?"
- "What are 'templates', 'spaces', 'atlases'?"
objectives:
- "Understand reference spaces and registration process"
keypoints:
- "spatial normalization offer a way to map and compare brain anatomy across individuals, modalities, and timepoints"
- "Reference coordinate spaces and spatial normalization offer a way to map and compare brain anatomy across modalities, individuals, and studies"
---
## You Are Here!
![course_flow](../fig/episode_3/Course_flow_3.png)

## Why do we need spatial normalization?
- Compare and combine brain images across subjects and studies
- Compare and combine brain images across modalities, individuals, and studies

## What do we need for spatial normalization?
- A reference frame: A 3D space that assigns x,y,z coordinates to anatomical regions (independent of voxel dimensions!).
@@ -68,19 +69,18 @@ For examples:
- image coordinate: (0,0,0) ~ anatomical location: (100mm, 50mm, -25mm)
- The spacing between voxels along each axis: (1.5mm, 0.5mm, 0.5mm)

![slicer_coordinate_systems](../fig/episode_3/Slicer_Wiki_Voxel_Spacing.png)
<img src="../fig/episode_3/Slicer_Wiki_Voxel_Spacing.png" alt="Drawing" align="middle" width="500px"/>

#### _Image [source](https://www.slicer.org/wiki/Coordinate_systems)_
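The example numbers above can be wired into a 4x4 affine matrix. This sketch assumes axis-aligned direction vectors (real scans usually also encode a rotation):

~~~
import numpy as np

spacing = np.array([1.5, 0.5, 0.5])       # mm between voxels along i, j, k
origin = np.array([100.0, 50.0, -25.0])   # world position of voxel (0, 0, 0)

affine = np.eye(4)
affine[:3, :3] = np.diag(spacing)         # no rotation assumed
affine[:3, 3] = origin

def voxel_to_world(ijk):
    """Map an (i, j, k) voxel index to (x, y, z) in mm."""
    return (affine @ np.append(ijk, 1.0))[:3]

print(voxel_to_world([0, 0, 0]))    # the anatomical location (100, 50, -25)
print(voxel_to_world([10, 0, 0]))   # 10 voxels along i = 15 mm step in x
~~~
{: .language-python}

This is the same convention `nibabel` exposes for a loaded image via `img.affine`.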



> ## Coordinate systems
>
> What happens when you downsample an MR image?
>
> > ## Solution
> > Downsampling reduces the total number of voxels in the image. Consequently, the voxel spacing is increased, as more anatomical space is "sampled" by any given voxel.
> > Note that the new intensity values of the resampled voxels are determined by type of interpolation used.
> > Note that the new intensity values of the resampled voxels are determined based on the type of interpolation used.
> >
> {: .solution}
{: .challenge}
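The quiz answer can be illustrated numerically (shapes and spacing only; a real resampler would interpolate in 3D):

~~~
import numpy as np

img = np.arange(64, dtype=float).reshape(8, 8)
spacing = (1.0, 1.0)                       # mm, assumed

# Naive 2x downsampling by slicing (a nearest-neighbour flavour)
down = img[::2, ::2]
new_spacing = tuple(s * 2 for s in spacing)
print(down.shape, new_spacing)             # (4, 4) (2.0, 2.0)

# Averaging 2x2 blocks instead gives different resampled intensities:
# the new values depend on the interpolation choice
down_mean = img.reshape(4, 2, 4, 2).mean(axis=(1, 3))
print(down[0, 0], down_mean[0, 0])
~~~
{: .language-python}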
@@ -138,19 +138,21 @@ For examples:
- Transformations
- Image similarity metrics: correlation ratio (CR), cross-correlation (CC), mutual information (MI)
- Linear: global feature alignment
- Rigid (6 parameters): rotation, translation
- Rigid (6 parameters): rotation, translation
- Affine (12 parameters): rotation, translation, scaling, skewing
- Nonlinear (a.k.a. elastic): local feature alignment via warping
- Computationally intensive deformation models with a large number of parameters
- Employ diffeomorphic models that preserve topology and source-target symmetry

_Note: Linear registrations are often used as an initialization step for non-linear registration._

![registration_cartoon](../fig/episode_3/Registration.png)
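To make the parameter counts concrete, here is a sketch of a rigid (rotation + translation) versus affine (adds scaling/skew) mapping of a single point; the angle, offsets, and scale factors are arbitrary examples:

~~~
import numpy as np

theta = np.deg2rad(10.0)                  # example rotation about z
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
t = np.array([2.0, -1.0, 0.5])            # example translation (mm)

p = np.array([10.0, 0.0, 0.0])
p_rigid = Rz @ p + t                      # rigid: 3 rotations + 3 translations

S = np.diag([1.1, 0.9, 1.0])              # anisotropic scaling
p_affine = Rz @ S @ p + t                 # affine adds scale/skew (12 params)

# Rigid transforms preserve distances from the rotation centre
print(np.isclose(np.linalg.norm(p_rigid - t), np.linalg.norm(p)))  # True
~~~
{: .language-python}

Nonlinear methods replace the single matrix with a dense deformation field, which is where the parameter counts in the table below come from.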

- Commonly used algorithms

| Algorithm | Deformation | ~ parameters |
| :-------------: | :----------: | :-----------: |
| FLIRT | Linear | 9 |
| FSL FLIRT | Linear | 9 |
| ANIMAL | Non-linear (Local translation) | 69K |
| DARTEL Toolbox | Non-linear (diffeomorphic) | 6.4M |
| ANTs (SyN) | Non-linear (bi-directional diffeomorphic) | 28M |
@@ -168,7 +170,8 @@ For examples:
![nonlinear_deform_process](../fig/episode_3/Silcer_DeformOnly.gif)


> ## Image registration

> ## Image registration quiz
>
> What would the information from the non-linear deformation tell you about the subject?
>
@@ -213,4 +216,8 @@ Subject space to reference space mapping:

![nilearn_reg](../fig/episode_3/nilearn_registration.png)


### Subject space vs reference space: use cases
![subject_vs_ref_space](../fig/episode_3/Subject_vs_common_space.png)

{% include links.md %}
136 changes: 126 additions & 10 deletions code/1_sMRI_modalities.ipynb


18 changes: 9 additions & 9 deletions code/3_sMRI_spatial_norm.ipynb


Binary file added fig/episode_1/relax_tissue_contrast.png
Binary file added fig/episode_2/BET_Brain_mask_QC_fail.png
Binary file added fig/episode_2/BET_Brain_mask_QC_pass.png
Binary file added fig/episode_3/FS_subject_T1.jpg
Binary file modified fig/episode_3/Registration.png
Binary file added fig/episode_3/Subject_vs_common_space.png
Binary file added fig/episode_3/annot-destrieux.jpg
Binary file added fig/episode_3/subject_T1.png
Binary file added fig/episode_3/subject_space_segmentation.png