diff --git a/_episodes/05-Statistical_Analysis.md b/_episodes/05-Statistical_Analysis.md index e773a39b..1902e60b 100644 --- a/_episodes/05-Statistical_Analysis.md +++ b/_episodes/05-Statistical_Analysis.md @@ -66,11 +66,10 @@ plotting.plot_prob_atlas(atlas_filename, title="Pauli 2017") #### 5.1.1.2. Regional volumetric analysis Software such as FSL and FreeSurfer can be used for segmentation of regions of interest. For us to conveniently use such pipelines on this platform, we use [NiPype](https://nipype.readthedocs.io/en/0.12.0/index.html), which is an open-source Python project that provides a uniform interface to existing neuroimaging software. - Several interfaces available through NiPype may be used for this task. We will be looking at one example for the sake of simplicity. #### • Using FSL to segment a region of interest -We use a single sMRI from the haxby dataset and make a copy of it in our current working directory to make processing easier. +We use a single sMRI ([```structural.nii.gz```](/_extras/5_OtherFiles/structural.nii.gz)) from the haxby dataset and make a copy of it in our current working directory to make processing easier. ``` haxby_dataset = datasets.fetch_haxby() anat_file = haxby_dataset.anat[0] ``` @@ -213,141 +212,6 @@ For this subject, the segmented left hippocampal volume is: ``` 4287 mm^3 ``` Once a dataset is processed, the volumes of each ROI can be collected and included in a .csv file (or other formats you prefer). As processing takes time, for this example we use processed FreeSurfer ROI outputs that are available on the OASIS website. The summarized FreeSurfer outputs from the OASIS1 dataset can be downloaded [here](/_extras/5_OtherFiles/OASIS_FS_ASEG.csv). - -Several interfaces available through NiPype maybe used for this task. We will be looking at one example for the sake of simplicity. - -##### Using FSL to segment a region of interest -We use a single sMRI from the haxby dataset and make a copy of it in our current working directory to make processing easier. -``` -haxby_dataset = datasets.fetch_haxby() -anat_file = haxby_dataset.anat[0] -``` -``` -import shutil -shutil.copyfile(anat_file,os.path.join(os.getcwd(),'structural.nii.gz')) -``` -FSL First can be used for segmentation of the required structures. For this example, we focus on the left hippocampus (a subcortical structure). -``` -first = fsl.FIRST() -first.inputs.in_file = 'structural.nii.gz' -first.inputs.out_file = 'segmented.nii.gz' -first.inputs.list_of_specific_structures=['L_Hipp'] -res = first.run() -``` -If you need to segment for all the structures provided, simply comment ```first.inputs.list_of_specific_structures=['L_Hipp']```. -Once the process is complete, the segmented output can be viewed as follows: -``` -T1w_img =nib.load('structural.nii.gz') -seg_labels=nib.load('segmented-L_Hipp_first.nii.gz') -from nilearn import plotting -plotting.plot_roi(roi_img=seg_labels, bg_img=T1w_img, alpha=0.9, cmap="cool",dim=-.5); -``` -Within your working directory, there will be several output files created by FSL. Here, the _first.nii.gz_ is the original output, while the _corr.nii.gz_ files may have had boundary correction applied to them (depending on the structure). -(Fun fact: the value assigned to _dim_ in _plotting_roi_ controls the visibility of the background image). - - -The volume of the segemented region can be found using _imagestats_. - -(You will also be able to notice a considerable difference in volume estimations in _first.nii.gz_ and _corr.nii.gz_.)
- -``` -import nibabel.imagestats as imagestats -imagestats.mask_volume(nib.Nifti1Image(seg_labels.get_fdata(), np.eye(4))) -``` - -``` -OUT[]: 4189.0 -``` -💡 **Exercise 5.1**: Can you follow the same steps above to segment a different structure of interest? - -##### Using FreeSurfer to segment regions of interest -Similar to FSL, FreeSurfer can be used through the NiPype interface as well. The ```recon-all``` process on freesurfer allows us to obtain all or any part of the cortical reconstruction process. This process is fairly time consuming. The code below, adapted from the [NiPype beginners guide](https://miykael.github.io/nipype-beginner-s-guide/prepareData.html), can be used to achieve this. - -``` -# Import modules -import os -from os.path import join as opj -from nipype.interfaces.freesurfer import ReconAll -from nipype.interfaces.utility import IdentityInterface -from nipype.pipeline.engine import Workflow, Node - -# Specify important variables -experiment_dir = '~/nipype_tutorial' # location of experiment folder -data_dir = opj(experiment_dir, 'data') # location of data folder -fs_folder = opj(experiment_dir, 'freesurfer') # location of freesurfer folder - -subject_list = ['sub001', 'sub002', 'sub003', 'sub004', 'sub005', 'sub006', - 'sub007', 'sub008', 'sub009','sub010'] # subject identifier - -T1_identifier = 'struct.nii.gz' # Name of T1-weighted image - -# Create the output folder - FreeSurfer can only run if this folder exists -os.system('mkdir -p %s'%fs_folder) - -# Create the pipeline that runs the recon-all command -reconflow = Workflow(name="reconflow") -reconflow.base_dir = opj(experiment_dir, 'workingdir_reconflow') - -# Some magical stuff happens here (not important for now) -infosource = Node(IdentityInterface(fields=['subject_id']), - name="infosource") -infosource.iterables = ('subject_id', subject_list) - -# This node represents the actual recon-all command -reconall = Node(ReconAll(directive='all', subjects_dir=fs_folder), name="reconall") - -# This function returns for each subject the path to struct.nii.gz -def pathfinder(subject, foldername, filename): - from os.path import join as opj - struct_path = opj(foldername, subject, filename) - return struct_path - -# This section connects all the nodes of the pipeline to each other -reconflow.connect([(infosource, reconall, [('subject_id', 'subject_id')]), - (infosource, reconall, [(('subject_id', pathfinder, - data_dir, T1_identifier), - 'T1_files')]), - ]) - -# This command runs the recon-all pipeline in parallel (using 8 cores) -reconflow.run('MultiProc', plugin_args={'n_procs': 8}) - -``` -If you want to use the same dataset made available by nipype, you can download and arrange the data by running [this script](/5_OtherFiles/5_Download_NiPypeTutorial_Data.sh). -Also, make sure that your ```$FREESURFER_HOME``` (```path/to/freesufer/location```) and ```$SUBJECTS_DIR``` (```/path/to/subjects/outputs``` e.g. ```SUBJECTS_DIR=~/nipype_tutorial/freesurfer```) paths are set properly. - -Once the process in complete, your folder structure containing original data and output files will be available in your working directory/SUBJECTS_DIR. -
Click here to take a look at the overview of the folder structure. - -``` -nipype_tutorial -|_data - |_sub001 - |_sub002 - ... - |_sub010 -|_freesurfer - |_sub001 - |_label - |_mri - |_scripts - |_stats - |_surf - |_tmp - |_touch - |_trash - ... - |_sub010 -``` -
- -Considering a single subject: The required stats could be found within the respective folders. Segmentation statistics of subcortical structures can be found in _aseg.stats_ . When using FreeSurfer, the segmented left hippocampal volume is: ``` 4287 mm^3 ``` - -##### Volumetric Analysis: ROI differences in Young, Middle Aged, Nondemented and Demented Older Adults -Once a dataset is processed, the volumes of each ROI can be collected and included in a .csv file (or other formats you prefer). - -As processing takes time, for this example we use processed freesurfer outputs for ROI that is available on the OASIS website. The summarized freesurfer outputs from the OASIS1 dataset can be downloaded [here](/5_OtherFiles/OASIS_FS_ASEG.csv). - Older adults who are demented at the time of scanning, and those who are progressing towards dementia, have been given a Clinical Dementia Rating (CDR). We can observe the ROI volumetric differences in adults and how these volumes vary based on their CDR. For this example, we consider 6 regions of interest: Left/Right Amygdala, Hippocampus and Lateral ventricle. @@ -371,7 +235,6 @@ sns.boxplot(ax=axes[1, 2], data=oasis_aseg, x='CDR', y='Left-Lateral-Ventricle Volume') We can observe that the ROI volumes are smaller when the subject is likely to have a higher CDR. - 💡 **Exercise 5.2**: Can you find the effect size for the ROIs in adults over 60? (Download the .csv [here](/_extras/5_OtherFiles/OASIS_FS_ASEG_OVER60.csv)).
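For Exercise 5.2, one possible starting point (a sketch, not a reference solution) is to compare each ROI between the CDR = 0 and CDR > 0 groups using Cohen's d. Only the 'CDR' and 'Left-Lateral-Ventricle Volume' column names are confirmed by the boxplot code above; the remaining ROI column names below follow the same pattern and are assumptions, so adjust them to match the downloaded .csv.

```
import numpy as np
import pandas as pd

# Summarized FreeSurfer outputs for adults over 60 (file from the exercise link above).
oasis_over60 = pd.read_csv('OASIS_FS_ASEG_OVER60.csv')

def cohens_d(a, b):
    """Cohen's d using a pooled standard deviation."""
    pooled_sd = np.sqrt(((len(a) - 1) * a.std(ddof=1) ** 2 +
                         (len(b) - 1) * b.std(ddof=1) ** 2) / (len(a) + len(b) - 2))
    return (a.mean() - b.mean()) / pooled_sd

# Assumed column names; only 'CDR' and 'Left-Lateral-Ventricle Volume' appear in the episode code.
rois = ['Left-Hippocampus Volume', 'Right-Hippocampus Volume',
        'Left-Amygdala Volume', 'Right-Amygdala Volume',
        'Left-Lateral-Ventricle Volume', 'Right-Lateral-Ventricle Volume']

nondemented = oasis_over60[oasis_over60['CDR'] == 0]
demented = oasis_over60[oasis_over60['CDR'] > 0]   # CDR 0.5, 1 and 2 grouped together

for roi in rois:
    print(roi, round(cohens_d(nondemented[roi].dropna(), demented[roi].dropna()), 2))
```

Grouping all non-zero CDR values together is a simplification; computing the effect size separately for each CDR level is an equally reasonable choice.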
@@ -399,7 +262,6 @@ The output we got looks like: Click [here](/_extras/5_OtherFiles/5_RelatedStudies_statAnalysis.md) to look at related analyses from studies! - ### 5.1.2. Cortical surface parcellations Cortical surfaces can be parcellated into anatomically and functionally meaningful regions. This facilitates the identification and characterization of morphometric and connectivity alterations in the brain that may occur as a result of disease or aging. For example, utilities in the FreeSurfer software include a technique for automatically assigning a neuroanatomical label to each location on a cortical surface model based on probabilistic information estimated from a manually labeled training set. As this procedure incorporates both geometric information derived from the cortical model and neuroanatomical convention, the result is a complete labeling of cortical sulci and gyri. @@ -422,7 +284,6 @@ plotting.plot_surf_roi(fsaverage['pial_left'], roi_map=parcellation,hemi='left', ``` - #### 5.1.2.2. Regional cortical thickness analysis #### • Using FreeSurfer to find Cortical thicknesses @@ -545,7 +406,7 @@ The process using **SPM** mainly involves three steps: **1. Using NewSegment** -To observe the outputs over these tasks, we have used a single T1w image ```structural.nii```. +To observe the outputs over these tasks, we have used a single T1w image [```structural.nii```](/_extras/5_OtherFiles/structural.nii). This lets us separate structural images into different tissue classes. Tissue types identified by this process are grey matter (c1), white matter (c2) and CSF (c3), where an output image with the filename format _cXstructural.nii_ will be generated for each tissue type (a short sketch that estimates grey-matter volume from these outputs is given after the related citations below). @@ -582,7 +443,8 @@ Outputs from this step will be as follows: This step is performed to increase the accuracy of inter-subject alignment by modelling the shape of each brain using numerous parameters (specifically, 3 parameters per voxel). During this process, gray matter between images is aligned, while white matter is simultaneously aligned. Average template data are generated as a result, and data are iteratively assigned to this. The inputs to this stage are the “rc1” and “rc2” images that are generated in the previous step. The outputs generated during this stage include the “u_rc1” file, as well as a series of template images. -This step can be processed using the following code. +Repeat step one for two structural images [```structural_1.nii```](/_extras/5_OtherFiles/structural_1.nii) and [```structural_2.nii```](/_extras/5_OtherFiles/structural_2.nii). +Then process the generated _rcX_ output files using the following code. ``` rc1_NewSeg1 = nib.load('rc1structural_1.nii') @@ -733,4 +595,3 @@ show() Related citations: \ "[Voxel-Based Morphometry—The Methods](https://www.fil.ion.ucl.ac.uk/~karl/Voxel-Based%20Morphometry.pdf)", John Ashburner and Karl J. Friston, NeuroImage (2000). \ "[Voxel Based Morphometry of the Human Brain: Methods and Applications](https://www.fil.ion.ucl.ac.uk/spm/doc/papers/am_vbmreview.pdf)", Andrea Mechelli, Cathy Price, Karl Friston, John Ashburner, Curr. Med. Imaging Reviews (2005).
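As an optional aside to the NewSegment step above (not part of the original SPM workflow): once the tissue-class images exist, an approximate grey-matter volume can be read directly off the c1 probability map with nibabel. The filename ```c1structural.nii``` is the one implied by the _cXstructural.nii_ naming convention described earlier; adjust it if your output differs.

```
import numpy as np
import nibabel as nib

# Grey-matter probability map written by NewSegment (c1 prefix, per the cX naming convention).
gm_map = nib.load('c1structural.nii')

voxel_volume_mm3 = np.prod(gm_map.header.get_zooms()[:3])    # voxel size in mm^3
gm_volume_mm3 = gm_map.get_fdata().sum() * voxel_volume_mm3  # sum of probabilities x voxel volume

print(round(gm_volume_mm3), 'mm^3')
```

Summing the probabilities keeps partial-volume voxels in the estimate; thresholding the map (for example at 0.5) before counting voxels is a common alternative.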
- diff --git a/_extras/5_OtherFiles/structural.nii b/_extras/5_OtherFiles/structural.nii new file mode 100644 index 00000000..574877a9 Binary files /dev/null and b/_extras/5_OtherFiles/structural.nii differ diff --git a/_extras/5_OtherFiles/structural.nii.gz b/_extras/5_OtherFiles/structural.nii.gz new file mode 100644 index 00000000..c2b493d9 Binary files /dev/null and b/_extras/5_OtherFiles/structural.nii.gz differ diff --git a/_extras/5_OtherFiles/structural_1.nii b/_extras/5_OtherFiles/structural_1.nii new file mode 100644 index 00000000..17a3fd7f Binary files /dev/null and b/_extras/5_OtherFiles/structural_1.nii differ diff --git a/_extras/5_OtherFiles/structural_2.nii b/_extras/5_OtherFiles/structural_2.nii new file mode 100644 index 00000000..0d7e8b95 Binary files /dev/null and b/_extras/5_OtherFiles/structural_2.nii differ diff --git a/fig/episode_5/5_VBM_PP_DARTELinput.png b/fig/episode_5/5_VBM_PP_DARTELinput.png index 11bf27ed..4a564e41 100644 Binary files a/fig/episode_5/5_VBM_PP_DARTELinput.png and b/fig/episode_5/5_VBM_PP_DARTELinput.png differ diff --git a/fig/episode_5/5_VBM_PP_NewSeg.png b/fig/episode_5/5_VBM_PP_NewSeg.png index 86b181c2..59b185ce 100644 Binary files a/fig/episode_5/5_VBM_PP_NewSeg.png and b/fig/episode_5/5_VBM_PP_NewSeg.png differ