diff --git a/_episodes/01-neuroimaging-fundamentals.md b/_episodes/01-neuroimaging-fundamentals.md
index 3d58c5fb..ecceea4f 100644
--- a/_episodes/01-neuroimaging-fundamentals.md
+++ b/_episodes/01-neuroimaging-fundamentals.md
@@ -20,7 +20,7 @@ keypoints:

## Types of MR Scans

-![mr-scan-types]({{ site.url }}/fig/mr_scan_types.png){:class="img-responsive"}
+![mr-scan-types](../fig/mr_scan_types.png){:class="img-responsive"}

For this tutorial, we'll be focusing on T1w and resting state fMRI scans.

@@ -34,7 +34,7 @@ For this tutorial, we'll be focusing on T1w and resting state fMRI scans.
| MINC | .mnc | Montreal Neurological Institute |
| NRRD | .nrrd | |

-Drawing
+![dicom-to-nifti](../fig/dicom_to_nifti.png){:class="img-responsive"}

From the MRI scanner, images are initially collected in the DICOM format and can be converted to NIfTI using [dcm2niix](https://github.com/rordenlab/dcm2niix).

diff --git a/_episodes/02-intro-nilearn.md b/_episodes/02-intro-nilearn.md
index 5227cd8a..d9b49bcf 100644
--- a/_episodes/02-intro-nilearn.md
+++ b/_episodes/02-intro-nilearn.md
@@ -16,7 +16,7 @@ keypoints:

Nilearn is a functional neuroimaging analysis and visualization library that wraps up a whole bunch of high-level operations (machine learning, statistical analysis, data cleaning, etc.) in easy-to-use commands. The neat thing about Nilearn is that it implements Nibabel under the hood, so everything you do in Nilearn can be represented as a set of operations on Nibabel objects. This has the important consequence of letting you perform high-level operations (like resampling) in Nilearn, drop into Nibabel for more custom data processing, and then jump back up to Nilearn for interactive image viewing. Pretty cool!

-# Setting up 
+# Setting up

The first thing we'll do is to import some Python modules that will allow us to use Nilearn:

@@ -37,7 +37,7 @@ First let's grab some data from where we downloaded our **FMRIPREP** outputs:

~~~
fmriprep_dir='../data/ds000030/derivatives/fmriprep/{subject}/{mod}/'
-t1_dir = fmriprep_dir.format(subject='sub-10788', mod='anat') 
+t1_dir = fmriprep_dir.format(subject='sub-10788', mod='anat')
func_dir = fmriprep_dir.format(subject='sub-10788', mod='func')
~~~
{: .language-python}

@@ -74,7 +74,7 @@ os.listdir(t1_dir
### Basic Image Operations

In this section we're going to deal with the following files:
-1. `sub-10788_T1w_preproc.nii.gz` - the T1 image in native space 
+1. `sub-10788_T1w_preproc.nii.gz` - the T1 image in native space
2. `sub-10788_T1w_brainmask.nii.gz` - a mask with 1's representing the brain, and 0's elsewhere

~~~
@@ -90,7 +90,7 @@ plot.plot_anat(T1)
~~~
{: .language-python}

-![image-title-here]({{ site.url }}/fig/t1_img.png){:class="img-responsive"}
+![image-title-here](../fig/t1_img.png){:class="img-responsive"}

Try viewing the mask as well!

@@ -109,7 +109,7 @@ plot.plot_anat(invert_img)
~~~
{: .language-python}

-![image-title-here]({{ site.url }}/fig/invert_img.png){:class="img-responsive"}
+![image-title-here](../fig/invert_img.png){:class="img-responsive"}

### Applying a Mask
Let's extend this idea of applying operations to each element of an image to multiple images. Instead of specifying just one image like the following:

@@ -120,7 +120,7 @@ We can specify multiple images by tacking on additional variables:

`img.math_img('a+b', a=img_a, b=img_b)`

-The key requirement here is that when dealing with multiple images, the images must be the same *size*. This is because we're dealing with **element-wise** operations: voxel (i,j,k) in `img_a` is paired with voxel (i,j,k) in `img_b` when performing operations, so every voxel in `img_a` must have a partner voxel in `img_b`; the sizes must match. 
+The key requirement here is that when dealing with multiple images, the images must be the same *size*. This is because we're dealing with **element-wise** operations: voxel (i,j,k) in `img_a` is paired with voxel (i,j,k) in `img_b` when performing operations, so every voxel in `img_a` must have a partner voxel in `img_b`; the sizes must match.
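A quick way to convince yourself of this is a minimal sketch with two tiny synthetic images (the names `img_a`/`img_b` echo the snippet above; the shapes and values are made up purely for illustration):

~~~
import numpy as np
import nibabel as nib
from nilearn import image as img

# Two images on the same 4x4x4 grid (same shape, same affine)
img_a = nib.Nifti1Image(np.ones((4, 4, 4)), affine=np.eye(4))
img_b = nib.Nifti1Image(np.full((4, 4, 4), 2.0), affine=np.eye(4))

# Element-wise sum: result[i,j,k] = a[i,j,k] + b[i,j,k]
summed = img.math_img('a + b', a=img_a, b=img_b)
print(summed.get_fdata()[0, 0, 0])  # 3.0
~~~
{: .language-python}

If the two shapes differed, there would be unpaired voxels and the operation could not proceed.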
We can take advantage of this property when masking our data using multiplication. Masking works by multiplying a raw image (our `T1`) with a mask image (our `bm`). Wherever the mask has a value of 0 at voxel (i,j,k), the product with the raw image at that voxel is 0. Conversely, wherever the mask has a value of 1, the raw image's value at that voxel is kept as-is. Let's try this out in practice and see what the result is:

@@ -130,20 +130,20 @@ plot.plot_anat(masked_T1)
~~~
{: .language-python}

-![image-title-here]({{ site.url }}/fig/masked_t1.png){:class="img-responsive"}
+![image-title-here](../fig/masked_t1.png){:class="img-responsive"}

As you can see, areas where the mask image had a value of 1 were retained; everything else was set to 0.

> ## Exercise #1
> Try applying the mask such that the brain is removed, but the rest of the head is intact!
->
+>
> > ## Solution
> > ~~~
> > inverted_mask_t1 = img.math_img('a*(1-b)', a=T1, b=bm)
> > plot.plot_anat(inverted_mask_t1)
> > ~~~
> > {: .language-python}
-> > ![image-title-here]({{ site.url }}/fig/inverted_mask_t1.png){:class="img-responsive"}
+> > ![image-title-here](../fig/inverted_mask_t1.png){:class="img-responsive"}
> {: .solution}
{: .challenge}

@@ -157,18 +157,18 @@ Recall from the previous lesson using Nibabel to explore neuroimaging data:

Imagine that you have two images: one is a `256x256` JPEG, the other a `1024x1024` JPEG. If you were to load both images into Paint, Photoshop, or any other image viewer, the first JPEG would show up a lot smaller than the second one. To make both images perfectly overlay each other, you could resize one of them, for example by shrinking the larger high-resolution JPEG (1024x1024) down to the size of the smaller low-resolution JPEG.

-This JPEG problem is analogous to our situation! The T1 image has smaller voxels (higher resolution), and the functional image has larger voxels (lower resolution). Both images represent the same real object and so must be the same size (in mm). Therefore you need more T1 voxels than functional voxels to represent the same brain: you have a mismatch in the dimensions! To fix this issue we need to **resize** (or more accurately **resample**) our images so that the dimensions match (same number of voxels). 
+This JPEG problem is analogous to our situation! The T1 image has smaller voxels (higher resolution), and the functional image has larger voxels (lower resolution). Both images represent the same real object and so must be the same size (in mm). Therefore you need more T1 voxels than functional voxels to represent the same brain: you have a mismatch in the dimensions!
+To fix this issue we need to **resize** (or more accurately **resample**) our images so that the dimensions match (same number of voxels).
> ## Resampling
-> Resampling is a method of *interpolating* in between data-points. When we stretch an image 
+> Resampling is a method of *interpolating* in between data-points. When we stretch an image
> we need to figure out what goes in the spaces that are created via stretching - this is what
-> resampling does! 
-> Similarly, when we squish an image, we have to toss out some pixels - resampling 
-> in this context figures out how to replace values in an image to best represent 
+> resampling does!
+> Similarly, when we squish an image, we have to toss out some pixels - resampling
+> in this context figures out how to replace values in an image to best represent
> what the original larger image would have looked like
{: .callout}

-Let's implement **resampling** so that our T1 image matches the dimensions of our functional image (called EPI). 
+Let's implement **resampling** so that our T1 image matches the dimensions of our functional image (called EPI).

For this section, we'll use two new files:

~~~
@@ -179,8 +179,8 @@ mni_epi = os.path.join(func_dir,'sub-10788_task-rest_bold_space-MNI152NLin2009cA
{: .language-python}

Where:
-- `mni_T1` now is the standardized T1 image 
-- `mni_epi` now is the standardized EPI image 
+- `mni_T1` now is the standardized T1 image
+- `mni_epi` now is the standardized EPI image

First let's load in our data so we can examine it in more detail; remember, Nilearn will load the image in as a Nibabel object:

@@ -201,13 +201,13 @@ EPI dimensions (65, 77, 49, 152)
~~~
{: .output}

-This confirms our theory that the T1 image has a lot more voxels than the EPI image. Note that the 4th dimension of the EPI image is timepoints, which we can safely ignore for now. 
+This confirms our theory that the T1 image has a lot more voxels than the EPI image. Note that the 4th dimension of the EPI image is timepoints, which we can safely ignore for now.

We can resample an image using nilearn's `img.resample_to_img` function, which has the following structure:

`img.resample_to_img(source_img,target_img,interpolation)`
- `source_img` the image you want to resample
-- `target_img` the image you wish to *resample to* 
+- `target_img` the image you wish to *resample to*
- `interpolation` the method of interpolation

> ## Interpolation
@@ -227,20 +227,20 @@ print("EPI dimensions", mni_epi_img.shape)
{: .language-python}

~~~
-Resampled T1 dimensions (65, 77, 49) 
+Resampled T1 dimensions (65, 77, 49)
EPI dimensions (65, 77, 49, 152)
~~~
{: .output}

-![image-title-here]({{ site.url }}/fig/resamp_t1.png){:class="img-responsive"}
+![image-title-here](../fig/resamp_t1.png){:class="img-responsive"}

-As you might notice, we have a blockier version of our T1 image -- we've reduced the resolution to match that of the EPI image. 
+As you might notice, we have a blockier version of our T1 image -- we've reduced the resolution to match that of the EPI image.
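For reference, the resampling call that produces this blockier image looks roughly like the following sketch. The variable names `mni_t1_img` and `resamp_t1` are assumptions on our part; only `mni_epi_img` appears verbatim in the lesson's printout above:

~~~
# Resample the standardized T1 onto the EPI voxel grid.
# 'continuous' interpolation suits continuous intensity data like a T1.
resamp_t1 = img.resample_to_img(source_img=mni_t1_img,
                                target_img=mni_epi_img,
                                interpolation='continuous')
print("Resampled T1 dimensions", resamp_t1.shape)
~~~
{: .language-python}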
> ## Challenge
> Using the **Native T1** and **Resting State in T1 space** do the following:
> 1. Resample the Native T1 to match the Resting State image
> 2. Replace the brain in the T1 image with the first frame of the resting state brain
->
+>
> Some files you'll need
> ~~~
> ex_T1 = os.path.join(t1_dir,'sub-10788_T1w_preproc.nii.gz')
@@ -250,50 +250,49 @@
> ~~~
> {: .language-python}
> > ## Solution
-> >
+> >
> > ~~~
> > #Resample
-> > resamp_t1 = img.resample_to_img(source_img=ex_T1,target_img=ex_func,interpolation='continuous') 
-> >
+> > resamp_t1 = img.resample_to_img(source_img=ex_T1,target_img=ex_func,interpolation='continuous')
+> >
> > #Step 2: We need to resample the mask as well!
> > resamp_bm = img.resample_to_img(source_img=ex_bm,target_img=resamp_t1,interpolation='nearest')
-> >
+> >
> > #Step 3: Mask out the T1 image
> > removed_t1 = img.math_img('a*(1-b)',a=resamp_t1,b=resamp_bm)
-> >
+> >
> > #Visualize the resampled and removed brain
> > plot.plot_anat(removed_t1)
-> >
+> >
> > ~~~
> > {: .language-python}
-> >
-> > ![image-title-here]({{ site.url }}/fig/removed_t1.png){:class="img-responsive"}
-> >
+> >
+> > ![image-title-here](../fig/removed_t1.png){:class="img-responsive"}
+> >
> > ~~~
> > #Load in the first frame of the resting state image
> > func_img = img.load_img(ex_func)
> > first_func_img = func_img.slicer[:,:,:,0]
-> >
+> >
> > #Mask the functional image and visualize
> > masked_func = img.math_img('a*b', a=first_func_img, b=ex_func_bm)
> > plot.plot_img(masked_func)
> > ~~~
> > {: .language-python}
-> >
-> > ![image-title-here]({{ site.url }}/fig/masked_func.png){:class="img-responsive"}
-> >
+> >
+> > ![image-title-here](../fig/masked_func.png){:class="img-responsive"}
+> >
> > Now overlay the functional image on top of the anatomical image missing the brain:
> > ~~~
> > combined_img = img.math_img('a+b', a=removed_t1, b=masked_func)
> > plot.plot_anat(combined_img)
> > ~~~
> > {: .language-python}
-> >
-> >
-> > ![image-title-here]({{ site.url }}/fig/combined_img.png){:class="img-responsive"}
-> >
+> >
+> >
+> > ![image-title-here](../fig/combined_img.png){:class="img-responsive"}
+> >
> {: .solution}
{: .challenge}

{% include links.md %}
-

diff --git a/_episodes/06-apply-a-parcellation.md b/_episodes/06-apply-a-parcellation.md
index 0d03759a..1efe57f3 100644
--- a/_episodes/06-apply-a-parcellation.md
+++ b/_episodes/06-apply-a-parcellation.md
@@ -11,30 +11,30 @@ objectives:
keypoints:
- "Parcellations group voxels based on criteria such as similarity or orthogonality"
- "Nilearn stores several standard parcellations that can be applied to your data"
-- "Parcellations are defined by assigning each voxel a parcel 'membership' value telling you which parcel the voxel belongs to" 
+- "Parcellations are defined by assigning each voxel a parcel 'membership' value telling you which parcel the voxel belongs to"
- "Parcellations provide an interpretative framework for understanding resting state data. But beware, some of the techniques used to form parcellations may not represent actual brain functional units!"
---

# Introduction

-## What is a Brain Atlas or Parcellation? 
+## What is a Brain Atlas or Parcellation?

A brain atlas/parcellation is a voxel-based labelling of your data into "structural or functional units". In a parcellation schema each voxel is assigned a numeric (integer) label corresponding to the structural/functional unit that the particular voxel is thought to belong to based on some criteria.
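To make this concrete, here is a toy sketch (pure NumPy, synthetic data, not a real atlas) of a labelled volume and the per-parcel averaging discussed just below:

~~~
import numpy as np

# A synthetic 4x4x4 "brain" plus an integer label for every voxel
# (0 = background; 1 and 2 = two made-up parcels)
data = np.random.rand(4, 4, 4)
labels = np.zeros((4, 4, 4), dtype=int)
labels[:2, :, :] = 1   # front half -> parcel 1
labels[2:, :, :] = 2   # back half  -> parcel 2

# Averaging all voxels that share a label gives one value per parcel
parcel_means = {p: data[labels == p].mean() for p in (1, 2)}
print(parcel_means)
~~~
{: .language-python}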
You might wonder why someone would simply *average together a bunch of voxels* in a way that reduces the richness of the data. This boils down to a few problems inherent to functional brain imaging:

1. Resting state data is noisy; averaging groups of "similar" voxels reduces the effect of random noise
2. Parcels provide an interpretative framework for functional imaging data. For example, one parcellation group might be defined as the Default Mode Network, which is thought to be functionally significant, so averaging the voxels belonging to the Default Mode Network provides an average estimate of the Default Mode Network signal. In addition, the discovery of the Default Mode Network has yielded important insights into the organizational principles of the brain.
-3. Averaging limits the number of statistical tests, thereby reducing potential Type I errors without resorting to strong statistical correction techniques that might reduce statistical power. 
+3. Averaging limits the number of statistical tests, thereby reducing potential Type I errors without resorting to strong statistical correction techniques that might reduce statistical power.
4. It gives a simpler way to visualize your data: instead of 40x40x40 = 64,000 data points, you might have 17 or up to 200; this is still significantly less data to deal with!

## Applying a Parcellation to your Data

-Since the parcellation of a brain is defined (currently) by spatial locations, applying a parcellation to fMRI data only concerns the first 3 dimensions; the last dimension (time) is retained. Thus a parcellation assigns every voxel (x,y,z) to a particular parcel ID (an integer). 
+Since the parcellation of a brain is defined (currently) by spatial locations, applying a parcellation to fMRI data only concerns the first 3 dimensions; the last dimension (time) is retained. Thus a parcellation assigns every voxel (x,y,z) to a particular parcel ID (an integer).

-Nilearn supports a large selection of different atlases that can be found [here](http://nilearn.github.io/modules/reference.html#module-nilearn.datasets). For information about how to select which parcellation to use for analysis of your data, we refer you to Arslan et al. 2018. 
+Nilearn supports a large selection of different atlases that can be found [here](http://nilearn.github.io/modules/reference.html#module-nilearn.datasets). For information about how to select which parcellation to use for analysis of your data, we refer you to Arslan et al. 2018.

### Retrieving the Atlas

-For this tutorial we'll be using a set of parcellations from [Yeo et al. 2011](link). This atlas was generated from fMRI data from 1000 healthy control participants. 
+For this tutorial we'll be using a set of parcellations from [Yeo et al. 2011](link). This atlas was generated from fMRI data from 1000 healthy control participants.
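As a preview, retrieving the atlas in Nilearn is a one-liner. This is a minimal sketch: `fetch_atlas_yeo_2011` is Nilearn's real fetcher, while the variable name `atlas_yeo_2011` simply matches its later use in this lesson; the full setup follows below:

~~~
from nilearn import datasets

# Downloads the atlas on first use; the returned bunch maps variant names
# (e.g. 'thin_7', 'thick_7', 'thin_17', 'thick_17') to NIfTI file paths
atlas_yeo_2011 = datasets.fetch_atlas_yeo_2011()
print(atlas_yeo_2011['thick_7'])
~~~
{: .language-python}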
-First we'll load in our packages as usual: 
+First we'll load in our packages as usual:

~~~
import numpy as np
@@ -84,12 +84,12 @@ plotting.plot_roi(atlas_yeo_2011['thick_17'], cut_coords=cut_coords, colorbar=co
~~~
{: .language-python}

-![image-title-here]({{ site.url }}/fig/thin_7.png){:class="img-responsive"}
-![image-title-here]({{ site.url }}/fig/thin_17.png){:class="img-responsive"}
-![image-title-here]({{ site.url }}/fig/thick_7.png){:class="img-responsive"}
-![image-title-here]({{ site.url }}/fig/thick_17.png){:class="img-responsive"}
+![image-title-here](../fig/thin_7.png){:class="img-responsive"}
+![image-title-here](../fig/thin_17.png){:class="img-responsive"}
+![image-title-here](../fig/thick_7.png){:class="img-responsive"}
+![image-title-here](../fig/thick_17.png){:class="img-responsive"}

-The 7 and 17 network parcellations correspond to the two most stable clustering solutions from the algorithm used by the authors. The thin/thick designation refers to how strict the voxel inclusion is (thick might include white matter/CSF, thin might exclude some regions of grey matter due to partial voluming effects). 
+The 7 and 17 network parcellations correspond to the two most stable clustering solutions from the algorithm used by the authors. The thin/thick designation refers to how strict the voxel inclusion is (thick might include white matter/CSF, thin might exclude some regions of grey matter due to partial voluming effects).

For simplicity we'll use the thick_7 variation which includes the following networks:

@@ -109,7 +109,7 @@ A key feature of the Yeo2011 networks is that they are *spatially distributed*,
~~~
from nilearn.regions import connected_label_regions
region_labels = connected_label_regions(atlas_yeo)
-plotting.plot_roi(region_labels, 
+plotting.plot_roi(region_labels,
cut_coords=(-20,-10,0,10,20,30,40,50,60,70),
display_mode='z',
colorbar=True,
@@ -118,7 +118,7 @@
~~~
{: .language-python}

-![image-title-here]({{ site.url }}/fig/yeo_sep.png){:class="img-responsive"}
+![image-title-here](../fig/yeo_sep.png){:class="img-responsive"}

### Resampling the Atlas

@@ -144,21 +144,21 @@ region_labels.to_filename('../resources/rois/yeo_2011/Yeo_JNeurophysiol11_MNI152
> > print('Size of atlas file:', region_labels.shape)
> > ~~~
> > {: .language-python}
-> >
+> >
> > Turns out that they aren't the same! We can match the image dimensions simply by using `image.resample_to_img`:
> > ~~~
> > resampled_yeo = image.resample_to_img(region_labels, func_img, interpolation = 'nearest')
> > plotting.plot_roi(resampled_yeo, func_img.slicer[:,:,:,54])
> > ~~~
> > {: .language-python}
-> > ![image-title-here]({{ site.url }}/fig/resampled_yeo.png){:class="img-responsive"}
+> > ![image-title-here](../fig/resampled_yeo.png){:class="img-responsive"}
> > Recall that we use `interpolation = 'nearest'` because parcel regions are integers. Nearest interpolation preserves the values in the original image. Something like `continuous` or `linear` will pick in-between values; a parcel value of 2.2215 is not meaningful in this context.
> {: .solution}
{: .challenge}

## Visualizing ROIs

-For the next section, we'll be performing an analysis using the Yeo parcellation on our functional data. Specifically, we'll be using two ROIs: 44 and 46. 
+For the next section, we'll be performing an analysis using the Yeo parcellation on our functional data. Specifically, we'll be using two ROIs: 44 and 46.
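If you want to check which ROI labels actually exist in the relabeled atlas before picking any, here is a quick sketch. It assumes the `resampled_yeo` image from the challenge above and uses Nibabel's `get_fdata()`:

~~~
import numpy as np

# Each distinct integer in the atlas volume is a region label (0 = background)
label_values = np.unique(resampled_yeo.get_fdata()).astype(int)
print(len(label_values), 'labels, e.g.:', label_values[:10])
~~~
{: .language-python}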
> ## Exercise
> Visualize ROI 44 and 46
>
> > ## Solution
> > ~~~
> > from nilearn import image
-> >
+> >
> > roi = 44
-> > roi_mask = image.math_img('a == {}'.format(roi), a=resampled_yeo) 
-> > masked_resamp_yeo = image.math_img('a*b',a=resampled_yeo,b=roi_mask) 
+> > roi_mask = image.math_img('a == {}'.format(roi), a=resampled_yeo)
+> > masked_resamp_yeo = image.math_img('a*b',a=resampled_yeo,b=roi_mask)
> > plotting.plot_roi(masked_resamp_yeo)
> > ~~~
> > {: .language-python}
-> >
-> > ![image-title-here]({{ site.url }}/fig/roi_44.png){:class="img-responsive"}
+> >
+> > ![image-title-here](../fig/roi_44.png){:class="img-responsive"}
> > ~~~
> > roi = 46
-> > roi_mask = image.math_img('a == {}'.format(roi), a=resampled_yeo) 
-> > masked_resamp_yeo = image.math_img('a*b',a=resampled_yeo,b=roi_mask) 
+> > roi_mask = image.math_img('a == {}'.format(roi), a=resampled_yeo)
+> > masked_resamp_yeo = image.math_img('a*b',a=resampled_yeo,b=roi_mask)
> > plotting.plot_roi(masked_resamp_yeo)
> > ~~~
> > {: .language-python}
-> > ![image-title-here]({{ site.url }}/fig/roi_46.png){:class="img-responsive"}
+> > ![image-title-here](../fig/roi_46.png){:class="img-responsive"}
> {: .solution}
{: .challenge}

diff --git a/_episodes/07-functional-connectivity-analysis.md b/_episodes/07-functional-connectivity-analysis.md
index 8d5caa1c..c6ffa417 100644
--- a/_episodes/07-functional-connectivity-analysis.md
+++ b/_episodes/07-functional-connectivity-analysis.md
@@ -19,20 +19,20 @@ Now we have an idea of three important components to analyzing neuroimaging data
2. Cleaning and confound regression
3. Parcellation and signal extraction

-In this notebook the goal is to integrate these 3 basic components and perform a full analysis of group data using **Intranetwork Functional Connectivity (FC)**. 
+In this notebook the goal is to integrate these 3 basic components and perform a full analysis of group data using **Intranetwork Functional Connectivity (FC)**.

-Intranetwork functional connectivity is essentially the result of performing correlational analysis on mean signals extracted from two ROIs. Using this method we can examine how well certain resting state networks, such as the **Default Mode Network (DMN)**, are synchronized across spatially distinct regions. 
+Intranetwork functional connectivity is essentially the result of performing correlational analysis on mean signals extracted from two ROIs. Using this method we can examine how well certain resting state networks, such as the **Default Mode Network (DMN)**, are synchronized across spatially distinct regions.

ROI-based correlational analysis forms the basis of many more sophisticated kinds of functional imaging analysis.
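At its core, this reduces to a single Pearson correlation between two mean time series. A toy sketch with synthetic NumPy data (the length 147 mirrors the number of timepoints used later in this lesson):

~~~
import numpy as np

ts_roi_a = np.random.rand(147)  # mean time series from one ROI
ts_roi_b = np.random.rand(147)  # mean time series from another ROI

# Pearson r between the two mean signals is the FC estimate for this ROI pair
fc = np.corrcoef(ts_roi_a, ts_roi_b)[0, 1]
print(fc)
~~~
{: .language-python}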
## Lesson Outline

-This lesson is divided into two parts. The first part directly uses what you've learned and builds upon it to perform the final functional connectivity analysis on group data. 
+This lesson is divided into two parts. The first part directly uses what you've learned and builds upon it to perform the final functional connectivity analysis on group data.

-The second part shows how we can use Nilearn's convenient wrapper functionality to perform the same task with *significantly less effort*. 
+The second part shows how we can use Nilearn's convenient wrapper functionality to perform the same task with *significantly less effort*.

-#### Part A: Manual computation 
+#### Part A: Manual computation
1. Functional data cleaning and confound regression
2. Applying a parcellation onto the data
3. Computing the correlation between two ROI time-series

@@ -76,7 +76,7 @@ Now that we have a list of subjects to perform our analysis on, let's load up our

~~~
#Load separated parcellation
-parcel_file = '../resources/rois/yeo_2011/Yeo_JNeurophysiol11_MNI152/relabeled_yeo_atlas.nii.gz' 
+parcel_file = '../resources/rois/yeo_2011/Yeo_JNeurophysiol11_MNI152/relabeled_yeo_atlas.nii.gz'
yeo_7 = img.load_img(parcel_file)
~~~
{: .language-python}

@@ -98,7 +98,7 @@ masker = input_data.NiftiLabelsMasker(labels_img=yeo_7,
~~~
{: .language-python}

-The `input_data.NiftiLabelsMasker` object is a wrapper that applies parcellation, cleaning and averaging to a functional image. For example, let's apply this to our first subject: 
+The `input_data.NiftiLabelsMasker` object is a wrapper that applies parcellation, cleaning and averaging to a functional image. For example, let's apply this to our first subject:

~~~
@@ -111,7 +111,7 @@ func_file = layout.get(subject=example_sub, modality='func', type='preproc',
return_type='file')[0]
confound_file=layout.get(subject=example_sub, modality='func', type='confounds',
                         return_type='file')[0]
- 
+
#Load functional file and perform TR drop
func_img = img.load_img(func_file)
func_img = func_img.slicer[:,:,:,tr_drop+1:]

@@ -122,7 +122,7 @@ confounds = extract_confounds(confound_file,
'RotX','RotY','RotZ',
'GlobalSignal','aCompCor01',
'aCompCor02'])
- 
+
#Drop TR on confound matrix
confounds = confounds[tr_drop+1:,:]

@@ -137,28 +137,28 @@ time_series.shape
~~~
{: .output}

-After performing our data extraction we're left with data containing 147 timepoints and 46 regions. This matches the number of regions in our parcellation atlas. 
+After performing our data extraction we're left with data containing 147 timepoints and 46 regions. This matches the number of regions in our parcellation atlas.

> ## Exercise
-> Apply the data extraction process shown above to all subjects in our subject list and collect the results. Here is some skeleton code to help you think about how to organize your data: 
+> Apply the data extraction process shown above to all subjects in our subject list and collect the results. Here is some skeleton code to help you think about how to organize your data:
> ~~~
> pooled_subjects = []
> ctrl_subjects = []
> schz_subjects = []
->
+>
> for sub in subjects:
>     #FILL LOOP
>
> ~~~
> {: .language-python}
->
+>
> > ## Solution
-> >
+> >
> > ~~~
> > pooled_subjects = []
> > ctrl_subjects = []
> > schz_subjects = []
-> >
+> >
> > for sub in subjects:
> >     func_file = layout.get(subject=sub, modality='func',
> >                            type='preproc', return_type='file')[0]
@@ -189,7 +189,7 @@
> {: .solution}
{: .challenge}
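The `extract_confounds` helper used above is defined earlier in the lesson. If you need a stand-in, a minimal sketch might look like this; it assumes FMRIPREP's tab-separated confounds file and pandas, and the lesson's own version may differ in its details:

~~~
import pandas as pd

def extract_confounds(confound_tsv, confounds):
    '''Pull the named columns out of an FMRIPREP confounds TSV and
    return them as a (timepoints x confounds) NumPy array.'''
    confound_df = pd.read_csv(confound_tsv, delimiter='\t')
    return confound_df[confounds].values
~~~
{: .language-python}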
-Once we have all extracted time series for each subject we can compute correlation matrices. Once again, Nilearn provides functionality to do this as well. We'll use `nilearn.connectome.ConnectivityMeasure` to automatically apply a Pearson r correlation to our schizophrenia and control data: 
+Once we have all extracted time series for each subject we can compute correlation matrices. Once again, Nilearn provides functionality to do this as well. We'll use `nilearn.connectome.ConnectivityMeasure` to automatically apply a Pearson r correlation to our schizophrenia and control data:

~~~
from nilearn.connectome import ConnectivityMeasure
@@ -224,12 +224,12 @@ plot_matrices(ctrl_correlation_matrices, 'correlation')
~~~
{: .language-python}

-![image-title-here]({{ site.url }}/fig/ctrl_r.png){:class="img-responsive"}
+![image-title-here](../fig/ctrl_r.png){:class="img-responsive"}

~~~
plot_matrices(schz_correlation_matrices, 'correlation')
~~~
{: .language-python}
-![image-title-here]({{ site.url }}/fig/schz_r.png){:class="img-responsive"}
+![image-title-here](../fig/schz_r.png){:class="img-responsive"}

Let's look at the data that is returned from `correlation_measure.fit`:

@@ -243,7 +243,7 @@ ctrl_correlation_matrices.shape
~~~
{: .output}

-We can see that we have a 3D array where the first index corresponds to a particular subject, and the last two indices refer to the correlation matrix (46 regions x 46 regions). 
+We can see that we have a 3D array where the first index corresponds to a particular subject, and the last two indices refer to the correlation matrix (46 regions x 46 regions).

Finally we can extract our two regions of interest by picking the entries in the correlation matrix corresponding to the connection between regions 44 and 46:

@@ -286,7 +286,8 @@ plt.show()
~~~
{: .language-python}

-![image-title-here]({{ site.url }}/fig/group_compare.png){:class="img-responsive"}
+![image-title-here](../fig/group_compare.png){:class="img-responsive"}
+
+Although the results here aren't significant, they seem to indicate that there might be three subclasses in our schizophrenia group - of course we'd need *a lot* more data to confirm this! The interpretation of these results should ideally be based on some *a priori* hypothesis!

-Although the results here aren't significant, they seem to indicate that there might be three subclasses in our schizophrenia group - of course we'd need *a lot* more data to confirm this! The interpretation of these results should ideally be based on some *a priori* hypothesis! 
{% include links.md %}

diff --git a/scwg_neuroimaging_workshop.pptx b/scwg_neuroimaging_workshop.pptx
deleted file mode 100644
index 1255a707..00000000
Binary files a/scwg_neuroimaging_workshop.pptx and /dev/null differ