
PLANNING YOUR STUDY

Reusing data

There are many databases where you can find raw and/or pre-processed data. Your university or institute may already have a repository of published data (e.g. the Donders Institute).

Chris Madan also has a nice curated list of open-access databases with human structural MRI data.

Xiangzhen Kong has put together a list of brain imaging databases with multiple scans per subject.

Google's recently introduced Dataset Search can also be useful to locate datasets that might be of interest.

There are some tools that help you search through them, like the metasearch tool of the Open Neuroimaging Laboratory, but this is also where DataLad becomes useful to browse or crawl those databases.
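As a hedged illustration of the DataLad workflow, here is a minimal sketch using DataLad's Python API to clone a dataset skeleton and fetch a single file on demand (the OpenNeuro dataset ID, the GitHub mirror URL, and the file path are just examples, not specific recommendations):

```python
# Minimal DataLad sketch: clone a dataset "lazily" (file contents are not
# downloaded yet), then fetch only the files you actually need.
# Assumes DataLad is installed (pip install datalad); ds000001 and the
# file path below are illustrative examples only.
import datalad.api as dl

# Clone the dataset skeleton (metadata + file tree, no image data yet)
ds = dl.clone(
    source="https://github.com/OpenNeuroDatasets/ds000001.git",
    path="ds000001",
)

# Download the content of a single file on demand
ds.get("sub-01/anat/sub-01_T1w.nii.gz")
```

The point of this design is that you can browse the full file tree of a large database without downloading any imaging data until you ask for it.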

There is also a video on open science resources for neuroimaging research: https://www.pathlms.com/ohbm/courses/8246/sections/12542/video_presentations/116089

Defining your terms and your task

Ontologies

Inigo Montoya: You keep using that word. I don't think it means what you think it means.
Ayotnom Ogini: Funny you should say that! I was about to tell you the same thing.

The use of alternate and even competitive terminologies can often impede scientific discoveries.

Piloting

Good piloting is very important, but piloting is not meant to be about finding the hypothesis you want to test: because of the small sample size of pilot studies, anything interesting you see there is very likely to be a fluke. Piloting is rather about checking the overall feasibility of the experiment and making sure you can get high quality data, judged by criteria that are unrelated to your hypothesis.

Sam Schwarzkopf has a few interesting posts on the topic here and there.

Piloting is usually a phase where it is good to check in with your local MRI physicist and statistician. You may also already have to make choices about pre-processing and data analysis.
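One concrete, hypothesis-neutral quality check you can run on pilot data is temporal SNR (tSNR): the voxel-wise mean of the time series divided by its standard deviation. A minimal sketch with nibabel and numpy (the file name is a placeholder):

```python
# Compute a voxel-wise temporal SNR (tSNR) map from a 4D fMRI run.
# tSNR = mean over time / std over time, a common quality metric
# that is independent of your hypothesis.
import nibabel as nib
import numpy as np

img = nib.load("sub-01_task-pilot_bold.nii.gz")  # placeholder file name
data = img.get_fdata()                           # shape: (x, y, z, time)

mean = data.mean(axis=-1)
std = data.std(axis=-1)
# Avoid division by zero in empty voxels
tsnr = np.divide(mean, std, out=np.zeros_like(mean), where=std > 0)

nib.save(nib.Nifti1Image(tsnr, img.affine), "tsnr.nii.gz")
print(f"median tSNR in non-zero voxels: {np.median(tsnr[tsnr > 0]):.1f}")
```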

Optimizing your design

Before you run your study there are a few things you can do to optimize your design. Two of them are doing a power analysis and optimizing the efficiency of your fMRI design.

Design efficiency

If you need a reminder about what design efficiency is, the resources below can help.

Jeanette Mumford has a good video series about design efficiency and another standalone one from Neurohackademy 2016.

When you want to optimize it you have a few options:

  • you can compute the efficiency by hand and tweak your design to see which options work best. There is a function in Rik Henson's repo to help you do that (see also the sketch after this list).
  • there are also more systematic ways to optimize your protocol: see here, here or there.
  • the latest tool for design efficiency calculation is the website neuropowertools. It offers options to run both your design efficiency optimization and your power analysis, and both come with their respective python packages.
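For the by-hand option mentioned above, here is a minimal sketch of the standard efficiency computation, e = 1 / trace(c (X'X)⁻¹ c'), for a design matrix X and contrast c. The gamma-based HRF, TR, and event spacing are arbitrary illustrative choices, not recommendations:

```python
# Sketch of fMRI design efficiency: e = 1 / trace(c @ inv(X.T @ X) @ c.T).
# The absolute value is only meaningful when comparing designs to each other.
import numpy as np
from scipy.stats import gamma

def hrf(tr, duration=30):
    """Crude double-gamma HRF sampled at the TR (typical default shapes)."""
    t = np.arange(0, duration, tr)
    return gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 12)

def efficiency(X, c):
    c = np.atleast_2d(c)
    return 1.0 / np.trace(c @ np.linalg.inv(X.T @ X) @ c.T)

tr, n_scans = 2.0, 200
onsets = np.zeros(n_scans)
onsets[::20] = 1                      # one event every 40 s (toy design)
regressor = np.convolve(onsets, hrf(tr))[:n_scans]

X = np.column_stack([regressor, np.ones(n_scans)])  # task + intercept
print(f"efficiency of [1, 0] contrast: {efficiency(X, [1, 0]):.2f}")
```

Tweaking the event spacing or block structure and re-running the computation lets you compare candidate designs before you scan.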

Power

In order to investigate whether an effect exists, one should design an experiment that has a reasonable chance of detecting it. I take this insight as common sense. In statistical language, an experiment should have sufficient statistical power. Yet the null [hypothesis significance testing] ritual knows no statistical power.

Gerd Gigerenzer, Statistical Rituals: The Replication Delusion and How We Got There, DOI: 10.1177/2515245918771329

There is good evidence that average statistical power has remained low for several decades in psychology, which increases the false negative rate and reduces the positive predictive value of findings (i.e. the chance that a significant finding is actually true). Neuroimaging could learn from that mistake, especially since a large majority of neuroimaging studies seem to have even lower statistical power.
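To make the link between power and positive predictive value (PPV) concrete, the standard formula (as used by Ioannidis, 2005, and Button et al., 2013) is PPV = (power × R) / (power × R + α), where R is the pre-study odds that a probed effect is real. A quick worked example (the R = 0.25 prior is an arbitrary illustration):

```python
# Positive predictive value (PPV) as a function of statistical power:
#   PPV = (power * R) / (power * R + alpha)
# where R is the pre-study odds that a tested effect is real.
def ppv(power, alpha=0.05, R=0.25):
    return (power * R) / (power * R + alpha)

for power in (0.8, 0.5, 0.2):
    print(f"power = {power:.1f} -> PPV = {ppv(power):.2f}")
# With R = 0.25, dropping power from 0.8 to 0.2 drops the PPV from
# 0.80 to 0.50: half of your "significant" findings would be false.
```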

Links to add:

  • Neuroskeptic on power failure: http://blogs.discovermagazine.com/neuroskeptic/2017/07/19/neuroscience-underpowered/ and http://blogs.discovermagazine.com/neuroskeptic/2013/08/10/is-neuroscience-too-small/
  • Tal Yarkoni's response to Friston
  • Tal Yarkoni's paper on why small studies give big correlations
  • the law of small numbers paper by Kahneman
  • figures from Scanning the Horizon
  • output from the SIPS hackathon

fMRI power is a MATLAB-based toolbox to help you run your power analysis.

As mentioned above, the neuropowertools website offers options to run both your design efficiency optimization and your power analysis, with respective python packages for each.
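If you just want a quick a priori sample-size estimate outside those dedicated tools, statsmodels can do classical power calculations. A sketch for a one-sample t-test (the effect size d = 0.5 is a made-up example, not a recommendation for fMRI effect sizes):

```python
# A priori sample size for a one-sample t-test with statsmodels.
from statsmodels.stats.power import TTestPower

# Solve for the number of subjects needed, leaving nobs unspecified
n = TTestPower().solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"subjects needed for d=0.5, alpha=0.05, power=0.8: {n:.1f}")
```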

For MVPA: same analysis approach

If you intend to run an MVPA / classification analysis on your data, there are a few things you can do BEFORE you start collecting data to optimize your design. There is no app or toolbox for that, so I am afraid you will have to read the paper.

Defining your region of interest

If you don't want to run a whole-brain analysis, you will most likely need to define regions of interest (ROIs). This must be done using data that is independent of the data you will analyze in the end, otherwise you have a circularity problem (also known as double dipping or voodoo correlations).
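A simple way to guarantee that independence is to fix the split before you look at the data, e.g. use some runs only for ROI definition and the rest only for the analysis proper. A trivial sketch of that bookkeeping:

```python
# Decide the localizer/analysis split up front, so ROI definition and
# hypothesis testing never touch the same data (avoids double dipping).
runs = [f"run-{i:02d}" for i in range(1, 9)]

roi_definition_runs = runs[::2]   # odd runs: define the ROI
analysis_runs = runs[1::2]        # even runs: test the hypothesis

assert not set(roi_definition_runs) & set(analysis_runs)
print("ROI runs:     ", roi_definition_runs)
print("analysis runs:", analysis_runs)
```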

Some blog posts related to voodoo correlations:

Using previous results

Neurosynth can help you run a meta-analysis to create a mask to define your ROI. See for example this mask for brain regions matching the search term auditory, and see here for a tutorial.
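Once you have downloaded a map from Neurosynth (the file name below is only a guess at what such a download might look like), turning it into a binary ROI mask takes a couple of lines with nilearn. The z > 3 cut-off is an arbitrary example threshold:

```python
# Turn a downloaded Neurosynth z map into a binary ROI mask.
# File name and threshold are placeholders, not recommendations.
from nilearn import image

zmap = image.load_img("auditory_association-test_z.nii.gz")
roi = image.math_img("img > 3", img=zmap)  # voxels above threshold
roi.to_filename("auditory_roi_mask.nii.gz")
```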


Localizers

A typical example of a localizer is retinotopic mapping. Sam Schwarzkopf has a good tutorial for those.

  • retinotopy
  • tonotopy
  • motion localizer
  • face localizer

Atlases

There are many atlases you could use to create ROIs. Some ship automatically with certain software packages; otherwise you can find lists at:

https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0007200#s4

http://nilearn.github.io/modules/reference.html#module-nilearn.datasets

Some other retinotopic maps are apparently not listed in the above, so here they are:

http://gallantlab.org/pycortex/retinotopy_demo/

The problem then becomes which atlas to choose. To help you with this, the Online Brain Atlas Reconciliation Tool can show the overlap that exists between some of those atlases. The links I had to the website (here and there) are broken at the moment, so at least here is a link to the paper.
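Whichever atlas you pick, extracting a binary ROI from it is usually straightforward. For instance, with the Harvard-Oxford atlas shipped by nilearn (the region name is just an example):

```python
# Build a binary ROI mask from one region of the Harvard-Oxford atlas
# fetched by nilearn; "Precentral Gyrus" is an arbitrary example region.
from nilearn import datasets, image

atlas = datasets.fetch_atlas_harvard_oxford("cort-maxprob-thr25-2mm")
label_index = atlas.labels.index("Precentral Gyrus")

roi = image.math_img(f"img == {label_index}", img=atlas.maps)
roi.to_filename("precentral_roi_mask.nii.gz")
```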

Some toolboxes out there also allow you to create your own ROI and rely on anatomical / cytoarchitectonic atlases:

Non-standard templates

If you want to normalize children's brains, it might be better to use a pediatric template. Some of them are listed here.