Noah C. Benson edited this page Sep 21, 2018 · 4 revisions

Neuropythy is most useful when it knows where to find your FreeSurfer subject data or where you want it to store datasets or Human Connectome Project files. These configuration items can be set in a number of ways:

  • On startup, neuropythy looks for a file ~/.npythyrc (though this file name may be changed by setting the NPYTHYRC environment variable). The contents of this file should be a JSON dictionary with configurable variables (such as "freesurfer_subject_paths") as the keys. An example configuration file:
    {"freesurfer_subject_paths": "/Volumes/server/Freesurfer_subjects",
     "data_cache_root":          "~/Temp/npythy_cache",
     "hcp_subject_paths":        "/Volumes/server/Projects/HCP/subjects",
     "hcp_auto_download":        true,
     "hcp_credentials":          "~/.hcp-passwd"}
  • Each config variable in the NPYTHYRC file may be overridden using an associated environment variable. Usually the environment variable name is either the config variable's name in uppercase or NPYTHY_ followed by the name in uppercase: NPYTHY_DATA_CACHE_ROOT, HCP_CREDENTIALS, HCP_AUTO_DOWNLOAD. The SUBJECTS_DIR environment variable is used for the FreeSurfer subject paths, and the HCP_SUBJECTS_DIR variable is used for the HCP subject paths (both may be :-separated lists of directories).
  • The config items may be retrieved and set directly using neuropythy.config. Values that are set in this way override the NPYTHYRC file and all environment variables. For example:
    import neuropythy as ny
    ny.config['data_cache_root']
    #=> '/Users/nben/Temp/npythy_cache'
    ny.config['data_cache_root'] = '~/Documents/npythy_data'
    ny.config['data_cache_root']
    #=> '/Users/nben/Documents/npythy_data'
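The precedence among these three mechanisms can be sketched as a simple lookup. The helper below is purely illustrative: the function name, its arguments, and its logic are assumptions about the behavior described above, not neuropythy's actual code.

```python
import json, os

def lookup(name, env_name, direct, rc_path='~/.npythyrc'):
    # 1. A value assigned directly to neuropythy.config wins...
    if name in direct:
        return direct[name]
    # 2. ...then any associated environment variable...
    if env_name in os.environ:
        return os.environ[env_name]
    # 3. ...and finally the NPYTHYRC file, if it exists and parses.
    try:
        with open(os.path.expanduser(rc_path)) as f:
            return json.load(f).get(name)
    except (OSError, ValueError):
        return None
```

For example, lookup('data_cache_root', 'NPYTHY_DATA_CACHE_ROOT', {}) would consult the environment and then the file, while passing a non-empty direct dictionary short-circuits both.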

Understood Configuration Variables

The following configuration variables are understood by neuropythy. The "name" listed below is the name that neuropythy uses for the variable (in the ~/.npythyrc file and in neuropythy.config), while the "environment name" is the name of the environment variable that can be used to set it.

  • "data_cache_root"

    Name:             "data_cache_root"
    Environment name: NPYTHY_DATA_CACHE_ROOT
    Default value:    None

    The path where neuropythy should look for, download, and store datasets such as the Benson and Winawer (2018) dataset. If this is None (the default), then a temporary directory is created when a dataset must be downloaded; this temporary directory is automatically deleted when python exits.

  • "benson_winawer_2018_path"

    Name:             "benson_winawer_2018_path"
    Environment name: NPYTHY_BENSON_WINAWER_2018_PATH
    Default value:    None

    The path where the database from Benson and Winawer (2018) should be searched for and, if downloaded, stored. If this is None, then the path is the subdirectory benson_winawer_2018 of the path in the data_cache_root configuration variable.

  • "freesurfer_subject_paths"

    Name:             "freesurfer_subject_paths"
    Environment name: SUBJECTS_DIR
    Default value:    None

    The directory where neuropythy should look for FreeSurfer subjects. May be a colon-separated list of directories.

  • "hcp_subjects_path"

    Name:             "hcp_subjects_path"
    Environment name: HCP_SUBJECTS_PATH
    Default value:    None

    The directory where neuropythy should look for HCP subjects, whose subject directories must be their ID numbers. May be a colon-separated list of directories.

  • "hcp_auto_download"

    Name:             "hcp_auto_download"
    Environment name: HCP_AUTO_DOWNLOAD
    Default value:    False

    If you wish to enable auto-downloading of HCP subjects, set this to true; you will also need to provide neuropythy with your S3 credentials for the HCP (see "hcp_credentials").

  • "hcp_credentials"

    Name:             "hcp_credentials"
    Environment name: HCP_CREDENTIALS
    Default value:    None

    May be one of several things: (1) a list of two strings, [hcp_key, hcp_secret]; (2) a string of the form "<key>:<secret>"; or (3) the name of a file whose contents are a string as in (2).

  • "hcp_auto_path"

    Name:             "hcp_auto_path"
    Environment name: HCP_AUTO_PATH
    Default value:    None

    Specifies the directory into which auto-downloaded subjects should be placed; if this is None (the default), then the "hcp_subjects_path" is used (the first directory, if several are listed).

  • "hcp_auto_database"

    Name:             "hcp_auto_database"
    Environment name: HCP_AUTO_DATABASE
    Default value:    None

    Generally not necessary; if you wish to specify an HCP database aside from hcp-openaccess, you can set this to specify it.

  • "hcp_auto_release"

    Name:             "hcp_auto_release"
    Environment name: HCP_AUTO_RELEASE
    Default value:    None

    Generally not necessary; if you wish to specify an HCP subject release set other than "HCP_1200", you can set it here.
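Of the variables above, "hcp_credentials" accepts the most forms; normalizing them to a single (key, secret) pair can be sketched as follows. The helper parse_hcp_credentials is hypothetical, written from the description above, and is not part of neuropythy's API.

```python
import os

def parse_hcp_credentials(value):
    # (1) a [key, secret] pair of strings
    if isinstance(value, (list, tuple)) and len(value) == 2:
        return tuple(value)
    if isinstance(value, str):
        # (2) a "<key>:<secret>" string
        if ':' in value:
            key, secret = value.split(':', 1)
            return (key, secret)
        # (3) the name of a file containing a "<key>:<secret>" string
        with open(os.path.expanduser(value)) as f:
            return parse_hcp_credentials(f.read().strip())
    raise ValueError('cannot interpret HCP credentials: %r' % (value,))
```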

You can also obtain a list of all config variables understood by the version of the library you are currently using:

import neuropythy as ny
sorted(ny.config.keys())

The neuropythy.config Structure

Neuropythy exposes the configuration variables listed above in a data structure, neuropythy.config. This structure behaves roughly like a dict object (though it is in fact a static class). When you set the value of a configuration variable in the config object, some preprocessing is done and errors are raised if the value is known to be invalid. Values set in this way override the npythyrc file and environment variables.

import neuropythy as ny
# See my FreeSurfer directory:
ny.config['freesurfer_subject_paths']
#=> ['/Volumes/server/Freesurfer_subjects']
ny.config['freesurfer_subject_paths'] = '/Volumes/server/Freesurfer_subjects:~/data/subjects'
ny.config['freesurfer_subject_paths']
#=> ['/Volumes/server/Freesurfer_subjects', '/Users/nben/data/subjects']
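The preprocessing shown above, splitting on colons and expanding '~', can be sketched as a small helper (parse_subject_paths is a hypothetical name for illustration, not neuropythy's internal function):

```python
import os

def parse_subject_paths(value):
    # Split a SUBJECTS_DIR-style value into individual directories,
    # expanding '~' and dropping empty entries.
    return [os.path.expanduser(p) for p in value.split(':') if p]
```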

Configuring Neuropythy to Work with the HCP

Neuropythy is capable of automatically integrating with the Human Connectome Project's Amazon S3 bucket. Neuropythy will present you with nested data structures representing individual HCP subjects and will silently download the relevant structure files as they are requested. To configure this behavior, follow these steps:

  • Make a directory somewhere to store the HCP subjects that are downloaded. The subjects won't be downloaded all at once, but it will drastically speed up future loading of subjects if you cache them on your local filesystem.
  • Sign up for an HCP account. You can do this at the HCP's database page.
  • Once you have an account, log into the database; near the top of the initial splash page is a cell titled "WU-Minn HCP Data - 1200 Subjects", and inside this cell is a button for activating Amazon S3 access. When you activate this feature, you will be given an Amazon "Key" and "Secret".
  • Copy and paste your key and secret into a file ~/.hcp-passwd such that the contents are your key followed by a colon followed by your secret, e.g., mys3key:mys3secret.
  • You should then make sure that the configuration variable "hcp_credentials" is set to "~/.hcp-passwd" in your ~/.npythyrc file (see Configuration, above). Additionally, set the "hcp_auto_download" value to true, and set the "hcp_auto_path" variable to the directory in which you plan to store the HCP subject data.
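The last two steps above amount to creating two small files; a sketch of doing so from Python follows. The key, secret, and the hcp_auto_path directory shown are placeholder values that you must replace with your own.

```python
import json, os

# Placeholder credentials -- substitute the key and secret you were
# given when you activated S3 access.
with open(os.path.expanduser('~/.hcp-passwd'), 'w') as f:
    f.write('mys3key:mys3secret')

# A minimal ~/.npythyrc enabling auto-download (example path assumed).
settings = {'hcp_credentials':   '~/.hcp-passwd',
            'hcp_auto_download': True,
            'hcp_auto_path':     '~/data/hcp_subjects'}
with open(os.path.expanduser('~/.npythyrc'), 'w') as f:
    json.dump(settings, f, indent=2)
```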

Note that the above steps will additionally enable auto-downloading of the retinotopic mapping database; if you are only interested in the structural data, you can set the "hcp_auto_download" variable to "structure". If you do enable auto-downloading of the retinotopic maps, then the first time you examine an HCP subject, neuropythy will have to download the retinotopy database files, which are approximately 1 GB; neuropythy may appear to have frozen during this time, but it is most likely just downloading. Otherwise, if your internet connection is relatively fast, you should not notice significant delays from downloading the HCP structural data.

For more information about using the HCP module of neuropythy, see this page.

Additional notes:

  • Currently, only 'lowres-prf_*' properties are available via neuropythy. The 'lowres-' prefix refers to the fact that the pRF models were solved on the HCP fs_LR32k mesh rather than the higher-resolution 59k mesh. Higher-resolution solutions will be available in a future release of neuropythy and will be named 'prf_*', e.g., 'prf_polar_angle'.
  • Low-resolution and higher-resolution pRF solutions are very similar; there is no need to be concerned that the low-resolution pRF solutions are broadly missing the mark with respect to subjects' retinotopic maps.
  • If you enable Python's logging module to print info-level messages, then neuropythy will inform you whenever it is about to download a large file; it does not print messages for the smaller files that typically take only a few seconds to download. To configure this, use:
    import logging
    logging.getLogger().setLevel(logging.INFO)