From 48acb75b3ec6e6dbaa414c4ff1e6af5f54d92e4d Mon Sep 17 00:00:00 2001 From: Mayo Faulkner Date: Wed, 24 Apr 2024 10:42:17 +0100 Subject: [PATCH 1/5] add wheel screen stimulus docs --- .../examples/docs_wheel_screen_stimulus.ipynb | 425 ++++++++++++++++++ examples/exploring_data/data_download.ipynb | 69 +-- .../loading_data/loading_passive_data.ipynb | 34 +- .../loading_data/loading_raw_ephys_data.ipynb | 6 +- .../loading_data/loading_raw_video_data.ipynb | 2 +- 5 files changed, 464 insertions(+), 72 deletions(-) create mode 100644 brainbox/examples/docs_wheel_screen_stimulus.ipynb diff --git a/brainbox/examples/docs_wheel_screen_stimulus.ipynb b/brainbox/examples/docs_wheel_screen_stimulus.ipynb new file mode 100644 index 000000000..314346cb6 --- /dev/null +++ b/brainbox/examples/docs_wheel_screen_stimulus.ipynb @@ -0,0 +1,425 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "63c72402", + "metadata": {}, + "source": [ + "# Computing the stimulus position using the wheel" + ] + }, + { + "cell_type": "markdown", + "id": "cd4144b5", + "metadata": {}, + "source": [ + "In the IBL task a visual stimulus (Gabor patch) appears on the left (-35°) or right (+35°) of a screen and the mouse must use a wheel to bring the stimulus to the centre of the screen (0°). If the mouse moves the wheel in the correct direction, the trial is deemed correct and the mouse receives a reward, if however, the mouse moves the wheel in the wrong direction and the stimulus goes off the screen, this is an error trial and the mouse receives a white noise error tone. \n", + "\n", + "For some analysis it may be useful to know the position of the visual stimulus on the screen during a trial. While there is no direct read out of the location of the stimulus on the screen, as the stimulus is coupled to the wheel, we can infer the position using the wheel position. 
\n", + "\n", + "Below we walk you through an example of how to compute the continuous screen position for a given trial.\n", + "\n", + "For this anaylsis we need access to information about the wheel radius and the wheel gain (visual degrees moved on screen per mm of wheel movement).\n", + "- Wheel radius = 3.1 cm\n", + "- Wheel gain = 4 (deg / mm)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "2ae5f990", + "metadata": { + "nbsphinx": "hidden" + }, + "outputs": [], + "source": [ + "# Turn off logging and disable tqdm this is a hidden cell on docs page\n", + "import logging\n", + "import os\n", + "\n", + "logger = logging.getLogger('ibllib')\n", + "logger.setLevel(logging.CRITICAL)\n", + "\n", + "os.environ[\"TQDM_DISABLE\"] = \"1\"" + ] + }, + { + "cell_type": "markdown", + "id": "4014587e", + "metadata": {}, + "source": [ + "## Step 1: Load data" + ] + }, + { + "cell_type": "markdown", + "id": "402f50ce", + "metadata": {}, + "source": [ + "For this analysis we will need to load in the trials and wheel data for a chosen session" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "8b92f69b", + "metadata": { + "ExecuteTime": { + "end_time": "2024-04-24T08:31:09.018526Z", + "start_time": "2024-04-24T08:31:07.846690Z" + } + }, + "outputs": [], + "source": [ + "from one.api import ONE\n", + "one = ONE(base_url='https://openalyx.internationalbrainlab.org')\n", + "\n", + "eid = 'f88d4dd4-ccd7-400e-9035-fa00be3bcfa8'\n", + "trials = one.load_object(eid, 'trials')\n", + "wheel = one.load_object(eid, 'wheel')" + ] + }, + { + "cell_type": "markdown", + "id": "2b7aa84b", + "metadata": {}, + "source": [ + "## Step 2: Compute evenly sampled wheel data" + ] + }, + { + "cell_type": "markdown", + "id": "bfecd27e", + "metadata": {}, + "source": [ + "The wheel data returned is not evenly sampled, we can sample the data at 1000 Hz using the following function" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "b2c7b03d", + "metadata": { + "ExecuteTime": { + "end_time": "2024-04-24T08:31:09.079343Z", + "start_time": "2024-04-24T08:31:09.022391Z" + } + }, + "outputs": [], + "source": [ + "import brainbox.behavior.wheel as wh\n", + "wheel_pos, wheel_times = wh.interpolate_position(wheel.timestamps, wheel.position, freq=1000)" + ] + }, + { + "cell_type": "markdown", + "id": "c054fc52", + "metadata": {}, + "source": [ + "## Step 3: Extract wheel data for a given trial" + ] + }, + { + "cell_type": "markdown", + "id": "e4c4b1fd", + "metadata": {}, + "source": [ + "We now want to find the wheel data in the interval for a given trial" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "600a7b6c", + "metadata": { + "ExecuteTime": { + "end_time": "2024-04-24T08:31:09.085925Z", + "start_time": "2024-04-24T08:31:09.084116Z" + } + }, + "outputs": [], + "source": [ + "import numpy as np\n", + "# Choose trial no. 
+  {
+   "cell_type": "markdown",
+   "id": "c054fc52",
+   "metadata": {},
+   "source": [
+    "## Step 3: Extract wheel data for a given trial"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "e4c4b1fd",
+   "metadata": {},
+   "source": [
+    "We now want to extract the wheel data that falls within the interval of a given trial"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "600a7b6c",
+   "metadata": {
+    "ExecuteTime": {
+     "end_time": "2024-04-24T08:31:09.085925Z",
+     "start_time": "2024-04-24T08:31:09.084116Z"
+    }
+   },
+   "outputs": [],
+   "source": [
+    "import numpy as np\n",
+    "# Choose trial no. 110 (right, contrast = 1); or try no. 150 (left)\n",
+    "tr_idx = 110\n",
+    "# Get interval of trial, gives two values, start of trial and end of trial\n",
+    "interval = trials['intervals'][tr_idx]\n",
+    "# Find the index of the wheel timestamps that contain this interval\n",
+    "wheel_idx = np.searchsorted(wheel_times, interval)\n",
+    "# Limit our wheel data to these indexes\n",
+    "wh_pos = wheel_pos[wheel_idx[0]:wheel_idx[1]]\n",
+    "wh_times = wheel_times[wheel_idx[0]:wheel_idx[1]]"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "56a8f59c",
+   "metadata": {},
+   "source": [
+    "## Step 4: Compute the position in mm"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "57b5b487",
+   "metadata": {},
+   "source": [
+    "The values for the wheel position are given in radians. Since the wheel gain is defined in visual degrees per mm we need to convert the wheel position to mm. We can use the radius of the wheel for this."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "785cb8ba",
+   "metadata": {
+    "ExecuteTime": {
+     "end_time": "2024-04-24T08:31:09.092109Z",
+     "start_time": "2024-04-24T08:31:09.090155Z"
+    }
+   },
+   "outputs": [],
+   "source": [
+    "# radius of wheel in mm\n",
+    "WHEEL_RADIUS = 3.1 * 10\n",
+    "# compute circumference of wheel\n",
+    "wh_circ = 2 * np.pi * WHEEL_RADIUS\n",
+    "# compute the mm turned per degree of wheel rotation\n",
+    "mm_per_wh_deg = wh_circ / 360\n",
+    "# convert wh_pos from radians to degrees\n",
+    "wh_pos = wh_pos * 180 / np.pi\n",
+    "# convert wh_pos from degrees to mm\n",
+    "wh_pos = wh_pos * mm_per_wh_deg"
+   ]
+  },
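+  {
+   "cell_type": "markdown",
+   "id": "d4e5f6a7",
+   "metadata": {},
+   "source": [
+    "As an aside, the conversion above is simply the arc length formula s = r × θ: multiplying the wheel angle in radians by the wheel radius gives the position in mm in a single step. We can check that the two routes agree:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "e5f6a7b8",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# the wheel position for this trial, still in radians\n",
+    "wh_pos_rad = wheel_pos[wheel_idx[0]:wheel_idx[1]]\n",
+    "# arc length s = r * theta gives the same values in mm directly\n",
+    "assert np.allclose(wh_pos_rad * WHEEL_RADIUS, wh_pos)"
+   ]
+  },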
+  {
+   "cell_type": "markdown",
+   "id": "d1623b1d",
+   "metadata": {},
+   "source": [
+    "## Step 5: Compute the wheel displacement from stimOn"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "493661dc",
+   "metadata": {},
+   "source": [
+    "To link the visual stimulus movement to the wheel position we need to compute the displacement of the wheel relative to its position at the time at which the stimulus first appears on the screen (stimOn_times)."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "dc95dd15",
+   "metadata": {
+    "ExecuteTime": {
+     "end_time": "2024-04-24T08:31:09.096554Z",
+     "start_time": "2024-04-24T08:31:09.094108Z"
+    }
+   },
+   "outputs": [],
+   "source": [
+    "# Find the index of the wheel timestamps when the stimulus was presented (stimOn_times)\n",
+    "idx_stim = np.searchsorted(wh_times, trials['stimOn_times'][tr_idx])\n",
+    "# Normalise the wh_pos to the position at stimOn\n",
+    "wh_pos = wh_pos - wh_pos[idx_stim]"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "3f3c1843",
+   "metadata": {},
+   "source": [
+    "## Step 6: Convert wheel displacement to screen position"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "93ebf279",
+   "metadata": {},
+   "source": [
+    "Now that we have computed the displacement of the wheel relative to when the stimulus was presented, we can use the wheel gain to convert this into degrees of the visual stimulus on the screen."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "27b9e495",
+   "metadata": {
+    "ExecuteTime": {
+     "end_time": "2024-04-24T08:31:09.118395Z",
+     "start_time": "2024-04-24T08:31:09.098359Z"
+    }
+   },
+   "outputs": [],
+   "source": [
+    "GAIN_MM_TO_SC_DEG = 4\n",
+    "screen_pos = wh_pos * GAIN_MM_TO_SC_DEG"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "fea32dca",
+   "metadata": {},
+   "source": [
+    "## Step 7: Linking the screen position to task events"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "e0189229",
+   "metadata": {},
+   "source": [
+    "The screen_pos values that we have above have been computed over the whole trial interval, from trial start to trial end. The stimlus on the screen however is can only move with the wheel between the time at which the stimlus is presented (stimOn_times) and the time at which a choice is made (response_times). After a response is made the visual stimulus then remains in a fixed position until the it disappears from the screen (stimOff_times)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "11d86179",
+   "metadata": {
+    "ExecuteTime": {
+     "end_time": "2024-04-24T08:31:09.229711Z",
+     "start_time": "2024-04-24T08:31:09.227047Z"
+    }
+   },
+   "outputs": [],
+   "source": [
+    "# Find the index of the wheel timestamps when the stimulus was presented (stimOn_times)\n",
+    "idx_stim = np.searchsorted(wh_times, trials['stimOn_times'][tr_idx])\n",
+    "# Find the index of the wheel timestamps when the choice was made (response_times)\n",
+    "idx_res = np.searchsorted(wh_times, trials['response_times'][tr_idx])\n",
+    "# Find the index of the wheel timestamps when the stimulus disappears (stimOff_times)\n",
+    "idx_off = np.searchsorted(wh_times, trials['stimOff_times'][tr_idx])\n",
+    "\n",
+    "# Before stimOn no stimulus on screen, so set to nan\n",
+    "screen_pos[0:idx_stim] = np.nan\n",
+    "# Stimulus is in a fixed position between response time and stimOff time\n",
+    "screen_pos[idx_res:idx_off] = screen_pos[idx_res]\n",
+    "# After stimOff no stimulus on screen, so set to nan\n",
+    "screen_pos[idx_off:] = np.nan"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "781fe47f",
+   "metadata": {},
+   "source": [
+    "The screen_pos values are given relative to stimOn times but the stimulus appears at either -35° or 35° depending on the stimlus side. 
We therefore need to apply this offset to our screen position" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "b89e9e87", + "metadata": { + "ExecuteTime": { + "end_time": "2024-04-24T08:31:09.412405Z", + "start_time": "2024-04-24T08:31:09.410332Z" + } + }, + "outputs": [], + "source": [ + "# offset depends on whether stimulus was shown on left or right of screen\n", + "\n", + "ONSET_OFFSET = 35\n", + "\n", + "if np.isnan(trials['contrastLeft'][tr_idx]):\n", + " # The stimulus appeared on the right\n", + " # Values for the screen position will be >0\n", + " offset = ONSET_OFFSET # The stimulus starts at +35 and goes to --> 0\n", + " screen_pos = -1 * screen_pos + offset\n", + "else:\n", + " # The stimulus appeared on the left\n", + " # Values for the screen position will be <0\n", + " offset = -1 * ONSET_OFFSET # The stimulus starts at -35 and goes to --> 0\n", + " screen_pos = -1 * screen_pos + offset" + ] + }, + { + "cell_type": "markdown", + "id": "7fc5d580", + "metadata": {}, + "source": [ + "## Step 8: Plotting our results" + ] + }, + { + "cell_type": "markdown", + "id": "ee7874ec", + "metadata": {}, + "source": [ + "Finally we can plot our results to see if they make sense" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "5e3fb652", + "metadata": { + "ExecuteTime": { + "end_time": "2024-04-24T08:31:09.772469Z", + "start_time": "2024-04-24T08:31:09.418855Z" + } + }, + "outputs": [], + "source": [ + "import matplotlib.pyplot as plt\n", + "fig, axs = plt.subplots(2, 1, sharex=True, height_ratios=[1, 3])\n", + "\n", + "# On top axis plot the wheel displacement\n", + "axs[0].plot(wh_times, wh_pos, 'k')\n", + "axs[0].vlines([trials['stimOn_times'][tr_idx], trials['response_times'][tr_idx]],\n", + " 0, 1, transform=axs[0].get_xaxis_transform(), colors='k', linestyles='dashed')\n", + "axs[0].text(trials['stimOn_times'][tr_idx], 1.01, 'stimOn', c='k', rotation=20,\n", + " rotation_mode='anchor', ha='left', transform=axs[0].get_xaxis_transform())\n", + "axs[0].text(trials['response_times'][tr_idx], 1.01, 'response', c='k', rotation=20,\n", + " rotation_mode='anchor', ha='left', transform=axs[0].get_xaxis_transform())\n", + "axs[0].set_ylabel('Wheel displacement (mm)')\n", + "\n", + "\n", + "# On bottom axis plot the screen position\n", + "axs[1].plot(wh_times, screen_pos, 'k')\n", + "axs[1].vlines([trials['stimOn_times'][tr_idx], trials['response_times'][tr_idx]],\n", + " 0, 1, transform=axs[1].get_xaxis_transform(), colors='k', linestyles='dashed')\n", + "axs[1].set_xlim(trials['intervals'][tr_idx])\n", + "# black dotted lines indicate starting stimulus position\n", + "axs[1].hlines([-35, 35], *axs[1].get_xlim(), colors='k', linestyles='dotted')\n", + "# green line indicates threshold for good trial\n", + "axs[1].hlines([0], *axs[1].get_xlim(), colors='g', linestyles='solid')\n", + "# red lines indicate threshold for incorrect trial\n", + "axs[1].hlines([-70, 70], *axs[1].get_xlim(), colors='r', linestyles='solid')\n", + "\n", + "axs[1].set_ylim([-90, 90])\n", + "axs[1].set_xlim(trials['stimOn_times'][tr_idx] - 0.1, trials['response_times'][tr_idx] + 0.1)\n", + "axs[1].set_ylabel('Screen position (°)')\n", + "axs[1].set_xlabel('Time in session (s)')\n", + "fig.suptitle(f\"ContrastLeft: {trials['contrastLeft'][tr_idx]}, ContrastRight: {trials['contrastRight'][tr_idx]},\"\n", + " f\"FeedbackType {trials['feedbackType'][tr_idx]}\")\n", + "\n" + ] + } + ], + "metadata": { + "celltoolbar": "Edit Metadata", + "kernelspec": { + "display_name": "Python 3 
(ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.9.16" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} diff --git a/examples/exploring_data/data_download.ipynb b/examples/exploring_data/data_download.ipynb index 3e4b8961a..e0976fe02 100644 --- a/examples/exploring_data/data_download.ipynb +++ b/examples/exploring_data/data_download.ipynb @@ -3,7 +3,9 @@ { "cell_type": "code", "execution_count": null, - "metadata": {}, + "metadata": { + "nbsphinx": "hidden" + }, "outputs": [], "source": [ "# Turn off logging and disable tqdm this is a hidden cell on docs page\n", @@ -19,7 +21,6 @@ { "cell_type": "markdown", "metadata": { - "collapsed": false, "nbsphinx": "hidden" }, "source": [ @@ -32,9 +33,7 @@ }, { "cell_type": "markdown", - "metadata": { - "collapsed": false - }, + "metadata": {}, "source": [ "## Installation\n", "### Environment\n", @@ -74,9 +73,7 @@ }, { "cell_type": "markdown", - "metadata": { - "collapsed": false - }, + "metadata": {}, "source": [ "## Explore and download data using the ONE-api\n", "\n", @@ -94,9 +91,7 @@ }, { "cell_type": "markdown", - "metadata": { - "collapsed": false - }, + "metadata": {}, "source": [ "### Launch the ONE-api\n", "Prior to do any searching / downloading, you need to instantiate ONE :" @@ -114,9 +109,7 @@ }, { "cell_type": "markdown", - "metadata": { - "collapsed": false - }, + "metadata": {}, "source": [ "### List all sessions available\n", "Once ONE is instantiated, you can use the REST ONE-api to list all sessions publicly available:" @@ -133,9 +126,7 @@ }, { "cell_type": "markdown", - "metadata": { - "collapsed": false - }, + "metadata": {}, "source": [ "Each session is given a unique identifier (eID); this eID is what you will use to download data for a given session:" ] @@ -152,9 +143,7 @@ }, { "cell_type": "markdown", - "metadata": { - "collapsed": false - }, + "metadata": {}, "source": [ "### Find a session that has a dataset of interest\n", "Not all sessions will have all the datasets available. As such, it may be important for you to filter and search for only sessions with particular datasets of interest.\n", @@ -175,9 +164,7 @@ }, { "cell_type": "markdown", - "metadata": { - "collapsed": false - }, + "metadata": {}, "source": [ "[Click here](https://int-brain-lab.github.io/ONE/notebooks/one_search/one_search.html) for a complete guide to searching using ONE.\n", "\n", @@ -200,9 +187,7 @@ }, { "cell_type": "markdown", - "metadata": { - "collapsed": false - }, + "metadata": {}, "source": [ "You can use the tag to restrict your searches to a specific data release and as a filter when browsing the public database:" ] @@ -230,9 +215,7 @@ }, { "cell_type": "markdown", - "metadata": { - "collapsed": false - }, + "metadata": {}, "source": [ "### Downloading data using the ONE-api\n", "Once sessions of interest are identified with the unique identifier (eID), all files ready for analysis are found in the **alf** collection:" @@ -258,9 +241,7 @@ }, { "cell_type": "markdown", - "metadata": { - "collapsed": false - }, + "metadata": {}, "source": [ "To download the spike sorting data we need to find out which probe label (`probeXX`) was used for this session. This can be done by finding the probe insertion associated with this session." 
] @@ -291,9 +272,7 @@ }, { "cell_type": "markdown", - "metadata": { - "collapsed": false - }, + "metadata": {}, "source": [ "### Loading different objects\n", "\n", @@ -319,18 +298,14 @@ }, { "cell_type": "markdown", - "metadata": { - "collapsed": false - }, + "metadata": {}, "source": [ "Examples for loading different objects can be found in the following tutorials [here](https://int-brain-lab.github.io/iblenv/loading_examples.html)." ] }, { "cell_type": "markdown", - "metadata": { - "collapsed": false - }, + "metadata": {}, "source": [ "### Advanced examples\n", "#### Example 1: Searching for sessions from a specific lab\n", @@ -350,9 +325,7 @@ }, { "cell_type": "markdown", - "metadata": { - "collapsed": false - }, + "metadata": {}, "source": [ "However, if you wanted to query only the data for a given lab, it might be most judicious to first\n", "know the list of all labs available, select an arbitrary lab name from it, and query the specific sessions from it." @@ -382,7 +355,6 @@ { "cell_type": "markdown", "metadata": { - "collapsed": false, "pycharm": { "name": "#%% md\n" } @@ -412,8 +384,9 @@ } ], "metadata": { + "celltoolbar": "Edit Metadata", "kernelspec": { - "display_name": "Python 3", + "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, @@ -427,9 +400,9 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.11.6" + "version": "3.9.16" } }, "nbformat": 4, - "nbformat_minor": 0 + "nbformat_minor": 1 } diff --git a/examples/loading_data/loading_passive_data.ipynb b/examples/loading_data/loading_passive_data.ipynb index 5d3e03114..68852fee0 100644 --- a/examples/loading_data/loading_passive_data.ipynb +++ b/examples/loading_data/loading_passive_data.ipynb @@ -17,10 +17,14 @@ }, "outputs": [], "source": [ - "# Turn off logging, this is a hidden cell on docs page\n", + "# Turn off logging and disable tqdm this is a hidden cell on docs page\n", "import logging\n", + "import os\n", + "\n", "logger = logging.getLogger('ibllib')\n", - "logger.setLevel(logging.CRITICAL)" + "logger.setLevel(logging.CRITICAL)\n", + "\n", + "os.environ[\"TQDM_DISABLE\"] = \"1\"" ] }, { @@ -67,9 +71,7 @@ "cell_type": "code", "execution_count": null, "id": "2b807296", - "metadata": { - "ibl_execute": false - }, + "metadata": {}, "outputs": [], "source": [ "from one.api import ONE\n", @@ -92,9 +94,7 @@ "cell_type": "code", "execution_count": null, "id": "811e3533", - "metadata": { - "ibl_execute": false - }, + "metadata": {}, "outputs": [], "source": [ "from brainbox.io.one import load_passive_rfmap\n", @@ -114,9 +114,7 @@ "cell_type": "code", "execution_count": null, "id": "c65f1ca8", - "metadata": { - "ibl_execute": false - }, + "metadata": {}, "outputs": [], "source": [ "# Load visual stimulus task replay events\n", @@ -167,9 +165,7 @@ "cell_type": "code", "execution_count": null, "id": "7552f7c5", - "metadata": { - "ibl_execute": false - }, + "metadata": {}, "outputs": [], "source": [ "# Find first probe insertion for session\n", @@ -207,9 +203,7 @@ "cell_type": "code", "execution_count": null, "id": "eebdc9af", - "metadata": { - "ibl_execute": false - }, + "metadata": {}, "outputs": [], "source": [ "# Find out at what times each voxel on the screen was turned 'on' (grey to white) or turned 'off' (grey to black)\n", @@ -236,9 +230,9 @@ "metadata": { "celltoolbar": "Edit Metadata", "kernelspec": { - "display_name": "Python [conda env:iblenv] *", + "display_name": "Python 3 (ipykernel)", "language": "python", - "name": 
"conda-env-iblenv-py" + "name": "python3" }, "language_info": { "codemirror_mode": { @@ -250,7 +244,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.11.6" + "version": "3.9.16" } }, "nbformat": 4, diff --git a/examples/loading_data/loading_raw_ephys_data.ipynb b/examples/loading_data/loading_raw_ephys_data.ipynb index 30b2d3b42..788d0ff57 100644 --- a/examples/loading_data/loading_raw_ephys_data.ipynb +++ b/examples/loading_data/loading_raw_ephys_data.ipynb @@ -70,7 +70,6 @@ "cell_type": "markdown", "id": "541898a2492f2c14", "metadata": { - "collapsed": false, "jupyter": { "outputs_hidden": false } @@ -97,6 +96,7 @@ "metadata": {}, "outputs": [], "source": [ + "%%capture\n", "stimOn_times = one.load_object(ssl.eid, 'trials', collection='alf')['stimOn_times']\n", "event_no = 100\n", "# timepoint in recording to stream, as per the experiment main clock \n", @@ -185,7 +185,6 @@ "cell_type": "markdown", "id": "d7dba84029780138", "metadata": { - "collapsed": false, "jupyter": { "outputs_hidden": false } @@ -359,6 +358,7 @@ "metadata": {}, "outputs": [], "source": [ + "%%capture\n", "from one.api import ONE\n", "from brainbox.io.spikeglx import Streamer\n", "\n", @@ -481,7 +481,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.11.6" + "version": "3.9.16" } }, "nbformat": 4, diff --git a/examples/loading_data/loading_raw_video_data.ipynb b/examples/loading_data/loading_raw_video_data.ipynb index 8b8c9eb9e..959e85dc7 100644 --- a/examples/loading_data/loading_raw_video_data.ipynb +++ b/examples/loading_data/loading_raw_video_data.ipynb @@ -293,7 +293,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.11.6" + "version": "3.9.16" } }, "nbformat": 4, From 8c8b3d79e08f8af3da1aab1ad673f388f3cf98bb Mon Sep 17 00:00:00 2001 From: Mayo Faulkner Date: Wed, 24 Apr 2024 11:04:33 +0100 Subject: [PATCH 2/5] data -> dataset --- examples/exploring_data/data_download.ipynb | 10 ++++++---- 1 file changed, 6 insertions(+), 4 deletions(-) diff --git a/examples/exploring_data/data_download.ipynb b/examples/exploring_data/data_download.ipynb index e0976fe02..dcc050ea9 100644 --- a/examples/exploring_data/data_download.ipynb +++ b/examples/exploring_data/data_download.ipynb @@ -159,7 +159,7 @@ "outputs": [], "source": [ "# Find sessions that have spikes.times datasets\n", - "sessions_with_spikes = one.search(project='brainwide', data='spikes.times')" + "sessions_with_spikes = one.search(project='brainwide', dataset='spikes.times')" ] }, { @@ -209,6 +209,7 @@ "insertions_rep_site = one.alyx.rest('insertions', 'list', django=ins_str_query)\n", "\n", "# To return to the full cache containing an index of all IBL experiments\n", + "%%capture\n", "ONE.cache_clear()\n", "one = ONE(base_url='https://openalyx.internationalbrainlab.org')" ] @@ -228,7 +229,7 @@ "outputs": [], "source": [ "# Find an example session with data\n", - "eid, *_ = one.search(project='brainwide', data='alf/')\n", + "eid, *_ = one.search(project='brainwide', dataset='alf/')\n", "# List datasets associated with a session, in the alf collection\n", "datasets = one.list_datasets(eid, collection='alf*')\n", "\n", @@ -254,7 +255,7 @@ "source": [ "# Find an example session with spike data\n", "# Note: Restricting by task and project makes searching for data much quicker\n", - "eid, *_ = one.search(project='brainwide', data='spikes', task='ephys')\n", + "eid, *_ = one.search(project='brainwide', 
dataset='spikes', task='ephys')\n", "\n", "# Data for each probe insertion are stored in the alf/probeXX folder.\n", "datasets = one.list_datasets(eid, collection='alf/probe*')\n", @@ -319,6 +320,7 @@ "metadata": {}, "outputs": [], "source": [ + "%%capture\n", "one.load_cache(tag='2022_Q2_IBL_et_al_RepeatedSite')\n", "sessions_lab = one.search(lab='mrsicflogellab')" ] @@ -349,7 +351,7 @@ "lab_name = list(labs)[0]\n", "\n", "# Searching for RS sessions with specific lab name\n", - "sessions_lab = one.search(data='spikes', lab=lab_name)" + "sessions_lab = one.search(dataset='spikes', lab=lab_name)" ] }, { From 982d2badef29ec8818cbfd72a5eb14f48b810c99 Mon Sep 17 00:00:00 2001 From: Mayo Faulkner Date: Wed, 24 Apr 2024 11:14:49 +0100 Subject: [PATCH 3/5] capture at top of cell --- examples/exploring_data/data_download.ipynb | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/examples/exploring_data/data_download.ipynb b/examples/exploring_data/data_download.ipynb index dcc050ea9..bfaca800f 100644 --- a/examples/exploring_data/data_download.ipynb +++ b/examples/exploring_data/data_download.ipynb @@ -198,6 +198,7 @@ "metadata": {}, "outputs": [], "source": [ + "%%capture\n", "# Note that tags are associated with datasets originally\n", "# You can load a local index of sessions and datasets associated with a specific data release\n", "one.load_cache(tag='2022_Q2_IBL_et_al_RepeatedSite')\n", @@ -209,7 +210,6 @@ "insertions_rep_site = one.alyx.rest('insertions', 'list', django=ins_str_query)\n", "\n", "# To return to the full cache containing an index of all IBL experiments\n", - "%%capture\n", "ONE.cache_clear()\n", "one = ONE(base_url='https://openalyx.internationalbrainlab.org')" ] From cc244c08a2139201270a99f99876ab2d29aeceb0 Mon Sep 17 00:00:00 2001 From: Mayo Faulkner Date: Wed, 24 Apr 2024 11:49:10 +0100 Subject: [PATCH 4/5] fixes to layout --- brainbox/examples/docs_wheel_screen_stimulus.ipynb | 4 ++-- examples/loading_data/loading_raw_ephys_data.ipynb | 1 + 2 files changed, 3 insertions(+), 2 deletions(-) diff --git a/brainbox/examples/docs_wheel_screen_stimulus.ipynb b/brainbox/examples/docs_wheel_screen_stimulus.ipynb index 314346cb6..a7c0fe037 100644 --- a/brainbox/examples/docs_wheel_screen_stimulus.ipynb +++ b/brainbox/examples/docs_wheel_screen_stimulus.ipynb @@ -20,8 +20,8 @@ "Below we walk you through an example of how to compute the continuous screen position for a given trial.\n", "\n", "For this anaylsis we need access to information about the wheel radius and the wheel gain (visual degrees moved on screen per mm of wheel movement).\n", - "- Wheel radius = 3.1 cm\n", - "- Wheel gain = 4 (deg / mm)" + "* Wheel radius = 3.1 cm\n", + "* Wheel gain = 4 (deg / mm)" ] }, { diff --git a/examples/loading_data/loading_raw_ephys_data.ipynb b/examples/loading_data/loading_raw_ephys_data.ipynb index 788d0ff57..458ab82ce 100644 --- a/examples/loading_data/loading_raw_ephys_data.ipynb +++ b/examples/loading_data/loading_raw_ephys_data.ipynb @@ -50,6 +50,7 @@ "metadata": {}, "outputs": [], "source": [ + "%%capture\n", "from one.api import ONE\n", "from brainbox.io.one import SpikeSortingLoader\n", "\n", From 9e63992ea5fcec8af6e5f986871f924d7e31ec12 Mon Sep 17 00:00:00 2001 From: Mayo Faulkner Date: Wed, 24 Apr 2024 16:42:36 +0100 Subject: [PATCH 5/5] extra info about visual azimuth --- .../examples/docs_wheel_screen_stimulus.ipynb | 35 +++++++++---------- 1 file changed, 17 insertions(+), 18 deletions(-) diff --git a/brainbox/examples/docs_wheel_screen_stimulus.ipynb 
b/brainbox/examples/docs_wheel_screen_stimulus.ipynb
index a7c0fe037..018e3c012 100644
--- a/brainbox/examples/docs_wheel_screen_stimulus.ipynb
+++ b/brainbox/examples/docs_wheel_screen_stimulus.ipynb
@@ -13,15 +13,14 @@
    "id": "cd4144b5",
    "metadata": {},
    "source": [
-    "In the IBL task a visual stimulus (Gabor patch) appears on the left (-35°) or right (+35°) of a screen and the mouse must use a wheel to bring the stimulus to the centre of the screen (0°). If the mouse moves the wheel in the correct direction, the trial is deemed correct and the mouse receives a reward, if however, the mouse moves the wheel in the wrong direction and the stimulus goes off the screen, this is an error trial and the mouse receives a white noise error tone. \n",
+    "In the IBL task a visual stimulus (Gabor patch of size 7°²) appears on the left (-35°) or right (+35°) of a screen and the mouse must use a wheel to bring the stimulus to the centre of the screen (0°). If the mouse moves the wheel in the correct direction, the trial is deemed correct and the mouse receives a reward. If, however, the mouse moves the wheel such that the stimulus travels 35° in the wrong direction and goes off the screen, this is an error trial and the mouse receives a white noise error tone. The screen was positioned 8 cm in front of the animal and centred relative to the position of the eyes, covering ~102° of visual azimuth. In the case that the mouse moves the stimulus 35° in the wrong direction, the stimulus therefore remains visible for the first 20° of this movement and is off the screen for the rest.\n",
     "\n",
-    "For some analysis it may be useful to know the position of the visual stimulus on the screen during a trial. While there is no direct read out of the location of the stimulus on the screen, as the stimulus is coupled to the wheel, we can infer the position using the wheel position. \n",
+    "For some analyses it may be useful to know the position of the visual stimulus on the screen during a trial. While there is no direct read out of the location of the stimulus on the screen, as the stimulus is coupled to the wheel, we can infer the position using the wheel position. \n",
     "\n",
-    "Below we walk you through an example of how to compute the continuous screen position for a given trial.\n",
+    "Below we walk you through an example of how to compute the continuous stimulus position on the screen for a given trial.\n",
     "\n",
-    "For this anaylsis we need access to information about the wheel radius and the wheel gain (visual degrees moved on screen per mm of wheel movement).\n",
-    "* Wheel radius = 3.1 cm\n",
-    "* Wheel gain = 4 (deg / mm)"
+    "For this analysis we need access to information about the wheel radius (3.1 cm) and the wheel gain (visual degrees moved on screen per mm of wheel movement). The wheel gain changes throughout the training period (see our [behavior paper](https://doi.org/10.7554/eLife.63711) for more information) but for the majority of sessions is set at 4°/mm.\n",
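+    "\n",
+    "As a worked example of these numbers: with a gain of 4°/mm, bringing the stimulus from ±35° to the centre of the screen requires 35 / 4 = 8.75 mm of movement at the wheel surface, which for a 3.1 cm radius wheel corresponds to 8.75 / 31 ≈ 0.28 rad, or roughly 16° of wheel rotation."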
    ]
   },
   {
@@ -221,7 +220,7 @@
    "source": [
     "# Find the index of the wheel timestamps when the stimulus was presented (stimOn_times)\n",
     "idx_stim = np.searchsorted(wh_times, trials['stimOn_times'][tr_idx])\n",
-    "# Normalise the wh_pos to the position at stimOn\n",
+    "# Zero the wh_pos to the position at stimOn\n",
     "wh_pos = wh_pos - wh_pos[idx_stim]"
    ]
   },
@@ -254,7 +253,7 @@
    "outputs": [],
    "source": [
     "GAIN_MM_TO_SC_DEG = 4\n",
-    "screen_pos = wh_pos * GAIN_MM_TO_SC_DEG"
+    "stim_pos = wh_pos * GAIN_MM_TO_SC_DEG"
    ]
   },
   {
@@ -270,7 +269,7 @@
    "id": "e0189229",
    "metadata": {},
    "source": [
-    "The screen_pos values that we have above have been computed over the whole trial interval, from trial start to trial end. The stimlus on the screen however is can only move with the wheel between the time at which the stimlus is presented (stimOn_times) and the time at which a choice is made (response_times). After a response is made the visual stimulus then remains in a fixed position until the it disappears from the screen (stimOff_times)"
+    "The stim_pos values that we have above have been computed over the whole trial interval, from trial start to trial end. The stimulus on the screen, however, can only move with the wheel between the time at which the stimulus is presented (stimOn_times) and the time at which a choice is made (response_times). After a response is made the visual stimulus remains in a fixed position until it disappears from the screen (stimOff_times)"
    ]
   },
   {
@@ -293,11 +292,11 @@
    "source": [
     "# Find the index of the wheel timestamps when the stimulus was presented (stimOn_times)\n",
     "idx_stim = np.searchsorted(wh_times, trials['stimOn_times'][tr_idx])\n",
     "# Find the index of the wheel timestamps when the choice was made (response_times)\n",
     "idx_res = np.searchsorted(wh_times, trials['response_times'][tr_idx])\n",
     "# Find the index of the wheel timestamps when the stimulus disappears (stimOff_times)\n",
     "idx_off = np.searchsorted(wh_times, trials['stimOff_times'][tr_idx])\n",
     "\n",
     "# Before stimOn no stimulus on screen, so set to nan\n",
-    "screen_pos[0:idx_stim] = np.nan\n",
+    "stim_pos[0:idx_stim] = np.nan\n",
     "# Stimulus is in a fixed position between response time and stimOff time\n",
-    "screen_pos[idx_res:idx_off] = screen_pos[idx_res]\n",
+    "stim_pos[idx_res:idx_off] = stim_pos[idx_res]\n",
     "# After stimOff no stimulus on screen, so set to nan\n",
-    "screen_pos[idx_off:] = np.nan"
+    "stim_pos[idx_off:] = np.nan"
    ]
   },
   {
@@ -305,7 +304,7 @@
    "id": "781fe47f",
    "metadata": {},
    "source": [
-    "The screen_pos values are given relative to stimOn times but the stimulus appears at either -35° or 35° depending on the stimlus side. We therefore need to apply this offset to our screen position"
+    "The stim_pos values are given relative to stimOn times but the stimulus appears at either -35° or 35° depending on the stimulus side. We therefore need to apply this offset to our stimulus position. We also need to account for the convention that increasing wheel position indicates a counter-clockwise movement and therefore a left-ward (-ve) movement of the stimulus in visual azimuth.\n",
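+    "\n",
+    "Putting the last two steps together: between stimOn and the response, the stimulus azimuth can be written as azimuth(t) = offset - gain × Δwheel(t), where offset = ±35°, gain = 4°/mm and Δwheel(t) is the wheel displacement in mm relative to stimOn."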
] }, { @@ -328,12 +327,12 @@ " # The stimulus appeared on the right\n", " # Values for the screen position will be >0\n", " offset = ONSET_OFFSET # The stimulus starts at +35 and goes to --> 0\n", - " screen_pos = -1 * screen_pos + offset\n", + " stim_pos = -1 * stim_pos + offset\n", "else:\n", " # The stimulus appeared on the left\n", " # Values for the screen position will be <0\n", " offset = -1 * ONSET_OFFSET # The stimulus starts at -35 and goes to --> 0\n", - " screen_pos = -1 * screen_pos + offset" + " stim_pos = -1 * stim_pos + offset" ] }, { @@ -378,8 +377,8 @@ "axs[0].set_ylabel('Wheel displacement (mm)')\n", "\n", "\n", - "# On bottom axis plot the screen position\n", - "axs[1].plot(wh_times, screen_pos, 'k')\n", + "# On bottom axis plot the stimulus position\n", + "axs[1].plot(wh_times, stim_pos, 'k')\n", "axs[1].vlines([trials['stimOn_times'][tr_idx], trials['response_times'][tr_idx]],\n", " 0, 1, transform=axs[1].get_xaxis_transform(), colors='k', linestyles='dashed')\n", "axs[1].set_xlim(trials['intervals'][tr_idx])\n", @@ -392,7 +391,7 @@ "\n", "axs[1].set_ylim([-90, 90])\n", "axs[1].set_xlim(trials['stimOn_times'][tr_idx] - 0.1, trials['response_times'][tr_idx] + 0.1)\n", - "axs[1].set_ylabel('Screen position (°)')\n", + "axs[1].set_ylabel('Visual azimuth angle (°)')\n", "axs[1].set_xlabel('Time in session (s)')\n", "fig.suptitle(f\"ContrastLeft: {trials['contrastLeft'][tr_idx]}, ContrastRight: {trials['contrastRight'][tr_idx]},\"\n", " f\"FeedbackType {trials['feedbackType'][tr_idx]}\")\n",