Commit 97704bc: touchups
epress12 committed Jan 8, 2021
1 parent 5df4083 commit 97704bc
Showing 1 changed file with 40 additions and 22 deletions.
vignettes/visual-perception-functions.Rmd

## Introduction

Studies of visually guided locomotion in birds, insects or even humans often involve data gathered from motion capture technologies such as Optitrack's [(Motive)](https://optitrack.com/software/motive/), or the Straw Lab's [(Flydra)](https://github.com/strawlab/flydra). For these experiments, it is important to understand how visual stimuli influence behaviour. While it is not possible to measure how subjects directly perceive visual stimuli, it is possible to use motion capture data to calculate estimates of stimulus properties as they are perceived by the subject. With the tools available in pathviewR, researchers in ethology or psychology can analyse both stimulus and response under natural locomotor conditions.

*Estimates of visual perception inherently rest on several assumptions, which are discussed below. We welcome suggestions and aim to address any assumptions that limit the accuracy of these estimates*.

To bridge the gap between objective measures of subject position and estimates of subjective stimulus perception, we can begin by calculating the angle a visual pattern subtends on the subject's retina - the visual angle (θ).

Visual angles can be calculated provided there is information about the physical size of the pattern and its distance from the subject's retina. Because researchers can control or measure the size of a pattern, and we can calculate the distance between the subject and the pattern using motion capture data, we can further calculate the visual angle produced by patterns in the visual scene. Therefore, we first need to calculate the distance from the subject's retina to the pattern.

*Currently, we assume the subject's gaze is directly frontal or lateral to the face, in effect estimating image properties at single points in the frontal and lateral fields of view, respectively. We currently calculate distances to the center of the subject's head rather than the position of the retina - future versions of pathviewR will include features addressing these limitations*.


```{r, echo=FALSE, out.width="100%", fig.cap="Visual angles can be calculated using the size of a visual pattern (`stim_param`) and the distance to the pattern. Larger patterns at shorter distances produce larger visual angles. For a given distance, gratings produce a constant visual angle from one perspectival orientation while dot fields produce constant visual angles from many orientations."}
knitr::include_graphics("https://github.com/vbaliga/pathviewR/raw/master/images/stim_param_angle.jpeg")
```
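
For intuition, the geometry in the figure reduces to a simple formula: a pattern of physical size s viewed from distance d subtends a visual angle θ = 2·arctan(s / 2d). Below is a minimal sketch of this calculation, using a helper function written here purely for illustration (pathviewR's own functions are introduced later in this vignette):

```{r}
## Visual angle (in degrees) subtended by a pattern of size stim_param (m)
## viewed from distance dist (m); an illustrative helper, not pathviewR's API
visual_angle_deg <- function(stim_param, dist) {
  2 * atan(stim_param / (2 * dist)) * (180 / pi)
}

## A 0.05m dot subtends larger visual angles at shorter distances
visual_angle_deg(stim_param = 0.05, dist = c(1, 0.5, 0.1))
## approx 2.9, 5.7, 28.1 degrees
```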
```{r package_loading, message=FALSE, warning=FALSE}
library(pathviewR)
library(ggplot2)
library(tidyverse)
```


## Data preparation
Data objects must be prepared via the import and cleaning pipeline described in [the Data Import and Cleaning vignette](https://vbaliga.github.io/pathviewR/articles/data-import-cleaning.html) prior to use with these functions. For a detailed description of the importing and cleaning functions, and when to use them, please see the linked vignette.

Let's work with a few example datasets included in the package. `pathviewR_motive_example_data.csv` is a .csv file exported from Motive. `pathviewR_flydra_example_data.mat` is a .mat file exported from Flydra. For more coarse-grained data cleaning tasks, `pathviewR` contains an all-in-one cleaning function, `clean_viewr()`. We will use this function for the following examples.


```{r}
## Import each example dataset, then run it through the all-in-one cleaning
## pipeline. The argument values shown here are illustrative; see the Data
## Import and Cleaning vignette for how to choose them for your own data.
motive_full <-
  read_motive_csv(system.file("extdata", "pathviewR_motive_example_data.csv",
                              package = "pathviewR")) %>%
  clean_viewr(desired_percent = 50,
              max_frame_gap = "autodetect")

flydra_full <-
  read_flydra_mat(system.file("extdata", "pathviewR_flydra_example_data.mat",
                              package = "pathviewR"),
                  subject_name = "birdie_wooster") %>%
  clean_viewr(desired_percent = 50)
```



## Add experiment information with `insert_treatments()`
Now that our objects have been cleaned, we must use `insert_treatments()` to add information about the experiments, including relevant properties of the visual stimulus and experimental tunnel that are necessary for calculating visual perceptions.

*`pathviewR` currently supports rectangular (box) or v-shaped experimental tunnels, though we are open to including additional tunnel configurations*.

#### V-shaped tunnel example
The data within `motive_full` were collected from birds flying through a 3m long v-shaped tunnel in which the origin `(0,0,0)` was set to the height of the perches, 0.3855m above the vertex, which was angled at 90˚. The visual stimuli on the positive and negative walls of the tunnel (where `position_width` values are > 0 and < 0, respectively) were stationary dot fields; each dot was 0.05m in diameter. The visual stimuli on the positive and negative end walls of the tunnel (where `position_length` is > 0 and < 0, respectively) were dot fields with dots 0.1m in diameter. This treatment was referred to as `"latB"`.
```{r}
## Argument values match the experiment described above
motive_treatments <-
  motive_full %>%
  insert_treatments(tunnel_config = "v",
                    perch_2_vertex = 0.3855,
                    vertex_angle = 90,
                    tunnel_length = 3,
                    stim_param_lat_pos = 0.05,
                    stim_param_lat_neg = 0.05,
                    stim_param_end_pos = 0.1,
                    stim_param_end_neg = 0.1,
                    treatment = "latB")
names(motive_treatments)
```
`motive_treatments` now has the variables `tunnel_config`, `perch_2_vertex`, `vertex_angle`, `tunnel_length`, `stim_param_lat_pos`, `stim_param_lat_neg`, `stim_param_end_pos`, and `stim_param_end_neg`, which are needed to calculate visual angles. The variable `treatment` has also been included, and all of this information has been stored in the object's metadata.
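
Because this information is attached to the object as attributes, it can be inspected at any point with base R (the exact attribute names are whatever `insert_treatments()` records):

```{r}
## List the attributes that store the experiment information
names(attributes(motive_treatments))
```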


#### Box-shaped tunnel example
The data within `flydra_full` were collected in a box-shaped experimental tunnel, so the tunnel is described by its width and length rather than by perch and vertex measurements. The call follows the same pattern (the numeric values below are illustrative):

```{r}
## Tunnel dimensions and stimulus sizes here are illustrative placeholders
flydra_treatments <-
  flydra_full %>%
  insert_treatments(tunnel_config = "box",
                    tunnel_width = 1,
                    tunnel_length = 2,
                    stim_param_lat_pos = 0.1,
                    stim_param_lat_neg = 0.1,
                    stim_param_end_pos = 0.3,
                    stim_param_end_neg = 0.3,
                    treatment = "latB")
```
`flydra_treatments` similarly has the variables `tunnel_config`, `tunnel_width`, `tunnel_length`, `stim_param_lat_pos`, `stim_param_lat_neg`, `stim_param_end_pos`, `stim_param_end_neg` and `treatment`.



## Calculating visual angles
### Start by calculating distances to visual stimuli

To estimate the visual angles perceived by the subject as it moves through the tunnel, we first need to calculate the distance between the subject and the visual stimuli. For this, we will use `calc_min_dist_v()` or `calc_min_dist_box()`, depending on the configuration of the experimental tunnel. These functions calculate the minimum distance between the subject and the surface displaying a visual pattern, thereby maximizing the estimated visual angles.

For v-shaped tunnels, several internal calculations are required and can be added to the output object with `simplify_output = FALSE`. Otherwise, the minimum distances are computed to the lateral walls and to the end wall the subject is facing. A rough sketch of the box-tunnel case is shown below.
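
In a box-shaped tunnel, the minimum distance to each lateral wall is just the tunnel's half-width plus or minus the subject's distance from the midline (v-shaped tunnels additionally require trigonometry to find the perpendicular distance to the slanted walls). Here is a minimal sketch of the box case, written for illustration rather than taken from `pathviewR`'s internals:

```{r}
## Perpendicular distances from the subject to the two lateral walls of a
## box-shaped tunnel; pathviewR's calc_min_dist_box() computes these (and
## the distance to the facing end wall) as new columns on the viewr object
min_dist_lateral <- function(position_width, tunnel_width) {
  half_width <- tunnel_width / 2
  data.frame(min_dist_pos = half_width - position_width,
             min_dist_neg = half_width + position_width)
}
min_dist_lateral(position_width = c(-0.2, 0, 0.3), tunnel_width = 1)
```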

```{r motive_min_dist_pos, fig.height=4, fig.width=7}
motive_min_dist <-
motive_treatments %>%
calc_min_dist_v(simplify_output = FALSE)
## Display minimum distances to the positive lateral walls
## Viewpoint is from the end of the tunnel
motive_min_dist %>%
ggplot(aes(x = position_width, y = position_height)) +
geom_point(aes(color = min_dist_pos), size = 2, shape = 1) +
  coord_fixed() +
  ## Outline of the v-shaped tunnel walls
  geom_segment(aes(x = 0, y = -0.3855,
                   xend = 0.5869,
                   yend = 0.2014)) +
  geom_segment(aes(x = 0, y = -0.3855,
                   xend = -0.5869,
                   yend = 0.2014))
```

```{r flydra_min_dist_end, fig.height=4, fig.width=7}
flydra_min_dist <-
  flydra_treatments %>%
  calc_min_dist_box()
## Display minimum distances to the end walls
## Viewpoint is from above the tunnel
flydra_min_dist %>%
ggplot(aes(x = position_length, y = position_width)) +
geom_point(aes(color = min_dist_end), size = 2, shape = 1) +
coord_fixed() +
  ## Outline of the box-shaped tunnel walls (viewed from above)
  geom_segment(aes(x = -1, y = -0.5,
                   xend = 1,
                   yend = -0.5)) +
  geom_segment(aes(x = -1, y = 0.5,
                   xend = 1,
                   yend = 0.5))
```




### Now get visual angles

```{r motive_vis_angle_pos, fig.height=4, fig.width=7}
motive_vis_angle <-
motive_min_dist %>%
get_vis_angle()
## Visualize the angles produced from stimuli on the positive wall
## Viewpoint is from the end of the tunnel
motive_vis_angle %>%
ggplot(aes(x = position_width, y = position_height)) +
geom_point(aes(color = vis_angle_pos_deg), size = 2, shape = 1) +
coord_fixed() +
  geom_segment(aes(x = 0, y = -0.3855,
                   xend = 0.5869,
                   yend = 0.2014)) +
  geom_segment(aes(x = 0, y = -0.3855,
xend = -0.5869,
yend = 0.2014))
```
Notice larger visual angles as the subject approaches the positive wall.

```{r flydra_vis_angle_end, fig.height=4, fig.width=7}
flydra_vis_angle <-
flydra_min_dist %>%
get_vis_angle()
## Visualize the angles produced by stimuli on the end walls
## Viewpoint is from above the tunnel
flydra_vis_angle %>%
ggplot(aes(x = position_length, y = position_width)) +
geom_point(aes(color = vis_angle_end_deg), size = 2, shape = 1) +
coord_fixed() +
  geom_segment(aes(x = -1, y = -0.5,
                   xend = 1,
                   yend = -0.5)) +
  geom_segment(aes(x = -1, y = 0.5,
xend = 1,
yend = 0.5))
```


Notice larger visual angles as the subject approaches the end wall that it is moving towards.


## Calculating spatial frequency
With visual angles, we can now determine the spatial frequency of a visual pattern as it is perceived by the subject. Spatial frequency refers to the size of the pattern in visual space. It's often reported as the number of cycles of a visual pattern per 1˚ of the visual field (cycles/degree). Here, we will define a cycle length as the length used for the `stim_param`. For a given distance from the subject, a larger visual pattern produces a smaller spatial frequency, whereas a smaller visual pattern produces a larger spatial frequency.
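
Concretely, since one cycle spans the `stim_param` length, the spatial frequency at a given distance is simply the reciprocal of the visual angle (in degrees) that one cycle subtends. A quick sketch building on the illustrative `visual_angle_deg()` helper from above (not `pathviewR`'s `get_sf()`):

```{r}
## Spatial frequency (cycles/degree) when one cycle spans stim_param (m),
## using the illustrative visual_angle_deg() helper defined earlier
spatial_freq <- function(stim_param, dist) {
  1 / visual_angle_deg(stim_param, dist)
}

## A 0.1m cycle: spatial frequency falls as the subject draws closer
spatial_freq(stim_param = 0.1, dist = c(2, 1, 0.25))
## approx 0.35, 0.17, 0.04 cycles/degree
```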

To calculate the spatial frequency of the visual stimuli as perceived by the
subject some distance from the stimuli, we will use `get_sf()`.

```{r motive_sf_pos, fig.height=4, fig.width=7}
motive_sf <-
motive_vis_angle %>%
get_sf()
## Visualize the spatial frequency of the stimulus on the positive wall
## Viewpoint is from the end of the tunnel
motive_sf %>%
ggplot(aes(x = position_width, y = position_height)) +
geom_point(aes(color = sf_pos), size = 2, shape = 1) +
coord_fixed() +
  geom_segment(aes(x = 0, y = -0.3855,
                   xend = 0.5869,
                   yend = 0.2014)) +
  geom_segment(aes(x = 0, y = -0.3855,
xend = -0.5869,
yend = 0.2014))
```
Notice the spatial frequency increases the further the subject recedes from the positive wall.

```{r flydra_sf_end, fig.height=4, fig.width=7}
flydra_sf <-
flydra_vis_angle %>%
get_sf()
## Visualize the spatial frequency of the stimulus on the end walls
## Viewpoint is from above the tunnel
flydra_sf %>%
ggplot(aes(x = position_length, y = position_width)) +
geom_point(aes(color = sf_end), size = 2, shape = 1) +
coord_fixed() +
  geom_segment(aes(x = -1, y = -0.5,
                   xend = 1,
                   yend = -0.5)) +
  geom_segment(aes(x = -1, y = 0.5,
xend = 1,
yend = 0.5))
```
Notice the spatial frequency decreases as the subject approaches the end wall that it is moving towards.



### Stay tuned as additional features for image motion estimation are coming soon!


