From deffd9da937c586a7a8685c7fa17bd8542d7ed21 Mon Sep 17 00:00:00 2001 From: Vikram Baliga Date: Thu, 14 Jan 2021 11:06:21 -0800 Subject: [PATCH] update package description and readme --- DESCRIPTION | 6 ++-- README.Rmd | 30 ++++++++--------- README.md | 32 ++++++++++--------- codemeta.json | 4 +-- docs/articles/data-import-cleaning.html | 2 +- docs/articles/managing-frame-gaps.html | 2 +- .../articles/visual-perception-functions.html | 6 ++-- docs/index.html | 24 +++++++------- docs/pkgdown.yml | 2 +- docs/reference/pathviewR-package.html | 12 +++---- docs/reference/read_motive_csv.html | 4 +-- man/pathviewR-package.Rd | 6 ++-- vignettes/visual-perception-functions.Rmd | 2 +- 13 files changed, 67 insertions(+), 65 deletions(-) diff --git a/DESCRIPTION b/DESCRIPTION index e23c84d..b09e6ee 100644 --- a/DESCRIPTION +++ b/DESCRIPTION @@ -18,14 +18,14 @@ Authors@R: email = "epress12@gmail.com", comment = c(ORCID = "0000-0002-1944-3755")) ) -Description: Tools to import, clean, and visualize - animal movement data from motion capture systems such as Optitrack's +Description: Tools to import, clean, and visualize movement data, + particularly from motion capture systems such as Optitrack's Motive, the Straw Lab's Flydra, or from other sources. We provide functions to remove artifacts, standardize tunnel position and tunnel axes, select a region of interest, isolate specific trajectories, fill gaps in trajectory data, and calculate 3D and per-axis velocity. For experiments of visual guidance, we also provide functions that use - animal position to estimate perception of visual stimuli. + subject position to estimate perception of visual stimuli. Maintainer: Vikram B. 
Baliga License: GPL-3 Encoding: UTF-8 diff --git a/README.Rmd b/README.Rmd index 71c7d34..91fa4c6 100644 --- a/README.Rmd +++ b/README.Rmd @@ -22,15 +22,15 @@ knitr::opts_chunk$set( [![](https://badges.ropensci.org/409_status.svg)](https://github.com/ropensci/software-review/issues/409) -`pathviewR` offers tools to import, clean, and visualize animal movement data -from motion capture systems such as +`pathviewR` offers tools to import, clean, and visualize movement data, +particularly from motion capture systems such as [Optitrack's Motive](https://optitrack.com/software/motive/), the -[Straw Lab's Flydra](https://github.com/strawlab/flydra), -or other sources. We provide functions to remove artifacts, standardize -tunnel position and tunnel axes, select a region of interest, isolate specific -trajectories, fill gaps in trajectory data, and calculate 3D and per-axis -velocity. For experiments of visual guidance, we also provide functions that -use animal position to estimate perception of visual stimuli. +[Straw Lab's Flydra](https://github.com/strawlab/flydra), or other sources. We +provide functions to remove artifacts, standardize tunnel position and tunnel +axes, select a region of interest, isolate specific trajectories, fill gaps in +trajectory data, and calculate 3D and per-axis velocity. For experiments of +visual guidance, we also provide functions that use subject position to estimate +perception of visual stimuli. ## Installation @@ -43,7 +43,8 @@ devtools::install_github("vbaliga/pathviewR") ## Example #### Data import and cleaning via `pathviewR` -We'll also load two `tidyverse` packages for wrangling & plotting. +We'll also load two `tidyverse` packages for wrangling & plotting in this +readme. 
```{r package_loading, message=FALSE, warning=FALSE} library(pathviewR) @@ -55,7 +56,7 @@ library(magrittr) We will import and clean a sample data set from `.csv` files exported by [Optitrack's Motive](https://optitrack.com/software/motive/) software. For examples of how to import and clean other types of data, -[see the data import and cleaning vignette](https://vbaliga.github.io/pathviewR/articles/data-import-cleaning.html). +[see the Basics of data import and cleaning vignette](https://vbaliga.github.io/pathviewR/articles/data-import-cleaning.html). ```{r import_motive} ## Import the Motive example data included in @@ -72,7 +73,7 @@ motive_data <- Several functions to clean and wrangle data are available, and we have a suggested pipeline for how these steps should be handled. For this example, we will use one of two "all-in-one" functions: `clean_viewr()`. -[See the Data Import and Cleaning vignette](https://vbaliga.github.io/pathviewR/articles/data-import-cleaning.html) +[See the Basics of data import and cleaning vignette](https://vbaliga.github.io/pathviewR/articles/data-import-cleaning.html) for the full pipeline and the other "all-in-one" function. ```{r all_in_one, fig.height=3, fig.width=6, dpi=300} @@ -118,14 +119,14 @@ str(motive_allinone) An important aspect of how `pathviewR` defines trajectories is by managing gaps in the data. -[See the Managing Frame Gaps vignette](https://vbaliga.github.io/pathviewR/articles/managing-frame-gaps.html) +[See the vignette on Managing frame gaps](https://vbaliga.github.io/pathviewR/articles/managing-frame-gaps.html) for more information on trajectory definition and frame gaps. Now that the data is cleaned, `pathviewR` includes functions that estimate visual perceptions based on the distance between the subject/observer and visual stimuli on the walls of the experimental tunnel. 
For a complete description of these functions, -[see the Visual Perception Functions vignette](https://vbaliga.github.io/pathviewR/articles/visual-perception-functions.html). +[see the vignette on Estimating visual perceptions from tracking data](https://vbaliga.github.io/pathviewR/articles/visual-perception-functions.html). #### Add more info about experiments @@ -175,8 +176,7 @@ motive_V_sf <- Visualizing the calculations provides a more intuitive understanding of how these visual perceptions change as the subject moves throughout the tunnel. -Please [see the Visual Perception Functions vignette](https://vbaliga.github.io/pathviewR/articles/visual-perception-functions.html) -for more examples of visualizing calculations. +Please [see the vignette on Estimating visual perceptions from tracking data](https://vbaliga.github.io/pathviewR/articles/visual-perception-functions.html) for more examples of visualizing calculations. ```{r motive_V_sf_pos, fig.height=3, fig.width=6, dpi=300} ggplot(motive_V_sf, aes(x = position_width, y = position_height)) + diff --git a/README.md b/README.md index 5edbf24..b9ff50a 100644 --- a/README.md +++ b/README.md @@ -15,15 +15,15 @@ coverage](https://codecov.io/gh/vbaliga/pathviewR/graph/badge.svg)](https://code [![](https://badges.ropensci.org/409_status.svg)](https://github.com/ropensci/software-review/issues/409) -`pathviewR` offers tools to import, clean, and visualize animal movement -data from motion capture systems such as [Optitrack’s +`pathviewR` offers tools to import, clean, and visualize movement data, +particularly from motion capture systems such as [Optitrack’s Motive](https://optitrack.com/software/motive/), the [Straw Lab’s Flydra](https://github.com/strawlab/flydra), or other sources. We provide functions to remove artifacts, standardize tunnel position and tunnel axes, select a region of interest, isolate specific trajectories, fill gaps in trajectory data, and calculate 3D and per-axis velocity.
For experiments of visual guidance, we also provide functions that use -animal position to estimate perception of visual stimuli. +subject position to estimate perception of visual stimuli. ## Installation @@ -38,7 +38,8 @@ devtools::install_github("vbaliga/pathviewR") #### Data import and cleaning via `pathviewR` -We’ll also load two `tidyverse` packages for wrangling & plotting. +We’ll also load two `tidyverse` packages for wrangling & plotting in +this readme. ``` r library(pathviewR) @@ -49,7 +50,7 @@ library(magrittr) We will import and clean a sample data set from `.csv` files exported by [Optitrack’s Motive](https://optitrack.com/software/motive/) software. For examples of how to import and clean other types of data, [see the -data import and cleaning +Basics of data import and cleaning vignette](https://vbaliga.github.io/pathviewR/articles/data-import-cleaning.html). ``` r @@ -66,7 +67,7 @@ motive_data <- Several functions to clean and wrangle data are available, and we have a suggested pipeline for how these steps should be handled. For this example, we will use one of two “all-in-one” functions: `clean_viewr()`. -[See the Data Import and Cleaning +[See the Basics of data import and cleaning vignette](https://vbaliga.github.io/pathviewR/articles/data-import-cleaning.html) for the full pipeline and the other “all-in-one” function. @@ -139,7 +140,7 @@ str(motive_data) #> - attr(*, ".internal.selfref")= #> - attr(*, "pathviewR_steps")= chr "viewr" #> - attr(*, "file_id")= chr "pathviewR_motive_example_data.csv" -#> - attr(*, "file_mtime")= POSIXct[1:1], format: "2021-01-09 16:14:48" +#> - attr(*, "file_mtime")= POSIXct[1:1], format: "2021-01-14 11:04:23" #> - attr(*, "frame_rate")= num 100 #> - attr(*, "header")='data.frame': 11 obs. of 2 variables: #> ..$ metadata: chr [1:11] "Format Version" "Take Name" "Take Notes" "Capture Frame Rate" ... @@ -182,7 +183,7 @@ str(motive_allinone) #> $ end_length_sign : num [1:449] -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 ... 
#> $ direction : chr [1:449] "leftwards" "leftwards" "leftwards" "leftwards" ... #> - attr(*, "file_id")= chr "pathviewR_motive_example_data.csv" -#> - attr(*, "file_mtime")= POSIXct[1:1], format: "2021-01-09 16:14:48" +#> - attr(*, "file_mtime")= POSIXct[1:1], format: "2021-01-14 11:04:23" #> - attr(*, "frame_rate")= num 100 #> - attr(*, "header")='data.frame': 11 obs. of 2 variables: #> ..$ metadata: chr [1:11] "Format Version" "Take Name" "Take Notes" "Capture Frame Rate" ... @@ -213,16 +214,16 @@ str(motive_allinone) ``` An important aspect of how `pathviewR` defines trajectories is by -managing gaps in the data. [See the Managing Frame Gaps -vignette](https://vbaliga.github.io/pathviewR/articles/managing-frame-gaps.html) +managing gaps in the data. [See the vignette on Managing frame +gaps](https://vbaliga.github.io/pathviewR/articles/managing-frame-gaps.html) for more information on trajectory definition and frame gaps. Now that the data is cleaned, `pathviewR` includes functions that estimate visual perceptions based on the distance between the subject/observer and visual stimuli on the walls of the experimental -tunnel. For a complete description of these functions, [see the Visual -Perception Functions -vignette](https://vbaliga.github.io/pathviewR/articles/visual-perception-functions.html). +tunnel. For a complete description of these functions, [see the vignette +on Estimating visual perceptions from tracking +data](https://vbaliga.github.io/pathviewR/articles/visual-perception-functions.html). #### Add more info about experiments @@ -272,8 +273,9 @@ motive_V_sf <- Visualizing the calculations provides a more intuitive understanding of how these visual perceptions change as the subject moves throughout the -tunnel. Please [see the Visual Perception Functions -vignette](https://vbaliga.github.io/pathviewR/articles/visual-perception-functions.html) +tunnel.
Please [see the vignette on Estimating visual perceptions from +tracking +data](https://vbaliga.github.io/pathviewR/articles/visual-perception-functions.html) for more examples of visualizing calculations. ``` r diff --git a/codemeta.json b/codemeta.json index 828a79a..c7f690b 100644 --- a/codemeta.json +++ b/codemeta.json @@ -5,7 +5,7 @@ ], "@type": "SoftwareSourceCode", "identifier": "pathviewR", - "description": "Tools to import, clean, and visualize \n animal movement data from motion capture systems such as Optitrack's \n Motive, the Straw Lab's Flydra, or from other sources. We provide \n functions to remove artifacts, standardize tunnel position and tunnel \n axes, select a region of interest, isolate specific trajectories, fill\n gaps in trajectory data, and calculate 3D and per-axis velocity. For \n experiments of visual guidance, we also provide functions that use \n animal position to estimate perception of visual stimuli. ", + "description": "Tools to import, clean, and visualize movement data,\n particularly from motion capture systems such as Optitrack's \n Motive, the Straw Lab's Flydra, or from other sources. We provide \n functions to remove artifacts, standardize tunnel position and tunnel \n axes, select a region of interest, isolate specific trajectories, fill\n gaps in trajectory data, and calculate 3D and per-axis velocity. For \n experiments of visual guidance, we also provide functions that use \n subject position to estimate perception of visual stimuli. 
", "name": "pathviewR: Wrangle, Analyze, and Visualize Animal Movement Data", "codeRepository": "https://github.com/vbaliga/pathviewR", "issueTracker": "https://github.com/vbaliga/pathviewR/issues", @@ -258,7 +258,7 @@ } ], "readme": "https://github.com/vbaliga/pathviewR/blob/master/README.md", - "fileSize": "11559.219KB", + "fileSize": "11558.996KB", "contIntegration": "https://codecov.io/gh/vbaliga/pathviewR?branch=master", "developmentStatus": "https://www.repostatus.org/#active", "citation": [ diff --git a/docs/articles/data-import-cleaning.html b/docs/articles/data-import-cleaning.html index 8d4aa94..fda34a0 100644 --- a/docs/articles/data-import-cleaning.html +++ b/docs/articles/data-import-cleaning.html @@ -89,7 +89,7 @@

Basics of data import and cleaning in pathviewR

Vikram B. Baliga

-

2021-01-09

+

2021-01-14

Source: vignettes/data-import-cleaning.Rmd diff --git a/docs/articles/managing-frame-gaps.html b/docs/articles/managing-frame-gaps.html index 40545f4..67902a3 100644 --- a/docs/articles/managing-frame-gaps.html +++ b/docs/articles/managing-frame-gaps.html @@ -89,7 +89,7 @@

Managing frame gaps with pathviewR

Melissa S. Armstrong

-

2021-01-09

+

2021-01-14

Source: vignettes/managing-frame-gaps.Rmd diff --git a/docs/articles/visual-perception-functions.html b/docs/articles/visual-perception-functions.html index 951d651..b713f26 100644 --- a/docs/articles/visual-perception-functions.html +++ b/docs/articles/visual-perception-functions.html @@ -89,7 +89,7 @@

Estimating visual perceptions from tracking data

Eric R. Press

-

2021-01-09

+

2021-01-14

Source: vignettes/visual-perception-functions.Rmd @@ -103,11 +103,11 @@

Introduction

Studies of visually guided locomotion in birds, insects, or even humans often involve data gathered from motion capture technologies such as Optitrack’s Motive or the Straw Lab’s Flydra. For these experiments, it is important to understand how visual stimuli influence behaviour. While it is not possible to measure how subjects directly perceive visual stimuli, it is possible to use motion capture data to calculate estimates of stimulus properties as they are perceived by the subject. With the tools available in pathviewR, researchers in ethology or psychology can analyze both stimulus and response under natural locomotor conditions.

These estimates of visual perception inherently involve several assumptions, which are discussed below. We welcome suggestions and aim to address any assumptions that limit the accuracy of these estimates.

-

To bridge the gap between objective measures of subject position and estimates of subjective stimulus perception, we can begin by calculating the angle a visual pattern subtends on the subject’s retina - the visual angle (θ).

+

To bridge the gap between objective measures of subject position and estimates of subjective stimulus perception, we can begin by calculating the angle a visual pattern subtends on the subject’s retina - the visual angle (θ). Visual angles can be used to calculate aspects of image motion such as the rate of visual expansion (Dakin et al., 2016). For a detailed review of different forms of visual motion and how they’re processed by the brain, see Frost (2010).

Visual angles can be calculated provided there is information about the physical size of the pattern and its distance from the subject’s retina. Because researchers can control or measure the size of a pattern, and we can calculate the distance between the subject and pattern using motion capture data, we can further calculate the visual angle produced by patterns in the visual scene. Therefore, we first need to calculate the distance from the subject’s retina to the pattern.
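The geometry described above can be made explicit. As a sketch under simplifying assumptions (a flat pattern of length `stim_param` viewed at perpendicular distance d, with the retina approximated by the head's center; the symbols d and v are ours for illustration, not pathviewR's), the visual angle and its rate of change while approaching at speed v are:

$$\theta = 2\arctan\!\left(\frac{\mathrm{stim\_param}}{2d}\right),
\qquad
\frac{d\theta}{dt} = \frac{\mathrm{stim\_param}\cdot v}{d^{2} + \mathrm{stim\_param}^{2}/4}$$

Larger patterns at shorter distances thus give larger angles, and the expansion rate grows sharply as d shrinks, consistent with the looming literature cited above.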

Currently, we assume the subject’s gaze is directly frontal or lateral to the face, in effect estimating image properties at single points in the frontal and lateral fields of view, respectively. We currently calculate distances to the center of the subject’s head rather than the position of the retina - future versions of pathviewR will include features addressing these limitations, such as including head orientation information and eye position relative to the center of the subject’s head.

-Visual angles can be calculated using the size of a visual pattern (`stim_param`) and the distance to the pattern. Larger patterns at shorter distances produce larger visual angles. For a given distance to a grating pattern a constant visual angle is produced from a single line of sight while dot fields produce constant visual angles from many lines of sight

+Visual angles can be calculated using the size of a visual pattern (`stim_param`) and the distance to the pattern. Larger patterns at shorter distances produce larger visual angles. For a given distance to a grating pattern a constant visual angle is produced from a single line of sight while dot fields produce constant visual angles from many lines of sight

Visual angles can be calculated using the size of a visual pattern (stim_param) and the distance to the pattern. Larger patterns at shorter distances produce larger visual angles. For a given distance to a grating pattern a constant visual angle is produced from a single line of sight while dot fields produce constant visual angles from many lines of sight

diff --git a/docs/index.html b/docs/index.html index beba949..9d96d93 100644 --- a/docs/index.html +++ b/docs/index.html @@ -18,14 +18,14 @@ - + subject position to estimate perception of visual stimuli. "> -

pathviewR offers tools to import, clean, and visualize animal movement data from motion capture systems such as Optitrack’s Motive, the Straw Lab’s Flydra, or other sources. We provide functions to remove artifacts, standardize tunnel position and tunnel axes, select a region of interest, isolate specific trajectories, fill gaps in trajectory data, and calculate 3D and per-axis velocity. For experiments of visual guidance, we also provide functions that use animal position to estimate perception of visual stimuli.

+

pathviewR offers tools to import, clean, and visualize movement data, particularly from motion capture systems such as Optitrack’s Motive, the Straw Lab’s Flydra, or other sources. We provide functions to remove artifacts, standardize tunnel position and tunnel axes, select a region of interest, isolate specific trajectories, fill gaps in trajectory data, and calculate 3D and per-axis velocity. For experiments of visual guidance, we also provide functions that use subject position to estimate perception of visual stimuli.

Installation

@@ -114,12 +114,12 @@

Data import and cleaning via pathviewR

-

We’ll also load two tidyverse packages for wrangling & plotting.

+

We’ll also load two tidyverse packages for wrangling & plotting in this readme.

-

We will import and clean a sample data set from .csv files exported by Optitrack’s Motive software. For examples of how to import and clean other types of data, see the data import and cleaning vignette.

+

We will import and clean a sample data set from .csv files exported by Optitrack’s Motive software. For examples of how to import and clean other types of data, see the Basics of data import and cleaning vignette.

 ## Import the Motive example data included in 
 ## the package
@@ -129,7 +129,7 @@ 

system.file("extdata", "pathviewR_motive_example_data.csv", package = 'pathviewR') )

-

Several functions to clean and wrangle data are available, and we have a suggested pipeline for how these steps should be handled. For this example, we will use one of two “all-in-one” functions: clean_viewr(). See the Data Import and Cleaning vignette for the full pipeline and the other “all-in-one” function.

+

Several functions to clean and wrangle data are available, and we have a suggested pipeline for how these steps should be handled. For this example, we will use one of two “all-in-one” functions: clean_viewr(). See the Basics of data import and cleaning vignette for the full pipeline and the other “all-in-one” function.

 motive_allinone <-
   motive_data %>%
@@ -194,7 +194,7 @@ 

#> - attr(*, ".internal.selfref")=<externalptr> #> - attr(*, "pathviewR_steps")= chr "viewr" #> - attr(*, "file_id")= chr "pathviewR_motive_example_data.csv" -#> - attr(*, "file_mtime")= POSIXct[1:1], format: "2021-01-09 16:14:48" +#> - attr(*, "file_mtime")= POSIXct[1:1], format: "2021-01-14 11:04:23" #> - attr(*, "frame_rate")= num 100 #> - attr(*, "header")='data.frame': 11 obs. of 2 variables: #> ..$ metadata: chr [1:11] "Format Version" "Take Name" "Take Notes" "Capture Frame Rate" ... @@ -237,7 +237,7 @@

#> $ end_length_sign : num [1:449] -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 ... #> $ direction : chr [1:449] "leftwards" "leftwards" "leftwards" "leftwards" ... #> - attr(*, "file_id")= chr "pathviewR_motive_example_data.csv" -#> - attr(*, "file_mtime")= POSIXct[1:1], format: "2021-01-09 16:14:48" +#> - attr(*, "file_mtime")= POSIXct[1:1], format: "2021-01-14 11:04:23" #> - attr(*, "frame_rate")= num 100 #> - attr(*, "header")='data.frame': 11 obs. of 2 variables: #> ..$ metadata: chr [1:11] "Format Version" "Take Name" "Take Notes" "Capture Frame Rate" ... @@ -265,8 +265,8 @@

#> - attr(*, "max_frame_gap")= int [1:3] 1 1 2 #> - attr(*, "span")= num 0.95 #> - attr(*, "trajectories_removed")= int 5

-

An important aspect of how pathviewR defines trajectories is by managing gaps in the data. See the Managing Frame Gaps vignette for more information on trajectory definition and frame gaps.

-

Now that the data is cleaned, pathviewR includes functions that estimate visual perceptions based on the distance between the subject/observer and visual stimuli on the walls of the experimental tunnel. For a complete description of these functions, see the Visual Perception Functions vignette.

+

An important aspect of how pathviewR defines trajectories is by managing gaps in the data. See the vignette on Managing frame gaps for more information on trajectory definition and frame gaps.

+

Now that the data is cleaned, pathviewR includes functions that estimate visual perceptions based on the distance between the subject/observer and visual stimuli on the walls of the experimental tunnel. For a complete description of these functions, see the vignette on Estimating visual perceptions from tracking data.

@@ -299,7 +299,7 @@

calc_min_dist_v(simplify_output = TRUE) %>% get_vis_angle() %>% get_sf()

-

Visualizing the calculations provides an more intuitive understanding of how these visual perceptions change as the subject moves throughout the tunnel. Please see the Visual Perception Functions vignette for more examples of visualizing calculations.

+

Visualizing the calculations provides a more intuitive understanding of how these visual perceptions change as the subject moves throughout the tunnel. Please see the vignette on Estimating visual perceptions from tracking data for more examples of visualizing calculations.

 ggplot(motive_V_sf, aes(x = position_width, y = position_height)) +
   geom_point(aes(color = sf_pos), shape=1, size=3) +
diff --git a/docs/pkgdown.yml b/docs/pkgdown.yml
index 69b4ba8..3801cc1 100644
--- a/docs/pkgdown.yml
+++ b/docs/pkgdown.yml
@@ -5,7 +5,7 @@ articles:
   data-import-cleaning: data-import-cleaning.html
   managing-frame-gaps: managing-frame-gaps.html
   visual-perception-functions: visual-perception-functions.html
-last_built: 2021-01-10T00:30Z
+last_built: 2021-01-14T19:04Z
 urls:
   reference: https://vbaliga.github.io/pathviewR//reference
   article: https://vbaliga.github.io/pathviewR//articles
diff --git a/docs/reference/pathviewR-package.html b/docs/reference/pathviewR-package.html
index 3b266f2..4ff65b6 100644
--- a/docs/reference/pathviewR-package.html
+++ b/docs/reference/pathviewR-package.html
@@ -48,14 +48,14 @@
 
 
 
+    subject position to estimate perception of visual stimuli." />
 
 
 
@@ -143,14 +143,14 @@ 

pathviewR: Wrangle, Analyze, and Visualize Animal Movement Data

logo

-

Tools to import, clean, and visualize - animal movement data from motion capture systems such as Optitrack's +

Tools to import, clean, and visualize movement data, + particularly from motion capture systems such as Optitrack's Motive, the Straw Lab's Flydra, or from other sources. We provide functions to remove artifacts, standardize tunnel position and tunnel axes, select a region of interest, isolate specific trajectories, fill gaps in trajectory data, and calculate 3D and per-axis velocity. For experiments of visual guidance, we also provide functions that use - animal position to estimate perception of visual stimuli.

+ subject position to estimate perception of visual stimuli.

diff --git a/docs/reference/read_motive_csv.html b/docs/reference/read_motive_csv.html index 875c3e5..01d7959 100644 --- a/docs/reference/read_motive_csv.html +++ b/docs/reference/read_motive_csv.html @@ -351,7 +351,7 @@

Examp #> [919] 919 920 921 922 923 924 925 926 927 928 929 930 931 932 933 934 #> #> $.internal.selfref -#> <pointer: 0x7f8b6c80d4e0> +#> <pointer: 0x7fc13900d4e0> #> #> $class #> [1] "tbl_df" "tbl" "data.frame" @@ -363,7 +363,7 @@

Examp #> [1] "pathviewR_motive_example_data.csv" #> #> $file_mtime -#> [1] "2021-01-09 16:30:14 PST" +#> [1] "2021-01-14 11:04:34 PST" #> #> $frame_rate #> [1] 100 diff --git a/man/pathviewR-package.Rd b/man/pathviewR-package.Rd index 9522d4c..5623b38 100644 --- a/man/pathviewR-package.Rd +++ b/man/pathviewR-package.Rd @@ -8,14 +8,14 @@ \description{ \if{html}{\figure{logo.png}{options: align='right' alt='logo' width='120'}} -Tools to import, clean, and visualize - animal movement data from motion capture systems such as Optitrack's +Tools to import, clean, and visualize movement data, + particularly from motion capture systems such as Optitrack's Motive, the Straw Lab's Flydra, or from other sources. We provide functions to remove artifacts, standardize tunnel position and tunnel axes, select a region of interest, isolate specific trajectories, fill gaps in trajectory data, and calculate 3D and per-axis velocity. For experiments of visual guidance, we also provide functions that use - animal position to estimate perception of visual stimuli. + subject position to estimate perception of visual stimuli. } \seealso{ Useful links: diff --git a/vignettes/visual-perception-functions.Rmd b/vignettes/visual-perception-functions.Rmd index 92dac14..9be66df 100644 --- a/vignettes/visual-perception-functions.Rmd +++ b/vignettes/visual-perception-functions.Rmd @@ -31,7 +31,7 @@ knitr::opts_chunk$set( ```{r, echo=FALSE, out.width="100%", fig.cap="Visual angles can be calculated using the size of a visual pattern (`stim_param`) and the distance to the pattern. Larger patterns at shorter distances produce larger visual angles. 
For a given distance to a grating pattern a constant visual angle is produced from a single line of sight while dot fields produce constant visual angles from many lines of sight"} -knitr::include_graphics("https://github.com/vbaliga/pathviewR/raw/master/images/stim_param_angle.jpeg") +knitr::include_graphics("https://raw.githubusercontent.com/vbaliga/pathviewR/master/images/stim_param_angle.jpeg") ```