diff --git a/_quarto.yml b/_quarto.yml
index 44a4177..8c380c5 100644
--- a/_quarto.yml
+++ b/_quarto.yml
@@ -35,6 +35,8 @@ website:
         text: Software
       - href: teaching/index.qmd
         text: Teaching
+      - href: teaching/courses/2017_lsa/index.qmd
+        text: "LSA 2017 Course"
     right:
       - href: https://jofrhwld.github.io/blog/
         text: Blog
@@ -88,7 +90,11 @@ website:
           href: "research/#2008"
         - text: "2007"
           href: "research/#2007"
-
+    - title: "LSA 2017 Course"
+      contents:
+        - teaching/courses/2017_lsa/index.qmd
+        - auto: teaching/courses/2017_lsa/lectures/
+
 format:
   html:
     theme:
@@ -101,6 +107,7 @@ format:
       - styles/dark.scss
       - styles/styles.scss
     toc: true
-
+    smooth-scroll: true
+
 editor: visual
diff --git a/teaching/courses/2017_lsa/index.qmd b/teaching/courses/2017_lsa/index.qmd
index 6d26a3a..d506454 100644
--- a/teaching/courses/2017_lsa/index.qmd
+++ b/teaching/courses/2017_lsa/index.qmd
@@ -1,5 +1,9 @@
 ---
 title: "LSA 2017 Statistical Modelling with R"
+listing:
+  contents: lectures
+  type: table
+  fields: [image, order, title, reading-time]
 ---
 
 - [Meeting 1: Introduction to R](lectures/Session_1.nb.html)
diff --git a/teaching/courses/2017_lsa/lectures/Session_1.nb.html b/teaching/courses/2017_lsa/lectures/Session_1.nb.html
deleted file mode 100644
index b3e1610..0000000
--- a/teaching/courses/2017_lsa/lectures/Session_1.nb.html
+++ /dev/null
@@ -1,1202 +0,0 @@
-Welcome to Statistical Modelling with R. If there is one thing to remember from this course, it is that your analysis workflow should look something like this:
-These are some of the core areas I figure are necessary to getting good at statistical modelling in R:
-These are all skills you can achieve through practice, experience, and occasional guidance from someone more skilled than you. It is exactly like acquiring any other skill or craft. At first it will be confusing, you’ll make some mistakes, and it won’t look so good. I think knitting is a good comparison.
-The first hat I ever knit:
-The most recent hat I knit:
-The way I improved my knitting is exactly the same as how you can improve your R programming ability:
-Most of the content of the course is devoted to core R programming (things you should be memorizing or remembering where to find help), but I’ll try my best to annotate portions of the notes which correspond to workspace hygiene, being idiomatic, etc, so that you can distinguish between them.
-The course will follow the workflow outlined at the beginning: begin → summarize → visualize → analyze.
| Week | Monday | Thursday |
|------|--------|----------|
| 1 | – | Intro - Basics & R Notebooks |
| 2 | Data Frames & Factors | Split-Apply-Combine, Reshaping |
| 3 | ggplot2 | Fitting Linear Models |
| 4 | map functions & fitting many models | Mixed Effects Linear models |
| 5 | Bootstraps & Visualization | – |
Workspace Hygiene
-If you have a directory planning structure that you’re happy with, go ahead and do that. But if how to organize your R analysis life is something you’d like to get out of this course, I’d recommend the following directory structure & naming conventions.
-├── lsa_2017
-│ └── r_modelling*
-│ ├── assignments
-│ ├── data
-│ └── lectures
-
-The r_modelling directory will be the home directory for the course. I would recommend creating a new R Notebook for each lecture (more on that in a moment) and giving them a naming convention like:
-01_lecture.Rmd
-02_lecture.Rmd
-Right now, resist the impulse to create any folders or file names with spaces in them.
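If you like, you can also create this directory structure from the R console. Here is a minimal sketch, assuming you want the course folder in your home directory (adjust the path to wherever you actually keep your work):
dir.create("~/lsa_2017/r_modelling/assignments", recursive = TRUE)  # recursive = TRUE also creates the parent folders
dir.create("~/lsa_2017/r_modelling/data", recursive = TRUE)
dir.create("~/lsa_2017/r_modelling/lectures", recursive = TRUE)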
-We’re going to be using R, RStudio, and R Notebooks in this course, and it’s a little important to keep straight what these three things are:
-R is a programming language that runs on your computer. At its barest bones, it looks like this:
-You can type text into the prompt there, and if you’ve successfully memorized the right R commands, it’ll do some things.
-RStudio is like an Instagram filter on top of R that makes your experience of using R better. It visually organizes some important components of using R into panes, and offers code completion suggestions. For example, if you remember there’s something called a “Wilcoxon test”, but you don’t remember what the function in R is, you can start typing in Wilc, and this will happen:
RStudio’s autocompletion is really useful for a lot of other things, like reminding you what the column names are in your data frame, what the names of all the arguments to a function are, etc.
-But perhaps the most valuable components in RStudio these days are its authoring tools, like R Notebooks.
-R Notebooks allow you to document your code in plain text, insert R Code chunks, and view the results of the R code all in one place, then compile it into a nice looking notebook.
-~5 Minute Activity
-Goals
-Create a new RStudio Project, either by using the menu options File > New Project or by clicking on the icon in the top right corner of the RStudio window. If you created the directory structure above, choose Existing Directory and select r_modelling. Otherwise, select New Directory, then Empty Project, and tell it the project name is r_modelling.
Open a new R Notebook using the menu command File > New File > R Notebook
. If this is the first time you’ve opened an R Notebook on your computer, you’ll probably be faced with the following prompt:
Click “Yes”, and wait for the installation to finish. A window with a bunch of gobbledygook will pop up, and that’s fine. Once it’s all finished, the new file should open.
-First, run the R code chunk that comes automatically in a new R Notebook by clicking on the green “play” button in the top right corner of the code chunk.
-Next, insert a new R code chunk at the bottom of the notebook (directions for how to do so are already included in the new R Notebook), and inside, enter:
-"Hello World"
-Then run this code chunk by clicking the play button.
-Click the “Preview” button at the top of the R Notebook panel to compile it into an HTML document. You will need to save the notebook first. In the lectures
folder, save it as 00_practice.Rmd
I’m going to recommend (for now at least) that you run all of your code through an R Notebook. It is possible to just type things into the R console, but that’s kind of like dictating a paper into thin air. Once you’ve spoken the words, they disappear and can be hard to recover.
-My earlier advice would have been to write all of your code in an R script file, but that also separates the code from its results, which can be hard for beginners to keep track of.
-R comes with a lot of functionality installed, but one way that R is extensible is through users’ ability to contribute new code & data through its package management system. We’re going to be using a number of these packages in the course, especially since a few of them have fundamentally changed the way R programming works in the past 3 years. There’s also a course R package I’ve created to easily distribute sample datasets.
-Here’s a basic diagram of how R packages work:
-install.packages()
Most R packages are distributed through CRAN (the Comprehensive R Archive Network). When you run the function install.packages("x"), R checks whether the package "x" exists on CRAN, and installs it on your computer if it does. You may be asked to choose a “CRAN mirror” the first time you run install.packages(). This is because there are many copies of CRAN distributed across the internet. I’d recommend choosing the first option, called 0-Cloud.
install_github()
As a package developer, getting a package onto CRAN can be a bit of a pain, so some packages (and development versions of many) are also available on GitHub, which can be easily installed with devtools::install_github("username/package").
Installing a package is different from loading a package. Installing a package only downloads and configures the code on your computer. In order to use the contents of a package, you need to load it into your R session with library().
You only need to run install.packages() once to install a package, or to update it.
You need to run library() at the start of every new R session in order to use the functionality from that package.
For example, ggplot() is a function from the package ggplot2. I have already installed ggplot2 on my computer, but if I try to use ggplot() before loading the package with library(), I’ll get the error that the function was not found.
foo <- ggplot()
-
-
-Error in ggplot() : could not find function "ggplot"
-
-
-
-
-
-
-library("ggplot2")
-foo <- ggplot()
-
-
-
-~2 Minute Activity
-Let’s install all of the packages we’re going to use in the course. Double check that you’re connected to the internet.
-Create a notebook for this lecture called 01_lecture.Rmd
. Copy-paste the following into an R code chunk and run it:
install.packages(
- c("tidyverse",
- "devtools")
-)
-
-library("devtools")
-
-install_github("jofrhwld/lsa2017")
-
-
-
-We’re now going to run through some very basics of R, specifically:
-Create a new R Notebook. Change the Title
field to Intro to R
, and save it as 0_lecture.Rmd
in the folder lectures
.
As we come to a code chunk in the lecture, either copy-paste or re-type it into a new code chunk in your lecture R notebook, and run it.
-One way to think of R is as an overblown calculator.
- - - -3+3
-
-
-[1] 6
-
-
-2*4
-
-
-[1] 8
-
-
-(369-1)/6
-
-
-[1] 61.33333
-
-
-
-But it’s not all that useful to do a bunch of calculations without saving the results for later, which is where assignment comes in.
-You can assign values to variables using the assignment operator: <-
or ->
(but in practice, only use <-
).
variable <- value
-x <- 10
-y <- 2*3
-
-
-
-Once you’ve assigned a value to a variable, you can reuse the value stored in that variable for other purposes, like just printing it out again
- - - -x
-
-
-[1] 10
-
-
-y
-
-
-[1] 6
-
-
-
-Or adding the two values together
- - - -x + y
-
-
-[1] 16
-
-
-
-In short, you can use these variables x
and y
like they are the values you assigned to them. If this is your first time programming, here are a few things to clarify:
Note
-x
and y
didn’t exist before you created them by assigning values to them.Idiom
-x
and y
are lousy names for variables. When it comes to naming variables, there’s a famous saying:
--“There are only two hard things in Computer Science: cache invalidation and naming things.” — Phil Karlton
-
For best practices on naming variables, I’ll refer you to the tidyverse style guide by Hadley Wickham. To briefly summarize it:
-_
to separate words in a a variable name.Also, be guided by The Principle of Least Effort. Use the minimal ammount of characters that are still clearly interpretable.
- - - -# Good Names
-model_1
-model_full
-
-
-# Bad Names
-the_first_model_I_ever_fit
-just_trying_out_a_model_with_all_predictors
-m_01
-m_agdf
-
-
-
-Also, just use good judgment. There is nothing in R preventing you from doing stuff like this to yourself.
- - - -five <- 10
-ten <- 5
-
-yellow <- "green"
-
-
-
Another thing to keep in mind is that R can’t handle any characters in numeric values other than the digits 0 through 9 and a decimal point. All of these will fail:
# no commas
-thousand <- 1,000
-
-
-Error: unexpected ',' in "thousand <- 1,"
-
-
-
-
-
-
-# no spaces
-thousand <- 1 000
-
-
-Error: unexpected numeric constant in "thousand <- 1 000"
-
-
-
-
-
-
-# like this
-thousand <- 1000
-
-
-
-
-
-
-# no currencies
-dollars <- $1000
-
-
-Error: unexpected '$' in "dollars <- $"
-
-
-
-
-
-
-# no percentages
-percent <- 51%
-
-
-Error: unexpected input in "percent <- 51%"
-
-
-
-In addition to numbers, other basic data types in R are character and logical.
- - - -# character data
-digital_words <- c("fam",
- "Harambe",
- "tweetstorm",
- "@")
-
-
-
-
-
-
-# logical values
-TRUE
-
-
-[1] TRUE
-
-
-# a logical test
-(10/2) < 3
-
-
-[1] FALSE
-
-
-
-When you enter characters without quotes around them, R assumes you’re referring to a variable. If you tried to do the assignment above without the quotes, you’ll get an error.
- - - -digital_words_fail <- c(fam,
- Harambe,
- tweetstorm)
-
-
-Error: object 'fam' not found
-
-
-
-Here, R saw fam
, which isn’t in quotes, searched the environment for any variables named fam
and couldn’t find any.
When you put characters in quotes, R assumes it’s a character value, even if there’s a variable by the same name.
- - - -digital_words
-
-
-[1] "fam" "Harambe" "tweetstorm" "@"
-
-
-"digital_words"
-
-
-[1] "digital_words"
-
-
-
-Vectors are essentially lists of data, and can contain characters, numbers, or TRUE FALSE values. There are a number of ways to create vectors in R, and frequently doing data manipulation will produce subvectors of data.
-1:10
-c(...)
-c()
.
-c(1,2,3,4)
seq(from,to,...)
-seq(1,10,by=0.5)
seq(1,10,length=11)
rep(x,...)
-rep(1,6)
rep(1:3,2)
rep("hello world",4)
A pretty cool and unique feature of R is how you can do arithmetic with vectors. For example, let’s say you’ve interviewed a bunch of speakers of the following ages
- - - -ages <- c(18, 35, 41, 62)
-
-
-
-If you wanted to know the year of birth of these speakers, it’s as easy as:
- - - -2017 - ages
-
-
-[1] 1999 1982 1976 1955
-
-
-
-R has taken each value in ages
, and subtracted it from 2017
, and created a new vector with the results.
Or, if you wanted to know in which year these speakers turned 17, it’s as easy as:
- - - -(2017 - ages) + 17
-
-
-[1] 2016 1999 1993 1972
-
-
-
-Or, let’s say these speakers weren’t all interviewed the same year. Half were interviewed in the 90s, and half in the 2000s.
- - - -interview_year <- c(1995, 1996, 2003, 2004)
-
-
-
-Getting each speaker’s date of birth is as simple as:
- - - -interview_year - ages
-
-
-[1] 1977 1961 1962 1942
-
-
-
-This worked because the two vectors, interview_year
and ages
were the same length. R took the first values of age
and subtracted it from the first value of interview_year
, the second value of age
and subtracted it from the second value of interview_year
, etc., creating a new vector of the result. You could easily assign this output to a new variable.
dob <- interview_year - ages
-
-
-
-Of course, if you now wanted to know what year these speakers turned 17, you could do it like so:
- - - -(interview_year - ages) + 17
-
-
-[1] 1994 1978 1979 1959
-
-
-
-~5 Minute Activity
-A Starbucks Grande filter coffee in the UK currently costs £1.85. The value of £1 before the Brexit vote was about $1.49. After the vote, it dropped down to about $1.31, and lately it’s been closer to $1.27.
-Using vector arithmetic as much as possible, find out how the value in dollars of my coffee has changed.
-If you have a bunch of values stored in a vecor, and you want to pull out specific ones, you can do so by indexing it with square brackets []
.
Let’s start by indexing by position.
-vector[position]
-R has some built in vectors for you to use, like one called letters
. We haven’t defined letters
, and it’s not listed as being in your R environment, but it’s there.
letters
-
-
- [1] "a" "b" "c" "d" "e" "f" "g" "h" "i" "j" "k" "l" "m" "n" "o" "p" "q" "r"
-[19] "s" "t" "u" "v" "w" "x" "y" "z"
-
-
-
-The first value in a vector has index 1
, the second index 2
, and so on. If you’ve forgotten what the 19th letter of the alphabet is, you can find it out like so:
letters[19]
-
-
-[1] "s"
-
-
-
-If instead of just one number, you use another vector to index letters
, you’ll get back out another vector.
yes <- c(25, 5, 19)
-letters[yes]
-
-
-[1] "y" "e" "s"
-
-
-abba <- c(1, 2, 2, 1)
-letters[abba]
-
-
-[1] "a" "b" "b" "a"
-
-
-
-You can also index by logical values.
-vector[true false vector]
-Let’s come back to our vector of speaker’s ages
- - - -ages
-
-
-[1] 18 35 41 62
-
-
-
-If we make another vector of TRUE
and FALSE
values of the same length, we can use it to index ages
.
logical_vec <- c(T, F, T, F)
-ages[logical_vec]
-
-
-[1] 18 41
-
-
-
-You only get back values where the index vector was TRUE
.
Of course, what you’ll usually do is generate a vectore of TRUE
and FALSE
values by using a logical operator.
ages > 40
-ages[ages > 40]
-
-
-
-~2 Minute Activity
-Let’s assume our speakers had the following names:
- - - -speaker_names <- c("Charlie", "Skyler", "Sawyer", "Jamie")
-
-
-
-Using logical indexing and the ages in ages
and year of interview in interview_year
(or just dob
, if you assigned anything to that variable), find out which speakers were born after 1960.
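Here is a sketch of one way to do it (it assumes you assigned dob as in the earlier example):
dob <- interview_year - ages    # 1977 1961 1962 1942
speaker_names[dob > 1960]       # "Charlie" "Skyler" "Sawyer"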
The following operators will return a vector of TRUE
and FALSE
values.
| Operator | Meaning |
|----------|---------|
| == | exactly equal to |
| != | not equal to |
| > | greater than |
| < | less than |
| >= | greater than or equal to |
| <= | less than or equal to |
You can use these to compare vectors to single values, as we’ve seen, but you can also compare vectors to vectors if they are the same length. Comparison is done elementwise.
- - - -group_a <- c(20, 10, 13, 60)
-group_b <- c(11, 31, 2, 9)
-group_a < group_b
-
-
-[1] FALSE TRUE FALSE FALSE
-
-
-
-There are three more operators that have an effect on TRUE
and FALSE
vectors.
Operator | -Meaning | -
---|---|
! |
-not x changes all T to F and F to T |
-
| |
-x or y | -
& |
-x and y | -
x <- c(T, T, F, F)
-y <- c(T, F, T, F)
-
-
-
-
-
-
-cbind(
- x = x,
- y = y,
- and = x&y,
- or = x|y
-)
-
-
- x y and or
-[1,] TRUE TRUE TRUE TRUE
-[2,] TRUE FALSE FALSE TRUE
-[3,] FALSE TRUE FALSE TRUE
-[4,] FALSE FALSE FALSE FALSE
-
-
-
-%in%
This gets its own heading because it’s so useful, and you’ll use it a lot. If you say a %in% b
, R checks every value in a
to see if it’s in b
.
value %in% vector
-# Was Sage in our study?
-"Sage" %in% speaker_names
-
-
-[1] FALSE
-
-
-
-
-
-
-# Was Schuyler in our study?
-"Schuyler" %in% speaker_names
-
-
-[1] FALSE
-
-
-# Yes, but not spelled that way.
-"Skyler" %in% speaker_names
-
-
-[1] TRUE
-
-
-
-The first item can also be a vector.
- - - -# How about all of these people?
-check_names <- c("Oakley", "Charlie", "Azaria", "Landry", "Skyler", "Justice")
-check_names %in% speaker_names
-
-
-[1] FALSE TRUE FALSE FALSE TRUE FALSE
-
-
-check_names[check_names %in% speaker_names]
-
-
-[1] "Charlie" "Skyler"
-
-
-check_names[!(check_names %in% speaker_names)]
-
-
-[1] "Oakley" "Azaria" "Landry" "Justice"
-
-
-
-~2 minute setup
-Make sure that your current RStudio project is set to your course project. Create and save your R notebook for today (I would recommend 02_lecture.Rmd
). Clear the workspace of anything left over from last time with the menu options Session > Clear Workspace
.
Load the important packages for today’s work:
- - - -library(lsa2017)
-library(tidyverse)
-
-
-
-When collecting data in the first place, over-collect if at all possible or ethical. The world is a very complex place, so there is no way you could cram it all into a bottle, but give it your best shot! If during the course of your data analysis, you find that it would have been really useful to have data on, say, duration, as well as formant frequencies, it becomes costly to recollect that data, especially if you haven’t laid the proper trail for yourself. On the other hand, automation of acoustic analysis or data processing can cut down on this costliness.
-This doesn’t go for personal information on human subjects, though. It’s important from an ethics standpoint to ask for everything you’ll need, but not more. You don’t want to collect an enormous demographic profile on your participants if you won’t wind up using it, especially if you know you won’t use it to begin with.
-If, for instance, you’re collecting data on the effect of voicing on preceding vowel duration, preserve high dimensional data coding, like Lexical Item, or the transcription of the following segment. These high dimensional codings probably won’t be too useful for your immediate analysis, but they will allow you to procedurally extract additional features from them at a later time. For example, if you have a column called fol_seg
, which is just a transcription of the following segment, it is easy to create a new column called manner
with code that looks like this:
table(iy_ah$fol_seg)
-
-
-
- AA0 AA1 AE1 AH0 AH1 AO1 AY1 B CH D DH EH1 ER0 F G
- 2 1 1 371 2 1 36 1588 1201 1920 507 2 5 124 140
- HH IH0 IH1 IY1 JH K L M N NG OW0 OW2 P R S
- 10 255 1 4 126 3156 2888 1589 5397 26 1 2 4963 1 1479
- SH SP T TH V W Y Z ZH
- 217 107 12690 96 2167 13 4 3693 32
-
-
-
-
-
-
-iy_ah <- iy_ah %>%
- mutate(manner = recode(fol_seg, B = 'stop',
- CH = 'affricate',
- D = 'stop',
- DH = 'fricative',
- `F` = 'fricative',
- G = 'stop',
- HH = 'fricative',
- JH = 'affricate',
- K = 'stop',
- L = 'liquid',
- M = 'nasal',
- N = 'nasal',
- NG = 'nasal',
- P = 'stop',
- R = 'liquid',
- S = 'fricative',
- SH = 'fricative',
- SP = 'pause',
- `T` = 'stop',
- TH = 'fricative',
- V = 'fricative',
- W = 'glide',
- Y = 'glide',
- Z = 'fricative',
- ZH = 'fricative',
- .default = 'vowel'))
-table(iy_ah$manner)
-
-
-
-affricate fricative glide liquid nasal pause stop vowel
- 1327 8325 17 2889 7012 107 24457 684
-
-
-
-Be sure to answer this question: How can I preserve a record of this observation in such a way that I can quickly return to it and gather more data on it if necessary? If you fail to successfully answer this question, then you’ll be lost in the woods if you ever want to restudy, and the only way home is to replicate the study from scratch.
-Give meaningful names to both the names of predictor columns, as well as to labels of nominal observations. Keeping a readme describing the data is still a good idea, but at least now the data is approachable at first glance.
-0
and NA
I have worked with some spreadsheets where missing data was given a value of 0
, which will mess things up. For example, /oy/ is a fairly rarely occurring phoneme in English, and it’s possible that a speaker won’t produce any tokens in a short interview. In a spreadsheet of mean F1 and F2 for all vowels, that speaker should get an NA
for /oy/, not 0
.
When we store data, it should be:
-Raw Raw data is the most useful data. It’s impossible to move down to smaller granularity from a coarser, summarized granularity. Summary tables etc. are nice for publishing in a paper document, but raw data is what we need for asking novel research questions with old data.
Open formatted Do not use proprietary database software for long term storage of your data. I have enough heard stories about interesting data sets that are no longer accessible for research either because the software they are stored in is defunct, or current versions are not backwards compatible. At that point, your data is property of Microsoft, or whoever. Store your data as raw text, delimited in some way (I prefer tabs).
Consistent I think this is most important when you may have data in many separate files. Each file and its headers should be consistently named and formatted. They should be consistently delimited and commented also. There is nothing worse than inconsistent headers and erratic comments, labels, headers or NA characters in a corpus. (Automation also helps here.)
Documented Produce a readme describing the data, how it was collected and processed, and describe every variable and its possible values.
Let’s start off by looking at a picture of a data organization approach that might look familiar, and is a very bad way to do things:
-This spreadsheet has a fairly strict organizational structure, but is virtuously hopeless for doing any kind of serious statistical analysis. It’s also verging on irreparable using R. This because the data in this spreadsheet is organized to be easy to look at with your eyeballs 👀.
-But looking at neatly organized data in a spreadsheet is not a statistical analysis technique. So we need to start organizing our data in a way that isn’t easy to look at, but is easy to graph and analyze.
-Everyone working with data (in R or otherwise) should read Hadley Wickham’s paper on Tidy Data: https://cran.r-project.org/web/packages/tidyr/vignettes/tidy-data.html If you are coming off of organizing your data like the picture above, there are a few guidelines not discussed in that paper, namely:
-In the semantics of data structure Wickham lays out, there are three important primitives:
-Variables are the collections of values of interest in data analysis. For example, let’s say you were doing a study on unnormalized vowel space size by just looking at /i:/ and /ɑ/. The variables in that study could be:
-speaker
word
phoneme
duration
F1
F2
word_frequency
Values are, as the name implies, the possible values that each variable can have, for example:
-speaker
: "Oakley"
, "Charlie"
, "Azaria"
, ...
word
: "street"
, "thirteen"
, "not"
, "got"
, ...
phoneme
: "iy"
, "ah"
An observation is the minimal unit across which all variables are collected. For example, in the vowel space study, one observation would be one instance of an uttered vowel for which you record who the speaker was, the word, the duration, F1, F2, etc.
-Once you’ve thought through what the variables, values and observations are for your study, the principle of how to organize them is simple:
-For the vowel space size study, you might want to wind up with a plot that looks like this:
- - - - - - - -It wouldn’t be uncommon to see the data untidily organized like this:
- - - -~5 Minute Activity
-In small groups, figure out the following:
-So far we have discussed the following types of values in R:
-And we’ve discussed the following data structures.
-Here, we’ll cover one new data structure:
-Data Frames are the data structure we’ll be using the most in R. When you begin thinking about data frames, a useful starting place is to think of them as spreadsheets, with columns and rows (but we’ll eventually abandon spreadsheet thinking). Let’s start out by creating a very simple data frame using the data.frame()
function.
pitch <- data.frame(speaker_names = c("Charlie", "Skyler", "Sawyer", "Jamie"),
- ages = c(18, 35, 41, 62),
- F0 = c(114, 189, 189, 199))
- pitch
-
-
-The pitch
data frame has four rows, and three columns. The rows are just numbered 1 through 4, and the three columns are named speaker_names
, ages
and F0
. To find out how many rows and columns a data frame has, you can use the nrow()
and ncol()
functions.
nrow(pitch)
-
-
-[1] 4
-
-
- ncol(pitch)
-
-
-[1] 3
-
-
-
-Most data frames you’re going to work with have a lot more rows than that. For example, iy_ah
is a data frame that is bundled in the lsa2017
package.
nrow(iy_ah)
-
-
-[1] 44818
-
-
-
-That’s too many rows to look at just in the console. One option is to use the head()
function, that just prints the first 6 rows.
head(iy_ah)
-
-
-Another option is to use the summary()
function.
summary(iy_ah)
-
-
- idstring age sex year years_of_schooling
- Length:44818 Min. :18.00 Length:44818 Min. :1973 Length:44818
- Class :character 1st Qu.:30.00 Class :character 1st Qu.:1980 Class :character
- Mode :character Median :45.00 Mode :character Median :1985 Mode :character
- Mean :46.69 Mean :1989
- 3rd Qu.:65.00 3rd Qu.:2002
- Max. :93.00 Max. :2010
- vowel word F1 F2 dur
- Length:44818 Length:44818 Min. : 186.6 Min. : 598.6 Min. :0.0500
- Class :character Class :character 1st Qu.: 398.3 1st Qu.:1355.1 1st Qu.:0.0800
- Mode :character Mode :character Median : 528.6 Median :1879.5 Median :0.1100
- Mean : 570.6 Mean :1889.1 Mean :0.1214
- 3rd Qu.: 732.5 3rd Qu.:2386.7 3rd Qu.:0.1500
- Max. :1428.9 Max. :3690.4 Max. :0.8700
- plt_vclass pre_seg fol_seg context
- Length:44818 Length:44818 Length:44818 Length:44818
- Class :character Class :character Class :character Class :character
- Mode :character Mode :character Mode :character Mode :character
-
-
-
- word_trans F1_n F2_n manner
- Length:44818 Min. :-3.0055 Min. :-2.4533 Length:44818
- Class :character 1st Qu.:-1.4855 1st Qu.:-0.7021 Class :character
- Mode :character Median :-0.7261 Median : 0.8525 Mode :character
- Mean :-0.2476 Mean : 0.5639
- 3rd Qu.: 1.0521 3rd Qu.: 1.7315
- Max. : 4.3707 Max. : 4.8674
-
-
-
-summary()
is a function that works on almost every kind of object.
Since data frames are 2 dimensional (rows are one dimension, columns are another), the way you index them is a little bit more complicated than with vectors. It still uses square brackets, though, but these square brackets have two positions:
-df[row number, column number]
-If you specify a specific row number, but leave the column number blank, you’ll get back that row and all columns.
- - - - pitch[1,]
-
-
-Alternatively, if you specify just the column number, but leave the rows blank, you’ll get back all of the values for that column.
- - - - pitch[,2]
-
-
-[1] 18 35 41 62
-
-
-
-When you specify both, you get back the value in the specified row and column
- - - - pitch[1,2]
-
-
-[1] 18
-
-
-
-However, there is a special indexing operator for data frames that take advantage of their named columns: $
.
df$column_name
- pitch$speaker_names
-
-
-[1] Charlie Skyler Sawyer Jamie
-Levels: Charlie Jamie Sawyer Skyler
-
-
-
-After accessing the column of a data frame, you can index it just like it’s a vector.
- - - - pitch$speaker_names[1]
-
-
-[1] Charlie
-Levels: Charlie Jamie Sawyer Skyler
-
-
-
-If you really want to, you can do logical indexing of data frames like so:
- - - - pitch[pitch$speaker_names == "Charlie", ]
-
-
-But there’s also a function called filter()
that you can use to do the same thing. filter()
takes a data frame as its first argument, and then a logical statement referring to one or more of the data frame’s columns.
filter(pitch, speaker_names == "Charlie")
-
-
- filter(pitch, ages > 18, F0 > 190)
-
-
-~5 Minute Activity
-First, review the documentation of the iy_ah
data set with ?iy_ah
. Using filter()
and nrow()
, find out what percent of /i:/ tokens have a duration less than 90ms (0.09s).
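A sketch of one way to approach it, assuming the /i:/ tokens are coded as "iy" in the vowel column (as in the examples above):
iy_tokens <- filter(iy_ah, vowel == "iy")     # just the /i:/ tokens
short_iy <- filter(iy_tokens, dur < 0.09)     # the ones shorter than 90ms
nrow(short_iy) / nrow(iy_tokens) * 100        # percent of /i:/ tokens under 90ms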
R can easily read comma-separated (.csv) files and tab-delimited files into its memory.1 You can read them in with read.csv()
and read.delim()
, respectively. If your data is unavoidably in an Excel spreadsheet, there is a package called readxl
with a function called read_excel()
If you have the readxl
package installed, I strongly recommend reading over its documentation on sheet geometry by calling up the vignette like so:
vignette("sheet-geometry", package = "readxl")
-
-
-
-Last Minute Update: There is also package for reading data in from google spreadsheets https://github.com/jennybc/googlesheets. I haven’t used it yet, but it’s gotten good reviews.
-When loading a data file into R, you are just loading it into the R workspace. Any alterations or modifications you make to the data frame will not be reflected in the file in your system, just in the copy in the R workspace.
-The tricky thing now is that the way that feels most natural or normal for you to organize and name your files and folders doesn’t necessarily translate into a good way for R (or other programming languages) to look at them. In order to load a file into R, you need to provide read.csv()
or read.delim()
with the “path” to the file, which is just a text string.
For example, here’s a screenshot of a data file I’d like to load into R.
-I have the option turned on in my system to see the full path at the bottom of the file window, so you can see a full list of all of the folders this data file is embedded in. In order to read this data into R, you need to type out the full path, although a nice thing about
- - - - joe_vowels <- read.csv("~/ownCloud/DocSyncUoE/Courses/LSA/data/joe_vowels.csv")
-
-
-
-If you’re not sure what it looks like on your system, use the file.choose()
function.
file.choose()
-
-
-
-That’ll launch the default visual file browser for your system. After browsing around and clicking on a file, file.choose()
will print the character string that represents the path to that file into the console.
Hygiene
-Don’t rely heavily on file.choose()
. Sometimes, I’ve seen R scripts with the following line of code in it:
data <- read.csv(file.choose())
-
-
-
-Please never do this. I would caution against using it in any code, scripts or notebooks at all. Only ever use it to refresh your memory of where your data is located. By always writing out the the text of the path to the data, you
-One pretty cool thing is that if a data file is up on a website somewhere, you can just access it by passing the url to read.csv()
or read.delim()
.2 Here is some sample data on the Donner Party.3
donner <- read.csv("http://jofrhwld.github.io/data/donner.csv")
- head(donner)
-
-
-
-~5 minute activity
-Download the file joe_vowels.csv
from the course Canvas. Save it to the data directory for the course, or wherever you would like to keep it. Read it into R. What’s my mean F1 and F2 across all of my vowels?
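A sketch of one possible answer, assuming you saved the file in your data folder and that the formant columns are called F1 and F2 like in iy_ah:
joe_vowels <- read.csv("data/joe_vowels.csv")   # adjust the path to wherever you saved it
mean(joe_vowels$F1)
mean(joe_vowels$F2)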
We’ve discussed how data ought to be tidily organized, and we’ve now gone over how to load data, and minimally explore dataframes in R. Let’s quickly go over how to tidy up messy data a little.
-First, let’s look at the wide iy_ah_wide
dataframe, which is part of the lsa2017
package.
iy_ah_wide
-
-
The problem with this data is that the vowel class and the formant are fused together in the column names (ah_F1, ah_F2, iy_F1, iy_F2), so each variable doesn’t get its own column, and the measurements for one speaker are spread across several columns in a single row.
-Getting to a tidier format of the data will involve a three step process:
-We can do this easily with the functions gather()
, separate()
and spread()
from the tidyr
package.
For a smaller illustrative purpose for people who may feel uneasy about vowels and formants, I’ll be illustrating each of these steps with a simpler data set about how many apples and oranges two people bought, and how many they ate.
- - - -fruit <- data.frame(person = c("Oakley", "Charlie"),
- apples_bought = c(5, 3),
- apples_ate = c(1, 2),
- oranges_bought = c(5, 4),
- oranges_ate = c(3, 3))
-
-
-
-
-
-
-
-
| person | apples_bought | apples_ate | oranges_bought | oranges_ate |
|--------|---------------|------------|----------------|-------------|
| Oakley | 5 | 1 | 5 | 3 |
| Charlie | 3 | 2 | 4 | 3 |
Note: even though the column labels look different, this is an equivalent table to a layout that uses merged column-label cells.
-The gather()
function makes wide data long. It takes the following arguments:
gather(data, key, value, cols)
-data
-key
and value
-gather()
is going to take the column names and put them in the column you give to key
, and the values from all the cells and put them in the column you call value
.cols
-gather()
that we’ll discuss.Here’s how that’ll work for the fruit data. We’ll tell gather()
to gather columns 2 through 5.
fruit_long <- gather(data = fruit,
- key = fruit_behavior,
- value = number,
- 2:5)
-
-
-
-
-
-
| person | fruit_behavior | number |
|--------|----------------|--------|
| Oakley | apples_bought | 5 |
| Charlie | apples_bought | 3 |
| Oakley | apples_ate | 1 |
| Charlie | apples_ate | 2 |
| Oakley | oranges_bought | 5 |
| Charlie | oranges_bought | 4 |
| Oakley | oranges_ate | 3 |
| Charlie | oranges_ate | 3 |
gather()
has returned a new data frame. It has created a new column called fruit_behavior
, because we told it to with the key
argument, and it has created a new column called number
, because we told it to with the value
argument. It has taken all of the column names of the columns we told it to gather, and put them into the fruit_behavior
column, and the numeric values from the columns we told it to gather, and put them into the number
column. It has also repeated the rows of the other columns (person
) as logically necessary.
Now, we told it to gather column numbers 2 through 5, but this would have also worked:
- - - -gather(data = fruit,
- key = fruit_behavior,
- value = number,
- c("apples_bought","apples_ate", "oranges_bought", "oranges_ate"))
-
-
-
-gather()
also has a more convenient method of specifying the columns you want to gather by passing it a named range of columns. We want to gather all columns from apples_bought
to oranges_ate
, so we can tell it to do so with apples_bought:oranges_ate
.
gather(data = fruit,
- key = fruit_behavior,
- value = number,
- apples_bought:oranges_ate)
-
-
-
-Ok, let’s do this now to the iy_ah_wide
data, gathering all of the columns from ah_F1
to iy_F2
.
iy_ah_step1 <- gather(data = iy_ah_wide,
- key = vowel_formant,
- value = hz,
- ah_F1:iy_F2)
-iy_ah_step1
-
-
-For the fruit data, the only un-gathered column was person
, but for iy_ah_wide
, idstring
, age
, sex
, and year
, were all ungathered. Here you can see how all rows of ungathered columns are repeated as logically necessary.
There is still a problem with both the fruit_long
and the iy_ah_step1
data frames, which is that two different kinds of data are merged within one column. For iy_ah_step1
, the vowel class and formant variable are merged together (e.g. ah_F1
) and for fruit_long
, the fruit and behavior are merged together (e.g. apple_bought
). We need to separate these, with a very aptly named function called separate()
separate(data, col, into, sep)
-data
-col
-into
-sep
-col
.Here’s how it works for fruit_long
.
fruit_separate <- separate(data = fruit_long,
- col = fruit_behavior,
- into = c("fruit", "behavior"),
- sep = "_")
-
-
-
-
-
-
| person | fruit | behavior | number |
|--------|-------|----------|--------|
| Oakley | apples | bought | 5 |
| Charlie | apples | bought | 3 |
| Oakley | apples | ate | 1 |
| Charlie | apples | ate | 2 |
| Oakley | oranges | bought | 5 |
| Charlie | oranges | bought | 4 |
| Oakley | oranges | ate | 3 |
| Charlie | oranges | ate | 3 |
It has returned a new data frame with the fruit_behavior
column split into two new columns, named after what I passed to the into
argument. It split up fruit_behavior
based on what I passed to sep
, which was the underscore character.
Let’s do this for iy_ah_step1
now.
iy_ah_step2 <- separate(iy_ah_step1,
- vowel_formant,
- into = c("vowel", "formant"),
- sep = "_")
-iy_ah_step2
-
-
-We now have two separate columns for vowel
and formant
.
Hygiene
-I have been very helpful and used underscores to merge together the values we want to separate. Be helpful to yourself, and be consistent in the semantics of how you used potential delimiters like -
and _
. Here’s an example of being helpful to yourself:
project_subject_firstname-lastname
-
-EDI_1_Stuart-Duddingston
-EDI_2_Connor-Black-Macdowall
-EDI_3_Mhairi
-This is helpful, because when you separate by underscore, you’ll have something tidy
-EDI 1 Stuart-Duddingston
-EDI 2 Connor-Black-Macdowall
-EDI 3 Mhairi
-If you used -
for everything, you’ll have chaos when you try to separate them because some speakers have “double barreled” names, and some speakers have only first names:
# Input:
-EDI-1-Stuart-Duddingston
-EDI-2-Connor-Black-Macdowall
-EDI-3-Mhairi
-
-# Becomes
-
-EDI 1 Stuart Duddingston
-EDI 2 Connor Black Macdowall
-EDI 3 Mhairi
-This goes beyond R programming. You should make some decisions and stick with them for all of your data analysis, including file naming, Praat tier naming, etc.
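To make that concrete, here is a small sketch; the recordings data frame is made up purely for illustration:
recordings <- data.frame(file = c("EDI_1_Stuart-Duddingston",
                                  "EDI_2_Connor-Black-Macdowall",
                                  "EDI_3_Mhairi"),
                         stringsAsFactors = FALSE)
separate(recordings, file, into = c("project", "subject", "name"), sep = "_")
# the double-barreled name stays intact in the name column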
-We’ve got one last step, which is spreading the values in some rows across the column space. With the fruit
data, we might not want a column called behavior
, but actually have two columns called bought
and ate
. For the vowel data, we definitely don’t want one column called formant
. We want one called F1
and one called F2
. We can do this with the spread()
function.
spread(data, key, value)
-data
-key
-value
-Here’s how that looks with the fruit_separate
data.
fruit_spread <- spread(data = fruit_separate,
- key = behavior,
- value = number)
-
-
-
-
-
-
| person | fruit | ate | bought |
|--------|-------|-----|--------|
| Charlie | apples | 2 | 3 |
| Charlie | oranges | 3 | 4 |
| Oakley | apples | 1 | 5 |
| Oakley | oranges | 3 | 5 |
This has created a new data frame. I told spread()
to spread the values in behavior
across the column space. Because it had only two unique values in it (bought
and ate
), it has created two new columns called bought
and ate
. After creating these new columns, it had to fill in the new cells with some values, and I told it to use the values in number
for that.
Here’s how that works with iy_ah_step2
.
iy_ah_step3 <- spread(data = iy_ah_step2,
- key = formant,
- value = hz)
-iy_ah_step3
-
-
-Now, we’ve finally gotten to a tidy data format. In our next meeting, we’ll discuss how to chain these three functions into one easy to read process.
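As a small preview, the same three steps can be chained together with the %>% operator that comes with the tidyverse; this is just a sketch using the fruit example from above:
fruit_tidy <- fruit %>%
  gather(key = fruit_behavior, value = number, apples_bought:oranges_ate) %>%
  separate(col = fruit_behavior, into = c("fruit", "behavior"), sep = "_") %>%
  spread(key = behavior, value = number)
Each of these functions takes a data frame as its first argument, which is why the pipe can hand the result of one step straight to the next.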
-Idiom
-You might have noticed that in the functions above, I’ve put a new line between individual function arguments. I’ve done this because white-space doesn’t matter when it comes to R. I could have written these with just spaces between each argument, but that would be too visually crowded.
- - - -# compare
-
-# One line
-fruit_separate <- separate(data = fruit_long, col = fruit_behavior, into = c("fruit", "behavior"), sep = "_")
-
-# New Lines
-fruit_separate <- separate(data = fruit_long,
- col = fruit_behavior,
- into = c("fruit", "behavior"),
- sep = "_")
-
-
-
-
-I encourage you to use new lines similarly to give yourself “some space to breathe”. Don’t be shy about it. But, if you put newlines between some arguments, you should really put new lines between all arguments.
-My personal aesthetic preference is for tab-delimited files.↩
This doesn’t work if the file is behind encryption, i.e. if it begins with https://
.↩
“The Donner Party (sometimes called the Donner-Reed Party) was a group of American pioneer migrants who set out for California in a wagon train. Delayed by a series of mishaps, they spent the winter of 1846–47 snowbound in the Sierra Nevadas. Some of the migrants resorted to cannibalism to survive, eating those who had succumbed to starvation and sickness.” https://en.wikipedia.org/wiki/Donner_Party↩