From 46d06cefbf2af45dfd0b91cea9d38f98c7231911 Mon Sep 17 00:00:00 2001 From: Andrew Ghazi <6763470+andrewGhazi@users.noreply.github.com> Date: Mon, 16 Sep 2024 13:24:18 -0400 Subject: [PATCH 1/3] change q1 to be a direct application --- episodes/eda_qc.Rmd | 12 +++++++++--- 1 file changed, 9 insertions(+), 3 deletions(-) diff --git a/episodes/eda_qc.Rmd b/episodes/eda_qc.Rmd index d2d8acd..9bfda28 100644 --- a/episodes/eda_qc.Rmd +++ b/episodes/eda_qc.Rmd @@ -98,6 +98,8 @@ median(bcrank$total) ``` Just 2! Clearly many barcodes produce practically no output. + + ::: :::: @@ -197,6 +199,7 @@ or we could look at the distribution of such metrics and use a data adaptive thr ```{r} summary(df$detected) + summary(df$subsets_Mito_percent) ``` @@ -212,14 +215,17 @@ sce$discard <- reasons$discard :::: challenge -We've removed empty cells and low-quality cells to be discarded. How many cells are we left with at this point? +Maybe our sample preparation was poor and we want the QC to be stricter. How could we change the QC filtering to use 4 MADs as the threshold for outlier calling? ::: solution +You set `nmads = 4` like so: + ```{r} -table(sce$discard) +reasons_strict <- perCellQCFilters(df, sub.fields = "subsets_Mito_percent", nmads = 4) ``` -There are `r unname(table(sce$discard)[1])` cells that *haven't* been flagged to be discarded, so that's how many we have left. +You would then need to reassign the `discard` column as well, but we'll stick with the 3 MADs default for now. 
+ ::: :::: From 369749074612a4668eb6269da595d58fe33b635c Mon Sep 17 00:00:00 2001 From: Andrew Ghazi <6763470+andrewGhazi@users.noreply.github.com> Date: Mon, 16 Sep 2024 13:47:26 -0400 Subject: [PATCH 2/3] more direct fill-in-the-blank question on size factors --- episodes/eda_qc.Rmd | 53 +++++++++++++++++++++++++++++++++++++-------- 1 file changed, 44 insertions(+), 9 deletions(-) diff --git a/episodes/eda_qc.Rmd b/episodes/eda_qc.Rmd index 9bfda28..246a3e8 100644 --- a/episodes/eda_qc.Rmd +++ b/episodes/eda_qc.Rmd @@ -342,12 +342,30 @@ sce :::: challenge -Some sophisticated experiments perform additional steps so that they can estimate size factors from so-called "spike-ins". Judging by the name, what do you think "spike-ins" are, and what additional steps are required to use them? +Fill in the blanks to perform normalization using simpler library size factors instead of deconvolution. + +```{r eval=FALSE} +____ <- ____SizeFactors(sce) + +sizeFactors(sce) <- ____ + +sce <- ____(sce) + +sce +``` ::: solution +```{r eval=FALSE} +lib.sf <- librarySizeFactors(sce) -Spike-ins are deliberately-introduced exogeneous RNA from an exotic or synthetic source at a known concentration. This provides a known signal to normalize to. Exotic or synthetic RNA (e.g. soil bacteria RNA in a study of human cells) is used in order to avoid confusing spike-in RNA with sample RNA. This has the obvious advantage of accounting for cell-wise variation, but adds additional sample-preparation work. +sizeFactors(sce) <- lib.sf + +sce <- logNormCounts(sce) +sce +``` + +If you run this chunk, make sure to go back and re-run the deconvolution-based normalization if you want your work to align with the rest of this episode. ::: :::: @@ -385,7 +403,7 @@ The blue line represents the uninteresting "technical" variance for any given ge ### Selecting highly variable genes -The next step is to select the subset of HVGs to use in downstream analyses. 
A larger set will assure that we do not remove important genes, at the cost of potentially increasing noise. Typically, we restrict ourselves to the top $n$ genes, here we chose $n = 1000$, but this choice should be guided by prior biological knowledge; for instance, we may expect that only about 10% of genes to be differentially expressed across our cell populations and hence select 10% of genes as higly variable (e.g., by setting `prop = 0.1`). +The next step is to select the subset of HVGs to use in downstream analyses. A larger set will assure that we do not remove important genes, at the cost of potentially increasing noise. Typically, we restrict ourselves to the top $n$ genes; here we chose $n = 1000$, but this choice should be guided by prior biological knowledge. For instance, we may expect only about 10% of genes to be differentially expressed across our cell populations and hence select 10% of genes as highly variable (e.g., by setting `prop = 0.1`). ```{r} hvg.sce.var <- getTopHVGs(dec.sce, n = 1000) @@ -393,12 +411,6 @@ hvg.sce.var <- getTopHVGs(dec.sce, n = 1000) head(hvg.sce.var) ``` -:::: challenge - -Run an internet search for some of the most highly variable genes we've identified here. See if you can identify the type of protein they produce or what sort of process they're involved in. Do they make biological sense to you? - -:::: - ## Dimensionality Reduction Many scRNA-seq analysis procedures involve comparing cells based on their expression values across multiple genes. For example, clustering aims to identify cells with similar transcriptomic profiles by computing Euclidean distances across genes. In these applications, each individual gene represents a dimension of the data, hence we can think of the data as "living" in a ten-thousand-dimensional space. 
The package `DropletTestFiles` includes the raw output from Cell Ranger of the p :::::::::::::::::::::::::::::::::: + +:::: challenge + +#### Extension challenge 1: Spike-ins + +Some sophisticated experiments perform additional steps so that they can estimate size factors from so-called "spike-ins". Judging by the name, what do you think "spike-ins" are, and what additional steps are required to use them? + +::: solution + +Spike-ins are deliberately introduced exogenous RNA from an exotic or synthetic source at a known concentration. This provides a known signal to normalize against. Exotic (e.g. soil bacteria RNA in a study of human cells) or synthetic RNA is used in order to avoid confusing spike-in RNA with sample RNA. This has the obvious advantage of accounting for cell-wise variation, but can substantially increase the amount of sample-preparation work. + +::: + +:::: + +:::: challenge + +#### Extension challenge 2: Background research + +Run an internet search for some of the most highly variable genes we identified in the feature selection section. See if you can identify the type of protein they produce or what sort of process they're involved in. Do they make biological sense to you? + +:::: + ::::::::::::::::::::::::::::::::::::: keypoints - Empty droplets, i.e. droplets that do not contain intact cells and that capture only ambient or background RNA, should be removed prior to an analysis. The `emptyDrops` function from the [DropletUtils](https://bioconductor.org/packages/DropletUtils) package can be used to identify empty droplets. 
From a2601a5da3bedfc67c858f5f08ae8f8781428078 Mon Sep 17 00:00:00 2001 From: Andrew Ghazi <6763470+andrewGhazi@users.noreply.github.com> Date: Mon, 16 Sep 2024 14:34:22 -0400 Subject: [PATCH 3/3] reformulate questions --- episodes/eda_qc.Rmd | 24 ++++++++++++++++++++++++ 1 file changed, 24 insertions(+) diff --git a/episodes/eda_qc.Rmd b/episodes/eda_qc.Rmd index 246a3e8..9d80562 100644 --- a/episodes/eda_qc.Rmd +++ b/episodes/eda_qc.Rmd @@ -411,6 +411,30 @@ hvg.sce.var <- getTopHVGs(dec.sce, n = 1000) head(hvg.sce.var) ``` +:::: challenge + +Imagine you have data that were prepared by three people with varying levels of experience, which leads to varying technical noise. How can you account for this blocking structure when selecting HVGs? + +::: solution +Use the `block` argument in the call to `modelGeneVar()` like so: + +```{r eval=FALSE} + +sce$experimenter <- factor(sample(c("Perry", "Merry", "Gary"), + replace = TRUE, + size = ncol(sce))) + +blocked_variance_df <- modelGeneVar(sce, + block = sce$experimenter) + +``` + +Blocked models are evaluated on each block separately and then combined. If the experimental groups are related in some structured way, it may be preferable to use the `design` argument. See `?modelGeneVar` for more detail. + +::: + +:::: + ## Dimensionality Reduction Many scRNA-seq analysis procedures involve comparing cells based on their expression values across multiple genes. For example, clustering aims to identify cells with similar transcriptomic profiles by computing Euclidean distances across genes. In these applications, each individual gene represents a dimension of the data, hence we can think of the data as "living" in a ten-thousand-dimensional space.
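
---

Note for reviewers: the `design` alternative that patch 3's solution points to (`?modelGeneVar`) can be sketched as below. This is a sketch only, not part of the patches: `processing_time` is a hypothetical per-cell covariate invented for illustration, and the chunk assumes the episode's `sce` object and the scran package are already loaded.

```r
# Sketch: accounting for a continuous nuisance covariate via `design`
# rather than discrete blocks. `processing_time` is a made-up covariate.
sce$processing_time <- runif(ncol(sce), min = 0, max = 60)

# Build a design matrix for the covariate and pass it to modelGeneVar(),
# which regresses out the fitted effect before modeling per-gene variance
design <- model.matrix(~ sce$processing_time)
dec_design <- modelGeneVar(sce, design = design)
```

`block` and `design` serve different situations: `block` fits a separate mean-variance trend per discrete batch and combines the results, while `design` absorbs structured covariates through a linear model, as described in `?modelGeneVar`.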