From ed0818057e6b184dcd2a1a3e0739421c1878b9eb Mon Sep 17 00:00:00 2001
From: GitHub Actions
Date: Thu, 19 Dec 2024 20:22:35 +0000
Subject: [PATCH] site deploy

Auto-generated via `{sandpaper}`
Source  : 2cbf87376e44277594e26f396baa5bb7686411eb
Branch  : md-outputs
Author  : GitHub Actions
Time    : 2024-12-19 20:22:17 +0000
Message : markdown source builds

Auto-generated via `{sandpaper}`
Source  : 6064725685e3a7ff6f9de599bacb439a33cc95e7
Branch  : main
Author  : Chris Endemann
Time    : 2024-12-19 20:21:39 +0000
Message : Update 7d-OOD-detection-distance.md
---
 2-model-eval-and-fairness.html             | 10 +++---
 3-model-fairness-deep-dive.html            |  4 +--
 7d-OOD-detection-distance.html             | 17 ++++----
 8-releasing-a-model.html                   |  8 ++---
 aio.html                                   | 39 +++++++++++++---------
 instructor/2-model-eval-and-fairness.html  | 10 +++---
 instructor/3-model-fairness-deep-dive.html |  4 +--
 instructor/7d-OOD-detection-distance.html  | 17 ++++----
 instructor/8-releasing-a-model.html        |  8 ++---
 instructor/aio.html                        | 39 +++++++++++++---------
 md5sum.txt                                 |  2 +-
 pkgdown.yml                                |  2 +-
 12 files changed, 92 insertions(+), 68 deletions(-)

diff --git a/2-model-eval-and-fairness.html b/2-model-eval-and-fairness.html
index 5d3e602..a00d06c 100644
--- a/2-model-eval-and-fairness.html
+++ b/2-model-eval-and-fairness.html
@@ -526,7 +526,7 @@

What accuracy metric to use?

1. It is best if all patients who need the screening get it, and there is little downside for doing screenings unnecessarily because the

@@ -655,7 +655,7 @@

    Matching fairness terminology with definitions


    A - 3, B - 2, C - 4, D - 1

    @@ -736,7 +736,7 @@

    Red-teaming large language models


Most publicly available LLM providers set up guardrails to avoid propagating biases present in their training data. For instance, as of

@@ -787,7 +787,7 @@

    Challenge


While the picture is of Barack Obama, the upsampled image shows a white face. [Image: unblurred version of the pixelated picture of Obama; instead of showing Obama, it shows a white man.]

    @@ -888,7 +888,7 @@

    Pros and cons of preprocessing options


A downside of oversampling is that it may violate statistical assumptions about independence of samples. A downside of undersampling

diff --git a/3-model-fairness-deep-dive.html b/3-model-fairness-deep-dive.html
index ea90862..0b84cc4 100644
--- a/3-model-fairness-deep-dive.html
+++ b/3-model-fairness-deep-dive.html
@@ -766,7 +766,7 @@

    Interpreting the plot

1. Using a threshold of 0.1, the accuracy is about 0.72 and the 1-DI score is about 0.54. Using a threshold of 0.5, the accuracy is about

@@ -1070,7 +1070,7 @@

      Discuss


      Pros: Randomization can be effective at increasing fairness.

diff --git a/7d-OOD-detection-distance.html b/7d-OOD-detection-distance.html
index e387cd1..8157f23 100644
--- a/7d-OOD-detection-distance.html
+++ b/7d-OOD-detection-distance.html
@@ -465,15 +465,19 @@

Disadvantages

\[ D_M(x) = \sqrt{(x - \mu)^T \Sigma^{-1} (x - \mu)} \]

where:

• \(x\): The input data point.
• \(\mu\): The mean vector of the distribution.
• \(\Sigma\): The covariance matrix of the distribution. The inverse of the covariance matrix is used to “whiten” the feature space, ensuring that features with larger variances do not dominate the distance computation. This adjustment also accounts for correlations between features, transforming the data into a space where all features are uncorrelated and standardized. This approach is robust for high-dimensional data as it accounts for correlations between features.

      PYTHON

      import numpy as np
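
The lesson's code block is cut off at the import line in this hunk. As a rough, hedged sketch of the formula above (not the lesson's actual implementation; the function name, feature matrix, and toy data below are made up for illustration), a Mahalanobis-based OOD score can be computed directly with NumPy:

PYTHON

import numpy as np

# Hypothetical helper: Mahalanobis distance of one sample to the mean of the
# in-distribution (ID) features, i.e. D_M(x) = sqrt((x - mu)^T Sigma^{-1} (x - mu)).
def mahalanobis_distance(x, id_features):
    mu = id_features.mean(axis=0)             # mean vector of the ID distribution
    cov = np.cov(id_features, rowvar=False)   # covariance matrix of the ID features
    cov_inv = np.linalg.pinv(cov)             # pseudo-inverse for numerical stability
    diff = x - mu
    return np.sqrt(diff @ cov_inv @ diff)

# Toy usage: samples far from the ID mean (in the whitened space) get large
# distances and can be flagged as OOD once a threshold is chosen on held-out ID data.
rng = np.random.default_rng(0)
id_features = rng.normal(size=(500, 10))      # stand-in for in-distribution embeddings
ood_candidate = rng.normal(loc=4.0, size=10)  # deliberately far-away sample
print(mahalanobis_distance(ood_candidate, id_features))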
      @@ -910,10 +914,11 @@ 

      Concluding thoughts and futur

If you’re interested, we can explore specific contrastive learning methods like SimCLR or MoCo in future sessions, diving into how their objectives help create robust feature spaces!


diff --git a/8-releasing-a-model.html b/8-releasing-a-model.html
index dedc37b..9fb3282 100644
--- a/8-releasing-a-model.html
+++ b/8-releasing-a-model.html
@@ -475,7 +475,7 @@

      Why should we share trained models?
