Update 6-uncertainty-overview.md
qualiaMachine authored Dec 19, 2024
1 parent 0551991 commit d572932
Showing 1 changed file with 2 additions and 3 deletions: episodes/6-uncertainty-overview.md
- Summarize when and where different uncertainty estimation methods are most useful.

::::::::::::::::::::::::::::::::::::::::::::::::

### How confident is my model? Will it generalize to new data or subpopulations?
Understanding how confident a model is in its predictions is key to building trustworthy AI systems, especially in high-stakes settings such as healthcare or autonomous vehicles. Model uncertainty estimation focuses on quantifying the model's confidence and is often used to identify predictions that require further review or caution.
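
To make this concrete, here is a minimal sketch (not from the lesson itself) of how flagging low-confidence predictions for review might look, assuming a classifier that outputs class probabilities; the 0.7 threshold is a hypothetical placeholder that would be tuned per application:

```python
import numpy as np

# Hypothetical class probabilities for three inputs (each row sums to 1)
probs = np.array([
    [0.95, 0.03, 0.02],   # high-confidence prediction
    [0.40, 0.35, 0.25],   # low-confidence prediction
    [0.70, 0.20, 0.10],   # borderline prediction
])

confidence = probs.max(axis=1)       # max predicted probability per input
needs_review = confidence < 0.7     # hypothetical review threshold
print(needs_review)                  # [False  True False]
```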

### Sources of uncertainty
When choosing a method, it's important to consider the trade-offs in computational cost.
3. **Out-of-distribution detection**: Identify inputs outside the training distribution.
    - Example application: Flagging out-of-scope queries in chatbot systems (a minimal sketch follows this list).
- Reference: Hendrycks, D., & Gimpel, K. (2017). "A baseline for detecting misclassified and out-of-distribution examples in neural networks."
[ArXiv](https://arxiv.org/abs/1610.02136).
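
As a rough illustration of that baseline (a sketch under assumptions, not the paper's exact code), the maximum softmax probability (MSP) can serve as an out-of-distribution score, assuming the model exposes raw logits; the 0.5 threshold below is a hypothetical value that would normally be chosen on held-out data:

```python
import numpy as np

def max_softmax_probability(logits):
    """Return the maximum softmax probability (MSP) per input row."""
    # Shift logits by the row maximum for numerical stability
    shifted = logits - logits.max(axis=1, keepdims=True)
    probs = np.exp(shifted) / np.exp(shifted).sum(axis=1, keepdims=True)
    return probs.max(axis=1)

# Hypothetical logits: first row is peaked (in-distribution-like),
# second row is nearly flat (out-of-distribution-like)
logits = np.array([
    [6.0, 1.0, 0.5],
    [1.1, 1.0, 0.9],
])

msp = max_softmax_probability(logits)
ood_flag = msp < 0.5                 # hypothetical threshold
print(msp.round(3), ood_flag)        # [0.989 0.367] [False  True]
```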
