From 97640d95f26e25fd7b20d2badf318f0c6dd0ed67 Mon Sep 17 00:00:00 2001 From: "Documenter.jl" Date: Tue, 12 Mar 2024 10:37:49 +0000 Subject: [PATCH] build based on 865544c --- dev/.documenter-siteinfo.json | 2 +- dev/contribute/index.html | 2 +- dev/explanation/architecture/index.html | 2 +- dev/explanation/finite_sample_correction/index.html | 2 +- dev/explanation/index.html | 2 +- dev/faq/index.html | 2 +- dev/how_to_guides/index.html | 2 +- dev/how_to_guides/llm/index.html | 2 +- dev/how_to_guides/mnist/index.html | 2 +- dev/how_to_guides/timeseries/index.html | 2 +- dev/index.html | 2 +- dev/reference/index.html | 8 ++++---- dev/search_index.js | 2 +- dev/tutorials/classification/index.html | 2 +- dev/tutorials/index.html | 2 +- dev/tutorials/plotting/index.html | 2 +- dev/tutorials/regression/index.html | 2 +- 17 files changed, 20 insertions(+), 20 deletions(-) diff --git a/dev/.documenter-siteinfo.json b/dev/.documenter-siteinfo.json index af2ab4e..b837295 100644 --- a/dev/.documenter-siteinfo.json +++ b/dev/.documenter-siteinfo.json @@ -1 +1 @@ -{"documenter":{"julia_version":"1.10.2","generation_timestamp":"2024-03-11T17:00:57","documenter_version":"1.3.0"}} \ No newline at end of file +{"documenter":{"julia_version":"1.10.2","generation_timestamp":"2024-03-12T10:37:45","documenter_version":"1.3.0"}} \ No newline at end of file diff --git a/dev/contribute/index.html b/dev/contribute/index.html index b54de1e..4960bde 100644 --- a/dev/contribute/index.html +++ b/dev/contribute/index.html @@ -1,2 +1,2 @@ -🛠 Contribute · ConformalPrediction.jl

Contributor’s Guide

Contents

Contributing to ConformalPrediction.jl

Contributions are welcome! Please follow the SciML ColPrac guide. To get started, we recommend having a look at the Explanation section in the docs. The subsection explaining the package architecture may be particularly useful. You may already have a specific idea about what you want to contribute, in which case please feel free to open an issue and a pull request. If you don’t have anything specific in mind, the list of outstanding issues may be a good source of inspiration. If you decide to work on an outstanding issue, be sure to check its current status: if it’s “In Progress”, check in with the developer who last worked on the issue to see how you may help.

diff --git a/dev/explanation/architecture/index.html b/dev/explanation/architecture/index.html index 4b53929..ee3f507 100644 --- a/dev/explanation/architecture/index.html +++ b/dev/explanation/architecture/index.html @@ -1,2 +1,2 @@ -Package Architecture · ConformalPrediction.jl

Package Architecture

The diagram below demonstrates the package architecture at the time of writing. This is still subject to change, so any thoughts and comments are very much welcome.

The goal is to make this package as compatible as possible with MLJ to tap into existing functionality. The basic idea is to subtype MLJ Supervised models and then use concrete types to implement different approaches to conformal prediction. For each of these concrete types the compulsory MMI.fit and MMI.predict methods need to be implemented (see here).

Abstract Subtypes

Currently, I intend to work with three different abstract subtypes:

fit and predict

The fit and predict methods are compulsory in order to prepare models for general use with MLJ. They also serve to implement the logic underlying the various approaches to conformal prediction. To understand how this currently works, have a look at the ConformalPrediction.AdaptiveInductiveClassifier as an example: fit(conf_model::ConformalPrediction.AdaptiveInductiveClassifier, verbosity, X, y) and predict(conf_model::ConformalPrediction.AdaptiveInductiveClassifier, fitresult, Xnew).
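As a toy illustration of the kind of logic these methods encapsulate for a split (inductive) conformal classifier, the following stdlib-only sketch computes the calibration quantile and a prediction set. This is illustrative only, not the package’s actual implementation; all numbers and label names are made up:

```julia
using Statistics

# Calibration step (what `fit` implements on top of the atomic model):
# nonconformity score sᵢ = 1 − p̂(yᵢ | xᵢ), where p̂ is the softmax
# probability the atomic model assigns to the true label.
cal_probs = [0.9, 0.8, 0.95, 0.7, 0.85, 0.6, 0.75, 0.9, 0.8, 0.88]
scores = 1 .- cal_probs
n = length(scores)
coverage = 0.9

# Finite-sample-corrected quantile of the calibration scores.
q̂ = quantile(scores, min(1.0, ceil((n + 1) * coverage) / n))

# Prediction step (what `predict` implements): include every label whose
# predicted probability clears the threshold implied by q̂.
p_new = Dict("setosa" => 0.75, "versicolor" => 0.15, "virginica" => 0.10)
prediction_set = [label for (label, p) in p_new if p ≥ 1 - q̂]
```

Here only one label clears the threshold, so the prediction set is a singleton; with a more uncertain input, several labels would survive the cut.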

diff --git a/dev/explanation/finite_sample_correction/index.html b/dev/explanation/finite_sample_correction/index.html index 14227e6..c38bc6b 100644 --- a/dev/explanation/finite_sample_correction/index.html +++ b/dev/explanation/finite_sample_correction/index.html @@ -15,4 +15,4 @@ vline!([mean(Δ)], color=:red, label="mean") push!(plts, plt) end -plot(plts..., layout=(1,3), size=(900, 300), legend=:topleft, title=["nobs = 100" "nobs = 1000" "nobs = 10000"])

See also this related discussion.

References

Angelopoulos, Anastasios N., and Stephen Bates. 2021. “A Gentle Introduction to Conformal Prediction and Distribution-Free Uncertainty Quantification.” https://arxiv.org/abs/2107.07511.

Barber, Rina Foygel, Emmanuel J. Candès, Aaditya Ramdas, and Ryan J. Tibshirani. 2021. “Predictive Inference with the Jackknife+.” The Annals of Statistics 49 (1): 486–507. https://doi.org/10.1214/20-AOS1965.

diff --git a/dev/explanation/index.html b/dev/explanation/index.html index 58bcbb8..7d1b1a1 100644 --- a/dev/explanation/index.html +++ b/dev/explanation/index.html @@ -1,2 +1,2 @@ -Overview · ConformalPrediction.jl

Explanation

In this section you will find detailed explanations about the methodology and code.

Explanation clarifies, deepens and broadens the reader’s understanding of a subject.

Diátaxis

In other words, you come here because you are interested in understanding how all of this actually works 🤓

diff --git a/dev/faq/index.html b/dev/faq/index.html index eccea00..74f241c 100644 --- a/dev/faq/index.html +++ b/dev/faq/index.html @@ -1,2 +1,2 @@ -❓ FAQ · ConformalPrediction.jl

Frequently Asked Questions

In this section we attempt to provide some reflections on frequently asked questions about the package and implemented methodologies. If you have a particular question that is not listed here, please feel free to open an issue. While I can answer questions regarding the package with a certain degree of confidence, I do not pretend to have any definite answers to methodological questions, but merely reflections (see the disclaimer below).

Package

Why the interface to MLJ.jl?

An important design choice. MLJ.jl is a one-stop shop for common machine learning models and pipelines in Julia. It’s growing fast and the development team is very accessible, friendly and enthusiastic. Conformal Prediction is a model-agnostic approach to uncertainty quantification, so it can be applied to any common (supervised) machine learning model. For these reasons I decided to interface this package to MLJ.jl. The idea is that any (supervised) MLJ.jl model can be conformalized using ConformalPrediction.jl. By leveraging existing MLJ.jl functionality for common tasks like training, prediction and model evaluation, this package is light-weight and scalable.
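For instance, conformalizing a supervised MLJ.jl model takes only a single extra call. The sketch below follows the package’s standard workflow and assumes ConformalPrediction.jl and EvoTrees.jl are installed in the active environment:

```julia
using MLJ
using ConformalPrediction

# Synthetic regression data and an atomic MLJ model.
X, y = make_regression(1000, 2)
EvoTreeRegressor = @load EvoTreeRegressor pkg=EvoTrees
model = EvoTreeRegressor()

# Wrap the atomic model in a conformal model, then reuse the usual
# MLJ machinery for training and prediction.
conf_model = conformal_model(model)
mach = machine(conf_model, X, y)
fit!(mach)
predict(mach, X)   # interval-valued predictions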

Methodology

For methodological questions about Conformal Prediction, my best advice is to consult the literature on the topic. A good place to start is “A Gentle Introduction to Conformal Prediction and Distribution-Free Uncertainty Quantification” (Angelopoulos and Bates 2021): the tutorial is comprehensive, accessible and continuously updated. Below you will find a list of high-level questions and reflections.

Disclaimer

    I want to emphasize that these are merely my own reflections. I provide these to the best of my knowledge and understanding of the topic, but please be aware that I am still on a learning journey myself. I have not read the entire literature on this topic (and won’t be able to in the future either). If you spot anything that doesn’t look right or sits at odds with something you read in the literature, please open an issue. Even better: if you want to add your own reflections and thoughts, feel free to open a pull request.

What is Predictive Uncertainty Quantification?

Predictive Uncertainty Quantification deals with quantifying the uncertainty around predictions for the output variable of a supervised model. It is a subset of Uncertainty Quantification, which can also relate to uncertainty around model parameters, for example. I will sometimes use both terms interchangeably, even though I shouldn’t (please bear with me, or if you’re bothered by a particular slip-up, open a PR).

Uncertainty of model parameters is a very important topic itself: we might be interested in understanding, for example, if the estimated effect θ of some input variable x on the output variable y is statistically significant. This typically hinges on being able to quantify the uncertainty around the parameter θ. This package does not offer this sort of functionality. I have so far not come across any work on Conformal Inference that deals with parameter uncertainty, but I also haven’t properly looked for it.

What is the (marginal) coverage guarantee?

The (marginal) coverage guarantee states that:

[…] the probability that the prediction set contains the correct label [for a fresh test point from the same distribution] is almost exactly 1 − α.

— Angelopoulos and Bates (2021)

See Angelopoulos and Bates (2021) for a formal proof of this property or check out this section or Pluto.jl 🎈 notebook to convince yourself through a small empirical exercise. Note that this property relates to a special case of conformal prediction, namely Split Conformal Prediction (Angelopoulos and Bates 2021).
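Formally, for Split Conformal Prediction with a calibration set of size $n$, the guarantee from Angelopoulos and Bates (2021) can be written as

$$
1 - \alpha \;\leq\; \mathbb{P}\big(Y_{\text{test}} \in \hat{\mathcal{C}}(X_{\text{test}})\big) \;\leq\; 1 - \alpha + \frac{1}{n+1},
$$

where the upper bound holds when the nonconformity scores are almost surely distinct; this is what “almost exactly $1 - \alpha$” refers to.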

What does marginal mean in this context?

The property is “marginal” in the sense that the probability is averaged over the randomness in the data (Angelopoulos and Bates 2021). Depending on the size of the calibration set (context: Split Conformal Prediction), the realized coverage or estimated empirical coverage may deviate slightly from the user specified value 1 − α. To get a sense of this effect, you may want to check out this Pluto.jl 🎈 notebook: it allows you to adjust the calibration set size and check the resulting empirical coverage. See also Section 3 of Angelopoulos and Bates (2021).
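To see this effect numerically, here is a small self-contained simulation (not from the package; uniform scores are a stand-in for arbitrary continuous nonconformity scores under exchangeability) showing how realized coverage fluctuates around 1 − α with the calibration set size:

```julia
using Random, Statistics

Random.seed!(42)

# Empirical coverage of split conformal prediction for a given calibration
# size: compute the corrected quantile on calibration scores, then measure
# how many fresh test scores fall below it.
function empirical_coverage(n_cal; α = 0.1, n_test = 100_000)
    cal = rand(n_cal)
    q̂ = quantile(cal, min(1.0, ceil((n_cal + 1) * (1 - α)) / n_cal))
    return mean(rand(n_test) .<= q̂)
end

for n in (100, 1_000, 10_000)
    println("n_cal = $n: empirical coverage ≈ ", round(empirical_coverage(n); digits = 3))
end
```

Small calibration sets give noticeably noisier coverage; as n_cal grows, realized coverage concentrates near the nominal 1 − α.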

Is CP really distribution-free?

The marginal coverage property holds under the assumption that the input data is exchangeable, which is a minimal distributional assumption. So, in my view, the short answer to this question is “No”. I believe that when people use the term “distribution-free” in this context, they mean that no prior assumptions are being made about the actual form or family of distribution(s) that generate the model parameters and data. If we define “distribution-free” in this sense, then the answer to me seems “Yes”.

What happens if this minimal distributional assumption is violated?

Then the marginal coverage property does not hold. See here for an example.

What are set-valued predictions?

This should be clearer after reading through some of the other tutorials and explanations. For conformal classifiers of type ConformalProbabilisticSet, predictions are set-valued: these conformal classifiers may return multiple labels, a single label or no labels at all. Larger prediction sets indicate higher predictive uncertainty: for sets of size greater than one, the conformal predictor cannot narrow its prediction down to a single label with certainty, so it returns all labels that meet the specified marginal coverage.

How do I interpret the distribution of set size?

It can be useful to plot the distribution of set sizes in order to visually assess how adaptive a conformal predictor is. For more adaptive predictors the distribution of set sizes is typically spread out more widely, which reflects that “the procedure is effectively distinguishing between easy and hard inputs” (Angelopoulos and Bates 2021). This is desirable: when it is difficult to make predictions for a given sample, this should be reflected in the set size (or interval width in the regression case). Since ‘difficult’ lies on some spectrum that ranges from ‘very easy’ to ‘very difficult’, the set size should vary across the spectrum from ‘empty set’ to ‘all labels included’.

What is aleatoric uncertainty? What is epistemic uncertainty?

Loosely speaking: aleatoric uncertainty relates to uncertainty that cannot be “learned away” by observing more data (think points near the decision boundary); epistemic uncertainty relates to uncertainty that can be “learned away” by observing more data.

References

Angelopoulos, Anastasios N., and Stephen Bates. 2021. “A Gentle Introduction to Conformal Prediction and Distribution-Free Uncertainty Quantification.” https://arxiv.org/abs/2107.07511.

diff --git a/dev/how_to_guides/index.html b/dev/how_to_guides/index.html index eb06ba1..3f8f011 100644 --- a/dev/how_to_guides/index.html +++ b/dev/how_to_guides/index.html @@ -1,2 +1,2 @@ -Overview · ConformalPrediction.jl

How-To Guides

In this section you will find a series of how-to guides that showcase specific use cases of Conformal Prediction.

How-to guides are directions that take the reader through the steps required to solve a real-world problem. How-to guides are goal-oriented.

Diátaxis

In other words, you come here because you may have some particular problem in mind, would like to see how it can be solved using CP and then most likely head off again 🫡

diff --git a/dev/how_to_guides/llm/index.html b/dev/how_to_guides/llm/index.html index 09f1da4..5eea15e 100644 --- a/dev/how_to_guides/llm/index.html +++ b/dev/how_to_guides/llm/index.html @@ -136,4 +136,4 @@ declined_transfer ┤■ 0.0232856 transfer_into_account ┤■ 0.0108771 cancel_transfer ┤ 0.00876369 - └ ┘

Below we include a short demo video that shows the REPL-based chatbot in action.

Final Remarks

This work was done in collaboration with colleagues at ING as part of the ING Analytics 2023 Experiment Week. Our team demonstrated that Conformal Prediction provides a powerful and principled alternative to top-K intent classification. We won the first prize by popular vote.

References

Casanueva, Iñigo, Tadas Temčinas, Daniela Gerz, Matthew Henderson, and Ivan Vulić. 2020. “Efficient Intent Detection with Dual Sentence Encoders.” In Proceedings of the 2nd Workshop on Natural Language Processing for Conversational AI, 38–45. Online: Association for Computational Linguistics. https://doi.org/10.18653/v1/2020.nlp4convai-1.5.

Liu, Yinhan, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. “RoBERTa: A Robustly Optimized BERT Pretraining Approach.” arXiv. https://doi.org/10.48550/arXiv.1907.11692.

diff --git a/dev/how_to_guides/mnist/index.html b/dev/how_to_guides/mnist/index.html index 5a5668e..4a13e41 100644 --- a/dev/how_to_guides/mnist/index.html +++ b/dev/how_to_guides/mnist/index.html @@ -88,4 +88,4 @@ for (_mod, mach) in results push!(plt_list, bar(mach.model, mach.fitresult, X; title=String(_mod))) end -plot(plt_list..., size=(800,300))

Figure 5: Prediction interval width.

References

Angelopoulos, Anastasios N., and Stephen Bates. 2021. “A Gentle Introduction to Conformal Prediction and Distribution-Free Uncertainty Quantification.” https://arxiv.org/abs/2107.07511.

Angelopoulos, Anastasios, Stephen Bates, Jitendra Malik, and Michael I. Jordan. 2022. “Uncertainty Sets for Image Classifiers Using Conformal Prediction.” arXiv. https://arxiv.org/abs/2009.14193.

Goodfellow, Ian J, Jonathon Shlens, and Christian Szegedy. 2014. “Explaining and Harnessing Adversarial Examples.” https://arxiv.org/abs/1412.6572.

LeCun, Yann. 1998. “The MNIST Database of Handwritten Digits.”

[1] For a full tutorial on how to build an MNIST image classifier relying solely on Flux.jl, check out this tutorial.

diff --git a/dev/how_to_guides/timeseries/index.html b/dev/how_to_guides/timeseries/index.html index 91ae7d0..8c9e923 100644 --- a/dev/how_to_guides/timeseries/index.html +++ b/dev/how_to_guides/timeseries/index.html @@ -83,4 +83,4 @@ lb_updated, fillrange = ub_updated, fillalpha = 0.2, label = "EnbPI", color=:lake, linewidth=0, framestyle=:box) plot!(legend=:outerbottom, legendcolumns=4) -plot!(size=(850,400), left_margin = 5Plots.mm)

Results

In time series problems, unexpected incidents can lead to sudden changes, and such scenarios are highly probable. As illustrated earlier, the model’s training data lacks information about these change points, making it unable to anticipate them. The top figure demonstrates that when residuals are not updated, the prediction intervals solely rely on the distribution of residuals from the training set. Consequently, these intervals fail to encompass the true observations after the change point, resulting in a sudden drop in coverage.

However, by partially updating the residuals, the method becomes adept at capturing the increasing uncertainties in model predictions. It is important to note that the changes in uncertainty occur approximately one day after the change point. This delay is attributed to the requirement of having a sufficient number of new residuals to alter the quantiles obtained from the residual distribution.
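The mechanism can be mimicked in a few lines. This is a toy sketch, not the package’s EnbPI implementation; the window size and residual scales are made up:

```julia
using Random, Statistics

Random.seed!(1)

# Interval width implied by a residual buffer: twice the (1 − α) quantile.
width(residuals; α = 0.1) = 2 * quantile(residuals, 1 - α)

residuals = abs.(0.5 .* randn(200))   # residuals observed during training
w_before = width(residuals)

# After a change point the model errs more; feed the larger residuals into
# a rolling window (drop oldest, append newest), as residual updating does.
for r in abs.(2.0 .* randn(50))
    popfirst!(residuals)
    push!(residuals, r)
end
w_after = width(residuals)
# The interval only widens once enough post-change residuals have entered
# the window, which is the source of the delay described above.
```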

References

Xu, Chen, and Yao Xie. 2021. “Conformal Prediction Interval for Dynamic Time-Series.” In, 11559–69. PMLR. https://proceedings.mlr.press/v139/xu21h.html.

diff --git a/dev/index.html b/dev/index.html index 9438690..2805240 100644 --- a/dev/index.html +++ b/dev/index.html @@ -67,4 +67,4 @@ :linear

Classification:

keys(tested_atomic_models[:classification])
KeySet for a Dict{Symbol, Expr} with 3 entries. Keys:
   :nearest_neighbor
   :evo_tree
-  :logistic

Implemented Evaluation Metrics

To evaluate conformal predictors we are typically interested in correctness and adaptiveness. The former can be evaluated by looking at the empirical coverage rate, while the latter can be assessed through metrics that address the conditional coverage (Angelopoulos and Bates 2021). To this end, the following metrics have been implemented:

There is also a simple Plots.jl recipe that can be used to inspect the set sizes. In the regression case, the interval width is stratified into discrete bins for this purpose:

bar(mach.model, mach.fitresult, X)
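Both notions can also be computed by hand for intuition. The following is a hand-rolled sketch, not the package’s metric functions; the labels and prediction sets are invented:

```julia
using Statistics

# Toy set-valued predictions for four test points and the true labels.
prediction_sets = [["setosa"], ["setosa", "versicolor"], ["virginica"], String[]]
y_true = ["setosa", "versicolor", "virginica", "setosa"]

# Empirical coverage: fraction of test points whose set contains the truth.
emp_coverage = mean(in.(y_true, prediction_sets))

# Average set size: a crude proxy for adaptiveness/efficiency.
avg_set_size = mean(length.(prediction_sets))
```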

🛠 Contribute

Contributions are welcome! A good place to start is the list of outstanding issues. For more details, see also the Contributor’s Guide. Please follow the SciML ColPrac guide.

🙏 Thanks

To build this package I have read and re-read both Angelopoulos and Bates (2021) and Barber et al. (2021). The Awesome Conformal Prediction repository (Manokhin, n.d.) has also been a fantastic place to get started. Thanks also to @aangelopoulos, @valeman and others for actively contributing to discussions here. Quite a few people have also recently started using and contributing to the package, for which I am very grateful. Finally, many thanks to Anthony Blaom (@ablaom) for many helpful discussions about how to interface this package to MLJ.jl.

🎓 References

Angelopoulos, Anastasios N., and Stephen Bates. 2021. “A Gentle Introduction to Conformal Prediction and Distribution-Free Uncertainty Quantification.” https://arxiv.org/abs/2107.07511.

Barber, Rina Foygel, Emmanuel J. Candès, Aaditya Ramdas, and Ryan J. Tibshirani. 2021. “Predictive Inference with the Jackknife+.” The Annals of Statistics 49 (1): 486–507. https://doi.org/10.1214/20-AOS1965.

Blaom, Anthony D., Franz Kiraly, Thibaut Lienart, Yiannis Simillides, Diego Arenas, and Sebastian J. Vollmer. 2020. “MLJ: A Julia Package for Composable Machine Learning.” Journal of Open Source Software 5 (55): 2704. https://doi.org/10.21105/joss.02704.

Manokhin, Valery. n.d. “Awesome Conformal Prediction.”

+ :logistic

Implemented Evaluation Metrics

To evaluate conformal predictors we are typically interested in correctness and adaptiveness. The former can be evaluated by looking at the empirical coverage rate, while the latter can be assessed through metrics that address the conditional coverage (Angelopoulos and Bates 2021). To this end, the following metrics have been implemented:

There is also a simple Plots.jl recipe that can be used to inspect the set sizes. In the regression case, the interval width is stratified into discrete bins for this purpose:

bar(mach.model, mach.fitresult, X)

🛠 Contribute

Contributions are welcome! A good place to start is the list of outstanding issues. For more details, see also the Contributor’s Guide. Please follow the SciML ColPrac guide.

🙏 Thanks

To build this package I have read and re-read both Angelopoulos and Bates (2021) and Barber et al. (2021). The Awesome Conformal Prediction repository (Manokhin, n.d.) has also been a fantastic place to get started. Thanks also to @aangelopoulos, @valeman and others for actively contributing to discussions here. Quite a few people have also recently started using and contributing to the package for which I am very grateful. Finally, many thanks to Anthony Blaom (@ablaom) for many helpful discussions about how to interface this package to MLJ.jl.

🎓 References

Angelopoulos, Anastasios N., and Stephen Bates. 2021. “A Gentle Introduction to Conformal Prediction and Distribution-Free Uncertainty Quantification.” https://arxiv.org/abs/2107.07511.

Barber, Rina Foygel, Emmanuel J. Candès, Aaditya Ramdas, and Ryan J. Tibshirani. 2021. “Predictive Inference with the Jackknife+.” The Annals of Statistics 49 (1): 486–507. https://doi.org/10.1214/20-AOS1965.

Blaom, Anthony D., Franz Kiraly, Thibaut Lienart, Yiannis Simillides, Diego Arenas, and Sebastian J. Vollmer. 2020. “MLJ: A Julia Package for Composable Machine Learning.” Journal of Open Source Software 5 (55): 2704. https://doi.org/10.21105/joss.02704.

Manokhin, Valery. n.d. “Awesome Conformal Prediction.”

diff --git a/dev/reference/index.html b/dev/reference/index.html index 573ff6b..aa15d33 100644 --- a/dev/reference/index.html +++ b/dev/reference/index.html @@ -1,11 +1,11 @@ -🧐 Reference · ConformalPrediction.jl

Reference

In this reference you will find a detailed overview of the package API.

Reference guides are technical descriptions of the machinery and how to operate it. Reference material is information-oriented.

Diátaxis

In other words, you come here because you want to take a very close look at the code 🧐

Content

    Index

    Public Interface

    ConformalPrediction.conformal_modelMethod
    conformal_model(model::Supervised; method::Union{Nothing, Symbol}=nothing, kwargs...)

    A simple wrapper function that turns a model::Supervised into a conformal model. It accepts an optional keyword argument that can be used to specify the desired method for conformal prediction as well as additional kwargs... specific to the method.

    source
    MLJModelInterface.fitMethod
    MMI.fit(conf_model::AdaptiveInductiveClassifier, verbosity, X, y)

    For the AdaptiveInductiveClassifier nonconformity scores are computed by cumulatively summing the ranked scores of each label in descending order until reaching the true label $Y_i$:

    $S_i^{\text{CAL}} = s(X_i,Y_i) = \sum_{j=1}^k \hat\mu(X_i)_{\pi_j} \ \text{where } \ Y_i=\pi_k, i \in \mathcal{D}_{\text{calibration}}$

    source
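    The cumulative-sum score above can be rendered in a few lines. The following is an illustrative Python sketch of the formula, not the package's (Julia) implementation; `adaptive_score` is a hypothetical name:

```python
# Illustrative only: adaptive nonconformity score = cumulative sum of
# softmax outputs, ranked in descending order, up to the true label.
def adaptive_score(probs, true_label):
    order = sorted(range(len(probs)), key=lambda j: probs[j], reverse=True)
    total = 0.0
    for j in order:
        total += probs[j]
        if j == true_label:
            return total

# True label is ranked second, so the score is 0.6 + 0.3 = 0.9:
print(adaptive_score([0.6, 0.3, 0.1], 1))
```

    Confidently classified points (true label ranked first with high probability) thus receive small scores, poorly ranked ones large scores.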
    MLJModelInterface.fitMethod
    MMI.fit(conf_model::CVMinMaxRegressor, verbosity, X, y)

    For the CVMinMaxRegressor nonconformity scores are computed in the same way as for the CVPlusRegressor. Specifically, we have,

    $S_i^{\text{CV}} = s(X_i, Y_i) = h(\hat\mu_{-\mathcal{D}_{k(i)}}(X_i), Y_i), \ i \in \mathcal{D}_{\text{train}}$

    where $\hat\mu_{-\mathcal{D}_{k(i)}}(X_i)$ denotes the CV prediction for $X_i$. In other words, for each CV fold $k=1,...,K$ and each training instance $i=1,...,n$ the model is trained on all training data excluding the fold containing $i$. The fitted model is then used to predict out-of-sample from $X_i$. The corresponding nonconformity score is then computed by applying a heuristic uncertainty measure $h(\cdot)$ to the fitted value $\hat\mu_{-\mathcal{D}_{k(i)}}(X_i)$ and the true value $Y_i$.

    source
    MLJModelInterface.fitMethod
    MMI.fit(conf_model::CVPlusRegressor, verbosity, X, y)

    For the CVPlusRegressor nonconformity scores are computed through cross-validation (CV) as follows,

    $S_i^{\text{CV}} = s(X_i, Y_i) = h(\hat\mu_{-\mathcal{D}_{k(i)}}(X_i), Y_i), \ i \in \mathcal{D}_{\text{train}}$

    where $\hat\mu_{-\mathcal{D}_{k(i)}}(X_i)$ denotes the CV prediction for $X_i$. In other words, for each CV fold $k=1,...,K$ and each training instance $i=1,...,n$ the model is trained on all training data excluding the fold containing $i$. The fitted model is then used to predict out-of-sample from $X_i$. The corresponding nonconformity score is then computed by applying a heuristic uncertainty measure $h(\cdot)$ to the fitted value $\hat\mu_{-\mathcal{D}_{k(i)}}(X_i)$ and the true value $Y_i$.

    source
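    The fold-wise score computation can be sketched generically. This Python snippet is illustrative only (the package is Julia); the mean predictor standing in for a fitted model and the names `cv_scores`, `fit_mean`, `h_abs` are assumptions for the example:

```python
# Illustrative only: fold-wise nonconformity scores for CV+.
# For each fold, a model is fit on the remaining folds and the
# held-out points are scored with a heuristic h (absolute residual).
def cv_scores(X, y, K, fit, h):
    n = len(y)
    folds = [list(range(k, n, K)) for k in range(K)]
    scores = [None] * n
    for fold in folds:
        train = [i for i in range(n) if i not in fold]
        model = fit([X[i] for i in train], [y[i] for i in train])
        for i in fold:
            scores[i] = h(model(X[i]), y[i])
    return scores

# Toy stand-in for a fitted model: always predict the training mean.
fit_mean = lambda X, y: (lambda x, m=sum(y) / len(y): m)
h_abs = lambda pred, y: abs(pred - y)

y = [1.0, 2.0, 3.0, 4.0]
print(cv_scores(y, y, 2, fit_mean, h_abs))  # → [2.0, 0.0, 0.0, 2.0]
```

    Each point is scored by a model that never saw its own fold, which is what makes the scores (approximately) exchangeable with a fresh test point.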
    MLJModelInterface.fitMethod
    MMI.fit(conf_model::ConformalQuantileRegressor, verbosity, X, y)

    For the ConformalQuantileRegressor nonconformity scores are computed as follows:

    $S_i^{\text{CAL}} = s(X_i, Y_i) = h(\hat\mu_{\alpha_{lo}}(X_i), \hat\mu_{\alpha_{hi}}(X_i) ,Y_i), \ i \in \mathcal{D}_{\text{calibration}}$

    A typical choice for the heuristic function is $h(\hat\mu_{\alpha_{lo}}(X_i), \hat\mu_{\alpha_{hi}}(X_i), Y_i) = \max\{\hat\mu_{\alpha_{lo}}(X_i)-Y_i, Y_i-\hat\mu_{\alpha_{hi}}(X_i)\}$ where $\hat\mu$ denotes the model fitted on training data $\mathcal{D}_{\text{train}}$ and $\alpha_{lo}, \alpha_{hi}$ denote the lower and upper quantile levels.

    source
    MLJModelInterface.fitMethod
    MMI.fit(conf_model::JackknifeMinMaxRegressor, verbosity, X, y)

    For the JackknifeMinMaxRegressor nonconformity scores are computed in the same way as for the JackknifeRegressor. Specifically, we have,

    $S_i^{\text{LOO}} = s(X_i, Y_i) = h(\hat\mu_{-i}(X_i), Y_i), \ i \in \mathcal{D}_{\text{train}}$

    where $\hat\mu_{-i}(X_i)$ denotes the leave-one-out prediction for $X_i$. In other words, for each training instance $i=1,...,n$ the model is trained on all training data excluding $i$. The fitted model is then used to predict out-of-sample from $X_i$. The corresponding nonconformity score is then computed by applying a heuristic uncertainty measure $h(\cdot)$ to the fitted value $\hat\mu_{-i}(X_i)$ and the true value $Y_i$.

    source
    MLJModelInterface.fitMethod
    MMI.fit(conf_model::JackknifePlusMinMaxAbRegressor, verbosity, X, y)

    For the JackknifePlusAbMinMaxRegressor nonconformity scores are computed as,

    $S_i^{\text{J+MinMax}} = s(X_i, Y_i) = h(agg(\hat\mu_{B_{K(-i)}}(X_i)), Y_i), \ i \in \mathcal{D}_{\text{train}}$

    where $agg(\hat\mu_{B_{K(-i)}}(X_i))$ denotes the aggregate prediction, typically the mean or median, for each $X_i$ (with $K_{-i}$ the bootstrap samples not containing $X_i$). In other words, $B$ models are trained on bootstrap samples; the fitted models are then used to create an aggregated prediction for the out-of-sample $X_i$. The corresponding nonconformity score is then computed by applying a heuristic uncertainty measure $h(\cdot)$ to the fitted value $agg(\hat\mu_{B_{K(-i)}}(X_i))$ and the true value $Y_i$.

    source
    MLJModelInterface.fitMethod
    MMI.fit(conf_model::JackknifePlusAbRegressor, verbosity, X, y)

    For the JackknifePlusAbRegressor nonconformity scores are computed as

    $$ S_i^{\text{J+ab}} = s(X_i, Y_i) = h(agg(\hat\mu_{B_{K(-i)}}(X_i)), Y_i), \ i \in \mathcal{D}_{\text{train}} $$

    where $agg(\hat\mu_{B_{K(-i)}}(X_i))$ denotes the aggregate prediction, typically the mean or median, for each $X_i$ (with $K_{-i}$ the bootstrap samples not containing $X_i$). In other words, $B$ models are trained on bootstrap samples; the fitted models are then used to create an aggregated prediction for the out-of-sample $X_i$. The corresponding nonconformity score is then computed by applying a heuristic uncertainty measure $h(\cdot)$ to the fitted value $agg(\hat\mu_{B_{K(-i)}}(X_i))$ and the true value $Y_i$.

    source
    MLJModelInterface.fitMethod
    MMI.fit(conf_model::JackknifePlusRegressor, verbosity, X, y)

    For the JackknifePlusRegressor nonconformity scores are computed in the same way as for the JackknifeRegressor. Specifically, we have,

    $S_i^{\text{LOO}} = s(X_i, Y_i) = h(\hat\mu_{-i}(X_i), Y_i), \ i \in \mathcal{D}_{\text{train}}$

    where $\hat\mu_{-i}(X_i)$ denotes the leave-one-out prediction for $X_i$. In other words, for each training instance $i=1,...,n$ the model is trained on all training data excluding $i$. The fitted model is then used to predict out-of-sample from $X_i$. The corresponding nonconformity score is then computed by applying a heuristic uncertainty measure $h(\cdot)$ to the fitted value $\hat\mu_{-i}(X_i)$ and the true value $Y_i$.

    source
    MLJModelInterface.fitMethod
    MMI.fit(conf_model::JackknifeRegressor, verbosity, X, y)

    For the JackknifeRegressor nonconformity scores are computed through a leave-one-out (LOO) procedure as follows,

    $S_i^{\text{LOO}} = s(X_i, Y_i) = h(\hat\mu_{-i}(X_i), Y_i), \ i \in \mathcal{D}_{\text{train}}$

    where $\hat\mu_{-i}(X_i)$ denotes the leave-one-out prediction for $X_i$. In other words, for each training instance $i=1,...,n$ the model is trained on all training data excluding $i$. The fitted model is then used to predict out-of-sample from $X_i$. The corresponding nonconformity score is then computed by applying a heuristic uncertainty measure $h(\cdot)$ to the fitted value $\hat\mu_{-i}(X_i)$ and the true value $Y_i$.

    source
    MLJModelInterface.fitMethod
    MMI.fit(conf_model::NaiveClassifier, verbosity, X, y)

    For the NaiveClassifier nonconformity scores are computed in-sample as follows:

    $S_i^{\text{IS}} = s(X_i, Y_i) = h(\hat\mu(X_i), Y_i), \ i \in \mathcal{D}_{\text{calibration}}$

    A typical choice for the heuristic function is $h(\hat\mu(X_i), Y_i)=1-\hat\mu(X_i)_{Y_i}$ where $\hat\mu(X_i)_{Y_i}$ denotes the softmax output of the true class and $\hat\mu$ denotes the model fitted on training data $\mathcal{D}_{\text{train}}$.

    source
    MLJModelInterface.fitMethod
    MMI.fit(conf_model::NaiveRegressor, verbosity, X, y)

    For the NaiveRegressor nonconformity scores are computed in-sample as follows:

    $S_i^{\text{IS}} = s(X_i, Y_i) = h(\hat\mu(X_i), Y_i), \ i \in \mathcal{D}_{\text{train}}$

    A typical choice for the heuristic function is $h(\hat\mu(X_i),Y_i)=|Y_i-\hat\mu(X_i)|$ where $\hat\mu$ denotes the model fitted on training data $\mathcal{D}_{\text{train}}$.

    source
    MLJModelInterface.fitMethod
    MMI.fit(conf_model::SimpleInductiveClassifier, verbosity, X, y)

    For the SimpleInductiveClassifier nonconformity scores are computed as follows:

    $S_i^{\text{CAL}} = s(X_i, Y_i) = h(\hat\mu(X_i), Y_i), \ i \in \mathcal{D}_{\text{calibration}}$

    A typical choice for the heuristic function is $h(\hat\mu(X_i), Y_i)=1-\hat\mu(X_i)_{Y_i}$ where $\hat\mu(X_i)_{Y_i}$ denotes the softmax output of the true class and $\hat\mu$ denotes the model fitted on training data $\mathcal{D}_{\text{train}}$. The simple approach only takes the softmax probability of the true label into account.

    source
    MLJModelInterface.fitMethod
    MMI.fit(conf_model::SimpleInductiveRegressor, verbosity, X, y)

    For the SimpleInductiveRegressor nonconformity scores are computed as follows:

    $S_i^{\text{CAL}} = s(X_i, Y_i) = h(\hat\mu(X_i), Y_i), \ i \in \mathcal{D}_{\text{calibration}}$

    A typical choice for the heuristic function is $h(\hat\mu(X_i),Y_i)=|Y_i-\hat\mu(X_i)|$ where $\hat\mu$ denotes the model fitted on training data $\mathcal{D}_{\text{train}}$.

    source
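    For the interval construction that accompanies these calibration scores, the common recipe is prediction ± a finite-sample-corrected quantile of the scores. The Python sketch below is illustrative (the package is Julia) and assumes the standard $\lceil(n+1)(1-\alpha)\rceil$ rank convention:

```python
import math

# Illustrative only: split-conformal interval from calibration scores
# S_i = |Y_i - mu(X_i)|, using the ceil((n+1)(1-alpha)) quantile rank.
def split_conformal_interval(cal_scores, pred, alpha):
    n = len(cal_scores)
    k = math.ceil((n + 1) * (1 - alpha))      # finite-sample-corrected rank
    qhat = sorted(cal_scores)[min(k, n) - 1]  # empirical quantile of scores
    return pred - qhat, pred + qhat

cal_scores = [0.1, 0.5, 0.2, 0.4, 0.3, 0.6, 0.25, 0.45, 0.15]
print(split_conformal_interval(cal_scores, 2.0, 0.1))  # ≈ (1.4, 2.6)
```

    Note the correction factor $(n+1)/n$ hidden in the rank: with only nine calibration points and α = 0.1, the corrected quantile is the largest score.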
    MLJModelInterface.fitMethod
    MMI.fit(conf_model::TimeSeriesRegressorEnsembleBatch, verbosity, X, y)

    For the TimeSeriesRegressorEnsembleBatch nonconformity scores are computed as

    $$ S_i^{\text{J+ab}} = s(X_i, Y_i) = h(agg(\hat\mu_{B_{K(-i)}}(X_i)), Y_i), \ i \in \mathcal{D}_{\text{train}} $$

    where $agg(\hat\mu_{B_{K(-i)}}(X_i))$ denotes the aggregate prediction, typically the mean or median, for each $X_i$ (with $K_{-i}$ the bootstrap samples not containing $X_i$). In other words, $B$ models are trained on bootstrap samples; the fitted models are then used to create an aggregated prediction for the out-of-sample $X_i$. The corresponding nonconformity score is then computed by applying a heuristic uncertainty measure $h(\cdot)$ to the fitted value $agg(\hat\mu_{B_{K(-i)}}(X_i))$ and the true value $Y_i$.

    source
    MLJModelInterface.predictMethod
    MMI.predict(conf_model::AdaptiveInductiveClassifier, fitresult, Xnew)

    For the AdaptiveInductiveClassifier prediction sets are computed as follows,

    $\hat{C}_{n,\alpha}(X_{n+1}) = \left\{y: s(X_{n+1},y) \le \hat{q}_{n, \alpha}^{+} \{S_i^{\text{CAL}}\} \right\}, i \in \mathcal{D}_{\text{calibration}}$

    where $\mathcal{D}_{\text{calibration}}$ denotes the designated calibration data.

    source
    MLJModelInterface.predictMethod
    MMI.predict(conf_model::CVMinMaxRegressor, fitresult, Xnew)

    For the CVMinMaxRegressor prediction intervals are computed as follows,

    $\hat{C}_{n,\alpha}(X_{n+1}) = \left[ \min_{i=1,...,n} \hat\mu_{-\mathcal{D}_{k(i)}}(X_{n+1}) - \hat{q}_{n, \alpha}^{+} \{S_i^{\text{CV}} \}, \max_{i=1,...,n} \hat\mu_{-\mathcal{D}_{k(i)}}(X_{n+1}) + \hat{q}_{n, \alpha}^{+} \{ S_i^{\text{CV}}\} \right] , i \in \mathcal{D}_{\text{train}}$

    where $\hat\mu_{-\mathcal{D}_{k(i)}}$ denotes the model fitted on training data with the fold $\mathcal{D}_{k(i)}$ containing the $i$th point removed.

    source
    MLJModelInterface.predictMethod
    MMI.predict(conf_model::CVPlusRegressor, fitresult, Xnew)

    For the CVPlusRegressor prediction intervals are computed in much the same way as for the JackknifePlusRegressor. Specifically, we have,

    $\hat{C}_{n,\alpha}(X_{n+1}) = \left[ \hat{q}_{n, \alpha}^{-} \{\hat\mu_{-\mathcal{D}_{k(i)}}(X_{n+1}) - S_i^{\text{CV}} \}, \hat{q}_{n, \alpha}^{+} \{\hat\mu_{-\mathcal{D}_{k(i)}}(X_{n+1}) + S_i^{\text{CV}}\} \right] , \ i \in \mathcal{D}_{\text{train}}$

    where $\hat\mu_{-\mathcal{D}_{k(i)}}$ denotes the model fitted on training data with the fold $\mathcal{D}_{k(i)}$ containing the $i$th point removed.

    The JackknifePlusRegressor is a special case of the CVPlusRegressor for which $K=n$.

    source
    MLJModelInterface.predictMethod
    MMI.predict(conf_model::ConformalQuantileRegressor, fitresult, Xnew)

    For the ConformalQuantileRegressor prediction intervals are computed as follows,

    $\hat{C}_{n,\alpha}(X_{n+1}) = [\hat\mu_{\alpha_{lo}}(X_{n+1}) - \hat{q}_{n, \alpha} \{S_i^{\text{CAL}} \}, \hat\mu_{\alpha_{hi}}(X_{n+1}) + \hat{q}_{n, \alpha} \{S_i^{\text{CAL}} \}], \ i \in \mathcal{D}_{\text{calibration}}$

    where $\mathcal{D}_{\text{calibration}}$ denotes the designated calibration data.

    source
    MLJModelInterface.predictMethod
    MMI.predict(conf_model::JackknifeMinMaxRegressor, fitresult, Xnew)

    For the JackknifeMinMaxRegressor prediction intervals are computed as follows,

    $\hat{C}_{n,\alpha}(X_{n+1}) = \left[ \min_{i=1,...,n} \hat\mu_{-i}(X_{n+1}) - \hat{q}_{n, \alpha}^{+} \{S_i^{\text{LOO}} \}, \max_{i=1,...,n} \hat\mu_{-i}(X_{n+1}) + \hat{q}_{n, \alpha}^{+} \{S_i^{\text{LOO}}\} \right] , i \in \mathcal{D}_{\text{train}}$

    where $\hat\mu_{-i}$ denotes the model fitted on training data with the $i$th point removed. The jackknife-minmax procedure is more conservative than the JackknifePlusRegressor.

    source
    MLJModelInterface.predictMethod
    MMI.predict(conf_model::JackknifePlusAbMinMaxRegressor, fitresult, Xnew)

    For the JackknifePlusAbMinMaxRegressor prediction intervals are computed as follows,

    $\hat{C}_{n,\alpha}^{J+MinMax}(X_{n+1}) = \left[ \min_{i=1,...,n} \hat\mu_{-i}(X_{n+1}) - \hat{q}_{n, \alpha}^{+} \{S_i^{\text{J+MinMax}} \}, \max_{i=1,...,n} \hat\mu_{-i}(X_{n+1}) + \hat{q}_{n, \alpha}^{+} \{S_i^{\text{J+MinMax}}\} \right] , i \in \mathcal{D}_{\text{train}}$

    where $\hat\mu_{-i}$ denotes the model fitted on training data with the $i$th point removed. The jackknife+ab-minmax procedure is more conservative than the JackknifePlusAbRegressor.

    source
    MLJModelInterface.predictMethod
    MMI.predict(conf_model::JackknifePlusAbRegressor, fitresult, Xnew)

    For the JackknifePlusAbRegressor prediction intervals are computed as follows,

    $\hat{C}_{n,\alpha, B}^{J+ab}(X_{n+1}) = \left[ \hat{q}_{n, \alpha}^{-} \{\hat\mu_{agg(-i)}(X_{n+1}) - S_i^{\text{J+ab}} \}, \hat{q}_{n, \alpha}^{+} \{\hat\mu_{agg(-i)}(X_{n+1}) + S_i^{\text{J+ab}}\} \right] , i \in \mathcal{D}_{\text{train}}$

    where $\hat\mu_{agg(-i)}$ denotes the aggregate of the models $\hat\mu_{1}, ..., \hat\mu_{B}$ fitted on those bootstrap samples that do not include the $i$th data point. The jackknife$+$ procedure is more stable than the JackknifeRegressor.

    source
    MLJModelInterface.predictMethod
    MMI.predict(conf_model::JackknifePlusRegressor, fitresult, Xnew)

    For the JackknifePlusRegressor prediction intervals are computed as follows,

    $\hat{C}_{n,\alpha}(X_{n+1}) = \left[ \hat{q}_{n, \alpha}^{-} \{\hat\mu_{-i}(X_{n+1}) - S_i^{\text{LOO}} \}, \hat{q}_{n, \alpha}^{+} \{\hat\mu_{-i}(X_{n+1}) + S_i^{\text{LOO}}\} \right] , i \in \mathcal{D}_{\text{train}}$

    where $\hat\mu_{-i}$ denotes the model fitted on training data with the $i$th point removed. The jackknife$+$ procedure is more stable than the JackknifeRegressor.

    source
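    The interval above takes a lower quantile of the endpoints $\hat\mu_{-i}(X_{n+1}) - S_i$ and an upper quantile of $\hat\mu_{-i}(X_{n+1}) + S_i$. A Python sketch under the usual $\lceil(n+1)(1-\alpha)\rceil$ rank convention (illustrative only; quantile conventions vary across implementations, and the names used are hypothetical):

```python
import math

# Illustrative only: jackknife+ interval from leave-one-out predictions
# mu_{-i}(x_new) and leave-one-out nonconformity scores S_i.
def jackknife_plus_interval(loo_preds, loo_scores, alpha):
    n = len(loo_preds)
    lowers = sorted(m - s for m, s in zip(loo_preds, loo_scores))
    uppers = sorted(m + s for m, s in zip(loo_preds, loo_scores))
    k = math.ceil((n + 1) * (1 - alpha))      # finite-sample-corrected rank
    return lowers[max(n - k, 0)], uppers[min(k, n) - 1]

loo_preds = [1.0, 1.2, 0.8, 1.1]
loo_scores = [0.2, 0.1, 0.3, 0.2]
print(jackknife_plus_interval(loo_preds, loo_scores, 0.25))  # ≈ (0.5, 1.3)
```

    Unlike the plain jackknife, the interval need not be centered on a single point prediction: both endpoints come from the spread of the leave-one-out models.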
    MLJModelInterface.predictMethod
    MMI.predict(conf_model::JackknifeRegressor, fitresult, Xnew)

    For the JackknifeRegressor prediction intervals are computed as follows,

    $\hat{C}_{n,\alpha}(X_{n+1}) = \hat\mu(X_{n+1}) \pm \hat{q}_{n, \alpha}^{+} \{S_i^{\text{LOO}}\}, \ i \in \mathcal{D}_{\text{train}}$

    where $S_i^{\text{LOO}}$ denotes the nonconformity score generated as explained in fit(conf_model::JackknifeRegressor, verbosity, X, y). The jackknife procedure addresses the overfitting issue associated with the NaiveRegressor.

    source
    MLJModelInterface.predictMethod
    MMI.predict(conf_model::NaiveClassifier, fitresult, Xnew)

    For the NaiveClassifier prediction sets are computed as follows:

    $\hat{C}_{n,\alpha}(X_{n+1}) = \left\{y: s(X_{n+1},y) \le \hat{q}_{n, \alpha}^{+} \{S_i^{\text{IS}} \} \right\}, \ i \in \mathcal{D}_{\text{train}}$

    The naive approach typically produces prediction regions that undercover due to overfitting.

    source
    MLJModelInterface.predictMethod
    MMI.predict(conf_model::NaiveRegressor, fitresult, Xnew)

    For the NaiveRegressor prediction intervals are computed as follows:

    $\hat{C}_{n,\alpha}(X_{n+1}) = \hat\mu(X_{n+1}) \pm \hat{q}_{n, \alpha}^{+} \{S_i^{\text{IS}} \}, \ i \in \mathcal{D}_{\text{train}}$

    The naive approach typically produces prediction regions that undercover due to overfitting.

    source
    MLJModelInterface.predictMethod
    MMI.predict(conf_model::SimpleInductiveClassifier, fitresult, Xnew)

    For the SimpleInductiveClassifier prediction sets are computed as follows,

    $\hat{C}_{n,\alpha}(X_{n+1}) = \left\{y: s(X_{n+1},y) \le \hat{q}_{n, \alpha}^{+} \{S_i^{\text{CAL}}\} \right\}, \ i \in \mathcal{D}_{\text{calibration}}$

    where $\mathcal{D}_{\text{calibration}}$ denotes the designated calibration data.

    source
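    The set rule above amounts to keeping every label whose score $1-\hat\mu(X_{n+1})_y$ stays below the calibrated threshold. An illustrative Python sketch (the package is Julia; `prediction_set` and `qhat` are hypothetical names):

```python
# Illustrative only: prediction set {y : 1 - p_y <= qhat} given the
# softmax outputs for one new input and a calibrated quantile qhat.
def prediction_set(probs, qhat):
    return {y for y, p in enumerate(probs) if 1.0 - p <= qhat}

print(prediction_set([0.7, 0.2, 0.1], 0.5))    # tight threshold → {0}
print(prediction_set([0.4, 0.35, 0.25], 0.7))  # looser threshold → {0, 1}
```

    Ambiguous inputs or large thresholds yield larger sets, which is exactly how the method signals uncertainty.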
    MLJModelInterface.predictMethod
    MMI.predict(conf_model::SimpleInductiveRegressor, fitresult, Xnew)

    For the SimpleInductiveRegressor prediction intervals are computed as follows,

    $\hat{C}_{n,\alpha}(X_{n+1}) = \hat\mu(X_{n+1}) \pm \hat{q}_{n, \alpha}^{+} \{S_i^{\text{CAL}} \}, \ i \in \mathcal{D}_{\text{calibration}}$

    where $\mathcal{D}_{\text{calibration}}$ denotes the designated calibration data.

    source
    MLJModelInterface.predictMethod
    MMI.predict(conf_model::TimeSeriesRegressorEnsembleBatch, fitresult, Xnew)

    For the TimeSeriesRegressorEnsembleBatch prediction intervals are computed as follows,

    $\hat{C}_{n,\alpha, B}^{J+ab}(X_{n+1}) = \left[ \hat{q}_{n, \alpha}^{-} \{\hat\mu_{agg(-i)}(X_{n+1}) - S_i^{\text{J+ab}} \}, \hat{q}_{n, \alpha}^{+} \{\hat\mu_{agg(-i)}(X_{n+1}) + S_i^{\text{J+ab}}\} \right] , i \in \mathcal{D}_{\text{train}}$

    where $\hat\mu_{agg(-i)}$ denotes the aggregate of the models $\hat\mu_{1}, ..., \hat\mu_{B}$ fitted on those bootstrap samples that do not include the $i$th data point. The jackknife$+$ procedure is more stable than the JackknifeRegressor.

    source

    Internal functions

    ConformalPrediction.blockbootstrapMethod
    blockbootstrap(time_series_data, block_size)
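    Block bootstrapping resamples a series in contiguous blocks so that short-range temporal dependence is preserved within each block. A minimal Python sketch of the idea (illustrative only, not the package's implementation; the seeded RNG is an assumption made for reproducibility):

```python
import random

# Illustrative only: resample a time series in contiguous blocks of
# length block_size until the original length is reached.
def block_bootstrap(series, block_size, rng=random.Random(0)):
    n = len(series)
    out = []
    while len(out) < n:
        start = rng.randrange(0, n - block_size + 1)  # random block start
        out.extend(series[start:start + block_size])  # copy one block
    return out[:n]

sample = block_bootstrap(list(range(10)), block_size=3)
print(len(sample))  # → 10
```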
    +🧐 Reference · ConformalPrediction.jl

    Reference

    In this reference you will find a detailed overview of the package API.

    Reference guides are technical descriptions of the machinery and how to operate it. Reference material is information-oriented.

    Diátaxis

    In other words, you come here because you want to take a very close look at the code 🧐

    Content

      Index

      Public Interface

      ConformalPrediction.conformal_modelMethod
      conformal_model(model::Supervised; method::Union{Nothing, Symbol}=nothing, kwargs...)

      A simple wrapper function that turns a model::Supervised into a conformal model. It accepts an optional keyword argument that can be used to specify the desired method for conformal prediction as well as additional kwargs... specific to the method.

      source
      MLJModelInterface.fitMethod
      MMI.fit(conf_model::AdaptiveInductiveClassifier, verbosity, X, y)

      For the AdaptiveInductiveClassifier nonconformity scores are computed by cumulatively summing the ranked scores of each label in descending order until reaching the true label $Y_i$:

      $S_i^{\text{CAL}} = s(X_i,Y_i) = \sum_{j=1}^k \hat\mu(X_i)_{\pi_j} \ \text{where } \ Y_i=\pi_k, i \in \mathcal{D}_{\text{calibration}}$

      source
      MLJModelInterface.fitMethod
      MMI.fit(conf_model::CVMinMaxRegressor, verbosity, X, y)

      For the CVMinMaxRegressor nonconformity scores are computed in the same way as for the CVPlusRegressor. Specifically, we have,

      $S_i^{\text{CV}} = s(X_i, Y_i) = h(\hat\mu_{-\mathcal{D}_{k(i)}}(X_i), Y_i), \ i \in \mathcal{D}_{\text{train}}$

      where $\hat\mu_{-\mathcal{D}_{k(i)}}(X_i)$ denotes the CV prediction for $X_i$. In other words, for each CV fold $k=1,...,K$ and each training instance $i=1,...,n$ the model is trained on all training data excluding the fold containing $i$. The fitted model is then used to predict out-of-sample from $X_i$. The corresponding nonconformity score is then computed by applying a heuristic uncertainty measure $h(\cdot)$ to the fitted value $\hat\mu_{-\mathcal{D}_{k(i)}}(X_i)$ and the true value $Y_i$.

      source
      MLJModelInterface.fitMethod
      MMI.fit(conf_model::CVPlusRegressor, verbosity, X, y)

      For the CVPlusRegressor nonconformity scores are computed through cross-validation (CV) as follows,

      $S_i^{\text{CV}} = s(X_i, Y_i) = h(\hat\mu_{-\mathcal{D}_{k(i)}}(X_i), Y_i), \ i \in \mathcal{D}_{\text{train}}$

      where $\hat\mu_{-\mathcal{D}_{k(i)}}(X_i)$ denotes the CV prediction for $X_i$. In other words, for each CV fold $k=1,...,K$ and each training instance $i=1,...,n$ the model is trained on all training data excluding the fold containing $i$. The fitted model is then used to predict out-of-sample from $X_i$. The corresponding nonconformity score is then computed by applying a heuristic uncertainty measure $h(\cdot)$ to the fitted value $\hat\mu_{-\mathcal{D}_{k(i)}}(X_i)$ and the true value $Y_i$.

      source
      MLJModelInterface.fitMethod
      MMI.fit(conf_model::ConformalQuantileRegressor, verbosity, X, y)

      For the ConformalQuantileRegressor nonconformity scores are computed as follows:

      $S_i^{\text{CAL}} = s(X_i, Y_i) = h(\hat\mu_{\alpha_{lo}}(X_i), \hat\mu_{\alpha_{hi}}(X_i) ,Y_i), \ i \in \mathcal{D}_{\text{calibration}}$

      A typical choice for the heuristic function is $h(\hat\mu_{\alpha_{lo}}(X_i), \hat\mu_{\alpha_{hi}}(X_i), Y_i) = \max\{\hat\mu_{\alpha_{lo}}(X_i)-Y_i, Y_i-\hat\mu_{\alpha_{hi}}(X_i)\}$ where $\hat\mu$ denotes the model fitted on training data $\mathcal{D}_{\text{train}}$ and $\alpha_{lo}, \alpha_{hi}$ denote the lower and upper quantile levels.

      source
      MLJModelInterface.fitMethod
      MMI.fit(conf_model::JackknifeMinMaxRegressor, verbosity, X, y)

      For the JackknifeMinMaxRegressor nonconformity scores are computed in the same way as for the JackknifeRegressor. Specifically, we have,

      $S_i^{\text{LOO}} = s(X_i, Y_i) = h(\hat\mu_{-i}(X_i), Y_i), \ i \in \mathcal{D}_{\text{train}}$

      where $\hat\mu_{-i}(X_i)$ denotes the leave-one-out prediction for $X_i$. In other words, for each training instance $i=1,...,n$ the model is trained on all training data excluding $i$. The fitted model is then used to predict out-of-sample from $X_i$. The corresponding nonconformity score is then computed by applying a heuristic uncertainty measure $h(\cdot)$ to the fitted value $\hat\mu_{-i}(X_i)$ and the true value $Y_i$.

      source
      MLJModelInterface.fitMethod
      MMI.fit(conf_model::JackknifePlusMinMaxAbRegressor, verbosity, X, y)

      For the JackknifePlusAbMinMaxRegressor nonconformity scores are computed as,

      $S_i^{\text{J+MinMax}} = s(X_i, Y_i) = h(agg(\hat\mu_{B_{K(-i)}}(X_i)), Y_i), \ i \in \mathcal{D}_{\text{train}}$

      where $agg(\hat\mu_{B_{K(-i)}}(X_i))$ denotes the aggregate prediction, typically the mean or median, for each $X_i$ (with $K_{-i}$ the bootstrap samples not containing $X_i$). In other words, $B$ models are trained on bootstrap samples; the fitted models are then used to create an aggregated prediction for the out-of-sample $X_i$. The corresponding nonconformity score is then computed by applying a heuristic uncertainty measure $h(\cdot)$ to the fitted value $agg(\hat\mu_{B_{K(-i)}}(X_i))$ and the true value $Y_i$.

      source
      MLJModelInterface.fitMethod
      MMI.fit(conf_model::JackknifePlusAbRegressor, verbosity, X, y)

      For the JackknifePlusAbRegressor nonconformity scores are computed as

      $$ S_i^{\text{J+ab}} = s(X_i, Y_i) = h(agg(\hat\mu_{B_{K(-i)}}(X_i)), Y_i), \ i \in \mathcal{D}_{\text{train}} $$

      where $agg(\hat\mu_{B_{K(-i)}}(X_i))$ denotes the aggregate prediction, typically the mean or median, for each $X_i$ (with $K_{-i}$ the bootstrap samples not containing $X_i$). In other words, $B$ models are trained on bootstrap samples; the fitted models are then used to create an aggregated prediction for the out-of-sample $X_i$. The corresponding nonconformity score is then computed by applying a heuristic uncertainty measure $h(\cdot)$ to the fitted value $agg(\hat\mu_{B_{K(-i)}}(X_i))$ and the true value $Y_i$.

      source
      MLJModelInterface.fitMethod
      MMI.fit(conf_model::JackknifePlusRegressor, verbosity, X, y)

      For the JackknifePlusRegressor nonconformity scores are computed in the same way as for the JackknifeRegressor. Specifically, we have,

      $S_i^{\text{LOO}} = s(X_i, Y_i) = h(\hat\mu_{-i}(X_i), Y_i), \ i \in \mathcal{D}_{\text{train}}$

      where $\hat\mu_{-i}(X_i)$ denotes the leave-one-out prediction for $X_i$. In other words, for each training instance $i=1,...,n$ the model is trained on all training data excluding $i$. The fitted model is then used to predict out-of-sample from $X_i$. The corresponding nonconformity score is then computed by applying a heuristic uncertainty measure $h(\cdot)$ to the fitted value $\hat\mu_{-i}(X_i)$ and the true value $Y_i$.

      source
      MLJModelInterface.fitMethod
      MMI.fit(conf_model::JackknifeRegressor, verbosity, X, y)

      For the JackknifeRegressor nonconformity scores are computed through a leave-one-out (LOO) procedure as follows,

      $S_i^{\text{LOO}} = s(X_i, Y_i) = h(\hat\mu_{-i}(X_i), Y_i), \ i \in \mathcal{D}_{\text{train}}$

      where $\hat\mu_{-i}(X_i)$ denotes the leave-one-out prediction for $X_i$. In other words, for each training instance $i=1,...,n$ the model is trained on all training data excluding $i$. The fitted model is then used to predict out-of-sample from $X_i$. The corresponding nonconformity score is then computed by applying a heuristic uncertainty measure $h(\cdot)$ to the fitted value $\hat\mu_{-i}(X_i)$ and the true value $Y_i$.

      source
      MLJModelInterface.fitMethod
      MMI.fit(conf_model::NaiveClassifier, verbosity, X, y)

      For the NaiveClassifier nonconformity scores are computed in-sample as follows:

      $S_i^{\text{IS}} = s(X_i, Y_i) = h(\hat\mu(X_i), Y_i), \ i \in \mathcal{D}_{\text{calibration}}$

      A typical choice for the heuristic function is $h(\hat\mu(X_i), Y_i)=1-\hat\mu(X_i)_{Y_i}$ where $\hat\mu(X_i)_{Y_i}$ denotes the softmax output of the true class and $\hat\mu$ denotes the model fitted on training data $\mathcal{D}_{\text{train}}$.

      source
      MLJModelInterface.fitMethod
      MMI.fit(conf_model::NaiveRegressor, verbosity, X, y)

      For the NaiveRegressor nonconformity scores are computed in-sample as follows:

      $S_i^{\text{IS}} = s(X_i, Y_i) = h(\hat\mu(X_i), Y_i), \ i \in \mathcal{D}_{\text{train}}$

      A typical choice for the heuristic function is $h(\hat\mu(X_i),Y_i)=|Y_i-\hat\mu(X_i)|$ where $\hat\mu$ denotes the model fitted on training data $\mathcal{D}_{\text{train}}$.

      source
      MLJModelInterface.fitMethod
      MMI.fit(conf_model::SimpleInductiveClassifier, verbosity, X, y)

      For the SimpleInductiveClassifier nonconformity scores are computed as follows:

      $S_i^{\text{CAL}} = s(X_i, Y_i) = h(\hat\mu(X_i), Y_i), \ i \in \mathcal{D}_{\text{calibration}}$

      A typical choice for the heuristic function is $h(\hat\mu(X_i), Y_i)=1-\hat\mu(X_i)_{Y_i}$ where $\hat\mu(X_i)_{Y_i}$ denotes the softmax output of the true class and $\hat\mu$ denotes the model fitted on training data $\mathcal{D}_{\text{train}}$. The simple approach only takes the softmax probability of the true label into account.

      source
      MLJModelInterface.fitMethod
      MMI.fit(conf_model::SimpleInductiveRegressor, verbosity, X, y)

      For the SimpleInductiveRegressor nonconformity scores are computed as follows:

      $S_i^{\text{CAL}} = s(X_i, Y_i) = h(\hat\mu(X_i), Y_i), \ i \in \mathcal{D}_{\text{calibration}}$

      A typical choice for the heuristic function is $h(\hat\mu(X_i),Y_i)=|Y_i-\hat\mu(X_i)|$ where $\hat\mu$ denotes the model fitted on training data $\mathcal{D}_{\text{train}}$.

      source
      MLJModelInterface.fitMethod
      MMI.fit(conf_model::TimeSeriesRegressorEnsembleBatch, verbosity, X, y)

      For the TimeSeriesRegressorEnsembleBatch nonconformity scores are computed as

      $$ S_i^{\text{J+ab}} = s(X_i, Y_i) = h(agg(\hat\mu_{B_{K(-i)}}(X_i)), Y_i), \ i \in \mathcal{D}_{\text{train}} $$

      where $agg(\hat\mu_{B_{K(-i)}}(X_i))$ denotes the aggregate prediction, typically the mean or median, for each $X_i$ (with $K_{-i}$ the bootstrap samples not containing $X_i$). In other words, $B$ models are trained on bootstrap samples; the fitted models are then used to create an aggregated prediction for the out-of-sample $X_i$. The corresponding nonconformity score is then computed by applying a heuristic uncertainty measure $h(\cdot)$ to the fitted value $agg(\hat\mu_{B_{K(-i)}}(X_i))$ and the true value $Y_i$.

      source
      MLJModelInterface.predictMethod
      MMI.predict(conf_model::AdaptiveInductiveClassifier, fitresult, Xnew)

      For the AdaptiveInductiveClassifier prediction sets are computed as follows,

      $\hat{C}_{n,\alpha}(X_{n+1}) = \left\{y: s(X_{n+1},y) \le \hat{q}_{n, \alpha}^{+} \{S_i^{\text{CAL}}\} \right\}, i \in \mathcal{D}_{\text{calibration}}$

      where $\mathcal{D}_{\text{calibration}}$ denotes the designated calibration data.
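      Concretely, once a threshold $\hat{q}$ has been calibrated, the set is formed by keeping every candidate label whose nonconformity score does not exceed it. A hypothetical sketch with toy scores:

```julia
# Keep labels with s(X_{n+1}, y) ≤ q̂ (one toy score per candidate label).
prediction_set(s_new, q̂) = findall(≤(q̂), s_new)

s_new = [0.2, 0.7, 0.1, 0.9]
C = prediction_set(s_new, 0.5)   # labels 1 and 3 make it into the set
```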

      source
      MLJModelInterface.predictMethod
      MMI.predict(conf_model::CVMinMaxRegressor, fitresult, Xnew)

      For the CVMinMaxRegressor prediction intervals are computed as follows,

      $\hat{C}_{n,\alpha}(X_{n+1}) = \left[ \min_{i=1,...,n} \hat\mu_{-\mathcal{D}_{k(i)}}(X_{n+1}) - \hat{q}_{n, \alpha}^{+} \{S_i^{\text{CV}} \}, \max_{i=1,...,n} \hat\mu_{-\mathcal{D}_{k(i)}}(X_{n+1}) + \hat{q}_{n, \alpha}^{+} \{ S_i^{\text{CV}}\} \right] , i \in \mathcal{D}_{\text{train}}$

      where $\hat\mu_{-\mathcal{D}_{k(i)}}$ denotes the model fitted on training data with $\mathcal{D}_{k(i)}$, the subset containing the $i$th point, removed.

      source
      MLJModelInterface.predictMethod
      MMI.predict(conf_model::CVPlusRegressor, fitresult, Xnew)

      For the CVPlusRegressor prediction intervals are computed in much the same way as for the JackknifePlusRegressor. Specifically, we have,

      $\hat{C}_{n,\alpha}(X_{n+1}) = \left[ \hat{q}_{n, \alpha}^{-} \{\hat\mu_{-\mathcal{D}_{k(i)}}(X_{n+1}) - S_i^{\text{CV}} \}, \hat{q}_{n, \alpha}^{+} \{\hat\mu_{-\mathcal{D}_{k(i)}}(X_{n+1}) + S_i^{\text{CV}}\} \right] , \ i \in \mathcal{D}_{\text{train}}$

      where $\hat\mu_{-\mathcal{D}_{k(i)}}$ denotes the model fitted on training data with $\mathcal{D}_{k(i)}$, the fold containing the $i$th point, removed.

      The JackknifePlusRegressor is a special case of the CVPlusRegressor for which $K=n$.

      source
      MLJModelInterface.predictMethod
      MMI.predict(conf_model::ConformalQuantileRegressor, fitresult, Xnew)

      For the ConformalQuantileRegressor prediction intervals are computed as follows,

      $\hat{C}_{n,\alpha}(X_{n+1}) = [\hat\mu_{\alpha_{lo}}(X_{n+1}) - \hat{q}_{n, \alpha} \{S_i^{\text{CAL}} \}, \hat\mu_{\alpha_{hi}}(X_{n+1}) + \hat{q}_{n, \alpha} \{S_i^{\text{CAL}} \}], \ i \in \mathcal{D}_{\text{calibration}}$

      where $\mathcal{D}_{\text{calibration}}$ denotes the designated calibration data.

      source
      MLJModelInterface.predictMethod
      MMI.predict(conf_model::JackknifeMinMaxRegressor, fitresult, Xnew)

      For the JackknifeMinMaxRegressor prediction intervals are computed as follows,

      $\hat{C}_{n,\alpha}(X_{n+1}) = \left[ \min_{i=1,...,n} \hat\mu_{-i}(X_{n+1}) - \hat{q}_{n, \alpha}^{+} \{S_i^{\text{LOO}} \}, \max_{i=1,...,n} \hat\mu_{-i}(X_{n+1}) + \hat{q}_{n, \alpha}^{+} \{S_i^{\text{LOO}}\} \right] , i \in \mathcal{D}_{\text{train}}$

      where $\hat\mu_{-i}$ denotes the model fitted on training data with the $i$th point removed. The jackknife-minmax procedure is more conservative than the JackknifePlusRegressor.

      source
      MLJModelInterface.predictMethod
      MMI.predict(conf_model::JackknifePlusAbMinMaxRegressor, fitresult, Xnew)

      For the JackknifePlusAbMinMaxRegressor prediction intervals are computed as follows,

      $\hat{C}_{n,\alpha}^{J+MinMax}(X_{n+1}) = \left[ \min_{i=1,...,n} \hat\mu_{-i}(X_{n+1}) - \hat{q}_{n, \alpha}^{+} \{S_i^{\text{J+MinMax}} \}, \max_{i=1,...,n} \hat\mu_{-i}(X_{n+1}) + \hat{q}_{n, \alpha}^{+} \{S_i^{\text{J+MinMax}}\} \right] , i \in \mathcal{D}_{\text{train}}$

      where $\hat\mu_{-i}$ denotes the model fitted on training data with the $i$th point removed. The jackknife+ab-minmax procedure is more conservative than the JackknifePlusAbRegressor.

      source
      MLJModelInterface.predictMethod
      MMI.predict(conf_model::JackknifePlusAbRegressor, fitresult, Xnew)

      For the JackknifePlusAbRegressor prediction intervals are computed as follows,

      $\hat{C}_{n,\alpha, B}^{J+ab}(X_{n+1}) = \left[ \hat{q}_{n, \alpha}^{-} \{\hat\mu_{agg(-i)}(X_{n+1}) - S_i^{\text{J+ab}} \}, \hat{q}_{n, \alpha}^{+} \{\hat\mu_{agg(-i)}(X_{n+1}) + S_i^{\text{J+ab}}\} \right] , i \in \mathcal{D}_{\text{train}}$

      where $\hat\mu_{agg(-i)}$ denotes the aggregate of the $B$ models $\hat\mu_{1}, \ldots, \hat\mu_{B}$ fitted on bootstrap samples that do not include the $i$th data point. The jackknife$+$ procedure is more stable than the JackknifeRegressor.

      source
      MLJModelInterface.predictMethod
      MMI.predict(conf_model::JackknifePlusRegressor, fitresult, Xnew)

      For the JackknifePlusRegressor prediction intervals are computed as follows,

      $\hat{C}_{n,\alpha}(X_{n+1}) = \left[ \hat{q}_{n, \alpha}^{-} \{\hat\mu_{-i}(X_{n+1}) - S_i^{\text{LOO}} \}, \hat{q}_{n, \alpha}^{+} \{\hat\mu_{-i}(X_{n+1}) + S_i^{\text{LOO}}\} \right] , i \in \mathcal{D}_{\text{train}}$

      where $\hat\mu_{-i}$ denotes the model fitted on training data with the $i$th point removed. The jackknife$+$ procedure is more stable than the JackknifeRegressor.
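      The interval construction can be sketched as follows, given leave-one-out predictions and scores (toy values; the quantile helpers follow the finite-sample corrected definition of Barber et al.):

```julia
qplus(v, c=0.9)  = sort(v)[clamp(ceil(Int, c * (length(v) + 1)), 1, length(v))]
qminus(v, c=0.9) = -qplus(-v, c)

μ_loo = [5.1, 4.9, 5.0, 5.2, 4.8]    # μ̂₋ᵢ(X_{n+1}) for each left-out point i
S_loo = [0.3, 0.2, 0.4, 0.1, 0.25]   # leave-one-out nonconformity scores Sᵢ

lo = qminus(μ_loo .- S_loo)          # lower interval bound
hi = qplus(μ_loo .+ S_loo)           # upper interval bound
```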

      source
      MLJModelInterface.predictMethod
      MMI.predict(conf_model::JackknifeRegressor, fitresult, Xnew)

      For the JackknifeRegressor prediction intervals are computed as follows,

      $\hat{C}_{n,\alpha}(X_{n+1}) = \hat\mu(X_{n+1}) \pm \hat{q}_{n, \alpha}^{+} \{S_i^{\text{LOO}}\}, \ i \in \mathcal{D}_{\text{train}}$

      where $S_i^{\text{LOO}}$ denotes the nonconformity score that is generated as explained in fit(conf_model::JackknifeRegressor, verbosity, X, y). The jackknife procedure addresses the overfitting issue associated with the NaiveRegressor.

      source
      MLJModelInterface.predictMethod
      MMI.predict(conf_model::NaiveClassifier, fitresult, Xnew)

      For the NaiveClassifier prediction sets are computed as follows:

      $\hat{C}_{n,\alpha}(X_{n+1}) = \left\{y: s(X_{n+1},y) \le \hat{q}_{n, \alpha}^{+} \{S_i^{\text{IS}} \} \right\}, \ i \in \mathcal{D}_{\text{train}}$

      The naive approach typically produces prediction regions that undercover due to overfitting.

      source
      MLJModelInterface.predictMethod
      MMI.predict(conf_model::NaiveRegressor, fitresult, Xnew)

      For the NaiveRegressor prediction intervals are computed as follows:

      $\hat{C}_{n,\alpha}(X_{n+1}) = \hat\mu(X_{n+1}) \pm \hat{q}_{n, \alpha}^{+} \{S_i^{\text{IS}} \}, \ i \in \mathcal{D}_{\text{train}}$

      The naive approach typically produces prediction regions that undercover due to overfitting.

      source
      MLJModelInterface.predictMethod
      MMI.predict(conf_model::SimpleInductiveClassifier, fitresult, Xnew)

      For the SimpleInductiveClassifier prediction sets are computed as follows,

      $\hat{C}_{n,\alpha}(X_{n+1}) = \left\{y: s(X_{n+1},y) \le \hat{q}_{n, \alpha}^{+} \{S_i^{\text{CAL}}\} \right\}, \ i \in \mathcal{D}_{\text{calibration}}$

      where $\mathcal{D}_{\text{calibration}}$ denotes the designated calibration data.

      source
      MLJModelInterface.predictMethod
      MMI.predict(conf_model::SimpleInductiveRegressor, fitresult, Xnew)

      For the SimpleInductiveRegressor prediction intervals are computed as follows,

      $\hat{C}_{n,\alpha}(X_{n+1}) = \hat\mu(X_{n+1}) \pm \hat{q}_{n, \alpha}^{+} \{S_i^{\text{CAL}} \}, \ i \in \mathcal{D}_{\text{calibration}}$

      where $\mathcal{D}_{\text{calibration}}$ denotes the designated calibration data.

      source
      MLJModelInterface.predictMethod
      MMI.predict(conf_model::TimeSeriesRegressorEnsembleBatch, fitresult, Xnew)

      For the TimeSeriesRegressorEnsembleBatch prediction intervals are computed as follows,

      $\hat{C}_{n,\alpha, B}^{J+ab}(X_{n+1}) = \left[ \hat{q}_{n, \alpha}^{-} \{\hat\mu_{agg(-i)}(X_{n+1}) - S_i^{\text{J+ab}} \}, \hat{q}_{n, \alpha}^{+} \{\hat\mu_{agg(-i)}(X_{n+1}) + S_i^{\text{J+ab}}\} \right] , i \in \mathcal{D}_{\text{train}}$

      where $\hat\mu_{agg(-i)}$ denotes the aggregate of the $B$ models $\hat\mu_{1}, \ldots, \hat\mu_{B}$ fitted on bootstrap samples that do not include the $i$th data point. The jackknife$+$ procedure is more stable than the JackknifeRegressor.

      source

      Internal functions

      ConformalPrediction.is_coveredMethod
      is_covered(ŷ, y)

      Helper function to check if y is contained in conformal region. Based on whether conformal predictions are set- or interval-valued, different checks are executed.
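      A sketch of the two dispatch branches (hypothetical standalone version; intervals represented as tuples, sets as vectors of labels):

```julia
is_covered_sketch(ŷ::Tuple, y)          = ŷ[1] ≤ y ≤ ŷ[2]   # interval-valued prediction
is_covered_sketch(ŷ::AbstractVector, y) = y in ŷ            # set-valued prediction
```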

      source
      ConformalPrediction.qminusFunction
      qminus(v::AbstractArray, coverage::AbstractFloat=0.9)

      Implements the $\hat{q}_{n,\alpha}^{-}$ finite-sample corrected quantile function as defined in Barber et al. (2020): https://arxiv.org/pdf/1905.02928.pdf.

      source
      ConformalPrediction.qplusFunction
      qplus(v::AbstractArray, coverage::AbstractFloat=0.9)

      Implements the $\hat{q}_{n,\alpha}^{+}$ finite-sample corrected quantile function as defined in Barber et al. (2020): https://arxiv.org/pdf/1905.02928.pdf.
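      Both corrected quantiles can be sketched directly from their definition: $\hat{q}^{+}$ takes the $\lceil \text{coverage} \cdot (n+1) \rceil$-th smallest value, and $\hat{q}^{-}$ is obtained by applying $\hat{q}^{+}$ to the negated values. A standalone sketch, not the package API:

```julia
function qplus_sketch(v::AbstractVector, coverage::Real=0.9)
    n = length(v)
    p̂ = clamp(ceil(Int, coverage * (n + 1)), 1, n)   # finite-sample corrected index
    return sort(v)[p̂]
end

qminus_sketch(v, coverage=0.9) = -qplus_sketch(-v, coverage)
```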

      source
      ConformalPrediction.reformat_mlj_predictionMethod
      reformat_mlj_prediction(ŷ)

      A helper function that extracts only the output (predicted values) for whatever is returned from MMI.predict(model, fitresult, Xnew). This is currently used to avoid issues when calling MMI.predict(model, fitresult, Xnew) in pipelines.

      source
      ConformalPrediction.scoreFunction
      score(conf_model::ConformalProbabilisticSet, fitresult, X, y::Union{Nothing,AbstractArray}=nothing)

      Generic score method for the ConformalProbabilisticSet. It computes nonconformity scores using the heuristic function h and the softmax probabilities of the true class. Method is dispatched for different Conformal Probabilistic Sets and atomic models.

      source
      ConformalPrediction.split_dataMethod
      split_data(conf_model::ConformalProbabilisticSet, indices::Base.OneTo{Int})

      Splits the data into a proper training and calibration set.
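      A minimal sketch of such a split over an index range (the train_ratio keyword is hypothetical; the actual split is governed by the conformal model's settings):

```julia
using Random

function split_data_sketch(indices; train_ratio::Real=0.5, rng=Random.default_rng())
    idx = shuffle(rng, collect(indices))
    ntrain = floor(Int, train_ratio * length(idx))
    return idx[1:ntrain], idx[ntrain+1:end]   # proper training set, calibration set
end

train, cal = split_data_sketch(Base.OneTo(10); train_ratio=0.7)
```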

      source
      ConformalPrediction.ConformalTraining.classification_lossMethod
      classification_loss(
           conf_model::ConformalProbabilisticSet, fitresult, X, y;
           loss_matrix::Union{AbstractMatrix,UniformScaling}=UniformScaling(1.0),
           temp::Real=0.1
      )

      Computes the calibration loss following Stutz et al. (2022): https://openreview.net/pdf?id=t8O-4LKFVx. Following the notation in the paper, the loss is computed as,

      \[\mathcal{L}(C_{\theta}(x;\tau),y) = \sum_k L_{y,k} \left[ (1 - C_{\theta,k}(x;\tau)) \mathbf{I}_{y=k} + C_{\theta,k}(x;\tau) \mathbf{I}_{y\ne k} \right]\]

      where $\tau$ is the quantile threshold and $L$ is the loss matrix (identity by default).
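      For intuition, the loss can be sketched for a single sample with soft set assignments C and a loss matrix L, identity by default (toy values, not the package implementation):

```julia
using LinearAlgebra

function classification_loss_sketch(C::AbstractVector, y::Int;
                                    L::AbstractMatrix=Matrix(1.0I, length(C), length(C)))
    # Σₖ L[y,k] ⋅ [(1 - Cₖ) 𝕀(y = k) + Cₖ 𝕀(y ≠ k)]
    return sum(L[y, k] * (k == y ? 1 - C[k] : C[k]) for k in eachindex(C))
end

C = [0.9, 0.2, 0.1]                       # soft assignment per label
loss = classification_loss_sketch(C, 1)   # identity L keeps only the true-class term
```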

      source
      ConformalPrediction.ConformalTraining.qminus_smoothFunction
      qminus_smooth(v::AbstractArray, coverage::AbstractFloat=0.9)

      Implements the $\hat{q}_{n,\alpha}^{-}$ finite-sample corrected quantile function as defined in Barber et al. (2020): https://arxiv.org/pdf/1905.02928.pdf. To allow for differentiability, we use the soft sort function from InferOpt.jl.

      source
      ConformalPrediction.ConformalTraining.qplus_smoothFunction
      qplus_smooth(v::AbstractArray, coverage::AbstractFloat=0.9)

      Implements the $\hat{q}_{n,\alpha}^{+}$ finite-sample corrected quantile function as defined in Barber et al. (2020): https://arxiv.org/pdf/1905.02928.pdf. To allow for differentiability, we use the soft sort function from InferOpt.jl.

      source
      ConformalPrediction.ConformalTraining.scoreFunction
      ConformalPrediction.score(conf_model::AdaptiveInductiveClassifier, ::Type{<:EitherEnsembleModel{<:MLJFluxModel}}, fitresult, X, y::Union{Nothing,AbstractArray}=nothing)

      Overloads the score function for ensembles of MLJFluxModel types.

      source
      ConformalPrediction.ConformalTraining.scoreFunction
      ConformalPrediction.score(conf_model::AdaptiveInductiveClassifier, ::Type{<:MLJFluxModel}, fitresult, X, y::Union{Nothing,AbstractArray}=nothing)

      Overloads the score function for the MLJFluxModel type.

      source
      ConformalPrediction.ConformalTraining.smooth_size_lossMethod
      function smooth_size_loss(
           conf_model::ConformalProbabilisticSet, fitresult, X;
           temp::Real=0.1, κ::Real=1.0
      )

      Computes the smooth (differentiable) size loss following Stutz et al. (2022): https://openreview.net/pdf?id=t8O-4LKFVx. First, soft assignment probabilities are computed for new data X. Then (following the notation in the paper) the loss is computed as,

      \[\Omega(C_{\theta}(x;\tau)) = \max (0, \sum_k C_{\theta,k}(x;\tau) - \kappa)\]

      where $\tau$ is just the quantile and $\kappa$ is the target set size (defaults to $1$). For empty sets, the loss is computed as $K - \kappa$, that is the maximum set size minus the target set size.
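      The penalty itself is a one-liner; the sketch below also handles the empty-set case described above (hypothetical standalone version):

```julia
function smooth_size_loss_sketch(C::AbstractVector; κ::Real=1.0)
    # Ω(C) = max(0, Σₖ Cₖ - κ); empty sets are penalised with K - κ
    return sum(C) == 0 ? length(C) - κ : max(0.0, sum(C) - κ)
end

smooth_size_loss_sketch([0.9, 0.8, 0.1])   # soft set size 1.8 against target κ = 1
```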

      source
      ConformalPrediction.ConformalTraining.soft_assignmentMethod
      soft_assignment(conf_model::ConformalProbabilisticSet, fitresult, X; temp::Real=0.1)

      This function can be used to compute soft assignment probabilities for new data X as in soft_assignment(conf_model::ConformalProbabilisticSet; temp::Real=0.1). When a fitted model $\mu$ (fitresult) and new samples X are supplied, non-conformity scores are first computed for the new data points. Then the existing threshold/quantile is used to compute the final soft assignments.

      source
      ConformalPrediction.ConformalTraining.soft_assignmentMethod
      soft_assignment(conf_model::ConformalProbabilisticSet; temp::Real=0.1)

      Computes soft assignment scores for each label and sample. That is, the probability of label k being included in the confidence set. This implementation follows Stutz et al. (2022): https://openreview.net/pdf?id=t8O-4LKFVx. Contrary to the paper, we use non-conformity scores instead of conformity scores, hence the sign swap.
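      With nonconformity scores s and threshold $\hat{q}$, the soft assignment for label k can be sketched as a tempered sigmoid $\sigma((\hat{q} - s_k)/T)$ (hypothetical standalone version):

```julia
σ(x) = 1 / (1 + exp(-x))
soft_assignment_sketch(s, q̂; temp=0.1) = σ.((q̂ .- s) ./ temp)

C = soft_assignment_sketch([0.1, 0.5], 0.3)   # label 1 likely included, label 2 not
```

      Lower temperatures push the assignments towards the hard 0/1 set membership.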

      source
      ConformalPrediction.scoreFunction
      ConformalPrediction.score(conf_model::InductiveModel, model::MLJFluxModel, fitresult, X, y::Union{Nothing,AbstractArray}=nothing)

      Overloads the score function for the MLJFluxModel type.

      source
      ConformalPrediction.scoreFunction
      ConformalPrediction.score(conf_model::SimpleInductiveClassifier, ::Type{<:EitherEnsembleModel{<:MLJFluxModel}}, fitresult, X, y::Union{Nothing,AbstractArray}=nothing)

      Overloads the score function for ensembles of MLJFluxModel types.

      source
      MLJFlux.shapeMethod
      shape(model::NeuralNetworkRegressor, X, y)

      A private method that returns the shape of the input and output of the model for given data X and y.

      source
      MLJFlux.train!Method
      MLJFlux.train!(model::ConformalNN, penalty, chain, optimiser, X, y)

      Implements the conformal training procedure for the ConformalNN type.

      source
      diff --git a/dev/search_index.js b/dev/search_index.js index 39ad27a..8afdfaa 100644 --- a/dev/search_index.js +++ b/dev/search_index.js @@ -1,3 +1,3 @@ var documenterSearchIndex = {"docs": -[{"location":"how_to_guides/","page":"Overview","title":"Overview","text":"CurrentModule = ConformalPrediction","category":"page"},{"location":"how_to_guides/#How-To-Guides","page":"Overview","title":"How-To Guides","text":"","category":"section"},{"location":"how_to_guides/","page":"Overview","title":"Overview","text":"In this section you will find a series of how-to-guides that showcase specific use cases of Conformal Prediction.","category":"page"},{"location":"how_to_guides/","page":"Overview","title":"Overview","text":"How-to guides are directions that take the reader through the steps required to solve a real-world problem. How-to guides are goal-oriented.— Diátaxis","category":"page"},{"location":"how_to_guides/","page":"Overview","title":"Overview","text":"In other words, you come here because you may have some particular problem in mind, would like to see how it can be solved using CP and then most likely head off again 🫡","category":"page"},{"location":"tutorials/regression/#Regression","page":"Regression","title":"Regression","text":"","category":"section"},{"location":"tutorials/regression/","page":"Regression","title":"Regression","text":"This tutorial presents and compares different approaches to Conformal Regression using a simple synthetic dataset. 
It is inspired by this MAPIE tutorial.","category":"page"},{"location":"tutorials/regression/#Data","page":"Regression","title":"Data","text":"","category":"section"},{"location":"tutorials/regression/","page":"Regression","title":"Regression","text":"We begin by generating some synthetic regression data below:","category":"page"},{"location":"tutorials/regression/","page":"Regression","title":"Regression","text":"# Regression data:\n\n# Inputs:\nN = 600\nxmax = 5.0\nusing Distributions\nd = Uniform(-xmax, xmax)\nX = rand(d, N)\nX = reshape(X, :, 1)\n\n# Outputs:\nnoise = 0.5\nfun(X) = X * sin(X)\nε = randn(N) .* noise\ny = @.(fun(X)) + ε\ny = vec(y)\n\n# Partition:\nusing MLJ\ntrain, test = partition(eachindex(y), 0.4, 0.4, shuffle=true)\n\nusing Plots\nscatter(X, y, label=\"Observed\")\nxrange = range(-xmax,xmax,length=N)\nplot!(xrange, @.(fun(xrange)), lw=4, label=\"Ground truth\", ls=:dash, colour=:black)","category":"page"},{"location":"tutorials/regression/#Model","page":"Regression","title":"Model","text":"","category":"section"},{"location":"tutorials/regression/","page":"Regression","title":"Regression","text":"To model this data we will use polynomial regression. There is currently no out-of-the-box support for polynomial feature transformations in MLJ, but it is easy enough to add a little helper function for this. Note how we define a linear pipeline pipe here. 
Since pipelines in MLJ are just models, we can use the generated object as an input to conformal_model below.","category":"page"},{"location":"tutorials/regression/","page":"Regression","title":"Regression","text":"LinearRegressor = @load LinearRegressor pkg=MLJLinearModels\ndegree_polynomial = 10\npolynomial_features(X, degree::Int) = reduce(hcat, map(i -> X.^i, 1:degree))\npipe = (X -> MLJ.table(polynomial_features(MLJ.matrix(X), degree_polynomial))) |> LinearRegressor()","category":"page"},{"location":"tutorials/regression/#Conformal-Prediction","page":"Regression","title":"Conformal Prediction","text":"","category":"section"},{"location":"tutorials/regression/","page":"Regression","title":"Regression","text":"Next, we conformalize our polynomial regressor using every available approach (except the Naive approach):","category":"page"},{"location":"tutorials/regression/","page":"Regression","title":"Regression","text":"using ConformalPrediction\nconformal_models = merge(values(available_models[:regression])...)\nresults = Dict()\nfor _mod in keys(conformal_models) \n conf_model = conformal_model(pipe; method=_mod, coverage=0.95)\n global mach = machine(conf_model, X, y)\n MLJ.fit!(mach, rows=train)\n results[_mod] = mach\nend","category":"page"},{"location":"tutorials/regression/","page":"Regression","title":"Regression","text":"Finally, let us look at the resulting conformal predictions in each case. The chart below shows the results: for the first 4 methods it displays the training data (dots) overlaid with the conformal prediction interval (shaded area). At first glance it is hard to spot any major differences between the different approaches. 
Next, we will look at how we can evaluate and benchmark these predictions.","category":"page"},{"location":"tutorials/regression/","page":"Regression","title":"Regression","text":"using Plots\nzoom = -0.5\nxrange = range(-xmax+zoom,xmax-zoom,length=N)\nplt_list = []\n\nfor (_mod, mach) in first(results, n_charts)\n plt = plot(mach.model, mach.fitresult, X, y, zoom=zoom, title=_mod)\n plot!(plt, xrange, @.(fun(xrange)), lw=1, ls=:dash, colour=:black, label=\"Ground truth\")\n push!(plt_list, plt)\nend\n\nplot(plt_list..., size=(800,500))","category":"page"},{"location":"tutorials/regression/","page":"Regression","title":"Regression","text":"(Image: Figure 1: Conformal prediction regions.)","category":"page"},{"location":"tutorials/regression/#Evaluation","page":"Regression","title":"Evaluation","text":"","category":"section"},{"location":"tutorials/regression/","page":"Regression","title":"Regression","text":"For evaluation of conformal predictors we follow Angelopoulos and Bates (2021) (Section 3). As a first step towards adaptiveness (adaptivity), the authors recommend to inspect the set size of conformal predictions. 
The chart below shows the interval width for the different methods along with the ground truth interval width:","category":"page"},{"location":"tutorials/regression/","page":"Regression","title":"Regression","text":"xrange = range(-xmax,xmax,length=N)\nplt = plot(xrange, ones(N) .* (1.96*2*noise), ls=:dash, colour=:black, label=\"Ground truth\", lw=2)\nfor (_mod, mach) in results\n ŷ = predict(mach, reshape([x for x in xrange], :, 1))\n y_size = set_size.(ŷ)\n plot!(xrange, y_size, label=String(_mod))\nend\nplt","category":"page"},{"location":"tutorials/regression/","page":"Regression","title":"Regression","text":"(Image: Figure 2: Prediction interval width.)","category":"page"},{"location":"tutorials/regression/","page":"Regression","title":"Regression","text":"We can also use specific metrics like empirical coverage and size-stratified coverage to check for correctness and adaptiveness, respectively (angelopoulus2021gentle?). To this end, the package provides custom measures that are compatible with MLJ.jl. 
In other words, we can evaluate model performance in true MLJ.jl fashion (see here).","category":"page"},{"location":"tutorials/regression/","page":"Regression","title":"Regression","text":"The code below runs the evaluation with respect to both metrics, emp_coverage and ssc for a single conformal machine:","category":"page"},{"location":"tutorials/regression/","page":"Regression","title":"Regression","text":"_mod, mach = first(results)\n_eval = evaluate!(\n mach,\n operation=predict,\n measure=[emp_coverage, ssc]\n)\ndisplay(_eval)\nprintln(\"Empirical coverage for $(_mod): $(round(_eval.measurement[1], digits=3))\")\nprintln(\"SSC for $(_mod): $(round(_eval.measurement[2], digits=3))\")","category":"page"},{"location":"tutorials/regression/","page":"Regression","title":"Regression","text":"PerformanceEvaluation object with these fields:\n measure, operation, measurement, per_fold,\n per_observation, fitted_params_per_fold,\n report_per_fold, train_test_rows\nExtract:\n┌──────────────────────────────────────────────┬───────────┬─────────────┬──────\n│ measure │ operation │ measurement │ 1.9 ⋯\n├──────────────────────────────────────────────┼───────────┼─────────────┼──────\n│ ConformalPrediction.emp_coverage │ predict │ 0.94 │ 0.0 ⋯\n│ ConformalPrediction.size_stratified_coverage │ predict │ 0.94 │ 0.0 ⋯\n└──────────────────────────────────────────────┴───────────┴─────────────┴──────\n 2 columns omitted\n\nEmpirical coverage for jackknife_plus_ab: 0.94\nSSC for jackknife_plus_ab: 0.94","category":"page"},{"location":"tutorials/regression/","page":"Regression","title":"Regression","text":"Note that, in the regression case, stratified set sizes correspond to discretized interval widths.","category":"page"},{"location":"tutorials/regression/","page":"Regression","title":"Regression","text":"To benchmark the different approaches, we evaluate them iteratively below. 
As expected, more conservative approaches like Jackknife-min max  and CV-min max  attain higher aggregate and conditional coverage. Note that size-stratified is not available for methods that produce constant intervals, like standard Jackknife.","category":"page"},{"location":"tutorials/regression/","page":"Regression","title":"Regression","text":"using DataFrames\nbmk = DataFrame()\nfor (_mod, mach) in results\n _eval = evaluate!(\n mach,\n resampling=CV(;nfolds=5),\n operation=predict,\n measure=[emp_coverage, ssc]\n )\n _bmk = DataFrame(\n Dict(\n :model => _mod,\n :emp_coverage => _eval.measurement[1],\n :ssc => _eval.measurement[2]\n )\n )\n bmk = vcat(bmk, _bmk)\nend\n\nshow(sort(select!(bmk, [2,1,3]), 2, rev=true))","category":"page"},{"location":"tutorials/regression/","page":"Regression","title":"Regression","text":"9×3 DataFrame\n Row │ model emp_coverage ssc \n │ Symbol Float64 Float64 \n─────┼──────────────────────────────────────────────────\n 1 │ jackknife_plus_ab_minmax 0.988333 0.980547\n 2 │ cv_minmax 0.96 0.910873\n 3 │ simple_inductive 0.953333 0.953333\n 4 │ jackknife_minmax 0.946667 0.869103\n 5 │ cv_plus 0.945 0.866549\n 6 │ jackknife_plus_ab 0.941667 0.941667\n 7 │ jackknife_plus 0.941667 0.871606\n 8 │ jackknife 0.941667 0.941667\n 9 │ naive 0.938333 0.938333","category":"page"},{"location":"tutorials/regression/#References","page":"Regression","title":"References","text":"","category":"section"},{"location":"tutorials/regression/","page":"Regression","title":"Regression","text":"Angelopoulos, Anastasios N., and Stephen Bates. 2021. 
“A Gentle Introduction to Conformal Prediction and Distribution-Free Uncertainty Quantification.” https://arxiv.org/abs/2107.07511.","category":"page"},{"location":"how_to_guides/mnist/#How-to-Conformalize-a-Deep-Image-Classifier","page":"How to Conformalize a Deep Image Classifier","title":"How to Conformalize a Deep Image Classifier","text":"","category":"section"},{"location":"how_to_guides/mnist/","page":"How to Conformalize a Deep Image Classifier","title":"How to Conformalize a Deep Image Classifier","text":"Deep Learning is popular and — for some tasks like image classification — remarkably powerful. But it is also well-known that Deep Neural Networks (DNN) can be unstable (Goodfellow, Shlens, and Szegedy 2014) and poorly calibrated. Conformal Prediction can be used to mitigate these pitfalls. This how-to guide demonstrates how you can build an image classifier in Flux.jl and conformalize its predictions. For a formal treatment see A. Angelopoulos et al. (2022).","category":"page"},{"location":"how_to_guides/mnist/#The-Task-at-Hand","page":"How to Conformalize a Deep Image Classifier","title":"The Task at Hand","text":"","category":"section"},{"location":"how_to_guides/mnist/","page":"How to Conformalize a Deep Image Classifier","title":"How to Conformalize a Deep Image Classifier","text":"The task at hand is to predict the labels of handwritten images of digits using the famous MNIST dataset (LeCun 1998). 
Importing this popular machine learning dataset in Julia is made remarkably easy through MLDatasets.jl:","category":"page"},{"location":"how_to_guides/mnist/","page":"How to Conformalize a Deep Image Classifier","title":"How to Conformalize a Deep Image Classifier","text":"using MLDatasets\nN = 1000\nXraw, yraw = MNIST(split=:train)[:]\nXraw = Xraw[:,:,1:N]\nyraw = yraw[1:N]","category":"page"},{"location":"how_to_guides/mnist/","page":"How to Conformalize a Deep Image Classifier","title":"How to Conformalize a Deep Image Classifier","text":"The chart below shows a few random samples from the training data:","category":"page"},{"location":"how_to_guides/mnist/","page":"How to Conformalize a Deep Image Classifier","title":"How to Conformalize a Deep Image Classifier","text":"using MLJ\nusing Images\nX = map(x -> convert2image(MNIST, x), eachslice(Xraw, dims=3))\ny = coerce(yraw, Multiclass)\n\nn_samples = 10\nmosaic(rand(X, n_samples)..., ncol=n_samples)","category":"page"},{"location":"how_to_guides/mnist/","page":"How to Conformalize a Deep Image Classifier","title":"How to Conformalize a Deep Image Classifier","text":"(Image: Figure 1: Random samples from the MNIST dataset.)","category":"page"},{"location":"how_to_guides/mnist/#Building-the-Network","page":"How to Conformalize a Deep Image Classifier","title":"Building the Network","text":"","category":"section"},{"location":"how_to_guides/mnist/","page":"How to Conformalize a Deep Image Classifier","title":"How to Conformalize a Deep Image Classifier","text":"To model the mapping from image inputs to labels, we will rely on a simple Multi-Layer Perceptron (MLP). A great Julia library for Deep Learning is Flux.jl. But wait … doesn’t ConformalPrediction.jl work with models trained in MLJ.jl? That’s right, but fortunately there exists a Flux.jl interface to MLJ.jl, namely MLJFlux.jl. 
The interface is still in its early stages, but already very powerful and easily accessible for anyone (like myself) who is used to building Neural Networks in Flux.jl.","category":"page"},{"location":"how_to_guides/mnist/","page":"How to Conformalize a Deep Image Classifier","title":"How to Conformalize a Deep Image Classifier","text":"In Flux.jl, you could build an MLP for this task as follows,","category":"page"},{"location":"how_to_guides/mnist/","page":"How to Conformalize a Deep Image Classifier","title":"How to Conformalize a Deep Image Classifier","text":"using Flux\n\nmlp = Chain(\n Flux.flatten,\n Dense(prod((28,28)), 32, relu),\n Dense(32, 10)\n)","category":"page"},{"location":"how_to_guides/mnist/","page":"How to Conformalize a Deep Image Classifier","title":"How to Conformalize a Deep Image Classifier","text":"where (28,28) is just the input dimension (28x28 pixel images). Since we have ten digits, our output dimension is ten.[1]","category":"page"},{"location":"how_to_guides/mnist/","page":"How to Conformalize a Deep Image Classifier","title":"How to Conformalize a Deep Image Classifier","text":"We can do the exact same thing in MLJFlux.jl as follows,","category":"page"},{"location":"how_to_guides/mnist/","page":"How to Conformalize a Deep Image Classifier","title":"How to Conformalize a Deep Image Classifier","text":"using MLJFlux\n\nbuilder = MLJFlux.@builder Chain(\n Flux.flatten,\n Dense(prod(n_in), 32, relu),\n Dense(32, n_out)\n)","category":"page"},{"location":"how_to_guides/mnist/","page":"How to Conformalize a Deep Image Classifier","title":"How to Conformalize a Deep Image Classifier","text":"where here we rely on the @builder macro to make the transition from Flux.jl to MLJ.jl as seamless as possible. Finally, MLJFlux.jl already comes with a number of helper functions to define plain-vanilla networks. 
In this case, we will use the ImageClassifier with our custom builder and cross-entropy loss:","category":"page"},{"location":"how_to_guides/mnist/","page":"How to Conformalize a Deep Image Classifier","title":"How to Conformalize a Deep Image Classifier","text":"ImageClassifier = @load ImageClassifier\nclf = ImageClassifier(\n builder=builder,\n epochs=10,\n loss=Flux.crossentropy\n)","category":"page"},{"location":"how_to_guides/mnist/","page":"How to Conformalize a Deep Image Classifier","title":"How to Conformalize a Deep Image Classifier","text":"The generated instance clf is a model (in the MLJ.jl sense) so from this point on we can rely on standard MLJ.jl workflows. For example, we can wrap our model in data to create a machine and then evaluate it on a holdout set as follows:","category":"page"},{"location":"how_to_guides/mnist/","page":"How to Conformalize a Deep Image Classifier","title":"How to Conformalize a Deep Image Classifier","text":"mach = machine(clf, X, y)\n\nevaluate!(\n mach,\n resampling=Holdout(rng=123, fraction_train=0.8),\n operation=predict_mode,\n measure=[accuracy]\n)","category":"page"},{"location":"how_to_guides/mnist/","page":"How to Conformalize a Deep Image Classifier","title":"How to Conformalize a Deep Image Classifier","text":"The accuracy of our very simple model is not amazing, but good enough for the purpose of this tutorial. For each image, our MLP returns a softmax output for each possible digit: 0,1,2,3,…,9. Since each individual softmax output is valued between zero and one, y(k) ∈ (0,1), this is commonly interpreted as a probability: y(k) ≔ p(y=k|X). Edge cases – that is values close to either zero or one – indicate high predictive certainty. But this is only a heuristic notion of predictive uncertainty (A. N. Angelopoulos and Bates 2021). 
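To make the probability interpretation concrete, here is a minimal, hypothetical sketch in plain Julia (no Flux required; the `softmax` helper below is defined inline for illustration) showing that each softmax output lies in (0,1) and all outputs sum to one:

```julia
# Numerically stable softmax: subtracting the maximum does not change the
# result but avoids overflow for large logits.
softmax(z) = exp.(z .- maximum(z)) ./ sum(exp.(z .- maximum(z)))

logits = [2.0, 1.0, 0.1]   # toy raw MLP outputs for three classes
p = softmax(logits)

@assert all(0 .< p .< 1)          # each output is a valid probability
@assert isapprox(sum(p), 1.0)     # outputs sum to one
@assert argmax(p) == argmax(logits)  # softmax preserves the ranking
```

A value of `p[k]` close to one signals high heuristic confidence in class `k`, but, as noted above, this carries no formal coverage guarantee on its own.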
Next, we will turn this heuristic notion of uncertainty into a rigorous one using Conformal Prediction.","category":"page"},{"location":"how_to_guides/mnist/#Conformalizing-the-Network","page":"How to Conformalize a Deep Image Classifier","title":"Conformalizing the Network","text":"","category":"section"},{"location":"how_to_guides/mnist/","page":"How to Conformalize a Deep Image Classifier","title":"How to Conformalize a Deep Image Classifier","text":"Since clf is a model, it is also compatible with our package: ConformalPrediction.jl. To conformalize our MLP, we therefore only need to call conformal_model(clf). Since the generated instance conf_model is also just a model, we can still rely on standard MLJ.jl workflows. Below we first wrap it in data and then fit it. Aaaand … we’re done! Let’s look at the results in the next section.","category":"page"},{"location":"how_to_guides/mnist/","page":"How to Conformalize a Deep Image Classifier","title":"How to Conformalize a Deep Image Classifier","text":"using ConformalPrediction\nconf_model = conformal_model(clf; method=:simple_inductive)\nmach = machine(conf_model, X, y)\nfit!(mach)","category":"page"},{"location":"how_to_guides/mnist/#Results","page":"How to Conformalize a Deep Image Classifier","title":"Results","text":"","category":"section"},{"location":"how_to_guides/mnist/","page":"How to Conformalize a Deep Image Classifier","title":"How to Conformalize a Deep Image Classifier","text":"The charts below present the results. The first row displays highly certain predictions, now defined in the rigorous sense of Conformal Prediction: in each case, the conformal set (just beneath the image) includes only one label.","category":"page"},{"location":"how_to_guides/mnist/","page":"How to Conformalize a Deep Image Classifier","title":"How to Conformalize a Deep Image Classifier","text":"The following two rows display increasingly uncertain predictions of set size two and three, respectively. 
They demonstrate that CP is well equipped to deal with samples characterized by high aleatoric uncertainty: digits four (4), seven (7) and nine (9) share certain similarities. So do digits five (5) and six (6) as well as three (3) and eight (8). These may be hard to distinguish from each other even after seeing many examples (and even for a human). It is therefore unsurprising to see that these digits often end up together in conformal sets.","category":"page"},{"location":"how_to_guides/mnist/","page":"How to Conformalize a Deep Image Classifier","title":"How to Conformalize a Deep Image Classifier","text":"(Image: Figure 2: Plot 1)","category":"page"},{"location":"how_to_guides/mnist/","page":"How to Conformalize a Deep Image Classifier","title":"How to Conformalize a Deep Image Classifier","text":"(Image: Figure 3: Plot 2)","category":"page"},{"location":"how_to_guides/mnist/","page":"How to Conformalize a Deep Image Classifier","title":"How to Conformalize a Deep Image Classifier","text":"(Image: Figure 4: Plot 3)","category":"page"},{"location":"how_to_guides/mnist/","page":"How to Conformalize a Deep Image Classifier","title":"How to Conformalize a Deep Image Classifier","text":"Conformalized predictions from an image classifier.","category":"page"},{"location":"how_to_guides/mnist/#Evaluation","page":"How to Conformalize a Deep Image Classifier","title":"Evaluation","text":"","category":"section"},{"location":"how_to_guides/mnist/","page":"How to Conformalize a Deep Image Classifier","title":"How to Conformalize a Deep Image Classifier","text":"As always, we can also evaluate our conformal model in terms of coverage (correctness) and size-stratified coverage (adaptiveness).","category":"page"},{"location":"how_to_guides/mnist/","page":"How to Conformalize a Deep Image Classifier","title":"How to Conformalize a Deep Image Classifier","text":"_eval = evaluate!(\n mach,\n resampling=Holdout(rng=123, fraction_train=0.8),\n operation=predict,\n 
measure=[emp_coverage, ssc]\n)\ndisplay(_eval)\nprintln(\"Empirical coverage: $(round(_eval.measurement[1], digits=3))\")\nprintln(\"SSC: $(round(_eval.measurement[2], digits=3))\")","category":"page"},{"location":"how_to_guides/mnist/","page":"How to Conformalize a Deep Image Classifier","title":"How to Conformalize a Deep Image Classifier","text":"PerformanceEvaluation object with these fields:\n measure, operation, measurement, per_fold,\n per_observation, fitted_params_per_fold,\n report_per_fold, train_test_rows\nExtract:\n┌──────────────────────────────────────────────┬───────────┬─────────────┬──────\n│ measure │ operation │ measurement │ per ⋯\n├──────────────────────────────────────────────┼───────────┼─────────────┼──────\n│ ConformalPrediction.emp_coverage │ predict │ 0.96 │ [0. ⋯\n│ ConformalPrediction.size_stratified_coverage │ predict │ 0.885 │ [0. ⋯\n└──────────────────────────────────────────────┴───────────┴─────────────┴──────\n 1 column omitted\n\nEmpirical coverage: 0.96\nSSC: 0.885","category":"page"},{"location":"how_to_guides/mnist/","page":"How to Conformalize a Deep Image Classifier","title":"How to Conformalize a Deep Image Classifier","text":"Unsurprisingly, we can attain higher adaptivity (SSC) when using adaptive prediction sets:","category":"page"},{"location":"how_to_guides/mnist/","page":"How to Conformalize a Deep Image Classifier","title":"How to Conformalize a Deep Image Classifier","text":"conf_model = conformal_model(clf; method=:adaptive_inductive)\nmach = machine(conf_model, X, y)\nfit!(mach)\n_eval = evaluate!(\n mach,\n resampling=Holdout(rng=123, fraction_train=0.8),\n operation=predict,\n measure=[emp_coverage, ssc]\n)\nresults[:adaptive_inductive] = mach\ndisplay(_eval)\nprintln(\"Empirical coverage: $(round(_eval.measurement[1], digits=3))\")\nprintln(\"SSC: $(round(_eval.measurement[2], digits=3))\")","category":"page"},{"location":"how_to_guides/mnist/","page":"How to Conformalize a Deep Image Classifier","title":"How 
to Conformalize a Deep Image Classifier","text":"PerformanceEvaluation object with these fields:\n measure, operation, measurement, per_fold,\n per_observation, fitted_params_per_fold,\n report_per_fold, train_test_rows\nExtract:\n┌──────────────────────────────────────────────┬───────────┬─────────────┬──────\n│ measure │ operation │ measurement │ per ⋯\n├──────────────────────────────────────────────┼───────────┼─────────────┼──────\n│ ConformalPrediction.emp_coverage │ predict │ 1.0 │ [1. ⋯\n│ ConformalPrediction.size_stratified_coverage │ predict │ 1.0 │ [1. ⋯\n└──────────────────────────────────────────────┴───────────┴─────────────┴──────\n 1 column omitted\n\nEmpirical coverage: 1.0\nSSC: 1.0","category":"page"},{"location":"how_to_guides/mnist/","page":"How to Conformalize a Deep Image Classifier","title":"How to Conformalize a Deep Image Classifier","text":"We can also have a look at the resulting set size for both approaches:","category":"page"},{"location":"how_to_guides/mnist/","page":"How to Conformalize a Deep Image Classifier","title":"How to Conformalize a Deep Image Classifier","text":"plt_list = []\nfor (_mod, mach) in results\n push!(plt_list, bar(mach.model, mach.fitresult, X; title=String(_mod)))\nend\nplot(plt_list..., size=(800,300))","category":"page"},{"location":"how_to_guides/mnist/","page":"How to Conformalize a Deep Image Classifier","title":"How to Conformalize a Deep Image Classifier","text":"(Image: Figure 5: Prediction interval width.)","category":"page"},{"location":"how_to_guides/mnist/#References","page":"How to Conformalize a Deep Image Classifier","title":"References","text":"","category":"section"},{"location":"how_to_guides/mnist/","page":"How to Conformalize a Deep Image Classifier","title":"How to Conformalize a Deep Image Classifier","text":"Angelopoulos, Anastasios N., and Stephen Bates. 2021. 
“A Gentle Introduction to Conformal Prediction and Distribution-Free Uncertainty Quantification.” https://arxiv.org/abs/2107.07511.","category":"page"},{"location":"how_to_guides/mnist/","page":"How to Conformalize a Deep Image Classifier","title":"How to Conformalize a Deep Image Classifier","text":"Angelopoulos, Anastasios, Stephen Bates, Jitendra Malik, and Michael I. Jordan. 2022. “Uncertainty Sets for Image Classifiers Using Conformal Prediction.” arXiv. https://arxiv.org/abs/2009.14193.","category":"page"},{"location":"how_to_guides/mnist/","page":"How to Conformalize a Deep Image Classifier","title":"How to Conformalize a Deep Image Classifier","text":"Goodfellow, Ian J, Jonathon Shlens, and Christian Szegedy. 2014. “Explaining and Harnessing Adversarial Examples.” https://arxiv.org/abs/1412.6572.","category":"page"},{"location":"how_to_guides/mnist/","page":"How to Conformalize a Deep Image Classifier","title":"How to Conformalize a Deep Image Classifier","text":"LeCun, Yann. 1998. 
“The MNIST Database of Handwritten Digits.”","category":"page"},{"location":"how_to_guides/mnist/","page":"How to Conformalize a Deep Image Classifier","title":"How to Conformalize a Deep Image Classifier","text":"[1] For a full tutorial on how to build an MNIST image classifier relying solely on Flux.jl, check out this tutorial.","category":"page"},{"location":"tutorials/","page":"Overview","title":"Overview","text":"CurrentModule = ConformalPrediction","category":"page"},{"location":"tutorials/#Tutorials","page":"Overview","title":"Tutorials","text":"","category":"section"},{"location":"tutorials/","page":"Overview","title":"Overview","text":"In this section you will find a series of tutorials that should help you gain a basic understanding of Conformal Prediction and how to apply it in Julia using this package.","category":"page"},{"location":"tutorials/","page":"Overview","title":"Overview","text":"Tutorials are lessons that take the reader by the hand through a series of steps to complete a project of some kind. 
Tutorials are learning-oriented.— Diátaxis","category":"page"},{"location":"tutorials/","page":"Overview","title":"Overview","text":"In other words, you come here because you are new to this topic and are looking for a first peek at the methodology and code 🫣","category":"page"},{"location":"contribute/#Contributor’s-Guide","page":"🛠 Contribute","title":"Contributor’s Guide","text":"","category":"section"},{"location":"contribute/","page":"🛠 Contribute","title":"🛠 Contribute","text":"CurrentModule = ConformalPrediction","category":"page"},{"location":"contribute/#Contents","page":"🛠 Contribute","title":"Contents","text":"","category":"section"},{"location":"contribute/","page":"🛠 Contribute","title":"🛠 Contribute","text":"Pages = [\"contribute.md\"]\nDepth = 2","category":"page"},{"location":"contribute/#Contributing-to-ConformalPrediction.jl","page":"🛠 Contribute","title":"Contributing to ConformalPrediction.jl","text":"","category":"section"},{"location":"contribute/","page":"🛠 Contribute","title":"🛠 Contribute","text":"Contributions are welcome! Please follow the SciML ColPrac guide. To get started we recommend you have a look at the Explanation section in the docs. The subsection explaining the package architecture may be particularly useful. You may already have a specific idea about what you want to contribute, in which case please feel free to open an issue and pull request. If you don’t have anything specific in mind, the list of outstanding issues may be a good source of inspiration. 
If you decide to work on an outstanding issue, be sure to check its current status: if it’s “In Progress”, check in with the developer who last worked on the issue to see how you may help.","category":"page"},{"location":"how_to_guides/timeseries/#How-to-Conformalize-a-Time-Series-Model","page":"How to Conformalize a Time Series Model","title":"How to Conformalize a Time Series Model","text":"","category":"section"},{"location":"how_to_guides/timeseries/","page":"How to Conformalize a Time Series Model","title":"How to Conformalize a Time Series Model","text":"Time series data is prevalent across various domains, such as finance, weather forecasting, energy, and supply chains. However, accurately quantifying uncertainty in time series predictions is often a complex task due to inherent temporal dependencies, non-stationarity, and noise in the data. In this context, Conformal Prediction offers a valuable solution by providing prediction intervals which offer a sound way to quantify uncertainty.","category":"page"},{"location":"how_to_guides/timeseries/","page":"How to Conformalize a Time Series Model","title":"How to Conformalize a Time Series Model","text":"This how-to guide demonstrates how you can conformalize a time series model using Ensemble Batch Prediction Intervals (EnbPI) (Xu and Xie 2021). This method enables the updating of prediction intervals whenever new observations are available. 
This dynamic update process allows the method to adapt to changing conditions, accounting for the potential degradation of predictions or the increase in noise levels in the data.","category":"page"},{"location":"how_to_guides/timeseries/#The-Task-at-Hand","page":"How to Conformalize a Time Series Model","title":"The Task at Hand","text":"","category":"section"},{"location":"how_to_guides/timeseries/","page":"How to Conformalize a Time Series Model","title":"How to Conformalize a Time Series Model","text":"Inspired by MAPIE, we employ the Victoria electricity demand dataset. This dataset contains hourly electricity demand (in GW) for Victoria state in Australia, along with corresponding temperature data (in Celsius degrees).","category":"page"},{"location":"how_to_guides/timeseries/","page":"How to Conformalize a Time Series Model","title":"How to Conformalize a Time Series Model","text":"using CSV, DataFrames\ndf = CSV.read(\"./dev/artifacts/electricity_demand.csv\", DataFrame)","category":"page"},{"location":"how_to_guides/timeseries/#Feature-engineering","page":"How to Conformalize a Time Series Model","title":"Feature engineering","text":"","category":"section"},{"location":"how_to_guides/timeseries/","page":"How to Conformalize a Time Series Model","title":"How to Conformalize a Time Series Model","text":"In this how-to guide, we only focus on date, time and lag features.","category":"page"},{"location":"how_to_guides/timeseries/#Date-and-Time-related-features","page":"How to Conformalize a Time Series Model","title":"Date and Time-related features","text":"","category":"section"},{"location":"how_to_guides/timeseries/","page":"How to Conformalize a Time Series Model","title":"How to Conformalize a Time Series Model","text":"We create temporal features out of the date and hour:","category":"page"},{"location":"how_to_guides/timeseries/","page":"How to Conformalize a Time Series Model","title":"How to Conformalize a Time Series Model","text":"using 
Dates\ndf.Datetime = Dates.DateTime.(df.Datetime, \"yyyy-mm-dd HH:MM:SS\")\ndf.Weekofyear = Dates.week.(df.Datetime)\ndf.Weekday = Dates.dayofweek.(df.Datetime)\ndf.hour = Dates.hour.(df.Datetime) ","category":"page"},{"location":"how_to_guides/timeseries/","page":"How to Conformalize a Time Series Model","title":"How to Conformalize a Time Series Model","text":"Additionally, to simulate sudden changes caused by unforeseen events, such as blackouts or lockdowns, we deliberately reduce the electricity demand by 2GW from February 22nd onward.","category":"page"},{"location":"how_to_guides/timeseries/","page":"How to Conformalize a Time Series Model","title":"How to Conformalize a Time Series Model","text":"df.Demand_updated = copy(df.Demand)\ncondition = df.Datetime .>= Date(\"2014-02-22\")\ndf[condition, :Demand_updated] .= df[condition, :Demand_updated] .- 2","category":"page"},{"location":"how_to_guides/timeseries/","page":"How to Conformalize a Time Series Model","title":"How to Conformalize a Time Series Model","text":"This is how the data looks after our manipulation:","category":"page"},{"location":"how_to_guides/timeseries/","page":"How to Conformalize a Time Series Model","title":"How to Conformalize a Time Series Model","text":"cutoff_point = 200\nplot(df[cutoff_point:split_index, [:Datetime]].Datetime, df[cutoff_point:split_index, :].Demand ,\n label=\"training data\", color=:green, xlabel = \"Date\" , ylabel=\"Electricity demand(GW)\")\nplot!(df[split_index+1 : size(df,1), [:Datetime]].Datetime, df[split_index+1 : size(df,1), : ].Demand,\n label=\"test data\", color=:orange, xlabel = \"Date\" , ylabel=\"Electricity demand(GW)\")\nplot!(df[split_index+1 : size(df,1), [:Datetime]].Datetime, df[split_index+1 : size(df,1), : ].Demand_updated, label=\"updated test data\", color=:red, linewidth=1, framestyle=:box)\nplot!(legend=:outerbottom, legendcolumns=3)\nplot!(size=(850,400), left_margin = 
5Plots.mm)","category":"page"},{"location":"how_to_guides/timeseries/#Lag-features","page":"How to Conformalize a Time Series Model","title":"Lag features","text":"","category":"section"},{"location":"how_to_guides/timeseries/","page":"How to Conformalize a Time Series Model","title":"How to Conformalize a Time Series Model","text":"using ShiftedArrays\nn_lags = 5\nfor i = 1:n_lags\n DataFrames.transform!(df, \"Demand\" => (x -> ShiftedArrays.lag(x, i)) => \"lag_hour_$i\")\nend\n\ndf_dropped_missing = dropmissing(df)\ndf_dropped_missing","category":"page"},{"location":"how_to_guides/timeseries/#Train-test-split","page":"How to Conformalize a Time Series Model","title":"Train-test split","text":"","category":"section"},{"location":"how_to_guides/timeseries/","page":"How to Conformalize a Time Series Model","title":"How to Conformalize a Time Series Model","text":"As usual, we split the data into train and test sets. We use the first 90% of the data for training and the remaining 10% for testing.","category":"page"},{"location":"how_to_guides/timeseries/","page":"How to Conformalize a Time Series Model","title":"How to Conformalize a Time Series Model","text":"features_cols = DataFrames.select(df_dropped_missing, Not([:Datetime, :Demand, :Demand_updated]))\nX = Matrix(features_cols)\ny = Matrix(df_dropped_missing[:, [:Demand_updated]])\nsplit_index = floor(Int, 0.9 * size(y , 1)) \nprintln(split_index)\nX_train = X[1:split_index, :]\ny_train = y[1:split_index, :]\nX_test = X[split_index+1 : size(y,1), :]\ny_test = y[split_index+1 : size(y,1), :]","category":"page"},{"location":"how_to_guides/timeseries/#Loading-model-using-MLJ-interface","page":"How to Conformalize a Time Series Model","title":"Loading model using MLJ interface","text":"","category":"section"},{"location":"how_to_guides/timeseries/","page":"How to Conformalize a Time Series Model","title":"How to Conformalize a Time Series Model","text":"As our baseline model, we use a boosted tree 
regressor:","category":"page"},{"location":"how_to_guides/timeseries/","page":"How to Conformalize a Time Series Model","title":"How to Conformalize a Time Series Model","text":"using MLJ\nEvoTreeRegressor = @load EvoTreeRegressor pkg=EvoTrees verbosity=0\nmodel = EvoTreeRegressor(nrounds =100, max_depth=10, rng=123)","category":"page"},{"location":"how_to_guides/timeseries/#Conformal-time-series","page":"How to Conformalize a Time Series Model","title":"Conformal time series","text":"","category":"section"},{"location":"how_to_guides/timeseries/","page":"How to Conformalize a Time Series Model","title":"How to Conformalize a Time Series Model","text":"Next, we conformalize the model using EnbPI. First, we will proceed without updating training set residuals to build prediction intervals. The result is shown in the following figure:","category":"page"},{"location":"how_to_guides/timeseries/","page":"How to Conformalize a Time Series Model","title":"How to Conformalize a Time Series Model","text":"using ConformalPrediction\n\nconf_model = conformal_model(model; method=:time_series_ensemble_batch, coverage=0.95)\nmach = machine(conf_model, X_train, y_train)\ntrain = [1:split_index;]\nfit!(mach, rows=train)\n\ny_pred_interval = MLJ.predict(conf_model, mach.fitresult, X_test)\nlb = [ minimum(tuple_data) for tuple_data in y_pred_interval]\nub = [ maximum(tuple_data) for tuple_data in y_pred_interval]\ny_pred = [mean(tuple_data) for tuple_data in y_pred_interval]","category":"page"},{"location":"how_to_guides/timeseries/","page":"How to Conformalize a Time Series Model","title":"How to Conformalize a Time Series Model","text":"#| echo: false\n#| output: true\ncutoff_point = findfirst(df_dropped_missing.Datetime .== Date(\"2014-02-15\"))\nplot(df_dropped_missing[cutoff_point:split_index, [:Datetime]].Datetime, y_train[cutoff_point:split_index] ,\n label=\"train\", color=:green , xlabel = \"Date\" , ylabel=\"Electricity demand(GW)\", 
linewidth=1)\nplot!(df_dropped_missing[split_index+1 : size(y,1), [:Datetime]].Datetime,\n y_test, label=\"test\", color=:red)\nplot!(df_dropped_missing[split_index+1 : size(y,1), [:Datetime]].Datetime ,\n y_pred, label =\"prediction\", color=:blue)\nplot!(df_dropped_missing[split_index+1 : size(y,1), [:Datetime]].Datetime,\n lb, fillrange = ub, fillalpha = 0.2, label = \"prediction interval w/o EnbPI\",\n color=:lake, linewidth=0, framestyle=:box)\nplot!(legend=:outerbottom, legendcolumns=4, legendfontsize=6)\nplot!(size=(850,400), left_margin = 5Plots.mm)","category":"page"},{"location":"how_to_guides/timeseries/","page":"How to Conformalize a Time Series Model","title":"How to Conformalize a Time Series Model","text":"We can use the partial_fit method of the EnbPI implementation in ConformalPrediction.jl to adjust prediction intervals to sudden change points in test sets that the model has not seen during training. In the experiment below, sample_size indicates the batch size of new observations. You can decide whether to update the residuals by sample_size only, or to also remove the first n residuals (shift_size = n). 
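To illustrate what shifting by n residuals means, here is a toy, hypothetical helper (not the package's actual partial_fit) that mimics the residual bookkeeping: append the new batch of residuals, then drop the earliest `shift_size` ones so that stale residuals stop influencing the quantile:

```julia
# Toy residual-buffer update: `residuals` is the current buffer,
# `new_residuals` the batch from newly observed data. After appending the
# batch, the first `shift_size` (oldest) residuals are discarded.
function update_residuals(residuals, new_residuals, shift_size)
    updated = vcat(residuals, new_residuals)
    return updated[(shift_size + 1):end]
end

res = collect(1.0:10.0)                     # 10 old residuals
res = update_residuals(res, [11.0, 12.0], 3)

@assert length(res) == 9                    # 10 + 2 - 3
@assert first(res) == 4.0                   # earliest residuals removed
```

With `shift_size = 0` the buffer only grows; a positive `shift_size` trades off memory of the past against responsiveness to recent changes.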
The latter allows you to remove early residuals that no longer have a positive impact on the current observations.","category":"page"},{"location":"how_to_guides/timeseries/","page":"How to Conformalize a Time Series Model","title":"How to Conformalize a Time Series Model","text":"The chart below compares the results to the previous experiment without updating residuals:","category":"page"},{"location":"how_to_guides/timeseries/","page":"How to Conformalize a Time Series Model","title":"How to Conformalize a Time Series Model","text":"sample_size = 30\nshift_size = 100\nlast_index = size(X_test , 1)\nlb_updated , ub_updated = ([], [])\nfor step in 1:sample_size:last_index\n if last_index - step < sample_size\n y_interval = MLJ.predict(conf_model, mach.fitresult, X_test[step:last_index , :])\n partial_fit(mach.model , mach.fitresult, X_test[step:last_index , :], y_test[step:last_index , :], shift_size)\n else\n y_interval = MLJ.predict(conf_model, mach.fitresult, X_test[step:step+sample_size-1 , :])\n partial_fit(mach.model , mach.fitresult, X_test[step:step+sample_size-1 , :], y_test[step:step+sample_size-1 , :], shift_size) \n end \n lb_updatedᵢ= [ minimum(tuple_data) for tuple_data in y_interval]\n push!(lb_updated,lb_updatedᵢ)\n ub_updatedᵢ = [ maximum(tuple_data) for tuple_data in y_interval]\n push!(ub_updated, ub_updatedᵢ)\nend\nlb_updated = reduce(vcat, lb_updated)\nub_updated = reduce(vcat, ub_updated)","category":"page"},{"location":"how_to_guides/timeseries/","page":"How to Conformalize a Time Series Model","title":"How to Conformalize a Time Series Model","text":"#| echo: false\n#| output: true\nplot(df_dropped_missing[cutoff_point:split_index, [:Datetime]].Datetime, y_train[cutoff_point:split_index] ,\n label=\"train\", color=:green , xlabel = \"Date\" , ylabel=\"Electricity demand(GW)\", linewidth=1)\nplot!(df_dropped_missing[split_index+1 : size(y,1), [:Datetime]].Datetime, y_test,\n label=\"test\", color=:red)\nplot!(df_dropped_missing[split_index+1 : 
size(y,1), [:Datetime]].Datetime ,\n y_pred, label =\"prediction\", color=:blue)\nplot!(df_dropped_missing[split_index+1 : size(y,1), [:Datetime]].Datetime,\n lb_updated, fillrange = ub_updated, fillalpha = 0.2, label = \"EnbPI\",\n color=:lake, linewidth=0, framestyle=:box)\nplot!(legend=:outerbottom, legendcolumns=4)\nplot!(size=(850,400), left_margin = 5Plots.mm)","category":"page"},{"location":"how_to_guides/timeseries/#Results","page":"How to Conformalize a Time Series Model","title":"Results","text":"","category":"section"},{"location":"how_to_guides/timeseries/","page":"How to Conformalize a Time Series Model","title":"How to Conformalize a Time Series Model","text":"In time series problems, unexpected incidents can lead to sudden changes, and such scenarios are highly probable. As illustrated earlier, the model’s training data lacks information about these change points, making it unable to anticipate them. The top figure demonstrates that when residuals are not updated, the prediction intervals solely rely on the distribution of residuals from the training set. Consequently, these intervals fail to encompass the true observations after the change point, resulting in a sudden drop in coverage.","category":"page"},{"location":"how_to_guides/timeseries/","page":"How to Conformalize a Time Series Model","title":"How to Conformalize a Time Series Model","text":"However, by partially updating the residuals, the method becomes adept at capturing the increasing uncertainties in model predictions. It is important to note that the changes in uncertainty occur approximately one day after the change point. 
This delay is attributed to the requirement of having a sufficient number of new residuals to alter the quantiles obtained from the residual distribution.","category":"page"},{"location":"how_to_guides/timeseries/#References","page":"How to Conformalize a Time Series Model","title":"References","text":"","category":"section"},{"location":"how_to_guides/timeseries/","page":"How to Conformalize a Time Series Model","title":"How to Conformalize a Time Series Model","text":"Xu, Chen, and Yao Xie. 2021. “Conformal Prediction Interval for Dynamic Time-Series.” In, 11559–69. PMLR. https://proceedings.mlr.press/v139/xu21h.html.","category":"page"},{"location":"explanation/architecture/#Package-Architecture","page":"Package Architecture","title":"Package Architecture","text":"","category":"section"},{"location":"explanation/architecture/","page":"Package Architecture","title":"Package Architecture","text":"The diagram below demonstrates the package architecture at the time of writing. This is still subject to change, so any thoughts and comments are very much welcome.","category":"page"},{"location":"explanation/architecture/","page":"Package Architecture","title":"Package Architecture","text":"The goal is to make this package as compatible as possible with MLJ to tap into existing functionality. The basic idea is to subtype MLJ Supervised models and then use concrete types to implement different approaches to conformal prediction. 
For each of these concrete types, the compulsory MMI.fit and MMI.predict methods need to be implemented (see here).","category":"page"},{"location":"explanation/architecture/","page":"Package Architecture","title":"Package Architecture","text":"(Image: )","category":"page"},{"location":"explanation/architecture/#Abstract-Subtypes","page":"Package Architecture","title":"Abstract Subtypes","text":"","category":"section"},{"location":"explanation/architecture/","page":"Package Architecture","title":"Package Architecture","text":"Currently, I intend to work with three different abstract subtypes:","category":"page"},{"location":"explanation/architecture/","page":"Package Architecture","title":"Package Architecture","text":"ConformalPrediction.ConformalInterval\nConformalPrediction.ConformalProbabilisticSet\nConformalPrediction.ConformalProbabilistic","category":"page"},{"location":"explanation/architecture/#fit-and-predict","page":"Package Architecture","title":"fit and predict","text":"","category":"section"},{"location":"explanation/architecture/","page":"Package Architecture","title":"Package Architecture","text":"The fit and predict methods are compulsory in order to prepare models for general use with MLJ. They also serve to implement the logic underlying the various approaches to conformal prediction. 
To understand how this currently works, have a look at the ConformalPrediction.AdaptiveInductiveClassifier as an example: fit(conf_model::ConformalPrediction.AdaptiveInductiveClassifier, verbosity, X, y) and predict(conf_model::ConformalPrediction.AdaptiveInductiveClassifier, fitresult, Xnew).","category":"page"},{"location":"explanation/","page":"Overview","title":"Overview","text":"CurrentModule = ConformalPrediction","category":"page"},{"location":"explanation/#Explanation","page":"Overview","title":"Explanation","text":"","category":"section"},{"location":"explanation/","page":"Overview","title":"Overview","text":"In this section you will find detailed explanations about the methodology and code.","category":"page"},{"location":"explanation/","page":"Overview","title":"Overview","text":"Explanation clarifies, deepens and broadens the reader’s understanding of a subject.— Diátaxis","category":"page"},{"location":"explanation/","page":"Overview","title":"Overview","text":"In other words, you come here because you are interested in understanding how all of this actually works 🤓","category":"page"},{"location":"tutorials/classification/#Classification","page":"Classification","title":"Classification","text":"","category":"section"},{"location":"tutorials/classification/","page":"Classification","title":"Classification","text":"This tutorial is based in part on this blog post.","category":"page"},{"location":"tutorials/classification/#Split-Conformal-Classification","page":"Classification","title":"Split Conformal Classification","text":"","category":"section"},{"location":"tutorials/classification/","page":"Classification","title":"Classification","text":"We consider a simple binary classification problem. Let (X(i),Y(i)), i = 1, ..., n denote our feature-label pairs and let μ : 𝒳 ↦ 𝒴 denote the mapping from features to labels. For illustration purposes we will use the moons dataset 🌙. 
Using MLJ.jl we first generate the data and split it into a training and test set:","category":"page"},{"location":"tutorials/classification/","page":"Classification","title":"Classification","text":"using MLJ\nusing Random\nRandom.seed!(123)\n\n# Data:\nX, y = make_moons(500; noise=0.15)\nX = MLJ.table(convert.(Float32, MLJ.matrix(X)))\ntrain, test = partition(eachindex(y), 0.8, shuffle=true)","category":"page"},{"location":"tutorials/classification/","page":"Classification","title":"Classification","text":"Here we will use a specific case of CP called split conformal prediction, which can then be summarized as follows:[1]","category":"page"},{"location":"tutorials/classification/","page":"Classification","title":"Classification","text":"Partition the training data into a proper training set and a separate calibration set: 𝒟_(n) = 𝒟^(train) ∪ 𝒟^(cali).\nTrain the machine learning model on the proper training set: μ̂_(i ∈ 𝒟^(train))(X_(i),Y_(i)).\nCompute nonconformity scores, 𝒮, using the calibration data 𝒟^(cali) and the fitted model μ̂_(i ∈ 𝒟^(train)).\nFor a user-specified desired coverage ratio (1−α) compute the corresponding quantile, q̂, of the empirical distribution of nonconformity scores, 𝒮.\nFor the given quantile and test sample X_(test), form the corresponding conformal prediction set:","category":"page"},{"location":"tutorials/classification/","page":"Classification","title":"Classification","text":"C(X_(test)) = {y: s(X_(test), y) ≤ q̂}","category":"page"},{"location":"tutorials/classification/","page":"Classification","title":"Classification","text":"This is the default procedure used for classification and regression in ConformalPrediction.jl.","category":"page"},{"location":"tutorials/classification/","page":"Classification","title":"Classification","text":"Now let’s take this to our 🌙 data. To illustrate the package functionality we will demonstrate the envisioned workflow. 
We first define our atomic machine learning model following standard MLJ.jl conventions. Using ConformalPrediction.jl we then wrap our atomic model in a conformal model using the standard API call conformal_model(model::Supervised; kwargs...). To train and predict from our conformal model we can then rely on the conventional MLJ.jl procedure again. In particular, we wrap our conformal model in data (turning it into a machine) and then fit it to the training data. Finally, we use our machine to predict the label for a new test sample Xtest:","category":"page"},{"location":"tutorials/classification/","page":"Classification","title":"Classification","text":"# Model:\nKNNClassifier = @load KNNClassifier pkg=NearestNeighborModels\nmodel = KNNClassifier(;K=50) \n\n# Training:\nusing ConformalPrediction\nconf_model = conformal_model(model; coverage=.9)\nmach = machine(conf_model, X, y)\nfit!(mach, rows=train)\n\n# Conformal Prediction:\nXtest = selectrows(X, test)\nytest = y[test]\nŷ = predict(mach, Xtest)\nŷ[1]","category":"page"},{"location":"tutorials/classification/","page":"Classification","title":"Classification","text":"import NearestNeighborModels ✔\n\nUnivariateFinite{Multiclass{2}}(0=>0.94)","category":"page"},{"location":"tutorials/classification/","page":"Classification","title":"Classification","text":"The final predictions are set-valued. While the softmax output remains unchanged for the SimpleInductiveClassifier, the size of the prediction set depends on the chosen coverage rate, (1−α).","category":"page"},{"location":"tutorials/classification/","page":"Classification","title":"Classification","text":"When specifying a coverage rate very close to one, the prediction set will typically include many (in some cases all) of the possible labels. Below, for example, both classes are included in the prediction set when setting the coverage rate equal to (1−α)=1.0. 
This is intuitive, since high coverage quite literally requires that the true label is covered by the prediction set with high probability.","category":"page"},{"location":"tutorials/classification/","page":"Classification","title":"Classification","text":"conf_model = conformal_model(model; coverage=1.0, method=:simple_inductive)\nmach = machine(conf_model, X, y)\nfit!(mach, rows=train)\n\n# Conformal Prediction:\nXtest = (x1=[1],x2=[0])\npredict(mach, Xtest)[1]","category":"page"},{"location":"tutorials/classification/","page":"Classification","title":"Classification","text":"UnivariateFinite{Multiclass{2}}(0=>0.5, 1=>0.5)","category":"page"},{"location":"tutorials/classification/","page":"Classification","title":"Classification","text":"Conversely, for low coverage rates, prediction sets can also be empty. For a choice of (1−α)=0.1, for example, the prediction set for our test sample is empty. This is a bit difficult to think about intuitively, and I have not yet come across a satisfactory, intuitive interpretation.[2] When the prediction set is empty, the predict call currently returns missing:","category":"page"},{"location":"tutorials/classification/","page":"Classification","title":"Classification","text":"conf_model = conformal_model(model; coverage=0.1)\nmach = machine(conf_model, X, y)\nfit!(mach, rows=train)\n\n# Conformal Prediction:\npredict(mach, Xtest)[1]","category":"page"},{"location":"tutorials/classification/","page":"Classification","title":"Classification","text":"missing","category":"page"},{"location":"tutorials/classification/","page":"Classification","title":"Classification","text":"cov_ = .95\nconf_model = conformal_model(model; coverage=cov_)\nmach = machine(conf_model, X, y)\nfit!(mach, rows=train)\nMarkdown.parse(\"\"\"\nThe following chart shows the resulting predicted probabilities for ``y=1`` (left) and set size (right) for a choice of 
``(1-\\\\alpha)``=$cov_.\n\"\"\")","category":"page"},{"location":"tutorials/classification/","page":"Classification","title":"Classification","text":"The following chart shows the resulting predicted probabilities for y = 1 (left) and set size (right) for a choice of (1−α)=0.95.","category":"page"},{"location":"tutorials/classification/","page":"Classification","title":"Classification","text":"using Plots\np_proba = contourf(mach.model, mach.fitresult, X, y)\np_set_size = contourf(mach.model, mach.fitresult, X, y; plot_set_size=true)\nplot(p_proba, p_set_size, size=(800,250))","category":"page"},{"location":"tutorials/classification/","page":"Classification","title":"Classification","text":"(Image: )","category":"page"},{"location":"tutorials/classification/","page":"Classification","title":"Classification","text":"The animation below should provide some more intuition as to what exactly is happening here. It illustrates the effect of the chosen coverage rate on the predicted softmax output and the set size in the two-dimensional feature space. Contours are overlaid with the moon data points (including test data). The two samples highlighted in red, X₁ and X₂, have been manually added for illustration purposes. Let’s look at these one by one.","category":"page"},{"location":"tutorials/classification/","page":"Classification","title":"Classification","text":"Firstly, note that X₁ (red cross) falls into a region of the domain that is characterized by high predictive uncertainty. It sits right at the bottom-right corner of our class-zero moon 🌜 (orange), a region that is almost entirely enveloped by our class-one moon 🌛 (green). For low coverage rates the prediction set for X₁ is empty: on the left-hand side this is indicated by the missing contour for the softmax probability; on the right-hand side we can observe that the corresponding set size is indeed zero. 
For high coverage rates the prediction set includes both y = 0 and y = 1, indicative of the fact that the conformal classifier is uncertain about the true label.","category":"page"},{"location":"tutorials/classification/","page":"Classification","title":"Classification","text":"With respect to X₂, we observe that while also sitting on the fringe of our class-zero moon, this sample populates a region that is not fully enveloped by data points from the opposite class. In this region, the underlying atomic classifier can be expected to be more certain about its predictions, but still not highly confident. How is this reflected by our corresponding conformal prediction sets?","category":"page"},{"location":"tutorials/classification/","page":"Classification","title":"Classification","text":"Xtest_2 = (x1=[-0.5],x2=[0.25])\np̂_2 = pdf(predict(mach, Xtest_2)[1], 0)","category":"page"},{"location":"tutorials/classification/","page":"Classification","title":"Classification","text":"Well, for low coverage rates (roughly < 0.9) the conformal prediction set does not include y = 0: the set size is zero (right panel). Only for higher coverage rates do we have C(X₂) = {0}: the coverage rate is high enough to include y = 0, but the corresponding softmax probability is still fairly low. For example, for (1−α) = 0.95 we have p̂(y=0|X₂) = 0.72.","category":"page"},{"location":"tutorials/classification/","page":"Classification","title":"Classification","text":"These two examples illustrate an interesting point: for regions characterized by high predictive uncertainty, conformal prediction sets are typically empty (for low coverage) or large (for high coverage). 
While set-valued predictions may be something to get used to, this notion is overall intuitive.","category":"page"},{"location":"tutorials/classification/","page":"Classification","title":"Classification","text":"# Setup\ncoverages = range(0.75,1.0,length=5)\nn = 100\nx1_range = range(extrema(X.x1)...,length=n)\nx2_range = range(extrema(X.x2)...,length=n)\n\nanim = @animate for coverage in coverages\n conf_model = conformal_model(model; coverage=coverage)\n mach = machine(conf_model, X, y)\n fit!(mach, rows=train)\n # Probabilities:\n p1 = contourf(mach.model, mach.fitresult, X, y)\n scatter!(p1, Xtest.x1, Xtest.x2, ms=6, c=:red, label=\"X₁\", shape=:cross, msw=6)\n scatter!(p1, Xtest_2.x1, Xtest_2.x2, ms=6, c=:red, label=\"X₂\", shape=:diamond, msw=6)\n p2 = contourf(mach.model, mach.fitresult, X, y; plot_set_size=true)\n scatter!(p2, Xtest.x1, Xtest.x2, ms=6, c=:red, label=\"X₁\", shape=:cross, msw=6)\n scatter!(p2, Xtest_2.x1, Xtest_2.x2, ms=6, c=:red, label=\"X₂\", shape=:diamond, msw=6)\n plot(p1, p2, plot_title=\"(1-α)=$(round(coverage,digits=2))\", size=(800,300))\nend\n\ngif(anim, joinpath(www_path,\"classification.gif\"), fps=1)","category":"page"},{"location":"tutorials/classification/","page":"Classification","title":"Classification","text":"(Image: )","category":"page"},{"location":"tutorials/classification/#Adaptive-Sets","page":"Classification","title":"Adaptive Sets","text":"","category":"section"},{"location":"tutorials/classification/","page":"Classification","title":"Classification","text":"Instead of using the simple approach, we can use adaptive prediction sets (Angelopoulos and Bates 2021):","category":"page"},{"location":"tutorials/classification/","page":"Classification","title":"Classification","text":"conf_model = conformal_model(model; coverage=cov_, method=:adaptive_inductive)\nmach = machine(conf_model, X, y)\nfit!(mach, rows=train)\nresults[:adaptive_inductive] = 
mach","category":"page"},{"location":"tutorials/classification/","page":"Classification","title":"Classification","text":"using Plots\np_proba = contourf(mach.model, mach.fitresult, X, y)\np_set_size = contourf(mach.model, mach.fitresult, X, y; plot_set_size=true)\nplot(p_proba, p_set_size, size=(800,250))","category":"page"},{"location":"tutorials/classification/#Evaluation","page":"Classification","title":"Evaluation","text":"","category":"section"},{"location":"tutorials/classification/","page":"Classification","title":"Classification","text":"For evaluation of conformal predictors we follow Angelopoulos and Bates (2021) (Section 3). As a first step towards adaptiveness (adaptivity), the authors recommend inspecting the set size of conformal predictions. The chart below shows the interval width for the different methods along with the ground truth interval width:","category":"page"},{"location":"tutorials/classification/","page":"Classification","title":"Classification","text":"plt_list = []\nfor (_mod, mach) in results\n push!(plt_list, bar(mach.model, mach.fitresult, X; title=String(_mod)))\nend\nplot(plt_list..., size=(800,300))","category":"page"},{"location":"tutorials/classification/","page":"Classification","title":"Classification","text":"(Image: Figure 1: Prediction interval width.)","category":"page"},{"location":"tutorials/classification/","page":"Classification","title":"Classification","text":"We can also use specific metrics like empirical coverage and size-stratified coverage to check for correctness and adaptiveness, respectively. To this end, the package provides custom measures that are compatible with MLJ.jl. 
In other words, we can evaluate model performance in true MLJ.jl fashion (see here).","category":"page"},{"location":"tutorials/classification/","page":"Classification","title":"Classification","text":"The code below runs the evaluation with respect to both metrics, emp_coverage and ssc for a single conformal machine:","category":"page"},{"location":"tutorials/classification/","page":"Classification","title":"Classification","text":"_mod, mach = first(results)\n_eval = evaluate!(\n mach,\n operation=predict,\n measure=[emp_coverage, ssc]\n)\n# display(_eval)\nprintln(\"Empirical coverage for $(_mod): $(round(_eval.measurement[1], digits=3))\")\nprintln(\"SSC for $(_mod): $(round(_eval.measurement[2], digits=3))\")","category":"page"},{"location":"tutorials/classification/","page":"Classification","title":"Classification","text":"Empirical coverage for adaptive_inductive: 0.962\nSSC for adaptive_inductive: 0.962","category":"page"},{"location":"tutorials/classification/#References","page":"Classification","title":"References","text":"","category":"section"},{"location":"tutorials/classification/","page":"Classification","title":"Classification","text":"Angelopoulos, Anastasios N., and Stephen Bates. 2021. 
“A Gentle Introduction to Conformal Prediction and Distribution-Free Uncertainty Quantification.” https://arxiv.org/abs/2107.07511.","category":"page"},{"location":"tutorials/classification/","page":"Classification","title":"Classification","text":"[1] In other places split conformal prediction is sometimes referred to as inductive conformal prediction.","category":"page"},{"location":"tutorials/classification/","page":"Classification","title":"Classification","text":"[2] Any thoughts/comments welcome!","category":"page"},{"location":"tutorials/plotting/#Visualization-using-TaijaPlotting.jl","page":"Visualization using TaijaPlotting.jl","title":"Visualization using TaijaPlotting.jl","text":"","category":"section"},{"location":"tutorials/plotting/","page":"Visualization using TaijaPlotting.jl","title":"Visualization using TaijaPlotting.jl","text":"CurrentModule = ConformalPrediction","category":"page"},{"location":"tutorials/plotting/","page":"Visualization using TaijaPlotting.jl","title":"Visualization using TaijaPlotting.jl","text":"This tutorial demonstrates how various custom plotting methods can be used to visually analyze conformal predictors.","category":"page"},{"location":"tutorials/plotting/","page":"Visualization using TaijaPlotting.jl","title":"Visualization using TaijaPlotting.jl","text":"using ConformalPrediction\nusing Plots, TaijaPlotting","category":"page"},{"location":"tutorials/plotting/#Regression","page":"Visualization using TaijaPlotting.jl","title":"Regression","text":"","category":"section"},{"location":"tutorials/plotting/#Visualizing-Prediction-Intervals","page":"Visualization using TaijaPlotting.jl","title":"Visualizing Prediction Intervals","text":"","category":"section"},{"location":"tutorials/plotting/","page":"Visualization using TaijaPlotting.jl","title":"Visualization using TaijaPlotting.jl","text":"For conformal regressors, the TaijaPlotting.plot can be used to visualize the prediction intervals for given data 
points.","category":"page"},{"location":"tutorials/plotting/#Univariate-Input","page":"Visualization using TaijaPlotting.jl","title":"Univariate Input","text":"","category":"section"},{"location":"tutorials/plotting/","page":"Visualization using TaijaPlotting.jl","title":"Visualization using TaijaPlotting.jl","text":"using MLJ\nX, y = make_regression(100, 1; noise=0.3)","category":"page"},{"location":"tutorials/plotting/","page":"Visualization using TaijaPlotting.jl","title":"Visualization using TaijaPlotting.jl","text":"EvoTreeRegressor = @load EvoTreeRegressor pkg=EvoTrees\nmodel = EvoTreeRegressor() \nconf_model = conformal_model(model)\nmach = machine(conf_model, X, y)\nfit!(mach)","category":"page"},{"location":"tutorials/plotting/","page":"Visualization using TaijaPlotting.jl","title":"Visualization using TaijaPlotting.jl","text":"plot(mach.model, mach.fitresult, X, y; input_var=1)","category":"page"},{"location":"tutorials/plotting/","page":"Visualization using TaijaPlotting.jl","title":"Visualization using TaijaPlotting.jl","text":"(Image: )","category":"page"},{"location":"tutorials/plotting/#Multivariate-Input","page":"Visualization using TaijaPlotting.jl","title":"Multivariate Input","text":"","category":"section"},{"location":"tutorials/plotting/","page":"Visualization using TaijaPlotting.jl","title":"Visualization using TaijaPlotting.jl","text":"using MLJ\nX, y = @load_boston\nschema(X)","category":"page"},{"location":"tutorials/plotting/","page":"Visualization using TaijaPlotting.jl","title":"Visualization using TaijaPlotting.jl","text":"EvoTreeRegressor = @load EvoTreeRegressor pkg=EvoTrees\nmodel = EvoTreeRegressor() \nconf_model = conformal_model(model)\nmach = machine(conf_model, X, y)\nfit!(mach)","category":"page"},{"location":"tutorials/plotting/","page":"Visualization using TaijaPlotting.jl","title":"Visualization using TaijaPlotting.jl","text":"input_vars = [:Crim, :Age, :Tax]\nnvars = length(input_vars)\nplt_list = []\nfor input_var in 
input_vars\n plt = plot(mach.model, mach.fitresult, X, y; input_var=input_var, title=input_var)\n push!(plt_list, plt)\nend\nplot(plt_list..., layout=(1,nvars), size=(nvars*200, 200))","category":"page"},{"location":"tutorials/plotting/","page":"Visualization using TaijaPlotting.jl","title":"Visualization using TaijaPlotting.jl","text":"(Image: )","category":"page"},{"location":"tutorials/plotting/#Visualizing-Set-Size","page":"Visualization using TaijaPlotting.jl","title":"Visualizing Set Size","text":"","category":"section"},{"location":"tutorials/plotting/","page":"Visualization using TaijaPlotting.jl","title":"Visualization using TaijaPlotting.jl","text":"To visualize the set size distribution, the TaijaPlotting.bar can be used. For regression models, the prediction interval widths are stratified into discrete bins.","category":"page"},{"location":"tutorials/plotting/","page":"Visualization using TaijaPlotting.jl","title":"Visualization using TaijaPlotting.jl","text":"bar(mach.model, mach.fitresult, X)","category":"page"},{"location":"tutorials/plotting/","page":"Visualization using TaijaPlotting.jl","title":"Visualization using TaijaPlotting.jl","text":"(Image: )","category":"page"},{"location":"tutorials/plotting/","page":"Visualization using TaijaPlotting.jl","title":"Visualization using TaijaPlotting.jl","text":"EvoTreeRegressor = @load EvoTreeRegressor pkg=EvoTrees\nmodel = EvoTreeRegressor() \nconf_model = conformal_model(model, method=:jackknife_plus)\nmach = machine(conf_model, X, y)\nfit!(mach)","category":"page"},{"location":"tutorials/plotting/","page":"Visualization using TaijaPlotting.jl","title":"Visualization using TaijaPlotting.jl","text":"bar(mach.model, mach.fitresult, X)","category":"page"},{"location":"tutorials/plotting/","page":"Visualization using TaijaPlotting.jl","title":"Visualization using TaijaPlotting.jl","text":"(Image: )","category":"page"},{"location":"tutorials/plotting/#Classification","page":"Visualization using 
TaijaPlotting.jl","title":"Classification","text":"","category":"section"},{"location":"tutorials/plotting/","page":"Visualization using TaijaPlotting.jl","title":"Visualization using TaijaPlotting.jl","text":"KNNClassifier = @load KNNClassifier pkg=NearestNeighborModels\nmodel = KNNClassifier(;K=3)","category":"page"},{"location":"tutorials/plotting/#Visualizing-Predictions","page":"Visualization using TaijaPlotting.jl","title":"Visualizing Predictions","text":"","category":"section"},{"location":"tutorials/plotting/#Stacked-Area-Charts","page":"Visualization using TaijaPlotting.jl","title":"Stacked Area Charts","text":"","category":"section"},{"location":"tutorials/plotting/","page":"Visualization using TaijaPlotting.jl","title":"Visualization using TaijaPlotting.jl","text":"Stacked area charts can be used to visualize prediction sets for any conformal classifier.","category":"page"},{"location":"tutorials/plotting/","page":"Visualization using TaijaPlotting.jl","title":"Visualization using TaijaPlotting.jl","text":"using MLJ\nn_input = 4\nX, y = make_blobs(100, n_input)","category":"page"},{"location":"tutorials/plotting/","page":"Visualization using TaijaPlotting.jl","title":"Visualization using TaijaPlotting.jl","text":"conf_model = conformal_model(model)\nmach = machine(conf_model, X, y)\nfit!(mach)","category":"page"},{"location":"tutorials/plotting/","page":"Visualization using TaijaPlotting.jl","title":"Visualization using TaijaPlotting.jl","text":"plt_list = []\nfor i in 1:n_input\n plt = areaplot(mach.model, mach.fitresult, X, y; input_var=i, title=\"Input $i\")\n push!(plt_list, plt)\nend\nplot(plt_list..., size=(220*n_input,200), layout=(1, n_input))","category":"page"},{"location":"tutorials/plotting/","page":"Visualization using TaijaPlotting.jl","title":"Visualization using TaijaPlotting.jl","text":"(Image: )","category":"page"},{"location":"tutorials/plotting/#Contour-Plots-for-Two-Dimensional-Inputs","page":"Visualization using 
TaijaPlotting.jl","title":"Contour Plots for Two-Dimensional Inputs","text":"","category":"section"},{"location":"tutorials/plotting/","page":"Visualization using TaijaPlotting.jl","title":"Visualization using TaijaPlotting.jl","text":"For conformal classifiers with exactly two input variables, the TaijaPlotting.contourf method can be used to visualize conformal predictions in the two-dimensional feature space.","category":"page"},{"location":"tutorials/plotting/","page":"Visualization using TaijaPlotting.jl","title":"Visualization using TaijaPlotting.jl","text":"using MLJ\nX, y = make_blobs(100, 2)","category":"page"},{"location":"tutorials/plotting/","page":"Visualization using TaijaPlotting.jl","title":"Visualization using TaijaPlotting.jl","text":"conf_model = conformal_model(model)\nmach = machine(conf_model, X, y)\nfit!(mach)","category":"page"},{"location":"tutorials/plotting/","page":"Visualization using TaijaPlotting.jl","title":"Visualization using TaijaPlotting.jl","text":"p1 = contourf(mach.model, mach.fitresult, X, y)\np2 = contourf(mach.model, mach.fitresult, X, y; plot_set_size=true)\nplot(p1, p2, size=(700,300))","category":"page"},{"location":"tutorials/plotting/","page":"Visualization using TaijaPlotting.jl","title":"Visualization using TaijaPlotting.jl","text":"(Image: )","category":"page"},{"location":"tutorials/plotting/#Visualizing-Set-Size-2","page":"Visualization using TaijaPlotting.jl","title":"Visualizing Set Size","text":"","category":"section"},{"location":"tutorials/plotting/","page":"Visualization using TaijaPlotting.jl","title":"Visualization using TaijaPlotting.jl","text":"To visualize the set size distribution, the TaijaPlotting.bar can be used. Recall that for more adaptive predictors the distribution of set sizes is typically spread out more widely, which reflects that “the procedure is effectively distinguishing between easy and hard inputs” (Angelopoulos and Bates 2021). 
This is desirable: when it is difficult to make predictions for a given sample, this should be reflected in the set size (or interval width in the regression case). Since ‘difficult’ lies on some spectrum that ranges from ‘very easy’ to ‘very difficult’, the set size should vary across the spectrum of ‘empty set’ to ‘all labels included’.","category":"page"},{"location":"tutorials/plotting/","page":"Visualization using TaijaPlotting.jl","title":"Visualization using TaijaPlotting.jl","text":"X, y = make_moons(500; noise=0.15)\nKNNClassifier = @load KNNClassifier pkg=NearestNeighborModels\nmodel = KNNClassifier(;K=50) ","category":"page"},{"location":"tutorials/plotting/","page":"Visualization using TaijaPlotting.jl","title":"Visualization using TaijaPlotting.jl","text":"conf_model = conformal_model(model)\nmach = machine(conf_model, X, y)\nfit!(mach)","category":"page"},{"location":"tutorials/plotting/","page":"Visualization using TaijaPlotting.jl","title":"Visualization using TaijaPlotting.jl","text":"p1 = contourf(mach.model, mach.fitresult, X, y; plot_set_size=true)\np2 = bar(mach.model, mach.fitresult, X)\nplot(p1, p2, size=(700,300))","category":"page"},{"location":"tutorials/plotting/","page":"Visualization using TaijaPlotting.jl","title":"Visualization using TaijaPlotting.jl","text":"(Image: )","category":"page"},{"location":"tutorials/plotting/","page":"Visualization using TaijaPlotting.jl","title":"Visualization using TaijaPlotting.jl","text":"conf_model = conformal_model(model, method=:adaptive_inductive)\nmach = machine(conf_model, X, y)\nfit!(mach)","category":"page"},{"location":"tutorials/plotting/","page":"Visualization using TaijaPlotting.jl","title":"Visualization using TaijaPlotting.jl","text":"p1 = contourf(mach.model, mach.fitresult, X, y; plot_set_size=true)\np2 = bar(mach.model, mach.fitresult, X)\nplot(p1, p2, size=(700,300))","category":"page"},{"location":"tutorials/plotting/","page":"Visualization using 
TaijaPlotting.jl","title":"Visualization using TaijaPlotting.jl","text":"(Image: )","category":"page"},{"location":"tutorials/plotting/","page":"Visualization using TaijaPlotting.jl","title":"Visualization using TaijaPlotting.jl","text":"Angelopoulos, Anastasios N., and Stephen Bates. 2021. “A Gentle Introduction to Conformal Prediction and Distribution-Free Uncertainty Quantification.” https://arxiv.org/abs/2107.07511.","category":"page"},{"location":"faq/#Frequently-Asked-Questions","page":"❓ FAQ","title":"Frequently Asked Questions","text":"","category":"section"},{"location":"faq/","page":"❓ FAQ","title":"❓ FAQ","text":"CurrentModule = ConformalPrediction","category":"page"},{"location":"faq/","page":"❓ FAQ","title":"❓ FAQ","text":"In this section we attempt to provide some reflections on frequently asked questions about the package and implemented methodologies. If you have a particular question that is not listed here, please feel free to also open an issue. While I can answer questions regarding the package with a certain degree of confidence, I do not pretend to have any definite answers to methodological questions, but merely reflections (see the disclaimer below).","category":"page"},{"location":"faq/#Package","page":"❓ FAQ","title":"Package","text":"","category":"section"},{"location":"faq/#Why-the-interface-to-MLJ.jl?","page":"❓ FAQ","title":"Why the interface to MLJ.jl?","text":"","category":"section"},{"location":"faq/","page":"❓ FAQ","title":"❓ FAQ","text":"An important design choice. MLJ.jl is a one-stop shop for common machine learning models and pipelines in Julia. It’s growing fast and the development team is very accessible, friendly and enthusiastic. Conformal Prediction is a model-agnostic approach to uncertainty quantification, so it can be applied to any common (supervised) machine learning model. For these reasons I decided to interface this package to MLJ.jl. 
The idea is that any (supervised) MLJ.jl model can be conformalized using ConformalPrediction.jl. By leveraging existing MLJ.jl functionality for common tasks like training, prediction and model evaluation, this package is lightweight and scalable.","category":"page"},{"location":"faq/#Methodology","page":"❓ FAQ","title":"Methodology","text":"","category":"section"},{"location":"faq/","page":"❓ FAQ","title":"❓ FAQ","text":"For methodological questions about Conformal Prediction, my best advice is to consult the literature on the topic. A good place to start is “A Gentle Introduction to Conformal Prediction and Distribution-Free Uncertainty Quantification” (Angelopoulos and Bates 2021): the tutorial is comprehensive, accessible and continuously updated. Below you will find a list of high-level questions and reflections.","category":"page"},{"location":"faq/","page":"❓ FAQ","title":"❓ FAQ","text":"warning: Disclaimer\n","category":"page"},{"location":"faq/","page":"❓ FAQ","title":"❓ FAQ","text":"    I want to emphasize that these are merely my own reflections. I provide these to the best of my knowledge and understanding of the topic, but please be aware that I am still on a learning journey myself. I have not read the entire literature on this topic (and won’t be able to in the future either). If you spot anything that doesn’t look right or sits at odds with something you read in the literature, please open an issue. Even better: if you want to add your own reflections and thoughts, feel free to open a pull request.","category":"page"},{"location":"faq/#What-is-Predictive-Uncertainty-Quantification?","page":"❓ FAQ","title":"What is Predictive Uncertainty Quantification?","text":"","category":"section"},{"location":"faq/","page":"❓ FAQ","title":"❓ FAQ","text":"Predictive Uncertainty Quantification deals with quantifying the uncertainty around predictions for the output variable of a supervised model. 
It is a subset of Uncertainty Quantification, which can also relate to uncertainty around model parameters, for example. I will sometimes use both terms interchangeably, even though I shouldn’t (please bear with me, or if you’re bothered by a particular slip-up, open a PR).","category":"page"},{"location":"faq/","page":"❓ FAQ","title":"❓ FAQ","text":"Uncertainty of model parameters is a very important topic itself: we might be interested in understanding, for example, if the estimated effect θ of some input variable x on the output variable y is statistically significant. This typically hinges on being able to quantify the uncertainty around the parameter θ. This package does not offer this sort of functionality. I have so far not come across any work on Conformal Inference that deals with parameter uncertainty, but I also haven’t properly looked for it.","category":"page"},{"location":"faq/#What-is-the-(marginal)-coverage-guarantee?","page":"❓ FAQ","title":"What is the (marginal) coverage guarantee?","text":"","category":"section"},{"location":"faq/","page":"❓ FAQ","title":"❓ FAQ","text":"The (marginal) coverage guarantee states that:","category":"page"},{"location":"faq/","page":"❓ FAQ","title":"❓ FAQ","text":"[…] the probability that the prediction set contains the correct label [for a fresh test point from the same distribution] is almost exactly 1 − α.— Angelopoulos and Bates (2021)","category":"page"},{"location":"faq/","page":"❓ FAQ","title":"❓ FAQ","text":"See Angelopoulos and Bates (2021) for a formal proof of this property or check out this section or this Pluto.jl 🎈 notebook to convince yourself through a small empirical exercise. 
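For a self-contained empirical exercise, the following base-Julia sketch runs split conformal prediction by hand on synthetic data (the data-generating process and the perfect point predictor are made up purely for illustration); the empirical coverage should land close to 1 − α:

```julia
using Random, Statistics

Random.seed!(42)
α = 0.1                        # user-specified miscoverage rate
n_cal, n_test = 1000, 1000

# Data-generating process: y = x + noise; the "model" predicts μ̂(x) = x.
gen(n) = (x = randn(n); (x, x .+ 0.5 .* randn(n)))

# Calibration step: nonconformity scores are absolute residuals.
x_cal, y_cal = gen(n_cal)
scores = abs.(y_cal .- x_cal)

# Finite-sample-corrected empirical quantile ⌈(n+1)(1−α)⌉/n of the scores.
q̂ = sort(scores)[ceil(Int, (n_cal + 1) * (1 - α))]

# Fresh test points: intervals μ̂(x) ± q̂ should cover y about 1 − α of the time.
x_test, y_test = gen(n_test)
covered = mean(abs.(y_test .- x_test) .<= q̂)
```

Averaged over repeated draws of calibration and test data, `covered` concentrates around 1 − α, which is exactly the marginal guarantee.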
Note that this property relates to a special case of conformal prediction, namely Split Conformal Prediction (Angelopoulos and Bates 2021).","category":"page"},{"location":"faq/#What-does-marginal-mean-in-this-context?","page":"❓ FAQ","title":"What does marginal mean in this context?","text":"","category":"section"},{"location":"faq/","page":"❓ FAQ","title":"❓ FAQ","text":"The property is “marginal” in the sense that the probability is averaged over the randomness in the data (Angelopoulos and Bates 2021). Depending on the size of the calibration set (context: Split Conformal Prediction), the realized coverage or estimated empirical coverage may deviate slightly from the user specified value 1 − α. To get a sense of this effect, you may want to check out this Pluto.jl 🎈 notebook: it allows you to adjust the calibration set size and check the resulting empirical coverage. See also Section 3 of Angelopoulos and Bates (2021).","category":"page"},{"location":"faq/#Is-CP-really-distribution-free?","page":"❓ FAQ","title":"Is CP really distribution-free?","text":"","category":"section"},{"location":"faq/","page":"❓ FAQ","title":"❓ FAQ","text":"The marginal coverage property holds under the assumption that the input data is exchangeable, which is a minimal distributional assumption. So, in my view, the short answer to this question is “No”. I believe that when people use the term “distribution-free” in this context, they mean that no prior assumptions are being made about the actual form or family of distribution(s) that generate the model parameters and data. 
If we define “distribution-free” in this sense, then the answer to me seems “Yes”.","category":"page"},{"location":"faq/#What-happens-if-this-minimal-distributional-assumption-is-violated?","page":"❓ FAQ","title":"What happens if this minimal distributional assumption is violated?","text":"","category":"section"},{"location":"faq/","page":"❓ FAQ","title":"❓ FAQ","text":"Then the marginal coverage property does not hold. See here for an example.","category":"page"},{"location":"faq/#What-are-set-valued-predictions?","page":"❓ FAQ","title":"What are set-valued predictions?","text":"","category":"section"},{"location":"faq/","page":"❓ FAQ","title":"❓ FAQ","text":"This should be clearer after reading through some of the other tutorials and explanations. For conformal classifiers of type ConformalProbabilisticSet, predictions are set-valued: these conformal classifiers may return multiple labels, a single label or no labels at all. Larger prediction sets indicate higher predictive uncertainty: for sets of size greater than one the conformal predictor cannot narrow its prediction down to a single label with certainty, so it returns all labels that meet the specified marginal coverage.","category":"page"},{"location":"faq/#How-do-I-interpret-the-distribution-of-set-size?","page":"❓ FAQ","title":"How do I interpret the distribution of set size?","text":"","category":"section"},{"location":"faq/","page":"❓ FAQ","title":"❓ FAQ","text":"It can be useful to plot the distribution of set sizes in order to visually assess how adaptive a conformal predictor is. For more adaptive predictors the distribution of set sizes is typically spread out more widely, which reflects that “the procedure is effectively distinguishing between easy and hard inputs” (Angelopoulos and Bates 2021). This is desirable: when for a given sample it is difficult to make predictions, this should be reflected in the set size (or interval width in the regression case). 
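As a toy illustration (the prediction sets below are hypothetical, not produced by this package), the set-size distribution can be tabulated directly; set_size and ineff play the analogous role for real conformal predictions:

```julia
# Hypothetical set-valued predictions for a 3-class problem.
prediction_sets = [["a"], ["a", "b"], ["b"], ["a", "b", "c"], ["a"], ["b", "c"]]

# Set size per prediction and its distribution across test points.
sizes = length.(prediction_sets)
dist = Dict(s => count(==(s), sizes) for s in unique(sizes))

# Average set size ("inefficiency"): smaller is better at fixed coverage.
avg_size = sum(sizes) / length(sizes)
```

A wide `dist` (sizes spread from 1 up to the number of classes) would suggest an adaptive predictor; a degenerate one (all sizes equal) would suggest the opposite.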
Since ‘difficult’ lies on some spectrum that ranges from ‘very easy’ to ‘very difficult’, the set size should vary across the spectrum of ‘empty set’ to ‘all labels included’.","category":"page"},{"location":"faq/#What-is-aleatoric-uncertainty?-What-is-epistemic-uncertainty?","page":"❓ FAQ","title":"What is aleatoric uncertainty? What is epistemic uncertainty?","text":"","category":"section"},{"location":"faq/","page":"❓ FAQ","title":"❓ FAQ","text":"Loosely speaking: aleatoric uncertainty relates to uncertainty that cannot be “learned away” by observing more data (think points near the decision boundary); epistemic uncertainty relates to uncertainty that can be “learned away” by observing more data.","category":"page"},{"location":"faq/#References","page":"❓ FAQ","title":"References","text":"","category":"section"},{"location":"faq/","page":"❓ FAQ","title":"❓ FAQ","text":"Angelopoulos, Anastasios N., and Stephen Bates. 2021. “A Gentle Introduction to Conformal Prediction and Distribution-Free Uncertainty Quantification.” https://arxiv.org/abs/2107.07511.","category":"page"},{"location":"reference/","page":"🧐 Reference","title":"🧐 Reference","text":"CurrentModule = ConformalPrediction","category":"page"},{"location":"reference/#Reference","page":"🧐 Reference","title":"Reference","text":"","category":"section"},{"location":"reference/","page":"🧐 Reference","title":"🧐 Reference","text":"In this reference you will find a detailed overview of the package API.","category":"page"},{"location":"reference/","page":"🧐 Reference","title":"🧐 Reference","text":"Reference guides are technical descriptions of the machinery and how to operate it. 
Reference material is information-oriented.— Diátaxis","category":"page"},{"location":"reference/","page":"🧐 Reference","title":"🧐 Reference","text":"In other words, you come here because you want to take a very close look at the code 🧐","category":"page"},{"location":"reference/#Content","page":"🧐 Reference","title":"Content","text":"","category":"section"},{"location":"reference/","page":"🧐 Reference","title":"🧐 Reference","text":"Pages = [\"_reference.md\"]","category":"page"},{"location":"reference/#Index","page":"🧐 Reference","title":"Index","text":"","category":"section"},{"location":"reference/","page":"🧐 Reference","title":"🧐 Reference","text":"","category":"page"},{"location":"reference/#Public-Interface","page":"🧐 Reference","title":"Public Interface","text":"","category":"section"},{"location":"reference/","page":"🧐 Reference","title":"🧐 Reference","text":"Modules = [\n ConformalPrediction,\n ConformalPrediction.ConformalTraining,\n]\nPrivate = false","category":"page"},{"location":"reference/#ConformalPrediction.available_models","page":"🧐 Reference","title":"ConformalPrediction.available_models","text":"A container listing all available methods for conformal prediction.\n\n\n\n\n\n","category":"constant"},{"location":"reference/#ConformalPrediction.tested_atomic_models","page":"🧐 Reference","title":"ConformalPrediction.tested_atomic_models","text":"A container listing all atomic MLJ models that have been tested for use with this package.\n\n\n\n\n\n","category":"constant"},{"location":"reference/#ConformalPrediction.conformal_model-Tuple{MLJModelInterface.Supervised}","page":"🧐 Reference","title":"ConformalPrediction.conformal_model","text":"conformal_model(model::Supervised; method::Union{Nothing, Symbol}=nothing, kwargs...)\n\nA simple wrapper function that turns a model::Supervised into a conformal model. It accepts an optional keyword argument that can be used to specify the desired method for conformal prediction as well as additional kwargs... 
specific to the method.\n\n\n\n\n\n","category":"method"},{"location":"reference/#ConformalPrediction.emp_coverage-Tuple{Any, Any}","page":"🧐 Reference","title":"ConformalPrediction.emp_coverage","text":"emp_coverage(ŷ, y)\n\nComputes the empirical coverage for conformal predictions ŷ.\n\n\n\n\n\n","category":"method"},{"location":"reference/#ConformalPrediction.ineff","page":"🧐 Reference","title":"ConformalPrediction.ineff","text":"ineff(ŷ)\n\nComputes the inefficiency (average set size) for conformal predictions ŷ.\n\n\n\n\n\n","category":"function"},{"location":"reference/#ConformalPrediction.partial_fit","page":"🧐 Reference","title":"ConformalPrediction.partial_fit","text":"partial_fit(conf_model::TimeSeriesRegressorEnsembleBatch, fitresult, X, y, shift_size)\n\nFor the TimeSeriesRegressorEnsembleBatch, nonconformity scores are updated with the most recent data (X,y). shift_size determines how many of the oldest nonconformity scores are discarded.\n\n\n\n\n\n","category":"function"},{"location":"reference/#ConformalPrediction.set_size-Tuple{Any}","page":"🧐 Reference","title":"ConformalPrediction.set_size","text":"set_size(ŷ)\n\nHelper function that computes the set size for conformal predictions. 
\n\n\n\n\n\n","category":"method"},{"location":"reference/#ConformalPrediction.size_stratified_coverage-Tuple{Any, Any}","page":"🧐 Reference","title":"ConformalPrediction.size_stratified_coverage","text":"size_stratified_coverage(ŷ, y)\n\nComputes the size-stratified coverage for conformal predictions ŷ.\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.fit-Tuple{ConformalPrediction.AdaptiveInductiveClassifier, Any, Any, Any}","page":"🧐 Reference","title":"MLJModelInterface.fit","text":"MMI.fit(conf_model::AdaptiveInductiveClassifier, verbosity, X, y)\n\nFor the AdaptiveInductiveClassifier nonconformity scores are computed by cumulatively summing the ranked scores of each label in descending order until reaching the true label Y_i:\n\nS_i^textCAL = s(X_iY_i) = sum_j=1^k hatmu(X_i)_pi_j textwhere Y_i=pi_k i in mathcalD_textcalibration\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.fit-Tuple{ConformalPrediction.CVMinMaxRegressor, Any, Any, Any}","page":"🧐 Reference","title":"MLJModelInterface.fit","text":"MMI.fit(conf_model::CVMinMaxRegressor, verbosity, X, y)\n\nFor the CVMinMaxRegressor nonconformity scores are computed in the same way as for the CVPlusRegressor. Specifically, we have,\n\nS_i^textCV = s(X_i Y_i) = h(hatmu_-mathcalD_k(i)(X_i) Y_i) i in mathcalD_texttrain\n\nwhere hatmu_-mathcalD_k(i)(X_i) denotes the CV prediction for X_i. In other words, for each CV fold k=1K and each training instance i=1n the model is trained on all training data excluding the fold containing i. The fitted model is then used to predict out-of-sample from X_i. 
The corresponding nonconformity score is then computed by applying a heuristic uncertainty measure h(cdot) to the fitted value hatmu_-mathcalD_k(i)(X_i) and the true value Y_i.\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.fit-Tuple{ConformalPrediction.CVPlusRegressor, Any, Any, Any}","page":"🧐 Reference","title":"MLJModelInterface.fit","text":"MMI.fit(conf_model::CVPlusRegressor, verbosity, X, y)\n\nFor the CVPlusRegressor nonconformity scores are computed through cross-validation (CV) as follows,\n\nS_i^textCV = s(X_i Y_i) = h(hatmu_-mathcalD_k(i)(X_i) Y_i) i in mathcalD_texttrain\n\nwhere hatmu_-mathcalD_k(i)(X_i) denotes the CV prediction for X_i. In other words, for each CV fold k=1K and each training instance i=1n the model is trained on all training data excluding the fold containing i. The fitted model is then used to predict out-of-sample from X_i. The corresponding nonconformity score is then computed by applying a heuristic uncertainty measure h(cdot) to the fitted value hatmu_-mathcalD_k(i)(X_i) and the true value Y_i.\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.fit-Tuple{ConformalPrediction.ConformalQuantileRegressor, Any, Any, Any}","page":"🧐 Reference","title":"MLJModelInterface.fit","text":"MMI.fit(conf_model::ConformalQuantileRegressor, verbosity, X, y)\n\nFor the ConformalQuantileRegressor nonconformity scores are computed as follows:\n\nS_i^textCAL = s(X_i Y_i) = h(hatmu_alpha_lo(X_i) hatmu_alpha_hi(X_i) Y_i) i in mathcalD_textcalibration\n\nA typical choice for the heuristic function is h(hatmu_alpha_lo(X_i) hatmu_alpha_hi(X_i) Y_i)= maxhatmu_alpha_lo(X_i)-Y_i Y_i-hatmu_alpha_hi(X_i) where hatmu denotes the model fitted on training data mathcalD_texttrain and alpha_lo alpha_hi denote the lower and upper quantile levels.\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.fit-Tuple{ConformalPrediction.JackknifeMinMaxRegressor, Any, Any, Any}","page":"🧐 
Reference","title":"MLJModelInterface.fit","text":"MMI.fit(conf_model::JackknifeMinMaxRegressor, verbosity, X, y)\n\nFor the JackknifeMinMaxRegressor nonconformity scores are computed in the same way as for the JackknifeRegressor. Specifically, we have,\n\nS_i^textLOO = s(X_i Y_i) = h(hatmu_-i(X_i) Y_i) i in mathcalD_texttrain\n\nwhere hatmu_-i(X_i) denotes the leave-one-out prediction for X_i. In other words, for each training instance i=1n the model is trained on all training data excluding i. The fitted model is then used to predict out-of-sample from X_i. The corresponding nonconformity score is then computed by applying a heuristic uncertainty measure h(cdot) to the fitted value hatmu_-i(X_i) and the true value Y_i.\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.fit-Tuple{ConformalPrediction.JackknifePlusAbMinMaxRegressor, Any, Any, Any}","page":"🧐 Reference","title":"MLJModelInterface.fit","text":"MMI.fit(conf_model::JackknifePlusAbMinMaxRegressor, verbosity, X, y)\n\nFor the JackknifePlusAbMinMaxRegressor nonconformity scores are computed as,\n\nS_i^textJ+MinMax = s(X_i Y_i) = h(agg(hatmu_B_K(-i)(X_i)) Y_i) i in mathcalD_texttrain\n\nwhere agg(hatmu_B_K(-i)(X_i)) denotes the aggregate predictions, typically mean or median, for each X_i (with K_-i the bootstraps not containing X_i). In other words, B models are trained on bootstrapped samples; the fitted models are then used to create aggregated predictions for the out-of-sample X_i. 
The corresponding nonconformity score is then computed by applying a heuristic uncertainty measure h(cdot) to the fitted value agg(hatmu_B_K(-i)(X_i)) and the true value Y_i.\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.fit-Tuple{ConformalPrediction.JackknifePlusAbRegressor, Any, Any, Any}","page":"🧐 Reference","title":"MLJModelInterface.fit","text":"MMI.fit(conf_model::JackknifePlusAbRegressor, verbosity, X, y)\n\nFor the JackknifePlusAbRegressor nonconformity scores are computed as\n\n S_i^textJ+ab = s(X_i Y_i) = h(agg(hatmu_B_K(-i)(X_i)) Y_i) i in mathcalD_texttrain \n\nwhere agg(hatmu_B_K(-i)(X_i)) denotes the aggregate predictions, typically mean or median, for each X_i (with K_-i the bootstraps not containing X_i). In other words, B models are trained on bootstrapped samples; the fitted models are then used to create aggregated predictions for the out-of-sample X_i. The corresponding nonconformity score is then computed by applying a heuristic uncertainty measure h(cdot) to the fitted value agg(hatmu_B_K(-i)(X_i)) and the true value Y_i.\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.fit-Tuple{ConformalPrediction.JackknifePlusRegressor, Any, Any, Any}","page":"🧐 Reference","title":"MLJModelInterface.fit","text":"MMI.fit(conf_model::JackknifePlusRegressor, verbosity, X, y)\n\nFor the JackknifePlusRegressor nonconformity scores are computed in the same way as for the JackknifeRegressor. Specifically, we have,\n\nS_i^textLOO = s(X_i Y_i) = h(hatmu_-i(X_i) Y_i) i in mathcalD_texttrain\n\nwhere hatmu_-i(X_i) denotes the leave-one-out prediction for X_i. In other words, for each training instance i=1n the model is trained on all training data excluding i. The fitted model is then used to predict out-of-sample from X_i. 
The corresponding nonconformity score is then computed by applying a heuristic uncertainty measure h(cdot) to the fitted value hatmu_-i(X_i) and the true value Y_i.\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.fit-Tuple{ConformalPrediction.JackknifeRegressor, Any, Any, Any}","page":"🧐 Reference","title":"MLJModelInterface.fit","text":"MMI.fit(conf_model::JackknifeRegressor, verbosity, X, y)\n\nFor the JackknifeRegressor nonconformity scores are computed through a leave-one-out (LOO) procedure as follows,\n\nS_i^textLOO = s(X_i Y_i) = h(hatmu_-i(X_i) Y_i) i in mathcalD_texttrain\n\nwhere hatmu_-i(X_i) denotes the leave-one-out prediction for X_i. In other words, for each training instance i=1n the model is trained on all training data excluding i. The fitted model is then used to predict out-of-sample from X_i. The corresponding nonconformity score is then computed by applying a heuristic uncertainty measure h(cdot) to the fitted value hatmu_-i(X_i) and the true value Y_i.\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.fit-Tuple{ConformalPrediction.NaiveClassifier, Any, Any, Any}","page":"🧐 Reference","title":"MLJModelInterface.fit","text":"MMI.fit(conf_model::NaiveClassifier, verbosity, X, y)\n\nFor the NaiveClassifier nonconformity scores are computed in-sample as follows:\n\nS_i^textIS = s(X_i Y_i) = h(hatmu(X_i) Y_i) i in mathcalD_texttrain\n\nA typical choice for the heuristic function is h(hatmu(X_i) Y_i)=1-hatmu(X_i)_Y_i where hatmu(X_i)_Y_i denotes the softmax output of the true class and hatmu denotes the model fitted on training data mathcalD_texttrain.\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.fit-Tuple{ConformalPrediction.NaiveRegressor, Any, Any, Any}","page":"🧐 Reference","title":"MLJModelInterface.fit","text":"MMI.fit(conf_model::NaiveRegressor, verbosity, X, y)\n\nFor the NaiveRegressor nonconformity scores are computed in-sample as 
follows:\n\nS_i^textIS = s(X_i Y_i) = h(hatmu(X_i) Y_i) i in mathcalD_texttrain\n\nA typical choice for the heuristic function is h(hatmu(X_i)Y_i)=Y_i-hatmu(X_i) where hatmu denotes the model fitted on training data mathcalD_texttrain.\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.fit-Tuple{ConformalPrediction.SimpleInductiveClassifier, Any, Any, Any}","page":"🧐 Reference","title":"MLJModelInterface.fit","text":"MMI.fit(conf_model::SimpleInductiveClassifier, verbosity, X, y)\n\nFor the SimpleInductiveClassifier nonconformity scores are computed as follows:\n\nS_i^textCAL = s(X_i Y_i) = h(hatmu(X_i) Y_i) i in mathcalD_textcalibration\n\nA typical choice for the heuristic function is h(hatmu(X_i) Y_i)=1-hatmu(X_i)_Y_i where hatmu(X_i)_Y_i denotes the softmax output of the true class and hatmu denotes the model fitted on training data mathcalD_texttrain. The simple approach only takes the softmax probability of the true label into account.\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.fit-Tuple{ConformalPrediction.SimpleInductiveRegressor, Any, Any, Any}","page":"🧐 Reference","title":"MLJModelInterface.fit","text":"MMI.fit(conf_model::SimpleInductiveRegressor, verbosity, X, y)\n\nFor the SimpleInductiveRegressor nonconformity scores are computed as follows:\n\nS_i^textCAL = s(X_i Y_i) = h(hatmu(X_i) Y_i) i in mathcalD_textcalibration\n\nA typical choice for the heuristic function is h(hatmu(X_i)Y_i)=Y_i-hatmu(X_i) where hatmu denotes the model fitted on training data mathcalD_texttrain.\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.fit-Tuple{ConformalPrediction.TimeSeriesRegressorEnsembleBatch, Any, Any, Any}","page":"🧐 Reference","title":"MLJModelInterface.fit","text":"MMI.fit(conf_model::TimeSeriesRegressorEnsembleBatch, verbosity, X, y)\n\nFor the TimeSeriesRegressorEnsembleBatch nonconformity scores are computed as\n\n S_i^textJ+ab = s(X_i Y_i) = h(agg(hatmu_B_K(-i)(X_i)) 
Y_i) i in mathcalD_texttrain \n\nwhere agg(hatmu_B_K(-i)(X_i)) denotes the aggregate predictions, typically mean or median, for each X_i (with K_-i the bootstraps not containing X_i). In other words, B models are trained on bootstrapped samples; the fitted models are then used to create aggregated predictions for the out-of-sample X_i. The corresponding nonconformity score is then computed by applying a heuristic uncertainty measure h(cdot) to the fitted value agg(hatmu_B_K(-i)(X_i)) and the true value Y_i.\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.predict-Tuple{ConformalPrediction.AdaptiveInductiveClassifier, Any, Any}","page":"🧐 Reference","title":"MLJModelInterface.predict","text":"MMI.predict(conf_model::AdaptiveInductiveClassifier, fitresult, Xnew)\n\nFor the AdaptiveInductiveClassifier prediction sets are computed as follows,\n\nhatC_nalpha(X_n+1) = lefty s(X_n+1y) le hatq_n alpha^+ S_i^textCAL right i in mathcalD_textcalibration\n\nwhere mathcalD_textcalibration denotes the designated calibration data.\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.predict-Tuple{ConformalPrediction.CVMinMaxRegressor, Any, Any}","page":"🧐 Reference","title":"MLJModelInterface.predict","text":"MMI.predict(conf_model::CVMinMaxRegressor, fitresult, Xnew)\n\nFor the CVMinMaxRegressor prediction intervals are computed as follows,\n\nhatC_nalpha(X_n+1) = left min_i=1n hatmu_-mathcalD_k(i)(X_n+1) - hatq_n alpha^+ S_i^textCV max_i=1n hatmu_-mathcalD_k(i)(X_n+1) + hatq_n alpha^+ S_i^textCV right i in mathcalD_texttrain\n\nwhere hatmu_-mathcalD_k(i) denotes the model fitted on training data with subset mathcalD_k(i) that contains the i th point removed.\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.predict-Tuple{ConformalPrediction.CVPlusRegressor, Any, Any}","page":"🧐 Reference","title":"MLJModelInterface.predict","text":"MMI.predict(conf_model::CVPlusRegressor, fitresult, Xnew)\n\nFor the 
CVPlusRegressor prediction intervals are computed in much the same way as for the JackknifePlusRegressor. Specifically, we have,\n\nhatC_nalpha(X_n+1) = left hatq_n alpha^- hatmu_-mathcalD_k(i)(X_n+1) - S_i^textCV hatq_n alpha^+ hatmu_-mathcalD_k(i)(X_n+1) + S_i^textCV right i in mathcalD_texttrain\n\nwhere hatmu_-mathcalD_k(i) denotes the model fitted on training data with fold mathcalD_k(i) that contains the i th point removed.\n\nThe JackknifePlusRegressor is a special case of the CVPlusRegressor for which K=n.\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.predict-Tuple{ConformalPrediction.ConformalQuantileRegressor, Any, Any}","page":"🧐 Reference","title":"MLJModelInterface.predict","text":"MMI.predict(conf_model::ConformalQuantileRegressor, fitresult, Xnew)\n\nFor the ConformalQuantileRegressor prediction intervals are computed as follows,\n\nhatC_nalpha(X_n+1) = hatmu_alpha_lo(X_n+1) - hatq_n alpha S_i^textCAL hatmu_alpha_hi(X_n+1) + hatq_n alpha S_i^textCAL i in mathcalD_textcalibration\n\nwhere mathcalD_textcalibration denotes the designated calibration data.\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.predict-Tuple{ConformalPrediction.JackknifeMinMaxRegressor, Any, Any}","page":"🧐 Reference","title":"MLJModelInterface.predict","text":"MMI.predict(conf_model::JackknifeMinMaxRegressor, fitresult, Xnew)\n\nFor the JackknifeMinMaxRegressor prediction intervals are computed as follows,\n\nhatC_nalpha(X_n+1) = left min_i=1n hatmu_-i(X_n+1) - hatq_n alpha^+ S_i^textLOO max_i=1n hatmu_-i(X_n+1) + hatq_n alpha^+ S_i^textLOO right i in mathcalD_texttrain\n\nwhere hatmu_-i denotes the model fitted on training data with ith point removed. 
The jackknife-minmax procedure is more conservative than the JackknifePlusRegressor.\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.predict-Tuple{ConformalPrediction.JackknifePlusAbMinMaxRegressor, Any, Any}","page":"🧐 Reference","title":"MLJModelInterface.predict","text":"MMI.predict(conf_model::JackknifePlusAbMinMaxRegressor, fitresult, Xnew)\n\nFor the JackknifePlusAbMinMaxRegressor prediction intervals are computed as follows,\n\nhatC_nalpha^J+MinMax(X_n+1) = left min_i=1n hatmu_-i(X_n+1) - hatq_n alpha^+ S_i^textJ+MinMax max_i=1n hatmu_-i(X_n+1) + hatq_n alpha^+ S_i^textJ+MinMax right i in mathcalD_texttrain\n\nwhere hatmu_-i denotes the model fitted on training data with ith point removed. The jackknife+ab-minmax procedure is more conservative than the JackknifePlusAbRegressor.\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.predict-Tuple{ConformalPrediction.JackknifePlusAbRegressor, Any, Any}","page":"🧐 Reference","title":"MLJModelInterface.predict","text":"MMI.predict(conf_model::JackknifePlusAbRegressor, fitresult, Xnew)\n\nFor the JackknifePlusAbRegressor prediction intervals are computed as follows,\n\nhatC_nalpha B^J+ab(X_n+1) = left hatq_n alpha^- hatmu_agg(-i)(X_n+1) - S_i^textJ+ab hatq_n alpha^+ hatmu_agg(-i)(X_n+1) + S_i^textJ+ab right i in mathcalD_texttrain\n\nwhere hatmu_agg(-i) denotes the aggregated models hatmu_1 hatmu_B fitted on bootstrapped data (B) that does not include the ith data point. 
The jackknife+ procedure is more stable than the JackknifeRegressor.\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.predict-Tuple{ConformalPrediction.JackknifePlusRegressor, Any, Any}","page":"🧐 Reference","title":"MLJModelInterface.predict","text":"MMI.predict(conf_model::JackknifePlusRegressor, fitresult, Xnew)\n\nFor the JackknifePlusRegressor prediction intervals are computed as follows,\n\nhatC_nalpha(X_n+1) = left hatq_n alpha^- hatmu_-i(X_n+1) - S_i^textLOO hatq_n alpha^+ hatmu_-i(X_n+1) + S_i^textLOO right i in mathcalD_texttrain\n\nwhere hatmu_-i denotes the model fitted on training data with ith point removed. The jackknife+ procedure is more stable than the JackknifeRegressor.\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.predict-Tuple{ConformalPrediction.JackknifeRegressor, Any, Any}","page":"🧐 Reference","title":"MLJModelInterface.predict","text":"MMI.predict(conf_model::JackknifeRegressor, fitresult, Xnew)\n\nFor the JackknifeRegressor prediction intervals are computed as follows,\n\nhatC_nalpha(X_n+1) = hatmu(X_n+1) pm hatq_n alpha^+ S_i^textLOO i in mathcalD_texttrain\n\nwhere S_i^textLOO denotes the nonconformity that is generated as explained in fit(conf_model::JackknifeRegressor, verbosity, X, y). 
The jackknife procedure addresses the overfitting issue associated with the NaiveRegressor.\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.predict-Tuple{ConformalPrediction.NaiveClassifier, Any, Any}","page":"🧐 Reference","title":"MLJModelInterface.predict","text":"MMI.predict(conf_model::NaiveClassifier, fitresult, Xnew)\n\nFor the NaiveClassifier prediction sets are computed as follows:\n\nhatC_nalpha(X_n+1) = lefty s(X_n+1y) le hatq_n alpha^+ S_i^textIS right i in mathcalD_texttrain\n\nThe naive approach typically produces prediction regions that undercover due to overfitting.\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.predict-Tuple{ConformalPrediction.NaiveRegressor, Any, Any}","page":"🧐 Reference","title":"MLJModelInterface.predict","text":"MMI.predict(conf_model::NaiveRegressor, fitresult, Xnew)\n\nFor the NaiveRegressor prediction intervals are computed as follows:\n\nhatC_nalpha(X_n+1) = hatmu(X_n+1) pm hatq_n alpha^+ S_i^textIS i in mathcalD_texttrain\n\nThe naive approach typically produces prediction regions that undercover due to overfitting.\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.predict-Tuple{ConformalPrediction.SimpleInductiveClassifier, Any, Any}","page":"🧐 Reference","title":"MLJModelInterface.predict","text":"MMI.predict(conf_model::SimpleInductiveClassifier, fitresult, Xnew)\n\nFor the SimpleInductiveClassifier prediction sets are computed as follows,\n\nhatC_nalpha(X_n+1) = lefty s(X_n+1y) le hatq_n alpha^+ S_i^textCAL right i in mathcalD_textcalibration\n\nwhere mathcalD_textcalibration denotes the designated calibration data.\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.predict-Tuple{ConformalPrediction.SimpleInductiveRegressor, Any, Any}","page":"🧐 Reference","title":"MLJModelInterface.predict","text":"MMI.predict(conf_model::SimpleInductiveRegressor, fitresult, Xnew)\n\nFor the SimpleInductiveRegressor prediction 
intervals are computed as follows,\n\nhatC_nalpha(X_n+1) = hatmu(X_n+1) pm hatq_n alpha^+ S_i^textCAL i in mathcalD_textcalibration\n\nwhere mathcalD_textcalibration denotes the designated calibration data.\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.predict-Tuple{ConformalPrediction.TimeSeriesRegressorEnsembleBatch, Any, Any}","page":"🧐 Reference","title":"MLJModelInterface.predict","text":"MMI.predict(conf_model::TimeSeriesRegressorEnsembleBatch, fitresult, Xnew)\n\nFor the TimeSeriesRegressorEnsembleBatch prediction intervals are computed as follows,\n\nhatC_nalpha B^J+ab(X_n+1) = left hatq_n alpha^- hatmu_agg(-i)(X_n+1) - S_i^textJ+ab hatq_n alpha^+ hatmu_agg(-i)(X_n+1) + S_i^textJ+ab right i in mathcalD_texttrain\n\nwhere hatmu_agg(-i) denotes the aggregated models hatmu_1 hatmu_B fitted on bootstrapped data (B) that does not include the ith data point. The jackknife+ procedure is more stable than the JackknifeRegressor.\n\n\n\n\n\n","category":"method"},{"location":"reference/#Internal-functions","page":"🧐 Reference","title":"Internal functions","text":"","category":"section"},{"location":"reference/","page":"🧐 Reference","title":"🧐 Reference","text":"Modules = [\n ConformalPrediction,\n ConformalPrediction.ConformalTraining,\n]\nPublic = false","category":"page"},{"location":"reference/#ConformalPrediction.AdaptiveInductiveClassifier","page":"🧐 Reference","title":"ConformalPrediction.AdaptiveInductiveClassifier","text":"The AdaptiveInductiveClassifier is an improvement to the SimpleInductiveClassifier and the NaiveClassifier. Contrary to the NaiveClassifier it computes nonconformity scores using a designated calibration dataset like the SimpleInductiveClassifier. 
Contrary to the SimpleInductiveClassifier it utilizes the softmax output of all classes.\n\n\n\n\n\n","category":"type"},{"location":"reference/#ConformalPrediction.CVMinMaxRegressor","page":"🧐 Reference","title":"ConformalPrediction.CVMinMaxRegressor","text":"Constructor for CVMinMaxRegressor.\n\n\n\n\n\n","category":"type"},{"location":"reference/#ConformalPrediction.CVPlusRegressor","page":"🧐 Reference","title":"ConformalPrediction.CVPlusRegressor","text":"Constructor for CVPlusRegressor.\n\n\n\n\n\n","category":"type"},{"location":"reference/#ConformalPrediction.ConformalInterval","page":"🧐 Reference","title":"ConformalPrediction.ConformalInterval","text":"An abstract base type for conformal models that produce interval-valued predictions. This includes most conformal regression models.\n\n\n\n\n\n","category":"type"},{"location":"reference/#ConformalPrediction.ConformalProbabilistic","page":"🧐 Reference","title":"ConformalPrediction.ConformalProbabilistic","text":"An abstract base type for conformal models that produce probabilistic predictions. This includes some conformal classifier like Venn-ABERS.\n\n\n\n\n\n","category":"type"},{"location":"reference/#ConformalPrediction.ConformalProbabilisticSet","page":"🧐 Reference","title":"ConformalPrediction.ConformalProbabilisticSet","text":"An abstract base type for conformal models that produce set-valued probabilistic predictions. 
This includes most conformal classification models.\n\n\n\n\n\n","category":"type"},{"location":"reference/#ConformalPrediction.ConformalQuantileRegressor","page":"🧐 Reference","title":"ConformalPrediction.ConformalQuantileRegressor","text":"Constructor for ConformalQuantileRegressor.\n\n\n\n\n\n","category":"type"},{"location":"reference/#ConformalPrediction.JackknifeMinMaxRegressor","page":"🧐 Reference","title":"ConformalPrediction.JackknifeMinMaxRegressor","text":"Constructor for JackknifeMinMaxRegressor.\n\n\n\n\n\n","category":"type"},{"location":"reference/#ConformalPrediction.JackknifePlusAbMinMaxRegressor","page":"🧐 Reference","title":"ConformalPrediction.JackknifePlusAbMinMaxRegressor","text":"Constructor for JackknifePlusAbMinMaxRegressor.\n\n\n\n\n\n","category":"type"},{"location":"reference/#ConformalPrediction.JackknifePlusAbRegressor","page":"🧐 Reference","title":"ConformalPrediction.JackknifePlusAbRegressor","text":"Constructor for JackknifePlusAbRegressor.\n\n\n\n\n\n","category":"type"},{"location":"reference/#ConformalPrediction.JackknifePlusRegressor","page":"🧐 Reference","title":"ConformalPrediction.JackknifePlusRegressor","text":"Constructor for JackknifePlusRegressor.\n\n\n\n\n\n","category":"type"},{"location":"reference/#ConformalPrediction.JackknifeRegressor","page":"🧐 Reference","title":"ConformalPrediction.JackknifeRegressor","text":"Constructor for JackknifeRegressor.\n\n\n\n\n\n","category":"type"},{"location":"reference/#ConformalPrediction.NaiveClassifier","page":"🧐 Reference","title":"ConformalPrediction.NaiveClassifier","text":"The NaiveClassifier is the simplest approach to Inductive Conformal Classification. 
Contrary to the SimpleInductiveClassifier it computes nonconformity scores using a designated training dataset.\n\n\n\n\n\n","category":"type"},{"location":"reference/#ConformalPrediction.NaiveRegressor","page":"🧐 Reference","title":"ConformalPrediction.NaiveRegressor","text":"The NaiveRegressor for conformal prediction is the simplest approach to conformal regression.\n\n\n\n\n\n","category":"type"},{"location":"reference/#ConformalPrediction.QuantileModel","page":"🧐 Reference","title":"ConformalPrediction.QuantileModel","text":"Union type for quantile models.\n\n\n\n\n\n","category":"type"},{"location":"reference/#ConformalPrediction.SimpleInductiveClassifier","page":"🧐 Reference","title":"ConformalPrediction.SimpleInductiveClassifier","text":"The SimpleInductiveClassifier is the simplest approach to Inductive Conformal Classification. Contrary to the NaiveClassifier it computes nonconformity scores using a designated calibration dataset.\n\n\n\n\n\n","category":"type"},{"location":"reference/#ConformalPrediction.SimpleInductiveRegressor","page":"🧐 Reference","title":"ConformalPrediction.SimpleInductiveRegressor","text":"The SimpleInductiveRegressor is the simplest approach to Inductive Conformal Regression. 
Contrary to the NaiveRegressor it computes nonconformity scores using a designated calibration dataset.\n\n\n\n\n\n","category":"type"},{"location":"reference/#ConformalPrediction.TimeSeriesRegressorEnsembleBatch","page":"🧐 Reference","title":"ConformalPrediction.TimeSeriesRegressorEnsembleBatch","text":"Constructor for TimeSeriesRegressorEnsembleBatch.\n\n\n\n\n\n","category":"type"},{"location":"reference/#ConformalPrediction._aggregate-Tuple{Any, Union{String, Symbol}}","page":"🧐 Reference","title":"ConformalPrediction._aggregate","text":"_aggregate(y, aggregate::Union{Symbol,String})\n\nHelper function that performs aggregation across a vector of predictions.\n\n\n\n\n\n","category":"method"},{"location":"reference/#ConformalPrediction.absolute_error-Tuple{Any, Any}","page":"🧐 Reference","title":"ConformalPrediction.absolute_error","text":"absolute_error(y,ŷ)\n\nComputes abs(y - ŷ) where ŷ is the predicted value.\n\n\n\n\n\n","category":"method"},{"location":"reference/#ConformalPrediction.blockbootstrap-Tuple{Any, Any}","page":"🧐 Reference","title":"ConformalPrediction.blockbootstrap","text":"blockbootstrap(time_series_data, block_size)\n\nGenerates a sampling method that block bootstraps the given data.\n\n\n\n\n\n","category":"method"},{"location":"reference/#ConformalPrediction.is_classification-Tuple{Any}","page":"🧐 Reference","title":"ConformalPrediction.is_classification","text":"is_classification(ŷ)\n\nHelper function that checks if conformal prediction ŷ comes from a conformal classification model.\n\n\n\n\n\n","category":"method"},{"location":"reference/#ConformalPrediction.is_covered-Tuple{Any, Any}","page":"🧐 Reference","title":"ConformalPrediction.is_covered","text":"is_covered(ŷ, y)\n\nHelper function to check if y is contained in the conformal region. 
Based on whether conformal predictions ŷ are set- or interval-valued, different checks are executed.\n\n\n\n\n\n","category":"method"},{"location":"reference/#ConformalPrediction.is_covered_interval-Tuple{Any, Any}","page":"🧐 Reference","title":"ConformalPrediction.is_covered_interval","text":"is_covered_interval(ŷ, y)\n\nHelper function to check if y is contained in conformal interval.\n\n\n\n\n\n","category":"method"},{"location":"reference/#ConformalPrediction.is_covered_set-Tuple{Any, Any}","page":"🧐 Reference","title":"ConformalPrediction.is_covered_set","text":"is_covered_set(ŷ, y)\n\nHelper function to check if y is contained in conformal set.\n\n\n\n\n\n","category":"method"},{"location":"reference/#ConformalPrediction.is_regression-Tuple{Any}","page":"🧐 Reference","title":"ConformalPrediction.is_regression","text":"is_regression(ŷ)\n\nHelper function that checks if conformal prediction ŷ comes from a conformal regression model.\n\n\n\n\n\n","category":"method"},{"location":"reference/#ConformalPrediction.minus_softmax-Tuple{Any, Any}","page":"🧐 Reference","title":"ConformalPrediction.minus_softmax","text":"minus_softmax(y,ŷ)\n\nComputes 1.0 - ŷ where ŷ is the softmax output for a given class.\n\n\n\n\n\n","category":"method"},{"location":"reference/#ConformalPrediction.qminus","page":"🧐 Reference","title":"ConformalPrediction.qminus","text":"qminus(v::AbstractArray, coverage::AbstractFloat=0.9)\n\nImplements the hatq_nalpha^- finite-sample corrected quantile function as defined in Barber et al. (2020): https://arxiv.org/pdf/1905.02928.pdf. \n\n\n\n\n\n","category":"function"},{"location":"reference/#ConformalPrediction.qplus","page":"🧐 Reference","title":"ConformalPrediction.qplus","text":"qplus(v::AbstractArray, coverage::AbstractFloat=0.9)\n\nImplements the hatq_nalpha^+ finite-sample corrected quantile function as defined in Barber et al. (2020): https://arxiv.org/pdf/1905.02928.pdf. 
\n\n\n\n\n\n","category":"function"},{"location":"reference/#ConformalPrediction.reformat_interval-Tuple{Any}","page":"🧐 Reference","title":"ConformalPrediction.reformat_interval","text":"reformat_interval(ŷ)\n\nReformats conformal interval predictions.\n\n\n\n\n\n","category":"method"},{"location":"reference/#ConformalPrediction.reformat_mlj_prediction-Tuple{Any}","page":"🧐 Reference","title":"ConformalPrediction.reformat_mlj_prediction","text":"reformat_mlj_prediction(ŷ)\n\nA helper function that extracts only the output (predicted values) for whatever is returned from MMI.predict(model, fitresult, Xnew). This is currently used to avoid issues when calling MMI.predict(model, fitresult, Xnew) in pipelines.\n\n\n\n\n\n","category":"method"},{"location":"reference/#ConformalPrediction.score","page":"🧐 Reference","title":"ConformalPrediction.score","text":"score(conf_model::SimpleInductiveClassifier, ::Type{<:Supervised}, fitresult, X, y::Union{Nothing,AbstractArray}=nothing)\n\nScore method for the SimpleInductiveClassifier dispatched for any <:Supervised model.\n\n\n\n\n\n","category":"function"},{"location":"reference/#ConformalPrediction.score-2","page":"🧐 Reference","title":"ConformalPrediction.score","text":"score(conf_model::AdaptiveInductiveClassifier, ::Type{<:Supervised}, fitresult, X, y::Union{Nothing,AbstractArray}=nothing)\n\nScore method for the AdaptiveInductiveClassifier dispatched for any <:Supervised model.\n\n\n\n\n\n","category":"function"},{"location":"reference/#ConformalPrediction.score-3","page":"🧐 Reference","title":"ConformalPrediction.score","text":"score(conf_model::ConformalProbabilisticSet, fitresult, X, y::Union{Nothing,AbstractArray}=nothing)\n\nGeneric score method for the ConformalProbabilisticSet. It computes nonconformity scores using the heuristic function h and the softmax probabilities of the true class. 
Method is dispatched for different Conformal Probabilistic Sets and atomic models.\n\n\n\n\n\n","category":"function"},{"location":"reference/#ConformalPrediction.split_data-Tuple{ConformalPrediction.ConformalProbabilisticSet, Any, Any}","page":"🧐 Reference","title":"ConformalPrediction.split_data","text":"split_data(conf_model::ConformalProbabilisticSet, indices::Base.OneTo{Int})\n\nSplits the data into a proper training and calibration set.\n\n\n\n\n\n","category":"method"},{"location":"reference/#ConformalPrediction.ConformalTraining.ConformalNNClassifier","page":"🧐 Reference","title":"ConformalPrediction.ConformalTraining.ConformalNNClassifier","text":"The ConformalNNClassifier struct is a wrapper for a ConformalModel that can be used with MLJFlux.jl.\n\n\n\n\n\n","category":"type"},{"location":"reference/#ConformalPrediction.ConformalTraining.ConformalNNRegressor","page":"🧐 Reference","title":"ConformalPrediction.ConformalTraining.ConformalNNRegressor","text":"The ConformalNNRegressor struct is a wrapper for a ConformalModel that can be used with MLJFlux.jl.\n\n\n\n\n\n","category":"type"},{"location":"reference/#ConformalPrediction.ConformalTraining.classification_loss-Tuple{ConformalPrediction.ConformalProbabilisticSet, Any, Any, Any}","page":"🧐 Reference","title":"ConformalPrediction.ConformalTraining.classification_loss","text":"classification_loss(\n conf_model::ConformalProbabilisticSet, fitresult, X, y;\n loss_matrix::Union{AbstractMatrix,UniformScaling}=UniformScaling(1.0),\n temp::Real=0.1\n)\n\nComputes the calibration loss following Stutz et al. (2022): https://openreview.net/pdf?id=t8O-4LKFVx. 
Following the notation in the paper, the loss is computed as,\n\nmathcalL(C_theta(xtau)y) = sum_k L_yk left (1 - C_thetak(xtau)) mathbfI_y=k + C_thetak(xtau) mathbfI_yne k right\n\nwhere tau is just the quantile q̂ and kappa is the target set size (defaults to 1).\n\n\n\n\n\n","category":"method"},{"location":"reference/#ConformalPrediction.ConformalTraining.qminus_smooth","page":"🧐 Reference","title":"ConformalPrediction.ConformalTraining.qminus_smooth","text":"qminus_smooth(v::AbstractArray, coverage::AbstractFloat=0.9)\n\nImplements the hatq_nalpha^- finite-sample corrected quantile function as defined in Barber et al. (2020): https://arxiv.org/pdf/1905.02928.pdf. To allow for differentiability, we use the soft sort function from InferOpt.jl.\n\n\n\n\n\n","category":"function"},{"location":"reference/#ConformalPrediction.ConformalTraining.qplus_smooth","page":"🧐 Reference","title":"ConformalPrediction.ConformalTraining.qplus_smooth","text":"qplus_smooth(v::AbstractArray, coverage::AbstractFloat=0.9)\n\nImplements the hatq_nalpha^+ finite-sample corrected quantile function as defined in Barber et al. (2020): https://arxiv.org/pdf/1905.02928.pdf. 
To allow for differentiability, we use the soft sort function from InferOpt.jl.\n\n\n\n\n\n","category":"function"},{"location":"reference/#ConformalPrediction.ConformalTraining.score","page":"🧐 Reference","title":"ConformalPrediction.ConformalTraining.score","text":"ConformalPrediction.score(conf_model::AdaptiveInductiveClassifier, ::Type{<:EitherEnsembleModel{<:MLJFluxModel}}, fitresult, X, y::Union{Nothing,AbstractArray}=nothing)\n\nOverloads the score function for ensembles of MLJFluxModel types.\n\n\n\n\n\n","category":"function"},{"location":"reference/#ConformalPrediction.ConformalTraining.score-2","page":"🧐 Reference","title":"ConformalPrediction.ConformalTraining.score","text":"ConformalPrediction.score(conf_model::AdaptiveInductiveClassifier, ::Type{<:MLJFluxModel}, fitresult, X, y::Union{Nothing,AbstractArray}=nothing)\n\nOverloads the score function for the MLJFluxModel type.\n\n\n\n\n\n","category":"function"},{"location":"reference/#ConformalPrediction.ConformalTraining.smooth_size_loss-Tuple{ConformalPrediction.ConformalProbabilisticSet, Any, Any}","page":"🧐 Reference","title":"ConformalPrediction.ConformalTraining.smooth_size_loss","text":"function smooth_size_loss(\n conf_model::ConformalProbabilisticSet, fitresult, X;\n temp::Real=0.1, κ::Real=1.0\n)\n\nComputes the smooth (differentiable) size loss following Stutz et al. (2022): https://openreview.net/pdf?id=t8O-4LKFVx. First, soft assignment probabilities are computed for new data X. Then (following the notation in the paper) the loss is computed as, \n\nOmega(C_theta(xtau)) = max (0 sum_k C_thetak(xtau) - kappa)\n\nwhere tau is just the quantile q̂ and kappa is the target set size (defaults to 1). 
For empty sets, the loss is computed as K - kappa, that is, the maximum set size minus the target set size.\n\n\n\n\n\n","category":"method"},{"location":"reference/#ConformalPrediction.ConformalTraining.soft_assignment-Tuple{ConformalPrediction.ConformalProbabilisticSet, Any, Any}","page":"🧐 Reference","title":"ConformalPrediction.ConformalTraining.soft_assignment","text":"soft_assignment(conf_model::ConformalProbabilisticSet, fitresult, X; temp::Real=0.1)\n\nThis function can be used to compute soft assignment probabilities for new data X as in soft_assignment(conf_model::ConformalProbabilisticSet; temp::Real=0.1). When a fitted model mu (fitresult) and new samples X are supplied, non-conformity scores are first computed for the new data points. Then the existing threshold/quantile q̂ is used to compute the final soft assignments. \n\n\n\n\n\n","category":"method"},{"location":"reference/#ConformalPrediction.ConformalTraining.soft_assignment-Tuple{ConformalPrediction.ConformalProbabilisticSet}","page":"🧐 Reference","title":"ConformalPrediction.ConformalTraining.soft_assignment","text":"soft_assignment(conf_model::ConformalProbabilisticSet; temp::Real=0.1)\n\nComputes soft assignment scores for each label and sample. That is, the probability of label k being included in the confidence set. This implementation follows Stutz et al. (2022): https://openreview.net/pdf?id=t8O-4LKFVx. Contrary to the paper, we use non-conformity scores instead of conformity scores, hence the sign swap. 
\n\n\n\n\n","category":"method"},{"location":"reference/#ConformalPrediction.score-4","page":"🧐 Reference","title":"ConformalPrediction.score","text":"ConformalPrediction.score(conf_model::InductiveModel, model::MLJFluxModel, fitresult, X, y::Union{Nothing,AbstractArray}=nothing)\n\nOverloads the score function for the MLJFluxModel type.\n\n\n\n\n\n","category":"function"},{"location":"reference/#ConformalPrediction.score-5","page":"🧐 Reference","title":"ConformalPrediction.score","text":"ConformalPrediction.score(conf_model::SimpleInductiveClassifier, ::Type{<:EitherEnsembleModel{<:MLJFluxModel}}, fitresult, X, y::Union{Nothing,AbstractArray}=nothing)\n\nOverloads the score function for ensembles of MLJFluxModel types.\n\n\n\n\n\n","category":"function"},{"location":"reference/#MLJFlux.shape-Tuple{ConformalPrediction.ConformalTraining.ConformalNNRegressor, Any, Any}","page":"🧐 Reference","title":"MLJFlux.shape","text":"shape(model::NeuralNetworkRegressor, X, y)\n\nA private method that returns the shape of the input and output of the model for given data X and y.\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJFlux.train!-Tuple{Union{ConformalPrediction.ConformalTraining.ConformalNNClassifier, ConformalPrediction.ConformalTraining.ConformalNNRegressor}, Vararg{Any, 5}}","page":"🧐 Reference","title":"MLJFlux.train!","text":"MLJFlux.train!(model::ConformalNN, penalty, chain, optimiser, X, y)\n\nImplements the conformal training procedure for the ConformalNN type.\n\n\n\n\n\n","category":"method"},{"location":"explanation/finite_sample_correction/#Finite-sample-Correction","page":"Finite-sample Correction","title":"Finite-sample Correction","text":"","category":"section"},{"location":"explanation/finite_sample_correction/","page":"Finite-sample Correction","title":"Finite-sample Correction","text":"We follow the convention used in Angelopoulos and Bates (2021) and Barber et al. (2021) to correct for the finite-sample bias of the empirical quantile. 
Specifically, we use the following definition of the (1−α) empirical quantile:","category":"page"},{"location":"explanation/finite_sample_correction/","page":"Finite-sample Correction","title":"Finite-sample Correction","text":"hatq_nalpha^+v = fraclceil (n+1)(1-alpha)rceiln","category":"page"},{"location":"explanation/finite_sample_correction/","page":"Finite-sample Correction","title":"Finite-sample Correction","text":"Barber et al. (2021) further define as the α empirical quantile:","category":"page"},{"location":"explanation/finite_sample_correction/","page":"Finite-sample Correction","title":"Finite-sample Correction","text":"hatq_nalpha^-v = fraclfloor (n+1)alpha rfloorn = - hatq_nalpha^+-v","category":"page"},{"location":"explanation/finite_sample_correction/","page":"Finite-sample Correction","title":"Finite-sample Correction","text":"Below we test this equality numerically by generating a large number of random vectors and comparing the two quantiles. We then plot the density of the difference between the two quantiles. While the errors are small, they are not negligible for small n. 
In our computations, we use q̂(n, α)⁻{v} exactly as it is defined above, rather than relying on  − q̂(n, α)⁺{ − v}.","category":"page"},{"location":"explanation/finite_sample_correction/","page":"Finite-sample Correction","title":"Finite-sample Correction","text":"using Statistics: mean\nusing StatsPlots\nusing ConformalPrediction: qplus, qminus\nnobs = [100, 1000, 10000]\nn = 1000\nalpha = 0.1\nplts = []\nfor _nobs in nobs\n Δ = Float32[]  # reset for each sample size, so each panel reflects only its own nobs\n for i in 1:n\n v = rand(_nobs)\n δ = qminus(v, alpha) - (-qplus(-v, 1-alpha))\n push!(Δ, δ)\n end\n plt = density(Δ)\n vline!([mean(Δ)], color=:red, label=\"mean\")\n push!(plts, plt)\nend\nplot(plts..., layout=(1,3), size=(900, 300), legend=:topleft, title=[\"nobs = 100\" \"nobs = 1000\" \"nobs = 10000\"])","category":"page"},{"location":"explanation/finite_sample_correction/","page":"Finite-sample Correction","title":"Finite-sample Correction","text":"(Image: )","category":"page"},{"location":"explanation/finite_sample_correction/","page":"Finite-sample Correction","title":"Finite-sample Correction","text":"See also this related discussion.","category":"page"},{"location":"explanation/finite_sample_correction/#References","page":"Finite-sample Correction","title":"References","text":"","category":"section"},{"location":"explanation/finite_sample_correction/","page":"Finite-sample Correction","title":"Finite-sample Correction","text":"Angelopoulos, Anastasios N., and Stephen Bates. 2021. “A Gentle Introduction to Conformal Prediction and Distribution-Free Uncertainty Quantification.” https://arxiv.org/abs/2107.07511.","category":"page"},{"location":"explanation/finite_sample_correction/","page":"Finite-sample Correction","title":"Finite-sample Correction","text":"Barber, Rina Foygel, Emmanuel J. Candès, Aaditya Ramdas, and Ryan J. Tibshirani. 2021. “Predictive Inference with the Jackknife+.” The Annals of Statistics 49 (1): 486–507. 
https://doi.org/10.1214/20-AOS1965.","category":"page"},{"location":"how_to_guides/llm/#How-to-Build-a-Conformal-Chatbot","page":"How to Conformalize a Large Language Model","title":"How to Build a Conformal Chatbot","text":"","category":"section"},{"location":"how_to_guides/llm/","page":"How to Conformalize a Large Language Model","title":"How to Conformalize a Large Language Model","text":"Large Language Models are all the buzz right now. They are used for a variety of tasks, including text classification, question answering, and text generation. In this tutorial, we will show how to conformalize a transformer language model for text classification. We will use the Banking77 dataset (Casanueva et al. 2020), which consists of 13,083 queries from 77 intents. On the model side, we will use the DistilRoBERTa model, which is a distilled version of RoBERTa (Liu et al. 2019) finetuned on the Banking77 dataset.","category":"page"},{"location":"how_to_guides/llm/#Data","page":"How to Conformalize a Large Language Model","title":"Data","text":"","category":"section"},{"location":"how_to_guides/llm/","page":"How to Conformalize a Large Language Model","title":"How to Conformalize a Large Language Model","text":"The data was downloaded from HuggingFace 🤗 (HF) and split into a proper training, calibration, and test set. All that’s left to do is to load the data and preprocess it. 
We add 1 to the labels to make them 1-indexed (sorry Pythonistas 😜)","category":"page"},{"location":"how_to_guides/llm/","page":"How to Conformalize a Large Language Model","title":"How to Conformalize a Large Language Model","text":"# Get labels:\ndf_labels = CSV.read(\"dev/artifacts/data/banking77/labels.csv\", DataFrame, drop=[1])\nlabels = df_labels[:,1]\n\n# Get data:\ndf_train = CSV.read(\"dev/artifacts/data/banking77/train.csv\", DataFrame, drop=[1])\ndf_cal = CSV.read(\"dev/artifacts/data/banking77/calibration.csv\", DataFrame, drop=[1])\ndf_full_train = vcat(df_train, df_cal)\ntrain_ratio = round(nrow(df_train)/nrow(df_full_train), digits=2)\ndf_test = CSV.read(\"dev/artifacts/data/banking77/test.csv\", DataFrame, drop=[1])\n\n# Preprocess data:\nqueries_train, y_train = collect(df_train.text), categorical(df_train.labels .+ 1)\nqueries_cal, y_cal = collect(df_cal.text), categorical(df_cal.labels .+ 1)\nqueries, y = collect(df_full_train.text), categorical(df_full_train.labels .+ 1)\nqueries_test, y_test = collect(df_test.text), categorical(df_test.labels .+ 1)","category":"page"},{"location":"how_to_guides/llm/#HuggingFace-Model","page":"How to Conformalize a Large Language Model","title":"HuggingFace Model","text":"","category":"section"},{"location":"how_to_guides/llm/","page":"How to Conformalize a Large Language Model","title":"How to Conformalize a Large Language Model","text":"The model can be loaded from HF straight into our running Julia session using the Transformers.jl package. Below we load the tokenizer tkr and the model mod. The tokenizer is used to convert the text into a sequence of integers, which is then fed into the model. The model outputs a hidden state, which is then fed into a classifier to get the logits for each class. Finally, the logits are then passed through a softmax function to get the corresponding predicted probabilities. 
Below we run a few queries through the model to see how it performs.","category":"page"},{"location":"how_to_guides/llm/","page":"How to Conformalize a Large Language Model","title":"How to Conformalize a Large Language Model","text":"# Load model from HF 🤗:\ntkr = hgf\"mrm8488/distilroberta-finetuned-banking77:tokenizer\"\nmod = hgf\"mrm8488/distilroberta-finetuned-banking77:ForSequenceClassification\"\n\n# Test model:\nquery = [\n \"What is the base of the exchange rates?\",\n \"Why is my card not working?\",\n \"My Apple Pay is not working, what should I do?\",\n]\na = encode(tkr, query)\nb = mod.model(a)\nc = mod.cls(b.hidden_state)\nd = softmax(c.logit)\n[labels[i] for i in Flux.onecold(d)]","category":"page"},{"location":"how_to_guides/llm/","page":"How to Conformalize a Large Language Model","title":"How to Conformalize a Large Language Model","text":"3-element Vector{String}:\n \"exchange_rate\"\n \"card_not_working\"\n \"apple_pay_or_google_pay\"","category":"page"},{"location":"how_to_guides/llm/#MLJ-Interface","page":"How to Conformalize a Large Language Model","title":"MLJ Interface","text":"","category":"section"},{"location":"how_to_guides/llm/","page":"How to Conformalize a Large Language Model","title":"How to Conformalize a Large Language Model","text":"Since our package is interfaced to MLJ.jl, we need to define a wrapper model that conforms to the MLJ interface. In order to add the model for general use, we would probably go through MLJFlux.jl, but for this tutorial, we will make our life easy and simply overload the MLJBase.fit and MLJBase.predict methods. Since the model from HF is already pre-trained and we are not interested in further fine-tuning, we will simply return the model object in the MLJBase.fit method. The MLJBase.predict method will then take the model object and the query and return the predicted probabilities. We also need to define the MLJBase.target_scitype and MLJBase.predict_mode methods. 
The former tells MLJ what the output type of the model is, and the latter can be used to retrieve the label with the highest predicted probability.","category":"page"},{"location":"how_to_guides/llm/","page":"How to Conformalize a Large Language Model","title":"How to Conformalize a Large Language Model","text":"struct IntentClassifier <: MLJBase.Probabilistic\n tkr::TextEncoders.AbstractTransformerTextEncoder\n mod::HuggingFace.HGFRobertaForSequenceClassification\nend\n\nfunction IntentClassifier(;\n tokenizer::TextEncoders.AbstractTransformerTextEncoder, \n model::HuggingFace.HGFRobertaForSequenceClassification,\n)\n IntentClassifier(tokenizer, model)\nend\n\nfunction get_hidden_state(clf::IntentClassifier, query::Union{AbstractString, Vector{<:AbstractString}})\n token = encode(clf.tkr, query)\n hidden_state = clf.mod.model(token).hidden_state\n return hidden_state\nend\n\n# This doesn't actually retrain the model, but it retrieves the classifier object\nfunction MLJBase.fit(clf::IntentClassifier, verbosity, X, y)\n cache=nothing\n report=nothing\n fitresult = (clf = clf.mod.cls, labels = levels(y))\n return fitresult, cache, report\nend\n\nfunction MLJBase.predict(clf::IntentClassifier, fitresult, Xnew)\n output = fitresult.clf(get_hidden_state(clf, Xnew))\n p̂ = UnivariateFinite(fitresult.labels,softmax(output.logit)',pool=missing)\n return p̂\nend\n\nMLJBase.target_scitype(clf::IntentClassifier) = AbstractVector{<:Finite}\n\nMLJBase.predict_mode(clf::IntentClassifier, fitresult, Xnew) = mode.(MLJBase.predict(clf, fitresult, Xnew))","category":"page"},{"location":"how_to_guides/llm/","page":"How to Conformalize a Large Language Model","title":"How to Conformalize a Large Language Model","text":"To test that everything is working as expected, we fit the model and generate predictions for a subset of the test data:","category":"page"},{"location":"how_to_guides/llm/","page":"How to Conformalize a Large Language Model","title":"How to Conformalize a Large Language 
Model","text":"clf = IntentClassifier(tkr, mod)\ntop_n = 10\nfitresult, _, _ = MLJBase.fit(clf, 1, nothing, y_test[1:top_n])\n@time ŷ = MLJBase.predict(clf, fitresult, queries_test[1:top_n]);","category":"page"},{"location":"how_to_guides/llm/#Conformal-Chatbot","page":"How to Conformalize a Large Language Model","title":"Conformal Chatbot","text":"","category":"section"},{"location":"how_to_guides/llm/","page":"How to Conformalize a Large Language Model","title":"How to Conformalize a Large Language Model","text":"To turn the wrapped, pre-trained model into a conformal intent classifier, we can now rely on standard API calls. We first wrap our atomic model where we also specify the desired coverage rate and method. Since even simple forward passes are computationally expensive for our (small) LLM, we rely on Simple Inductive Conformal Classification.","category":"page"},{"location":"how_to_guides/llm/","page":"How to Conformalize a Large Language Model","title":"How to Conformalize a Large Language Model","text":"#| eval: false\n\nconf_model = conformal_model(clf; coverage=0.95, method=:simple_inductive, train_ratio=train_ratio)\nmach = machine(conf_model, queries, y)\n@time fit!(mach)\nSerialization.serialize(\"dev/artifacts/models/banking77/simple_inductive.jls\", mach)","category":"page"},{"location":"how_to_guides/llm/","page":"How to Conformalize a Large Language Model","title":"How to Conformalize a Large Language Model","text":"Finally, we use our conformal LLM to build a simple and yet powerful chatbot that runs directly in the Julia REPL. 
Without dwelling on the details too much, the conformal_chatbot works as follows:","category":"page"},{"location":"how_to_guides/llm/","page":"How to Conformalize a Large Language Model","title":"How to Conformalize a Large Language Model","text":"Prompt user to explain their intent.\nFeed user input through conformal LLM and present the output to the user.\nIf the conformal prediction set includes more than one label, prompt the user to either refine their input or choose one of the options included in the set.","category":"page"},{"location":"how_to_guides/llm/","page":"How to Conformalize a Large Language Model","title":"How to Conformalize a Large Language Model","text":"mach = Serialization.deserialize(\"dev/artifacts/models/banking77/simple_inductive.jls\")\n\nfunction prediction_set(mach, query::String)\n p̂ = MLJBase.predict(mach, query)[1]\n probs = pdf.(p̂, collect(1:77))\n in_set = findall(probs .!= 0)\n labels_in_set = labels[in_set]\n probs_in_set = probs[in_set]\n _order = sortperm(-probs_in_set)\n plt = UnicodePlots.barplot(labels_in_set[_order], probs_in_set[_order], title=\"Possible Intents\")\n return labels_in_set, plt\nend\n\nfunction conformal_chatbot()\n println(\"👋 Hi, I'm Julia, your conformal chatbot. I'm here to help you with your banking query. Ask me anything or type 'exit' to exit ...\\n\")\n completed = false\n queries = \"\"\n while !completed\n query = readline()\n queries = queries * \",\" * query\n labels, plt = prediction_set(mach, queries)\n if length(labels) > 1\n println(\"🤔 Hmmm ... I can think of several options here. If any of these applies, simply type the corresponding number (e.g. '1' for the first option). Otherwise, can you refine your question, please?\\n\")\n println(plt)\n else\n println(\"🥳 I think you mean $(labels[1]). Correct?\")\n end\n\n # Exit:\n if query == \"exit\"\n println(\"👋 Bye!\")\n break\n end\n if query ∈ string.(collect(1:77))\n println(\"👍 Great! You've chosen '$(labels[parse(Int64, query)])'. 
I'm glad I could help you. Have a nice day!\")\n completed = true\n end\n end\nend","category":"page"},{"location":"how_to_guides/llm/","page":"How to Conformalize a Large Language Model","title":"How to Conformalize a Large Language Model","text":"Below we show the output for two example queries. The first one is very ambiguous. As expected, the size of the prediction set is therefore large.","category":"page"},{"location":"how_to_guides/llm/","page":"How to Conformalize a Large Language Model","title":"How to Conformalize a Large Language Model","text":"ambiguous_query = \"transfer mondey?\"\nprediction_set(mach, ambiguous_query)[2]","category":"page"},{"location":"how_to_guides/llm/","page":"How to Conformalize a Large Language Model","title":"How to Conformalize a Large Language Model","text":" Possible Intents \n ┌ ┐ \n beneficiary_not_allowed ┤■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■ 0.150517 \n balance_not_updated_after_bank_transfer ┤■■■■■■■■■■■■■■■■■■■■■■ 0.111409 \n transfer_into_account ┤■■■■■■■■■■■■■■■■■■■ 0.0939535 \n transfer_not_received_by_recipient ┤■■■■■■■■■■■■■■■■■■ 0.091163 \n top_up_by_bank_transfer_charge ┤■■■■■■■■■■■■■■■■■■ 0.089306 \n failed_transfer ┤■■■■■■■■■■■■■■■■■■ 0.0888322 \n transfer_timing ┤■■■■■■■■■■■■■ 0.0641952 \n transfer_fee_charged ┤■■■■■■■ 0.0361131 \n pending_transfer ┤■■■■■ 0.0270795 \n receiving_money ┤■■■■■ 0.0252126 \n declined_transfer ┤■■■ 0.0164443 \n cancel_transfer ┤■■■ 0.0150444 \n └ ┘","category":"page"},{"location":"how_to_guides/llm/","page":"How to Conformalize a Large Language Model","title":"How to Conformalize a Large Language Model","text":"The more refined version of the prompt yields a smaller prediction set: less ambiguous prompts result in lower predictive uncertainty.","category":"page"},{"location":"how_to_guides/llm/","page":"How to Conformalize a Large Language Model","title":"How to Conformalize a Large Language Model","text":"refined_query = \"I tried to transfer money to my friend, but it 
failed.\"\nprediction_set(mach, refined_query)[2]","category":"page"},{"location":"how_to_guides/llm/","page":"How to Conformalize a Large Language Model","title":"How to Conformalize a Large Language Model","text":" Possible Intents \n ┌ ┐ \n failed_transfer ┤■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■ 0.59042 \n beneficiary_not_allowed ┤■■■■■■■ 0.139806 \n transfer_not_received_by_recipient ┤■■ 0.0449783 \n balance_not_updated_after_bank_transfer ┤■■ 0.037894 \n declined_transfer ┤■ 0.0232856 \n transfer_into_account ┤■ 0.0108771 \n cancel_transfer ┤ 0.00876369 \n └ ┘","category":"page"},{"location":"how_to_guides/llm/","page":"How to Conformalize a Large Language Model","title":"How to Conformalize a Large Language Model","text":"Below we include a short demo video that shows the REPL-based chatbot in action.","category":"page"},{"location":"how_to_guides/llm/","page":"How to Conformalize a Large Language Model","title":"How to Conformalize a Large Language Model","text":"(Image: )","category":"page"},{"location":"how_to_guides/llm/#Final-Remarks","page":"How to Conformalize a Large Language Model","title":"Final Remarks","text":"","category":"section"},{"location":"how_to_guides/llm/","page":"How to Conformalize a Large Language Model","title":"How to Conformalize a Large Language Model","text":"This work was done in collaboration with colleagues at ING as part of the ING Analytics 2023 Experiment Week. Our team demonstrated that Conformal Prediction provides a powerful and principled alternative to top-K intent classification. We won the first prize by popular vote.","category":"page"},{"location":"how_to_guides/llm/#References","page":"How to Conformalize a Large Language Model","title":"References","text":"","category":"section"},{"location":"how_to_guides/llm/","page":"How to Conformalize a Large Language Model","title":"How to Conformalize a Large Language Model","text":"Casanueva, Iñigo, Tadas Temčinas, Daniela Gerz, Matthew Henderson, and Ivan Vulić. 2020. 
“Efficient Intent Detection with Dual Sentence Encoders.” In Proceedings of the 2nd Workshop on Natural Language Processing for Conversational AI, 38–45. Online: Association for Computational Linguistics. https://doi.org/10.18653/v1/2020.nlp4convai-1.5.","category":"page"},{"location":"how_to_guides/llm/","page":"How to Conformalize a Large Language Model","title":"How to Conformalize a Large Language Model","text":"Liu, Yinhan, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. “RoBERTa: A Robustly Optimized BERT Pretraining Approach.” arXiv. https://doi.org/10.48550/arXiv.1907.11692.","category":"page"},{"location":"#ConformalPrediction","page":"🏠 Home","title":"ConformalPrediction","text":"","category":"section"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"CurrentModule = ConformalPrediction","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"(Image: )","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"Documentation for ConformalPrediction.jl.","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"ConformalPrediction.jl is a package for Predictive Uncertainty Quantification (UQ) through Conformal Prediction (CP) in Julia. It is designed to work with supervised models trained in MLJ (Blaom et al. 2020). Conformal Prediction is easy-to-understand, easy-to-use and model-agnostic and it works under minimal distributional assumptions.","category":"page"},{"location":"#Quick-Tour","page":"🏠 Home","title":"🏃 Quick Tour","text":"","category":"section"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"First time here? 
Take a quick interactive tour to see what this package can do right on JuliaHub (To run the notebook, hit login and then edit).","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"This Pluto.jl 🎈 notebook won the 2nd Prize in the JuliaCon 2023 Notebook Competition.","category":"page"},{"location":"#Local-Tour","page":"🏠 Home","title":"Local Tour","text":"","category":"section"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"To run the tour locally, just clone this repo and start Pluto.jl as follows:","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"] add Pluto\nusing Pluto\nPluto.run()","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"All notebooks are contained in docs/pluto.","category":"page"},{"location":"#Background","page":"🏠 Home","title":"📖 Background","text":"","category":"section"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"Don’t worry, we’re not about to deep-dive into methodology. But just to give you a high-level description of Conformal Prediction (CP) upfront:","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"Conformal prediction (a.k.a. conformal inference) is a user-friendly paradigm for creating statistically rigorous uncertainty sets/intervals for the predictions of such models. Critically, the sets are valid in a distribution-free sense: they possess explicit, non-asymptotic guarantees even without distributional assumptions or model assumptions.— Angelopoulos and Bates (2021)","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"Intuitively, CP works under the premise of turning heuristic notions of uncertainty into rigorous uncertainty estimates through repeated sampling or the use of dedicated calibration data.","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"(Image: Conformal Prediction in action: prediction intervals at varying coverage rates. 
As coverage grows, so does the width of the prediction interval.)","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"The animation above is lifted from a small blog post that introduces Conformal Prediction and this package in the context of regression. It shows how the prediction interval and the test points that it covers vary in size as the user-specified coverage rate changes.","category":"page"},{"location":"#Installation","page":"🏠 Home","title":"🚩 Installation","text":"","category":"section"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"You can install the latest stable release from the general registry:","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"using Pkg\nPkg.add(\"ConformalPrediction\")","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"The development version can be installed as follows:","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"using Pkg\nPkg.add(url=\"https://github.com/juliatrustworthyai/ConformalPrediction.jl\")","category":"page"},{"location":"#Usage-Example","page":"🏠 Home","title":"🔍 Usage Example","text":"","category":"section"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"To illustrate the intended use of the package, let’s have a quick look at a simple regression problem. 
We first generate some synthetic data and then determine indices for our training and test data using MLJ:","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"using MLJ\n\n# Inputs:\nN = 600\nxmax = 3.0\nusing Distributions\nd = Uniform(-xmax, xmax)\nX = rand(d, N)\nX = reshape(X, :, 1)\n\n# Outputs:\nnoise = 0.5\nfun(X) = sin(X)\nε = randn(N) .* noise\ny = @.(fun(X)) + ε\ny = vec(y)\n\n# Partition:\ntrain, test = partition(eachindex(y), 0.4, 0.4, shuffle=true)","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"We then import Symbolic Regressor (SymbolicRegression.jl) following the standard MLJ procedure.","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"regressor = @load SRRegressor pkg=SymbolicRegression\nmodel = regressor(\n    niterations=50,\n    binary_operators=[+, -, *],\n    unary_operators=[sin],\n)","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"To turn our conventional model into a conformal model, we just need to declare it as such by using the conformal_model wrapper function. The generated conformal model instance can be wrapped in data to create a machine. Finally, we proceed by fitting the machine on the training data using the generic fit! method:","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"using ConformalPrediction\nconf_model = conformal_model(model)\nmach = machine(conf_model, X, y)\nfit!(mach, rows=train)","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"Predictions can then be computed using the generic predict method. The code below produces predictions for the first n samples. 
Each tuple contains the lower and upper bound for the prediction interval.","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"show_first = 5\nXtest = selectrows(X, test)\nytest = y[test]\nŷ = predict(mach, Xtest)\nŷ[1:show_first]","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"5-element Vector{Tuple{Float64, Float64}}:\n (-0.04087262272113379, 1.8635644669554758)\n (0.04647464096907805, 1.9509117306456876)\n (-0.24248802236397216, 1.6619490673126376)\n (-0.07841928163933476, 1.8260178080372749)\n (-0.02268628324126465, 1.881750806435345)","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"For simple models like this one, we can call a custom Plots recipe on our instance, fit result and data to generate the chart below:","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"using Plots\nzoom = 0\nplt = plot(mach.model, mach.fitresult, Xtest, ytest, lw=5, zoom=zoom, observed_lab=\"Test points\")\nxrange = range(-xmax+zoom,xmax-zoom,length=N)\nplot!(plt, xrange, @.(fun(xrange)), lw=2, ls=:dash, colour=:darkorange, label=\"Ground truth\")","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"(Image: )","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"We can evaluate the conformal model using the standard MLJ workflow with a custom performance measure. 
You can use either emp_coverage for the overall empirical coverage (correctness) or ssc for the size-stratified coverage rate (adaptiveness).","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"_eval = evaluate!(mach; measure=[emp_coverage, ssc], verbosity=0)\ndisplay(_eval)\nprintln(\"Empirical coverage: $(round(_eval.measurement[1], digits=3))\")\nprintln(\"SSC: $(round(_eval.measurement[2], digits=3))\")","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"PerformanceEvaluation object with these fields:\n model, measure, operation, measurement, per_fold,\n per_observation, fitted_params_per_fold,\n report_per_fold, train_test_rows, resampling, repeats\nExtract:\n┌──────────────────────────────────────────────┬───────────┬─────────────┬──────\n│ measure │ operation │ measurement │ 1.9 ⋯\n├──────────────────────────────────────────────┼───────────┼─────────────┼──────\n│ ConformalPrediction.emp_coverage │ predict │ 0.953 │ 0.0 ⋯\n│ ConformalPrediction.size_stratified_coverage │ predict │ 0.953 │ 0.0 ⋯\n└──────────────────────────────────────────────┴───────────┴─────────────┴──────\n 2 columns omitted\n\nEmpirical coverage: 0.953\nSSC: 0.953","category":"page"},{"location":"#Read-on","page":"🏠 Home","title":"📚 Read on","text":"","category":"section"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"If after reading the usage example above you are just left with more questions about the topic, that’s normal. 
Below we have collected a number of further resources to help you get started with this package and the topic itself:","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"Blog post introducing conformal classifiers: [Quarto], [TDS], [Forem].\nBlog post applying CP to a deep learning image classifier: [Quarto], [TDS], [Forem].\nThe package docs and in particular the FAQ.","category":"page"},{"location":"#External-Resources","page":"🏠 Home","title":"External Resources","text":"","category":"section"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"A Gentle Introduction to Conformal Prediction and Distribution-Free Uncertainty Quantification by Angelopoulos and Bates (2021) (pdf).\nPredictive inference with the jackknife+ by Barber et al. (2021) (pdf).\nAwesome Conformal Prediction repository by Valery Manokhin (repo).\nDocumentation for the Python package MAPIE.","category":"page"},{"location":"#Status","page":"🏠 Home","title":"🔁 Status","text":"","category":"section"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"This package is in its early stages of development and therefore still subject to changes to the core architecture and API.","category":"page"},{"location":"#Implemented-Methodologies","page":"🏠 Home","title":"Implemented Methodologies","text":"","category":"section"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"The following CP approaches have been implemented:","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"Regression:","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"Inductive\nNaive Transductive\nJackknife\nJackknife+\nJackknife-minmax\nCV+\nCV-minmax","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"Classification:","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"Inductive\nNaive Transductive\nAdaptive Inductive","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"The 
package has been tested for the following supervised models offered by MLJ.","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"Regression:","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"keys(tested_atomic_models[:regression])","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"KeySet for a Dict{Symbol, Expr} with 5 entries. Keys:\n :ridge\n :lasso\n :evo_tree\n :nearest_neighbor\n :linear","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"Classification:","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"keys(tested_atomic_models[:classification])","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"KeySet for a Dict{Symbol, Expr} with 3 entries. Keys:\n :nearest_neighbor\n :evo_tree\n :logistic","category":"page"},{"location":"#Implemented-Evaluation-Metrics","page":"🏠 Home","title":"Implemented Evaluation Metrics","text":"","category":"section"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"To evaluate conformal predictors we are typically interested in correctness and adaptiveness. The former can be evaluated by looking at the empirical coverage rate, while the latter can be assessed through metrics that address the conditional coverage (Angelopoulos and Bates 2021). To this end, the following metrics have been implemented:","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"emp_coverage (empirical coverage)\nssc (size-stratified coverage)","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"There is also a simple Plots.jl recipe that can be used to inspect the set sizes. 
In the regression case, the interval width is stratified into discrete bins for this purpose:","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"bar(mach.model, mach.fitresult, X)","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"(Image: )","category":"page"},{"location":"#Contribute","page":"🏠 Home","title":"🛠 Contribute","text":"","category":"section"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"Contributions are welcome! A good place to start is the list of outstanding issues. For more details, see also the Contributor’s Guide. Please follow the SciML ColPrac guide.","category":"page"},{"location":"#Thanks","page":"🏠 Home","title":"🙏 Thanks","text":"","category":"section"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"To build this package I have read and re-read both Angelopoulos and Bates (2021) and Barber et al. (2021). The Awesome Conformal Prediction repository (Manokhin, n.d.) has also been a fantastic place to get started. Thanks also to @aangelopoulos, @valeman and others for actively contributing to discussions on here. Quite a few people have also recently started using and contributing to the package for which I am very grateful. Finally, many thanks to Anthony Blaom (@ablaom) for many helpful discussions about how to interface this package to MLJ.jl.","category":"page"},{"location":"#References","page":"🏠 Home","title":"🎓 References","text":"","category":"section"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"Angelopoulos, Anastasios N., and Stephen Bates. 2021. “A Gentle Introduction to Conformal Prediction and Distribution-Free Uncertainty Quantification.” https://arxiv.org/abs/2107.07511.","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"Barber, Rina Foygel, Emmanuel J. Candès, Aaditya Ramdas, and Ryan J. Tibshirani. 2021. “Predictive Inference with the Jackknife+.” The Annals of Statistics 49 (1): 486–507. 
https://doi.org/10.1214/20-AOS1965.","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"Blaom, Anthony D., Franz Kiraly, Thibaut Lienart, Yiannis Simillides, Diego Arenas, and Sebastian J. Vollmer. 2020. “MLJ: A Julia Package for Composable Machine Learning.” Journal of Open Source Software 5 (55): 2704. https://doi.org/10.21105/joss.02704.","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"Manokhin, Valery. n.d. “Awesome Conformal Prediction.”","category":"page"}] +[{"location":"how_to_guides/","page":"Overview","title":"Overview","text":"CurrentModule = ConformalPrediction","category":"page"},{"location":"how_to_guides/#How-To-Guides","page":"Overview","title":"How-To Guides","text":"","category":"section"},{"location":"how_to_guides/","page":"Overview","title":"Overview","text":"In this section you will find a series of how-to-guides that showcase specific use cases of Conformal Prediction.","category":"page"},{"location":"how_to_guides/","page":"Overview","title":"Overview","text":"How-to guides are directions that take the reader through the steps required to solve a real-world problem. How-to guides are goal-oriented.— Diátaxis","category":"page"},{"location":"how_to_guides/","page":"Overview","title":"Overview","text":"In other words, you come here because you may have some particular problem in mind, would like to see how it can be solved using CP and then most likely head off again 🫡","category":"page"},{"location":"tutorials/regression/#Regression","page":"Regression","title":"Regression","text":"","category":"section"},{"location":"tutorials/regression/","page":"Regression","title":"Regression","text":"This tutorial presents and compares different approaches to Conformal Regression using a simple synthetic dataset. 
It is inspired by this MAPIE tutorial.","category":"page"},{"location":"tutorials/regression/#Data","page":"Regression","title":"Data","text":"","category":"section"},{"location":"tutorials/regression/","page":"Regression","title":"Regression","text":"We begin by generating some synthetic regression data below:","category":"page"},{"location":"tutorials/regression/","page":"Regression","title":"Regression","text":"# Regression data:\n\n# Inputs:\nN = 600\nxmax = 5.0\nusing Distributions\nd = Uniform(-xmax, xmax)\nX = rand(d, N)\nX = reshape(X, :, 1)\n\n# Outputs:\nnoise = 0.5\nfun(X) = X * sin(X)\nε = randn(N) .* noise\ny = @.(fun(X)) + ε\ny = vec(y)\n\n# Partition:\nusing MLJ\ntrain, test = partition(eachindex(y), 0.4, 0.4, shuffle=true)\n\nusing Plots\nscatter(X, y, label=\"Observed\")\nxrange = range(-xmax,xmax,length=N)\nplot!(xrange, @.(fun(xrange)), lw=4, label=\"Ground truth\", ls=:dash, colour=:black)","category":"page"},{"location":"tutorials/regression/#Model","page":"Regression","title":"Model","text":"","category":"section"},{"location":"tutorials/regression/","page":"Regression","title":"Regression","text":"To model this data we will use polynomial regression. There is currently no out-of-the-box support for polynomial feature transformations in MLJ, but it is easy enough to add a little helper function for this. Note how we define a linear pipeline pipe here. 
Since pipelines in MLJ are just models, we can use the generated object as an input to conformal_model below.","category":"page"},{"location":"tutorials/regression/","page":"Regression","title":"Regression","text":"LinearRegressor = @load LinearRegressor pkg=MLJLinearModels\ndegree_polynomial = 10\npolynomial_features(X, degree::Int) = reduce(hcat, map(i -> X.^i, 1:degree))\npipe = (X -> MLJ.table(polynomial_features(MLJ.matrix(X), degree_polynomial))) |> LinearRegressor()","category":"page"},{"location":"tutorials/regression/#Conformal-Prediction","page":"Regression","title":"Conformal Prediction","text":"","category":"section"},{"location":"tutorials/regression/","page":"Regression","title":"Regression","text":"Next, we conformalize our polynomial regressor using every available approach (except the Naive approach):","category":"page"},{"location":"tutorials/regression/","page":"Regression","title":"Regression","text":"using ConformalPrediction\nconformal_models = merge(values(available_models[:regression])...)\nresults = Dict()\nfor _mod in keys(conformal_models) \n conf_model = conformal_model(pipe; method=_mod, coverage=0.95)\n global mach = machine(conf_model, X, y)\n MLJ.fit!(mach, rows=train)\n results[_mod] = mach\nend","category":"page"},{"location":"tutorials/regression/","page":"Regression","title":"Regression","text":"Finally, let us look at the resulting conformal predictions in each case. The chart below shows the results: for the first 4 methods it displays the training data (dots) overlaid with the conformal prediction interval (shaded area). At first glance it is hard to spot any major differences between the different approaches. 
Next, we will look at how we can evaluate and benchmark these predictions.","category":"page"},{"location":"tutorials/regression/","page":"Regression","title":"Regression","text":"using Plots\nzoom = -0.5\nxrange = range(-xmax+zoom,xmax-zoom,length=N)\nplt_list = []\nn_charts = 4  # number of methods to display\n\nfor (_mod, mach) in first(results, n_charts)\n    plt = plot(mach.model, mach.fitresult, X, y, zoom=zoom, title=_mod)\n    plot!(plt, xrange, @.(fun(xrange)), lw=1, ls=:dash, colour=:black, label=\"Ground truth\")\n    push!(plt_list, plt)\nend\n\nplot(plt_list..., size=(800,500))","category":"page"},{"location":"tutorials/regression/","page":"Regression","title":"Regression","text":"(Image: Figure 1: Conformal prediction regions.)","category":"page"},{"location":"tutorials/regression/#Evaluation","page":"Regression","title":"Evaluation","text":"","category":"section"},{"location":"tutorials/regression/","page":"Regression","title":"Regression","text":"For evaluation of conformal predictors we follow Angelopoulos and Bates (2021) (Section 3). As a first step towards assessing adaptiveness, the authors recommend inspecting the set size of conformal predictions. 
The chart below shows the interval width for the different methods along with the ground truth interval width:","category":"page"},{"location":"tutorials/regression/","page":"Regression","title":"Regression","text":"xrange = range(-xmax,xmax,length=N)\nplt = plot(xrange, ones(N) .* (1.96*2*noise), ls=:dash, colour=:black, label=\"Ground truth\", lw=2)\nfor (_mod, mach) in results\n    ŷ = predict(mach, reshape([x for x in xrange], :, 1))\n    y_size = set_size.(ŷ)\n    plot!(xrange, y_size, label=String(_mod))\nend\nplt","category":"page"},{"location":"tutorials/regression/","page":"Regression","title":"Regression","text":"(Image: Figure 2: Prediction interval width.)","category":"page"},{"location":"tutorials/regression/","page":"Regression","title":"Regression","text":"We can also use specific metrics like empirical coverage and size-stratified coverage to check for correctness and adaptiveness, respectively (Angelopoulos and Bates 2021). To this end, the package provides custom measures that are compatible with MLJ.jl. 
In other words, we can evaluate model performance in true MLJ.jl fashion (see here).","category":"page"},{"location":"tutorials/regression/","page":"Regression","title":"Regression","text":"The code below runs the evaluation with respect to both metrics, emp_coverage and ssc for a single conformal machine:","category":"page"},{"location":"tutorials/regression/","page":"Regression","title":"Regression","text":"_mod, mach = first(results)\n_eval = evaluate!(\n mach,\n operation=predict,\n measure=[emp_coverage, ssc]\n)\ndisplay(_eval)\nprintln(\"Empirical coverage for $(_mod): $(round(_eval.measurement[1], digits=3))\")\nprintln(\"SSC for $(_mod): $(round(_eval.measurement[2], digits=3))\")","category":"page"},{"location":"tutorials/regression/","page":"Regression","title":"Regression","text":"PerformanceEvaluation object with these fields:\n measure, operation, measurement, per_fold,\n per_observation, fitted_params_per_fold,\n report_per_fold, train_test_rows\nExtract:\n┌──────────────────────────────────────────────┬───────────┬─────────────┬──────\n│ measure │ operation │ measurement │ 1.9 ⋯\n├──────────────────────────────────────────────┼───────────┼─────────────┼──────\n│ ConformalPrediction.emp_coverage │ predict │ 0.94 │ 0.0 ⋯\n│ ConformalPrediction.size_stratified_coverage │ predict │ 0.94 │ 0.0 ⋯\n└──────────────────────────────────────────────┴───────────┴─────────────┴──────\n 2 columns omitted\n\nEmpirical coverage for jackknife_plus_ab: 0.94\nSSC for jackknife_plus_ab: 0.94","category":"page"},{"location":"tutorials/regression/","page":"Regression","title":"Regression","text":"Note that, in the regression case, stratified set sizes correspond to discretized interval widths.","category":"page"},{"location":"tutorials/regression/","page":"Regression","title":"Regression","text":"To benchmark the different approaches, we evaluate them iteratively below. 
As expected, more conservative approaches like Jackknife-minmax and CV-minmax attain higher aggregate and conditional coverage. Note that size-stratified coverage is not available for methods that produce constant intervals, like the standard Jackknife.","category":"page"},{"location":"tutorials/regression/","page":"Regression","title":"Regression","text":"using DataFrames\nbmk = DataFrame()\nfor (_mod, mach) in results\n    _eval = evaluate!(\n        mach,\n        resampling=CV(;nfolds=5),\n        operation=predict,\n        measure=[emp_coverage, ssc]\n    )\n    _bmk = DataFrame(\n        Dict(\n            :model => _mod,\n            :emp_coverage => _eval.measurement[1],\n            :ssc => _eval.measurement[2]\n        )\n    )\n    bmk = vcat(bmk, _bmk)\nend\n\nshow(sort(select!(bmk, [2,1,3]), 2, rev=true))","category":"page"},{"location":"tutorials/regression/","page":"Regression","title":"Regression","text":"9×3 DataFrame\n Row │ model                     emp_coverage  ssc      \n     │ Symbol                    Float64       Float64  \n─────┼──────────────────────────────────────────────────\n   1 │ jackknife_plus_ab_minmax      0.988333  0.980547\n   2 │ cv_minmax                     0.96      0.910873\n   3 │ simple_inductive              0.953333  0.953333\n   4 │ jackknife_minmax              0.946667  0.869103\n   5 │ cv_plus                       0.945     0.866549\n   6 │ jackknife_plus_ab             0.941667  0.941667\n   7 │ jackknife_plus                0.941667  0.871606\n   8 │ jackknife                     0.941667  0.941667\n   9 │ naive                         0.938333  0.938333","category":"page"},{"location":"tutorials/regression/#References","page":"Regression","title":"References","text":"","category":"section"},{"location":"tutorials/regression/","page":"Regression","title":"Regression","text":"Angelopoulos, Anastasios N., and Stephen Bates. 2021. 
“A Gentle Introduction to Conformal Prediction and Distribution-Free Uncertainty Quantification.” https://arxiv.org/abs/2107.07511.","category":"page"},{"location":"how_to_guides/mnist/#How-to-Conformalize-a-Deep-Image-Classifier","page":"How to Conformalize a Deep Image Classifier","title":"How to Conformalize a Deep Image Classifier","text":"","category":"section"},{"location":"how_to_guides/mnist/","page":"How to Conformalize a Deep Image Classifier","title":"How to Conformalize a Deep Image Classifier","text":"Deep Learning is popular and — for some tasks like image classification — remarkably powerful. But it is also well-known that Deep Neural Networks (DNN) can be unstable (Goodfellow, Shlens, and Szegedy 2014) and poorly calibrated. Conformal Prediction can be used to mitigate these pitfalls. This how-to guide demonstrates how you can build an image classifier in Flux.jl and conformalize its predictions. For a formal treatment see A. Angelopoulos et al. (2022).","category":"page"},{"location":"how_to_guides/mnist/#The-Task-at-Hand","page":"How to Conformalize a Deep Image Classifier","title":"The Task at Hand","text":"","category":"section"},{"location":"how_to_guides/mnist/","page":"How to Conformalize a Deep Image Classifier","title":"How to Conformalize a Deep Image Classifier","text":"The task at hand is to predict the labels of handwritten images of digits using the famous MNIST dataset (LeCun 1998). 
Importing this popular machine learning dataset in Julia is made remarkably easy through MLDatasets.jl:","category":"page"},{"location":"how_to_guides/mnist/","page":"How to Conformalize a Deep Image Classifier","title":"How to Conformalize a Deep Image Classifier","text":"using MLDatasets\nN = 1000\nXraw, yraw = MNIST(split=:train)[:]\nXraw = Xraw[:,:,1:N]\nyraw = yraw[1:N]","category":"page"},{"location":"how_to_guides/mnist/","page":"How to Conformalize a Deep Image Classifier","title":"How to Conformalize a Deep Image Classifier","text":"The chart below shows a few random samples from the training data:","category":"page"},{"location":"how_to_guides/mnist/","page":"How to Conformalize a Deep Image Classifier","title":"How to Conformalize a Deep Image Classifier","text":"using MLJ\nusing Images\nX = map(x -> convert2image(MNIST, x), eachslice(Xraw, dims=3))\ny = coerce(yraw, Multiclass)\n\nn_samples = 10\nmosaic(rand(X, n_samples)..., ncol=n_samples)","category":"page"},{"location":"how_to_guides/mnist/","page":"How to Conformalize a Deep Image Classifier","title":"How to Conformalize a Deep Image Classifier","text":"(Image: Figure 1: Random samples from the MNIST dataset.)","category":"page"},{"location":"how_to_guides/mnist/#Building-the-Network","page":"How to Conformalize a Deep Image Classifier","title":"Building the Network","text":"","category":"section"},{"location":"how_to_guides/mnist/","page":"How to Conformalize a Deep Image Classifier","title":"How to Conformalize a Deep Image Classifier","text":"To model the mapping from image inputs to labels, we will rely on a simple Multi-Layer Perceptron (MLP). A great Julia library for Deep Learning is Flux.jl. But wait … doesn’t ConformalPrediction.jl work with models trained in MLJ.jl? That’s right, but fortunately there exists a Flux.jl interface to MLJ.jl, namely MLJFlux.jl. 
The interface is still in its early stages, but already very powerful and easily accessible for anyone (like myself) who is used to building Neural Networks in Flux.jl.","category":"page"},{"location":"how_to_guides/mnist/","page":"How to Conformalize a Deep Image Classifier","title":"How to Conformalize a Deep Image Classifier","text":"In Flux.jl, you could build an MLP for this task as follows:","category":"page"},{"location":"how_to_guides/mnist/","page":"How to Conformalize a Deep Image Classifier","title":"How to Conformalize a Deep Image Classifier","text":"using Flux\n\nmlp = Chain(\n    Flux.flatten,\n    Dense(prod((28,28)), 32, relu),\n    Dense(32, 10)\n)","category":"page"},{"location":"how_to_guides/mnist/","page":"How to Conformalize a Deep Image Classifier","title":"How to Conformalize a Deep Image Classifier","text":"where (28,28) is just the input dimension (28x28 pixel images). Since we have ten digits, our output dimension is ten.[1]","category":"page"},{"location":"how_to_guides/mnist/","page":"How to Conformalize a Deep Image Classifier","title":"How to Conformalize a Deep Image Classifier","text":"We can do the exact same thing in MLJFlux.jl as follows:","category":"page"},{"location":"how_to_guides/mnist/","page":"How to Conformalize a Deep Image Classifier","title":"How to Conformalize a Deep Image Classifier","text":"using MLJFlux\n\nbuilder = MLJFlux.@builder Chain(\n    Flux.flatten,\n    Dense(prod(n_in), 32, relu),\n    Dense(32, n_out)\n)","category":"page"},{"location":"how_to_guides/mnist/","page":"How to Conformalize a Deep Image Classifier","title":"How to Conformalize a Deep Image Classifier","text":"where we rely on the @builder macro to make the transition from Flux.jl to MLJ.jl as seamless as possible. Finally, MLJFlux.jl already comes with a number of helper functions to define plain-vanilla networks. 
In this case, we will use the ImageClassifier with our custom builder and cross-entropy loss:","category":"page"},{"location":"how_to_guides/mnist/","page":"How to Conformalize a Deep Image Classifier","title":"How to Conformalize a Deep Image Classifier","text":"ImageClassifier = @load ImageClassifier\nclf = ImageClassifier(\n builder=builder,\n epochs=10,\n loss=Flux.crossentropy\n)","category":"page"},{"location":"how_to_guides/mnist/","page":"How to Conformalize a Deep Image Classifier","title":"How to Conformalize a Deep Image Classifier","text":"The generated instance clf is a model (in the MLJ.jl sense) so from this point on we can rely on standard MLJ.jl workflows. For example, we can wrap our model in data to create a machine and then evaluate it on a holdout set as follows:","category":"page"},{"location":"how_to_guides/mnist/","page":"How to Conformalize a Deep Image Classifier","title":"How to Conformalize a Deep Image Classifier","text":"mach = machine(clf, X, y)\n\nevaluate!(\n mach,\n resampling=Holdout(rng=123, fraction_train=0.8),\n operation=predict_mode,\n measure=[accuracy]\n)","category":"page"},{"location":"how_to_guides/mnist/","page":"How to Conformalize a Deep Image Classifier","title":"How to Conformalize a Deep Image Classifier","text":"The accuracy of our very simple model is not amazing, but good enough for the purpose of this tutorial. For each image, our MLP returns a softmax output for each possible digit: 0,1,2,3,…,9. Since each individual softmax output is valued between zero and one, y(k) ∈ (0,1), this is commonly interpreted as a probability: y(k) ≔ p(y=k|X). Edge cases – that is values close to either zero or one – indicate high predictive certainty. But this is only a heuristic notion of predictive uncertainty (A. N. Angelopoulos and Bates 2021). 
Next, we will turn this heuristic notion of uncertainty into a rigorous one using Conformal Prediction.","category":"page"},{"location":"how_to_guides/mnist/#Conformalizing-the-Network","page":"How to Conformalize a Deep Image Classifier","title":"Conformalizing the Network","text":"","category":"section"},{"location":"how_to_guides/mnist/","page":"How to Conformalize a Deep Image Classifier","title":"How to Conformalize a Deep Image Classifier","text":"Since clf is a model, it is also compatible with our package: ConformalPrediction.jl. To conformalize our MLP, we therefore only need to call conformal_model(clf). Since the generated instance conf_model is also just a model, we can still rely on standard MLJ.jl workflows. Below we first wrap it in data and then fit it. Aaaand … we’re done! Let’s look at the results in the next section.","category":"page"},{"location":"how_to_guides/mnist/","page":"How to Conformalize a Deep Image Classifier","title":"How to Conformalize a Deep Image Classifier","text":"using ConformalPrediction\nconf_model = conformal_model(clf; method=:simple_inductive)\nmach = machine(conf_model, X, y)\nfit!(mach)","category":"page"},{"location":"how_to_guides/mnist/#Results","page":"How to Conformalize a Deep Image Classifier","title":"Results","text":"","category":"section"},{"location":"how_to_guides/mnist/","page":"How to Conformalize a Deep Image Classifier","title":"How to Conformalize a Deep Image Classifier","text":"The charts below present the results. The first row displays highly certain predictions, now defined in the rigorous sense of Conformal Prediction: in each case, the conformal set (just beneath the image) includes only one label.","category":"page"},{"location":"how_to_guides/mnist/","page":"How to Conformalize a Deep Image Classifier","title":"How to Conformalize a Deep Image Classifier","text":"The following two rows display increasingly uncertain predictions of set size two and three, respectively. 
They demonstrate that CP is well equipped to deal with samples characterized by high aleatoric uncertainty: digits four (4), seven (7) and nine (9) share certain similarities. So do digits five (5) and six (6) as well as three (3) and eight (8). These may be hard to distinguish from each other even after seeing many examples (and even for a human). It is therefore unsurprising to see that these digits often end up together in conformal sets.","category":"page"},{"location":"how_to_guides/mnist/","page":"How to Conformalize a Deep Image Classifier","title":"How to Conformalize a Deep Image Classifier","text":"(Image: Figure 2: Plot 1)","category":"page"},{"location":"how_to_guides/mnist/","page":"How to Conformalize a Deep Image Classifier","title":"How to Conformalize a Deep Image Classifier","text":"(Image: Figure 3: Plot 2)","category":"page"},{"location":"how_to_guides/mnist/","page":"How to Conformalize a Deep Image Classifier","title":"How to Conformalize a Deep Image Classifier","text":"(Image: Figure 4: Plot 3)","category":"page"},{"location":"how_to_guides/mnist/","page":"How to Conformalize a Deep Image Classifier","title":"How to Conformalize a Deep Image Classifier","text":"Conformalized predictions from an image classifier.","category":"page"},{"location":"how_to_guides/mnist/#Evaluation","page":"How to Conformalize a Deep Image Classifier","title":"Evaluation","text":"","category":"section"},{"location":"how_to_guides/mnist/","page":"How to Conformalize a Deep Image Classifier","title":"How to Conformalize a Deep Image Classifier","text":"As always, we can also evaluate our conformal model in terms of coverage (correctness) and size-stratified coverage (adaptiveness).","category":"page"},{"location":"how_to_guides/mnist/","page":"How to Conformalize a Deep Image Classifier","title":"How to Conformalize a Deep Image Classifier","text":"_eval = evaluate!(\n mach,\n resampling=Holdout(rng=123, fraction_train=0.8),\n operation=predict,\n 
measure=[emp_coverage, ssc]\n)\ndisplay(_eval)\nprintln(\"Empirical coverage: $(round(_eval.measurement[1], digits=3))\")\nprintln(\"SSC: $(round(_eval.measurement[2], digits=3))\")","category":"page"},{"location":"how_to_guides/mnist/","page":"How to Conformalize a Deep Image Classifier","title":"How to Conformalize a Deep Image Classifier","text":"PerformanceEvaluation object with these fields:\n measure, operation, measurement, per_fold,\n per_observation, fitted_params_per_fold,\n report_per_fold, train_test_rows\nExtract:\n┌──────────────────────────────────────────────┬───────────┬─────────────┬──────\n│ measure │ operation │ measurement │ per ⋯\n├──────────────────────────────────────────────┼───────────┼─────────────┼──────\n│ ConformalPrediction.emp_coverage │ predict │ 0.96 │ [0. ⋯\n│ ConformalPrediction.size_stratified_coverage │ predict │ 0.885 │ [0. ⋯\n└──────────────────────────────────────────────┴───────────┴─────────────┴──────\n 1 column omitted\n\nEmpirical coverage: 0.96\nSSC: 0.885","category":"page"},{"location":"how_to_guides/mnist/","page":"How to Conformalize a Deep Image Classifier","title":"How to Conformalize a Deep Image Classifier","text":"Unsurprisingly, we can attain higher adaptivity (SSC) when using adaptive prediction sets:","category":"page"},{"location":"how_to_guides/mnist/","page":"How to Conformalize a Deep Image Classifier","title":"How to Conformalize a Deep Image Classifier","text":"conf_model = conformal_model(clf; method=:adaptive_inductive)\nmach = machine(conf_model, X, y)\nfit!(mach)\n_eval = evaluate!(\n mach,\n resampling=Holdout(rng=123, fraction_train=0.8),\n operation=predict,\n measure=[emp_coverage, ssc]\n)\nresults[:adaptive_inductive] = mach\ndisplay(_eval)\nprintln(\"Empirical coverage: $(round(_eval.measurement[1], digits=3))\")\nprintln(\"SSC: $(round(_eval.measurement[2], digits=3))\")","category":"page"},{"location":"how_to_guides/mnist/","page":"How to Conformalize a Deep Image Classifier","title":"How 
to Conformalize a Deep Image Classifier","text":"PerformanceEvaluation object with these fields:\n measure, operation, measurement, per_fold,\n per_observation, fitted_params_per_fold,\n report_per_fold, train_test_rows\nExtract:\n┌──────────────────────────────────────────────┬───────────┬─────────────┬──────\n│ measure │ operation │ measurement │ per ⋯\n├──────────────────────────────────────────────┼───────────┼─────────────┼──────\n│ ConformalPrediction.emp_coverage │ predict │ 1.0 │ [1. ⋯\n│ ConformalPrediction.size_stratified_coverage │ predict │ 1.0 │ [1. ⋯\n└──────────────────────────────────────────────┴───────────┴─────────────┴──────\n 1 column omitted\n\nEmpirical coverage: 1.0\nSSC: 1.0","category":"page"},{"location":"how_to_guides/mnist/","page":"How to Conformalize a Deep Image Classifier","title":"How to Conformalize a Deep Image Classifier","text":"We can also have a look at the resulting set size for both approaches:","category":"page"},{"location":"how_to_guides/mnist/","page":"How to Conformalize a Deep Image Classifier","title":"How to Conformalize a Deep Image Classifier","text":"plt_list = []\nfor (_mod, mach) in results\n push!(plt_list, bar(mach.model, mach.fitresult, X; title=String(_mod)))\nend\nplot(plt_list..., size=(800,300))","category":"page"},{"location":"how_to_guides/mnist/","page":"How to Conformalize a Deep Image Classifier","title":"How to Conformalize a Deep Image Classifier","text":"(Image: Figure 5: Prediction interval width.)","category":"page"},{"location":"how_to_guides/mnist/#References","page":"How to Conformalize a Deep Image Classifier","title":"References","text":"","category":"section"},{"location":"how_to_guides/mnist/","page":"How to Conformalize a Deep Image Classifier","title":"How to Conformalize a Deep Image Classifier","text":"Angelopoulos, Anastasios N., and Stephen Bates. 2021. 
“A Gentle Introduction to Conformal Prediction and Distribution-Free Uncertainty Quantification.” https://arxiv.org/abs/2107.07511.","category":"page"},{"location":"how_to_guides/mnist/","page":"How to Conformalize a Deep Image Classifier","title":"How to Conformalize a Deep Image Classifier","text":"Angelopoulos, Anastasios, Stephen Bates, Jitendra Malik, and Michael I. Jordan. 2022. “Uncertainty Sets for Image Classifiers Using Conformal Prediction.” arXiv. https://arxiv.org/abs/2009.14193.","category":"page"},{"location":"how_to_guides/mnist/","page":"How to Conformalize a Deep Image Classifier","title":"How to Conformalize a Deep Image Classifier","text":"Goodfellow, Ian J, Jonathon Shlens, and Christian Szegedy. 2014. “Explaining and Harnessing Adversarial Examples.” https://arxiv.org/abs/1412.6572.","category":"page"},{"location":"how_to_guides/mnist/","page":"How to Conformalize a Deep Image Classifier","title":"How to Conformalize a Deep Image Classifier","text":"LeCun, Yann. 1998. 
“The MNIST Database of Handwritten Digits.”","category":"page"},{"location":"how_to_guides/mnist/","page":"How to Conformalize a Deep Image Classifier","title":"How to Conformalize a Deep Image Classifier","text":"[1] For a full tutorial on how to build an MNIST image classifier relying solely on Flux.jl, check out this tutorial.","category":"page"},{"location":"tutorials/","page":"Overview","title":"Overview","text":"CurrentModule = ConformalPrediction","category":"page"},{"location":"tutorials/#Tutorials","page":"Overview","title":"Tutorials","text":"","category":"section"},{"location":"tutorials/","page":"Overview","title":"Overview","text":"In this section you will find a series of tutorials that should help you gain a basic understanding of Conformal Prediction and how to apply it in Julia using this package.","category":"page"},{"location":"tutorials/","page":"Overview","title":"Overview","text":"Tutorials are lessons that take the reader by the hand through a series of steps to complete a project of some kind. 
Tutorials are learning-oriented.— Diátaxis","category":"page"},{"location":"tutorials/","page":"Overview","title":"Overview","text":"In other words, you come here because you are new to this topic and are looking for a first peek at the methodology and code 🫣","category":"page"},{"location":"contribute/#Contributor’s-Guide","page":"🛠 Contribute","title":"Contributor’s Guide","text":"","category":"section"},{"location":"contribute/","page":"🛠 Contribute","title":"🛠 Contribute","text":"CurrentModule = ConformalPrediction","category":"page"},{"location":"contribute/#Contents","page":"🛠 Contribute","title":"Contents","text":"","category":"section"},{"location":"contribute/","page":"🛠 Contribute","title":"🛠 Contribute","text":"Pages = [\"contribute.md\"]\nDepth = 2","category":"page"},{"location":"contribute/#Contributing-to-ConformalPrediction.jl","page":"🛠 Contribute","title":"Contributing to ConformalPrediction.jl","text":"","category":"section"},{"location":"contribute/","page":"🛠 Contribute","title":"🛠 Contribute","text":"Contributions are welcome! Please follow the SciML ColPrac guide. To get started we recommend you have a look at the Explanation section in the docs. The subsection explaining the package architecture may be particularly useful. You may already have a specific idea about what you want to contribute, in which case please feel free to open an issue and pull request. If you don’t have anything specific in mind, the list of outstanding issues may be a good source of inspiration. 
If you decide to work on an outstanding issue, be sure to check its current status: if it’s “In Progress”, check in with the developer who last worked on the issue to see how you may help.","category":"page"},{"location":"how_to_guides/timeseries/#How-to-Conformalize-a-Time-Series-Model","page":"How to Conformalize a Time Series Model","title":"How to Conformalize a Time Series Model","text":"","category":"section"},{"location":"how_to_guides/timeseries/","page":"How to Conformalize a Time Series Model","title":"How to Conformalize a Time Series Model","text":"Time series data is prevalent across various domains, such as finance, weather forecasting, energy, and supply chains. However, accurately quantifying uncertainty in time series predictions is often a complex task due to inherent temporal dependencies, non-stationarity, and noise in the data. In this context, Conformal Prediction offers a valuable solution by providing prediction intervals which offer a sound way to quantify uncertainty.","category":"page"},{"location":"how_to_guides/timeseries/","page":"How to Conformalize a Time Series Model","title":"How to Conformalize a Time Series Model","text":"This how-to guide demonstrates how you can conformalize a time series model using Ensemble Batch Prediction Intervals (EnbPI) (Xu and Xie 2021). This method enables the updating of prediction intervals whenever new observations are available. 
This dynamic update process allows the method to adapt to changing conditions, accounting for the potential degradation of predictions or the increase in noise levels in the data.","category":"page"},{"location":"how_to_guides/timeseries/#The-Task-at-Hand","page":"How to Conformalize a Time Series Model","title":"The Task at Hand","text":"","category":"section"},{"location":"how_to_guides/timeseries/","page":"How to Conformalize a Time Series Model","title":"How to Conformalize a Time Series Model","text":"Inspired by MAPIE, we employ the Victoria electricity demand dataset. This dataset contains hourly electricity demand (in GW) for Victoria state in Australia, along with corresponding temperature data (in Celsius degrees).","category":"page"},{"location":"how_to_guides/timeseries/","page":"How to Conformalize a Time Series Model","title":"How to Conformalize a Time Series Model","text":"using CSV, DataFrames\ndf = CSV.read(\"./dev/artifacts/electricity_demand.csv\", DataFrame)","category":"page"},{"location":"how_to_guides/timeseries/#Feature-engineering","page":"How to Conformalize a Time Series Model","title":"Feature engineering","text":"","category":"section"},{"location":"how_to_guides/timeseries/","page":"How to Conformalize a Time Series Model","title":"How to Conformalize a Time Series Model","text":"In this how-to guide, we only focus on date, time and lag features.","category":"page"},{"location":"how_to_guides/timeseries/#Date-and-Time-related-features","page":"How to Conformalize a Time Series Model","title":"Date and Time-related features","text":"","category":"section"},{"location":"how_to_guides/timeseries/","page":"How to Conformalize a Time Series Model","title":"How to Conformalize a Time Series Model","text":"We create temporal features out of the date and hour:","category":"page"},{"location":"how_to_guides/timeseries/","page":"How to Conformalize a Time Series Model","title":"How to Conformalize a Time Series Model","text":"using 
Dates\ndf.Datetime = Dates.DateTime.(df.Datetime, \"yyyy-mm-dd HH:MM:SS\")\ndf.Weekofyear = Dates.week.(df.Datetime)\ndf.Weekday = Dates.dayofweek.(df.Datetime)\ndf.hour = Dates.hour.(df.Datetime) ","category":"page"},{"location":"how_to_guides/timeseries/","page":"How to Conformalize a Time Series Model","title":"How to Conformalize a Time Series Model","text":"Additionally, to simulate sudden changes caused by unforeseen events, such as blackouts or lockdowns, we deliberately reduce the electricity demand by 2GW from February 22nd onward.","category":"page"},{"location":"how_to_guides/timeseries/","page":"How to Conformalize a Time Series Model","title":"How to Conformalize a Time Series Model","text":"df.Demand_updated = copy(df.Demand)\ncondition = df.Datetime .>= Date(\"2014-02-22\")\ndf[condition, :Demand_updated] .= df[condition, :Demand_updated] .- 2","category":"page"},{"location":"how_to_guides/timeseries/","page":"How to Conformalize a Time Series Model","title":"How to Conformalize a Time Series Model","text":"This is what the data looks like after our manipulation:","category":"page"},{"location":"how_to_guides/timeseries/","page":"How to Conformalize a Time Series Model","title":"How to Conformalize a Time Series Model","text":"cutoff_point = 200\nplot(df[cutoff_point:split_index, [:Datetime]].Datetime, df[cutoff_point:split_index, :].Demand ,\n label=\"training data\", color=:green, xlabel = \"Date\" , ylabel=\"Electricity demand(GW)\")\nplot!(df[split_index+1 : size(df,1), [:Datetime]].Datetime, df[split_index+1 : size(df,1), : ].Demand,\n label=\"test data\", color=:orange, xlabel = \"Date\" , ylabel=\"Electricity demand(GW)\")\nplot!(df[split_index+1 : size(df,1), [:Datetime]].Datetime, df[split_index+1 : size(df,1), : ].Demand_updated, label=\"updated test data\", color=:red, linewidth=1, framestyle=:box)\nplot!(legend=:outerbottom, legendcolumns=3)\nplot!(size=(850,400), left_margin = 
5Plots.mm)","category":"page"},{"location":"how_to_guides/timeseries/#Lag-features","page":"How to Conformalize a Time Series Model","title":"Lag features","text":"","category":"section"},{"location":"how_to_guides/timeseries/","page":"How to Conformalize a Time Series Model","title":"How to Conformalize a Time Series Model","text":"using ShiftedArrays\nn_lags = 5\nfor i = 1:n_lags\n DataFrames.transform!(df, \"Demand\" => (x -> ShiftedArrays.lag(x, i)) => \"lag_hour_$i\")\nend\n\ndf_dropped_missing = dropmissing(df)\ndf_dropped_missing","category":"page"},{"location":"how_to_guides/timeseries/#Train-test-split","page":"How to Conformalize a Time Series Model","title":"Train-test split","text":"","category":"section"},{"location":"how_to_guides/timeseries/","page":"How to Conformalize a Time Series Model","title":"How to Conformalize a Time Series Model","text":"As usual, we split the data into train and test sets. We use the first 90% of the data for training and the remaining 10% for testing.","category":"page"},{"location":"how_to_guides/timeseries/","page":"How to Conformalize a Time Series Model","title":"How to Conformalize a Time Series Model","text":"features_cols = DataFrames.select(df_dropped_missing, Not([:Datetime, :Demand, :Demand_updated]))\nX = Matrix(features_cols)\ny = Matrix(df_dropped_missing[:, [:Demand_updated]])\nsplit_index = floor(Int, 0.9 * size(y , 1)) \nprintln(split_index)\nX_train = X[1:split_index, :]\ny_train = y[1:split_index, :]\nX_test = X[split_index+1 : size(y,1), :]\ny_test = y[split_index+1 : size(y,1), :]","category":"page"},{"location":"how_to_guides/timeseries/#Loading-model-using-MLJ-interface","page":"How to Conformalize a Time Series Model","title":"Loading model using MLJ interface","text":"","category":"section"},{"location":"how_to_guides/timeseries/","page":"How to Conformalize a Time Series Model","title":"How to Conformalize a Time Series Model","text":"As our baseline model, we use a boosted tree 
regressor:","category":"page"},{"location":"how_to_guides/timeseries/","page":"How to Conformalize a Time Series Model","title":"How to Conformalize a Time Series Model","text":"using MLJ\nEvoTreeRegressor = @load EvoTreeRegressor pkg=EvoTrees verbosity=0\nmodel = EvoTreeRegressor(nrounds =100, max_depth=10, rng=123)","category":"page"},{"location":"how_to_guides/timeseries/#Conformal-time-series","page":"How to Conformalize a Time Series Model","title":"Conformal time series","text":"","category":"section"},{"location":"how_to_guides/timeseries/","page":"How to Conformalize a Time Series Model","title":"How to Conformalize a Time Series Model","text":"Next, we conformalize the model using EnbPI. First, we will proceed without updating training set residuals to build prediction intervals. The result is shown in the following figure:","category":"page"},{"location":"how_to_guides/timeseries/","page":"How to Conformalize a Time Series Model","title":"How to Conformalize a Time Series Model","text":"using ConformalPrediction\n\nconf_model = conformal_model(model; method=:time_series_ensemble_batch, coverage=0.95)\nmach = machine(conf_model, X_train, y_train)\ntrain = [1:split_index;]\nfit!(mach, rows=train)\n\ny_pred_interval = MLJ.predict(conf_model, mach.fitresult, X_test)\nlb = [ minimum(tuple_data) for tuple_data in y_pred_interval]\nub = [ maximum(tuple_data) for tuple_data in y_pred_interval]\ny_pred = [mean(tuple_data) for tuple_data in y_pred_interval]","category":"page"},{"location":"how_to_guides/timeseries/","page":"How to Conformalize a Time Series Model","title":"How to Conformalize a Time Series Model","text":"#| echo: false\n#| output: true\ncutoff_point = findfirst(df_dropped_missing.Datetime .== Date(\"2014-02-15\"))\nplot(df_dropped_missing[cutoff_point:split_index, [:Datetime]].Datetime, y_train[cutoff_point:split_index] ,\n label=\"train\", color=:green , xlabel = \"Date\" , ylabel=\"Electricity demand(GW)\", 
linewidth=1)\nplot!(df_dropped_missing[split_index+1 : size(y,1), [:Datetime]].Datetime,\n y_test, label=\"test\", color=:red)\nplot!(df_dropped_missing[split_index+1 : size(y,1), [:Datetime]].Datetime ,\n y_pred, label =\"prediction\", color=:blue)\nplot!(df_dropped_missing[split_index+1 : size(y,1), [:Datetime]].Datetime,\n lb, fillrange = ub, fillalpha = 0.2, label = \"prediction interval w/o EnbPI\",\n color=:lake, linewidth=0, framestyle=:box)\nplot!(legend=:outerbottom, legendcolumns=4, legendfontsize=6)\nplot!(size=(850,400), left_margin = 5Plots.mm)","category":"page"},{"location":"how_to_guides/timeseries/","page":"How to Conformalize a Time Series Model","title":"How to Conformalize a Time Series Model","text":"We can use the partial_fit method of the EnbPI implementation in ConformalPrediction.jl to adjust prediction intervals to sudden change points in test sets that the model has not seen during training. In the experiment below, sample_size indicates the size of each batch of new observations. You can decide whether to update the residuals by sample_size only, or to also remove the first n residuals (shift_size = n). 
The latter allows you to remove early residuals that would not have a positive impact on the current observations.","category":"page"},{"location":"how_to_guides/timeseries/","page":"How to Conformalize a Time Series Model","title":"How to Conformalize a Time Series Model","text":"The chart below compares the results to the previous experiment without updating residuals:","category":"page"},{"location":"how_to_guides/timeseries/","page":"How to Conformalize a Time Series Model","title":"How to Conformalize a Time Series Model","text":"sample_size = 30\nshift_size = 100\nlast_index = size(X_test , 1)\nlb_updated , ub_updated = ([], [])\nfor step in 1:sample_size:last_index\n if last_index - step < sample_size\n y_interval = MLJ.predict(conf_model, mach.fitresult, X_test[step:last_index , :])\n partial_fit(mach.model , mach.fitresult, X_test[step:last_index , :], y_test[step:last_index , :], shift_size)\n else\n y_interval = MLJ.predict(conf_model, mach.fitresult, X_test[step:step+sample_size-1 , :])\n partial_fit(mach.model , mach.fitresult, X_test[step:step+sample_size-1 , :], y_test[step:step+sample_size-1 , :], shift_size) \n end \n lb_updatedᵢ= [ minimum(tuple_data) for tuple_data in y_interval]\n push!(lb_updated,lb_updatedᵢ)\n ub_updatedᵢ = [ maximum(tuple_data) for tuple_data in y_interval]\n push!(ub_updated, ub_updatedᵢ)\nend\nlb_updated = reduce(vcat, lb_updated)\nub_updated = reduce(vcat, ub_updated)","category":"page"},{"location":"how_to_guides/timeseries/","page":"How to Conformalize a Time Series Model","title":"How to Conformalize a Time Series Model","text":"#| echo: false\n#| output: true\nplot(df_dropped_missing[cutoff_point:split_index, [:Datetime]].Datetime, y_train[cutoff_point:split_index] ,\n label=\"train\", color=:green , xlabel = \"Date\" , ylabel=\"Electricity demand(GW)\", linewidth=1)\nplot!(df_dropped_missing[split_index+1 : size(y,1), [:Datetime]].Datetime, y_test,\n label=\"test\", color=:red)\nplot!(df_dropped_missing[split_index+1 : 
size(y,1), [:Datetime]].Datetime ,\n y_pred, label =\"prediction\", color=:blue)\nplot!(df_dropped_missing[split_index+1 : size(y,1), [:Datetime]].Datetime,\n lb_updated, fillrange = ub_updated, fillalpha = 0.2, label = \"EnbPI\",\n color=:lake, linewidth=0, framestyle=:box)\nplot!(legend=:outerbottom, legendcolumns=4)\nplot!(size=(850,400), left_margin = 5Plots.mm)","category":"page"},{"location":"how_to_guides/timeseries/#Results","page":"How to Conformalize a Time Series Model","title":"Results","text":"","category":"section"},{"location":"how_to_guides/timeseries/","page":"How to Conformalize a Time Series Model","title":"How to Conformalize a Time Series Model","text":"In time series problems, unexpected incidents can lead to sudden changes, and such scenarios are highly probable. As illustrated earlier, the model’s training data lacks information about these change points, making it unable to anticipate them. The top figure demonstrates that when residuals are not updated, the prediction intervals solely rely on the distribution of residuals from the training set. Consequently, these intervals fail to encompass the true observations after the change point, resulting in a sudden drop in coverage.","category":"page"},{"location":"how_to_guides/timeseries/","page":"How to Conformalize a Time Series Model","title":"How to Conformalize a Time Series Model","text":"However, by partially updating the residuals, the method becomes adept at capturing the increasing uncertainties in model predictions. It is important to note that the changes in uncertainty occur approximately one day after the change point. 
This delay is attributed to the requirement of having a sufficient number of new residuals to alter the quantiles obtained from the residual distribution.","category":"page"},{"location":"how_to_guides/timeseries/#References","page":"How to Conformalize a Time Series Model","title":"References","text":"","category":"section"},{"location":"how_to_guides/timeseries/","page":"How to Conformalize a Time Series Model","title":"How to Conformalize a Time Series Model","text":"Xu, Chen, and Yao Xie. 2021. “Conformal Prediction Interval for Dynamic Time-Series.” In, 11559–69. PMLR. https://proceedings.mlr.press/v139/xu21h.html.","category":"page"},{"location":"explanation/architecture/#Package-Architecture","page":"Package Architecture","title":"Package Architecture","text":"","category":"section"},{"location":"explanation/architecture/","page":"Package Architecture","title":"Package Architecture","text":"The diagram below demonstrates the package architecture at the time of writing. This is still subject to change, so any thoughts and comments are very much welcome.","category":"page"},{"location":"explanation/architecture/","page":"Package Architecture","title":"Package Architecture","text":"The goal is to make this package as compatible as possible with MLJ to tap into existing functionality. The basic idea is to subtype MLJ Supervised models and then use concrete types to implement different approaches to conformal prediction. 
For each of these concrete types the compulsory MMI.fit and MMI.predict methods need to be implemented (see here).","category":"page"},{"location":"explanation/architecture/","page":"Package Architecture","title":"Package Architecture","text":"(Image: )","category":"page"},{"location":"explanation/architecture/#Abstract-Subtypes","page":"Package Architecture","title":"Abstract Subtypes","text":"","category":"section"},{"location":"explanation/architecture/","page":"Package Architecture","title":"Package Architecture","text":"Currently, I intend to work with three different abstract subtypes:","category":"page"},{"location":"explanation/architecture/","page":"Package Architecture","title":"Package Architecture","text":"ConformalPrediction.ConformalInterval\nConformalPrediction.ConformalProbabilisticSet\nConformalPrediction.ConformalProbabilistic","category":"page"},{"location":"explanation/architecture/#fit-and-predict","page":"Package Architecture","title":"fit and predict","text":"","category":"section"},{"location":"explanation/architecture/","page":"Package Architecture","title":"Package Architecture","text":"The fit and predict methods are compulsory in order to prepare models for general use with MLJ. They also serve to implement the logic underlying the various approaches to conformal prediction. 
To understand how this currently works, have a look at the ConformalPrediction.AdaptiveInductiveClassifier as an example: fit(conf_model::ConformalPrediction.AdaptiveInductiveClassifier, verbosity, X, y) and predict(conf_model::ConformalPrediction.AdaptiveInductiveClassifier, fitresult, Xnew).","category":"page"},{"location":"explanation/","page":"Overview","title":"Overview","text":"CurrentModule = ConformalPrediction","category":"page"},{"location":"explanation/#Explanation","page":"Overview","title":"Explanation","text":"","category":"section"},{"location":"explanation/","page":"Overview","title":"Overview","text":"In this section you will find detailed explanations about the methodology and code.","category":"page"},{"location":"explanation/","page":"Overview","title":"Overview","text":"Explanation clarifies, deepens and broadens the reader’s understanding of a subject.— Diátaxis","category":"page"},{"location":"explanation/","page":"Overview","title":"Overview","text":"In other words, you come here because you are interested in understanding how all of this actually works 🤓","category":"page"},{"location":"tutorials/classification/#Classification","page":"Classification","title":"Classification","text":"","category":"section"},{"location":"tutorials/classification/","page":"Classification","title":"Classification","text":"This tutorial is based in parts on this blog post.","category":"page"},{"location":"tutorials/classification/#Split-Conformal-Classification","page":"Classification","title":"Split Conformal Classification","text":"","category":"section"},{"location":"tutorials/classification/","page":"Classification","title":"Classification","text":"We consider a simple binary classification problem. Let (X(i),Y(i)), i = 1, ..., n denote our feature-label pairs and let μ : 𝒳 ↦ 𝒴 denote the mapping from features to labels. For illustration purposes we will use the moons dataset 🌙. 
Using MLJ.jl we first generate the data and split it into a training and test set:","category":"page"},{"location":"tutorials/classification/","page":"Classification","title":"Classification","text":"using MLJ\nusing Random\nRandom.seed!(123)\n\n# Data:\nX, y = make_moons(500; noise=0.15)\nX = MLJ.table(convert.(Float32, MLJ.matrix(X)))\ntrain, test = partition(eachindex(y), 0.8, shuffle=true)","category":"page"},{"location":"tutorials/classification/","page":"Classification","title":"Classification","text":"Here we will use a specific case of CP called split conformal prediction which can then be summarized as follows:[1]","category":"page"},{"location":"tutorials/classification/","page":"Classification","title":"Classification","text":"Partition the training data into a proper training set and a separate calibration set: 𝒟_(n) = 𝒟^(train) ∪ 𝒟^(cali).\nTrain the machine learning model on the proper training set: μ̂(i ∈ 𝒟^(train))(X(i),Y_(i)).\nCompute nonconformity scores, 𝒮, using the calibration data 𝒟^(cali) and the fitted model μ̂_(i ∈ 𝒟^(train)).\nFor a user-specified desired coverage ratio (1−α) compute the corresponding quantile, q̂, of the empirical distribution of nonconformity scores, 𝒮.\nFor the given quantile and test sample X_(test), form the corresponding conformal prediction set:","category":"page"},{"location":"tutorials/classification/","page":"Classification","title":"Classification","text":"C(X_(test)) = {y : s(X_(test), y) ≤ q̂}","category":"page"},{"location":"tutorials/classification/","page":"Classification","title":"Classification","text":"This is the default procedure used for classification and regression in ConformalPrediction.jl.","category":"page"},{"location":"tutorials/classification/","page":"Classification","title":"Classification","text":"Now let’s take this to our 🌙 data. To illustrate the package functionality we will demonstrate the envisioned workflow. 
We first define our atomic machine learning model following standard MLJ.jl conventions. Using ConformalPrediction.jl we then wrap our atomic model in a conformal model using the standard API call conformal_model(model::Supervised; kwargs...). To train and predict from our conformal model we can then rely on the conventional MLJ.jl procedure again. In particular, we wrap our conformal model in data (turning it into a machine) and then fit it to the training data. Finally, we use our machine to predict the label for a new test sample Xtest:","category":"page"},{"location":"tutorials/classification/","page":"Classification","title":"Classification","text":"# Model:\nKNNClassifier = @load KNNClassifier pkg=NearestNeighborModels\nmodel = KNNClassifier(;K=50) \n\n# Training:\nusing ConformalPrediction\nconf_model = conformal_model(model; coverage=.9)\nmach = machine(conf_model, X, y)\nfit!(mach, rows=train)\n\n# Conformal Prediction:\nXtest = selectrows(X, test)\nytest = y[test]\nŷ = predict(mach, Xtest)\nŷ[1]","category":"page"},{"location":"tutorials/classification/","page":"Classification","title":"Classification","text":"import NearestNeighborModels ✔\n\nUnivariateFinite{Multiclass{2}}(0=>0.94)","category":"page"},{"location":"tutorials/classification/","page":"Classification","title":"Classification","text":"The final predictions are set-valued. While the softmax output remains unchanged for the SimpleInductiveClassifier, the size of the prediction set depends on the chosen coverage rate, (1−α).","category":"page"},{"location":"tutorials/classification/","page":"Classification","title":"Classification","text":"When specifying a coverage rate very close to one, the prediction set will typically include many (in some cases all) of the possible labels. Below, for example, both classes are included in the prediction set when setting the coverage rate equal to (1−α)=1.0. 
This is intuitive, since high coverage quite literally requires that the true label is covered by the prediction set with high probability.","category":"page"},{"location":"tutorials/classification/","page":"Classification","title":"Classification","text":"conf_model = conformal_model(model; coverage=1.0, method=:simple_inductive)\nmach = machine(conf_model, X, y)\nfit!(mach, rows=train)\n\n# Conformal Prediction:\nXtest = (x1=[1],x2=[0])\npredict(mach, Xtest)[1]","category":"page"},{"location":"tutorials/classification/","page":"Classification","title":"Classification","text":"UnivariateFinite{Multiclass{2}}(0=>0.5, 1=>0.5)","category":"page"},{"location":"tutorials/classification/","page":"Classification","title":"Classification","text":"Conversely, for low coverage rates, prediction sets can also be empty. For a choice of (1−α)=0.1, for example, the prediction set for our test sample is empty. This is a bit difficult to think about intuitively and I have not yet come across a satisfactory, intuitive interpretation.[2] When the prediction set is empty, the predict call currently returns missing:","category":"page"},{"location":"tutorials/classification/","page":"Classification","title":"Classification","text":"conf_model = conformal_model(model; coverage=0.1)\nmach = machine(conf_model, X, y)\nfit!(mach, rows=train)\n\n# Conformal Prediction:\npredict(mach, Xtest)[1]","category":"page"},{"location":"tutorials/classification/","page":"Classification","title":"Classification","text":"missing","category":"page"},{"location":"tutorials/classification/","page":"Classification","title":"Classification","text":"cov_ = .95\nconf_model = conformal_model(model; coverage=cov_)\nmach = machine(conf_model, X, y)\nfit!(mach, rows=train)\nMarkdown.parse(\"\"\"\nThe following chart shows the resulting predicted probabilities for ``y=1`` (left) and set size (right) for a choice of 
``(1-\\\\alpha)``=$cov_.\n\"\"\")","category":"page"},{"location":"tutorials/classification/","page":"Classification","title":"Classification","text":"The following chart shows the resulting predicted probabilities for y = 1 (left) and set size (right) for a choice of (1−α)=0.95.","category":"page"},{"location":"tutorials/classification/","page":"Classification","title":"Classification","text":"using Plots\np_proba = contourf(mach.model, mach.fitresult, X, y)\np_set_size = contourf(mach.model, mach.fitresult, X, y; plot_set_size=true)\nplot(p_proba, p_set_size, size=(800,250))","category":"page"},{"location":"tutorials/classification/","page":"Classification","title":"Classification","text":"(Image: )","category":"page"},{"location":"tutorials/classification/","page":"Classification","title":"Classification","text":"The animation below should provide some more intuition as to what exactly is happening here. It illustrates the effect of the chosen coverage rate on the predicted softmax output and the set size in the two-dimensional feature space. Contours are overlaid with the moon data points (including test data). The two samples highlighted in red, X₁ and X₂, have been manually added for illustration purposes. Let’s look at these one by one.","category":"page"},{"location":"tutorials/classification/","page":"Classification","title":"Classification","text":"Firstly, note that X₁ (red cross) falls into a region of the domain that is characterized by high predictive uncertainty. It sits right at the bottom-right corner of our class-zero moon 🌜 (orange), a region that is almost entirely enveloped by our class-one moon 🌛 (green). For low coverage rates the prediction set for X₁ is empty: on the left-hand side this is indicated by the missing contour for the softmax probability; on the right-hand side we can observe that the corresponding set size is indeed zero. 
For high coverage rates the prediction set includes both y = 0 and y = 1, indicating that the conformal classifier is uncertain about the true label.","category":"page"},{"location":"tutorials/classification/","page":"Classification","title":"Classification","text":"With respect to X₂, we observe that while also sitting on the fringe of our class-zero moon, this sample populates a region that is not fully enveloped by data points from the opposite class. In this region, the underlying atomic classifier can be expected to be more certain about its predictions, but still not highly confident. How is this reflected by our corresponding conformal prediction sets?","category":"page"},{"location":"tutorials/classification/","page":"Classification","title":"Classification","text":"Xtest_2 = (x1=[-0.5],x2=[0.25])\np̂_2 = pdf(predict(mach, Xtest_2)[1], 0)","category":"page"},{"location":"tutorials/classification/","page":"Classification","title":"Classification","text":"Well, for low coverage rates (roughly < 0.9) the conformal prediction set does not include y = 0: the set size is zero (right panel). Only for higher coverage rates do we have C(X₂) = {0}: the coverage rate is high enough to include y = 0, but the corresponding softmax probability is still fairly low. For example, for (1−α) = 0.95 we have p̂(y=0|X₂) = 0.72.","category":"page"},{"location":"tutorials/classification/","page":"Classification","title":"Classification","text":"These two examples illustrate an interesting point: for regions characterized by high predictive uncertainty, conformal prediction sets are typically empty (for low coverage) or large (for high coverage). 
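To make the mechanics concrete, here is a small hypothetical sketch (the softmax values and the score function s(y) = 1 - p(y) are assumptions for illustration, not the package's internals) of how one and the same softmax output yields an empty, singleton or full prediction set as the quantile q̂ implied by the coverage rate grows:

```julia
p = Dict(0 => 0.72, 1 => 0.28)   # assumed softmax output for a sample like X₂
s(y) = 1 - p[y]                   # simple nonconformity score
prediction_set(q̂) = sort([y for y in keys(p) if s(y) <= q̂])

prediction_set(0.2)   # Int64[]  — empty set (low coverage)
prediction_set(0.5)   # [0]      — singleton set
prediction_set(0.8)   # [0, 1]   — both labels (high coverage)
```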
While set-valued predictions may be something to get used to, this notion is overall intuitive.","category":"page"},{"location":"tutorials/classification/","page":"Classification","title":"Classification","text":"# Setup\ncoverages = range(0.75,1.0,length=5)\nn = 100\nx1_range = range(extrema(X.x1)...,length=n)\nx2_range = range(extrema(X.x2)...,length=n)\n\nanim = @animate for coverage in coverages\n conf_model = conformal_model(model; coverage=coverage)\n mach = machine(conf_model, X, y)\n fit!(mach, rows=train)\n # Probabilities:\n p1 = contourf(mach.model, mach.fitresult, X, y)\n scatter!(p1, Xtest.x1, Xtest.x2, ms=6, c=:red, label=\"X₁\", shape=:cross, msw=6)\n scatter!(p1, Xtest_2.x1, Xtest_2.x2, ms=6, c=:red, label=\"X₂\", shape=:diamond, msw=6)\n p2 = contourf(mach.model, mach.fitresult, X, y; plot_set_size=true)\n scatter!(p2, Xtest.x1, Xtest.x2, ms=6, c=:red, label=\"X₁\", shape=:cross, msw=6)\n scatter!(p2, Xtest_2.x1, Xtest_2.x2, ms=6, c=:red, label=\"X₂\", shape=:diamond, msw=6)\n plot(p1, p2, plot_title=\"(1-α)=$(round(coverage,digits=2))\", size=(800,300))\nend\n\ngif(anim, joinpath(www_path,\"classification.gif\"), fps=1)","category":"page"},{"location":"tutorials/classification/","page":"Classification","title":"Classification","text":"(Image: )","category":"page"},{"location":"tutorials/classification/#Adaptive-Sets","page":"Classification","title":"Adaptive Sets","text":"","category":"section"},{"location":"tutorials/classification/","page":"Classification","title":"Classification","text":"Instead of using the simple approach, we can use adaptive prediction sets (Angelopoulos and Bates 2021):","category":"page"},{"location":"tutorials/classification/","page":"Classification","title":"Classification","text":"results = Dict()\nconf_model = conformal_model(model; coverage=cov_, method=:adaptive_inductive)\nmach = machine(conf_model, X, y)\nfit!(mach, rows=train)\nresults[:adaptive_inductive] = 
mach","category":"page"},{"location":"tutorials/classification/","page":"Classification","title":"Classification","text":"using Plots\np_proba = contourf(mach.model, mach.fitresult, X, y)\np_set_size = contourf(mach.model, mach.fitresult, X, y; plot_set_size=true)\nplot(p_proba, p_set_size, size=(800,250))","category":"page"},{"location":"tutorials/classification/#Evaluation","page":"Classification","title":"Evaluation","text":"","category":"section"},{"location":"tutorials/classification/","page":"Classification","title":"Classification","text":"For evaluation of conformal predictors we follow Angelopoulos and Bates (2021) (Section 3). As a first step towards adaptiveness, the authors recommend inspecting the set size of conformal predictions. The chart below shows the interval width for the different methods along with the ground truth interval width:","category":"page"},{"location":"tutorials/classification/","page":"Classification","title":"Classification","text":"plt_list = []\nfor (_mod, mach) in results\n push!(plt_list, bar(mach.model, mach.fitresult, X; title=String(_mod)))\nend\nplot(plt_list..., size=(800,300))","category":"page"},{"location":"tutorials/classification/","page":"Classification","title":"Classification","text":"(Image: Figure 1: Prediction interval width.)","category":"page"},{"location":"tutorials/classification/","page":"Classification","title":"Classification","text":"We can also use specific metrics like empirical coverage and size-stratified coverage to check for correctness and adaptiveness, respectively. To this end, the package provides custom measures that are compatible with MLJ.jl. 
In other words, we can evaluate model performance in true MLJ.jl fashion (see here).","category":"page"},{"location":"tutorials/classification/","page":"Classification","title":"Classification","text":"The code below runs the evaluation with respect to both metrics, emp_coverage and ssc, for a single conformal machine:","category":"page"},{"location":"tutorials/classification/","page":"Classification","title":"Classification","text":"_mod, mach = first(results)\n_eval = evaluate!(\n mach,\n operation=predict,\n measure=[emp_coverage, ssc]\n)\n# display(_eval)\nprintln(\"Empirical coverage for $(_mod): $(round(_eval.measurement[1], digits=3))\")\nprintln(\"SSC for $(_mod): $(round(_eval.measurement[2], digits=3))\")","category":"page"},{"location":"tutorials/classification/","page":"Classification","title":"Classification","text":"Empirical coverage for adaptive_inductive: 0.962\nSSC for adaptive_inductive: 0.962","category":"page"},{"location":"tutorials/classification/#References","page":"Classification","title":"References","text":"","category":"section"},{"location":"tutorials/classification/","page":"Classification","title":"Classification","text":"Angelopoulos, Anastasios N., and Stephen Bates. 2021. 
“A Gentle Introduction to Conformal Prediction and Distribution-Free Uncertainty Quantification.” https://arxiv.org/abs/2107.07511.","category":"page"},{"location":"tutorials/classification/","page":"Classification","title":"Classification","text":"[1] In other places split conformal prediction is sometimes referred to as inductive conformal prediction.","category":"page"},{"location":"tutorials/classification/","page":"Classification","title":"Classification","text":"[2] Any thoughts/comments welcome!","category":"page"},{"location":"tutorials/plotting/#Visualization-using-TaijaPlotting.jl","page":"Visualization using TaijaPlotting.jl","title":"Visualization using TaijaPlotting.jl","text":"","category":"section"},{"location":"tutorials/plotting/","page":"Visualization using TaijaPlotting.jl","title":"Visualization using TaijaPlotting.jl","text":"CurrentModule = ConformalPrediction","category":"page"},{"location":"tutorials/plotting/","page":"Visualization using TaijaPlotting.jl","title":"Visualization using TaijaPlotting.jl","text":"This tutorial demonstrates how various custom plotting methods can be used to visually analyze conformal predictors.","category":"page"},{"location":"tutorials/plotting/","page":"Visualization using TaijaPlotting.jl","title":"Visualization using TaijaPlotting.jl","text":"using ConformalPrediction\nusing Plots, TaijaPlotting","category":"page"},{"location":"tutorials/plotting/#Regression","page":"Visualization using TaijaPlotting.jl","title":"Regression","text":"","category":"section"},{"location":"tutorials/plotting/#Visualizing-Prediction-Intervals","page":"Visualization using TaijaPlotting.jl","title":"Visualizing Prediction Intervals","text":"","category":"section"},{"location":"tutorials/plotting/","page":"Visualization using TaijaPlotting.jl","title":"Visualization using TaijaPlotting.jl","text":"For conformal regressors, the TaijaPlotting.plot method can be used to visualize the prediction intervals for given data 
points.","category":"page"},{"location":"tutorials/plotting/#Univariate-Input","page":"Visualization using TaijaPlotting.jl","title":"Univariate Input","text":"","category":"section"},{"location":"tutorials/plotting/","page":"Visualization using TaijaPlotting.jl","title":"Visualization using TaijaPlotting.jl","text":"using MLJ\nX, y = make_regression(100, 1; noise=0.3)","category":"page"},{"location":"tutorials/plotting/","page":"Visualization using TaijaPlotting.jl","title":"Visualization using TaijaPlotting.jl","text":"EvoTreeRegressor = @load EvoTreeRegressor pkg=EvoTrees\nmodel = EvoTreeRegressor() \nconf_model = conformal_model(model)\nmach = machine(conf_model, X, y)\nfit!(mach)","category":"page"},{"location":"tutorials/plotting/","page":"Visualization using TaijaPlotting.jl","title":"Visualization using TaijaPlotting.jl","text":"plot(mach.model, mach.fitresult, X, y; input_var=1)","category":"page"},{"location":"tutorials/plotting/","page":"Visualization using TaijaPlotting.jl","title":"Visualization using TaijaPlotting.jl","text":"(Image: )","category":"page"},{"location":"tutorials/plotting/#Multivariate-Input","page":"Visualization using TaijaPlotting.jl","title":"Multivariate Input","text":"","category":"section"},{"location":"tutorials/plotting/","page":"Visualization using TaijaPlotting.jl","title":"Visualization using TaijaPlotting.jl","text":"using MLJ\nX, y = @load_boston\nschema(X)","category":"page"},{"location":"tutorials/plotting/","page":"Visualization using TaijaPlotting.jl","title":"Visualization using TaijaPlotting.jl","text":"EvoTreeRegressor = @load EvoTreeRegressor pkg=EvoTrees\nmodel = EvoTreeRegressor() \nconf_model = conformal_model(model)\nmach = machine(conf_model, X, y)\nfit!(mach)","category":"page"},{"location":"tutorials/plotting/","page":"Visualization using TaijaPlotting.jl","title":"Visualization using TaijaPlotting.jl","text":"input_vars = [:Crim, :Age, :Tax]\nnvars = length(input_vars)\nplt_list = []\nfor input_var in 
input_vars\n plt = plot(mach.model, mach.fitresult, X, y; input_var=input_var, title=input_var)\n push!(plt_list, plt)\nend\nplot(plt_list..., layout=(1,nvars), size=(nvars*200, 200))","category":"page"},{"location":"tutorials/plotting/","page":"Visualization using TaijaPlotting.jl","title":"Visualization using TaijaPlotting.jl","text":"(Image: )","category":"page"},{"location":"tutorials/plotting/#Visualizing-Set-Size","page":"Visualization using TaijaPlotting.jl","title":"Visualizing Set Size","text":"","category":"section"},{"location":"tutorials/plotting/","page":"Visualization using TaijaPlotting.jl","title":"Visualization using TaijaPlotting.jl","text":"To visualize the set size distribution, the TaijaPlotting.bar method can be used. For regression models, the prediction interval widths are stratified into discrete bins.","category":"page"},{"location":"tutorials/plotting/","page":"Visualization using TaijaPlotting.jl","title":"Visualization using TaijaPlotting.jl","text":"bar(mach.model, mach.fitresult, X)","category":"page"},{"location":"tutorials/plotting/","page":"Visualization using TaijaPlotting.jl","title":"Visualization using TaijaPlotting.jl","text":"(Image: )","category":"page"},{"location":"tutorials/plotting/","page":"Visualization using TaijaPlotting.jl","title":"Visualization using TaijaPlotting.jl","text":"EvoTreeRegressor = @load EvoTreeRegressor pkg=EvoTrees\nmodel = EvoTreeRegressor() \nconf_model = conformal_model(model, method=:jackknife_plus)\nmach = machine(conf_model, X, y)\nfit!(mach)","category":"page"},{"location":"tutorials/plotting/","page":"Visualization using TaijaPlotting.jl","title":"Visualization using TaijaPlotting.jl","text":"bar(mach.model, mach.fitresult, X)","category":"page"},{"location":"tutorials/plotting/","page":"Visualization using TaijaPlotting.jl","title":"Visualization using TaijaPlotting.jl","text":"(Image: )","category":"page"},{"location":"tutorials/plotting/#Classification","page":"Visualization using 
TaijaPlotting.jl","title":"Classification","text":"","category":"section"},{"location":"tutorials/plotting/","page":"Visualization using TaijaPlotting.jl","title":"Visualization using TaijaPlotting.jl","text":"KNNClassifier = @load KNNClassifier pkg=NearestNeighborModels\nmodel = KNNClassifier(;K=3)","category":"page"},{"location":"tutorials/plotting/#Visualizing-Predictions","page":"Visualization using TaijaPlotting.jl","title":"Visualizing Predictions","text":"","category":"section"},{"location":"tutorials/plotting/#Stacked-Area-Charts","page":"Visualization using TaijaPlotting.jl","title":"Stacked Area Charts","text":"","category":"section"},{"location":"tutorials/plotting/","page":"Visualization using TaijaPlotting.jl","title":"Visualization using TaijaPlotting.jl","text":"Stacked area charts can be used to visualize prediction sets for any conformal classifier.","category":"page"},{"location":"tutorials/plotting/","page":"Visualization using TaijaPlotting.jl","title":"Visualization using TaijaPlotting.jl","text":"using MLJ\nn_input = 4\nX, y = make_blobs(100, n_input)","category":"page"},{"location":"tutorials/plotting/","page":"Visualization using TaijaPlotting.jl","title":"Visualization using TaijaPlotting.jl","text":"conf_model = conformal_model(model)\nmach = machine(conf_model, X, y)\nfit!(mach)","category":"page"},{"location":"tutorials/plotting/","page":"Visualization using TaijaPlotting.jl","title":"Visualization using TaijaPlotting.jl","text":"plt_list = []\nfor i in 1:n_input\n plt = areaplot(mach.model, mach.fitresult, X, y; input_var=i, title=\"Input $i\")\n push!(plt_list, plt)\nend\nplot(plt_list..., size=(220*n_input,200), layout=(1, n_input))","category":"page"},{"location":"tutorials/plotting/","page":"Visualization using TaijaPlotting.jl","title":"Visualization using TaijaPlotting.jl","text":"(Image: )","category":"page"},{"location":"tutorials/plotting/#Contour-Plots-for-Two-Dimensional-Inputs","page":"Visualization using 
TaijaPlotting.jl","title":"Contour Plots for Two-Dimensional Inputs","text":"","category":"section"},{"location":"tutorials/plotting/","page":"Visualization using TaijaPlotting.jl","title":"Visualization using TaijaPlotting.jl","text":"For conformal classifiers with exactly two input variables, the TaijaPlotting.contourf method can be used to visualize conformal predictions in the two-dimensional feature space.","category":"page"},{"location":"tutorials/plotting/","page":"Visualization using TaijaPlotting.jl","title":"Visualization using TaijaPlotting.jl","text":"using MLJ\nX, y = make_blobs(100, 2)","category":"page"},{"location":"tutorials/plotting/","page":"Visualization using TaijaPlotting.jl","title":"Visualization using TaijaPlotting.jl","text":"conf_model = conformal_model(model)\nmach = machine(conf_model, X, y)\nfit!(mach)","category":"page"},{"location":"tutorials/plotting/","page":"Visualization using TaijaPlotting.jl","title":"Visualization using TaijaPlotting.jl","text":"p1 = contourf(mach.model, mach.fitresult, X, y)\np2 = contourf(mach.model, mach.fitresult, X, y; plot_set_size=true)\nplot(p1, p2, size=(700,300))","category":"page"},{"location":"tutorials/plotting/","page":"Visualization using TaijaPlotting.jl","title":"Visualization using TaijaPlotting.jl","text":"(Image: )","category":"page"},{"location":"tutorials/plotting/#Visualizing-Set-Size-2","page":"Visualization using TaijaPlotting.jl","title":"Visualizing Set Size","text":"","category":"section"},{"location":"tutorials/plotting/","page":"Visualization using TaijaPlotting.jl","title":"Visualization using TaijaPlotting.jl","text":"To visualize the set size distribution, the TaijaPlotting.bar method can be used. Recall that for more adaptive predictors the distribution of set sizes is typically spread out more widely, which reflects that “the procedure is effectively distinguishing between easy and hard inputs” (Angelopoulos and Bates 2021). 
This is desirable: when it is difficult to make predictions for a given sample, this should be reflected in the set size (or interval width in the regression case). Since ‘difficult’ lies on some spectrum that ranges from ‘very easy’ to ‘very difficult’, the set size should vary across the spectrum of ‘empty set’ to ‘all labels included’.","category":"page"},{"location":"tutorials/plotting/","page":"Visualization using TaijaPlotting.jl","title":"Visualization using TaijaPlotting.jl","text":"X, y = make_moons(500; noise=0.15)\nKNNClassifier = @load KNNClassifier pkg=NearestNeighborModels\nmodel = KNNClassifier(;K=50) ","category":"page"},{"location":"tutorials/plotting/","page":"Visualization using TaijaPlotting.jl","title":"Visualization using TaijaPlotting.jl","text":"conf_model = conformal_model(model)\nmach = machine(conf_model, X, y)\nfit!(mach)","category":"page"},{"location":"tutorials/plotting/","page":"Visualization using TaijaPlotting.jl","title":"Visualization using TaijaPlotting.jl","text":"p1 = contourf(mach.model, mach.fitresult, X, y; plot_set_size=true)\np2 = bar(mach.model, mach.fitresult, X)\nplot(p1, p2, size=(700,300))","category":"page"},{"location":"tutorials/plotting/","page":"Visualization using TaijaPlotting.jl","title":"Visualization using TaijaPlotting.jl","text":"(Image: )","category":"page"},{"location":"tutorials/plotting/","page":"Visualization using TaijaPlotting.jl","title":"Visualization using TaijaPlotting.jl","text":"conf_model = conformal_model(model, method=:adaptive_inductive)\nmach = machine(conf_model, X, y)\nfit!(mach)","category":"page"},{"location":"tutorials/plotting/","page":"Visualization using TaijaPlotting.jl","title":"Visualization using TaijaPlotting.jl","text":"p1 = contourf(mach.model, mach.fitresult, X, y; plot_set_size=true)\np2 = bar(mach.model, mach.fitresult, X)\nplot(p1, p2, size=(700,300))","category":"page"},{"location":"tutorials/plotting/","page":"Visualization using 
TaijaPlotting.jl","title":"Visualization using TaijaPlotting.jl","text":"(Image: )","category":"page"},{"location":"tutorials/plotting/","page":"Visualization using TaijaPlotting.jl","title":"Visualization using TaijaPlotting.jl","text":"Angelopoulos, Anastasios N., and Stephen Bates. 2021. “A Gentle Introduction to Conformal Prediction and Distribution-Free Uncertainty Quantification.” https://arxiv.org/abs/2107.07511.","category":"page"},{"location":"faq/#Frequently-Asked-Questions","page":"❓ FAQ","title":"Frequently Asked Questions","text":"","category":"section"},{"location":"faq/","page":"❓ FAQ","title":"❓ FAQ","text":"CurrentModule = ConformalPrediction","category":"page"},{"location":"faq/","page":"❓ FAQ","title":"❓ FAQ","text":"In this section we attempt to provide some reflections on frequently asked questions about the package and implemented methodologies. If you have a particular question that is not listed here, please feel free to also open an issue. While I can answer questions regarding the package with a certain degree of confidence, I do not pretend to have any definite answers to methodological questions, but merely reflections (see the disclaimer below).","category":"page"},{"location":"faq/#Package","page":"❓ FAQ","title":"Package","text":"","category":"section"},{"location":"faq/#Why-the-interface-to-MLJ.jl?","page":"❓ FAQ","title":"Why the interface to MLJ.jl?","text":"","category":"section"},{"location":"faq/","page":"❓ FAQ","title":"❓ FAQ","text":"An important design choice. MLJ.jl is a one-stop shop for common machine learning models and pipelines in Julia. It’s growing fast and the development team is very accessible, friendly and enthusiastic. Conformal Prediction is a model-agnostic approach to uncertainty quantification, so it can be applied to any common (supervised) machine learning model. For these reasons I decided to interface this package to MLJ.jl. 
The idea is that any (supervised) MLJ.jl model can be conformalized using ConformalPrediction.jl. By leveraging existing MLJ.jl functionality for common tasks like training, prediction and model evaluation, this package is lightweight and scalable.","category":"page"},{"location":"faq/#Methodology","page":"❓ FAQ","title":"Methodology","text":"","category":"section"},{"location":"faq/","page":"❓ FAQ","title":"❓ FAQ","text":"For methodological questions about Conformal Prediction, my best advice is to consult the literature on the topic. A good place to start is “A Gentle Introduction to Conformal Prediction and Distribution-Free Uncertainty Quantification” (Angelopoulos and Bates 2021): the tutorial is comprehensive, accessible and continuously updated. Below you will find a list of high-level questions and reflections.","category":"page"},{"location":"faq/","page":"❓ FAQ","title":"❓ FAQ","text":"warning: Disclaimer\n","category":"page"},{"location":"faq/","page":"❓ FAQ","title":"❓ FAQ","text":"    I want to emphasize that these are merely my own reflections. I provide these to the best of my knowledge and understanding of the topic, but please be aware that I am still on a learning journey myself. I have not read the entire literature on this topic (and won’t be able to in the future either). If you spot anything that doesn’t look right or sits at odds with something you read in the literature, please open an issue. Even better: if you want to add your own reflections and thoughts, feel free to open a pull request.","category":"page"},{"location":"faq/#What-is-Predictive-Uncertainty-Quantification?","page":"❓ FAQ","title":"What is Predictive Uncertainty Quantification?","text":"","category":"section"},{"location":"faq/","page":"❓ FAQ","title":"❓ FAQ","text":"Predictive Uncertainty Quantification deals with quantifying the uncertainty around predictions for the output variable of a supervised model. 
It is a subset of Uncertainty Quantification, which can also relate to uncertainty around model parameters, for example. I will sometimes use both terms interchangeably, even though I shouldn’t (please bear with me, or if you’re bothered by a particular slip-up, open a PR).","category":"page"},{"location":"faq/","page":"❓ FAQ","title":"❓ FAQ","text":"Uncertainty of model parameters is a very important topic itself: we might be interested in understanding, for example, if the estimated effect θ of some input variable x on the output variable y is statistically significant. This typically hinges on being able to quantify the uncertainty around the parameter θ. This package does not offer this sort of functionality. I have so far not come across any work on Conformal Inference that deals with parameter uncertainty, but I also haven’t properly looked for it.","category":"page"},{"location":"faq/#What-is-the-(marginal)-coverage-guarantee?","page":"❓ FAQ","title":"What is the (marginal) coverage guarantee?","text":"","category":"section"},{"location":"faq/","page":"❓ FAQ","title":"❓ FAQ","text":"The (marginal) coverage guarantee states that:","category":"page"},{"location":"faq/","page":"❓ FAQ","title":"❓ FAQ","text":"[…] the probability that the prediction set contains the correct label [for a fresh test point from the same distribution] is almost exactly 1 − α.— Angelopoulos and Bates (2021)","category":"page"},{"location":"faq/","page":"❓ FAQ","title":"❓ FAQ","text":"See Angelopoulos and Bates (2021) for a formal proof of this property or check out this section or this Pluto.jl 🎈 notebook to convince yourself through a small empirical exercise. 
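Such an empirical exercise can be sketched in a few lines. The following toy example is an assumption-laden illustration (Gaussian data, absolute residuals as nonconformity scores, a hypothetical `empirical_coverage` helper) rather than the notebook's actual code: it repeatedly calibrates a split conformal interval and checks whether a fresh test point is covered.

```julia
using Random, Statistics
Random.seed!(42)

# Toy split conformal setup: data ~ N(0, 1), nonconformity score = |residual|.
function empirical_coverage(; α=0.1, n_cal=50, trials=10_000)
    covered = 0
    for _ in 1:trials
        cal_scores = abs.(randn(n_cal))   # calibration scores
        q̂ = quantile(cal_scores, min(1.0, ceil((n_cal + 1) * (1 - α)) / n_cal))
        covered += abs(randn()) <= q̂      # is a fresh test point covered?
    end
    return covered / trials
end

empirical_coverage()   # ≈ 1 - α = 0.9, up to sampling noise
```

Increasing `n_cal` tightens the spread of the realized coverage around 1 − α, which is exactly the calibration-set-size effect discussed in the next question.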
Note that this property relates to a special case of conformal prediction, namely Split Conformal Prediction (Angelopoulos and Bates 2021).","category":"page"},{"location":"faq/#What-does-marginal-mean-in-this-context?","page":"❓ FAQ","title":"What does marginal mean in this context?","text":"","category":"section"},{"location":"faq/","page":"❓ FAQ","title":"❓ FAQ","text":"The property is “marginal” in the sense that the probability is averaged over the randomness in the data (Angelopoulos and Bates 2021). Depending on the size of the calibration set (context: Split Conformal Prediction), the realized coverage or estimated empirical coverage may deviate slightly from the user-specified value 1 − α. To get a sense of this effect, you may want to check out this Pluto.jl 🎈 notebook: it allows you to adjust the calibration set size and check the resulting empirical coverage. See also Section 3 of Angelopoulos and Bates (2021).","category":"page"},{"location":"faq/#Is-CP-really-distribution-free?","page":"❓ FAQ","title":"Is CP really distribution-free?","text":"","category":"section"},{"location":"faq/","page":"❓ FAQ","title":"❓ FAQ","text":"The marginal coverage property holds under the assumption that the input data is exchangeable, which is a minimal distributional assumption. So, in my view, the short answer to this question is “No”. I believe that when people use the term “distribution-free” in this context, they mean that no prior assumptions are being made about the actual form or family of distribution(s) that generate the model parameters and data. 
If we define “distribution-free” in this sense, then the answer to me seems “Yes”.","category":"page"},{"location":"faq/#What-happens-if-this-minimal-distributional-assumption-is-violated?","page":"❓ FAQ","title":"What happens if this minimal distributional assumption is violated?","text":"","category":"section"},{"location":"faq/","page":"❓ FAQ","title":"❓ FAQ","text":"Then the marginal coverage property does not hold. See here for an example.","category":"page"},{"location":"faq/#What-are-set-valued-predictions?","page":"❓ FAQ","title":"What are set-valued predictions?","text":"","category":"section"},{"location":"faq/","page":"❓ FAQ","title":"❓ FAQ","text":"This should be clearer after reading through some of the other tutorials and explanations. For conformal classifiers of type ConformalProbabilisticSet, predictions are set-valued: these conformal classifiers may return multiple labels, a single label or no labels at all. Larger prediction sets indicate higher predictive uncertainty: for sets of size greater than one the conformal predictor cannot with certainty narrow its prediction down to a single label, so it returns all labels that meet the specified marginal coverage.","category":"page"},{"location":"faq/#How-do-I-interpret-the-distribution-of-set-size?","page":"❓ FAQ","title":"How do I interpret the distribution of set size?","text":"","category":"section"},{"location":"faq/","page":"❓ FAQ","title":"❓ FAQ","text":"It can be useful to plot the distribution of set sizes in order to visually assess how adaptive a conformal predictor is. For more adaptive predictors the distribution of set sizes is typically spread out more widely, which reflects that “the procedure is effectively distinguishing between easy and hard inputs” (Angelopoulos and Bates 2021). This is desirable: when it is difficult to make predictions for a given sample, this should be reflected in the set size (or interval width in the regression case). 
Since ‘difficult’ lies on some spectrum that ranges from ‘very easy’ to ‘very difficult’, the set size should vary across the spectrum of ‘empty set’ to ‘all labels included’.","category":"page"},{"location":"faq/#What-is-aleatoric-uncertainty?-What-is-epistemic-uncertainty?","page":"❓ FAQ","title":"What is aleatoric uncertainty? What is epistemic uncertainty?","text":"","category":"section"},{"location":"faq/","page":"❓ FAQ","title":"❓ FAQ","text":"Loosely speaking: aleatoric uncertainty relates to uncertainty that cannot be “learned away” by observing more data (think points near the decision boundary); epistemic uncertainty relates to uncertainty that can be “learned away” by observing more data.","category":"page"},{"location":"faq/#References","page":"❓ FAQ","title":"References","text":"","category":"section"},{"location":"faq/","page":"❓ FAQ","title":"❓ FAQ","text":"Angelopoulos, Anastasios N., and Stephen Bates. 2021. “A Gentle Introduction to Conformal Prediction and Distribution-Free Uncertainty Quantification.” https://arxiv.org/abs/2107.07511.","category":"page"},{"location":"reference/","page":"🧐 Reference","title":"🧐 Reference","text":"CurrentModule = ConformalPrediction","category":"page"},{"location":"reference/#Reference","page":"🧐 Reference","title":"Reference","text":"","category":"section"},{"location":"reference/","page":"🧐 Reference","title":"🧐 Reference","text":"In this reference you will find a detailed overview of the package API.","category":"page"},{"location":"reference/","page":"🧐 Reference","title":"🧐 Reference","text":"Reference guides are technical descriptions of the machinery and how to operate it. 
Reference material is information-oriented. — Diátaxis","category":"page"},{"location":"reference/","page":"🧐 Reference","title":"🧐 Reference","text":"In other words, you come here because you want to take a very close look at the code 🧐","category":"page"},{"location":"reference/#Content","page":"🧐 Reference","title":"Content","text":"","category":"section"},{"location":"reference/","page":"🧐 Reference","title":"🧐 Reference","text":"Pages = [\"_reference.md\"]","category":"page"},{"location":"reference/#Index","page":"🧐 Reference","title":"Index","text":"","category":"section"},{"location":"reference/","page":"🧐 Reference","title":"🧐 Reference","text":"","category":"page"},{"location":"reference/#Public-Interface","page":"🧐 Reference","title":"Public Interface","text":"","category":"section"},{"location":"reference/","page":"🧐 Reference","title":"🧐 Reference","text":"Modules = [\n ConformalPrediction,\n ConformalPrediction.ConformalTraining,\n]\nPrivate = false","category":"page"},{"location":"reference/#ConformalPrediction.available_models","page":"🧐 Reference","title":"ConformalPrediction.available_models","text":"A container listing all available methods for conformal prediction.\n\n\n\n\n\n","category":"constant"},{"location":"reference/#ConformalPrediction.tested_atomic_models","page":"🧐 Reference","title":"ConformalPrediction.tested_atomic_models","text":"A container listing all atomic MLJ models that have been tested for use with this package.\n\n\n\n\n\n","category":"constant"},{"location":"reference/#ConformalPrediction.conformal_model-Tuple{MLJModelInterface.Supervised}","page":"🧐 Reference","title":"ConformalPrediction.conformal_model","text":"conformal_model(model::Supervised; method::Union{Nothing, Symbol}=nothing, kwargs...)\n\nA simple wrapper function that turns a model::Supervised into a conformal model. It accepts an optional keyword argument that can be used to specify the desired method for conformal prediction as well as additional kwargs... 
specific to the method.\n\n\n\n\n\n","category":"method"},{"location":"reference/#ConformalPrediction.emp_coverage-Tuple{Any, Any}","page":"🧐 Reference","title":"ConformalPrediction.emp_coverage","text":"emp_coverage(ŷ, y)\n\nComputes the empirical coverage for conformal predictions ŷ.\n\n\n\n\n\n","category":"method"},{"location":"reference/#ConformalPrediction.ineff","page":"🧐 Reference","title":"ConformalPrediction.ineff","text":"ineff(ŷ)\n\nComputes the inefficiency (average set size) for conformal predictions ŷ.\n\n\n\n\n\n","category":"function"},{"location":"reference/#ConformalPrediction.partial_fit","page":"🧐 Reference","title":"ConformalPrediction.partial_fit","text":"partial_fit(conf_model::TimeSeriesRegressorEnsembleBatch, fitresult, X, y, shift_size)\n\nFor the TimeSeriesRegressorEnsembleBatch, nonconformity scores are updated with the most recent data (X,y). shift_size determines how many of the existing nonconformity scores are discarded.\n\n\n\n\n\n","category":"function"},{"location":"reference/#ConformalPrediction.set_size-Tuple{Any}","page":"🧐 Reference","title":"ConformalPrediction.set_size","text":"set_size(ŷ)\n\nHelper function that computes the set size for conformal predictions. 
\n\n\n\n\n\n","category":"method"},{"location":"reference/#ConformalPrediction.size_stratified_coverage-Tuple{Any, Any}","page":"🧐 Reference","title":"ConformalPrediction.size_stratified_coverage","text":"size_stratified_coverage(ŷ, y)\n\nComputes the size-stratified coverage for conformal predictions ŷ.\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.fit-Tuple{ConformalPrediction.AdaptiveInductiveClassifier, Any, Any, Any}","page":"🧐 Reference","title":"MLJModelInterface.fit","text":"MMI.fit(conf_model::AdaptiveInductiveClassifier, verbosity, X, y)\n\nFor the AdaptiveInductiveClassifier nonconformity scores are computed by cumulatively summing the ranked scores of each label in descending order until reaching the true label Y_i:\n\nS_i^textCAL = s(X_iY_i) = sum_j=1^k hatmu(X_i)_pi_j textwhere Y_i=pi_k i in mathcalD_textcalibration\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.fit-Tuple{ConformalPrediction.CVMinMaxRegressor, Any, Any, Any}","page":"🧐 Reference","title":"MLJModelInterface.fit","text":"MMI.fit(conf_model::CVMinMaxRegressor, verbosity, X, y)\n\nFor the CVMinMaxRegressor nonconformity scores are computed in the same way as for the CVPlusRegressor. Specifically, we have,\n\nS_i^textCV = s(X_i Y_i) = h(hatmu_-mathcalD_k(i)(X_i) Y_i) i in mathcalD_texttrain\n\nwhere hatmu_-mathcalD_k(i)(X_i) denotes the CV prediction for X_i. In other words, for each CV fold k=1K and each training instance i=1n the model is trained on all training data excluding the fold containing i. The fitted model is then used to predict out-of-sample from X_i. 
The corresponding nonconformity score is then computed by applying a heuristic uncertainty measure h(cdot) to the fitted value hatmu_-mathcalD_k(i)(X_i) and the true value Y_i.\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.fit-Tuple{ConformalPrediction.CVPlusRegressor, Any, Any, Any}","page":"🧐 Reference","title":"MLJModelInterface.fit","text":"MMI.fit(conf_model::CVPlusRegressor, verbosity, X, y)\n\nFor the CVPlusRegressor nonconformity scores are computed through cross-validation (CV) as follows,\n\nS_i^textCV = s(X_i Y_i) = h(hatmu_-mathcalD_k(i)(X_i) Y_i) i in mathcalD_texttrain\n\nwhere hatmu_-mathcalD_k(i)(X_i) denotes the CV prediction for X_i. In other words, for each CV fold k=1K and each training instance i=1n the model is trained on all training data excluding the fold containing i. The fitted model is then used to predict out-of-sample from X_i. The corresponding nonconformity score is then computed by applying a heuristic uncertainty measure h(cdot) to the fitted value hatmu_-mathcalD_k(i)(X_i) and the true value Y_i.\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.fit-Tuple{ConformalPrediction.ConformalQuantileRegressor, Any, Any, Any}","page":"🧐 Reference","title":"MLJModelInterface.fit","text":"MMI.fit(conf_model::ConformalQuantileRegressor, verbosity, X, y)\n\nFor the ConformalQuantileRegressor nonconformity scores are computed as follows:\n\nS_i^textCAL = s(X_i Y_i) = h(hatmu_alpha_lo(X_i) hatmu_alpha_hi(X_i) Y_i) i in mathcalD_textcalibration\n\nA typical choice for the heuristic function is h(hatmu_alpha_lo(X_i) hatmu_alpha_hi(X_i) Y_i)= maxhatmu_alpha_lo(X_i)-Y_i Y_i-hatmu_alpha_hi(X_i) where hatmu denotes the model fitted on training data mathcalD_texttrain and alpha_lo, alpha_hi denote the lower and higher percentiles.\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.fit-Tuple{ConformalPrediction.JackknifeMinMaxRegressor, Any, Any, Any}","page":"🧐 
Reference","title":"MLJModelInterface.fit","text":"MMI.fit(conf_model::JackknifeMinMaxRegressor, verbosity, X, y)\n\nFor the JackknifeMinMaxRegressor nonconformity scores are computed in the same way as for the JackknifeRegressor. Specifically, we have,\n\nS_i^textLOO = s(X_i Y_i) = h(hatmu_-i(X_i) Y_i) i in mathcalD_texttrain\n\nwhere hatmu_-i(X_i) denotes the leave-one-out prediction for X_i. In other words, for each training instance i=1n the model is trained on all training data excluding i. The fitted model is then used to predict out-of-sample from X_i. The corresponding nonconformity score is then computed by applying a heuristic uncertainty measure h(cdot) to the fitted value hatmu_-i(X_i) and the true value Y_i.\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.fit-Tuple{ConformalPrediction.JackknifePlusAbMinMaxRegressor, Any, Any, Any}","page":"🧐 Reference","title":"MLJModelInterface.fit","text":"MMI.fit(conf_model::JackknifePlusAbMinMaxRegressor, verbosity, X, y)\n\nFor the JackknifePlusAbMinMaxRegressor nonconformity scores are computed as,\n\nS_i^textJ+MinMax = s(X_i Y_i) = h(agg(hatmu_B_K(-i)(X_i)) Y_i) i in mathcalD_texttrain\n\nwhere agg(hatmu_B_K(-i)(X_i)) denotes the aggregate predictions, typically mean or median, for each X_i (with K_-i the bootstraps not containing X_i). In other words, B models are trained on bootstrapped samples; the fitted models are then used to create aggregated predictions for the out-of-sample X_i. 
The corresponding nonconformity score is then computed by applying a heuristic uncertainty measure h(cdot) to the fitted value agg(hatmu_B_K(-i)(X_i)) and the true value Y_i.\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.fit-Tuple{ConformalPrediction.JackknifePlusAbRegressor, Any, Any, Any}","page":"🧐 Reference","title":"MLJModelInterface.fit","text":"MMI.fit(conf_model::JackknifePlusAbRegressor, verbosity, X, y)\n\nFor the JackknifePlusAbRegressor nonconformity scores are computed as\n\n S_i^textJ+ab = s(X_i Y_i) = h(agg(hatmu_B_K(-i)(X_i)) Y_i) i in mathcalD_texttrain \n\nwhere agg(hatmu_B_K(-i)(X_i)) denotes the aggregate predictions, typically mean or median, for each X_i (with K_-i the bootstraps not containing X_i). In other words, B models are trained on bootstrapped samples; the fitted models are then used to create aggregated predictions for the out-of-sample X_i. The corresponding nonconformity score is then computed by applying a heuristic uncertainty measure h(cdot) to the fitted value agg(hatmu_B_K(-i)(X_i)) and the true value Y_i.\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.fit-Tuple{ConformalPrediction.JackknifePlusRegressor, Any, Any, Any}","page":"🧐 Reference","title":"MLJModelInterface.fit","text":"MMI.fit(conf_model::JackknifePlusRegressor, verbosity, X, y)\n\nFor the JackknifePlusRegressor nonconformity scores are computed in the same way as for the JackknifeRegressor. Specifically, we have,\n\nS_i^textLOO = s(X_i Y_i) = h(hatmu_-i(X_i) Y_i) i in mathcalD_texttrain\n\nwhere hatmu_-i(X_i) denotes the leave-one-out prediction for X_i. In other words, for each training instance i=1n the model is trained on all training data excluding i. The fitted model is then used to predict out-of-sample from X_i. 
The corresponding nonconformity score is then computed by applying a heuristic uncertainty measure h(cdot) to the fitted value hatmu_-i(X_i) and the true value Y_i.\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.fit-Tuple{ConformalPrediction.JackknifeRegressor, Any, Any, Any}","page":"🧐 Reference","title":"MLJModelInterface.fit","text":"MMI.fit(conf_model::JackknifeRegressor, verbosity, X, y)\n\nFor the JackknifeRegressor nonconformity scores are computed through a leave-one-out (LOO) procedure as follows,\n\nS_i^textLOO = s(X_i Y_i) = h(hatmu_-i(X_i) Y_i) i in mathcalD_texttrain\n\nwhere hatmu_-i(X_i) denotes the leave-one-out prediction for X_i. In other words, for each training instance i=1n the model is trained on all training data excluding i. The fitted model is then used to predict out-of-sample from X_i. The corresponding nonconformity score is then computed by applying a heuristic uncertainty measure h(cdot) to the fitted value hatmu_-i(X_i) and the true value Y_i.\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.fit-Tuple{ConformalPrediction.NaiveClassifier, Any, Any, Any}","page":"🧐 Reference","title":"MLJModelInterface.fit","text":"MMI.fit(conf_model::NaiveClassifier, verbosity, X, y)\n\nFor the NaiveClassifier nonconformity scores are computed in-sample as follows:\n\nS_i^textIS = s(X_i Y_i) = h(hatmu(X_i) Y_i) i in mathcalD_textcalibration\n\nA typical choice for the heuristic function is h(hatmu(X_i) Y_i)=1-hatmu(X_i)_Y_i where hatmu(X_i)_Y_i denotes the softmax output of the true class and hatmu denotes the model fitted on training data mathcalD_texttrain.\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.fit-Tuple{ConformalPrediction.NaiveRegressor, Any, Any, Any}","page":"🧐 Reference","title":"MLJModelInterface.fit","text":"MMI.fit(conf_model::NaiveRegressor, verbosity, X, y)\n\nFor the NaiveRegressor nonconformity scores are computed in-sample as 
follows:\n\nS_i^textIS = s(X_i Y_i) = h(hatmu(X_i) Y_i) i in mathcalD_texttrain\n\nA typical choice for the heuristic function is h(hatmu(X_i)Y_i)=Y_i-hatmu(X_i) where hatmu denotes the model fitted on training data mathcalD_texttrain.\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.fit-Tuple{ConformalPrediction.SimpleInductiveClassifier, Any, Any, Any}","page":"🧐 Reference","title":"MLJModelInterface.fit","text":"MMI.fit(conf_model::SimpleInductiveClassifier, verbosity, X, y)\n\nFor the SimpleInductiveClassifier nonconformity scores are computed as follows:\n\nS_i^textCAL = s(X_i Y_i) = h(hatmu(X_i) Y_i) i in mathcalD_textcalibration\n\nA typical choice for the heuristic function is h(hatmu(X_i) Y_i)=1-hatmu(X_i)_Y_i where hatmu(X_i)_Y_i denotes the softmax output of the true class and hatmu denotes the model fitted on training data mathcalD_texttrain. The simple approach only takes the softmax probability of the true label into account.\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.fit-Tuple{ConformalPrediction.SimpleInductiveRegressor, Any, Any, Any}","page":"🧐 Reference","title":"MLJModelInterface.fit","text":"MMI.fit(conf_model::SimpleInductiveRegressor, verbosity, X, y)\n\nFor the SimpleInductiveRegressor nonconformity scores are computed as follows:\n\nS_i^textCAL = s(X_i Y_i) = h(hatmu(X_i) Y_i) i in mathcalD_textcalibration\n\nA typical choice for the heuristic function is h(hatmu(X_i)Y_i)=Y_i-hatmu(X_i) where hatmu denotes the model fitted on training data mathcalD_texttrain.\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.fit-Tuple{ConformalPrediction.TimeSeriesRegressorEnsembleBatch, Any, Any, Any}","page":"🧐 Reference","title":"MLJModelInterface.fit","text":"MMI.fit(conf_model::TimeSeriesRegressorEnsembleBatch, verbosity, X, y)\n\nFor the TimeSeriesRegressorEnsembleBatch nonconformity scores are computed as\n\n S_i^textJ+ab = s(X_i Y_i) = h(agg(hatmu_B_K(-i)(X_i)) 
Y_i) i in mathcalD_texttrain \n\nwhere agg(hatmu_B_K(-i)(X_i)) denotes the aggregate predictions, typically mean or median, for each X_i (with K_-i the bootstraps not containing X_i). In other words, B models are trained on bootstrapped samples; the fitted models are then used to create aggregated predictions for the out-of-sample X_i. The corresponding nonconformity score is then computed by applying a heuristic uncertainty measure h(cdot) to the fitted value agg(hatmu_B_K(-i)(X_i)) and the true value Y_i.\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.predict-Tuple{ConformalPrediction.AdaptiveInductiveClassifier, Any, Any}","page":"🧐 Reference","title":"MLJModelInterface.predict","text":"MMI.predict(conf_model::AdaptiveInductiveClassifier, fitresult, Xnew)\n\nFor the AdaptiveInductiveClassifier prediction sets are computed as follows,\n\nhatC_nalpha(X_n+1) = lefty s(X_n+1y) le hatq_n alpha^+ S_i^textCAL right i in mathcalD_textcalibration\n\nwhere mathcalD_textcalibration denotes the designated calibration data.\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.predict-Tuple{ConformalPrediction.CVMinMaxRegressor, Any, Any}","page":"🧐 Reference","title":"MLJModelInterface.predict","text":"MMI.predict(conf_model::CVMinMaxRegressor, fitresult, Xnew)\n\nFor the CVMinMaxRegressor prediction intervals are computed as follows,\n\nhatC_nalpha(X_n+1) = left min_i=1n hatmu_-mathcalD_k(i)(X_n+1) - hatq_n alpha^+ S_i^textCV max_i=1n hatmu_-mathcalD_k(i)(X_n+1) + hatq_n alpha^+ S_i^textCV right i in mathcalD_texttrain\n\nwhere hatmu_-mathcalD_k(i) denotes the model fitted on training data with the subset mathcalD_k(i) that contains the ith point removed.\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.predict-Tuple{ConformalPrediction.CVPlusRegressor, Any, Any}","page":"🧐 Reference","title":"MLJModelInterface.predict","text":"MMI.predict(conf_model::CVPlusRegressor, fitresult, Xnew)\n\nFor the 
CVPlusRegressor prediction intervals are computed in much the same way as for the JackknifePlusRegressor. Specifically, we have,\n\nhatC_nalpha(X_n+1) = left hatq_n alpha^- hatmu_-mathcalD_k(i)(X_n+1) - S_i^textCV hatq_n alpha^+ hatmu_-mathcalD_k(i)(X_n+1) + S_i^textCV right i in mathcalD_texttrain\n\nwhere hatmu_-mathcalD_k(i) denotes the model fitted on training data with the fold mathcalD_k(i) that contains the ith point removed.\n\nThe JackknifePlusRegressor is a special case of the CVPlusRegressor for which K=n.\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.predict-Tuple{ConformalPrediction.ConformalQuantileRegressor, Any, Any}","page":"🧐 Reference","title":"MLJModelInterface.predict","text":"MMI.predict(conf_model::ConformalQuantileRegressor, fitresult, Xnew)\n\nFor the ConformalQuantileRegressor prediction intervals are computed as follows,\n\nhatC_nalpha(X_n+1) = hatmu_alpha_lo(X_n+1) - hatq_n alpha S_i^textCAL hatmu_alpha_hi(X_n+1) + hatq_n alpha S_i^textCAL i in mathcalD_textcalibration\n\nwhere mathcalD_textcalibration denotes the designated calibration data.\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.predict-Tuple{ConformalPrediction.JackknifeMinMaxRegressor, Any, Any}","page":"🧐 Reference","title":"MLJModelInterface.predict","text":"MMI.predict(conf_model::JackknifeMinMaxRegressor, fitresult, Xnew)\n\nFor the JackknifeMinMaxRegressor prediction intervals are computed as follows,\n\nhatC_nalpha(X_n+1) = left min_i=1n hatmu_-i(X_n+1) - hatq_n alpha^+ S_i^textLOO max_i=1n hatmu_-i(X_n+1) + hatq_n alpha^+ S_i^textLOO right i in mathcalD_texttrain\n\nwhere hatmu_-i denotes the model fitted on training data with the ith point removed. 
The jackknife-minmax procedure is more conservative than the JackknifePlusRegressor.\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.predict-Tuple{ConformalPrediction.JackknifePlusAbMinMaxRegressor, Any, Any}","page":"🧐 Reference","title":"MLJModelInterface.predict","text":"MMI.predict(conf_model::JackknifePlusAbMinMaxRegressor, fitresult, Xnew)\n\nFor the JackknifePlusAbMinMaxRegressor prediction intervals are computed as follows,\n\nhatC_nalpha^J+MinMax(X_n+1) = left min_i=1n hatmu_-i(X_n+1) - hatq_n alpha^+ S_i^textJ+MinMax max_i=1n hatmu_-i(X_n+1) + hatq_n alpha^+ S_i^textJ+MinMax right i in mathcalD_texttrain\n\nwhere hatmu_-i denotes the model fitted on training data with the ith point removed. The jackknife+ab-minmax procedure is more conservative than the JackknifePlusAbRegressor.\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.predict-Tuple{ConformalPrediction.JackknifePlusAbRegressor, Any, Any}","page":"🧐 Reference","title":"MLJModelInterface.predict","text":"MMI.predict(conf_model::JackknifePlusAbRegressor, fitresult, Xnew)\n\nFor the JackknifePlusAbRegressor prediction intervals are computed as follows,\n\nhatC_nalpha B^J+ab(X_n+1) = left hatq_n alpha^- hatmu_agg(-i)(X_n+1) - S_i^textJ+ab hatq_n alpha^+ hatmu_agg(-i)(X_n+1) + S_i^textJ+ab right i in mathcalD_texttrain\n\nwhere hatmu_agg(-i) denotes the aggregated models hatmu_1 hatmu_B fitted on bootstrapped data that does not include the ith data point. 
The jackknife+ procedure is more stable than the JackknifeRegressor.\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.predict-Tuple{ConformalPrediction.JackknifePlusRegressor, Any, Any}","page":"🧐 Reference","title":"MLJModelInterface.predict","text":"MMI.predict(conf_model::JackknifePlusRegressor, fitresult, Xnew)\n\nFor the JackknifePlusRegressor prediction intervals are computed as follows,\n\nhatC_nalpha(X_n+1) = left hatq_n alpha^- hatmu_-i(X_n+1) - S_i^textLOO hatq_n alpha^+ hatmu_-i(X_n+1) + S_i^textLOO right i in mathcalD_texttrain\n\nwhere hatmu_-i denotes the model fitted on training data with the ith point removed. The jackknife+ procedure is more stable than the JackknifeRegressor.\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.predict-Tuple{ConformalPrediction.JackknifeRegressor, Any, Any}","page":"🧐 Reference","title":"MLJModelInterface.predict","text":"MMI.predict(conf_model::JackknifeRegressor, fitresult, Xnew)\n\nFor the JackknifeRegressor prediction intervals are computed as follows,\n\nhatC_nalpha(X_n+1) = hatmu(X_n+1) pm hatq_n alpha^+ S_i^textLOO i in mathcalD_texttrain\n\nwhere S_i^textLOO denotes the nonconformity score that is generated as explained in fit(conf_model::JackknifeRegressor, verbosity, X, y). 
The jackknife procedure addresses the overfitting issue associated with the NaiveRegressor.\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.predict-Tuple{ConformalPrediction.NaiveClassifier, Any, Any}","page":"🧐 Reference","title":"MLJModelInterface.predict","text":"MMI.predict(conf_model::NaiveClassifier, fitresult, Xnew)\n\nFor the NaiveClassifier prediction sets are computed as follows:\n\nhatC_nalpha(X_n+1) = lefty s(X_n+1y) le hatq_n alpha^+ S_i^textIS right i in mathcalD_texttrain\n\nThe naive approach typically produces prediction regions that undercover due to overfitting.\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.predict-Tuple{ConformalPrediction.NaiveRegressor, Any, Any}","page":"🧐 Reference","title":"MLJModelInterface.predict","text":"MMI.predict(conf_model::NaiveRegressor, fitresult, Xnew)\n\nFor the NaiveRegressor prediction intervals are computed as follows:\n\nhatC_nalpha(X_n+1) = hatmu(X_n+1) pm hatq_n alpha^+ S_i^textIS i in mathcalD_texttrain\n\nThe naive approach typically produces prediction regions that undercover due to overfitting.\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.predict-Tuple{ConformalPrediction.SimpleInductiveClassifier, Any, Any}","page":"🧐 Reference","title":"MLJModelInterface.predict","text":"MMI.predict(conf_model::SimpleInductiveClassifier, fitresult, Xnew)\n\nFor the SimpleInductiveClassifier prediction sets are computed as follows,\n\nhatC_nalpha(X_n+1) = lefty s(X_n+1y) le hatq_n alpha^+ S_i^textCAL right i in mathcalD_textcalibration\n\nwhere mathcalD_textcalibration denotes the designated calibration data.\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.predict-Tuple{ConformalPrediction.SimpleInductiveRegressor, Any, Any}","page":"🧐 Reference","title":"MLJModelInterface.predict","text":"MMI.predict(conf_model::SimpleInductiveRegressor, fitresult, Xnew)\n\nFor the SimpleInductiveRegressor prediction 
intervals are computed as follows,\n\nhatC_nalpha(X_n+1) = hatmu(X_n+1) pm hatq_n alpha^+ S_i^textCAL i in mathcalD_textcalibration\n\nwhere mathcalD_textcalibration denotes the designated calibration data.\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.predict-Tuple{ConformalPrediction.TimeSeriesRegressorEnsembleBatch, Any, Any}","page":"🧐 Reference","title":"MLJModelInterface.predict","text":"MMI.predict(conf_model::TimeSeriesRegressorEnsembleBatch, fitresult, Xnew)\n\nFor the TimeSeriesRegressorEnsembleBatch prediction intervals are computed as follows,\n\nhatC_nalpha B^J+ab(X_n+1) = left hatq_n alpha^- hatmu_agg(-i)(X_n+1) - S_i^textJ+ab hatq_n alpha^+ hatmu_agg(-i)(X_n+1) + S_i^textJ+ab right i in mathcalD_texttrain\n\nwhere hatmu_agg(-i) denotes the aggregated models hatmu_1 hatmu_B fitted on bootstrapped data that does not include the ith data point. The jackknife+ procedure is more stable than the JackknifeRegressor.\n\n\n\n\n\n","category":"method"},{"location":"reference/#Internal-functions","page":"🧐 Reference","title":"Internal functions","text":"","category":"section"},{"location":"reference/","page":"🧐 Reference","title":"🧐 Reference","text":"Modules = [\n ConformalPrediction,\n ConformalPrediction.ConformalTraining,\n]\nPublic = false","category":"page"},{"location":"reference/#ConformalPrediction.AdaptiveInductiveClassifier","page":"🧐 Reference","title":"ConformalPrediction.AdaptiveInductiveClassifier","text":"The AdaptiveInductiveClassifier is an improvement to the SimpleInductiveClassifier and the NaiveClassifier. Contrary to the NaiveClassifier it computes nonconformity scores using a designated calibration dataset like the SimpleInductiveClassifier. 
Contrary to the SimpleInductiveClassifier it utilizes the softmax output of all classes.\n\n\n\n\n\n","category":"type"},{"location":"reference/#ConformalPrediction.CVMinMaxRegressor","page":"🧐 Reference","title":"ConformalPrediction.CVMinMaxRegressor","text":"Constructor for CVMinMaxRegressor.\n\n\n\n\n\n","category":"type"},{"location":"reference/#ConformalPrediction.CVPlusRegressor","page":"🧐 Reference","title":"ConformalPrediction.CVPlusRegressor","text":"Constructor for CVPlusRegressor.\n\n\n\n\n\n","category":"type"},{"location":"reference/#ConformalPrediction.ConformalInterval","page":"🧐 Reference","title":"ConformalPrediction.ConformalInterval","text":"An abstract base type for conformal models that produce interval-valued predictions. This includes most conformal regression models.\n\n\n\n\n\n","category":"type"},{"location":"reference/#ConformalPrediction.ConformalProbabilistic","page":"🧐 Reference","title":"ConformalPrediction.ConformalProbabilistic","text":"An abstract base type for conformal models that produce probabilistic predictions. This includes some conformal classifier like Venn-ABERS.\n\n\n\n\n\n","category":"type"},{"location":"reference/#ConformalPrediction.ConformalProbabilisticSet","page":"🧐 Reference","title":"ConformalPrediction.ConformalProbabilisticSet","text":"An abstract base type for conformal models that produce set-valued probabilistic predictions. 
This includes most conformal classification models.\n\n\n\n\n\n","category":"type"},{"location":"reference/#ConformalPrediction.ConformalQuantileRegressor","page":"🧐 Reference","title":"ConformalPrediction.ConformalQuantileRegressor","text":"Constructor for ConformalQuantileRegressor.\n\n\n\n\n\n","category":"type"},{"location":"reference/#ConformalPrediction.JackknifeMinMaxRegressor","page":"🧐 Reference","title":"ConformalPrediction.JackknifeMinMaxRegressor","text":"Constructor for JackknifeMinMaxRegressor.\n\n\n\n\n\n","category":"type"},{"location":"reference/#ConformalPrediction.JackknifePlusAbMinMaxRegressor","page":"🧐 Reference","title":"ConformalPrediction.JackknifePlusAbMinMaxRegressor","text":"Constructor for JackknifePlusAbMinMaxRegressor.\n\n\n\n\n\n","category":"type"},{"location":"reference/#ConformalPrediction.JackknifePlusAbRegressor","page":"🧐 Reference","title":"ConformalPrediction.JackknifePlusAbRegressor","text":"Constructor for JackknifePlusAbRegressor.\n\n\n\n\n\n","category":"type"},{"location":"reference/#ConformalPrediction.JackknifePlusRegressor","page":"🧐 Reference","title":"ConformalPrediction.JackknifePlusRegressor","text":"Constructor for JackknifePlusRegressor.\n\n\n\n\n\n","category":"type"},{"location":"reference/#ConformalPrediction.JackknifeRegressor","page":"🧐 Reference","title":"ConformalPrediction.JackknifeRegressor","text":"Constructor for JackknifeRegressor.\n\n\n\n\n\n","category":"type"},{"location":"reference/#ConformalPrediction.NaiveClassifier","page":"🧐 Reference","title":"ConformalPrediction.NaiveClassifier","text":"The NaiveClassifier is the simplest approach to conformal classification. 
Contrary to the SimpleInductiveClassifier it computes nonconformity scores in-sample using the training dataset.\n\n\n\n\n\n","category":"type"},{"location":"reference/#ConformalPrediction.NaiveRegressor","page":"🧐 Reference","title":"ConformalPrediction.NaiveRegressor","text":"The NaiveRegressor for conformal prediction is the simplest approach to conformal regression.\n\n\n\n\n\n","category":"type"},{"location":"reference/#ConformalPrediction.QuantileModel","page":"🧐 Reference","title":"ConformalPrediction.QuantileModel","text":"Union type for quantile models.\n\n\n\n\n\n","category":"type"},{"location":"reference/#ConformalPrediction.SimpleInductiveClassifier","page":"🧐 Reference","title":"ConformalPrediction.SimpleInductiveClassifier","text":"The SimpleInductiveClassifier is the simplest approach to Inductive Conformal Classification. Contrary to the NaiveClassifier it computes nonconformity scores using a designated calibration dataset.\n\n\n\n\n\n","category":"type"},{"location":"reference/#ConformalPrediction.SimpleInductiveRegressor","page":"🧐 Reference","title":"ConformalPrediction.SimpleInductiveRegressor","text":"The SimpleInductiveRegressor is the simplest approach to Inductive Conformal Regression. 
Contrary to the NaiveRegressor it computes nonconformity scores using a designated calibration dataset.\n\n\n\n\n\n","category":"type"},{"location":"reference/#ConformalPrediction.TimeSeriesRegressorEnsembleBatch","page":"🧐 Reference","title":"ConformalPrediction.TimeSeriesRegressorEnsembleBatch","text":"Constructor for TimeSeriesRegressorEnsembleBatch.\n\n\n\n\n\n","category":"type"},{"location":"reference/#ConformalPrediction._aggregate-Tuple{Any, Union{String, Symbol}}","page":"🧐 Reference","title":"ConformalPrediction._aggregate","text":"_aggregate(y, aggregate::Union{Symbol,String})\n\nHelper function that performs aggregation across a vector of predictions.\n\n\n\n\n\n","category":"method"},{"location":"reference/#ConformalPrediction.absolute_error-Tuple{Any, Any}","page":"🧐 Reference","title":"ConformalPrediction.absolute_error","text":"absolute_error(y,ŷ)\n\nComputes abs(y - ŷ) where ŷ is the predicted value.\n\n\n\n\n\n","category":"method"},{"location":"reference/#ConformalPrediction.blockbootstrap-Tuple{Any, Any}","page":"🧐 Reference","title":"ConformalPrediction.blockbootstrap","text":"blockbootstrap(time_series_data, block_size)\n\nGenerates a sampling method that block-bootstraps the given data.\n\n\n\n\n\n","category":"method"},{"location":"reference/#ConformalPrediction.is_classification-Tuple{Any}","page":"🧐 Reference","title":"ConformalPrediction.is_classification","text":"is_classification(ŷ)\n\nHelper function that checks if conformal prediction ŷ comes from a conformal classification model.\n\n\n\n\n\n","category":"method"},{"location":"reference/#ConformalPrediction.is_covered-Tuple{Any, Any}","page":"🧐 Reference","title":"ConformalPrediction.is_covered","text":"is_covered(ŷ, y)\n\nHelper function to check if y is contained in the conformal region. 
Based on whether conformal predictions ŷ are set- or interval-valued, different checks are executed.\n\n\n\n\n\n","category":"method"},{"location":"reference/#ConformalPrediction.is_covered_interval-Tuple{Any, Any}","page":"🧐 Reference","title":"ConformalPrediction.is_covered_interval","text":"is_covered_interval(ŷ, y)\n\nHelper function to check if y is contained in conformal interval.\n\n\n\n\n\n","category":"method"},{"location":"reference/#ConformalPrediction.is_covered_set-Tuple{Any, Any}","page":"🧐 Reference","title":"ConformalPrediction.is_covered_set","text":"is_covered_set(ŷ, y)\n\nHelper function to check if y is contained in conformal set.\n\n\n\n\n\n","category":"method"},{"location":"reference/#ConformalPrediction.is_regression-Tuple{Any}","page":"🧐 Reference","title":"ConformalPrediction.is_regression","text":"is_regression(ŷ)\n\nHelper function that checks if conformal prediction ŷ comes from a conformal regression model.\n\n\n\n\n\n","category":"method"},{"location":"reference/#ConformalPrediction.minus_softmax-Tuple{Any, Any}","page":"🧐 Reference","title":"ConformalPrediction.minus_softmax","text":"minus_softmax(y,ŷ)\n\nComputes 1.0 - ŷ where ŷ is the softmax output for a given class.\n\n\n\n\n\n","category":"method"},{"location":"reference/#ConformalPrediction.qminus","page":"🧐 Reference","title":"ConformalPrediction.qminus","text":"qminus(v::AbstractArray, coverage::AbstractFloat=0.9)\n\nImplements the hatq_nalpha^- finite-sample corrected quantile function as defined in Barber et al. (2020): https://arxiv.org/pdf/1905.02928.pdf. \n\n\n\n\n\n","category":"function"},{"location":"reference/#ConformalPrediction.qplus","page":"🧐 Reference","title":"ConformalPrediction.qplus","text":"qplus(v::AbstractArray, coverage::AbstractFloat=0.9)\n\nImplements the hatq_nalpha^+ finite-sample corrected quantile function as defined in Barber et al. (2020): https://arxiv.org/pdf/1905.02928.pdf. 
\n\n\n\n\n\n","category":"function"},{"location":"reference/#ConformalPrediction.reformat_interval-Tuple{Any}","page":"🧐 Reference","title":"ConformalPrediction.reformat_interval","text":"reformat_interval(ŷ)\n\nReformats conformal interval predictions.\n\n\n\n\n\n","category":"method"},{"location":"reference/#ConformalPrediction.reformat_mlj_prediction-Tuple{Any}","page":"🧐 Reference","title":"ConformalPrediction.reformat_mlj_prediction","text":"reformat_mlj_prediction(ŷ)\n\nA helper function that extracts only the output (predicted values) for whatever is returned from MMI.predict(model, fitresult, Xnew). This is currently used to avoid issues when calling MMI.predict(model, fitresult, Xnew) in pipelines.\n\n\n\n\n\n","category":"method"},{"location":"reference/#ConformalPrediction.score","page":"🧐 Reference","title":"ConformalPrediction.score","text":"score(conf_model::AdaptiveInductiveClassifier, ::Type{<:Supervised}, fitresult, X, y::Union{Nothing,AbstractArray}=nothing)\n\nScore method for the AdaptiveInductiveClassifier dispatched for any <:Supervised model.\n\n\n\n\n\n","category":"function"},{"location":"reference/#ConformalPrediction.score-2","page":"🧐 Reference","title":"ConformalPrediction.score","text":"score(conf_model::SimpleInductiveClassifier, ::Type{<:Supervised}, fitresult, X, y::Union{Nothing,AbstractArray}=nothing)\n\nScore method for the SimpleInductiveClassifier dispatched for any <:Supervised model.\n\n\n\n\n\n","category":"function"},{"location":"reference/#ConformalPrediction.score-3","page":"🧐 Reference","title":"ConformalPrediction.score","text":"score(conf_model::ConformalProbabilisticSet, fitresult, X, y::Union{Nothing,AbstractArray}=nothing)\n\nGeneric score method for the ConformalProbabilisticSet. It computes nonconformity scores using the heuristic function h and the softmax probabilities of the true class. 
Method is dispatched for different Conformal Probabilistic Sets and atomic models.\n\n\n\n\n\n","category":"function"},{"location":"reference/#ConformalPrediction.split_data-Tuple{ConformalPrediction.ConformalProbabilisticSet, Any, Any}","page":"🧐 Reference","title":"ConformalPrediction.split_data","text":"split_data(conf_model::ConformalProbabilisticSet, indices::Base.OneTo{Int})\n\nSplits the data into a proper training and calibration set.\n\n\n\n\n\n","category":"method"},{"location":"reference/#ConformalPrediction.ConformalTraining.ConformalNNClassifier","page":"🧐 Reference","title":"ConformalPrediction.ConformalTraining.ConformalNNClassifier","text":"The ConformalNNClassifier struct is a wrapper for a ConformalModel that can be used with MLJFlux.jl.\n\n\n\n\n\n","category":"type"},{"location":"reference/#ConformalPrediction.ConformalTraining.ConformalNNRegressor","page":"🧐 Reference","title":"ConformalPrediction.ConformalTraining.ConformalNNRegressor","text":"The ConformalNNRegressor struct is a wrapper for a ConformalModel that can be used with MLJFlux.jl.\n\n\n\n\n\n","category":"type"},{"location":"reference/#ConformalPrediction.ConformalTraining.classification_loss-Tuple{ConformalPrediction.ConformalProbabilisticSet, Any, Any, Any}","page":"🧐 Reference","title":"ConformalPrediction.ConformalTraining.classification_loss","text":"classification_loss(\n conf_model::ConformalProbabilisticSet, fitresult, X, y;\n loss_matrix::Union{AbstractMatrix,UniformScaling}=UniformScaling(1.0),\n temp::Real=0.1\n)\n\nComputes the calibration loss following Stutz et al. (2022): https://openreview.net/pdf?id=t8O-4LKFVx. 
Following the notation in the paper, the loss is computed as,\n\nmathcalL(C_theta(xtau)y) = sum_k L_yk left (1 - C_thetak(xtau)) mathbfI_y=k + C_thetak(xtau) mathbfI_yne k right\n\nwhere tau is just the quantile q̂ and kappa is the target set size (defaults to 1).\n\n\n\n\n\n","category":"method"},{"location":"reference/#ConformalPrediction.ConformalTraining.qminus_smooth","page":"🧐 Reference","title":"ConformalPrediction.ConformalTraining.qminus_smooth","text":"qminus_smooth(v::AbstractArray, coverage::AbstractFloat=0.9)\n\nImplements the hatq_nalpha^- finite-sample corrected quantile function as defined in Barber et al. (2020): https://arxiv.org/pdf/1905.02928.pdf. To allow for differentiability, we use the soft sort function from InferOpt.jl.\n\n\n\n\n\n","category":"function"},{"location":"reference/#ConformalPrediction.ConformalTraining.qplus_smooth","page":"🧐 Reference","title":"ConformalPrediction.ConformalTraining.qplus_smooth","text":"qplus_smooth(v::AbstractArray, coverage::AbstractFloat=0.9)\n\nImplements the hatq_nalpha^+ finite-sample corrected quantile function as defined in Barber et al. (2020): https://arxiv.org/pdf/1905.02928.pdf. 
To allow for differentiability, we use the soft sort function from InferOpt.jl.\n\n\n\n\n\n","category":"function"},{"location":"reference/#ConformalPrediction.ConformalTraining.score","page":"🧐 Reference","title":"ConformalPrediction.ConformalTraining.score","text":"ConformalPrediction.score(conf_model::AdaptiveInductiveClassifier, ::Type{<:MLJFluxModel}, fitresult, X, y::Union{Nothing,AbstractArray}=nothing)\n\nOverloads the score function for the MLJFluxModel type.\n\n\n\n\n\n","category":"function"},{"location":"reference/#ConformalPrediction.ConformalTraining.score-2","page":"🧐 Reference","title":"ConformalPrediction.ConformalTraining.score","text":"ConformalPrediction.score(conf_model::AdaptiveInductiveClassifier, ::Type{<:EitherEnsembleModel{<:MLJFluxModel}}, fitresult, X, y::Union{Nothing,AbstractArray}=nothing)\n\nOverloads the score function for ensembles of MLJFluxModel types.\n\n\n\n\n\n","category":"function"},{"location":"reference/#ConformalPrediction.ConformalTraining.smooth_size_loss-Tuple{ConformalPrediction.ConformalProbabilisticSet, Any, Any}","page":"🧐 Reference","title":"ConformalPrediction.ConformalTraining.smooth_size_loss","text":"function smooth_size_loss(\n conf_model::ConformalProbabilisticSet, fitresult, X;\n temp::Real=0.1, κ::Real=1.0\n)\n\nComputes the smooth (differentiable) size loss following Stutz et al. (2022): https://openreview.net/pdf?id=t8O-4LKFVx. First, soft assignment probabilities are computed for new data X. Then (following the notation in the paper) the loss is computed as, \n\nOmega(C_theta(xtau)) = max (0 sum_k C_thetak(xtau) - kappa)\n\nwhere tau is just the quantile q̂ and kappa is the target set size (defaults to 1). 
For empty sets, the loss is computed as K - kappa, that is, the maximum set size minus the target set size.\n\n\n\n\n\n","category":"method"},{"location":"reference/#ConformalPrediction.ConformalTraining.soft_assignment-Tuple{ConformalPrediction.ConformalProbabilisticSet, Any, Any}","page":"🧐 Reference","title":"ConformalPrediction.ConformalTraining.soft_assignment","text":"soft_assignment(conf_model::ConformalProbabilisticSet, fitresult, X; temp::Real=0.1)\n\nThis function can be used to compute soft assignment probabilities for new data X as in soft_assignment(conf_model::ConformalProbabilisticSet; temp::Real=0.1). When a fitted model mu (fitresult) and new samples X are supplied, non-conformity scores are first computed for the new data points. Then the existing threshold/quantile q̂ is used to compute the final soft assignments. \n\n\n\n\n\n","category":"method"},{"location":"reference/#ConformalPrediction.ConformalTraining.soft_assignment-Tuple{ConformalPrediction.ConformalProbabilisticSet}","page":"🧐 Reference","title":"ConformalPrediction.ConformalTraining.soft_assignment","text":"soft_assignment(conf_model::ConformalProbabilisticSet; temp::Real=0.1)\n\nComputes soft assignment scores for each label and sample. That is, the probability of label k being included in the confidence set. This implementation follows Stutz et al. (2022): https://openreview.net/pdf?id=t8O-4LKFVx. Contrary to the paper, we use non-conformity scores instead of conformity scores, hence the sign swap. 
\n\n\n\n\n\n","category":"method"},{"location":"reference/#ConformalPrediction.score-4","page":"🧐 Reference","title":"ConformalPrediction.score","text":"ConformalPrediction.score(conf_model::InductiveModel, model::MLJFluxModel, fitresult, X, y::Union{Nothing,AbstractArray}=nothing)\n\nOverloads the score function for the MLJFluxModel type.\n\n\n\n\n\n","category":"function"},{"location":"reference/#ConformalPrediction.score-5","page":"🧐 Reference","title":"ConformalPrediction.score","text":"ConformalPrediction.score(conf_model::SimpleInductiveClassifier, ::Type{<:EitherEnsembleModel{<:MLJFluxModel}}, fitresult, X, y::Union{Nothing,AbstractArray}=nothing)\n\nOverloads the score function for ensembles of MLJFluxModel types.\n\n\n\n\n\n","category":"function"},{"location":"reference/#MLJFlux.shape-Tuple{ConformalPrediction.ConformalTraining.ConformalNNRegressor, Any, Any}","page":"🧐 Reference","title":"MLJFlux.shape","text":"shape(model::NeuralNetworkRegressor, X, y)\n\nA private method that returns the shape of the input and output of the model for given data X and y.\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJFlux.train!-Tuple{Union{ConformalPrediction.ConformalTraining.ConformalNNClassifier, ConformalPrediction.ConformalTraining.ConformalNNRegressor}, Vararg{Any, 5}}","page":"🧐 Reference","title":"MLJFlux.train!","text":"MLJFlux.train!(model::ConformalNN, penalty, chain, optimiser, X, y)\n\nImplements the conformal training procedure for the ConformalNN type.\n\n\n\n\n\n","category":"method"},{"location":"explanation/finite_sample_correction/#Finite-sample-Correction","page":"Finite-sample Correction","title":"Finite-sample Correction","text":"","category":"section"},{"location":"explanation/finite_sample_correction/","page":"Finite-sample Correction","title":"Finite-sample Correction","text":"We follow the convention used in Angelopoulos and Bates (2021) and Barber et al. (2021) to correct for the finite-sample bias of the empirical quantile. 
Specifically, we use the following definition of the (1−α) empirical quantile:","category":"page"},{"location":"explanation/finite_sample_correction/","page":"Finite-sample Correction","title":"Finite-sample Correction","text":"hatq_nalpha^+v = fraclceil (n+1)(1-alpha)rceiln","category":"page"},{"location":"explanation/finite_sample_correction/","page":"Finite-sample Correction","title":"Finite-sample Correction","text":"Barber et al. (2021) further define as the α empirical quantile:","category":"page"},{"location":"explanation/finite_sample_correction/","page":"Finite-sample Correction","title":"Finite-sample Correction","text":"hatq_nalpha^-v = fraclfloor (n+1)alpha rfloorn = - hatq_nalpha^+-v","category":"page"},{"location":"explanation/finite_sample_correction/","page":"Finite-sample Correction","title":"Finite-sample Correction","text":"Below we test this equality numerically by generating a large number of random vectors and comparing the two quantiles. We then plot the density of the difference between the two quantiles. While the errors are small, they are not negligible for small n. 
In our computations, we use q̂(n, α)⁻{v} exactly as it is defined above, rather than relying on  − q̂(n, α)⁺{ − v}.","category":"page"},{"location":"explanation/finite_sample_correction/","page":"Finite-sample Correction","title":"Finite-sample Correction","text":"using ConformalPrediction: qplus, qminus\nnobs = [100, 1000, 10000]\nn = 1000\nalpha = 0.1\nplts = []\nΔ = Float32[]\nfor _nobs in nobs\n for i in 1:n\n v = rand(_nobs)\n δ = qminus(v, alpha) - (-qplus(-v, 1-alpha))\n push!(Δ, δ)\n end\n plt = density(Δ)\n vline!([mean(Δ)], color=:red, label=\"mean\")\n push!(plts, plt)\nend\nplot(plts..., layout=(1,3), size=(900, 300), legend=:topleft, title=[\"nobs = 100\" \"nobs = 1000\" \"nobs = 10000\"])","category":"page"},{"location":"explanation/finite_sample_correction/","page":"Finite-sample Correction","title":"Finite-sample Correction","text":"(Image: )","category":"page"},{"location":"explanation/finite_sample_correction/","page":"Finite-sample Correction","title":"Finite-sample Correction","text":"See also this related discussion.","category":"page"},{"location":"explanation/finite_sample_correction/#References","page":"Finite-sample Correction","title":"References","text":"","category":"section"},{"location":"explanation/finite_sample_correction/","page":"Finite-sample Correction","title":"Finite-sample Correction","text":"Angelopoulos, Anastasios N., and Stephen Bates. 2021. “A Gentle Introduction to Conformal Prediction and Distribution-Free Uncertainty Quantification.” https://arxiv.org/abs/2107.07511.","category":"page"},{"location":"explanation/finite_sample_correction/","page":"Finite-sample Correction","title":"Finite-sample Correction","text":"Barber, Rina Foygel, Emmanuel J. Candès, Aaditya Ramdas, and Ryan J. Tibshirani. 2021. “Predictive Inference with the Jackknife+.” The Annals of Statistics 49 (1): 486–507. 
https://doi.org/10.1214/20-AOS1965.","category":"page"},{"location":"how_to_guides/llm/#How-to-Build-a-Conformal-Chatbot","page":"How to Conformalize a Large Language Model","title":"How to Build a Conformal Chatbot","text":"","category":"section"},{"location":"how_to_guides/llm/","page":"How to Conformalize a Large Language Model","title":"How to Conformalize a Large Language Model","text":"Large Language Models are all the buzz right now. They are used for a variety of tasks, including text classification, question answering, and text generation. In this tutorial, we will show how to conformalize a transformer language model for text classification. We will use the Banking77 dataset (Casanueva et al. 2020), which consists of 13,083 queries from 77 intents. On the model side, we will use the DistilRoBERTa model, which is a distilled version of RoBERTa (Liu et al. 2019) finetuned on the Banking77 dataset.","category":"page"},{"location":"how_to_guides/llm/#Data","page":"How to Conformalize a Large Language Model","title":"Data","text":"","category":"section"},{"location":"how_to_guides/llm/","page":"How to Conformalize a Large Language Model","title":"How to Conformalize a Large Language Model","text":"The data was downloaded from HuggingFace 🤗 (HF) and split into a proper training, calibration, and test set. All that’s left to do is to load the data and preprocess it. 
We add 1 to the labels to make them 1-indexed (sorry Pythonistas 😜)","category":"page"},{"location":"how_to_guides/llm/","page":"How to Conformalize a Large Language Model","title":"How to Conformalize a Large Language Model","text":"# Get labels:\ndf_labels = CSV.read(\"dev/artifacts/data/banking77/labels.csv\", DataFrame, drop=[1])\nlabels = df_labels[:,1]\n\n# Get data:\ndf_train = CSV.read(\"dev/artifacts/data/banking77/train.csv\", DataFrame, drop=[1])\ndf_cal = CSV.read(\"dev/artifacts/data/banking77/calibration.csv\", DataFrame, drop=[1])\ndf_full_train = vcat(df_train, df_cal)\ntrain_ratio = round(nrow(df_train)/nrow(df_full_train), digits=2)\ndf_test = CSV.read(\"dev/artifacts/data/banking77/test.csv\", DataFrame, drop=[1])\n\n# Preprocess data:\nqueries_train, y_train = collect(df_train.text), categorical(df_train.labels .+ 1)\nqueries_cal, y_cal = collect(df_cal.text), categorical(df_cal.labels .+ 1)\nqueries, y = collect(df_full_train.text), categorical(df_full_train.labels .+ 1)\nqueries_test, y_test = collect(df_test.text), categorical(df_test.labels .+ 1)","category":"page"},{"location":"how_to_guides/llm/#HuggingFace-Model","page":"How to Conformalize a Large Language Model","title":"HuggingFace Model","text":"","category":"section"},{"location":"how_to_guides/llm/","page":"How to Conformalize a Large Language Model","title":"How to Conformalize a Large Language Model","text":"The model can be loaded from HF straight into our running Julia session using the Transformers.jl package. Below we load the tokenizer tkr and the model mod. The tokenizer is used to convert the text into a sequence of integers, which is then fed into the model. The model outputs a hidden state, which is then fed into a classifier to get the logits for each class. Finally, the logits are then passed through a softmax function to get the corresponding predicted probabilities. 
Below we run a few queries through the model to see how it performs.","category":"page"},{"location":"how_to_guides/llm/","page":"How to Conformalize a Large Language Model","title":"How to Conformalize a Large Language Model","text":"# Load model from HF 🤗:\ntkr = hgf\"mrm8488/distilroberta-finetuned-banking77:tokenizer\"\nmod = hgf\"mrm8488/distilroberta-finetuned-banking77:ForSequenceClassification\"\n\n# Test model:\nquery = [\n \"What is the base of the exchange rates?\",\n \"Why is my card not working?\",\n \"My Apple Pay is not working, what should I do?\",\n]\na = encode(tkr, query)\nb = mod.model(a)\nc = mod.cls(b.hidden_state)\nd = softmax(c.logit)\n[labels[i] for i in Flux.onecold(d)]","category":"page"},{"location":"how_to_guides/llm/","page":"How to Conformalize a Large Language Model","title":"How to Conformalize a Large Language Model","text":"3-element Vector{String}:\n \"exchange_rate\"\n \"card_not_working\"\n \"apple_pay_or_google_pay\"","category":"page"},{"location":"how_to_guides/llm/#MLJ-Interface","page":"How to Conformalize a Large Language Model","title":"MLJ Interface","text":"","category":"section"},{"location":"how_to_guides/llm/","page":"How to Conformalize a Large Language Model","title":"How to Conformalize a Large Language Model","text":"Since our package is interfaced to MLJ.jl, we need to define a wrapper model that conforms to the MLJ interface. In order to add the model for general use, we would probably go through MLJFlux.jl, but for this tutorial, we will make our life easy and simply overload the MLJBase.fit and MLJBase.predict methods. Since the model from HF is already pre-trained and we are not interested in further fine-tuning, we will simply return the model object in the MLJBase.fit method. The MLJBase.predict method will then take the model object and the query and return the predicted probabilities. We also need to define the MLJBase.target_scitype and MLJBase.predict_mode methods. 
The former tells MLJ what the output type of the model is, and the latter can be used to retrieve the label with the highest predicted probability.","category":"page"},{"location":"how_to_guides/llm/","page":"How to Conformalize a Large Language Model","title":"How to Conformalize a Large Language Model","text":"struct IntentClassifier <: MLJBase.Probabilistic\n tkr::TextEncoders.AbstractTransformerTextEncoder\n mod::HuggingFace.HGFRobertaForSequenceClassification\nend\n\nfunction IntentClassifier(;\n tokenizer::TextEncoders.AbstractTransformerTextEncoder, \n model::HuggingFace.HGFRobertaForSequenceClassification,\n)\n IntentClassifier(tkr, mod)\nend\n\nfunction get_hidden_state(clf::IntentClassifier, query::Union{AbstractString, Vector{<:AbstractString}})\n token = encode(clf.tkr, query)\n hidden_state = clf.mod.model(token).hidden_state\n return hidden_state\nend\n\n# This doesn't actually retrain the model, but it retrieves the classifier object\nfunction MLJBase.fit(clf::IntentClassifier, verbosity, X, y)\n cache=nothing\n report=nothing\n fitresult = (clf = clf.mod.cls, labels = levels(y))\n return fitresult, cache, report\nend\n\nfunction MLJBase.predict(clf::IntentClassifier, fitresult, Xnew)\n output = fitresult.clf(get_hidden_state(clf, Xnew))\n p̂ = UnivariateFinite(fitresult.labels,softmax(output.logit)',pool=missing)\n return p̂\nend\n\nMLJBase.target_scitype(clf::IntentClassifier) = AbstractVector{<:Finite}\n\nMLJBase.predict_mode(clf::IntentClassifier, fitresult, Xnew) = mode.(MLJBase.predict(clf, fitresult, Xnew))","category":"page"},{"location":"how_to_guides/llm/","page":"How to Conformalize a Large Language Model","title":"How to Conformalize a Large Language Model","text":"To test that everything is working as expected, we fit the model and generated predictions for a subset of the test data:","category":"page"},{"location":"how_to_guides/llm/","page":"How to Conformalize a Large Language Model","title":"How to Conformalize a Large Language 
Model","text":"clf = IntentClassifier(tkr, mod)\ntop_n = 10\nfitresult, _, _ = MLJBase.fit(clf, 1, nothing, y_test[1:top_n])\n@time ŷ = MLJBase.predict(clf, fitresult, queries_test[1:top_n]);","category":"page"},{"location":"how_to_guides/llm/#Conformal-Chatbot","page":"How to Conformalize a Large Language Model","title":"Conformal Chatbot","text":"","category":"section"},{"location":"how_to_guides/llm/","page":"How to Conformalize a Large Language Model","title":"How to Conformalize a Large Language Model","text":"To turn the wrapped, pre-trained model into a conformal intent classifier, we can now rely on standard API calls. We first wrap our atomic model where we also specify the desired coverage rate and method. Since even simple forward passes are computationally expensive for our (small) LLM, we rely on Simple Inductive Conformal Classification.","category":"page"},{"location":"how_to_guides/llm/","page":"How to Conformalize a Large Language Model","title":"How to Conformalize a Large Language Model","text":"#| eval: false\n\nconf_model = conformal_model(clf; coverage=0.95, method=:simple_inductive, train_ratio=train_ratio)\nmach = machine(conf_model, queries, y)\n@time fit!(mach)\nSerialization.serialize(\"dev/artifacts/models/banking77/simple_inductive.jls\", mach)","category":"page"},{"location":"how_to_guides/llm/","page":"How to Conformalize a Large Language Model","title":"How to Conformalize a Large Language Model","text":"Finally, we use our conformal LLM to build a simple and yet powerful chatbot that runs directly in the Julia REPL. 
Without dwelling on the details too much, the conformal_chatbot works as follows:","category":"page"},{"location":"how_to_guides/llm/","page":"How to Conformalize a Large Language Model","title":"How to Conformalize a Large Language Model","text":"Prompt the user to explain their intent.\nFeed the user input through the conformal LLM and present the output to the user.\nIf the conformal prediction set includes more than one label, prompt the user to either refine their input or choose one of the options included in the set.","category":"page"},{"location":"how_to_guides/llm/","page":"How to Conformalize a Large Language Model","title":"How to Conformalize a Large Language Model","text":"mach = Serialization.deserialize(\"dev/artifacts/models/banking77/simple_inductive.jls\")\n\nfunction prediction_set(mach, query::String)\n p̂ = MLJBase.predict(mach, query)[1]\n probs = pdf.(p̂, collect(1:77))\n in_set = findall(probs .!= 0)\n labels_in_set = labels[in_set]\n probs_in_set = probs[in_set]\n _order = sortperm(-probs_in_set)\n plt = UnicodePlots.barplot(labels_in_set[_order], probs_in_set[_order], title=\"Possible Intents\")\n return labels_in_set, plt\nend\n\nfunction conformal_chatbot()\n println(\"👋 Hi, I'm Julia, your conformal chatbot. I'm here to help you with your banking query. Ask me anything or type 'exit' to exit ...\\n\")\n completed = false\n queries = \"\"\n while !completed\n query = readline()\n queries = queries * \",\" * query\n labels, plt = prediction_set(mach, queries)\n if length(labels) > 1\n println(\"🤔 Hmmm ... I can think of several options here. If any of these applies, simply type the corresponding number (e.g. '1' for the first option). Otherwise, can you refine your question, please?\\n\")\n println(plt)\n else\n println(\"🥳 I think you mean $(labels[1]). Correct?\")\n end\n\n # Exit:\n if query == \"exit\"\n println(\"👋 Bye!\")\n break\n end\n if query ∈ string.(collect(1:77))\n println(\"👍 Great! You've chosen '$(labels[parse(Int64, query)])'. 
I'm glad I could help you. Have a nice day!\")\n completed = true\n end\n end\nend","category":"page"},{"location":"how_to_guides/llm/","page":"How to Conformalize a Large Language Model","title":"How to Conformalize a Large Language Model","text":"Below we show the output for two example queries. The first one is very ambiguous. As expected, the size of the prediction set is therefore large.","category":"page"},{"location":"how_to_guides/llm/","page":"How to Conformalize a Large Language Model","title":"How to Conformalize a Large Language Model","text":"ambiguous_query = \"transfer mondey?\"\nprediction_set(mach, ambiguous_query)[2]","category":"page"},{"location":"how_to_guides/llm/","page":"How to Conformalize a Large Language Model","title":"How to Conformalize a Large Language Model","text":" Possible Intents \n ┌ ┐ \n beneficiary_not_allowed ┤■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■ 0.150517 \n balance_not_updated_after_bank_transfer ┤■■■■■■■■■■■■■■■■■■■■■■ 0.111409 \n transfer_into_account ┤■■■■■■■■■■■■■■■■■■■ 0.0939535 \n transfer_not_received_by_recipient ┤■■■■■■■■■■■■■■■■■■ 0.091163 \n top_up_by_bank_transfer_charge ┤■■■■■■■■■■■■■■■■■■ 0.089306 \n failed_transfer ┤■■■■■■■■■■■■■■■■■■ 0.0888322 \n transfer_timing ┤■■■■■■■■■■■■■ 0.0641952 \n transfer_fee_charged ┤■■■■■■■ 0.0361131 \n pending_transfer ┤■■■■■ 0.0270795 \n receiving_money ┤■■■■■ 0.0252126 \n declined_transfer ┤■■■ 0.0164443 \n cancel_transfer ┤■■■ 0.0150444 \n └ ┘","category":"page"},{"location":"how_to_guides/llm/","page":"How to Conformalize a Large Language Model","title":"How to Conformalize a Large Language Model","text":"The more refined version of the prompt yields a smaller prediction set: less ambiguous prompts result in lower predictive uncertainty.","category":"page"},{"location":"how_to_guides/llm/","page":"How to Conformalize a Large Language Model","title":"How to Conformalize a Large Language Model","text":"refined_query = \"I tried to transfer money to my friend, but it 
failed.\"\nprediction_set(mach, refined_query)[2]","category":"page"},{"location":"how_to_guides/llm/","page":"How to Conformalize a Large Language Model","title":"How to Conformalize a Large Language Model","text":" Possible Intents \n ┌ ┐ \n failed_transfer ┤■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■ 0.59042 \n beneficiary_not_allowed ┤■■■■■■■ 0.139806 \n transfer_not_received_by_recipient ┤■■ 0.0449783 \n balance_not_updated_after_bank_transfer ┤■■ 0.037894 \n declined_transfer ┤■ 0.0232856 \n transfer_into_account ┤■ 0.0108771 \n cancel_transfer ┤ 0.00876369 \n └ ┘","category":"page"},{"location":"how_to_guides/llm/","page":"How to Conformalize a Large Language Model","title":"How to Conformalize a Large Language Model","text":"Below we include a short demo video that shows the REPL-based chatbot in action.","category":"page"},{"location":"how_to_guides/llm/","page":"How to Conformalize a Large Language Model","title":"How to Conformalize a Large Language Model","text":"(Image: )","category":"page"},{"location":"how_to_guides/llm/#Final-Remarks","page":"How to Conformalize a Large Language Model","title":"Final Remarks","text":"","category":"section"},{"location":"how_to_guides/llm/","page":"How to Conformalize a Large Language Model","title":"How to Conformalize a Large Language Model","text":"This work was done in collaboration with colleagues at ING as part of the ING Analytics 2023 Experiment Week. Our team demonstrated that Conformal Prediction provides a powerful and principled alternative to top-K intent classification. We won the first prize by popular vote.","category":"page"},{"location":"how_to_guides/llm/#References","page":"How to Conformalize a Large Language Model","title":"References","text":"","category":"section"},{"location":"how_to_guides/llm/","page":"How to Conformalize a Large Language Model","title":"How to Conformalize a Large Language Model","text":"Casanueva, Iñigo, Tadas Temčinas, Daniela Gerz, Matthew Henderson, and Ivan Vulić. 2020. 
“Efficient Intent Detection with Dual Sentence Encoders.” In Proceedings of the 2nd Workshop on Natural Language Processing for Conversational AI, 38–45. Online: Association for Computational Linguistics. https://doi.org/10.18653/v1/2020.nlp4convai-1.5.","category":"page"},{"location":"how_to_guides/llm/","page":"How to Conformalize a Large Language Model","title":"How to Conformalize a Large Language Model","text":"Liu, Yinhan, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. “RoBERTa: A Robustly Optimized BERT Pretraining Approach.” arXiv. https://doi.org/10.48550/arXiv.1907.11692.","category":"page"},{"location":"#ConformalPrediction","page":"🏠 Home","title":"ConformalPrediction","text":"","category":"section"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"CurrentModule = ConformalPrediction","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"(Image: )","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"Documentation for ConformalPrediction.jl.","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"ConformalPrediction.jl is a package for Predictive Uncertainty Quantification (UQ) through Conformal Prediction (CP) in Julia. It is designed to work with supervised models trained in MLJ (Blaom et al. 2020). Conformal Prediction is easy-to-understand, easy-to-use and model-agnostic and it works under minimal distributional assumptions.","category":"page"},{"location":"#Quick-Tour","page":"🏠 Home","title":"🏃 Quick Tour","text":"","category":"section"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"First time here? 
Take a quick interactive tour to see what this package can do right on JuliaHub (to run the notebook, hit login and then edit).","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"This Pluto.jl 🎈 notebook won the 2nd Prize in the JuliaCon 2023 Notebook Competition.","category":"page"},{"location":"#Local-Tour","page":"🏠 Home","title":"Local Tour","text":"","category":"section"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"To run the tour locally, just clone this repo and start Pluto.jl as follows:","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"] add Pluto\nusing Pluto\nPluto.run()","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"All notebooks are contained in docs/pluto.","category":"page"},{"location":"#Background","page":"🏠 Home","title":"📖 Background","text":"","category":"section"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"Don’t worry, we’re not about to deep-dive into methodology. But just to give you a high-level description of Conformal Prediction (CP) upfront:","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"Conformal prediction (a.k.a. conformal inference) is a user-friendly paradigm for creating statistically rigorous uncertainty sets/intervals for the predictions of such models. Critically, the sets are valid in a distribution-free sense: they possess explicit, non-asymptotic guarantees even without distributional assumptions or model assumptions. — Angelopoulos and Bates (2021)","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"Intuitively, CP works under the premise of turning heuristic notions of uncertainty into rigorous uncertainty estimates through repeated sampling or the use of dedicated calibration data.","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"(Image: Conformal Prediction in action: prediction intervals at varying coverage rates. 
As coverage grows, so does the width of the prediction interval.)","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"The animation above is lifted from a small blog post that introduces Conformal Prediction and this package in the context of regression. It shows how the prediction interval and the test points that it covers vary as the user-specified coverage rate changes.","category":"page"},{"location":"#Installation","page":"🏠 Home","title":"🚩 Installation","text":"","category":"section"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"You can install the latest stable release from the general registry:","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"using Pkg\nPkg.add(\"ConformalPrediction\")","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"The development version can be installed as follows:","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"using Pkg\nPkg.add(url=\"https://github.com/juliatrustworthyai/ConformalPrediction.jl\")","category":"page"},{"location":"#Usage-Example","page":"🏠 Home","title":"🔍 Usage Example","text":"","category":"section"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"To illustrate the intended use of the package, let’s have a quick look at a simple regression problem. 
We first generate some synthetic data and then determine indices for our training and test data using MLJ:","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"using MLJ\n\n# Inputs:\nN = 600\nxmax = 3.0\nusing Distributions\nd = Uniform(-xmax, xmax)\nX = rand(d, N)\nX = reshape(X, :, 1)\n\n# Outputs:\nnoise = 0.5\nfun(X) = sin(X)\nε = randn(N) .* noise\ny = @.(fun(X)) + ε\ny = vec(y)\n\n# Partition:\ntrain, test = partition(eachindex(y), 0.4, 0.4, shuffle=true)","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"We then import Symbolic Regressor (SymbolicRegression.jl) following the standard MLJ procedure.","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"regressor = @load SRRegressor pkg=SymbolicRegression\nmodel = regressor(\n niterations=50,\n binary_operators=[+, -, *],\n unary_operators=[sin],\n)","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"To turn our conventional model into a conformal model, we just need to declare it as such by using the conformal_model wrapper function. The generated conformal model instance can be wrapped in data to create a machine. Finally, we proceed by fitting the machine on training data using the generic fit! method:","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"using ConformalPrediction\nconf_model = conformal_model(model)\nmach = machine(conf_model, X, y)\nfit!(mach, rows=train)","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"Predictions can then be computed using the generic predict method. The code below produces predictions for the first n samples. 
Each tuple contains the lower and upper bound for the prediction interval.","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"show_first = 5\nXtest = selectrows(X, test)\nytest = y[test]\nŷ = predict(mach, Xtest)\nŷ[1:show_first]","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"5-element Vector{Tuple{Float64, Float64}}:\n (-0.04087262272113379, 1.8635644669554758)\n (0.04647464096907805, 1.9509117306456876)\n (-0.24248802236397216, 1.6619490673126376)\n (-0.07841928163933476, 1.8260178080372749)\n (-0.02268628324126465, 1.881750806435345)","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"For simple models like this one, we can call a custom Plots recipe on our instance, fit result and data to generate the chart below:","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"using Plots\nzoom = 0\nplt = plot(mach.model, mach.fitresult, Xtest, ytest, lw=5, zoom=zoom, observed_lab=\"Test points\")\nxrange = range(-xmax+zoom,xmax-zoom,length=N)\nplot!(plt, xrange, @.(fun(xrange)), lw=2, ls=:dash, colour=:darkorange, label=\"Ground truth\")","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"(Image: )","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"We can evaluate the conformal model using the standard MLJ workflow with a custom performance measure. 
You can use either emp_coverage for the overall empirical coverage (correctness) or ssc for the size-stratified coverage rate (adaptiveness).","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"_eval = evaluate!(mach; measure=[emp_coverage, ssc], verbosity=0)\ndisplay(_eval)\nprintln(\"Empirical coverage: $(round(_eval.measurement[1], digits=3))\")\nprintln(\"SSC: $(round(_eval.measurement[2], digits=3))\")","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"PerformanceEvaluation object with these fields:\n model, measure, operation, measurement, per_fold,\n per_observation, fitted_params_per_fold,\n report_per_fold, train_test_rows, resampling, repeats\nExtract:\n┌──────────────────────────────────────────────┬───────────┬─────────────┬──────\n│ measure │ operation │ measurement │ 1.9 ⋯\n├──────────────────────────────────────────────┼───────────┼─────────────┼──────\n│ ConformalPrediction.emp_coverage │ predict │ 0.953 │ 0.0 ⋯\n│ ConformalPrediction.size_stratified_coverage │ predict │ 0.953 │ 0.0 ⋯\n└──────────────────────────────────────────────┴───────────┴─────────────┴──────\n 2 columns omitted\n\nEmpirical coverage: 0.953\nSSC: 0.953","category":"page"},{"location":"#Read-on","page":"🏠 Home","title":"📚 Read on","text":"","category":"section"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"If after reading the usage example above you are just left with more questions about the topic, that’s normal. 
Below we have collected a number of further resources to help you get started with this package and the topic itself:","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"Blog post introducing conformal classifiers: [Quarto], [TDS], [Forem].\nBlog post applying CP to a deep learning image classifier: [Quarto], [TDS], [Forem].\nThe package docs and in particular the FAQ.","category":"page"},{"location":"#External-Resources","page":"🏠 Home","title":"External Resources","text":"","category":"section"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"A Gentle Introduction to Conformal Prediction and Distribution-Free Uncertainty Quantification by Angelopoulos and Bates (2021) (pdf).\nPredictive inference with the jackknife+ by Barber et al. (2021) (pdf).\nAwesome Conformal Prediction repository by Valery Manokhin (repo).\nDocumentation for the Python package MAPIE.","category":"page"},{"location":"#Status","page":"🏠 Home","title":"🔁 Status","text":"","category":"section"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"This package is in its early stages of development and therefore still subject to changes to the core architecture and API.","category":"page"},{"location":"#Implemented-Methodologies","page":"🏠 Home","title":"Implemented Methodologies","text":"","category":"section"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"The following CP approaches have been implemented:","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"Regression:","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"Inductive\nNaive Transductive\nJackknife\nJackknife+\nJackknife-minmax\nCV+\nCV-minmax","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"Classification:","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"Inductive\nNaive Transductive\nAdaptive Inductive","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"The 
package has been tested for the following supervised models offered by MLJ.","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"Regression:","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"keys(tested_atomic_models[:regression])","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"KeySet for a Dict{Symbol, Expr} with 5 entries. Keys:\n :ridge\n :lasso\n :evo_tree\n :nearest_neighbor\n :linear","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"Classification:","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"keys(tested_atomic_models[:classification])","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"KeySet for a Dict{Symbol, Expr} with 3 entries. Keys:\n :nearest_neighbor\n :evo_tree\n :logistic","category":"page"},{"location":"#Implemented-Evaluation-Metrics","page":"🏠 Home","title":"Implemented Evaluation Metrics","text":"","category":"section"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"To evaluate conformal predictors we are typically interested in correctness and adaptiveness. The former can be evaluated by looking at the empirical coverage rate, while the latter can be assessed through metrics that address the conditional coverage (Angelopoulos and Bates 2021). To this end, the following metrics have been implemented:","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"emp_coverage (empirical coverage)\nssc (size-stratified coverage)","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"There is also a simple Plots.jl recipe that can be used to inspect the set sizes. 
In the regression case, the interval width is stratified into discrete bins for this purpose:","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"bar(mach.model, mach.fitresult, X)","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"(Image: )","category":"page"},{"location":"#Contribute","page":"🏠 Home","title":"🛠 Contribute","text":"","category":"section"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"Contributions are welcome! A good place to start is the list of outstanding issues. For more details, see also the Contributor’s Guide. Please follow the SciML ColPrac guide.","category":"page"},{"location":"#Thanks","page":"🏠 Home","title":"🙏 Thanks","text":"","category":"section"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"To build this package I have read and re-read both Angelopoulos and Bates (2021) and Barber et al. (2021). The Awesome Conformal Prediction repository (Manokhin, n.d.) has also been a fantastic place to get started. Thanks also to @aangelopoulos, @valeman and others for actively contributing to discussions on here. Quite a few people have also recently started using and contributing to the package for which I am very grateful. Finally, many thanks to Anthony Blaom (@ablaom) for many helpful discussions about how to interface this package to MLJ.jl.","category":"page"},{"location":"#References","page":"🏠 Home","title":"🎓 References","text":"","category":"section"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"Angelopoulos, Anastasios N., and Stephen Bates. 2021. “A Gentle Introduction to Conformal Prediction and Distribution-Free Uncertainty Quantification.” https://arxiv.org/abs/2107.07511.","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"Barber, Rina Foygel, Emmanuel J. Candès, Aaditya Ramdas, and Ryan J. Tibshirani. 2021. “Predictive Inference with the Jackknife+.” The Annals of Statistics 49 (1): 486–507. 
https://doi.org/10.1214/20-AOS1965.","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"Blaom, Anthony D., Franz Kiraly, Thibaut Lienart, Yiannis Simillides, Diego Arenas, and Sebastian J. Vollmer. 2020. “MLJ: A Julia Package for Composable Machine Learning.” Journal of Open Source Software 5 (55): 2704. https://doi.org/10.21105/joss.02704.","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"Manokhin, Valery. n.d. “Awesome Conformal Prediction.”","category":"page"}] } diff --git a/dev/tutorials/classification/index.html b/dev/tutorials/classification/index.html index c66189f..ce1f21c 100644 --- a/dev/tutorials/classification/index.html +++ b/dev/tutorials/classification/index.html @@ -82,4 +82,4 @@ # display(_eval) println("Empirical coverage for $(_mod): $(round(_eval.measurement[1], digits=3))") println("SSC for $(_mod): $(round(_eval.measurement[2], digits=3))")
      Empirical coverage for adaptive_inductive: 0.962
      -SSC for adaptive_inductive: 0.962

      References

      Angelopoulos, Anastasios N., and Stephen Bates. 2021. “A Gentle Introduction to Conformal Prediction and Distribution-Free Uncertainty Quantification.” https://arxiv.org/abs/2107.07511.

      [1] In other places split conformal prediction is sometimes referred to as inductive conformal prediction.

      [2] Any thoughts/comments welcome!

      +SSC for adaptive_inductive: 0.962

      References

      Angelopoulos, Anastasios N., and Stephen Bates. 2021. “A Gentle Introduction to Conformal Prediction and Distribution-Free Uncertainty Quantification.” https://arxiv.org/abs/2107.07511.

      [1] In other places split conformal prediction is sometimes referred to as inductive conformal prediction.

      [2] Any thoughts/comments welcome!

      diff --git a/dev/tutorials/index.html b/dev/tutorials/index.html index c728418..2f7f73c 100644 --- a/dev/tutorials/index.html +++ b/dev/tutorials/index.html @@ -1,2 +1,2 @@ -Overview · ConformalPrediction.jl

      Tutorials

      In this section you will find a series of tutorials that should help you gain a basic understanding of Conformal Prediction and how to apply it in Julia using this package.

      Tutorials are lessons that take the reader by the hand through a series of steps to complete a project of some kind. Tutorials are learning-oriented.

      Diátaxis

      In other words, you come here because you are new to this topic and are looking for a first peek at the methodology and code 🫣

      +Overview · ConformalPrediction.jl

      Tutorials

      In this section you will find a series of tutorials that should help you gain a basic understanding of Conformal Prediction and how to apply it in Julia using this package.

      Tutorials are lessons that take the reader by the hand through a series of steps to complete a project of some kind. Tutorials are learning-oriented.

      Diátaxis

      In other words, you come here because you are new to this topic and are looking for a first peek at the methodology and code 🫣

      diff --git a/dev/tutorials/plotting/index.html b/dev/tutorials/plotting/index.html index 1acec89..bc4f86d 100644 --- a/dev/tutorials/plotting/index.html +++ b/dev/tutorials/plotting/index.html @@ -47,4 +47,4 @@ mach = machine(conf_model, X, y) fit!(mach)
      p1 = contourf(mach.model, mach.fitresult, X, y; plot_set_size=true)
       p2 = bar(mach.model, mach.fitresult, X)
      -plot(p1, p2, size=(700,300))

      Angelopoulos, Anastasios N., and Stephen Bates. 2021. “A Gentle Introduction to Conformal Prediction and Distribution-Free Uncertainty Quantification.” https://arxiv.org/abs/2107.07511.

      +plot(p1, p2, size=(700,300))

      Angelopoulos, Anastasios N., and Stephen Bates. 2021. “A Gentle Introduction to Conformal Prediction and Distribution-Free Uncertainty Quantification.” https://arxiv.org/abs/2107.07511.

      diff --git a/dev/tutorials/regression/index.html b/dev/tutorials/regression/index.html index 9b92e97..7829b89 100644 --- a/dev/tutorials/regression/index.html +++ b/dev/tutorials/regression/index.html @@ -105,4 +105,4 @@ 6 │ jackknife_plus_ab 0.941667 0.941667 7 │ jackknife_plus 0.941667 0.871606 8 │ jackknife 0.941667 0.941667 - 9 │ naive 0.938333 0.938333

      References

      Angelopoulos, Anastasios N., and Stephen Bates. 2021. “A Gentle Introduction to Conformal Prediction and Distribution-Free Uncertainty Quantification.” https://arxiv.org/abs/2107.07511.

      + 9 │ naive 0.938333 0.938333

      References

      Angelopoulos, Anastasios N., and Stephen Bates. 2021. “A Gentle Introduction to Conformal Prediction and Distribution-Free Uncertainty Quantification.” https://arxiv.org/abs/2107.07511.