From bed1a770e17459ad4c3b357467d5e07649bfc822 Mon Sep 17 00:00:00 2001 From: "Documenter.jl" Date: Wed, 20 Mar 2024 15:55:59 +0000 Subject: [PATCH] build based on bd98fcb --- v0.1.32/_contribute/index.html | 2 +- v0.1.32/assets/resources/index.html | 2 +- v0.1.32/contribute/index.html | 2 +- v0.1.32/contribute/performance/index.html | 2 +- v0.1.32/explanation/architecture/index.html | 2 +- v0.1.32/explanation/categorical/index.html | 2 +- v0.1.32/explanation/generators/clap_roar/index.html | 2 +- v0.1.32/explanation/generators/clue/index.html | 2 +- v0.1.32/explanation/generators/dice/index.html | 2 +- v0.1.32/explanation/generators/feature_tweak/index.html | 2 +- v0.1.32/explanation/generators/generic/index.html | 2 +- v0.1.32/explanation/generators/gravitational/index.html | 2 +- v0.1.32/explanation/generators/greedy/index.html | 2 +- v0.1.32/explanation/generators/growing_spheres/index.html | 2 +- v0.1.32/explanation/generators/overview/index.html | 2 +- v0.1.32/explanation/generators/probe/index.html | 2 +- v0.1.32/explanation/generators/revise/index.html | 2 +- v0.1.32/explanation/index.html | 2 +- v0.1.32/explanation/optimisers/jsma/index.html | 2 +- v0.1.32/explanation/optimisers/overview/index.html | 2 +- v0.1.32/how_to_guides/custom_generators/index.html | 2 +- v0.1.32/how_to_guides/custom_models/index.html | 2 +- v0.1.32/how_to_guides/index.html | 2 +- v0.1.32/index.html | 2 +- v0.1.32/reference/index.html | 2 +- v0.1.32/search/index.html | 2 +- v0.1.32/tutorials/benchmarking/index.html | 2 +- v0.1.32/tutorials/data_catalogue/index.html | 2 +- v0.1.32/tutorials/data_preprocessing/index.html | 2 +- v0.1.32/tutorials/evaluation/index.html | 2 +- v0.1.32/tutorials/generators/index.html | 2 +- v0.1.32/tutorials/index.html | 2 +- v0.1.32/tutorials/model_catalogue/index.html | 2 +- v0.1.32/tutorials/models/index.html | 2 +- v0.1.32/tutorials/parallelization/index.html | 2 +- v0.1.32/tutorials/simple_example/index.html | 2 +- 
v0.1.32/tutorials/whistle_stop/index.html | 2 +- 37 files changed, 37 insertions(+), 37 deletions(-) diff --git a/v0.1.32/_contribute/index.html b/v0.1.32/_contribute/index.html index 0d1840fb2..5131cda85 100644 --- a/v0.1.32/_contribute/index.html +++ b/v0.1.32/_contribute/index.html @@ -1,2 +1,2 @@ -Contributing · CounterfactualExplanations.jl

Contributing

Our goal is to provide a go-to place for Counterfactual Explanations in Julia. To this end, the following is a non-exhaustive list of enhancements we have planned:

  1. Additional counterfactual generators and predictive models.
  2. Additional datasets for testing, evaluation and benchmarking.
  3. Support for regression models.

For a complete list, have a look at the outstanding issues.

How to contribute?

Any sort of contribution is welcome, in particular:

  1. Should you spot any errors or something is not working, please just open an issue.
  2. If you want to contribute your code, please proceed as follows:
    • Fork this repo and clone your fork: git clone https://github.com/your_username/CounterfactualExplanations.jl.
    • Implement your modifications and submit a pull request.
  3. For any other questions or comments, you can also start a discussion.
diff --git a/v0.1.32/assets/resources/index.html b/v0.1.32/assets/resources/index.html index 8fd94054b..f03977f0f 100644 --- a/v0.1.32/assets/resources/index.html +++ b/v0.1.32/assets/resources/index.html @@ -1,2 +1,2 @@ -📚 Additional Resources · CounterfactualExplanations.jl
diff --git a/v0.1.32/contribute/index.html b/v0.1.32/contribute/index.html index 9198f214f..feb82ded8 100644 --- a/v0.1.32/contribute/index.html +++ b/v0.1.32/contribute/index.html @@ -1,2 +1,2 @@ -🛠 Contribute · CounterfactualExplanations.jl

Contributing

Our goal is to provide a go-to place for Counterfactual Explanations in Julia. To this end, the following is a non-exhaustive list of enhancements we have planned:

  1. Additional counterfactual generators and predictive models.
  2. Additional datasets for testing, evaluation and benchmarking.
  3. Support for regression models.

For a complete list, have a look at the outstanding issues.

How to contribute?

Any sort of contribution is welcome, in particular:

  1. Should you spot any errors or something is not working, please just open an issue.
  2. If you want to contribute your code, please proceed as follows:
    • Fork this repo and clone your fork: git clone https://github.com/your_username/CounterfactualExplanations.jl.
    • Implement your modifications and submit a pull request.
  3. For any other questions or comments, you can also start a discussion.
diff --git a/v0.1.32/contribute/performance/index.html b/v0.1.32/contribute/performance/index.html index 6947ff33d..9808b3fcc 100644 --- a/v0.1.32/contribute/performance/index.html +++ b/v0.1.32/contribute/performance/index.html @@ -10,4 +10,4 @@ # Search: generator = GenericGenerator() -ce = generate_counterfactual(x, target, counterfactual_data, M, generator)
counterfactual_data_large = load_linearly_separable(100000)
@time generate_counterfactual(x, target, counterfactual_data, M, generator)
@time generate_counterfactual(x, target, counterfactual_data_large, M, generator)
diff --git a/v0.1.32/explanation/architecture/index.html b/v0.1.32/explanation/architecture/index.html index beb537c95..7ae230844 100644 --- a/v0.1.32/explanation/architecture/index.html +++ b/v0.1.32/explanation/architecture/index.html @@ -1,2 +1,2 @@ -Package Architecture · CounterfactualExplanations.jl

Package Architecture

Modular, composable, scalable!

The diagram below provides an overview of the package architecture. It is built around two core modules that are designed to be as extensible as possible through dispatch: 1) Models is concerned with making any arbitrary model compatible with the package; 2) Generators is used to implement arbitrary counterfactual search algorithms.[1] The core function of the package generate_counterfactual uses an instance of type <: AbstractFittedModel produced by the Models module and an instance of type <: AbstractGenerator produced by the Generators module.

[1] We have made an effort to keep the code base as flexible and extensible as possible, but cannot guarantee at this point that any counterfactual generator can be implemented without further adaptation.
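Concretely, extending the Models module through dispatch amounts to subtyping AbstractFittedModel and overloading a couple of methods. The sketch below is illustrative only: the wrapper type MyLinearModel is hypothetical, and the assumption that logits and probs are the methods to overload follows the how-to guide on custom models.

```julia
using CounterfactualExplanations
using CounterfactualExplanations.Models

# Hypothetical wrapper around a plain linear classifier:
struct MyLinearModel <: Models.AbstractFittedModel
    W::Matrix{Float64}
    b::Vector{Float64}
end

# Overload the methods the package dispatches on (per the how-to guide):
Models.logits(M::MyLinearModel, X::AbstractArray) = M.W * X .+ M.b
Models.probs(M::MyLinearModel, X::AbstractArray) =
    1 ./ (1 .+ exp.(-Models.logits(M, X)))
```

Once these methods are defined, an instance of MyLinearModel can be passed to generate_counterfactual like any built-in model.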

diff --git a/v0.1.32/explanation/categorical/index.html b/v0.1.32/explanation/categorical/index.html index ad88aa6c4..e0708cca3 100644 --- a/v0.1.32/explanation/categorical/index.html +++ b/v0.1.32/explanation/categorical/index.html @@ -117,4 +117,4 @@ 0.0 0.0 1.0 - 1.85 + 1.85 diff --git a/v0.1.32/explanation/generators/clap_roar/index.html b/v0.1.32/explanation/generators/clap_roar/index.html index 6d7006a4d..21df1ef6c 100644 --- a/v0.1.32/explanation/generators/clap_roar/index.html +++ b/v0.1.32/explanation/generators/clap_roar/index.html @@ -3,4 +3,4 @@ \text{extcost}(f(\mathbf{s}^\prime)) = l(M(f(\mathbf{s}^\prime)),y^\prime) \end{aligned}\]

for each counterfactual $k$ where $l$ denotes the loss function used to train $M$. This approach is based on the intuition that (endogenous) model shifts will be triggered by counterfactuals that increase classifier loss (Altmeyer et al. 2023).
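The external cost above translates directly into code. The sketch below is illustrative only; loss stands in for the training loss $l$ of the model $M$, and none of the names are package API.

```julia
# Illustrative sketch of the ClaPROAR external cost: the classifier's own
# training loss evaluated at the (decoded) counterfactual state, so that
# counterfactuals which increase classifier loss are penalised.
extcost(M, f, s_prime, y_prime; loss) = loss(M(f(s_prime)), y_prime)
```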

Usage

The approach can be used in our package as follows:

generator = ClaPROARGenerator()
 ce = generate_counterfactual(x, target, counterfactual_data, M, generator)
-plot(ce)

Comparison to GenericGenerator

The figure below compares the outcome for the GenericGenerator and the ClaPROARGenerator.

References

Altmeyer, Patrick, Giovan Angela, Aleksander Buszydlik, Karol Dobiczek, Arie van Deursen, and Cynthia Liem. 2023. “Endogenous Macrodynamics in Algorithmic Recourse.” In First IEEE Conference on Secure and Trustworthy Machine Learning.

Upadhyay, Sohini, Shalmali Joshi, and Himabindu Lakkaraju. 2021. “Towards Robust and Reliable Algorithmic Recourse.” https://arxiv.org/abs/2102.13620.

diff --git a/v0.1.32/explanation/generators/clue/index.html b/v0.1.32/explanation/generators/clue/index.html index 765f122ec..4d5257c83 100644 --- a/v0.1.32/explanation/generators/clue/index.html +++ b/v0.1.32/explanation/generators/clue/index.html @@ -6,4 +6,4 @@ ce = generate_counterfactual( x, target, counterfactual_data, M, generator; converge_when=:max_iter, max_iter=1000) -plot(ce)

Extra: The CLUE generator can also be applied to a counterfactual that has already been generated by a different generator. In this case, CLUE can be used to make that counterfactual more robust.

Note: The above documentation is based on the information provided in the CLUE paper. Please refer to the original paper for more detailed explanations and implementation specifics.

References

Antorán, Javier, Umang Bhatt, Tameem Adel, Adrian Weller, and José Miguel Hernández-Lobato. 2020. “Getting a Clue: A Method for Explaining Uncertainty Estimates.” https://arxiv.org/abs/2006.06848.

diff --git a/v0.1.32/explanation/generators/dice/index.html b/v0.1.32/explanation/generators/dice/index.html index 8bc3d85ad..fbaa405d7 100644 --- a/v0.1.32/explanation/generators/dice/index.html +++ b/v0.1.32/explanation/generators/dice/index.html @@ -39,4 +39,4 @@ num_counterfactuals=n_cf, converge_when=:generator_conditions ) ) -end

The figure below shows the resulting counterfactual paths. As expected, the counterfactuals are more dispersed across the feature domain for higher choices of $\lambda_2$.

References

Mothilal, Ramaravind K, Amit Sharma, and Chenhao Tan. 2020. “Explaining Machine Learning Classifiers Through Diverse Counterfactual Explanations.” In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 607–17.

[1] With thanks to the respondents on Discourse

diff --git a/v0.1.32/explanation/generators/feature_tweak/index.html b/v0.1.32/explanation/generators/feature_tweak/index.html index a05eea8a7..d4fb94ed7 100644 --- a/v0.1.32/explanation/generators/feature_tweak/index.html +++ b/v0.1.32/explanation/generators/feature_tweak/index.html @@ -31,4 +31,4 @@ colorbar=false, ) -display(plot(p1, p2; size=(800, 400)))

References

Tolomei, Gabriele, Fabrizio Silvestri, Andrew Haines, and Mounia Lalmas. 2017. “Interpretable Predictions of Tree-Based Ensembles via Actionable Feature Tweaking.” In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 465–74. https://doi.org/10.1145/3097983.3098039.

diff --git a/v0.1.32/explanation/generators/generic/index.html b/v0.1.32/explanation/generators/generic/index.html index 9fa16adb0..4444cfa42 100644 --- a/v0.1.32/explanation/generators/generic/index.html +++ b/v0.1.32/explanation/generators/generic/index.html @@ -1,4 +1,4 @@ Generic · CounterfactualExplanations.jl

GenericGenerator

We use the term generic to refer to the basic counterfactual generator proposed by Wachter, Mittelstadt, and Russell (2017) with $L1$-norm regularization. There is also a variant of this generator that uses the distance metric proposed in Wachter, Mittelstadt, and Russell (2017), which we call WachterGenerator.

Description

As the term indicates, this approach is simple: it forms the baseline for gradient-based counterfactual generators. Wachter, Mittelstadt, and Russell (2017) were among the first to realise that

[…] explanations can, in principle, be offered without opening the “black box.”

— Wachter, Mittelstadt, and Russell (2017)

Gradient descent is performed directly in the feature space. Concerning the cost heuristic, the authors choose to penalize the distance of counterfactuals from the factual value. This is based on the intuitive notion that larger feature perturbations require greater effort.
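Under stated assumptions (a differentiable predict function and an L1 distance penalty), the baseline search can be sketched as a plain gradient-descent loop. None of the names below are package API; this is a toy illustration of the idea, not the package's implementation.

```julia
using Zygote  # assumed here for automatic differentiation

# Toy sketch of Wachter-style search: gradient descent directly in feature
# space on prediction loss plus a penalty on the distance from the factual x.
function wachter_search(x, predict, y_target; λ=0.1, η=0.05, steps=100)
    x′ = copy(x)
    for _ in 1:steps
        loss(z) = (predict(z) - y_target)^2 + λ * sum(abs.(z .- x))
        g = Zygote.gradient(loss, x′)[1]
        x′ .-= η .* g   # step towards the target, away from large perturbations
    end
    return x′
end
```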

Usage

The approach can be used in our package as follows:

generator = GenericGenerator()
 ce = generate_counterfactual(x, target, counterfactual_data, M, generator)
-plot(ce)

References

Wachter, Sandra, Brent Mittelstadt, and Chris Russell. 2017. “Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR.” Harv. JL & Tech. 31: 841.

diff --git a/v0.1.32/explanation/generators/gravitational/index.html b/v0.1.32/explanation/generators/gravitational/index.html index 949e65006..ec9f6cfdf 100644 --- a/v0.1.32/explanation/generators/gravitational/index.html +++ b/v0.1.32/explanation/generators/gravitational/index.html @@ -5,4 +5,4 @@ \text{extcost}(f(\mathbf{s}^\prime)) = \text{dist}(f(\mathbf{s}^\prime),\bar{x}^*) \end{aligned}\]

where $\bar{x}^*$ is some sensible point in the target domain, for example, the subsample average $\bar{x}^*=\text{mean}(x)$, $x \in \mathcal{D}_1$.

There is a tradeoff then, between the distance of counterfactuals from their factual value and the chosen point in the target domain. The chart below illustrates how the counterfactual outcome changes as the penalty $\lambda_2$ on the distance to the point in the target domain is increased from left to right (holding the other penalty term constant).
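The external cost can be sketched as follows; the toy target_samples and the choice of the subsample average for $\bar{x}^*$ are assumptions for illustration only.

```julia
using LinearAlgebra: norm
using Statistics: mean

# Toy samples from the target domain D₁ (one column per observation):
target_samples = [2.0 3.0 2.5;
                  2.0 1.0 2.5]

# A sensible point in the target domain, here the subsample average:
x̄_star = vec(mean(target_samples, dims=2))

# Gravitational external cost: distance of the counterfactual from x̄*.
extcost(x_prime) = norm(x_prime .- x̄_star)
```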

Usage

The approach can be used in our package as follows:

generator = GravitationalGenerator()
 ce = generate_counterfactual(x, target, counterfactual_data, M, generator)
-display(plot(ce))

Comparison to GenericGenerator

The figure below compares the outcome for the GenericGenerator and the GravitationalGenerator.

References

Altmeyer, Patrick, Giovan Angela, Aleksander Buszydlik, Karol Dobiczek, Arie van Deursen, and Cynthia Liem. 2023. “Endogenous Macrodynamics in Algorithmic Recourse.” In First IEEE Conference on Secure and Trustworthy Machine Learning.

diff --git a/v0.1.32/explanation/generators/greedy/index.html b/v0.1.32/explanation/generators/greedy/index.html index aef1e63d7..bfb8e9dca 100644 --- a/v0.1.32/explanation/generators/greedy/index.html +++ b/v0.1.32/explanation/generators/greedy/index.html @@ -2,4 +2,4 @@ Greedy · CounterfactualExplanations.jl

GreedyGenerator

We use the term greedy to describe the counterfactual generator introduced by Schut et al. (2021).

Description

The Greedy generator works under the premise of generating realistic counterfactuals by minimizing predictive uncertainty. Schut et al. (2021) show that for models that incorporate predictive uncertainty in their predictions, maximizing the predictive probability corresponds to minimizing the predictive uncertainty: by construction, the generated counterfactual will therefore be realistic (low epistemic uncertainty) and unambiguous (low aleatoric uncertainty).

For the counterfactual search Schut et al. (2021) propose using a Jacobian-based Saliency Map Attack (JSMA). It is greedy in the sense that it is an “iterative algorithm that updates the most salient feature, i.e. the feature that has the largest influence on the classification, by $\delta$ at each step” (Schut et al. 2021).
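A single update of this greedy rule can be sketched as follows; grad_yloss and the step size δ are placeholders for illustration, not package API.

```julia
# Toy sketch of one JSMA-style greedy step: update only the most salient
# feature, i.e. the one with the largest absolute gradient of the loss,
# by a fixed amount δ.
function greedy_step(x::Vector{Float64}, grad_yloss::Function; δ=0.1)
    g = grad_yloss(x)
    k = argmax(abs.(g))           # index of the most salient feature
    x_new = copy(x)
    x_new[k] -= δ * sign(g[k])    # move that feature against the gradient
    return x_new
end
```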

Usage

The approach can be used in our package as follows:

M = fit_model(counterfactual_data, :DeepEnsemble)
 generator = GreedyGenerator()
 ce = generate_counterfactual(x, target, counterfactual_data, M, generator)
-plot(ce)

References

Schut, Lisa, Oscar Key, Rory Mc Grath, Luca Costabello, Bogdan Sacaleanu, Yarin Gal, et al. 2021. “Generating Interpretable Counterfactual Explanations By Implicit Minimisation of Epistemic and Aleatoric Uncertainties.” In International Conference on Artificial Intelligence and Statistics, 1756–64. PMLR.

diff --git a/v0.1.32/explanation/generators/growing_spheres/index.html b/v0.1.32/explanation/generators/growing_spheres/index.html index eb4188b72..01b7919d6 100644 --- a/v0.1.32/explanation/generators/growing_spheres/index.html +++ b/v0.1.32/explanation/generators/growing_spheres/index.html @@ -4,4 +4,4 @@ M = fit_model(counterfactual_data, :DeepEnsemble) ce = generate_counterfactual( x, target, counterfactual_data, M, generator) -plot(ce)

References

Laugel, Thibault, Marie-Jeanne Lesot, Christophe Marsala, Xavier Renard, and Marcin Detyniecki. 2017. “Inverse Classification for Comparison-Based Interpretability in Machine Learning.” arXiv. https://doi.org/10.48550/arXiv.1712.08443.

diff --git a/v0.1.32/explanation/generators/overview/index.html b/v0.1.32/explanation/generators/overview/index.html index 5b4f67c18..d0ad37478 100644 --- a/v0.1.32/explanation/generators/overview/index.html +++ b/v0.1.32/explanation/generators/overview/index.html @@ -12,4 +12,4 @@ :generic => GenericGenerator :greedy => GreedyGenerator

The following sections provide brief descriptions of all of them.

Gradient-based Counterfactual Generators

At the time of writing, all generators are gradient-based: that is, counterfactuals are searched through gradient descent. In Altmeyer et al. (2023) we lay out a general methodological framework that can be applied to all of these generators:

\[\begin{aligned} \mathbf{s}^\prime &= \arg \min_{\mathbf{s}^\prime \in \mathcal{S}} \left\{ {\text{yloss}(M(f(\mathbf{s}^\prime)),y^*)}+ \lambda {\text{cost}(f(\mathbf{s}^\prime)) } \right\} -\end{aligned} \]

“Here $\mathbf{s}^\prime=\left\{s_k^\prime\right\}_K$ is a $K$-dimensional array of counterfactual states and $f: \mathcal{S} \mapsto \mathcal{X}$ maps from the counterfactual state space to the feature space.” (Altmeyer et al. 2023)

For most generators, the state space is the feature space ($f$ is the identity function) and the number of counterfactuals $K$ is one. Latent Space generators instead search counterfactuals in some latent space $\mathcal{S}$. In this case, $f$ corresponds to the decoder part of the generative model, that is, the function that maps from the latent space back to the feature space.
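In code, the objective above can be sketched as a single function of the counterfactual state; yloss, cost, f and $\lambda$ are the choices each concrete generator makes, and all names here are illustrative rather than package API.

```julia
# Illustrative sketch of the generic counterfactual objective:
# prediction loss towards the target y* plus a weighted cost penalty.
# M maps features to predictions; f maps the state s′ to features.
objective(s_prime, M, f, y_star; yloss, cost, λ) =
    yloss(M(f(s_prime)), y_star) + λ * cost(f(s_prime))
```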

References

Altmeyer, Patrick, Giovan Angela, Aleksander Buszydlik, Karol Dobiczek, Arie van Deursen, and Cynthia Liem. 2023. “Endogenous Macrodynamics in Algorithmic Recourse.” In First IEEE Conference on Secure and Trustworthy Machine Learning.

diff --git a/v0.1.32/explanation/generators/probe/index.html b/v0.1.32/explanation/generators/probe/index.html index f223fbe3c..765928d1d 100644 --- a/v0.1.32/explanation/generators/probe/index.html +++ b/v0.1.32/explanation/generators/probe/index.html @@ -9,4 +9,4 @@ opt = Descent(0.01) generator = CounterfactualExplanations.Generators.ProbeGenerator(opt=opt) ce = generate_counterfactual(x, target, counterfactual_data, M, generator, converge_when =:invalidation_rate, invalidation_rate = 0.5, learning_rate = 0.5) -plot(ce)

Choosing different invalidation rates makes the counterfactual more or less robust. The following plot shows the counterfactuals generated for different invalidation rates.

References

Pawelczyk, Martin, Teresa Datta, Johannes van-den-Heuvel, Gjergji Kasneci, and Himabindu Lakkaraju. 2022. “Probabilistically Robust Recourse: Navigating the Trade-Offs Between Costs and Robustness in Algorithmic Recourse.” arXiv Preprint arXiv:2203.06768.

diff --git a/v0.1.32/explanation/generators/revise/index.html b/v0.1.32/explanation/generators/revise/index.html index 3ebab9313..571227c09 100644 --- a/v0.1.32/explanation/generators/revise/index.html +++ b/v0.1.32/explanation/generators/revise/index.html @@ -50,4 +50,4 @@ # Define generator: generator = REVISEGenerator(opt=Flux.Adam(0.5)) # Generate recourse: -ce = generate_counterfactual(x, target, counterfactual_data, M, generator; decision_threshold=γ)

The chart below shows the results:

References

Joshi, Shalmali, Oluwasanmi Koyejo, Warut Vijitbenjaronk, Been Kim, and Joydeep Ghosh. 2019. “Towards Realistic Individual Recourse and Actionable Explanations in Black-Box Decision Making Systems.” https://arxiv.org/abs/1907.09615.

[1] In general, we believe that there may be a trade-off between creating counterfactuals that respect the DGP and counterfactuals that reflect the behaviour of the black-box model in question both accurately and completely.

[2] We believe that there is another potentially crucial disadvantage of relying on a separate generative model: it reallocates the task of learning realistic explanations for the data from the black-box model to the generative model.

diff --git a/v0.1.32/explanation/index.html b/v0.1.32/explanation/index.html index e36a2a33a..6b8889714 100644 --- a/v0.1.32/explanation/index.html +++ b/v0.1.32/explanation/index.html @@ -1,2 +1,2 @@ -Overview · CounterfactualExplanations.jl

Explanation

In this section you will find detailed explanations about the methodology and code.

Explanation clarifies, deepens and broadens the reader’s understanding of a subject.

Diátaxis

In other words, you come here because you are interested in understanding how all of this actually works 🤓.

diff --git a/v0.1.32/explanation/optimisers/jsma/index.html b/v0.1.32/explanation/optimisers/jsma/index.html index 9acab4475..faa4296aa 100644 --- a/v0.1.32/explanation/optimisers/jsma/index.html +++ b/v0.1.32/explanation/optimisers/jsma/index.html @@ -2,4 +2,4 @@ JSMA · CounterfactualExplanations.jl

Jacobian-based Saliency Map Attack

To search counterfactuals, Schut et al. (2021) propose to use a Jacobian-Based Saliency Map Attack (JSMA) inspired by the literature on adversarial attacks. It works by moving in the direction of the most salient feature at a fixed step size in each iteration. Schut et al. (2021) use this optimisation rule in the context of Bayesian classifiers and demonstrate good results in terms of plausibility — how realistic counterfactuals are — and redundancy — how sparse the proposed feature changes are.

JSMADescent

To implement this approach in a reusable manner, we have added JSMA as a Flux optimiser. In particular, we have added a class JSMADescent<:Flux.Optimise.AbstractOptimiser, for which we have overloaded the Flux.Optimise.apply! method. This makes it possible to reuse JSMADescent as an optimiser in composable generators.

The optimiser can be used with any generator as follows:

using CounterfactualExplanations.Generators: JSMADescent
 generator = GenericGenerator() |>
     gen -> @with_optimiser(gen,JSMADescent(;η=0.1))
-ce = generate_counterfactual(x, target, counterfactual_data, M, generator)

The figure below compares the resulting counterfactual search outcome to the corresponding outcome with generic Descent.

plot(p1,p2,size=(1000,400))

Schut, Lisa, Oscar Key, Rory Mc Grath, Luca Costabello, Bogdan Sacaleanu, Yarin Gal, et al. 2021. “Generating Interpretable Counterfactual Explanations By Implicit Minimisation of Epistemic and Aleatoric Uncertainties.” In International Conference on Artificial Intelligence and Statistics, 1756–64. PMLR.

diff --git a/v0.1.32/explanation/optimisers/overview/index.html b/v0.1.32/explanation/optimisers/overview/index.html index f8a7c8894..6613fe302 100644 --- a/v0.1.32/explanation/optimisers/overview/index.html +++ b/v0.1.32/explanation/optimisers/overview/index.html @@ -1,2 +1,2 @@ -Overview · CounterfactualExplanations.jl

Optimisation Rules

Counterfactual search is an optimisation problem. Consequently, the choice of the optimisation rule affects the generated counterfactuals. In the short term, we aim to enable users to choose any of the available Flux optimisers. This has not been sufficiently tested yet, and you may run into issues.
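As a hedged sketch of what this could look like (the `opt` keyword is an assumption and may differ across package versions), a standard Flux optimiser would be passed to a generator at construction:

```julia
using Flux
using CounterfactualExplanations.Generators

# Hypothetical: swap the default rule for Flux's Adam with a 0.05 learning rate.
generator = GenericGenerator(; opt=Flux.Optimise.Adam(0.05))
```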

Custom Optimisation Rules

Flux optimisers are specifically designed for deep learning, and in particular, for learning model parameters. In counterfactual search, the features are the free parameters that we are optimising over. To this end, some custom optimisation rules are necessary to incorporate ideas presented in the literature. In the following, we introduce those rules.


ce = generate_counterfactual(
    x, target, counterfactual_data, M, generator;
    num_counterfactuals=5)
plot(ce)

# Counterfactual search:
generator = GenericGenerator()
ce = generate_counterfactual(x, target, counterfactual_data, M, generator)
plot(ce)

References

Innes, Mike. 2018. “Flux: Elegant Machine Learning with Julia.” Journal of Open Source Software 3 (25): 602.

Overview · CounterfactualExplanations.jl

How-To Guides

In this section, you will find a series of how-to guides that showcase specific use cases of CounterfactualExplanations.jl.

How-to guides are directions that take the reader through the steps required to solve a real-world problem. How-to guides are goal-oriented.

Diátaxis

In other words, you come here because you may have a particular problem in mind, would like to see how it can be solved using counterfactual explanations, and will then most likely head off again 🫡.


author = {Patrick Altmeyer and Arie van Deursen and Cynthia C. S. Liem},
title = {Explaining Black-Box Models through Counterfactuals},
journal = {Proceedings of the JuliaCon Conferences}
}

📚 References

Altmeyer, Patrick, Giovan Angela, Aleksander Buszydlik, Karol Dobiczek, Arie van Deursen, and Cynthia Liem. 2023. “Endogenous Macrodynamics in Algorithmic Recourse.” In First IEEE Conference on Secure and Trustworthy Machine Learning. https://doi.org/10.1109/satml54575.2023.00036.

Antorán, Javier, Umang Bhatt, Tameem Adel, Adrian Weller, and José Miguel Hernández-Lobato. 2020. “Getting a Clue: A Method for Explaining Uncertainty Estimates.” https://arxiv.org/abs/2006.06848.

Joshi, Shalmali, Oluwasanmi Koyejo, Warut Vijitbenjaronk, Been Kim, and Joydeep Ghosh. 2019. “Towards Realistic Individual Recourse and Actionable Explanations in Black-Box Decision Making Systems.” https://arxiv.org/abs/1907.09615.

Kaggle. 2011. “Give Me Some Credit, Improve on the State of the Art in Credit Scoring by Predicting the Probability That Somebody Will Experience Financial Distress in the Next Two Years.” Kaggle. https://www.kaggle.com/c/GiveMeSomeCredit.

Laugel, Thibault, Marie-Jeanne Lesot, Christophe Marsala, Xavier Renard, and Marcin Detyniecki. 2017. “Inverse Classification for Comparison-Based Interpretability in Machine Learning.” https://arxiv.org/abs/1712.08443.

Mothilal, Ramaravind K, Amit Sharma, and Chenhao Tan. 2020. “Explaining Machine Learning Classifiers Through Diverse Counterfactual Explanations.” In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 607–17. https://doi.org/10.1145/3351095.3372850.

Pawelczyk, Martin, Teresa Datta, Johannes van-den-Heuvel, Gjergji Kasneci, and Himabindu Lakkaraju. 2022. “Probabilistically Robust Recourse: Navigating the Trade-Offs Between Costs and Robustness in Algorithmic Recourse.” arXiv Preprint arXiv:2203.06768.

Schut, Lisa, Oscar Key, Rory Mc Grath, Luca Costabello, Bogdan Sacaleanu, Yarin Gal, et al. 2021. “Generating Interpretable Counterfactual Explanations By Implicit Minimisation of Epistemic and Aleatoric Uncertainties.” In International Conference on Artificial Intelligence and Statistics, 1756–64. PMLR.

Tolomei, Gabriele, Fabrizio Silvestri, Andrew Haines, and Mounia Lalmas. 2017. “Interpretable Predictions of Tree-Based Ensembles via Actionable Feature Tweaking.” In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 465–74. https://doi.org/10.1145/3097983.3098039.

Wachter, Sandra, Brent Mittelstadt, and Chris Russell. 2017. “Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR.” Harv. JL & Tech. 31: 841. https://doi.org/10.2139/ssrn.3063289.


CounterfactualExplanations.Generators,
CounterfactualExplanations.Objectives
]
Public = false


1 │ circles  Generic   2.71561
2 │ circles  Greedy    0.596901
3 │ moons    Generic   1.30436
4 │ moons    Greedy    0.742734

10  10  10

      We can also use a helper function to split the data into train and test sets:

train_data, test_data =
    CounterfactualExplanations.DataPreprocessing.train_test_split(counterfactual_data)

ce = generate_counterfactual(x, target, counterfactual_data, M, generator)

      The resulting counterfactual path is shown in the chart below. Since only the first feature can be perturbed, the sample can only move along the horizontal axis.

      plot(ce)

      Figure 1: Counterfactual path with an immutable feature.

<!-- ## Domain constraints

In some cases, we may also want to constrain the domain of some feature. For example, age as a feature is constrained to a range from 0 to some upper bound corresponding perhaps to the average life expectancy of humans. Below, for example, we impose an upper bound of $0.5$ for our two features.

```{.julia}
counterfactual_data.mutability = [:both, :both]
counterfactual_data.domain = [(0, 0.5) for var in counterfactual_data.features_continuous]
```

This results in the counterfactual path shown below: since features are not allowed to be perturbed beyond the upper bound, the resulting counterfactual falls just short of the threshold probability $\gamma$.

```{.julia}
ce = generate_counterfactual(x, target, counterfactual_data, M, generator)
plot(ce)
```
-->

12 │ 4  1  validity    1.0
13 │ 5  1  distance    3.9374
14 │ 5  1  redundancy  [0.0, 0.0, 0.0, 0.0, 0.0]
15 │ 5  1  validity    1.0

      This leads us to our next topic: Performance Benchmarks.


:greedy => GreedyGenerator

      To specify the type of generator you want to use, you can simply instantiate it:

# Search:
generator = GenericGenerator()
ce = generate_counterfactual(x, target, counterfactual_data, M, generator)
plot(ce)

      We generally make an effort to follow the literature as closely as possible when implementing off-the-shelf generators.

      References

      Altmeyer, Patrick, Giovan Angela, Aleksander Buszydlik, Karol Dobiczek, Arie van Deursen, and Cynthia Liem. 2023. “Endogenous Macrodynamics in Algorithmic Recourse.” In First IEEE Conference on Secure and Trustworthy Machine Learning. https://doi.org/10.1109/satml54575.2023.00036.

      Joshi, Shalmali, Oluwasanmi Koyejo, Warut Vijitbenjaronk, Been Kim, and Joydeep Ghosh. 2019. “Towards Realistic Individual Recourse and Actionable Explanations in Black-Box Decision Making Systems.” https://arxiv.org/abs/1907.09615.

      Mothilal, Ramaravind K, Amit Sharma, and Chenhao Tan. 2020. “Explaining Machine Learning Classifiers Through Diverse Counterfactual Explanations.” In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 607–17. https://doi.org/10.1145/3351095.3372850.


Overview · CounterfactualExplanations.jl

      Tutorials

In this section, you will find a series of tutorials that should help you gain a basic understanding of counterfactual explanations and how to generate them in Julia using this package.

      Tutorials are lessons that take the reader by the hand through a series of steps to complete a project of some kind. Tutorials are learning-oriented.

      Diátaxis

      In other words, you come here because you are new to this topic and are looking for a first peek at the methodology and code 🫣.


    dropout = true
)

      The model_params can be supplied to the familiar API call:

      M = fit_model(train_data, :MLP; model_params...)
      FluxModel(Chain(Dense(784 => 32, relu), Dropout(0.25, active=false), Dense(32 => 10)), :classification_multi)

      The model performance on our test set can be evaluated as follows:

      model_evaluation(M, test_data)
      1-element Vector{Float64}:
        0.9136076495599659

      Finally, let’s restore the default training parameters:

      CounterfactualExplanations.reset!(flux_training_params)

      Fitting and tuning MLJ models

Among models from the MLJ library, three models are currently supported:

      mlj_models_catalogue

From these models, the DecisionTreeModel and the RandomForestModel are compatible with the Feature Tweak generator. Support for other generators has not been implemented, since decision trees and random forests are non-differentiable tree-based models and gradient-based generators therefore don't apply to them. Support for generating counterfactuals for the EvoTreeModel has not been implemented yet.
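As an illustrative sketch (assuming `x`, `target` and `counterfactual_data` are set up as in the other tutorials), pairing a tree-based model with the Feature Tweak generator might look like this:

```julia
# Fit a compatible tree-based model, then search with Feature Tweak:
M = CounterfactualExplanations.Models.fit_model(counterfactual_data, :DecisionTree)
generator = FeatureTweakGenerator()
ce = generate_counterfactual(x, target, counterfactual_data, M, generator)
```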

      Tuning MLJ models is very simple. As the first step, let’s reload the dataset:

      n = 500
counterfactual_data = CounterfactualExplanations.Data.load_moons(n)

      Using the usual procedure for fitting models, we can call the following method:

      tree = CounterfactualExplanations.Models.fit_model(counterfactual_data, :DecisionTree)

      However, it’s also possible to tune the DecisionTreeClassifier’s parameters. This can be done using the keyword arguments when calling fit_model() as follows:

      tree = CounterfactualExplanations.Models.fit_model(counterfactual_data, :DecisionTree; max_depth=2, min_samples_leaf=3)

For all supported MLJ models, every tunable parameter is supported as a keyword argument. The tunable parameters for the DecisionTreeModel and the RandomForestModel can be found in the documentation of the DecisionTree.jl package under the Decision Tree Classifier and Random Forest Classifier sections. The tunable parameters for the EvoTreeModel can be found in the documentation of the EvoTrees.jl package under the EvoTreeClassifier section.

      Please note again that generating counterfactuals for the EvoTreeModel is not supported yet.

      References

      Lakshminarayanan, Balaji, Alexander Pritzel, and Charles Blundell. 2016. “Simple and Scalable Predictive Uncertainty Estimation Using Deep Ensembles.” https://arxiv.org/abs/1612.01474.

      LeCun, Yann. 1998. “The MNIST Database of Handwritten Digits.”


Epoch 80
avg_loss(data) = 0.011847609f0
Epoch 100
avg_loss(data) = 0.0072429096f0

      To prepare the fitted model for use with our package, we need to wrap it inside a container. For plain-vanilla models trained in Flux.jl, the corresponding constructor is called FluxModel. There is also a separate constructor called FluxEnsemble, which applies to Deep Ensembles. Deep Ensembles are a popular approach to approximate Bayesian Deep Learning and have been shown to generate good predictive uncertainty estimates (Lakshminarayanan, Pritzel, and Blundell 2016).
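As a hedged sketch (the exact constructor signature may differ), wrapping an ensemble of independently trained networks could look like this:

```julia
using Flux

# Hypothetical: five small networks, each trained separately, form a Deep Ensemble.
nns = [Chain(Dense(2 => 32, relu), Dense(32 => 2)) for _ in 1:5]
M = FluxEnsemble(nns)
```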

      The appropriate API call to wrap our simple network in a container follows below:

      M = FluxModel(nn)
      FluxModel(Chain(Dense(2 => 32, relu), Dropout(0.1, active=false), Dense(32 => 2)), :classification_binary)

      The likelihood function of the output variable is automatically inferred from the data. The generic plot() method can be called on the model and data to visualise the results:

      plot(M, counterfactual_data)

      Our model M is now ready for use with the package.

      References

      Lakshminarayanan, Balaji, Alexander Pritzel, and Charles Blundell. 2016. “Simple and Scalable Predictive Uncertainty Estimation Using Deep Ensembles.” https://arxiv.org/abs/1612.01474.

bmk = benchmark(counterfactual_data; parallelizer=parallelizer)

MPI.Finalize()

      The file can be executed from the command line as follows:

      mpiexecjl --project -n 4 julia -e 'include("docs/src/srcipts/mpi.jl")'
x = select_factual(counterfactual_data, chosen)

      Finally, we generate and visualize the generated counterfactual:

# Search:
generator = WachterGenerator()
ce = generate_counterfactual(x, target, counterfactual_data, M, generator)
plot(ce)


    )
    ces[key] = ce
    plts = [plts..., plot(ce; title=key, colorbar=false)]
end
