From 350cbc2fa33c17db08f684634898bc9d82a929da Mon Sep 17 00:00:00 2001
From: "Documenter.jl"
Date: Wed, 23 Oct 2024 10:59:59 +0000
Subject: [PATCH] build based on 5ff1e81

---
 dev/.documenter-siteinfo.json | 2 +-
 dev/CHANGELOG/index.html | 2 +-
 dev/assets/resources/index.html | 2 +-
 dev/contribute/index.html | 2 +-
 dev/contribute/performance/index.html | 2 +-
 dev/explanation/architecture/index.html | 2 +-
 dev/explanation/categorical/index.html | 2 +-
 .../evaluation/faithfulness/index.html | 2 +-
 .../evaluation/overview/index.html | 2 +-
 .../generators/clap_roar/index.html | 2 +-
 dev/explanation/generators/clue/index.html | 2 +-
 dev/explanation/generators/dice/index.html | 2 +-
 .../generators/feature_tweak/index.html | 2 +-
 dev/explanation/generators/generic/index.html | 2 +-
 .../generators/gravitational/index.html | 2 +-
 dev/explanation/generators/greedy/index.html | 2 +-
 .../generators/growing_spheres/index.html | 2 +-
 dev/explanation/generators/mint/index.html | 2 +-
 .../generators/overview/index.html | 2 +-
 dev/explanation/generators/probe/index.html | 2 +-
 dev/explanation/generators/revise/index.html | 2 +-
 dev/explanation/generators/tcrex/index.html | 2 +-
 dev/explanation/index.html | 2 +-
 dev/explanation/optimisers/jsma/index.html | 2 +-
 .../optimisers/overview/index.html | 2 +-
 dev/extensions/index.html | 2 +-
 dev/extensions/laplace_redux/index.html | 2 +-
 dev/extensions/neurotree/index.html | 2 +-
 .../custom_generators/index.html | 2 +-
 dev/how_to_guides/custom_models/index.html | 2 +-
 dev/how_to_guides/index.html | 2 +-
 dev/index.html | 2 +-
 dev/objects.inv | Bin 9534 -> 9524 bytes
 dev/reference/index.html | 126 +++++++++---
 dev/release-notes/index.html | 2 +-
 dev/search_index.js | 2 +-
 dev/tutorials/benchmarking/index.html | 2 +-
 dev/tutorials/convergence/index.html | 2 +-
 dev/tutorials/data_catalogue/index.html | 2 +-
 dev/tutorials/data_preprocessing/index.html | 2 +-
 dev/tutorials/evaluation/index.html | 2 +-
 dev/tutorials/generators/index.html | 2 +-
 dev/tutorials/index.html | 2 +-
 dev/tutorials/model_catalogue/index.html | 2 +-
 dev/tutorials/models/index.html | 2 +-
 dev/tutorials/parallelization/index.html | 2 +-
 dev/tutorials/simple_example/index.html | 2 +-
 dev/tutorials/whistle_stop/index.html | 2 +-
 48 files changed, 109 insertions(+), 109 deletions(-)

diff --git a/dev/.documenter-siteinfo.json b/dev/.documenter-siteinfo.json
index 0c8ad0700..fd7501098 100644
--- a/dev/.documenter-siteinfo.json
+++ b/dev/.documenter-siteinfo.json
@@ -1 +1 @@
-{"documenter":{"julia_version":"1.10.5","generation_timestamp":"2024-09-30T11:04:18","documenter_version":"1.7.0"}}
\ No newline at end of file
+{"documenter":{"julia_version":"1.11.1","generation_timestamp":"2024-10-23T10:59:36","documenter_version":"1.7.0"}}
\ No newline at end of file
diff --git a/dev/CHANGELOG/index.html b/dev/CHANGELOG/index.html
index f9724ec19..7ab219bbe 100644
--- a/dev/CHANGELOG/index.html
+++ b/dev/CHANGELOG/index.html
@@ -1,2 +1,2 @@
-Changelog · CounterfactualExplanations.jl
+Changelog · CounterfactualExplanations.jl

Changelog

All notable changes to this project will be documented in this file.

The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.

Note: We try to adhere to these practices as of version [v1.1.1].

Version [1.3.4] - 2024-10-22

Changed

  • Fixed a bug in the find_potential_neighbours method.

Version [1.3.3] - 2024-09-30

Changed

  • Fixed a remaining bug in NeuroTreeExt extensions. [#475]

Version [1.3.2] - 2024-09-24

Added

  • Added support for using a random forest as a surrogate model for the T-CREx generator. [#483]

Changed

  • Improved the T-CREx documentation further by bringing example even closer to the example in the paper. [#483]
  • Include citation linking to ICML paper in T-CREx documentation and docstrings. [#480]

Version [1.3.1] - 2024-09-24

Changed

  • Fixed a remaining bug in NeuroTreeExt extensions. [#475]

Version [1.3.0] - 2024-09-16

Changed

  • Fixed bug in NeuroTreeExt extensions. [#475]

Added

  • Added basic support for the T-CREx counterfactual generator. [#473]
  • Added docstrings for package extensions to documentation. [#475]

Version [1.2.0] - 2024-09-10

Added

  • Added documentation for generating counterfactuals consistent with the MINT framework. [#467]
  • Added tests for new evaluation metrics and JEM extension. [#471]
  • Added support for gradient-based causal algorithmic recourse (MINT) as described in Karimi et al. (2020). This incorporates an input encoder that is based on a Structural Causal Model. [#457]
  • Added out-of-the-box support for training joint energy models (JEM). [#454]
  • Added new evaluation metric to measure faithfulness of counterfactual explanations as in Altmeyer et al. (2024). [#454]
  • A tutorial in the documentation ("Explanation" section) explaining the faithfulness metric in detail. [#454]
  • Added support for an energy constraint as in Altmeyer et al. (2024). This is the first step towards adding functionality for ECCCo. [#387]

Changed

  • The fitresult field of Model now takes a concrete Fitresult type, for which some basic methods have been defined. This mutable struct has a field called other that accepts a dictionary (Dict) that can be filled with additional objects. [#454]
  • Regenerated pre-trained model artifacts. [#454]
  • Updated the tutorial on "Handling Data". [#454]

Removed

  • Removed bug in find_potential_neighbours method. [#454]

Version [1.1.6] - 2024-05-19

Removed

  • Removed the call to the Iris function in the test suite because of HTTPS issues. [#452]
  • Removed the mlj_models_catalogue because it served no obvious purpose. In the future, we may instead add meta information to the all_models_catalogue. [#444]

Added

  • New general Model struct that wraps empty concrete types. This adds a more general interface that is still flexible enough by simply using multiple dispatch on the empty concrete types. [#444]
  • A new incompatible(::AbstractGenerator, ::AbstractCounterfactualExplanation) function has been added to avoid running a counterfactual search if the generator is incompatible with any other specification (e.g. the model). [#444]

Changed

  • No longer exporting many of the deprecated functions. [#452]
  • Updated pre-trained model artifacts. [#444]
  • Some function signatures have been deprecated, e.g. NeuroTreeModel to NeuroTree, LaplaceReduxModel to LaplaceNN. [#444]
  • Support for DecisionTree.jl models and the FeatureTweakGenerator have been moved to an extension (DecisionTreeExt). [#444]
  • Updates to the NeuroTreeModels extension to incorporate breaking changes to that package. [#444]
  • No longer running alloc test on Windows. [#441]
  • Slight change to doctests. [#447]

Version [v1.1.5] - 2024-04-30

Added

  • Unit tests: adds a simple performance benchmark to test that, for a small problem, generating a counterfactual using the generic generator takes at most 4700 allocations. Only run on Julia v1.10 and higher. [#436]

Changed

  • The find_potential_neighbours function is now only triggered if one of the penalties of the generator requires access to samples from the target domain. This improves scalability because calling the function can be computationally costly (forward pass). [#436]
  • The target variable encodings are now handled more efficiently. Previously certain tasks were repeated, which was not necessary. [#436]

Removed

  • Removed the assertion checking that the model ever predicts the target value. While this assertion is useful, it is not essential. For large enough models and datasets, this forward pass can be very costly. [#436]
  • Removed redundant distance_from_targets function. [#436]

Version [v1.1.4] - 2024-04-25

Changed

  • Refactors the encodings and decodings such that they are now more streamlined. Instead of conditional statements, encodings are now dispatched on the type of a new unifying data.input_encoder field. [#432]
  • Refactors the check for redundancy. This is now based on the convergence type and done right before the counterfactual search begins, if not redundant. [#432]

Added

  • Added additional unit tests. [#437]

Version [v1.1.3] - 2024-04-17

Added

  • Adds a section on Convergence to the documentation, Changelog.jl functionality and a few doc tests. [#429]

Changed

  • Changes style of taking gradients for the counterfactual search from implicit to explicit. [#430]
  • Removed all implicit imports. [#430]

Removed

  • Removed CUDA.jl dependency, because redundant. [#430]
  • Removed Parameters.jl dependency, because redundant. [#430]

Version [v1.1.2] - 2024-04-16

Changed

  • Replaces the GIF in the README and introduction of docs for a static image.

Version [v1.1.1] - 2024-04-15

Added

  • Added tests for LaplaceRedux extension. Bumped upper compat bound for LaplaceRedux.jl. [#428]

<!-- Links generated by Changelog.jl -->

[#428]: https://github.com/juliatrustworthyai/CounterfactualExplanations.jl/issues/428
[#429]: https://github.com/juliatrustworthyai/CounterfactualExplanations.jl/issues/429

diff --git a/dev/assets/resources/index.html b/dev/assets/resources/index.html index fd2832bb1..fed9b29fa 100644 --- a/dev/assets/resources/index.html +++ b/dev/assets/resources/index.html @@ -1,2 +1,2 @@ -📚 Additional Resources · CounterfactualExplanations.jl
+📚 Additional Resources · CounterfactualExplanations.jl
diff --git a/dev/contribute/index.html b/dev/contribute/index.html index c8131e7f8..bfdbee6e4 100644 --- a/dev/contribute/index.html +++ b/dev/contribute/index.html @@ -1,2 +1,2 @@ -🛠 Contribute · CounterfactualExplanations.jl
+🛠 Contribute · CounterfactualExplanations.jl

🛠 Contribute

Contributions of any kind are very much welcome! Take a look at the list of open issues to see what we are currently working on. If you have an idea for a new feature or want to report a bug, please open a new issue.

Development

If you're looking to contribute code, it may be helpful to check out the Explanation section of the docs.

Testing

Please always make sure to add tests for any new features or changes.

Documentation

If you add new features or change existing ones, please make sure to update the documentation accordingly. The documentation is built using Documenter.jl and is located in the docs/src folder.

Log Changes

As of version 1.1.1, we have tried to be more stringent about logging changes. Please make sure to add a note to the CHANGELOG.md file for any changes you make. It is sufficient to add a note under the Unreleased section.
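
For illustration, a minimal entry under the Unreleased section might look like this (the issue number is purely hypothetical):

Unreleased

Changed

  • Fixed a typo in the convergence documentation. [#000]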

General Pointers

There are also some general pointers for people looking to contribute to any of our Taija packages here.

Please follow the SciML ColPrac guide.

diff --git a/dev/contribute/performance/index.html b/dev/contribute/performance/index.html
index f8a0ac534..60ec619d6 100644
--- a/dev/contribute/performance/index.html
+++ b/dev/contribute/performance/index.html
@@ -12,4 +12,4 @@
 # Search:
 generator = GenericGenerator()
 ce = generate_counterfactual(x, target, counterfactual_data, M, generator)
data_large = TaijaData.load_linearly_separable(100000)
-counterfactual_data_large = DataPreprocessing.CounterfactualData(data_large...)
+counterfactual_data_large = DataPreprocessing.CounterfactualData(data_large...)
@time generate_counterfactual(x, target, counterfactual_data, M, generator)
@time generate_counterfactual(x, target, counterfactual_data_large, M, generator)
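
For more reliable timings than a single @time call, the same comparison could be run with BenchmarkTools.jl (a sketch; this assumes BenchmarkTools is available in the environment):

using BenchmarkTools
# Interpolate globals with `$` so that global-variable access is not part of the measurement:
@benchmark generate_counterfactual($x, $target, $counterfactual_data, $M, $generator)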
diff --git a/dev/explanation/architecture/index.html b/dev/explanation/architecture/index.html index 53f0f8931..fbe93214a 100644 --- a/dev/explanation/architecture/index.html +++ b/dev/explanation/architecture/index.html @@ -1,2 +1,2 @@ -Package Architecture · CounterfactualExplanations.jl
+Package Architecture · CounterfactualExplanations.jl

Package Architecture

The diagram below provides an overview of the package architecture. It is built around two core modules that are designed to be as extensible as possible through dispatch: 1) Models is concerned with making any arbitrary model compatible with the package; 2) Generators is used to implement arbitrary counterfactual search algorithms.[1]

The core function of the package, generate_counterfactual, uses an instance of type AbstractModel produced by the Models module and an instance of type AbstractGenerator produced by the Generators module.
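
A minimal sketch of this workflow, using the linearly separable toy dataset (the specific data and model choices here are purely illustrative):

using CounterfactualExplanations
using TaijaData

# Wrap raw data and fit a model; `M` is an instance of type AbstractModel:
data = CounterfactualData(load_linearly_separable()...)
M = fit_model(data, :Linear)

# Choose a factual observation and a target label:
target = 2
factual = 1
chosen = rand(findall(predict_label(M, data) .== factual))
x = select_factual(data, chosen)

# Instantiate a generator (an AbstractGenerator) and run the search:
generator = GenericGenerator()
ce = generate_counterfactual(x, target, data, M, generator)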

Metapackages from the Taija ecosystem provide additional functionality such as datasets, language interoperability, parallelization, and plotting. The CounterfactualExplanations package is designed to be used in conjunction with these metapackages, but can also be used as a standalone package.

[1] We have made an effort to keep the code base as flexible and extensible as possible, but cannot guarantee at this point that any counterfactual generator can be implemented without further adaptation.

diff --git a/dev/explanation/categorical/index.html b/dev/explanation/categorical/index.html
index cf14a1dab..5e9956a48 100644
--- a/dev/explanation/categorical/index.html
+++ b/dev/explanation/categorical/index.html
@@ -117,4 +117,4 @@
 0.0
 0.0
 1.0
- 1.85
+ 1.85
diff --git a/dev/explanation/evaluation/faithfulness/index.html b/dev/explanation/evaluation/faithfulness/index.html
index ecc1c3a92..984d4e2d3 100644
--- a/dev/explanation/evaluation/faithfulness/index.html
+++ b/dev/explanation/evaluation/faithfulness/index.html
@@ -157,4 +157,4 @@
 title = "ECCo Generator\nplaus.: $(round(plaus, digits=2))\nfaith.: $(round(faith, digits=2))"
 p3 = plot(img, title=title, axis=([], false))
-plot(p1, p2, p3; size=(600, 200), layout=(1, 3), topmargin=15mm)
+plot(p1, p2, p3; size=(600, 200), layout=(1, 3), topmargin=15mm)

References

Altmeyer, Patrick, Mojtaba Farmanbar, Arie van Deursen, and Cynthia CS Liem. 2024. “Faithful Model Explanations Through Energy-Constrained Conformal Counterfactuals.” In Proceedings of the AAAI Conference on Artificial Intelligence, 38 (10): 10829–37.

LeCun, Yann. 1998. “The MNIST Database of Handwritten Digits.” http://yann.lecun.com/exdb/mnist/.

Slack, Dylan, Anna Hilgard, Himabindu Lakkaraju, and Sameer Singh. 2021. “Counterfactual Explanations Can Be Manipulated.” Advances in Neural Information Processing Systems 34.

diff --git a/dev/explanation/evaluation/overview/index.html b/dev/explanation/evaluation/overview/index.html index 034edb18f..fd0c54cf2 100644 --- a/dev/explanation/evaluation/overview/index.html +++ b/dev/explanation/evaluation/overview/index.html @@ -1,2 +1,2 @@ -Overview · CounterfactualExplanations.jl
+Overview · CounterfactualExplanations.jl

Evaluation

Evaluation is an integral part of the counterfactual explanation process: it is important to assess the quality of generated counterfactuals to ensure that they are meaningful and useful. The tutorial on evaluation provides an overview of the available metrics and methods. In this part of the documentation, we dive deeper into specific evaluation metrics and methods.
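
As a minimal sketch, an already generated counterfactual explanation ce can be scored using the evaluate function from the Evaluation module (assuming the default set of measures):

using CounterfactualExplanations.Evaluation: evaluate

# Score `ce` with the module's default measures:
evaluate(ce)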

diff --git a/dev/explanation/generators/clap_roar/index.html b/dev/explanation/generators/clap_roar/index.html
index 60fecfa5f..7f78c15e2 100644
--- a/dev/explanation/generators/clap_roar/index.html
+++ b/dev/explanation/generators/clap_roar/index.html
@@ -3,4 +3,4 @@
 \text{extcost}(f(\mathbf{s}^\prime)) = l(M(f(\mathbf{s}^\prime)),y^\prime)
 \end{aligned}\]

for each counterfactual $k$ where $l$ denotes the loss function used to train $M$. This approach is based on the intuition that (endogenous) model shifts will be triggered by counterfactuals that increase classifier loss (Altmeyer et al. 2023).

Usage

The approach can be used in our package as follows:

generator = ClaPROARGenerator()
 ce = generate_counterfactual(x, target, counterfactual_data, M, generator)
-plot(ce)
+plot(ce)

Comparison to GenericGenerator

The figure below compares the outcome for the GenericGenerator and the ClaPROARGenerator.

References

Altmeyer, Patrick, Giovan Angela, Aleksander Buszydlik, Karol Dobiczek, Arie van Deursen, and Cynthia Liem. 2023. “Endogenous Macrodynamics in Algorithmic Recourse.” In First IEEE Conference on Secure and Trustworthy Machine Learning. https://doi.org/10.1109/satml54575.2023.00036.

Upadhyay, Sohini, Shalmali Joshi, and Himabindu Lakkaraju. 2021. “Towards Robust and Reliable Algorithmic Recourse.” https://arxiv.org/abs/2102.13620.

diff --git a/dev/explanation/generators/clue/index.html b/dev/explanation/generators/clue/index.html
index 333b99364..43b2d1c81 100644
--- a/dev/explanation/generators/clue/index.html
+++ b/dev/explanation/generators/clue/index.html
@@ -7,4 +7,4 @@
 ce = generate_counterfactual(
 x, target, counterfactual_data, M, generator;
 convergence=conv)
-plot(ce)
+plot(ce)

Extra: The CLUE generator can also be applied to a counterfactual that has already been obtained with a different generator. In this case, CLUE can be used to make that counterfactual more robust.

Note: The above documentation is based on the information provided in the CLUE paper. Please refer to the original paper for more detailed explanations and implementation specifics.

References

Antorán, Javier, Umang Bhatt, Tameem Adel, Adrian Weller, and José Miguel Hernández-Lobato. 2020. “Getting a Clue: A Method for Explaining Uncertainty Estimates.” https://arxiv.org/abs/2006.06848.

diff --git a/dev/explanation/generators/dice/index.html b/dev/explanation/generators/dice/index.html
index b365d02d7..7c6e24c60 100644
--- a/dev/explanation/generators/dice/index.html
+++ b/dev/explanation/generators/dice/index.html
@@ -40,4 +40,4 @@
 num_counterfactuals=n_cf, convergence=conv
 )
 )
-end
+end

The figure below shows the resulting counterfactual paths. As expected, the resulting counterfactuals are more dispersed across the feature domain for higher choices of $\lambda_2$.

References

Mothilal, Ramaravind K, Amit Sharma, and Chenhao Tan. 2020. “Explaining Machine Learning Classifiers Through Diverse Counterfactual Explanations.” In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 607–17. https://doi.org/10.1145/3351095.3372850.

[1] With thanks to the respondents on Discourse

diff --git a/dev/explanation/generators/feature_tweak/index.html b/dev/explanation/generators/feature_tweak/index.html
index 02a9e4550..f919b9706 100644
--- a/dev/explanation/generators/feature_tweak/index.html
+++ b/dev/explanation/generators/feature_tweak/index.html
@@ -31,4 +31,4 @@
 colorbar=false,
 )
-display(plot(p1, p2; size=(800, 400)))
+display(plot(p1, p2; size=(800, 400)))

References

Tolomei, Gabriele, Fabrizio Silvestri, Andrew Haines, and Mounia Lalmas. 2017. “Interpretable Predictions of Tree-Based Ensembles via Actionable Feature Tweaking.” In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 465–74. https://doi.org/10.1145/3097983.3098039.

diff --git a/dev/explanation/generators/generic/index.html b/dev/explanation/generators/generic/index.html index c64a98efa..490e3632a 100644 --- a/dev/explanation/generators/generic/index.html +++ b/dev/explanation/generators/generic/index.html @@ -1,4 +1,4 @@ Generic · CounterfactualExplanations.jl

GenericGenerator

We use the term generic to relate to the basic counterfactual generator proposed by Wachter, Mittelstadt, and Russell (2017) with $L1$-norm regularization. There is also a variant of this generator that uses the distance metric proposed in Wachter, Mittelstadt, and Russell (2017), which we call WachterGenerator.

Description

As the term indicates, this approach is simple: it forms the baseline approach for gradient-based counterfactual generators. Wachter, Mittelstadt, and Russell (2017) were among the first to realise that

[…] explanations can, in principle, be offered without opening the “black box.”

— Wachter, Mittelstadt, and Russell (2017)

Gradient descent is performed directly in the feature space. Concerning the cost heuristic, the authors choose to penalize the distance of counterfactuals from the factual value. This is based on the intuitive notion that larger feature perturbations require greater effort.
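
In terms of the general framework laid out in the generator overview, this baseline objective can be sketched as follows, where the cost function is simply the distance of the counterfactual from the factual $x$:

\[\begin{aligned}
\mathbf{s}^\prime &= \arg \min_{\mathbf{s}^\prime \in \mathcal{S}} \left\{ {\text{yloss}(M(f(\mathbf{s}^\prime)),y^*)} + \lambda {\text{dist}(f(\mathbf{s}^\prime),x)} \right\}
\end{aligned}\]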

Usage

The approach can be used in our package as follows:

generator = GenericGenerator()
 ce = generate_counterfactual(x, target, counterfactual_data, M, generator)
-plot(ce)
+plot(ce)

References

Wachter, Sandra, Brent Mittelstadt, and Chris Russell. 2017. “Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR.” Harv. JL & Tech. 31: 841. https://doi.org/10.2139/ssrn.3063289.

diff --git a/dev/explanation/generators/gravitational/index.html b/dev/explanation/generators/gravitational/index.html
index 428fba925..ce8aa2e32 100644
--- a/dev/explanation/generators/gravitational/index.html
+++ b/dev/explanation/generators/gravitational/index.html
@@ -5,4 +5,4 @@
 \text{extcost}(f(\mathbf{s}^\prime)) = \text{dist}(f(\mathbf{s}^\prime),\bar{x}^*)
 \end{aligned}\]

where $\bar{x}^*$ is some sensible point in the target domain, for example, the subsample average $\bar{x}^*=\text{mean}(x)$, $x \in \mathcal{D}_1$.

There is a tradeoff then, between the distance of counterfactuals from their factual value and the chosen point in the target domain. The chart below illustrates how the counterfactual outcome changes as the penalty $\lambda_2$ on the distance to the point in the target domain is increased from left to right (holding the other penalty term constant).
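
A hedged sketch of how such a comparison might be produced, assuming (as for other gradient-based generators in this package) that the λ keyword accepts a vector of penalty strengths, with the second entry governing the gravitational penalty:

# Sweep the penalty on the distance to the target-domain point:
ces = map([0.1, 1.0, 5.0]) do λ₂
    generator = GravitationalGenerator(λ=[0.1, λ₂])
    generate_counterfactual(x, target, counterfactual_data, M, generator)
end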

Usage

The approach can be used in our package as follows:

generator = GravitationalGenerator()
 ce = generate_counterfactual(x, target, counterfactual_data, M, generator)
-display(plot(ce))
+display(plot(ce))

Comparison to GenericGenerator

The figure below compares the outcome for the GenericGenerator and the GravitationalGenerator.

References

Altmeyer, Patrick, Giovan Angela, Aleksander Buszydlik, Karol Dobiczek, Arie van Deursen, and Cynthia Liem. 2023. “Endogenous Macrodynamics in Algorithmic Recourse.” In First IEEE Conference on Secure and Trustworthy Machine Learning. https://doi.org/10.1109/satml54575.2023.00036.

diff --git a/dev/explanation/generators/greedy/index.html b/dev/explanation/generators/greedy/index.html index c28bfc430..0e652bd55 100644 --- a/dev/explanation/generators/greedy/index.html +++ b/dev/explanation/generators/greedy/index.html @@ -2,4 +2,4 @@ Greedy · CounterfactualExplanations.jl

GreedyGenerator

We use the term greedy to describe the counterfactual generator introduced by Schut et al. (2021).

Description

The Greedy generator works under the premise of generating realistic counterfactuals by minimizing predictive uncertainty. Schut et al. (2021) show that for models that incorporate predictive uncertainty in their predictions, maximizing the predictive probability corresponds to minimizing the predictive uncertainty: by construction, the generated counterfactual will therefore be realistic (low epistemic uncertainty) and unambiguous (low aleatoric uncertainty).

For the counterfactual search, Schut et al. (2021) propose using a Jacobian-based Saliency Map Attack (JSMA). It is greedy in the sense that it is an “iterative algorithm that updates the most salient feature, i.e. the feature that has the largest influence on the classification, by $\delta$ at each step” (Schut et al. 2021).

Usage

The approach can be used in our package as follows:

M = fit_model(counterfactual_data, :DeepEnsemble)
 generator = GreedyGenerator()
 ce = generate_counterfactual(x, target, counterfactual_data, M, generator)
-plot(ce)
+plot(ce)

References

Schut, Lisa, Oscar Key, Rory Mc Grath, Luca Costabello, Bogdan Sacaleanu, Yarin Gal, et al. 2021. “Generating Interpretable Counterfactual Explanations By Implicit Minimisation of Epistemic and Aleatoric Uncertainties.” In International Conference on Artificial Intelligence and Statistics, 1756–64. PMLR.

diff --git a/dev/explanation/generators/growing_spheres/index.html b/dev/explanation/generators/growing_spheres/index.html
index b51758371..43f54ceda 100644
--- a/dev/explanation/generators/growing_spheres/index.html
+++ b/dev/explanation/generators/growing_spheres/index.html
@@ -3,4 +3,4 @@
 M = fit_model(counterfactual_data, :DeepEnsemble)
 ce = generate_counterfactual(
 x, target, counterfactual_data, M)
-plot(ce)
+plot(ce)

References

Laugel, Thibault, Marie-Jeanne Lesot, Christophe Marsala, Xavier Renard, and Marcin Detyniecki. 2017. “Inverse Classification for Comparison-Based Interpretability in Machine Learning.” arXiv. https://doi.org/10.48550/arXiv.1712.08443.

diff --git a/dev/explanation/generators/mint/index.html b/dev/explanation/generators/mint/index.html
index e905061bb..eb824972d 100644
--- a/dev/explanation/generators/mint/index.html
+++ b/dev/explanation/generators/mint/index.html
@@ -35,4 +35,4 @@
 data_scm.input_encoder = fit_transformer(data_scm, CausalInference.SCM)
 ce = generate_counterfactual(x, 2, data_scm, M, GenericGenerator(); initialization=:identity)
CounterfactualExplanation
-Convergence: ❌ after 100 steps.
+Convergence: ❌ after 100 steps.
Note

The above documentation is based on the information provided in the MINT paper. Please refer to the original paper for more detailed explanations and implementation specifics.

References

Karimi, Amir-Hossein, Bernhard Schölkopf, and Isabel Valera. 2021. “Algorithmic Recourse: From Counterfactual Explanations to Interventions.” In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 353–62.

diff --git a/dev/explanation/generators/overview/index.html b/dev/explanation/generators/overview/index.html
index 2d7706b71..86f81df22 100644
--- a/dev/explanation/generators/overview/index.html
+++ b/dev/explanation/generators/overview/index.html
@@ -12,4 +12,4 @@
 :generic => GenericGenerator
 :greedy => GreedyGenerator

The following sections provide brief descriptions of all of them.

Gradient-based Counterfactual Generators

At the time of writing, all generators are gradient-based: that is, counterfactuals are searched through gradient descent. In Altmeyer et al. (2023) we lay out a general methodological framework that can be applied to all of these generators:

\[\begin{aligned} \mathbf{s}^\prime &= \arg \min_{\mathbf{s}^\prime \in \mathcal{S}} \left\{ {\text{yloss}(M(f(\mathbf{s}^\prime)),y^*)}+ \lambda {\text{cost}(f(\mathbf{s}^\prime)) } \right\} -\end{aligned} \]
+\end{aligned} \]

“Here $\mathbf{s}^\prime=\left\{s_k^\prime\right\}_K$ is a $K$-dimensional array of counterfactual states and $f: \mathcal{S} \mapsto \mathcal{X}$ maps from the counterfactual state space to the feature space.” (Altmeyer et al. 2023)

For most generators, the state space is the feature space ($f$ is the identity function) and the number of counterfactuals $K$ is one. Latent Space generators instead search counterfactuals in some latent space $\mathcal{S}$. In this case, $f$ corresponds to the decoder part of the generative model, that is the function that maps back from the latent space to inputs.
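
For example (a sketch reusing generators that appear elsewhere in this documentation), switching from feature-space to latent-space search is just a matter of swapping the generator:

# Feature-space search (f is the identity):
ce = generate_counterfactual(x, target, counterfactual_data, M, GenericGenerator())

# Latent-space search (f is the decoder of a generative model):
ce = generate_counterfactual(x, target, counterfactual_data, M, REVISEGenerator())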

References

Altmeyer, Patrick, Giovan Angela, Aleksander Buszydlik, Karol Dobiczek, Arie van Deursen, and Cynthia Liem. 2023. “Endogenous Macrodynamics in Algorithmic Recourse.” In First IEEE Conference on Secure and Trustworthy Machine Learning. https://doi.org/10.1109/satml54575.2023.00036.

diff --git a/dev/explanation/generators/probe/index.html b/dev/explanation/generators/probe/index.html
index 489055250..5836818fb 100644
--- a/dev/explanation/generators/probe/index.html
+++ b/dev/explanation/generators/probe/index.html
@@ -10,4 +10,4 @@
 generator = CounterfactualExplanations.Generators.ProbeGenerator(opt=opt)
 conv = CounterfactualExplanations.Convergence.InvalidationRateConvergence(;invalidation_rate=0.5)
 ce = generate_counterfactual(x, target, counterfactual_data, M, generator, convergence=conv)
-plot(ce)
+plot(ce)

Choosing different invalidation rates makes the counterfactual more or less robust. The following plot shows the counterfactuals generated for different invalidation rates.
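
A sketch of how such a comparison might be generated, reusing the convergence criterion shown above:

# Generate counterfactuals for a range of target invalidation rates:
ces = map([0.1, 0.5, 0.9]) do ir
    conv = CounterfactualExplanations.Convergence.InvalidationRateConvergence(; invalidation_rate=ir)
    generate_counterfactual(x, target, counterfactual_data, M, generator; convergence=conv)
end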

References

Pawelczyk, Martin, Teresa Datta, Johannes van-den-Heuvel, Gjergji Kasneci, and Himabindu Lakkaraju. 2022. “Probabilistically Robust Recourse: Navigating the Trade-Offs Between Costs and Robustness in Algorithmic Recourse.” arXiv Preprint arXiv:2203.06768.

diff --git a/dev/explanation/generators/revise/index.html b/dev/explanation/generators/revise/index.html
index bc75ae3f4..f82d84118 100644
--- a/dev/explanation/generators/revise/index.html
+++ b/dev/explanation/generators/revise/index.html
@@ -52,4 +52,4 @@
 # Define generator:
 generator = REVISEGenerator(opt=Flux.Adam(0.1))
 # Generate recourse:
-ce = generate_counterfactual(x, target, counterfactual_data, M, generator; convergence=conv)
+ce = generate_counterfactual(x, target, counterfactual_data, M, generator; convergence=conv)

The chart below shows the results:

References

Joshi, Shalmali, Oluwasanmi Koyejo, Warut Vijitbenjaronk, Been Kim, and Joydeep Ghosh. 2019. “Towards Realistic Individual Recourse and Actionable Explanations in Black-Box Decision Making Systems.” https://arxiv.org/abs/1907.09615.

[1] In general, we believe that there may be a trade-off between creating counterfactuals that respect the DGP and counterfactuals that reflect the behaviour of the black-box model in question both accurately and completely.

[2] We believe that there is another potentially crucial disadvantage of relying on a separate generative model: it reallocates the task of learning realistic explanations for the data from the black-box model to the generative model.

diff --git a/dev/explanation/generators/tcrex/index.html b/dev/explanation/generators/tcrex/index.html index 5c3381181..59bd812bf 100644 --- a/dev/explanation/generators/tcrex/index.html +++ b/dev/explanation/generators/tcrex/index.html @@ -99,4 +99,4 @@ p4

(g) Local CE example

To generate a local explanation based on the global CE representation, we simply apply the CART decision tree classifier from the previous step to our factual:

optimal_rule = apply_tree(tree, vec(x))
 p5 = deepcopy(p2)
 scatter!(p5, [x[1]], [x[2]], ms=10, color=2+optimal_rule, label="Local CE (move to R$optimal_rule)")
-p5
+p5

References

Bewley, Tom, Salim I. Amoukou, Saumitra Mishra, Daniele Magazzeni, and Manuela Veloso. 2024. “Counterfactual Metarules for Local and Global Recourse.” In Proceedings of the 41st International Conference on Machine Learning, edited by Ruslan Salakhutdinov, Zico Kolter, Katherine Heller, Adrian Weller, Nuria Oliver, Jonathan Scarlett, and Felix Berkenkamp, 235:3707–24. Proceedings of Machine Learning Research. PMLR. https://proceedings.mlr.press/v235/bewley24a.html.

diff --git a/dev/explanation/index.html b/dev/explanation/index.html index 6c5834c49..54da46998 100644 --- a/dev/explanation/index.html +++ b/dev/explanation/index.html @@ -1,2 +1,2 @@ -Overview · CounterfactualExplanations.jl
+Overview · CounterfactualExplanations.jl

Explanation

In this section you will find detailed explanations about the methodology and code.

Explanation clarifies, deepens and broadens the reader’s understanding of a subject.

Diátaxis

In other words, you come here because you are interested in understanding how all of this actually works 🤓.

diff --git a/dev/explanation/optimisers/jsma/index.html b/dev/explanation/optimisers/jsma/index.html index b9ff4c3c0..2c14c56e7 100644 --- a/dev/explanation/optimisers/jsma/index.html +++ b/dev/explanation/optimisers/jsma/index.html @@ -2,4 +2,4 @@ JSMA · CounterfactualExplanations.jl

Jacobian-based Saliency Map Attack

To search counterfactuals, Schut et al. (2021) propose to use a Jacobian-Based Saliency Map Attack (JSMA) inspired by the literature on adversarial attacks. It works by moving in the direction of the most salient feature at a fixed step size in each iteration. Schut et al. (2021) use this optimisation rule in the context of Bayesian classifiers and demonstrate good results in terms of plausibility — how realistic counterfactuals are — and redundancy — how sparse the proposed feature changes are.
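
To make the rule concrete, here is an illustrative sketch of a single JSMA-style update (not the package's exact implementation): compute the gradient of the counterfactual loss, pick the most salient feature and move it by a fixed step $\delta$.

using Zygote: gradient

# One schematic JSMA-style step: update only the most salient feature.
function jsma_step(loss, x, δ)
    g = gradient(loss, x)[1]   # saliency from the gradient of the loss
    i = argmax(abs.(g))        # index of the most salient feature
    x′ = copy(x)
    x′[i] -= δ * sign(g[i])    # fixed-size step for that feature only
    return x′
end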

JSMADescent

To implement this approach in a reusable manner, we have added JSMA as a Flux optimiser. In particular, we have added a class JSMADescent<:Flux.Optimise.AbstractOptimiser, for which we have overloaded the Flux.Optimise.apply! method. This makes it possible to reuse JSMADescent as an optimiser in composable generators.

The optimiser can be used with any generator as follows:

using CounterfactualExplanations.Generators: JSMADescent
 generator = GenericGenerator() |>
     gen -> @with_optimiser(gen,JSMADescent(;η=0.1))
-ce = generate_counterfactual(x, target, counterfactual_data, M, generator)
+ce = generate_counterfactual(x, target, counterfactual_data, M, generator)

The figure below compares the resulting counterfactual search outcome to the corresponding outcome with generic Descent.

plot(p1,p2,size=(1000,400))

Schut, Lisa, Oscar Key, Rory Mc Grath, Luca Costabello, Bogdan Sacaleanu, Yarin Gal, et al. 2021. “Generating Interpretable Counterfactual Explanations By Implicit Minimisation of Epistemic and Aleatoric Uncertainties.” In International Conference on Artificial Intelligence and Statistics, 1756–64. PMLR.

diff --git a/dev/explanation/optimisers/overview/index.html b/dev/explanation/optimisers/overview/index.html index 686ea29af..e81417fe5 100644 --- a/dev/explanation/optimisers/overview/index.html +++ b/dev/explanation/optimisers/overview/index.html @@ -1,2 +1,2 @@ -Overview · CounterfactualExplanations.jl
+Overview · CounterfactualExplanations.jl

Optimisation Rules

Counterfactual search is an optimization problem. Consequently, the choice of the optimisation rule affects the generated counterfactuals. In the short term, we aim to enable users to choose any of the available Flux optimisers. This has not been sufficiently tested yet, and you may run into issues.
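
For example (a sketch, given that this has not been sufficiently tested), a Flux optimiser can be passed to a gradient-based generator via its opt keyword, as done elsewhere in these docs:

using Flux

generator = GenericGenerator(opt=Flux.Adam(0.05))
ce = generate_counterfactual(x, target, counterfactual_data, M, generator)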

Custom Optimisation Rules

Flux optimisers are specifically designed for deep learning, and in particular, for learning model parameters. In counterfactual search, the features are the free parameters that we are optimising over. To this end, some custom optimisation rules are necessary to incorporate ideas presented in the literature. In the following, we introduce those rules.

diff --git a/dev/extensions/index.html b/dev/extensions/index.html index 2fd6bbcca..7a27e200a 100644 --- a/dev/extensions/index.html +++ b/dev/extensions/index.html @@ -1,2 +1,2 @@ -Overview · CounterfactualExplanations.jl
+Overview · CounterfactualExplanations.jl

⛓️ Extensions

In this section, you will find information about package extensions of the CounterfactualExplanations package. Extensions are a relatively new feature of Julia that allows users to conditionally load code based on the presence of other packages. This is useful for creating packages that extend the functionality of other packages, without requiring the user to install the package being extended.
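
As a sketch, simply loading a supported package alongside CounterfactualExplanations activates the corresponding extension (the :NeuroTree model key is assumed here based on the NeuroTree extension documentation):

using CounterfactualExplanations
using NeuroTreeModels  # loading this package triggers the extension

M = fit_model(counterfactual_data, :NeuroTree)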

diff --git a/dev/extensions/laplace_redux/index.html b/dev/extensions/laplace_redux/index.html
index 5d8cb0bb2..0e8252311 100644
--- a/dev/extensions/laplace_redux/index.html
+++ b/dev/extensions/laplace_redux/index.html
@@ -14,4 +14,4 @@
 decision_threshold=0.9, max_iter=100
 )
 ce = generate_counterfactual(x, target, data, M, generator; convergence=conv)
-plot(ce, alpha=0.1)
+plot(ce, alpha=0.1)

References

Daxberger, Erik, Agustinus Kristiadi, Alexander Immer, Runa Eschenhagen, Matthias Bauer, and Philipp Hennig. 2021. “Laplace Redux-Effortless Bayesian Deep Learning.” Advances in Neural Information Processing Systems 34.

Immer, Alexander, Maciej Korzepa, and Matthias Bauer. 2020. “Improving Predictions of Bayesian Neural Networks via Local Linearization.” https://arxiv.org/abs/2008.08400.

Schut, Lisa, Oscar Key, Rory Mc Grath, Luca Costabello, Bogdan Sacaleanu, Yarin Gal, et al. 2021. “Generating Interpretable Counterfactual Explanations By Implicit Minimisation of Epistemic and Aleatoric Uncertainties.” In International Conference on Artificial Intelligence and Statistics, 1756–64. PMLR.

diff --git a/dev/extensions/neurotree/index.html b/dev/extensions/neurotree/index.html
index 14a7fe0c4..4aaa3551c 100644
--- a/dev/extensions/neurotree/index.html
+++ b/dev/extensions/neurotree/index.html
@@ -17,4 +17,4 @@
 decision_threshold=0.9, max_iter=100
 )
 ce = generate_counterfactual(x, target, data, M, generator; convergence=conv)
-plot(ce, alpha=0.1)
+plot(ce, alpha=0.1)

References

Grinsztajn, Léo, Edouard Oyallon, and Gaël Varoquaux. 2022. “Why Do Tree-Based Models Still Outperform Deep Learning on Tabular Data?” https://arxiv.org/abs/2207.08815.

diff --git a/dev/how_to_guides/custom_generators/index.html b/dev/how_to_guides/custom_generators/index.html
index 3144410f2..6b84405d7 100644
--- a/dev/how_to_guides/custom_generators/index.html
+++ b/dev/how_to_guides/custom_generators/index.html
@@ -55,4 +55,4 @@
 x, target, counterfactual_data, M, generator;
 num_counterfactuals=5)
-plot(ce)

+plot(ce)

diff --git a/dev/how_to_guides/custom_models/index.html b/dev/how_to_guides/custom_models/index.html
index 09250d684..df5f9a6a9 100644
--- a/dev/how_to_guides/custom_models/index.html
+++ b/dev/how_to_guides/custom_models/index.html
@@ -51,4 +51,4 @@
 # Counterfactual search:
 generator = GenericGenerator()
 ce = generate_counterfactual(x, target, counterfactual_data, M, generator)
-plot(ce)
+plot(ce)

References

Innes, Mike. 2018. “Flux: Elegant Machine Learning with Julia.” Journal of Open Source Software 3 (25): 602. https://doi.org/10.21105/joss.00602.

diff --git a/dev/how_to_guides/index.html b/dev/how_to_guides/index.html index 4d4860cde..d981f2887 100644 --- a/dev/how_to_guides/index.html +++ b/dev/how_to_guides/index.html @@ -1,2 +1,2 @@ -Overview · CounterfactualExplanations.jl
+Overview · CounterfactualExplanations.jl

How-To Guides

In this section, you will find a series of how-to guides that showcase specific use cases of counterfactual explanations (CE).

How-to guides are directions that take the reader through the steps required to solve a real-world problem. How-to guides are goal-oriented.

Diátaxis

In other words, you come here because you have a particular problem in mind, would like to see how it can be solved using CE, and will then most likely head off again 🫡.

diff --git a/dev/index.html b/dev/index.html
index c9ddc8256..804e59b54 100644
--- a/dev/index.html
+++ b/dev/index.html
@@ -69,4 +69,4 @@
     author = {Patrick Altmeyer and Arie van Deursen and Cynthia C. S. Liem},
     title = {Explaining Black-Box Models through Counterfactuals},
     journal = {Proceedings of the JuliaCon Conferences}
-}

+}

📚 References

Altmeyer, Patrick, Giovan Angela, Aleksander Buszydlik, Karol Dobiczek, Arie van Deursen, and Cynthia CS Liem. 2023. “Endogenous Macrodynamics in Algorithmic Recourse.” In 2023 IEEE Conference on Secure and Trustworthy Machine Learning (SaTML), 418–31. IEEE.

Altmeyer, Patrick, Mojtaba Farmanbar, Arie van Deursen, and Cynthia CS Liem. 2024. “Faithful Model Explanations Through Energy-Constrained Conformal Counterfactuals.” In Proceedings of the AAAI Conference on Artificial Intelligence, 38 (10): 10829–37.

Antorán, Javier, Umang Bhatt, Tameem Adel, Adrian Weller, and José Miguel Hernández-Lobato. 2020. “Getting a Clue: A Method for Explaining Uncertainty Estimates.” https://arxiv.org/abs/2006.06848.

Bewley, Tom, Salim I. Amoukou, Saumitra Mishra, Daniele Magazzeni, and Manuela Veloso. 2024. “Counterfactual Metarules for Local and Global Recourse.” https://arxiv.org/abs/2405.18875.

Joshi, Shalmali, Oluwasanmi Koyejo, Warut Vijitbenjaronk, Been Kim, and Joydeep Ghosh. 2019. “Towards Realistic Individual Recourse and Actionable Explanations in Black-Box Decision Making Systems.” https://arxiv.org/abs/1907.09615.

Kaggle. 2011. “Give Me Some Credit, Improve on the State of the Art in Credit Scoring by Predicting the Probability That Somebody Will Experience Financial Distress in the Next Two Years.” Kaggle. https://www.kaggle.com/c/GiveMeSomeCredit.

Karimi, Amir-Hossein, Julius Von Kügelgen, Bernhard Schölkopf, and Isabel Valera. 2020. “Algorithmic Recourse Under Imperfect Causal Knowledge: A Probabilistic Approach.” https://arxiv.org/abs/2006.06831.

Laugel, Thibault, Marie-Jeanne Lesot, Christophe Marsala, Xavier Renard, and Marcin Detyniecki. 2017. “Inverse Classification for Comparison-Based Interpretability in Machine Learning.” https://arxiv.org/abs/1712.08443.

Mothilal, Ramaravind K, Amit Sharma, and Chenhao Tan. 2020. “Explaining Machine Learning Classifiers Through Diverse Counterfactual Explanations.” In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 607–17. https://doi.org/10.1145/3351095.3372850.

Pawelczyk, Martin, Teresa Datta, Johannes van-den-Heuvel, Gjergji Kasneci, and Himabindu Lakkaraju. 2023. “Probabilistically Robust Recourse: Navigating the Trade-Offs Between Costs and Robustness in Algorithmic Recourse.” https://arxiv.org/abs/2203.06768.

Schut, Lisa, Oscar Key, Rory Mc Grath, Luca Costabello, Bogdan Sacaleanu, Yarin Gal, et al. 2021. “Generating Interpretable Counterfactual Explanations By Implicit Minimisation of Epistemic and Aleatoric Uncertainties.” In International Conference on Artificial Intelligence and Statistics, 1756–64. PMLR.

Tolomei, Gabriele, Fabrizio Silvestri, Andrew Haines, and Mounia Lalmas. 2017. “Interpretable Predictions of Tree-Based Ensembles via Actionable Feature Tweaking.” In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 465–74. https://doi.org/10.1145/3097983.3098039.

Wachter, Sandra, Brent Mittelstadt, and Chris Russell. 2017. “Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR.” Harv. JL & Tech. 31: 841. https://doi.org/10.2139/ssrn.3063289.

diff --git a/dev/objects.inv b/dev/objects.inv
index a55c39b1beea9f30f027155ebb3e185cd98d9f3f..055e41a5880102b9b4a528cdcd78c37a340176f5 100644
GIT binary patch (delta omitted)
diff --git a/dev/reference/index.html b/dev/reference/index.html
index 163c42396..c029b6c48 100644
--- a/dev/reference/index.html
+++ b/dev/reference/index.html
@@ -1,5 +1,5 @@
-🧐 Reference · CounterfactualExplanations.jl

+🧐 Reference · CounterfactualExplanations.jl

Reference

In this reference, you will find a detailed overview of the package API.

Reference guides are technical descriptions of the machinery and how to operate it. Reference material is information-oriented.

Diátaxis

In other words, you come here because you want to take a very close look at the code 🧐.

Content

Exported functions

CounterfactualExplanations.CounterfactualExplanationMethod
function CounterfactualExplanation(;
 	x::AbstractArray,
 	target::RawTargetType,
 	data::CounterfactualData,
@@ -8,14 +8,14 @@
 	num_counterfactuals::Int = 1,
 	initialization::Symbol = :add_perturbation,
     convergence::Union{AbstractConvergence,Symbol}=:decision_threshold,
-)

Outer method to construct a CounterfactualExplanation structure.

source
CounterfactualExplanations.generate_counterfactualMethod
generate_counterfactual(
     x::Base.Iterators.Zip,
     target::RawTargetType,
     data::CounterfactualData,
     M::Models.AbstractModel,
     generator::AbstractGenerator;
     kwargs...,
-)

Overloads the generate_counterfactual method to accept a zip of factuals x and return a vector of counterfactuals.

source
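
For example, a minimal sketch of this overload (assuming a trained model M, a generator, and a feature matrix X whose columns are factuals, set up as in the tutorials):

xs = zip(eachcol(X))   # a Base.Iterators.Zip of factuals
ces = generate_counterfactual(xs, target, counterfactual_data, M, generator)  # vector of counterfactuals
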
CounterfactualExplanations.generate_counterfactualMethod
generate_counterfactual(
     x::Matrix,
     target::RawTargetType,
     data::CounterfactualData,
@@ -57,7 +57,7 @@
 julia> ces = generate_counterfactual.(xs, target, counterfactual_data, M, generator);
 
 julia> converged(ce.convergence, ce)
-true
source
CounterfactualExplanations.generate_counterfactualMethod
generate_counterfactual(
     x::Matrix,
     target::RawTargetType,
     data::DataPreprocessing.CounterfactualData,
@@ -68,33 +68,33 @@
         decision_threshold=(1 / length(data.y_levels)), max_iter=1000
     ),
     kwrgs...,
-)

Overloads the generate_counterfactual for the GrowingSpheresGenerator generator.

source
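
A usage sketch, assuming GrowingSpheresGenerator is constructed with its defaults and x, target, counterfactual_data and M are set up as in the tutorials:

generator = GrowingSpheresGenerator()
ce = generate_counterfactual(x, target, counterfactual_data, M, generator)
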
CounterfactualExplanations.generate_counterfactualMethod
generate_counterfactual(
     x::Vector{<:Matrix},
     target::RawTargetType,
     data::CounterfactualData,
     M::Models.AbstractModel,
     generator::AbstractGenerator;
     kwargs...,
-)

Overloads the generate_counterfactual method to accept a vector of factuals x and return a vector of counterfactuals.

source
CounterfactualExplanations.target_probsFunction
target_probs(
     ce::CounterfactualExplanation,
     x::Union{AbstractArray,Nothing}=nothing,
-)

Returns the predicted probability of the target class for x. If x is nothing, the predicted probability corresponding to the counterfactual value is returned.

source
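
For example, assuming a finished search ce:

target_probs(ce)      # predicted target-class probability of the counterfactual
target_probs(ce, x)   # same, evaluated at an arbitrary input x
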
CounterfactualExplanations.Convergence.DecisionThresholdConvergenceType
DecisionThresholdConvergence

Convergence criterion based on the target class probability threshold. The search stops when the target class probability exceeds the predefined threshold.

Fields

  • decision_threshold::AbstractFloat: The predefined threshold for the target class probability.
  • max_iter::Int: The maximum number of iterations.
  • min_success_rate::AbstractFloat: The minimum success rate for the target class probability.
source
CounterfactualExplanations.Convergence.GeneratorConditionsConvergenceType
GeneratorConditionsConvergence

Convergence criterion for counterfactual explanations based on the generator conditions. The search stops when the gradients of the search objective are below a certain threshold and the generator conditions are satisfied.

Fields

  • decision_threshold::AbstractFloat: The threshold for the decision probability.
  • gradient_tol::AbstractFloat: The tolerance for the gradients of the search objective.
  • max_iter::Int: The maximum number of iterations.
  • min_success_rate::AbstractFloat: The minimum success rate for the generator conditions (across counterfactuals).
source
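
A sketch of constructing one of these criteria, using the keyword constructors that appear elsewhere in these docs (keyword names follow the fields listed above):

conv = DecisionThresholdConvergence(decision_threshold=0.9, max_iter=100)
ce = generate_counterfactual(x, target, counterfactual_data, M, generator; convergence=conv)
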
CounterfactualExplanations.Convergence.convergedFunction
converged(
     convergence::DecisionThresholdConvergence,
     ce::AbstractCounterfactualExplanation,
     x::Union{AbstractArray,Nothing}=nothing,
-)

Checks if the counterfactual search has converged when the convergence criterion is the decision threshold.

source
CounterfactualExplanations.Convergence.convergedFunction
converged(
     convergence::GeneratorConditionsConvergence,
     ce::AbstractCounterfactualExplanation,
     x::Union{AbstractArray,Nothing}=nothing,
-)

Checks if the counterfactual search has converged when the convergence criterion is generator_conditions.

source
CounterfactualExplanations.Convergence.convergedFunction
converged(
    convergence::InvalidationRateConvergence,
    ce::AbstractCounterfactualExplanation,
    x::Union{AbstractArray,Nothing}=nothing,
)

Checks if the counterfactual search has converged when the convergence criterion is invalidation rate.

source
CounterfactualExplanations.Convergence.convergedFunction
converged(
    convergence::MaxIterConvergence,
    ce::AbstractCounterfactualExplanation,
    x::Union{AbstractArray,Nothing}=nothing,
)

Checks if the counterfactual search has converged when the convergence criterion is maximum iterations. This means the counterfactual search will not terminate until the maximum number of iterations has been reached independently of the other convergence criteria.

source
CounterfactualExplanations.Convergence.hinge_lossMethod
hinge_loss(convergence::InvalidationRateConvergence, ce::AbstractCounterfactualExplanation)

Calculates the hinge loss of a counterfactual explanation.

Arguments

  • convergence::InvalidationRateConvergence: The convergence criterion to use.
  • ce::AbstractCounterfactualExplanation: The counterfactual explanation to calculate the hinge loss for.

Returns

The hinge loss of the counterfactual explanation.

source
CounterfactualExplanations.Convergence.invalidation_rateMethod
invalidation_rate(ce::AbstractCounterfactualExplanation)

Calculates the invalidation rate of a counterfactual explanation.

Arguments

  • ce::AbstractCounterfactualExplanation: The counterfactual explanation to calculate the invalidation rate for.
  • kwargs: Additional keyword arguments to pass to the function.

Returns

The invalidation rate of the counterfactual explanation.

source
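
For example, assuming an InvalidationRateConvergence criterion conv and a finished search ce:

invalidation_rate(ce)   # estimated probability that the counterfactual is invalidated
hinge_loss(conv, ce)    # corresponding hinge loss term
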
CounterfactualExplanations.Evaluation.benchmarkMethod
benchmark(
     data::CounterfactualData;
     models::Dict{<:Any,<:Any}=standard_models_catalogue,
     generators::Union{Nothing,Dict{<:Any,<:AbstractGenerator}}=nothing,
@@ -106,7 +106,7 @@
     store_ce::Bool=false,
     parallelizer::Union{Nothing,AbstractParallelizer}=nothing,
     kwrgs...,
-)

Runs the benchmarking exercise as follows:

  1. Randomly choose a factual and target label unless specified.
  2. If no pretrained models are provided, it is assumed that a dictionary of callable model objects is provided (by default using the standard_models_catalogue).
  3. Each of these models is then trained on the data.
  4. For each model separately choose n_individuals randomly from the non-target (factual) class. For each generator create a benchmark as in benchmark(xs::Union{AbstractArray,Base.Iterators.Zip}).
  5. Finally, concatenate the results.

If vertical_splits is specified to an integer, the computations are split vertically into vertical_splits chunks. In this case, the results are stored in a temporary directory and concatenated afterwards.

source
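
A minimal sketch, assuming (as the description above suggests) that n_individuals is exposed as a keyword and leaving all other settings at their defaults:

bmk = benchmark(counterfactual_data; n_individuals=5)
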
CounterfactualExplanations.Evaluation.benchmarkMethod
benchmark(
     x::Union{AbstractArray,Base.Iterators.Zip},
     target::RawTargetType,
     data::CounterfactualData;
@@ -119,19 +119,19 @@
     store_ce::Bool=false,
     parallelizer::Union{Nothing,AbstractParallelizer}=nothing,
     kwrgs...,
-)

First generates counterfactual explanations for factual x, the target and data using each of the provided models and generators. Then generates a Benchmark for the vector of counterfactual explanations as in benchmark(counterfactual_explanations::Vector{CounterfactualExplanation}).

source
CounterfactualExplanations.Evaluation.benchmarkMethod
benchmark(
     counterfactual_explanations::Vector{CounterfactualExplanation};
     meta_data::Union{Nothing,<:Vector{<:Dict}}=nothing,
     measure::Union{Function,Vector{Function}}=default_measures,
     store_ce::Bool=false,
-)

Generates a Benchmark for a vector of counterfactual explanations. Optionally meta_data describing each individual counterfactual explanation can be supplied. This should be a vector of dictionaries of the same length as the vector of counterfactuals. If no meta_data is supplied, it will be automatically inferred. All measure functions are applied to each counterfactual explanation. If store_ce=true, the counterfactual explanations are stored in the benchmark.

source
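
For example, assuming a vector ces of finished counterfactual searches:

bmk = benchmark(ces; store_ce=true)   # keep the explanations alongside the measures
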
CounterfactualExplanations.Evaluation.evaluateFunction
evaluate(
     ce::CounterfactualExplanation;
     measure::Union{Function,Vector{Function}}=default_measures,
     agg::Function=mean,
     report_each::Bool=false,
     output_format::Symbol=:Vector,
     pivot_longer::Bool=true
-)

Just computes evaluation measures for the counterfactual explanation. By default, no meta data is reported. For report_meta=true, meta data is automatically inferred, unless this is overwritten by meta_data. The optional meta_data argument should be a vector of dictionaries of the same length as the vector of counterfactual explanations.

source
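
For example, assuming a finished search ce (validity is one of the default measures):

evaluate(ce)                     # all default measures, aggregated
evaluate(ce; measure=validity)   # a single measure
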
CounterfactualExplanations.Evaluation.validityMethod
validity(ce::CounterfactualExplanation; γ=0.5)

Checks whether the counterfactual search has been successful with respect to the probability threshold γ. In case multiple counterfactuals were generated, the function returns the proportion of successful counterfactuals.

source
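
For example:

validity(ce)          # success at the default threshold γ=0.5
validity(ce; γ=0.9)   # stricter threshold
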
CounterfactualExplanations.DataPreprocessing.CounterfactualDataMethod
CounterfactualData(
     X::AbstractMatrix,
     y::RawOutputArrayType;
     mutability::Union{Vector{Symbol},Nothing}=nothing,
@@ -142,38 +142,38 @@
 )

This outer constructor method prepares features X and labels y to be used with the package. Mutability and domain constraints can be added for the features. The function also accepts arguments that specify which features are categorical and which are continuous. These arguments are currently not used.

Examples

using CounterfactualExplanations.Data
 x, y = toy_data_linear()
 X = hcat(x...)
-counterfactual_data = CounterfactualData(X,y')
source
CounterfactualExplanations.Models.ModelMethod
Model(model, type::AbstractModelType; likelihood::Symbol=:classification_binary)

Outer constructor for Model where the atomic model is defined and assumed to be pre-trained.

source
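
A hedged sketch of this constructor, where nn is an assumed pre-trained Flux chain and MLP() the corresponding model type:

M = Model(nn, MLP(); likelihood=:classification_multi)
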
CounterfactualExplanations.Models.ModelMethod
(M::Model)(data::CounterfactualData, type::Linear; kwargs...)

Constructs a model with one linear layer for the given data. If the output is binary, this corresponds to logistic regression, since model outputs are passed through the sigmoid function. If the output is multi-class, this corresponds to multinomial logistic regression, since model outputs are passed through the softmax function.

source
CounterfactualExplanations.Models.fit_modelFunction
fit_model(
     counterfactual_data::CounterfactualData, model::Symbol=:MLP;
     kwrgs...
-)

Fits one of the available default models to the counterfactual_data. The model argument can be used to specify the desired model. The available values correspond to the keys of the all_models_catalogue dictionary.

source
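
For example, fitting the default multi-layer perceptron:

M = fit_model(counterfactual_data, :MLP)
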
CounterfactualExplanations.Models.fit_modelMethod
fit_model(
     counterfactual_data::CounterfactualData, type::AbstractModelType; kwrgs...
 )

A wrapper function to fit a model to the counterfactual_data for a given type of model.

Arguments

  • counterfactual_data::CounterfactualData: The data to be used for training the model.
  • type::AbstractModelType: The type of model to be trained, e.g., MLP, DecisionTreeModel, etc.

Examples

julia> using CounterfactualExplanations
 
@@ -184,39 +184,39 @@
 julia> data = CounterfactualData(load_linearly_separable()...);
 
 julia> M = fit_model(data, Linear())
+CounterfactualExplanations.Models.Model(Chain(Dense(2 => 2)), :classification_multi, CounterfactualExplanations.Models.Fitresult(Chain(Dense(2 => 2)), Dict{Any, Any}()), Linear())
source
CounterfactualExplanations.Models.model_evaluationMethod
model_evaluation(M::AbstractModel, test_data::CounterfactualData)

Helper function to compute F-Score for AbstractModel on a (test) data set. By default, it computes the accuracy. Any other measure, e.g. from the StatisticalMeasures package, can be passed as an argument. Currently, only measures applicable to classification tasks are supported.

source
CounterfactualExplanations.Models.predict_probaMethod
predict_proba(M::AbstractModel, counterfactual_data::CounterfactualData, X::Union{Nothing,AbstractArray})

Returns the predicted output probabilities for a given model M, data set counterfactual_data and input data X.

source
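
For example, assuming a feature matrix X:

predict_proba(M, counterfactual_data, X)   # one column of probabilities per input
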
CounterfactualExplanations.Models.probsMethod
probs(
     M::Model,
     type::MLJModelType,
     X::AbstractArray,
-)

Overloads the probs method for MLJ models.

Note for developers

Note that currently the underlying MLJ methods (reformat, predict) are incompatible with Zygote's autodiff. For differentiable MLJ models, the probs and logits methods need to be overloaded.

source
CounterfactualExplanations.Generators.FeatureTweakGeneratorMethod
FeatureTweakGenerator(; penalty::Union{Nothing,Function,Vector{Function}}=Objectives.distance_l2, ϵ::AbstractFloat=0.1)

Constructs a new Feature Tweak Generator object.

Uses the L2-norm as the penalty to measure the distance between the counterfactual and the factual. According to the paper by Tolomei et al., another recommended choice for the penalty in addition to the L2-norm is the L0-norm. The L0-norm simply minimizes the number of features that are changed through the tweak.

Arguments

  • penalty::Union{Nothing,Function,Vector{Function}}: The penalty function to use for the generator. Defaults to distance_l2.
  • ϵ::AbstractFloat: The tolerance value for the feature tweaks. Described at length in Tolomei et al. (https://arxiv.org/pdf/1706.06691.pdf). Defaults to 0.1.

Returns

  • generator::FeatureTweakGenerator: A non-gradient-based generator that can be used to generate counterfactuals using the feature tweak method.
source
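
A usage sketch (feature tweak targets tree-based models, so M is assumed to be a fitted tree-based classifier):

generator = FeatureTweakGenerator(ϵ=0.2)
ce = generate_counterfactual(x, target, counterfactual_data, M, generator)
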
CounterfactualExplanations.Generators.GradientBasedGeneratorMethod
GradientBasedGenerator(;
 	loss::Union{Nothing,Function}=nothing,
 	penalty::Penalty=nothing,
 	λ::Union{Nothing,AbstractFloat,Vector{AbstractFloat}}=nothing,
 	latent_space::Bool=false,
 	opt::Flux.Optimise.AbstractOptimiser=Flux.Descent(),
     generative_model_params::NamedTuple=(;),
-)

Default outer constructor for GradientBasedGenerator.

Arguments

  • loss::Union{Nothing,Function}=nothing: The loss function used by the model.
  • penalty::Penalty=nothing: A penalty function for the generator to penalize counterfactuals too far from the original point.
  • λ::Union{Nothing,AbstractFloat,Vector{AbstractFloat}}=nothing: The weight of the penalty function.
  • latent_space::Bool=false: Whether to use the latent space of a generative model to generate counterfactuals.
  • opt::Flux.Optimise.AbstractOptimiser=Flux.Descent(): The optimizer to use for the generator.
  • generative_model_params::NamedTuple: The parameters of the generative model associated with the generator.

Returns

  • generator::GradientBasedGenerator: A gradient-based counterfactual generator.
source
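
A sketch of a custom gradient-based generator; the particular loss, penalty and optimiser choices here are illustrative assumptions:

generator = GradientBasedGenerator(;
    loss=Flux.Losses.logitcrossentropy,
    penalty=Objectives.distance_l2,
    λ=0.1,
    opt=Flux.Descent(0.01),
)
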
CounterfactualExplanations.Generators.conditions_satisfiedMethod
conditions_satisfied(generator::AbstractGradientBasedGenerator, ce::AbstractCounterfactualExplanation)

The default method to check if all the conditions for convergence of the counterfactual search have been satisfied for gradient-based generators. By default, gradient-based search is considered to have converged as soon as the proposed feature changes for all features are smaller than one percent of its standard deviation.

source
CounterfactualExplanations.Generators.generate_perturbationsMethod
generate_perturbations(
CounterfactualExplanations.Generators.hinge_lossMethod
hinge_loss(convergence::AbstractConvergence, ce::AbstractCounterfactualExplanation)

The default hinge loss for any convergence criterion. Can be overridden inside the Convergence module as part of the definition of specific convergence criteria.

source
CounterfactualExplanations.Objectives.distanceMethod
distance(
     ce::AbstractCounterfactualExplanation;
     from::Union{Nothing,AbstractArray}=nothing,
     agg=mean,
     p::Real=1,
     weights::Union{Nothing,AbstractArray}=nothing,
-)

Computes the distance of the counterfactual to the original factual.

source
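
For example:

distance(ce; p=1)   # L1 distance to the factual
distance(ce; p=2)   # L2 (Euclidean) distance
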
Flux.Losses.logitbinarycrossentropyMethod
Flux.Losses.logitbinarycrossentropy(ce::AbstractCounterfactualExplanation)

Simply extends the logitbinarycrossentropy method to work with objects of type AbstractCounterfactualExplanation.

source
Flux.Losses.logitcrossentropyMethod
Flux.Losses.logitcrossentropy(ce::AbstractCounterfactualExplanation)

Simply extends the logitcrossentropy method to work with objects of type AbstractCounterfactualExplanation.

source
Flux.Losses.mseMethod
Flux.Losses.mse(ce::AbstractCounterfactualExplanation)

Simply extends the mse method to work with objects of type AbstractCounterfactualExplanation.

source

Internal functions

CounterfactualExplanations.CREType
CRE <: AbstractCounterfactualExplanation

A Counterfactual Rule Explanation (CRE) is a global explanation for a given target, model M, data and generator.

source
CounterfactualExplanations.JEMType
JEM

Concrete type for joint-energy models from JointEnergyModels. Since JointEnergyModels has an MLJ interface, we subtype the MLJModelType model type.

source
CounterfactualExplanations.LaplaceReduxModelType
LaplaceReduxModel

Concrete type for neural networks with Laplace Approximation from the LaplaceRedux package. Currently subtyping the AbstractFluxNN model type, although this may be changed to MLJ in the future.

source
CounterfactualExplanations.decode_arrayMethod
decode_array(
     data::CounterfactualData,
     dt::CausalInference.SCM,
     x::AbstractArray,
-)

Helper function to decode an array x using a data transform dt::CausalInference.SCM.

source
CounterfactualExplanations.decode_arrayMethod
decode_array(dt::GenerativeModels.AbstractGenerativeModel, x::AbstractArray)

Helper function to decode an array x using a data transform dt::GenerativeModels.AbstractGenerativeModel.

source
CounterfactualExplanations.decode_arrayMethod
decode_array(dt::MultivariateStats.AbstractDimensionalityReduction, x::AbstractArray)

Helper function to decode an array x using a data transform dt::MultivariateStats.AbstractDimensionalityReduction.

source
CounterfactualExplanations.decode_stateFunction

function decode_state( ce::CounterfactualExplanation, x::Union{AbstractArray,Nothing}=nothing, )

Applies all the applicable decoding functions:

  1. If applicable, map the state variable back from the latent space to the feature space.
  2. If and where applicable, inverse-transform features.
  3. Reconstruct all categorical encodings.

Finally, the decoded counterfactual is returned.

source
CounterfactualExplanations.encode_arrayMethod
encode_array(data::CounterfactualData, dt::CausalInference.SCM, x::AbstractArray)

Helper function to encode an array x using a data transform dt::CausalInference.SCM. This is a no-op.

source
CounterfactualExplanations.encode_arrayMethod
encode_array(dt::GenerativeModels.AbstractGenerativeModel, x::AbstractArray)

Helper function to encode an array x using a data transform dt::GenerativeModels.AbstractGenerativeModel.

source
CounterfactualExplanations.encode_arrayMethod
encode_array(dt::MultivariateStats.AbstractDimensionalityReduction, x::AbstractArray)

Helper function to encode an array x using a data transform dt::MultivariateStats.AbstractDimensionalityReduction.

source
CounterfactualExplanations.encode_stateFunction

function encode_state( ce::CounterfactualExplanation, x::Union{AbstractArray,Nothing} = nothing, )

Applies all required encodings to x:

  1. If applicable, it maps x to the latent space learned by the generative model.
  2. If and where applicable, it rescales features.

Finally, it returns the encoded state variable.

source
CounterfactualExplanations.guess_likelihoodMethod
guess_likelihood(y::RawOutputArrayType)

Guess the likelihood based on the scientific type of the output array. Returns a symbol indicating the guessed likelihood and the scientific type of the output array.

source
CounterfactualExplanations.initialize!Method
initialize!(ce::CounterfactualExplanation)

Initializes the counterfactual explanation. This method is called by the constructor. It does the following:

  1. Creates a dictionary to store information about the search.
  2. Initializes the counterfactual state.
  3. Initializes the search path.
  4. Initializes the loss.
source
CounterfactualExplanations.initialize_stateMethod
initialize_state(ce::CounterfactualExplanation)

Initializes the starting point for the factual(s):

  1. If ce.initialization is set to :identity or counterfactuals are searched in a latent space, then nothing is done.
  2. If ce.initialization is set to :add_perturbation, then a random perturbation is added to the factual following Slack (2021): https://arxiv.org/abs/2106.02666. The authors show that this improves adversarial robustness.
source
CounterfactualExplanations.polynomial_decayMethod
polynomial_decay(a::Real, b::Real, decay::Real, t::Int)

Computes the polynomial decay function as in Welling et al. (2011): https://www.stats.ox.ac.uk/~teh/research/compstats/WelTeh2011a.pdf.

source
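
Following the referenced schedule, this amounts to ε_t = a(b + t)^(-decay); for example:

polynomial_decay(10.0, 1.0, 0.9, 100)   # step size at iteration t = 100
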
CounterfactualExplanations.update!Method
update!(ce::CounterfactualExplanation)

An important subroutine that updates the counterfactual explanation. It takes a snapshot of the current counterfactual search state and passes it to the generator. Based on the current state the generator generates perturbations. Various constraints are then applied to the proposed vector of feature perturbations. Finally, the counterfactual search state is updated.

source
CounterfactualExplanations.Evaluation.EnergySamplerMethod
EnergySampler(
     model::AbstractModel,
     𝒟x::Distribution,
     𝒟y::Distribution,
     batch_size::Int=50,
     prob_buffer::AbstractFloat=0.95,
     kwargs...,
)

Constructor for EnergySampler, which is used to sample from the posterior distribution of the model conditioned on y.

Arguments

  • model::AbstractModel: The model to be used for sampling.
  • data::CounterfactualData: The data to be used for sampling.
  • y::Any: The conditioning value.
  • opt::AbstractSamplingRule=ImproperSGLD(): The sampling rule to be used. By default, SGLD is used with a = (2 / std(Uniform())) * std(𝒟x), b = 1 and γ = 0.9.
  • nsamples::Int=100: The number of samples to include in the final empirical posterior distribution.
  • niter_final::Int=1000: The number of iterations for generating samples from the posterior distribution. Typically, this number will be larger than the number of iterations during PMC training.
  • ntransitions::Int=0: The number of transitions for (optionally) warming up the sampler. By default, this is set to 0 and the sampler is not warmed up. For values larger than 0, the sampler is trained through PMC for niter iterations and ntransitions transitions to build a buffer of samples. The buffer is used for posterior sampling.
  • opt_warmup::Union{Nothing,AbstractSamplingRule}=nothing: The sampling rule to be used for warm-up. By default, ImproperSGLD is used with α = (2 / std(Uniform())) * std(𝒟x) and γ = 0.005α.
  • niter::Int=100: The number of iterations for training the sampler through PMC.
  • batch_size::Int=50: The batch size for training the sampler.
  • prob_buffer::AbstractFloat=0.5: The probability of drawing samples from the replay buffer. Smaller values will result in more samples being drawn from the prior and typically lead to better mixing and diversity in the samples.
  • kwargs...: Additional keyword arguments to be passed on to the sampler and PMC.

Returns

  • EnergySampler: An instance of EnergySampler.
source
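A hypothetical usage sketch (M, data and y are assumed to be a trained AbstractModel, a CounterfactualData object and a target label, respectively; the exact positional arguments are given by the signature above):

sampler = EnergySampler(M, data, y; nsamples=100, batch_size=50)
X̂ = rand(sampler, 50)    # draw 50 samples from the conditional posterior (see Base.rand below)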
Base.randFunction
Base.rand(sampler::EnergySampler, n::Int=100; retrain=false)

Overloads the rand method to randomly draw n samples from EnergySampler. If from_posterior is true, the samples are drawn from the posterior distribution. Otherwise, the samples are generated from the model conditioned on the target value using a single chain (see generate_posterior_samples).

Arguments

  • sampler::EnergySampler: The EnergySampler object to be used for sampling.
  • n::Int=100: The number of samples to draw.
  • from_posterior::Bool=true: Whether to draw samples from the posterior distribution.
  • niter::Int=500: The number of iterations for generating samples through Monte Carlo sampling (single chain).

Returns

  • AbstractArray: The samples.
source
Base.vcatMethod
Base.vcat(bmk1::Benchmark, bmk2::Benchmark)

Vertically concatenates two Benchmark objects.

source
CounterfactualExplanations.Evaluation.compute_measureMethod
compute_measure(ce::CounterfactualExplanation, measure::Function, agg::Function)

Computes a single measure for a counterfactual explanation. The measure is applied to the counterfactual explanation ce and aggregated using the aggregation function agg.

source
CounterfactualExplanations.Evaluation.define_priorMethod
define_prior(
     data::CounterfactualData;
     𝒟x::Union{Nothing,Distribution}=nothing,
     𝒟y::Union{Nothing,Distribution}=nothing,
     n_std::Int=3,
)

Defines the prior for the data. The prior sampling space is a uniform distribution with bounds given by the mean and standard deviation of the data, extended by n_std standard deviations.

Arguments

  • data::CounterfactualData: The data to be used for defining the prior sampling space.
  • n_std::Int=3: The number of standard deviations to extend the bounds.

Returns

  • Uniform: The uniform distribution defining the prior sampling space.
source
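Schematically, the prior box can be thought of as follows (a sketch only, assuming X is the feature matrix of data; the package may compute bounds feature-wise or jointly):

using Distributions, Statistics

box_prior(X; n_std=3) = Uniform(mean(X) - n_std * std(X), mean(X) + n_std * std(X))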
CounterfactualExplanations.Evaluation.distance_from_posteriorMethod
distance_from_posterior(ce::AbstractCounterfactualExplanation)

Computes the distance from the counterfactual to generated conditional samples. The distance is computed as the mean distance from the counterfactual to the samples drawn from the posterior distribution of the model. By default, the cosine distance is used.

Arguments

  • ce::AbstractCounterfactualExplanation: The counterfactual explanation object.
  • nsamples::Int=1000: The number of samples to draw.
  • from_posterior::Bool=true: Whether to draw samples from the posterior distribution.
  • agg: The aggregation function to use for computing the distance.
  • choose_lowest_energy::Bool=true: Whether to choose the samples with the lowest energy.
  • choose_random::Bool=false: Whether to choose random samples.
  • nmin::Int=25: The minimum number of samples to choose.
  • p::Int=1: The norm to use for computing the distance.
  • cosine::Bool=true: Whether to use the cosine distance.
  • kwargs...: Additional keyword arguments to be passed on to the EnergySampler.

Returns

  • AbstractFloat: The distance from the counterfactual to the samples.
source
CounterfactualExplanations.Evaluation.faithfulnessMethod
faithfulness(
     ce::CounterfactualExplanation,
     fun::typeof(Objectives.distance_from_target);
     λ::AbstractFloat=1.0,
     kwrgs...,
)

Computes the faithfulness of a counterfactual explanation based on the cosine similarity between the counterfactual and samples drawn from the model posterior through SGLD (see distance_from_posterior).

source
CounterfactualExplanations.Evaluation.generate_posterior_samplesFunction
generate_posterior_samples(
     e::EnergySampler, n::Int=1000; niter::Int=1000, kwargs...
)

Generates n samples from the posterior distribution of the model conditioned on the target value y. The samples are generated through (Persistent) Monte Carlo sampling using the EnergySampler object. If the replay buffer is not empty, the initial samples are drawn from the buffer.

Note that the batch size of the sampler is set to round(Int, n / 100) by default for sampling. This is to ensure that the samples are drawn independently from the posterior distribution. It also helps to avoid vanishing gradients.

The chain is run persistently until n samples are generated. The number of transitions is set to ceil(Int, n / batch_size). Once the chain has been run, the last n samples from the replay buffer are returned.

Arguments

  • e::EnergySampler: The EnergySampler object to be used for sampling.
  • n::Int=100: The number of samples to generate.
  • batch_size::Int=round(Int, n / 100): The batch size for sampling.
  • niter::Int=1000: The number of iterations for generating samples from the posterior distribution.
  • kwargs...: Additional keyword arguments to be passed on to the sampler.

Returns

  • AbstractArray: The generated samples.
source
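Worked example of the defaults described above, for n = 1000 requested samples:

n = 1000
batch_size = round(Int, n / 100)         # 10
ntransitions = ceil(Int, n / batch_size) # 100 persistent transitions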
CounterfactualExplanations.Evaluation.get_lowest_energy_sampleMethod
get_lowest_energy_sample(sampler::EnergySampler; n::Int=5)

Chooses the samples with the lowest energy (i.e. highest probability) from EnergySampler.

Arguments

  • sampler::EnergySampler: The EnergySampler object to be used for sampling.
  • n::Int=5: The number of samples to choose.

Returns

  • AbstractArray: The samples with the lowest energy.
source
CounterfactualExplanations.Evaluation.get_sampler!Method
get_sampler!(ce::AbstractCounterfactualExplanation; kwargs...)

Gets the EnergySampler object from the counterfactual explanation. If the sampler is not found, it is constructed and stored in the counterfactual explanation object.

source
CounterfactualExplanations.Evaluation.plausibilityMethod
plausibility(
     ce::CounterfactualExplanation,
     fun::typeof(Objectives.distance_from_target);
     K=nothing,
     kwrgs...,
)

Computes the plausibility of a counterfactual explanation based on the cosine similarity between the counterfactual and samples drawn from the target distribution.

source
CounterfactualExplanations.Evaluation.to_dataframeMethod
evaluate_dataframe(
     ce::CounterfactualExplanation,
     measure::Vector{Function},
     agg::Function,
     report_each::Bool,
     pivot_longer::Bool,
     store_ce::Bool,
)

Evaluates a counterfactual explanation and returns a dataframe of evaluation measures.

source
CounterfactualExplanations.Evaluation.warmup!Method
warmup!(
     e::EnergySampler,
     y::Int;
     niter::Int=20,
     ntransitions::Int=100,
     kwargs...,
)

Warms up the EnergySampler to the underlying model for conditioning value y. Specifically, this entails running PMC for niter iterations and ntransitions transitions to build a buffer of samples. The buffer is used for posterior sampling.

Arguments

  • e::EnergySampler: The EnergySampler object to be trained.
  • y::Int: The conditioning value.
  • opt::Union{Nothing,AbstractSamplingRule}: The sampling rule to be used. By default, ImproperSGLD is used with α = 2 * std(Uniform(𝒟x)) and γ = 0.005α.
  • niter::Int=20: The number of iterations for training the sampler through PMC.
  • ntransitions::Int=100: The number of transitions for training the sampler. In each transition, the sampler is updated with a mini-batch of data. Data is either drawn from the replay buffer or reinitialized from the prior.
  • kwargs...: Additional keyword arguments to be passed on to the sampler and PMC.

Returns

  • EnergySampler: The trained EnergySampler.
source
CounterfactualExplanations.DataPreprocessing.InputTransformerType
InputTransformer

Abstract type for data transformers. This can be any of the following:

  • StatsBase.AbstractDataTransform: A data transformation object from the StatsBase package.
  • MultivariateStats.AbstractDimensionalityReduction: A dimensionality reduction object from the MultivariateStats package.
  • GenerativeModels.AbstractGenerativeModel: A generative model object from the GenerativeModels module.
source
CounterfactualExplanations.DataPreprocessing.convert_to_1dMethod
convert_to_1d(y::Matrix, y_levels::AbstractArray)

Helper function to convert a one-hot encoded matrix to a vector of labels. This is necessary because MLJ models require the labels to be represented as a vector, but the synthetic datasets in this package hold the labels in one-hot encoded form.

Arguments

  • y::Matrix: The one-hot encoded matrix.
  • y_levels::AbstractArray: The levels of the categorical variable.

Returns

  • labels: A vector of labels.
source
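A minimal sketch of the conversion (hypothetical helper; it assumes classes are stored along the first dimension, with observations as columns):

one_hot_to_labels(y, y_levels) = [y_levels[argmax(col)] for col in eachcol(y)]

y = [1 0 0; 0 1 1]                    # 2 classes × 3 observations
one_hot_to_labels(y, ["cat", "dog"])  # ["cat", "dog", "dog"]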
CounterfactualExplanations.DataPreprocessing.preprocess_data_for_mljMethod
preprocess_data_for_mlj(data::CounterfactualData)

Helper function to preprocess data::CounterfactualData for MLJ models.

Arguments

  • data::CounterfactualData: The data to be preprocessed.

Returns

  • (df_x, y): A tuple containing the preprocessed data, with df_x being a DataFrame object and y being a categorical vector.

Example

X, y = preprocess_data_for_mlj(data)

source
CounterfactualExplanations.DataPreprocessing.train_test_splitMethod
train_test_split(data::CounterfactualData; test_size=0.2, keep_class_ratio=false)

Splits the data into train and test sets.

Arguments

  • data::CounterfactualData: The data to be preprocessed.
  • test_size=0.2: Proportion of the data to be used for testing.
  • keep_class_ratio=false: Decides whether to sample equally from each class, or keep their relative size.

Returns

  • (train_data::CounterfactualData, test_data::CounterfactualData): A tuple containing the train and test splits.

Example

train, test = train_test_split(data; test_size=0.1, keep_class_ratio=true)

source
CounterfactualExplanations.Models.FitresultType
Fitresult

A struct to hold the results of fitting a model.

Fields

  • fitresult: The result of fitting the model to the data. This object should be callable on new data.
  • other::Dict: A dictionary to hold any other relevant information.
source
Base.randFunction

Random.rand(encoder::Encoder, x, device=cpu)

Draws random samples from the latent distribution.

source
CounterfactualExplanations.Generators.feature_selection!Method
feature_selection!(ce::AbstractCounterfactualExplanation)

Perform feature selection to find the dimension with the closest (but not equal) values between the ce.x (factual) and ce.s′ (counterfactual) arrays.

Arguments

  • ce::AbstractCounterfactualExplanation: An instance of the AbstractCounterfactualExplanation type representing the counterfactual explanation.

Returns

  • nothing

The function iteratively modifies the ce.s′ counterfactual array by updating its elements to match the corresponding elements in the ce.x factual array, one dimension at a time, until the predicted label of the modified ce.s′ matches the predicted label of the ce.x array.

source
CounterfactualExplanations.Generators.find_closest_dimensionMethod
find_closest_dimension(factual, counterfactual)

Find the dimension with the closest (but not equal) values between the factual and counterfactual arrays.

Arguments

  • factual: The factual array.
  • counterfactual: The counterfactual array.

Returns

  • closest_dimension: The index of the dimension with the closest values.

The function iterates over the indices of the factual array and calculates the absolute difference between the corresponding elements in the factual and counterfactual arrays. It returns the index of the dimension with the smallest difference, excluding dimensions where the values in factual and counterfactual are equal.

source
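A minimal sketch of the selection rule (hypothetical helper, not the package's exact implementation):

function closest_dimension_sketch(factual, counterfactual)
    diffs = abs.(factual .- counterfactual)
    diffs[factual .== counterfactual] .= Inf  # exclude already-equal dimensions
    return argmin(diffs)
end

closest_dimension_sketch([1.0, 2.0, 3.0], [1.0, 2.5, 4.0])  # returns 2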
CounterfactualExplanations.Generators.find_counterfactualMethod
find_counterfactual(model, factual_class, counterfactual_data, counterfactual_candidates)

Find the first counterfactual index by predicting labels.

Arguments

  • model: The fitted model used for prediction.
  • target_class: Expected target class.
  • counterfactual_data: Data required for counterfactual generation.
  • counterfactual_candidates: The array of counterfactual candidates.

Returns

  • counterfactual: The index of the first counterfactual found.
source
CounterfactualExplanations.Generators.growing_spheres_generation!Method
growing_spheres_generation(ce::AbstractCounterfactualExplanation)

Generate counterfactual candidates using the growing spheres generation algorithm.

Arguments

  • ce::AbstractCounterfactualExplanation: An instance of the AbstractCounterfactualExplanation type representing the counterfactual explanation.

Returns

  • nothing

This function applies the growing spheres generation algorithm to generate counterfactual candidates. It starts by generating random points uniformly on a sphere, gradually reducing the search space until no counterfactuals are found. Then it expands the search space until at least one counterfactual is found or the maximum number of iterations is reached.

The algorithm iteratively generates counterfactual candidates and predicts their labels using the model stored in ce.M. It checks if any of the predicted labels are different from the factual class. The process of reducing the search space involves halving the search radius, while the process of expanding the search space involves increasing the search radius.

source
CounterfactualExplanations.Generators.hMethod
h(generator::AbstractGenerator, penalty::Function, ce::AbstractCounterfactualExplanation)

Overloads the h function for the case where a single penalty function is provided.

source
CounterfactualExplanations.Generators.hMethod
h(generator::AbstractGenerator, penalty::Tuple, ce::AbstractCounterfactualExplanation)

Overloads the h function for the case where a single penalty function is provided with additional keyword arguments.

source
CounterfactualExplanations.Generators.hMethod
h(generator::AbstractGenerator, penalty::Tuple, ce::AbstractCounterfactualExplanation)

Overloads the h function for the case where a single penalty function is provided with additional keyword arguments.

source
CounterfactualExplanations.Generators.hyper_sphere_coordinatesMethod
hyper_sphere_coordinates(n_search_samples::Int, instance::Vector{Float64}, low::Int, high::Int; p_norm::Int=2)

Generates candidate counterfactuals using the growing spheres method based on hyper-sphere coordinates.

The implementation follows the Random Point Picking over a sphere algorithm described in the paper: "Learning Model-Agnostic Counterfactual Explanations for Tabular Data" by Pawelczyk, Broelemann & Kasneci (2020), presented at The Web Conference 2020 (WWW). It ensures that points are sampled uniformly at random using insights from: http://mathworld.wolfram.com/HyperspherePointPicking.html

The growing spheres method is originally proposed in the paper: "Comparison-based Inverse Classification for Interpretability in Machine Learning" by Thibaut Laugel et al. (2018), presented at the International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems (2018).

Arguments

  • n_search_samples::Int: The number of search samples (int > 0).
  • instance::AbstractArray: The input point array.
  • low::AbstractFloat: The lower bound (float >= 0, l < h).
  • high::AbstractFloat: The upper bound (float >= 0, h > l).
  • p_norm::Integer: The norm parameter (int >= 1).

Returns

  • candidate_counterfactuals::Array: An array of candidate counterfactuals.
source
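For intuition, a sketch of uniform point picking on the hyper-annulus around instance for the p_norm = 2 case (Muller method; variable names are illustrative and the package additionally handles other norms):

using LinearAlgebra

function sample_annulus(n, instance, low, high)
    d = length(instance)
    Z = randn(d, n)                                         # isotropic Gaussian directions
    Z ./= mapslices(norm, Z; dims=1)                        # project onto the unit sphere
    r = (rand(n) .* (high^d - low^d) .+ low^d) .^ (1 / d)   # radii uniform in volume
    return instance .+ Z .* r'
end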
CounterfactualExplanations.Generators.incompatibleMethod
incompatible(AbstractGenerator, AbstractCounterfactualExplanation)

Checks if the generator is incompatible with any of the additional specifications for the counterfactual explanations. By default, generators are assumed to be compatible.

source
CounterfactualExplanations.Generators.propose_stateMethod
propose_state(
     ::Models.IsDifferentiable,
     generator::AbstractGradientBasedGenerator,
     ce::AbstractCounterfactualExplanation,
)

Proposes new state based on backpropagation for gradient-based generators and differentiable models.

source
CounterfactualExplanations.Generators.∂hMethod
∂h(generator::AbstractGradientBasedGenerator, ce::AbstractCounterfactualExplanation)

The default method to compute the gradient of the complexity penalty at the current counterfactual state for gradient-based generators. It assumes that Zygote.jl has gradient access.

If the penalty is not provided, it returns 0.0. By default, Zygote never works out the gradient for constants and instead returns 'nothing', so we need to add a manual step to override this behaviour. See here: https://discourse.julialang.org/t/zygote-gradient/26715.

source
CounterfactualExplanations.Generators.∂hMethod
∂h(generator::AbstractGradientBasedGenerator, ce::AbstractCounterfactualExplanation)

The default method to compute the gradient of the complexity penalty at the current counterfactual state for gradient-based generators. It assumes that Zygote.jl has gradient access.

If the penalty is not provided, it returns 0.0. By default, Zygote never works out the gradient for constants and instead returns 'nothing', so we need to add a manual step to override this behaviour. See here: https://discourse.julialang.org/t/zygote-gradient/26715.

source
CounterfactualExplanations.Generators.∂ℓMethod
∂ℓ(
     generator::AbstractGradientBasedGenerator,
     ce::AbstractCounterfactualExplanation,
)

The default method to compute the gradient of the loss function at the current counterfactual state for gradient-based generators. It assumes that Zygote.jl has gradient access.

source
CounterfactualExplanations.Generators.∇Method
∇(
     generator::AbstractGradientBasedGenerator,
     ce::AbstractCounterfactualExplanation,
)

The default method to compute the gradient of the counterfactual search objective for gradient-based generators. It simply computes the weighted sum over partial derivatives. It assumes that Zygote.jl has gradient access. If the counterfactual is being generated using Probe, the hinge loss is added to the gradient.

source
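Schematically (a sketch, not the package's exact code; penalty weights are folded into ∂h):

∇(generator, ce) ≈ ∂ℓ(generator, ce) .+ ∂h(generator, ce)   # Probe additionally adds the hinge-loss gradient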
CounterfactualExplanations.Objectives.distance_from_targetMethod
distance_from_target(
     ce::AbstractCounterfactualExplanation;
     K::Int=50
)

Computes the distance of the counterfactual from samples in the target manifold. If choose_randomly is true, the function will randomly sample K neighbours from the target manifold. Otherwise, it will compute the pairwise distances and select the K closest neighbours.

Arguments

  • ce::AbstractCounterfactualExplanation: The counterfactual explanation.
  • K::Int=50: The number of neighbours to sample.
  • choose_randomly::Bool=true: Whether to sample neighbours randomly.
  • kwrgs...: Additional keyword arguments for the distance function.

Returns

  • Δ::AbstractFloat: The distance from the counterfactual to the target manifold.
source
CounterfactualExplanations.Objectives.energyMethod
energy(M::AbstractModel, x::AbstractArray, t::Int)

Computes the energy of the model at a given state as in Altmeyer et al. (2024): https://scholar.google.com/scholar?cluster=3697701546144846732&hl=en&as_sdt=0,5.

source
CounterfactualExplanations.Objectives.energy_constraintMethod
energy_constraint(
     ce::AbstractCounterfactualExplanation;
     agg=mean,
     reg_strength::AbstractFloat=0.0,
     decay::AbstractFloat=0.9,
     kwargs...,
)

Computes the energy constraint for the counterfactual explanation as in Altmeyer et al. (2024): https://scholar.google.com/scholar?cluster=3697701546144846732&hl=en&as_sdt=0,5. The energy constraint is a regularization term that penalizes the energy of the counterfactuals. The energy is computed as the negative logit of the target class.

Arguments

  • ce::AbstractCounterfactualExplanation: The counterfactual explanation.
  • agg::Function=mean: The aggregation function (only applicable in case num_counterfactuals > 1). Default is mean.
  • reg_strength::AbstractFloat=0.0: The regularization strength.
  • decay::AbstractFloat=0.9: The decay rate for the polynomial decay function (defaults to 0.9). Parameter a is set to 1.0 / ce.generator.opt.eta, such that the initial step size is equal to 1.0, not accounting for b. Parameter b is set to round(Int, max_steps / 20), where max_steps is the maximum number of iterations.
  • kwargs...: Additional keyword arguments.

Returns

  • ℒ::AbstractFloat: The energy constraint.
source
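Worked example of the decay parameterisation described above, for a generator with step size eta = 0.1 and max_steps = 100:

a = 1.0 / 0.1             # 10.0, so that the initial step size a * eta equals 1.0
b = round(Int, 100 / 20)  # 5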

Extensions

CounterfactualExplanations.Models.ModelMethod
(M::Models.Model)(
     data::CounterfactualData,
     type::CounterfactualExplanations.DecisionTreeModel;
     kwargs...,
)

Constructs a decision tree for the given data. This method is used internally when a decision-tree model is constructed to be trained from scratch (i.e. no pre-trained model is supplied by the user).

source
DecisionTreeExt.calculate_deltaMethod
calculate_delta(ce::AbstractCounterfactualExplanation, penalty::Vector{Function})

Calculates the penalty for the proposed feature tweak.

Arguments

  • ce::AbstractCounterfactualExplanation: The counterfactual explanation object.

Returns

  • delta::Float64: The calculated penalty for the proposed feature tweak.
source
DecisionTreeExt.classify_prototypesMethod
classify_prototypes(prototypes, rule_assignments, bounds)

Builds the second tree model using the given prototypes as inputs and their corresponding rule_assignments as labels. Split thresholds are restricted to the bounds, which can be computed using partition_bounds(rules). For details see Bewley et al. (2024) [arXiv, PMLR].

source
DecisionTreeExt.creMethod
cre(rules, x, X)

Computes the counterfactual rule explanations (CRE) for a given point $x$ and a set of $rules$, where the $rules$ correspond to the set of maximal-valid rules for some given target. For details see Bewley et al. (2024) [arXiv, PMLR].

source
DecisionTreeExt.esatisfactory_instanceMethod
esatisfactory_instance(generator::FeatureTweakGenerator, x::AbstractArray, paths::Dict{String, Dict{String, Any}})

Returns an epsilon-satisfactory counterfactual for x based on the paths provided.

Arguments

  • generator::FeatureTweakGenerator: The feature tweak generator.
  • x::AbstractArray: The factual instance.
  • paths::Dict{String, Dict{String, Any}}: A list of paths to the leaves of the tree to be used for tweaking the feature.

Returns

  • esatisfactory::AbstractArray: The epsilon-satisfactory instance.

Example

esatisfactory = esatisfactory_instance(generator, x, paths) # returns an epsilon-satisfactory counterfactual for x based on the paths provided

source
DecisionTreeExt.extract_leaf_rulesMethod
extract_leaf_rules(root::DT.Root)

Extracts leaf decision rules (i.e. hyperrectangles) from a decision tree (root). For a decision tree with $L$ leaves this results in $L$ hyperrectangles. The rules are returned as a vector of tuples containing 2-element tuples, where each 2-element tuple stores the lower and upper bound imposed by the given rule for a given feature. For details see Bewley et al. (2024) [arXiv, PMLR].

source
DecisionTreeExt.extract_rulesMethod
extract_rules(root::DT.Root)

Extracts decision rules (i.e. hyperrectangles) from a decision tree (root). For a decision tree with $L$ leaves this results in $2L-1$ hyperrectangles. The rules are returned as a vector of vectors of 2-element tuples, where each tuple stores the lower and upper bound imposed by the given rule for a given feature. For details see Bewley et al. (2024) [arXiv, PMLR].

source
DecisionTreeExt.get_individual_classifiersMethod
get_individual_classifiers(M::Model)

Returns the individual classifiers in the forest. If the input is a decision tree, the method returns the decision tree itself inside an array.

Arguments

  • M::Model: The model selected by the user.

Returns

  • classifiers::AbstractArray: An array of individual classifiers in the forest.
source
DecisionTreeExt.max_validMethod
max_valid(rules, X, fx, target, τ)

Returns the maximal-valid rules for a given target and accuracy threshold τ. For details see Bewley et al. (2024) [arXiv, PMLR].

source
DecisionTreeExt.prototypeMethod
prototype(rule, X; pick_arbitrary::Bool=true)

Picks an arbitrary point $x^C \in X$ (i.e. prototype) from the subset of $X$ that is contained by rule $R_i$. If pick_arbitrary is set to false, the prototype is instead computed as the average across all samples. For details see Bewley et al. (2024) [arXiv, PMLR].

source
DecisionTreeExt.rule_accuracyMethod
rule_accuracy(rule, X, fx, target)

Computes the accuracy of the rule on the data X for predicted outputs fx and the target. Accuracy is defined as the fraction of points contained by the rule, for which predicted values match the target. For details see Bewley et al. (2024) [arXiv, PMLR].

source
DecisionTreeExt.rule_costMethod
rule_cost(rule, x, X)

Computes the cost for $x$ to be contained by rule $R_i$, where cost is defined as rule_changes(rule, x) - rule_feasibility(rule, X). For details see Bewley et al. (2024) [arXiv, PMLR].

source
DecisionTreeExt.rule_feasibilityMethod
rule_feasibility(rule, X)

Computes the feasibility of a rule $R_i$ for a given dataset. Feasibility is defined as the fraction of the data points that satisfy the rule. For details see Bewley et al. (2024) [arXiv, PMLR].

source
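A minimal sketch of feasibility, representing a rule as a vector of (lower, upper) bounds per feature (hypothetical layout; see extract_rules above for the package's actual representation):

using Statistics

rule_contains(rule, x) = all(lb <= xi <= ub for ((lb, ub), xi) in zip(rule, x))
rule_feasibility_sketch(rule, X) = mean(rule_contains(rule, x) for x in eachcol(X))

rule = [(-1.0, 1.0), (0.0, 2.0)]
X = [0.5 3.0; 1.0 1.0]              # two 2-dimensional points as columns
rule_feasibility_sketch(rule, X)    # 0.5: only the first column satisfies the rule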
DecisionTreeExt.search_pathFunction
search_path(tree::Union{DT.Leaf, DT.Node}, target::RawTargetType, path::AbstractArray)

Return a path index list with the inequality symbols, thresholds and feature indices.

Arguments

  • tree::Union{DT.Leaf, DT.Node}: The root node of a decision tree.
  • target::RawTargetType: The target class.
  • path::AbstractArray: A list containing the paths found thus far.

Returns

  • paths::AbstractArray: A list of paths to the leaves of the tree to be used for tweaking the feature.

Example

paths = search_path(tree, target) # returns a list of paths to the leaves of the tree to be used for tweaking the feature

source
CounterfactualExplanations.JEMMethod
CounterfactualExplanations.JEM(
     model::JointEnergyModels.JointEnergyClassifier; likelihood::Symbol=:classification_multi
)

Outer constructor for a joint energy model (JEM) from JointEnergyModels.jl.

source
CounterfactualExplanations.Models.logitsMethod
Models.logits(M::JEM, X::AbstractArray)

Calculates the logit scores output by the model M for the input data X.

Arguments

  • M::JEM: The model selected by the user. Must be a model from the MLJ library.
  • X::AbstractArray: The feature vector for which the logit scores are calculated.

Returns

  • logits::Matrix: A matrix of logits for each output class for each data point in X.

Example

logits = Models.logits(M, x) # calculates the logit scores for each output class for the data point x

source
CounterfactualExplanations.Models.trainMethod
train(M::JEM, data::CounterfactualData; kwargs...)

Fits the model M to the data in the CounterfactualData object. This method is not called by the user directly.

Arguments

  • M::JEM: The wrapper for a JEM model.
  • data::CounterfactualData: The CounterfactualData object containing the data to be used for training the model.

Returns

  • M::JEM: The fitted JEM model.
source
CounterfactualExplanations.LaplaceReduxModelMethod
CounterfactualExplanations.LaplaceReduxModel(
     model::LaplaceRedux.Laplace; likelihood::Symbol=:classification_binary
)

Outer constructor for a neural network with Laplace Approximation from LaplaceRedux.jl.

source
CounterfactualExplanations.Models.trainMethod
train(M::LaplaceReduxModel, data::CounterfactualData; kwargs...)

Fits the model M to the data in the CounterfactualData object. This method is not called by the user directly.

Arguments

  • M::LaplaceReduxModel: The wrapper for a LaplaceReduxModel model.
  • data::CounterfactualData: The CounterfactualData object containing the data to be used for training the model.

Returns

  • M::LaplaceReduxModel: The fitted LaplaceReduxModel model.
source
CounterfactualExplanations.NeuroTreeModelMethod
CounterfactualExplanations.NeuroTreeModel(
     model::AtomicNeuroTree; likelihood::Symbol=:classification_binary
)

Outer constructor for a differentiable tree-based model from NeuroTreeModels.jl.

source
CounterfactualExplanations.Models.logitsMethod
Models.logits(M::NeuroTreeModel, X::AbstractArray)

Calculates the logit scores output by the model M for the input data X.

Arguments

  • M::NeuroTreeModel: The model selected by the user. Must be a model from the MLJ library.
  • X::AbstractArray: The feature vector for which the logit scores are calculated.

Returns

  • logits::Matrix: A matrix of logits for each output class for each data point in X.

Example

logits = Models.logits(M, x) # calculates the logit scores for each output class for the data point x

source
CounterfactualExplanations.Models.trainMethod
train(M::NeuroTreeModel, data::CounterfactualData; kwargs...)

Fits the model M to the data in the CounterfactualData object. This method is not called by the user directly.

Arguments

  • M::NeuroTreeModel: The wrapper for a NeuroTree model.
  • data::CounterfactualData: The CounterfactualData object containing the data to be used for training the model.

Returns

  • M::NeuroTreeModel: The fitted NeuroTree model.
source
CounterfactualExplanations.Models.probsMethod
Models.probs(M::NeuroTreeModel, X::AbstractArray)

Overloads the probs method for NeuroTree models.
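
Example

Mirroring the logits example above (a hedged usage sketch):

probabilities = Models.probs(M, x) # class probabilities for the data point x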

source
diff --git a/dev/release-notes/index.html b/dev/release-notes/index.html index 07b62e31b..af56dca26 100644 --- a/dev/release-notes/index.html +++ b/dev/release-notes/index.html @@ -1,2 +1,2 @@ -Release Notes · CounterfactualExplanations.jl

+Release Notes · CounterfactualExplanations.jl

Changelog

All notable changes to this project will be documented in this file.

The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.

Note: We try to adhere to these practices as of version v1.1.1.

Version [1.3.4] - 2024-10-22

Changed

  • Fixed a bug in the find_potential_neighbours method.

Version [1.3.3] - 2024-09-30

Changed

  • Fixed a remaining bug in NeuroTreeExt extensions. #475

Version [1.3.2] - 2024-09-24

Added

  • Added support for using a random forest as a surrogate model for the T-CREx generator. #483

Changed

  • Improved the T-CREx documentation further by bringing the example even closer to the example in the paper. #483
  • Include citation linking to ICML paper in T-CREx documentation and docstrings. #480

Version [1.3.1] - 2024-09-24

Changed

  • Fixed a remaining bug in NeuroTreeExt extensions. #475

Version [1.3.0] - 2024-09-16

Changed

  • Fixed bug in NeuroTreeExt extensions. #475

Added

  • Added basic support for the T-CREx counterfactual generator. #473
  • Added docstrings for package extensions to documentation. #475

Version [1.2.0] - 2024-09-10

Added

  • Added documentation for generating counterfactuals consistent with the MINT framework. #467
  • Added tests for new evaluation metrics and JEM extension. #471
  • Added support for gradient-based causal algorithmic recourse (MINT) as described in Karimi et al. (2020). This incorporates an input encoder that is based on a Structural Causal Model. #457
  • Added out-of-the-box support for training joint energy models (JEM). #454
  • Added new evaluation metric to measure faithfulness of counterfactual explanations as in Altmeyer et al. (2024). #454
  • A tutorial in the documentation ("Explanation" section) explaining the faithfulness metric in detail. #454
  • Added support for an energy constraint as in Altmeyer et al. (2024). This is the first step towards adding functionality for ECCCo. #387

Changed

  • The fitresult field of Model now takes a concrete Fitresult type, for which some basic methods have been defined. This mutable struct has a field called other that accepts a dictionary Dict that can be filled with additional objects. #454
  • Regenerated pre-trained model artifacts. #454
  • Updated the tutorial on "Handling Data". #454

Removed

  • Removed a bug in the find_potential_neighbours method. #454

Version [1.1.6] - 2024-05-19

Removed

  • Removed the call to the Iris function in the test suite because of HTTPS issues. #452
  • Removed the mlj_models_catalogue because it served no obvious purpose. In the future, we may instead add meta information to the all_models_catalogue. #444

Added

  • New general Model struct that wraps empty concrete types. This adds a more general interface that is still flexible enough by simply using multiple dispatch on the empty concrete types. #444
  • A new incompatible(::AbstractGenerator, ::AbstractCounterfactualExplanation) function has been added to avoid running a counterfactual search if the generator is incompatible with any other specification (e.g. the model). #444

Changed

  • No longer exporting many of the deprecated functions. #452
  • Updated pre-trained model artifacts. #444
  • Some function signatures have been deprecated, e.g. NeuroTreeModel to NeuroTree, LaplaceReduxModel to LaplaceNN. #444
  • Support for DecisionTree.jl models and the FeatureTweakGenerator have been moved to an extension (DecisionTreeExt). #444
  • Updates to NeuroTreeModels extensions to incorporate breaking changes to the package. #444
  • No longer running alloc test on Windows. #441
  • Slight change to doctests. #447

Version v1.1.5 - 2024-04-30

Added

  • Unit tests: adds a simple performance benchmark to test that for a small problem, generating a counterfactual using the generic generator takes at most 4700 allocations. Only run on julia v1.10 and higher. #436

Changed

  • The find_potential_neighbours function is now only triggered if one of the penalties of the generator requires access to samples from the target domain. This improves scalability because calling the function can be computationally costly (forward pass). #436
  • The target variable encodings are now handled more efficiently. Previously certain tasks were repeated, which was not necessary. #436

Removed

  • Removed the assertion checking that the model ever predicts the target value. While this assertion is useful, it is not essential. For large enough models and datasets, this forward pass can be very costly. #436
  • Removed redundant distance_from_targets function. #436

Version v1.1.4 - 2024-04-25

Changed

  • Refactors the encodings and decodings such that they are now more streamlined. Instead of conditional statements, encodings are now dispatched on the type of a new unifying data.input_encoder field. #432
  • Refactors the check for redundancy. This is now based on the convergence type and done right before the counterfactual search begins, if not redundant. #432

Added

  • Added additional unit tests. #437

Version v1.1.3 - 2024-04-17

Added

  • Adds a section on Convergence to the documentation, Changelog.jl functionality and a few doc tests. #429

Changed

  • Changes style of taking gradients for the counterfactual search from implicit to explicit. #430
  • Removed all implicit imports. #430

Removed

  • Removed CUDA.jl dependency, because redundant. #430
  • Removed Parameters.jl dependency, because redundant. #430

Version v1.1.2 - 2024-04-16

Changed

  • Replaces the GIF in the README and introduction of the docs with a static image.

Version v1.1.1 - 2024-04-15

Added

  • Added tests for LaplaceRedux extension. Bumped upper compat bound for LaplaceRedux.jl. #428
diff --git a/dev/search_index.js b/dev/search_index.js index ffb9275e2..13783c540 100644 --- a/dev/search_index.js +++ b/dev/search_index.js @@ -1,3 +1,3 @@ var documenterSearchIndex = {"docs": -[{"location":"tutorials/simple_example/","page":"Simple Example","title":"Simple Example","text":"CurrentModule = CounterfactualExplanations ","category":"page"},{"location":"tutorials/simple_example/#Simple-Example","page":"Simple Example","title":"Simple Example","text":"","category":"section"},{"location":"tutorials/simple_example/","page":"Simple Example","title":"Simple Example","text":"In this tutorial, we will go through a simple example involving synthetic data and a generic counterfactual generator.","category":"page"},{"location":"tutorials/simple_example/#Data-and-Classifier","page":"Simple Example","title":"Data and Classifier","text":"","category":"section"},{"location":"tutorials/simple_example/","page":"Simple Example","title":"Simple Example","text":"Below we generate some linearly separable data and fit a simple MLP classifier with batch normalization to it. For more information on generating data and models, refer to the Handling Data and Handling Models tutorials, respectively.","category":"page"},{"location":"tutorials/simple_example/","page":"Simple Example","title":"Simple Example","text":"# Counterfactual data and model:\nflux_training_params.batchsize = 10\ndata = TaijaData.load_linearly_separable()\ncounterfactual_data = DataPreprocessing.CounterfactualData(data...)\ncounterfactual_data.standardize = true\nM = fit_model(counterfactual_data, :MLP, batch_norm=true)","category":"page"},{"location":"tutorials/simple_example/#Counterfactual-Search","page":"Simple Example","title":"Counterfactual Search","text":"","category":"section"},{"location":"tutorials/simple_example/","page":"Simple Example","title":"Simple Example","text":"Next, we determine a target and factual class for our counterfactual search and select a random factual instance to explain.","category":"page"},{"location":"tutorials/simple_example/","page":"Simple Example","title":"Simple Example","text":"target = 2\nfactual = 1\nchosen = rand(findall(predict_label(M, counterfactual_data) .== factual))\nx = select_factual(counterfactual_data, chosen)","category":"page"},{"location":"tutorials/simple_example/","page":"Simple Example","title":"Simple Example","text":"Finally, we generate and visualize the counterfactual:","category":"page"},{"location":"tutorials/simple_example/","page":"Simple Example","title":"Simple Example","text":"# Search:\ngenerator = WachterGenerator()\nce = generate_counterfactual(x, target, counterfactual_data, M, generator)\nplot(ce)","category":"page"},{"location":"tutorials/simple_example/","page":"Simple Example","title":"Simple Example","text":"(Image: )","category":"page"},{"location":"how_to_guides/custom_generators/","page":"... add custom generators","title":"... add custom generators","text":"CurrentModule = CounterfactualExplanations ","category":"page"},{"location":"how_to_guides/custom_generators/#How-to-add-Custom-Generators","page":"... add custom generators","title":"How to add Custom Generators","text":"","category":"section"},{"location":"how_to_guides/custom_generators/","page":"... add custom generators","title":"... add custom generators","text":"As we will see in this short tutorial, building custom counterfactual generators is straightforward. 
We hope that this will facilitate contributions from the community.","category":"page"},{"location":"how_to_guides/custom_generators/#Generic-generator-with-dropout","page":"... add custom generators","title":"Generic generator with dropout","text":"","category":"section"},{"location":"how_to_guides/custom_generators/","page":"... add custom generators","title":"... add custom generators","text":"To illustrate how custom generators can be implemented, we will consider a simple example of a generator that extends the functionality of our GenericGenerator. We have noted elsewhere that the effectiveness of counterfactual explanations depends to some degree on the quality of the fitted model. Another, perhaps trivial, thing to note is that counterfactual explanations are not unique: there are potentially many valid counterfactual paths. One interesting (or silly) idea following these two observations might be to introduce some form of regularization in the counterfactual search. For example, we could use dropout to randomly switch features on and off in each iteration. Without dwelling further on the usefulness of this idea, let us see how it can be implemented.","category":"page"},{"location":"how_to_guides/custom_generators/","page":"... add custom generators","title":"... add custom generators","text":"The first code chunk below implements two important steps: 1) create an abstract subtype of the AbstractGradientBasedGenerator and 2) create a constructor similar to the GenericGenerator, but with one additional field for the probability of dropout.","category":"page"},{"location":"how_to_guides/custom_generators/","page":"... add custom generators","title":"... add custom generators","text":"# Abstract subtype:\nabstract type AbstractDropoutGenerator <: AbstractGradientBasedGenerator end\n\n# Constructor:\nstruct DropoutGenerator <: AbstractDropoutGenerator\n loss::Function # loss function\n penalty::Function\n λ::AbstractFloat # strength of penalty\n latent_space::Bool\n opt::Any # optimizer\n generative_model_params::NamedTuple\n p_dropout::AbstractFloat # dropout rate\nend\n\n# Instantiate:\ngenerator = DropoutGenerator(\n Flux.logitbinarycrossentropy,\n CounterfactualExplanations.Objectives.distance_l1,\n 0.1,\n false,\n Flux.Optimise.Descent(0.1),\n (;),\n 0.5,\n)","category":"page"},{"location":"how_to_guides/custom_generators/","page":"... add custom generators","title":"... add custom generators","text":"Next, we define how feature perturbations are generated for our dropout generator: in particular, we extend the relevant function through a method that implements the dropout logic.","category":"page"},{"location":"how_to_guides/custom_generators/","page":"... add custom generators","title":"... add custom generators","text":"using CounterfactualExplanations.Generators\nusing StatsBase\nfunction Generators.generate_perturbations(\n generator::AbstractDropoutGenerator, \n ce::CounterfactualExplanation\n)\n s′ = deepcopy(ce.s′)\n new_s′ = Generators.propose_state(generator, ce)\n Δs′ = new_s′ - s′ # gradient step\n\n # Dropout:\n set_to_zero = sample(\n 1:length(Δs′),\n Int(round(generator.p_dropout*length(Δs′))),\n replace=false\n )\n Δs′[set_to_zero] .= 0\n return Δs′\nend","category":"page"},{"location":"how_to_guides/custom_generators/","page":"... add custom generators","title":"... add custom generators","text":"Finally, we proceed to generate counterfactuals in the same way we always do:","category":"page"},{"location":"how_to_guides/custom_generators/","page":"... 
add custom generators","title":"... add custom generators","text":"# Data and Classifier:\nM = fit_model(counterfactual_data, :DeepEnsemble)\n\n# Factual and Target:\nyhat = predict_label(M, counterfactual_data)\ntarget = 2 # target label\ncandidates = findall(vec(yhat) .!= target)\nchosen = rand(candidates)\nx = select_factual(counterfactual_data, chosen)\n\n# Counterfactual search:\nce = generate_counterfactual(\n x, target, counterfactual_data, M, generator;\n num_counterfactuals=5)\n\nplot(ce)","category":"page"},{"location":"how_to_guides/custom_generators/","page":"... add custom generators","title":"... add custom generators","text":"(Image: )","category":"page"},{"location":"extensions/laplace_redux/","page":"LaplaceRedux","title":"LaplaceRedux","text":"CurrentModule = CounterfactualExplanations ","category":"page"},{"location":"extensions/laplace_redux/#[LaplaceRedux.jl](https://github.com/JuliaTrustworthyAI/LaplaceRedux.jl)","page":"LaplaceRedux","title":"LaplaceRedux.jl","text":"","category":"section"},{"location":"extensions/laplace_redux/","page":"LaplaceRedux","title":"LaplaceRedux","text":"LaplaceRedux.jl is one of Taija’s own packages that provides a framework for Effortless Bayesian Deep Learning through Laplace Approximation for Flux.jl neural networks. The methodology was first proposed by Immer, Korzepa, and Bauer (2020) and implemented in Python by Daxberger et al. (2021). This is relevant to the work on counterfactual explanations (CE), because research has shown that counterfactual explanations for Bayesian models are typically more plausible, because Bayesian models are able to capture the uncertainty in the data (Schut et al. 2021).","category":"page"},{"location":"extensions/laplace_redux/","page":"LaplaceRedux","title":"LaplaceRedux","text":"tip: Read More\nTo learn more about Laplace Redux, head over to the official documentation.","category":"page"},{"location":"extensions/laplace_redux/#Example","page":"LaplaceRedux","title":"Example","text":"","category":"section"},{"location":"extensions/laplace_redux/","page":"LaplaceRedux","title":"LaplaceRedux","text":"The extension will be loaded automatically when loading the LaplaceRedux package (assuming the CounterfactualExplanations package is also loaded).","category":"page"},{"location":"extensions/laplace_redux/","page":"LaplaceRedux","title":"LaplaceRedux","text":"using LaplaceRedux","category":"page"},{"location":"extensions/laplace_redux/","page":"LaplaceRedux","title":"LaplaceRedux","text":"Next, we will fit a neural network with Laplace Approximation to the moons dataset using our standard package API for doing so. 
By default, the Bayesian prior is optimized through empirical Bayes using the LaplaceRedux package.","category":"page"},{"location":"extensions/laplace_redux/","page":"LaplaceRedux","title":"LaplaceRedux","text":"# Fit model to data:\ndata = CounterfactualData(load_moons()...)\nM = fit_model(data, :LaplaceRedux; n_hidden=16)","category":"page"},{"location":"extensions/laplace_redux/","page":"LaplaceRedux","title":"LaplaceRedux","text":"LaplaceReduxExt.LaplaceNN(Laplace(Chain(Dense(2 => 16, relu), Dense(16 => 2)), :classification, :all, nothing, :full, LaplaceRedux.Curvature.GGN(Chain(Dense(2 => 16, relu), Dense(16 => 2)), :classification, Flux.Losses.logitcrossentropy, Array{Float32}[[-1.3098596 0.59241515; 0.91760206 0.02950162; … ; -0.018356863 0.12850936; -0.5381665 -0.7872097], [-0.2581085, -0.90997887, -0.5418944, -0.23735572, 0.81020063, -0.3033359, -0.47902864, -0.6432098, -0.038013518, 0.028280666, 0.009903266, -0.8796683, 0.41090682, 0.011093224, -0.1580453, 0.7911349], [3.092321 -2.4660816 … -0.3446268 -1.465249; -2.9468734 3.167357 … 0.31758657 1.7140366], [-0.3107697, 0.31076983]], 1.0, :all, nothing), 1.0, 0.0, Float32[-1.3098596, 0.91760206, 0.5239727, -1.1579771, -0.851813, -1.9411169, 0.47409698, 0.6679365, 0.8944433, 0.663116 … -0.3172857, 0.15530388, 1.3264753, -0.3506721, -0.3446268, 0.31758657, -1.465249, 1.7140366, -0.3107697, 0.31076983], [0.10530027048093525 0.0 … 0.0 0.0; 0.0 0.10530027048093525 … 0.0 0.0; … ; 0.0 0.0 … 0.10530027048093525 0.0; 0.0 0.0 … 0.0 0.10530027048093525], [0.10066431429751965 0.0 … -0.030656783425475176 0.030656334963944154; 0.0 20.93513766443357 … -2.3185940232360736 2.3185965484008193; … ; -0.030656783425475176 -2.3185940232360736 … 1.0101450999063672 -1.0101448118057204; 0.030656334963944154 2.3185965484008193 … -1.0101448118057204 1.0101451389641771], [1.1006643142975197 0.0 … -0.030656783425475176 0.030656334963944154; 0.0 21.93513766443357 … -2.3185940232360736 2.3185965484008193; … ; -0.030656783425475176 -2.3185940232360736 … 2.0101450999063672 -1.0101448118057204; 0.030656334963944154 2.3185965484008193 … -1.0101448118057204 2.010145138964177], [0.9412600568016627 0.003106911671721699 … 0.003743740333409532 -0.003743452315572739; 0.003106912946573237 0.6539263732691709 … 0.0030385955287734246 -0.0030390041204196414; … ; 0.0037437406323562283 0.003038591829991259 … 0.9624905710233649 0.03750911813897676; -0.0037434526145225856 -0.0030390004216833593 … 0.03750911813898124 0.9624905774453485], 82, 250, 2, 997.8087484836578), :classification_multi)","category":"page"},{"location":"extensions/laplace_redux/","page":"LaplaceRedux","title":"LaplaceRedux","text":"Finally, we select a factual instance and generate a counterfactual explanation for it using the generic gradient-based CE method.","category":"page"},{"location":"extensions/laplace_redux/","page":"LaplaceRedux","title":"LaplaceRedux","text":"# Select a factual instance:\ntarget = 1\nfactual = 0\nchosen = rand(findall(predict_label(M, data) .== factual))\nx = select_factual(data, chosen)\n\n# Generate counterfactual explanation:\nη = 0.01\ngenerator = GenericGenerator(; opt=Descent(η), λ=0.01)\nconv = CounterfactualExplanations.Convergence.DecisionThresholdConvergence(;\n decision_threshold=0.9, max_iter=100\n)\nce = generate_counterfactual(x, target, data, M, generator; convergence=conv)\nplot(ce, alpha=0.1)","category":"page"},{"location":"extensions/laplace_redux/","page":"LaplaceRedux","title":"LaplaceRedux","text":"(Image: 
)","category":"page"},{"location":"extensions/laplace_redux/#References","page":"LaplaceRedux","title":"References","text":"","category":"section"},{"location":"extensions/laplace_redux/","page":"LaplaceRedux","title":"LaplaceRedux","text":"Daxberger, Erik, Agustinus Kristiadi, Alexander Immer, Runa Eschenhagen, Matthias Bauer, and Philipp Hennig. 2021. “Laplace Redux-Effortless Bayesian Deep Learning.” Advances in Neural Information Processing Systems 34.","category":"page"},{"location":"extensions/laplace_redux/","page":"LaplaceRedux","title":"LaplaceRedux","text":"Immer, Alexander, Maciej Korzepa, and Matthias Bauer. 2020. “Improving Predictions of Bayesian Neural Networks via Local Linearization.” https://arxiv.org/abs/2008.08400.","category":"page"},{"location":"extensions/laplace_redux/","page":"LaplaceRedux","title":"LaplaceRedux","text":"Schut, Lisa, Oscar Key, Rory Mc Grath, Luca Costabello, Bogdan Sacaleanu, Yarin Gal, et al. 2021. “Generating Interpretable Counterfactual Explanations By Implicit Minimisation of Epistemic and Aleatoric Uncertainties.” In International Conference on Artificial Intelligence and Statistics, 1756–64. PMLR.","category":"page"},{"location":"contribute/performance/","page":"-","title":"-","text":"Random.seed!(42)\n# Counteractual data and model:\ndata = TaijaData.load_linearly_separable()\ncounterfactual_data = DataPreprocessing.CounterfactualData(data...)\nM = fit_model(counterfactual_data, :Linear)\ntarget = 2\nfactual = 1\nchosen = rand(findall(predict_label(M, counterfactual_data) .== factual))\nx = select_factual(counterfactual_data, chosen)\n\n# Search:\ngenerator = GenericGenerator()\nce = generate_counterfactual(x, target, counterfactual_data, M, generator)","category":"page"},{"location":"contribute/performance/","page":"-","title":"-","text":"data_large = TaijaData.load_linearly_separable(100000)\ncounterfactual_data_large = DataPreprocessing.CounterfactualData(data_large...)","category":"page"},{"location":"contribute/performance/","page":"-","title":"-","text":"@time generate_counterfactual(x, target, counterfactual_data, M, generator)","category":"page"},{"location":"contribute/performance/","page":"-","title":"-","text":"@time generate_counterfactual(x, target, counterfactual_data_large, M, generator)","category":"page"},{"location":"explanation/generators/clue/","page":"CLUE","title":"CLUE","text":"CurrentModule = CounterfactualExplanations ","category":"page"},{"location":"explanation/generators/clue/#CLUEGenerator","page":"CLUE","title":"CLUEGenerator","text":"","category":"section"},{"location":"explanation/generators/clue/","page":"CLUE","title":"CLUE","text":"In this tutorial, we introduce the CLUEGenerator, a counterfactual generator based on the Counterfactual Latent Uncertainty Explanations (CLUE) method proposed by Antorán et al. (2020).","category":"page"},{"location":"explanation/generators/clue/#Description","page":"CLUE","title":"Description","text":"","category":"section"},{"location":"explanation/generators/clue/","page":"CLUE","title":"CLUE","text":"The CLUEGenerator leverages differentiable probabilistic models, such as Bayesian Neural Networks (BNNs), to estimate uncertainty in predictions. It aims to provide interpretable counterfactual explanations by identifying input patterns that lead to predictive uncertainty. 
The generator utilizes a latent variable framework and employs a decoder from a variational autoencoder (VAE) to generate counterfactual samples in latent space.","category":"page"},{"location":"explanation/generators/clue/","page":"CLUE","title":"CLUE","text":"The CLUE algorithm minimizes a loss function that combines uncertainty estimates and the distance between the generated counterfactual and the original input. By optimizing this loss function iteratively, the CLUEGenerator generates counterfactuals that are similar to the original observation but assigned low uncertainty.","category":"page"},{"location":"explanation/generators/clue/","page":"CLUE","title":"CLUE","text":"The formula for predictive entropy is as follows:","category":"page"},{"location":"explanation/generators/clue/","page":"CLUE","title":"CLUE","text":"\begin{aligned}\nH(y^* \mid x^*, D) = - \sum_{k=1}^{K} p(y^* = c_k \mid x^*, D) \log p(y^* = c_k \mid x^*, D)\n\end{aligned}","category":"page"},{"location":"explanation/generators/clue/#Usage","page":"CLUE","title":"Usage","text":"","category":"section"},{"location":"explanation/generators/clue/","page":"CLUE","title":"CLUE","text":"When using it, one must keep in mind that the CLUE algorithm is meant to find a more robust datapoint of the same class: using the CLUE generator without any additional penalties/losses means that it is not a counterfactual generator. The generated result will be of the same class as the original input, but a more robust datapoint.","category":"page"},{"location":"explanation/generators/clue/","page":"CLUE","title":"CLUE","text":"CLUE works best for BNNs. The CLUEGenerator can be used with any differentiable probabilistic model, but the results may not be as good as with BNNs.","category":"page"},{"location":"explanation/generators/clue/","page":"CLUE","title":"CLUE","text":"The CLUEGenerator can be used in the following manner:","category":"page"},{"location":"explanation/generators/clue/","page":"CLUE","title":"CLUE","text":"generator = CLUEGenerator()\nM = fit_model(counterfactual_data, :DeepEnsemble)\nconv = CounterfactualExplanations.Convergence.MaxIterConvergence(max_iter=1000)\nce = generate_counterfactual(\n x, target, counterfactual_data, M, generator;\n convergence=conv)\nplot(ce)","category":"page"},{"location":"explanation/generators/clue/","page":"CLUE","title":"CLUE","text":"(Image: )","category":"page"},{"location":"explanation/generators/clue/","page":"CLUE","title":"CLUE","text":"Extra: The CLUE generator can also be used after a counterfactual has already been found with a different generator. In this case, you can use CLUE to make the counterfactual more robust.","category":"page"},{"location":"explanation/generators/clue/","page":"CLUE","title":"CLUE","text":"Note: The above documentation is based on the information provided in the CLUE paper. Please refer to the original paper for more detailed explanations and implementation specifics.","category":"page"},{"location":"explanation/generators/clue/#References","page":"CLUE","title":"References","text":"","category":"section"},{"location":"explanation/generators/clue/","page":"CLUE","title":"CLUE","text":"Antorán, Javier, Umang Bhatt, Tameem Adel, Adrian Weller, and José Miguel Hernández-Lobato. 2020. 
“Getting a Clue: A Method for Explaining Uncertainty Estimates.” https://arxiv.org/abs/2006.06848.","category":"page"},{"location":"CHANGELOG/#Changelog","page":"Changelog","title":"Changelog","text":"","category":"section"},{"location":"CHANGELOG/","page":"Changelog","title":"Changelog","text":"All notable changes to this project will be documented in this file.","category":"page"},{"location":"CHANGELOG/","page":"Changelog","title":"Changelog","text":"The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.","category":"page"},{"location":"CHANGELOG/","page":"Changelog","title":"Changelog","text":"Note: We try to adhere to these practices as of version [v1.1.1].","category":"page"},{"location":"CHANGELOG/#Version-[1.3.3]-2024-09-30","page":"Changelog","title":"Version [1.3.3] - 2024-09-30","text":"","category":"section"},{"location":"CHANGELOG/#Changed","page":"Changelog","title":"Changed","text":"","category":"section"},{"location":"CHANGELOG/","page":"Changelog","title":"Changelog","text":"Fixed a remaining bug in NeuroTreeExt extensions. [#475]","category":"page"},{"location":"CHANGELOG/#Version-[1.3.2]-2024-09-24","page":"Changelog","title":"Version [1.3.2] - 2024-09-24","text":"","category":"section"},{"location":"CHANGELOG/#Added","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"CHANGELOG/","page":"Changelog","title":"Changelog","text":"Added support for using a random forest as a surrogate model for the T-CREx generator. [#483]","category":"page"},{"location":"CHANGELOG/#Changed-2","page":"Changelog","title":"Changed","text":"","category":"section"},{"location":"CHANGELOG/","page":"Changelog","title":"Changelog","text":"Improved the T-CREx documentation further by bringing the example even closer to the example in the paper. [#483]\nInclude citation linking to ICML paper in T-CREx documentation and docstrings. [#480]","category":"page"},{"location":"CHANGELOG/#Version-[1.3.1]-2024-09-24","page":"Changelog","title":"Version [1.3.1] - 2024-09-24","text":"","category":"section"},{"location":"CHANGELOG/#Changed-3","page":"Changelog","title":"Changed","text":"","category":"section"},{"location":"CHANGELOG/","page":"Changelog","title":"Changelog","text":"Fixed a remaining bug in NeuroTreeExt extensions. [#475]","category":"page"},{"location":"CHANGELOG/#Version-[1.3.0]-2024-09-16","page":"Changelog","title":"Version [1.3.0] - 2024-09-16","text":"","category":"section"},{"location":"CHANGELOG/#Changed-4","page":"Changelog","title":"Changed","text":"","category":"section"},{"location":"CHANGELOG/","page":"Changelog","title":"Changelog","text":"Fixed bug in NeuroTreeExt extensions. [#475]","category":"page"},{"location":"CHANGELOG/#Added-2","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"CHANGELOG/","page":"Changelog","title":"Changelog","text":"Added basic support for the T-CREx counterfactual generator. [#473]\nAdded docstrings for package extensions to documentation. [#475]","category":"page"},{"location":"CHANGELOG/#Version-[1.2.0]-2024-09-10","page":"Changelog","title":"Version [1.2.0] - 2024-09-10","text":"","category":"section"},{"location":"CHANGELOG/#Added-3","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"CHANGELOG/","page":"Changelog","title":"Changelog","text":"Added documentation for generating counterfactuals consistent with the MINT framework. [#467]\nAdded tests for new evaluation metrics and JEM extension. 
[#471]\nAdded support for gradient-based causal algorithmic recourse (MINT) as described in Karimi et al. (2020). This incorporates an input encoder that is based on a Structural Causal Model. [#457] \nAdded out-of-the-box support for training joint energy models (JEM). [#454]\nAdded new evaluation metric to measure faithfulness of counterfactual explanations as in Altmeyer et al. (2024). [#454]\nA tutorial in the documentation (\"Explanation\" section) explaining the faithfulness metric in detail. [#454]\nAdded support for an energy constraint as in Altmeyer et al. (2024). This is the first step towards adding functionality for ECCCo. [#387] ","category":"page"},{"location":"CHANGELOG/#Changed-5","page":"Changelog","title":"Changed","text":"","category":"section"},{"location":"CHANGELOG/","page":"Changelog","title":"Changelog","text":"The fitresult field of Model now takes a concrete Fitresult type, for which some basic methods have been defined. This mutable struct has a field called other that accepts a dictionary Dict that can be filled with additional objects. [#454]\nRegenerated pre-trained model artifacts. [#454]\nUpdated the tutorial on \"Handling Data\". [#454]","category":"page"},{"location":"CHANGELOG/#Removed","page":"Changelog","title":"Removed","text":"","category":"section"},{"location":"CHANGELOG/","page":"Changelog","title":"Changelog","text":"Removed a bug in the find_potential_neighbours method. [#454]","category":"page"},{"location":"CHANGELOG/#Version-[1.1.6]-2024-05-19","page":"Changelog","title":"Version [1.1.6] - 2024-05-19","text":"","category":"section"},{"location":"CHANGELOG/#Removed-2","page":"Changelog","title":"Removed","text":"","category":"section"},{"location":"CHANGELOG/","page":"Changelog","title":"Changelog","text":"Removed the call to the Iris function in the test suite because of HTTPS issues. [#452]\nRemoved the mlj_models_catalogue because it served no obvious purpose. In the future, we may instead add meta information to the all_models_catalogue. [#444]","category":"page"},{"location":"CHANGELOG/#Added-4","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"CHANGELOG/","page":"Changelog","title":"Changelog","text":"New general Model struct that wraps empty concrete types. This adds a more general interface that is still flexible enough by simply using multiple dispatch on the empty concrete types. [#444]\nA new incompatible(::AbstractGenerator, ::AbstractCounterfactualExplanation) function has been added to avoid running a counterfactual search if the generator is incompatible with any other specification (e.g. the model). [#444]","category":"page"},{"location":"CHANGELOG/#Changed-6","page":"Changelog","title":"Changed","text":"","category":"section"},{"location":"CHANGELOG/","page":"Changelog","title":"Changelog","text":"No longer exporting many of the deprecated functions. [#452]\nUpdated pre-trained model artifacts. [#444]\nSome function signatures have been deprecated, e.g. NeuroTreeModel to NeuroTree, LaplaceReduxModel to LaplaceNN. [#444]\nSupport for DecisionTree.jl models and the FeatureTweakGenerator have been moved to an extension (DecisionTreeExt). [#444]\nUpdates to NeuroTreeModels extensions to incorporate breaking changes to the package. [#444]\nNo longer running alloc test on Windows. [#441]\nSlight change to doctests. 
[#447]","category":"page"},{"location":"CHANGELOG/#Version-[v1.1.5]-2024-04-30","page":"Changelog","title":"Version [v1.1.5] - 2024-04-30","text":"","category":"section"},{"location":"CHANGELOG/#Added-5","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"CHANGELOG/","page":"Changelog","title":"Changelog","text":"Unit tests: adds a simple performance benchmark to test that for a small problem, generating a counterfactual using the generic generator takes at most 4700 allocations. Only run on julia v1.10 and higher. [#436]","category":"page"},{"location":"CHANGELOG/#Changed-7","page":"Changelog","title":"Changed","text":"","category":"section"},{"location":"CHANGELOG/","page":"Changelog","title":"Changelog","text":"The find_potential_neighbours is now only triggered if one of the penalties of the generator requires access to samples from the target domain. This improves scalability because calling the function can be computationally costly (forward-pass). [#436] \nThe target variable encodings are now handled more efficiently. Previously certain tasks were repeated, which was not necessary. [#436]","category":"page"},{"location":"CHANGELOG/#Removed-3","page":"Changelog","title":"Removed","text":"","category":"section"},{"location":"CHANGELOG/","page":"Changelog","title":"Changelog","text":"Removed the assertion checking that the model ever predicts the target value. While this assertion is useful, it is not essential. For large enough models and datasets, this forward pass can be very costly. [#436]\nRemoved redundant distance_from_targets function. [#436]","category":"page"},{"location":"CHANGELOG/#Version-[v1.1.4]-2024-04-25","page":"Changelog","title":"Version [v1.1.4] - 2024-04-25","text":"","category":"section"},{"location":"CHANGELOG/#Changed-8","page":"Changelog","title":"Changed","text":"","category":"section"},{"location":"CHANGELOG/","page":"Changelog","title":"Changelog","text":"Refactors the encodings and decodings such that it is now more streamlined. Instead of conditional statements, encodings are now dispatched on the type of a new unifying data.input_encoder field. [#432]\nRefactors the check for redundancy. This is now based on the convergence type and done right before the counterfactual search begins, if not redundant. [#432]","category":"page"},{"location":"CHANGELOG/#Added-6","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"CHANGELOG/","page":"Changelog","title":"Changelog","text":"Added additional unit tests. [#437]","category":"page"},{"location":"CHANGELOG/#Version-[v1.1.3]-2024-04-17","page":"Changelog","title":"Version [v1.1.3] - 2024-04-17","text":"","category":"section"},{"location":"CHANGELOG/#Added-7","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"CHANGELOG/","page":"Changelog","title":"Changelog","text":"Adds a section on Convergence to the documentation, Changelog.jl functionality and a few doc tests. [#429]","category":"page"},{"location":"CHANGELOG/#Changed-9","page":"Changelog","title":"Changed","text":"","category":"section"},{"location":"CHANGELOG/","page":"Changelog","title":"Changelog","text":"Changes style of taking gradients for the counterfactual search from implicit to explicit. [#430]\nRemoved all implicit imports. 
[#430]","category":"page"},{"location":"CHANGELOG/#Removed-4","page":"Changelog","title":"Removed","text":"","category":"section"},{"location":"CHANGELOG/","page":"Changelog","title":"Changelog","text":"Removed CUDA.jl dependency, because redundant. [#430]\nRemoved Parameters.jl dependency, because redundant. [#430]","category":"page"},{"location":"CHANGELOG/#Version-[v1.1.2]-2024-04-16","page":"Changelog","title":"Version [v1.1.2] - 2024-04-16","text":"","category":"section"},{"location":"CHANGELOG/#Changed-10","page":"Changelog","title":"Changed","text":"","category":"section"},{"location":"CHANGELOG/","page":"Changelog","title":"Changelog","text":"Replaces the GIF in the README and introduction of docs for a static image. ","category":"page"},{"location":"CHANGELOG/#Version-[v1.1.1]-2024-04-15","page":"Changelog","title":"Version [v1.1.1] - 2024-04-15","text":"","category":"section"},{"location":"CHANGELOG/#Added-8","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"CHANGELOG/","page":"Changelog","title":"Changelog","text":"Added tests for LaplaceRedux extension. Bumped upper compat bound for LaplaceRedux.jl. [#428]","category":"page"},{"location":"CHANGELOG/","page":"Changelog","title":"Changelog","text":"","category":"page"},{"location":"CHANGELOG/","page":"Changelog","title":"Changelog","text":"[#428]: https://github.com/juliatrustworthyai/CounterfactualExplanations.jl/issues/428 [#429]: https://github.com/juliatrustworthyai/CounterfactualExplanations.jl/issues/429","category":"page"},{"location":"tutorials/convergence/","page":"Convergence","title":"Convergence","text":"CurrentModule = CounterfactualExplanations ","category":"page"},{"location":"tutorials/convergence/#convergence","page":"Convergence","title":"Convergence","text":"","category":"section"},{"location":"tutorials/convergence/","page":"Convergence","title":"Convergence","text":"The search for counterfactuals can be seen as an optimization problem, where the goal is to find a point in the input space. One questions that has received surprisingly little attention is how to determine when the search has converged. In a recent paper, we have briefly discussed why it is important to consider convergence (Altmeyer et al. 2024):","category":"page"},{"location":"tutorials/convergence/","page":"Convergence","title":"Convergence","text":"One intuitive way to specify convergence is in terms of threshold probabilities: once the predicted probability p(y^+x^prime) exceeds some user-defined threshold γ such that the counterfactual is valid, we could consider the search to have converged. In the binary case, for example, convergence could be defined as p(y^+x^prime) 05 in this sense. Note, however, how this can be expected to yield counterfactuals in the proximity of the decision boundary, a region characterized by high aleatoric uncertainty. In other words, counterfactuals generated in this way would generally not be plausible. To avoid this from happening, we specify convergence in terms of gradients approaching zero for all our experiments and all of our generators. This is allows us to get a cleaner read on how the different counterfactual search objectives affect counterfactual outcomes.","category":"page"},{"location":"tutorials/convergence/","page":"Convergence","title":"Convergence","text":"In the paper, we were primarily interested in benchmarking counterfactuals generated by different search objectives. 
In other contexts, however, it may be more appropriate to specify convergence in terms of threshold probabilities. Our package allows you to specify convergence in terms of gradients, threshold probabilities or simply in terms of the total number of iterations. In this section, we will show you how to do this.","category":"page"},{"location":"tutorials/convergence/","page":"Convergence","title":"Convergence","text":"using CounterfactualExplanations.Convergence\ngenerator = GenericGenerator(λ=0.01)","category":"page"},{"location":"tutorials/convergence/","page":"Convergence","title":"Convergence","text":"GradientBasedGenerator(nothing, CounterfactualExplanations.Objectives.distance_l1, 0.01, false, false, Descent(0.1), NamedTuple())","category":"page"},{"location":"tutorials/convergence/#Convergence-in-terms-of-gradients","page":"Convergence","title":"Convergence in terms of gradients","text":"","category":"section"},{"location":"tutorials/convergence/","page":"Convergence","title":"Convergence","text":"As gradients approach zero, the conditions defined by the search objective and hence the generator are satisfied. We therefore refer to this type of convergence criterion as GeneratorConditionsConvergence.","category":"page"},{"location":"tutorials/convergence/","page":"Convergence","title":"Convergence","text":"conv = GeneratorConditionsConvergence(gradient_tol=0.01, max_iter=1000)\nce_gen = generate_counterfactual(x, target, counterfactual_data, M, generator; convergence = conv)","category":"page"},{"location":"tutorials/convergence/","page":"Convergence","title":"Convergence","text":"CounterfactualExplanation\nConvergence: ✅ after 179 steps.","category":"page"},{"location":"tutorials/convergence/#Convergence-in-terms-of-threshold-probabilities","page":"Convergence","title":"Convergence in terms of threshold probabilities","text":"","category":"section"},{"location":"tutorials/convergence/","page":"Convergence","title":"Convergence","text":"In this case, the search is considered to have converged once the predicted probability p(y^+|x′) exceeds some user-defined threshold γ such that the counterfactual is valid. We refer to this type of convergence criterion as DecisionThresholdConvergence.","category":"page"},{"location":"tutorials/convergence/","page":"Convergence","title":"Convergence","text":"conv = DecisionThresholdConvergence(decision_threshold=0.75)\nce_dec = generate_counterfactual(x, target, counterfactual_data, M, generator; convergence = conv)","category":"page"},{"location":"tutorials/convergence/","page":"Convergence","title":"Convergence","text":"CounterfactualExplanation\nConvergence: ✅ after 9 steps.","category":"page"},{"location":"tutorials/convergence/#Convergence-in-terms-of-the-total-number-of-iterations","page":"Convergence","title":"Convergence in terms of the total number of iterations","text":"","category":"section"},{"location":"tutorials/convergence/","page":"Convergence","title":"Convergence","text":"In this case, the search is considered to have converged once the total number of iterations exceeds some user-defined threshold max_iter. 
We refer to this type of convergence criterion as MaxIterConvergence.","category":"page"},{"location":"tutorials/convergence/","page":"Convergence","title":"Convergence","text":"conv = MaxIterConvergence(max_iter=25)\nce_max = generate_counterfactual(x, target, counterfactual_data, M, generator; convergence = conv)","category":"page"},{"location":"tutorials/convergence/","page":"Convergence","title":"Convergence","text":"CounterfactualExplanation\nConvergence: ✅ after 25 steps.","category":"page"},{"location":"tutorials/convergence/#Comparison","page":"Convergence","title":"Comparison","text":"","category":"section"},{"location":"tutorials/convergence/","page":"Convergence","title":"Convergence","text":"plts = []\nfor (ce, titl) in zip([ce_gen, ce_dec, ce_max], [\"Gradient Convergence\", \"Decision Threshold Convergence\", \"Max Iterations Convergence\"])\n push!(plts, plot(ce; title=titl, cbar=false))\nend\nplot(plts..., layout=(1,3), size=(1200, 380))","category":"page"},{"location":"tutorials/convergence/","page":"Convergence","title":"Convergence","text":"(Image: )","category":"page"},{"location":"tutorials/convergence/#References","page":"Convergence","title":"References","text":"","category":"section"},{"location":"tutorials/convergence/","page":"Convergence","title":"Convergence","text":"Altmeyer, Patrick, Mojtaba Farmanbar, Arie van Deursen, and Cynthia CS Liem. 2024. “Faithful Model Explanations Through Energy-Constrained Conformal Counterfactuals.” In Proceedings of the AAAI Conference on Artificial Intelligence, 38 (10): 10829–37.","category":"page"},{"location":"explanation/generators/tcrex/","page":"T-CREx","title":"T-CREx","text":"CurrentModule = CounterfactualExplanations ","category":"page"},{"location":"explanation/generators/tcrex/#T-CREx-Generator","page":"T-CREx","title":"T-CREx Generator","text":"","category":"section"},{"location":"explanation/generators/tcrex/","page":"T-CREx","title":"T-CREx","text":"The T-CREx is a novel model-agnostic counterfactual generator that can be used to generate local and global Counterfactual Rule Explanations (CREx) (Bewley et al. 2024).","category":"page"},{"location":"explanation/generators/tcrex/","page":"T-CREx","title":"T-CREx","text":"warning: Breaking Changes Expected\nWork on this feature is still in its very early stages and breaking changes should be expected. The introduction of this new generator introduces new concepts such as global counterfactual explanations that are not explained anywhere else in this documentation. If you want to use this generator, please make sure you are familiar with the related literature. ","category":"page"},{"location":"explanation/generators/tcrex/#Usage","page":"T-CREx","title":"Usage","text":"","category":"section"},{"location":"explanation/generators/tcrex/","page":"T-CREx","title":"T-CREx","text":"The implementation of the TCRExGenerator depends on DecisionTree.jl. For the time being, we have decided to not add a strong dependency on DecisionTree.jl to the package. Instead, the functionality of the TCRExGenerator is made available through the DecisionTreeExt extension, which will be loaded conditionally on loading DecisionTree.jl (see the Julia docs for more details on extensions):","category":"page"},{"location":"explanation/generators/tcrex/","page":"T-CREx","title":"T-CREx","text":"using DecisionTree","category":"page"},{"location":"explanation/generators/tcrex/","page":"T-CREx","title":"T-CREx","text":"Let us first set up the problem by loading some data. 
To reproduce the example in Bewley et al. (2024) as accurately as possible, we use Python’s scikit-learn to load the synthetic data:","category":"page"},{"location":"explanation/generators/tcrex/","page":"T-CREx","title":"T-CREx","text":"using CondaPkg; CondaPkg.add(\"scikit-learn\");\nusing PythonCall;\nskd = pyimport(\"sklearn.datasets\");\nn = 5000\nX, y = skd.make_moons(n_samples=n, noise=0.3, random_state=0)\nX = pyconvert(Matrix, X) |> permutedims |> x -> Float32.(x)\ny = pyconvert(Vector, y)\n# Setting up color palette as in paper:\ncol_pal = palette(:seaborn_bright)[[4,1,2,3,6,5,7,8,9]];","category":"page"},{"location":"explanation/generators/tcrex/","page":"T-CREx","title":"T-CREx","text":"Next, we wrap the data in a CounterfactualData container, fit a simple classification model to the data and store the model prediction for the entire training dataset (we need those to train the tree-based surrogate model).","category":"page"},{"location":"explanation/generators/tcrex/","page":"T-CREx","title":"T-CREx","text":"# Counterfactual data and model:\ndata = CounterfactualData(X, y)\nflux_training_params.batchsize = 100\nM = fit_model(data, :MLP)","category":"page"},{"location":"explanation/generators/tcrex/","page":"T-CREx","title":"T-CREx","text":"Finally, we determine a target and factual class and choose a random sample from the factual class:","category":"page"},{"location":"explanation/generators/tcrex/","page":"T-CREx","title":"T-CREx","text":"target = 1\nfactual = 0\nchosen = rand(findall(predict_label(M, data) .== factual))\nx = select_factual(data, chosen) ","category":"page"},{"location":"explanation/generators/tcrex/","page":"T-CREx","title":"T-CREx","text":"Next, we instantiate the generator much like any other counterfactual generator in our package:","category":"page"},{"location":"explanation/generators/tcrex/","page":"T-CREx","title":"T-CREx","text":"ρ = 0.02 # feasibility threshold (see Bewley et al. (2024))\nτ = 0.9 # accuracy threshold (see Bewley et al. (2024))\ngenerator = Generators.TCRExGenerator(ρ=ρ, τ=τ)","category":"page"},{"location":"explanation/generators/tcrex/","page":"T-CREx","title":"T-CREx","text":"Finally, we can use the TCRExGenerator instance to generate a (global) counterfactual rule explanation (CRE) for the given target, data and model as follows:","category":"page"},{"location":"explanation/generators/tcrex/","page":"T-CREx","title":"T-CREx","text":"cre = generator(target, data, M) # counterfactual rule explanation (global)","category":"page"},{"location":"explanation/generators/tcrex/","page":"T-CREx","title":"T-CREx","text":"The CRE can be applied to our factual x to derive a (local) counterfactual point explanation (CPE):","category":"page"},{"location":"explanation/generators/tcrex/","page":"T-CREx","title":"T-CREx","text":"idx, optimal_rule = cre(x) # counterfactual point explanation (local)","category":"page"},{"location":"explanation/generators/tcrex/#Worked-Example-from-Bewley-et-al.-(2024)","page":"T-CREx","title":"Worked Example from Bewley et al. (2024)","text":"","category":"section"},{"location":"explanation/generators/tcrex/","page":"T-CREx","title":"T-CREx","text":"To make better sense of this, we will now go through the worked example presented in Bewley et al. (2024). 
For this purpose, we need to make the functions of the DecisionTreeExt extension available.","category":"page"},{"location":"explanation/generators/tcrex/","page":"T-CREx","title":"T-CREx","text":"warning: Private API\nPlease note that the DecisionTreeExt extension is loaded here purely for demonstrative purposes. You should not load the extension like this in your own work.","category":"page"},{"location":"explanation/generators/tcrex/","page":"T-CREx","title":"T-CREx","text":"DTExt = Base.get_extension(CounterfactualExplanations, :DecisionTreeExt)","category":"page"},{"location":"explanation/generators/tcrex/#(a)-Tree-based-surrogate-model","page":"T-CREx","title":"(a) Tree-based surrogate model","text":"","category":"section"},{"location":"explanation/generators/tcrex/","page":"T-CREx","title":"T-CREx","text":"In the first step, we train a tree-based surrogate model based on the data and the black-box model M. Specifically, the surrogate model is trained on pairs of observed input data and the labels predicted by the black-box model: (x_i, M(x_i))_{1 \leq i \leq n}.","category":"page"},{"location":"explanation/generators/tcrex/","page":"T-CREx","title":"T-CREx","text":"note: Oracle Black-Box\nAs in the paper, we assume here that the black-box model is an oracle with perfect accuracy. This is done purely to stay as close as possible to the example in the paper.","category":"page"},{"location":"explanation/generators/tcrex/","page":"T-CREx","title":"T-CREx","text":"Following Bewley et al. (2024), we impose a minimum number of samples per leaf to ensure counterfactual feasibility (also often referred to as plausibility). This number is computed under the hood and based on the generator.ρ field of the TCRExGenerator, which can be used to specify the minimum fraction of all samples that is contained by any given rule.","category":"page"},{"location":"explanation/generators/tcrex/","page":"T-CREx","title":"T-CREx","text":"# Surrogate:\nXtrain = permutedims(X)\nytrain = categorical(y)\nfx = ytrain # assume perfect accuracy\nmodel, fitresult = DTExt.grow_surrogate(generator, Xtrain, fx)\nM_sur = CounterfactualExplanations.DecisionTreeModel(model; fitresult=fitresult)","category":"page"},{"location":"explanation/generators/tcrex/","page":"T-CREx","title":"T-CREx","text":"We can reassure ourselves that the feasibility constraint is indeed respected:","category":"page"},{"location":"explanation/generators/tcrex/","page":"T-CREx","title":"T-CREx","text":"# Extract rules:\nR = DTExt.extract_rules(fitresult[1])\n\n# Compute feasibility and accuracy:\nfeas = DTExt.rule_feasibility.(R, (X,))\n@assert minimum(feas) >= ρ\n@info \"Minimum fraction of samples across all rules is $(round(minimum(feas), digits=3))\"\nacc_factual = DTExt.rule_accuracy.(R, (X,), (fx,), (factual,))\nacc_target = DTExt.rule_accuracy.(R, (X,), (fx,), (target,))\n@assert all(acc_target .+ acc_factual .== 1.0)","category":"page"},{"location":"explanation/generators/tcrex/","page":"T-CREx","title":"T-CREx","text":"plt = plot(data; ms=2, markerstrokewidth=0, size=(500, 500), palette=col_pal, alpha=0.5)\nrectangle(w, h, x, y) = Shape(x .+ [0,w,w,0], y .+ [0,0,h,h])\nfunction plot_grid!(p, grid)\n for (i, (bounds_x, bounds_y)) in enumerate(grid)\n lbx, ubx = bounds_x\n lby, uby = bounds_y\n lbx = maximum([lbx, minimum(X[1, :])])\n lby = maximum([lby, minimum(X[2, :])])\n ubx = minimum([ubx, maximum(X[1, :])])\n uby = minimum([uby, maximum(X[2, :])])\n plot!(\n p,\n rectangle(ubx - lbx, uby - lby, lbx, lby);\n fillcolor=\"black\",\n 
fillalpha=0.0,\n label=nothing,\n lw=2, palette=col_pal\n )\n end\nend\nplot_grid!(plt, R)\nplt","category":"page"},{"location":"explanation/generators/tcrex/","page":"T-CREx","title":"T-CREx","text":"(Image: )","category":"page"},{"location":"explanation/generators/tcrex/#(b)-Maximal-valid-rules","page":"T-CREx","title":"(b) Maximal-valid rules","text":"","category":"section"},{"location":"explanation/generators/tcrex/","page":"T-CREx","title":"T-CREx","text":"From the complete set of rules derived from the surrogate tree, we can derive the maximal-valid rules next. Intuitively, “a maximal-valid rule is one that cannot be made any larger without violating the validity conditions”, where validity is defined in terms of both feasibility (generator.ρ) and accuracy (generator.τ).","category":"page"},{"location":"explanation/generators/tcrex/","page":"T-CREx","title":"T-CREx","text":"R_max = DTExt.max_valid(R, X, fx, target, τ)\nfeas_max = DTExt.rule_feasibility.(R_max, (X,))\nacc_max = DTExt.rule_accuracy.(R_max, (X,), (fx,), (target,))\np1 = deepcopy(plt)\nfunction plot_surr!(plt)\n for (i, rule) in enumerate(R_max)\n ubx, uby = minimum([rule[1][2], maximum(X[1, :])]),\n minimum([rule[2][2], maximum(X[2, :])])\n lbx, lby = maximum([rule[1][1], minimum(X[1, :])]),\n maximum([rule[2][1], minimum(X[2, :])])\n _feas = round(feas_max[i]; digits=2)\n _n = Int(round(feas_max[i] * n; digits=2))\n _acc = round(acc_max[i]; digits=2)\n @info \"Rectangle R$i with feasibility $(_feas) (n≈$(_n)) and accuracy $(_acc)\"\n lab = \"R$i (ρ̂=$(_feas), τ̂=$(_acc))\"\n plot!(plt, rectangle(ubx-lbx,uby-lby,lbx,lby), opacity=.5, color=i+2, label=lab, palette=col_pal)\n end\nend\nplot_surr!(p1)\np1","category":"page"},{"location":"explanation/generators/tcrex/","page":"T-CREx","title":"T-CREx","text":"(Image: )","category":"page"},{"location":"explanation/generators/tcrex/#(c)-Induced-grid-partition","page":"T-CREx","title":"(c) Induced grid partition","text":"","category":"section"},{"location":"explanation/generators/tcrex/","page":"T-CREx","title":"T-CREx","text":"Based on the set of maximal-valid rules, we compute and plot the induced grid partition below.","category":"page"},{"location":"explanation/generators/tcrex/","page":"T-CREx","title":"T-CREx","text":"_grid = DTExt.induced_grid(R_max)\n\nplt = plot(data; ms=2, markerstrokewidth=0, size=(500, 500), palette=col_pal, alpha=0.1)\np2 = deepcopy(plt)\nplot_surr!(p2)\nplot_grid!(p2, _grid)\np2","category":"page"},{"location":"explanation/generators/tcrex/","page":"T-CREx","title":"T-CREx","text":"(Image: )","category":"page"},{"location":"explanation/generators/tcrex/#(d)-Grid-cell-prototypes","page":"T-CREx","title":"(d) Grid cell prototypes","text":"","category":"section"},{"location":"explanation/generators/tcrex/","page":"T-CREx","title":"T-CREx","text":"Next, we pick prototypes from each cell in the induced grid. By setting pick_arbitrary=false here we enforce that prototypes correspond to cell centroids, which is not strictly necessary. 
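As a purely hypothetical variation on the code below, passing pick_arbitrary=true would instead select an arbitrary in-cell sample rather than the centroid:\n\n# Hypothetical: arbitrary in-cell prototypes instead of centroids\nxs_arb = DTExt.prototype.(_grid, (X,); pick_arbitrary=true)\n\n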
For each prototype, we compute the corresponding CRE, which is indicated by the color of the large markers in the figure below:","category":"page"},{"location":"explanation/generators/tcrex/","page":"T-CREx","title":"T-CREx","text":"xs = DTExt.prototype.(_grid, (X,); pick_arbitrary=false)\nRᶜ = DTExt.cre.((R_max,), xs, (X,); return_index=true) \np3 = deepcopy(p2)\nscatter!(p3, eachrow(hcat(xs...))..., ms=10, label=nothing, color=Rᶜ.+2)\np3","category":"page"},{"location":"explanation/generators/tcrex/","page":"T-CREx","title":"T-CREx","text":"(Image: )","category":"page"},{"location":"explanation/generators/tcrex/#(e)-(f)-Global-CE-representation","page":"T-CREx","title":"(e) - (f) Global CE representation","text":"","category":"section"},{"location":"explanation/generators/tcrex/","page":"T-CREx","title":"T-CREx","text":"Based on the prototypes and their corresponding rule assignments, we fit a CART classification tree with restricted feature thresholds. Specifically, feature thresholds are restricted to the partition bounds induced by the set of maximal-valid rules as in Bewley et al. (2024). The figure below shows the resulting global CE representation (i.e. the metarules).","category":"page"},{"location":"explanation/generators/tcrex/","page":"T-CREx","title":"T-CREx","text":"bounds = DTExt.partition_bounds(R_max)\ntree = DTExt.classify_prototypes(hcat(xs...)', Rᶜ, bounds)\nR_final, labels = DTExt.extract_leaf_rules(tree) \np4 = deepcopy(plt)\nfor (i, rule) in enumerate(R_final)\n ubx, uby = minimum([rule[1][2], maximum(X[1, :])]),\n minimum([rule[2][2], maximum(X[2, :])])\n lbx, lby = maximum([rule[1][1], minimum(X[1, :])]),\n maximum([rule[2][1], minimum(X[2, :])])\n plot!(\n p4,\n rectangle(ubx - lbx, uby - lby, lbx, lby);\n fillalpha=0.5,\n label=nothing,\n color=labels[i] + 2\n )\nend\np4","category":"page"},{"location":"explanation/generators/tcrex/","page":"T-CREx","title":"T-CREx","text":"(Image: )","category":"page"},{"location":"explanation/generators/tcrex/#(g)-Local-CE-example","page":"T-CREx","title":"(g) Local CE example","text":"","category":"section"},{"location":"explanation/generators/tcrex/","page":"T-CREx","title":"T-CREx","text":"To generate a local explanation based on the global CE representation, we simply apply the CART decision tree classifier from the previous step to our factual:","category":"page"},{"location":"explanation/generators/tcrex/","page":"T-CREx","title":"T-CREx","text":"optimal_rule = apply_tree(tree, vec(x))\np5 = deepcopy(p2)\nscatter!(p5, [x[1]], [x[2]], ms=10, color=2+optimal_rule, label=\"Local CE (move to R$optimal_rule)\")\np5","category":"page"},{"location":"explanation/generators/tcrex/","page":"T-CREx","title":"T-CREx","text":"(Image: )","category":"page"},{"location":"explanation/generators/tcrex/#References","page":"T-CREx","title":"References","text":"","category":"section"},{"location":"explanation/generators/tcrex/","page":"T-CREx","title":"T-CREx","text":"Bewley, Tom, Salim I. Amoukou, Saumitra Mishra, Daniele Magazzeni, and Manuela Veloso. 2024. “Counterfactual Metarules for Local and Global Recourse.” In Proceedings of the 41st International Conference on Machine Learning, edited by Ruslan Salakhutdinov, Zico Kolter, Katherine Heller, Adrian Weller, Nuria Oliver, Jonathan Scarlett, and Felix Berkenkamp, 235:3707–24. Proceedings of Machine Learning Research. PMLR. 
https://proceedings.mlr.press/v235/bewley24a.html.","category":"page"},{"location":"reference/","page":"🧐 Reference","title":"🧐 Reference","text":"CurrentModule = CounterfactualExplanations ","category":"page"},{"location":"reference/#Reference","page":"🧐 Reference","title":"Reference","text":"","category":"section"},{"location":"reference/","page":"🧐 Reference","title":"🧐 Reference","text":"In this reference, you will find a detailed overview of the package API.","category":"page"},{"location":"reference/","page":"🧐 Reference","title":"🧐 Reference","text":"Reference guides are technical descriptions of the machinery and how to operate it. Reference material is information-oriented.— Diátaxis","category":"page"},{"location":"reference/","page":"🧐 Reference","title":"🧐 Reference","text":"In other words, you come here because you want to take a very close look at the code 🧐.","category":"page"},{"location":"reference/#Content","page":"🧐 Reference","title":"Content","text":"","category":"section"},{"location":"reference/","page":"🧐 Reference","title":"🧐 Reference","text":"Pages = [\"reference.md\"]\nDepth = 2:3","category":"page"},{"location":"reference/#Exported-functions","page":"🧐 Reference","title":"Exported functions","text":"","category":"section"},{"location":"reference/","page":"🧐 Reference","title":"🧐 Reference","text":"Modules = [\n CounterfactualExplanations, \n CounterfactualExplanations.Convergence,\n CounterfactualExplanations.Evaluation,\n CounterfactualExplanations.DataPreprocessing,\n CounterfactualExplanations.Models,\n CounterfactualExplanations.GenerativeModels, \n CounterfactualExplanations.Generators, \n CounterfactualExplanations.Objectives\n]\nPrivate = false","category":"page"},{"location":"reference/#CounterfactualExplanations.RawOutputArrayType","page":"🧐 Reference","title":"CounterfactualExplanations.RawOutputArrayType","text":"RawOutputArrayType\n\nA type union for the allowed types for the output array y.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.RawTargetType","page":"🧐 Reference","title":"CounterfactualExplanations.RawTargetType","text":"RawTargetType\n\nA type union for the allowed types for the target variable.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.flux_training_params","page":"🧐 Reference","title":"CounterfactualExplanations.flux_training_params","text":"flux_training_params\n\nThe default training parameters for FluxModels etc.\n\n\n\n\n\n","category":"constant"},{"location":"reference/#CounterfactualExplanations.AbstractConvergence","page":"🧐 Reference","title":"CounterfactualExplanations.AbstractConvergence","text":"An abstract type that serves as the base type for convergence objects.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.AbstractCounterfactualExplanation","page":"🧐 Reference","title":"CounterfactualExplanations.AbstractCounterfactualExplanation","text":"Base type for counterfactual explanations.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.AbstractGenerator","page":"🧐 Reference","title":"CounterfactualExplanations.AbstractGenerator","text":"An abstract type that serves as the base type for counterfactual generators.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.AbstractModel","page":"🧐 Reference","title":"CounterfactualExplanations.AbstractModel","text":"Base type for 
models.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.CounterfactualExplanation","page":"🧐 Reference","title":"CounterfactualExplanations.CounterfactualExplanation","text":"A struct that collects all information relevant to a specific counterfactual explanation for a single individual.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.CounterfactualExplanation-Tuple{AbstractArray, Union{Int64, AbstractFloat, String, Symbol}, CounterfactualData, AbstractModel, AbstractGenerator}","page":"🧐 Reference","title":"CounterfactualExplanations.CounterfactualExplanation","text":"function CounterfactualExplanation(;\n\tx::AbstractArray,\n\ttarget::RawTargetType,\n\tdata::CounterfactualData,\n\tM::Models.AbstractModel,\n\tgenerator::Generators.AbstractGenerator,\n\tnum_counterfactuals::Int = 1,\n\tinitialization::Symbol = :add_perturbation,\n convergence::Union{AbstractConvergence,Symbol}=:decision_threshold,\n)\n\nOuter method to construct a CounterfactualExplanation structure.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.EncodedOutputArrayType","page":"🧐 Reference","title":"CounterfactualExplanations.EncodedOutputArrayType","text":"EncodedOutputArrayType\n\nType of encoded output array.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.EncodedTargetType","page":"🧐 Reference","title":"CounterfactualExplanations.EncodedTargetType","text":"EncodedTargetType\n\nType of encoded target variable.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.OutputEncoder","page":"🧐 Reference","title":"CounterfactualExplanations.OutputEncoder","text":"OutputEncoder\n\nThe OutputEncoder takes a raw output array (y) and encodes it.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.OutputEncoder-Tuple{Union{Int64, AbstractFloat, String, Symbol}}","page":"🧐 Reference","title":"CounterfactualExplanations.OutputEncoder","text":"(encoder::OutputEncoder)(ynew::RawTargetType)\n\nWhen called on a new value ynew, the OutputEncoder encodes it based on the initial encoding.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.OutputEncoder-Tuple{}","page":"🧐 Reference","title":"CounterfactualExplanations.OutputEncoder","text":"(encoder::OutputEncoder)()\n\nOn call, the OutputEncoder returns the encoded output array.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.generate_counterfactual-Tuple{Base.Iterators.Zip, Union{Int64, AbstractFloat, String, Symbol}, CounterfactualData, AbstractModel, AbstractGenerator}","page":"🧐 Reference","title":"CounterfactualExplanations.generate_counterfactual","text":"generate_counterfactual(\n x::Base.Iterators.Zip,\n target::RawTargetType,\n data::CounterfactualData,\n M::Models.AbstractModel,\n generator::AbstractGenerator;\n kwargs...,\n)\n\nOverloads the generate_counterfactual method to accept a zip of factuals x and return a vector of counterfactuals.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.generate_counterfactual-Tuple{Matrix, Union{Int64, AbstractFloat, String, Symbol}, CounterfactualData, AbstractModel, AbstractGenerator}","page":"🧐 Reference","title":"CounterfactualExplanations.generate_counterfactual","text":"generate_counterfactual(\n x::Matrix,\n target::RawTargetType,\n data::CounterfactualData,\n M::Models.AbstractModel,\n generator::AbstractGenerator;\n 
num_counterfactuals::Int=1,\n initialization::Symbol=:add_perturbation,\n convergence::Union{AbstractConvergence,Symbol}=:decision_threshold,\n timeout::Union{Nothing,Real}=nothing,\n)\n\nThe core function that is used to run counterfactual search for a given factual x, target, counterfactual data, model and generator. Keywords can be used to specify the desired threshold for the predicted target class probability and the maximum number of iterations.\n\nArguments\n\nx::Matrix: Factual data point.\ntarget::RawTargetType: Target class.\ndata::CounterfactualData: Counterfactual data.\nM::Models.AbstractModel: Fitted model.\ngenerator::AbstractGenerator: Generator.\nnum_counterfactuals::Int=1: Number of counterfactuals to generate for factual.\ninitialization::Symbol=:add_perturbation: Initialization method. By default, the initialization is done by adding a small random perturbation to the factual to achieve more robustness.\nconvergence::Union{AbstractConvergence,Symbol}=:decision_threshold: Convergence criterion. By default, the convergence is based on the decision threshold. Possible values are :decision_threshold, :max_iter, :generator_conditions or a concrete convergence object (e.g. DecisionThresholdConvergence). \ntimeout::Union{Nothing,Real}=nothing: Timeout in seconds.\n\nExamples\n\nGeneric generator\n\njulia> using CounterfactualExplanations\n\njulia> using TaijaData\n \n # Counterfactual data and model:\n\njulia> counterfactual_data = CounterfactualData(load_linearly_separable()...);\n\njulia> M = fit_model(counterfactual_data, :Linear);\n\njulia> target = 2;\n\njulia> factual = 1;\n\njulia> chosen = rand(findall(predict_label(M, counterfactual_data) .== factual));\n\njulia> x = select_factual(counterfactual_data, chosen);\n \n # Search:\n\njulia> generator = Generators.GenericGenerator();\n\njulia> ce = generate_counterfactual(x, target, counterfactual_data, M, generator);\n\njulia> converged(ce.convergence, ce)\ntrue\n\nBroadcasting\n\nThe generate_counterfactual method can also be broadcasted over a tuple containing an array. This allows for generating multiple counterfactuals in parallel. 
\n\njulia> chosen = rand(findall(predict_label(M, counterfactual_data) .== factual), 5);\n\njulia> xs = select_factual(counterfactual_data, chosen);\n\njulia> ces = generate_counterfactual.(xs, target, counterfactual_data, M, generator);\n\njulia> converged(ces[1].convergence, ces[1])\ntrue\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.generate_counterfactual-Tuple{Matrix, Union{Int64, AbstractFloat, String, Symbol}, CounterfactualData, AbstractModel, GrowingSpheresGenerator}","page":"🧐 Reference","title":"CounterfactualExplanations.generate_counterfactual","text":"generate_counterfactual(\n x::Matrix,\n target::RawTargetType,\n data::DataPreprocessing.CounterfactualData,\n M::Models.AbstractModel,\n generator::Generators.GrowingSpheresGenerator;\n num_counterfactuals::Int=1,\n convergence::Union{AbstractConvergence,Symbol}=Convergence.DecisionThresholdConvergence(;\n decision_threshold=(1 / length(data.y_levels)), max_iter=1000\n ),\n kwrgs...,\n)\n\nOverloads the generate_counterfactual method for the GrowingSpheresGenerator generator.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.generate_counterfactual-Tuple{Tuple{AbstractArray}, Vararg{Any}}","page":"🧐 Reference","title":"CounterfactualExplanations.generate_counterfactual","text":"generate_counterfactual(x::Tuple{<:AbstractArray}, args...; kwargs...)\n\nOverloads the generate_counterfactual method to accept a tuple containing an array. This allows for broadcasting over Zip iterators.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.generate_counterfactual-Tuple{Vector{<:Matrix}, Union{Int64, AbstractFloat, String, Symbol}, CounterfactualData, AbstractModel, AbstractGenerator}","page":"🧐 Reference","title":"CounterfactualExplanations.generate_counterfactual","text":"generate_counterfactual(\n x::Vector{<:Matrix},\n target::RawTargetType,\n data::CounterfactualData,\n M::Models.AbstractModel,\n generator::AbstractGenerator;\n kwargs...,\n)\n\nOverloads the generate_counterfactual method to accept a vector of factuals x and return a vector of counterfactuals.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.get_target_index-Tuple{Any, Any}","page":"🧐 Reference","title":"CounterfactualExplanations.get_target_index","text":"get_target_index(y_levels, target)\n\nUtility that returns the index of target in y_levels.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.path-Tuple{CounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.path","text":"path(ce::CounterfactualExplanation)\n\nA convenience method that returns the entire counterfactual path.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.target_probs","page":"🧐 Reference","title":"CounterfactualExplanations.target_probs","text":"target_probs(\n ce::CounterfactualExplanation,\n x::Union{AbstractArray,Nothing}=nothing,\n)\n\nReturns the predicted probability of the target class for x. 
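For example, a hypothetical call on a counterfactual explanation ce generated as in the examples above might look like this:\n\ntarget_probs(ce) # target class probability of the current counterfactual\n\n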
If x is nothing, the predicted probability corresponding to the counterfactual value is returned.\n\n\n\n\n\n","category":"function"},{"location":"reference/#CounterfactualExplanations.terminated-Tuple{CounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.terminated","text":"terminated(ce::CounterfactualExplanation)\n\nA convenience method that checks if the counterfactual search has terminated.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.total_steps-Tuple{CounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.total_steps","text":"total_steps(ce::CounterfactualExplanation)\n\nA convenience method that returns the total number of steps of the counterfactual search.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Convergence.convergence_catalogue","page":"🧐 Reference","title":"CounterfactualExplanations.Convergence.convergence_catalogue","text":"convergence_catalogue\n\nA dictionary containing all convergence criteria.\n\n\n\n\n\n","category":"constant"},{"location":"reference/#CounterfactualExplanations.Convergence.DecisionThresholdConvergence","page":"🧐 Reference","title":"CounterfactualExplanations.Convergence.DecisionThresholdConvergence","text":"DecisionThresholdConvergence\n\nConvergence criterion based on the target class probability threshold. The search stops when the target class probability exceeds the predefined threshold.\n\nFields\n\ndecision_threshold::AbstractFloat: The predefined threshold for the target class probability.\nmax_iter::Int: The maximum number of iterations.\nmin_success_rate::AbstractFloat: The minimum success rate for the target class probability.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.Convergence.GeneratorConditionsConvergence","page":"🧐 Reference","title":"CounterfactualExplanations.Convergence.GeneratorConditionsConvergence","text":"GeneratorConditionsConvergence\n\nConvergence criterion for counterfactual explanations based on the generator conditions. 
The search stops when the gradients of the search objective are below a certain threshold and the generator conditions are satisfied.\n\nFields\n\ndecision_threshold::AbstractFloat: The threshold for the decision probability.\ngradient_tol::AbstractFloat: The tolerance for the gradients of the search objective.\nmax_iter::Int: The maximum number of iterations.\nmin_success_rate::AbstractFloat: The minimum success rate for the generator conditions (across counterfactuals).\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.Convergence.GeneratorConditionsConvergence-Tuple{}","page":"🧐 Reference","title":"CounterfactualExplanations.Convergence.GeneratorConditionsConvergence","text":"GeneratorConditionsConvergence(; decision_threshold=0.5, gradient_tol=1e-2, max_iter=100, min_success_rate=0.75, y_levels=nothing)\n\nOuter constructor for GeneratorConditionsConvergence.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Convergence.MaxIterConvergence","page":"🧐 Reference","title":"CounterfactualExplanations.Convergence.MaxIterConvergence","text":"MaxIterConvergence\n\nConvergence criterion based on the maximum number of iterations.\n\nFields\n\nmax_iter::Int: The maximum number of iterations.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.Convergence.converged","page":"🧐 Reference","title":"CounterfactualExplanations.Convergence.converged","text":"converged(\n convergence::DecisionThresholdConvergence,\n ce::AbstractCounterfactualExplanation,\n x::Union{AbstractArray,Nothing}=nothing,\n)\n\nChecks if the counterfactual search has converged when the convergence criterion is the decision threshold.\n\n\n\n\n\n","category":"function"},{"location":"reference/#CounterfactualExplanations.Convergence.converged-2","page":"🧐 Reference","title":"CounterfactualExplanations.Convergence.converged","text":"converged(\n convergence::GeneratorConditionsConvergence,\n ce::AbstractCounterfactualExplanation,\n x::Union{AbstractArray,Nothing}=nothing,\n)\n\nChecks if the counterfactual search has converged when the convergence criterion is generator_conditions.\n\n\n\n\n\n","category":"function"},{"location":"reference/#CounterfactualExplanations.Convergence.converged-3","page":"🧐 Reference","title":"CounterfactualExplanations.Convergence.converged","text":"converged(\n convergence::InvalidationRateConvergence,\n ce::AbstractCounterfactualExplanation,\n x::Union{AbstractArray,Nothing}=nothing,\n)\n\nChecks if the counterfactual search has converged when the convergence criterion is invalidation rate.\n\n\n\n\n\n","category":"function"},{"location":"reference/#CounterfactualExplanations.Convergence.converged-4","page":"🧐 Reference","title":"CounterfactualExplanations.Convergence.converged","text":"converged(\n convergence::MaxIterConvergence,\n ce::AbstractCounterfactualExplanation,\n x::Union{AbstractArray,Nothing}=nothing,\n)\n\nChecks if the counterfactual search has converged when the convergence criterion is maximum iterations. 
This means the counterfactual search will not terminate until the maximum number of iterations has been reached independently of the other convergence criteria.\n\n\n\n\n\n","category":"function"},{"location":"reference/#CounterfactualExplanations.Convergence.converged-Tuple{AbstractCounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.Convergence.converged","text":"converged(ce::AbstractCounterfactualExplanation)\n\nReturns true if the counterfactual explanation has converged.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Convergence.get_convergence_type-Tuple{AbstractConvergence, AbstractVector}","page":"🧐 Reference","title":"CounterfactualExplanations.Convergence.get_convergence_type","text":"get_convergence_type(convergence::AbstractConvergence)\n\nReturns the convergence object.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Convergence.get_convergence_type-Tuple{Symbol, AbstractVector}","page":"🧐 Reference","title":"CounterfactualExplanations.Convergence.get_convergence_type","text":"get_convergence_type(convergence::Symbol)\n\nReturns the convergence object from the dictionary of default convergence types.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Convergence.hinge_loss-Tuple{CounterfactualExplanations.Convergence.InvalidationRateConvergence, AbstractCounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.Convergence.hinge_loss","text":"hinge_loss(convergence::InvalidationRateConvergence, ce::AbstractCounterfactualExplanation)\n\nCalculates the hinge loss of a counterfactual explanation.\n\nArguments\n\nconvergence::InvalidationRateConvergence: The convergence criterion to use.\nce::AbstractCounterfactualExplanation: The counterfactual explanation to calculate the hinge loss for.\n\nReturns\n\nThe hinge loss of the counterfactual explanation.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Convergence.invalidation_rate-Tuple{AbstractCounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.Convergence.invalidation_rate","text":"invalidation_rate(ce::AbstractCounterfactualExplanation)\n\nCalculates the invalidation rate of a counterfactual explanation.\n\nArguments\n\nce::AbstractCounterfactualExplanation: The counterfactual explanation to calculate the invalidation rate for.\nkwargs: Additional keyword arguments to pass to the function.\n\nReturns\n\nThe invalidation rate of the counterfactual explanation.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Convergence.threshold_reached","page":"🧐 Reference","title":"CounterfactualExplanations.Convergence.threshold_reached","text":"threshold_reached(ce::AbstractCounterfactualExplanation, x::Union{AbstractArray,Nothing}=nothing)\n\nDetermines if the predefined threshold for the target class probability has been reached.\n\n\n\n\n\n","category":"function"},{"location":"reference/#CounterfactualExplanations.Evaluation.default_measures","page":"🧐 Reference","title":"CounterfactualExplanations.Evaluation.default_measures","text":"The default evaluation measures.\n\n\n\n\n\n","category":"constant"},{"location":"reference/#CounterfactualExplanations.Evaluation.Benchmark","page":"🧐 Reference","title":"CounterfactualExplanations.Evaluation.Benchmark","text":"A container for benchmarks of counterfactual explanations. 
Instead of subtyping DataFrame, it contains a DataFrame of evaluation measures (see this discussion for why we don't subtype DataFrame directly).\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.Evaluation.Benchmark-Tuple{}","page":"🧐 Reference","title":"CounterfactualExplanations.Evaluation.Benchmark","text":"(bmk::Benchmark)(; agg=mean)\n\nReturns a DataFrame containing evaluation measures aggregated by num_counterfactual.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Evaluation.benchmark-Tuple{CounterfactualData}","page":"🧐 Reference","title":"CounterfactualExplanations.Evaluation.benchmark","text":"benchmark(\n data::CounterfactualData;\n models::Dict{<:Any,<:Any}=standard_models_catalogue,\n generators::Union{Nothing,Dict{<:Any,<:AbstractGenerator}}=nothing,\n measure::Union{Function,Vector{Function}}=default_measures,\n n_individuals::Int=5,\n suppress_training::Bool=false,\n factual::Union{Nothing,RawTargetType}=nothing,\n target::Union{Nothing,RawTargetType}=nothing,\n store_ce::Bool=false,\n parallelizer::Union{Nothing,AbstractParallelizer}=nothing,\n kwrgs...,\n)\n\nRuns the benchmarking exercise as follows:\n\nRandomly choose a factual and target label unless specified. \nIf no pretrained models are provided, it is assumed that a dictionary of callable model objects is provided (by default using the standard_models_catalogue). \nEach of these models is then trained on the data. \nFor each model separately choose n_individuals randomly from the non-target (factual) class. For each generator create a benchmark as in benchmark(xs::Union{AbstractArray,Base.Iterators.Zip}).\nFinally, concatenate the results.\n\nIf vertical_splits is set to an integer, the computations are split vertically into vertical_splits chunks. In this case, the results are stored in a temporary directory and concatenated afterwards. \n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Evaluation.benchmark-Tuple{Union{AbstractArray, Base.Iterators.Zip}, Union{Int64, AbstractFloat, String, Symbol}, CounterfactualData}","page":"🧐 Reference","title":"CounterfactualExplanations.Evaluation.benchmark","text":"benchmark(\n x::Union{AbstractArray,Base.Iterators.Zip},\n target::RawTargetType,\n data::CounterfactualData;\n models::Dict{<:Any,<:AbstractModel},\n generators::Dict{<:Any,<:AbstractGenerator},\n measure::Union{Function,Vector{Function}}=default_measures,\n xids::Union{Nothing,AbstractArray}=nothing,\n dataname::Union{Nothing,Symbol,String}=nothing,\n verbose::Bool=true,\n store_ce::Bool=false,\n parallelizer::Union{Nothing,AbstractParallelizer}=nothing,\n kwrgs...,\n)\n\nFirst generates counterfactual explanations for factual x, the target and data using each of the provided models and generators. Then generates a Benchmark for the vector of counterfactual explanations as in benchmark(counterfactual_explanations::Vector{CounterfactualExplanation}).\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Evaluation.benchmark-Tuple{Vector{CounterfactualExplanation}}","page":"🧐 Reference","title":"CounterfactualExplanations.Evaluation.benchmark","text":"benchmark(\n counterfactual_explanations::Vector{CounterfactualExplanation};\n meta_data::Union{Nothing,<:Vector{<:Dict}}=nothing,\n measure::Union{Function,Vector{Function}}=default_measures,\n store_ce::Bool=false,\n)\n\nGenerates a Benchmark for a vector of counterfactual explanations. 
Optionally meta_data describing each individual counterfactual explanation can be supplied. This should be a vector of dictionaries of the same length as the vector of counterfactuals. If no meta_data is supplied, it will be automatically inferred. All measure functions are applied to each counterfactual explanation. If store_ce=true, the counterfactual explanations are stored in the benchmark.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Evaluation.evaluate","page":"🧐 Reference","title":"CounterfactualExplanations.Evaluation.evaluate","text":"evaluate(\n ce::CounterfactualExplanation;\n measure::Union{Function,Vector{Function}}=default_measures,\n agg::Function=mean,\n report_each::Bool=false,\n output_format::Symbol=:Vector,\n pivot_longer::Bool=true\n)\n\nComputes evaluation measures for the counterfactual explanation. By default, no meta data is reported. For report_meta=true, meta data is automatically inferred, unless this is overwritten by meta_data. The optional meta_data argument should be a vector of dictionaries of the same length as the vector of counterfactual explanations. \n\n\n\n\n\n","category":"function"},{"location":"reference/#CounterfactualExplanations.Evaluation.redundancy-Tuple{CounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.Evaluation.redundancy","text":"redundancy(ce::CounterfactualExplanation)\n\nComputes the feature redundancy: that is, the number of features that remain unchanged from their original, factual values.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Evaluation.validity-Tuple{CounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.Evaluation.validity","text":"validity(ce::CounterfactualExplanation; γ=0.5)\n\nChecks if the counterfactual search has been successful with respect to the probability threshold γ. In case multiple counterfactuals were generated, the function returns the proportion of successful counterfactuals.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.DataPreprocessing.CounterfactualData-Tuple{AbstractMatrix, Union{AbstractMatrix, AbstractVector}}","page":"🧐 Reference","title":"CounterfactualExplanations.DataPreprocessing.CounterfactualData","text":"CounterfactualData(\n X::AbstractMatrix,\n y::RawOutputArrayType;\n mutability::Union{Vector{Symbol},Nothing}=nothing,\n domain::Union{Any,Nothing}=nothing,\n features_categorical::Union{Vector{Vector{Int}},Nothing}=nothing,\n features_continuous::Union{Vector{Int},Nothing}=nothing,\n input_encoder::Union{Nothing,InputTransformer,TypedInputTransformer}=nothing,\n)\n\nThis outer constructor method prepares features X and labels y to be used with the package. Mutability and domain constraints can be added for the features. The function also accepts arguments that specify which features are categorical and which are continuous. These arguments are currently not used. 
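As a purely illustrative sketch (the mutability symbols and the domain tuple below are assumptions based on the signature above, not verified usage), constraints might be supplied like this:\n\n# Hypothetical: first feature mutable in both directions, second feature fixed,\n# and all continuous features constrained to the unit interval:\ncounterfactual_data = CounterfactualData(\n X, y;\n mutability=[:both, :none],\n domain=(0.0, 1.0),\n)\n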
\n\nExamples\n\nusing CounterfactualExplanations.Data\nx, y = toy_data_linear()\nX = hcat(x...)\ncounterfactual_data = CounterfactualData(X,y')\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.DataPreprocessing.CounterfactualData-Tuple{Tables.MatrixTable, Union{AbstractMatrix, AbstractVector}}","page":"🧐 Reference","title":"CounterfactualExplanations.DataPreprocessing.CounterfactualData","text":"function CounterfactualData(\n X::Tables.MatrixTable,\n y::RawOutputArrayType;\n kwrgs...\n)\n\nOuter constructor method that accepts a Tables.MatrixTable. By default, the indices of categorical and continuous features are automatically inferred from the features' scitype.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.DataPreprocessing.apply_domain_constraints-Tuple{CounterfactualData, AbstractArray}","page":"🧐 Reference","title":"CounterfactualExplanations.DataPreprocessing.apply_domain_constraints","text":"apply_domain_constraints(counterfactual_data::CounterfactualData, x::AbstractArray)\n\nA subroutine that is used to apply the predetermined domain constraints.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.DataPreprocessing.fit_transformer!-Tuple{CounterfactualData, Union{Nothing, CausalInference.SCM, CounterfactualExplanations.GenerativeModels.AbstractGenerativeModel, MultivariateStats.AbstractDimensionalityReduction, StatsBase.AbstractDataTransform, Type{<:StatsBase.AbstractDataTransform}, Type{<:MultivariateStats.AbstractDimensionalityReduction}, Type{<:CounterfactualExplanations.GenerativeModels.AbstractGenerativeModel}, Type{<:CausalInference.SCM}}}","page":"🧐 Reference","title":"CounterfactualExplanations.DataPreprocessing.fit_transformer!","text":"fit_transformer!(\n data::CounterfactualData,\n input_encoder::Union{Nothing,InputTransformer,TypedInputTransformer};\n kwargs...,\n)\n\nFit a transformer to the data in place.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.DataPreprocessing.fit_transformer-Tuple{CounterfactualData, Nothing}","page":"🧐 Reference","title":"CounterfactualExplanations.DataPreprocessing.fit_transformer","text":"fit_transformer(data::CounterfactualData, input_encoder::Nothing; kwargs...)\n\nFit a transformer to the data. 
This is a no-op if input_encoder is Nothing.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.DataPreprocessing.fit_transformer-Tuple{CounterfactualData, Type{<:CausalInference.SCM}}","page":"🧐 Reference","title":"CounterfactualExplanations.DataPreprocessing.fit_transformer","text":"fit_transformer(\n data::CounterfactualData,\n input_encoder::Type{<:CausalInference.SCM};\n kwargs...,\n)\n\nFit a transformer to the data for a SCM object.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.DataPreprocessing.fit_transformer-Tuple{CounterfactualData, Type{<:CounterfactualExplanations.GenerativeModels.AbstractGenerativeModel}}","page":"🧐 Reference","title":"CounterfactualExplanations.DataPreprocessing.fit_transformer","text":"fit_transformer(\n data::CounterfactualData,\n input_encoder::Type{GenerativeModels.AbstractGenerativeModel};\n kwargs...,\n)\n\nFit a transformer to the data for a GenerativeModels.AbstractGenerativeModel object.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.DataPreprocessing.fit_transformer-Tuple{CounterfactualData, Type{<:MultivariateStats.AbstractDimensionalityReduction}}","page":"🧐 Reference","title":"CounterfactualExplanations.DataPreprocessing.fit_transformer","text":"fit_transformer(\n data::CounterfactualData,\n input_encoder::Type{MultivariateStats.AbstractDimensionalityReduction};\n kwargs...,\n)\n\nFit a transformer to the data for a MultivariateStats.AbstractDimensionalityReduction object.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.DataPreprocessing.fit_transformer-Tuple{CounterfactualData, Type{<:StatsBase.AbstractDataTransform}}","page":"🧐 Reference","title":"CounterfactualExplanations.DataPreprocessing.fit_transformer","text":"fit_transformer(\n data::CounterfactualData,\n input_encoder::Type{StatsBase.AbstractDataTransform};\n kwargs...,\n)\n\nFit a transformer to the data for a StatsBase.AbstractDataTransform object.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.DataPreprocessing.fit_transformer-Tuple{CounterfactualData, Union{CausalInference.SCM, CounterfactualExplanations.GenerativeModels.AbstractGenerativeModel, MultivariateStats.AbstractDimensionalityReduction, StatsBase.AbstractDataTransform}}","page":"🧐 Reference","title":"CounterfactualExplanations.DataPreprocessing.fit_transformer","text":"fit_transformer(data::CounterfactualData, input_encoder::InputTransformer; kwargs...)\n\nFit a transformer to the data for an InputTransformer object. 
This is a no-op.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.DataPreprocessing.select_factual-Tuple{CounterfactualData, Int64}","page":"🧐 Reference","title":"CounterfactualExplanations.DataPreprocessing.select_factual","text":"select_factual(counterfactual_data::CounterfactualData, index::Int)\n\nA convenience method that can be used to access the feature matrix.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.DataPreprocessing.select_factual-Tuple{CounterfactualData, Union{UnitRange{Int64}, Vector{Int64}}}","page":"🧐 Reference","title":"CounterfactualExplanations.DataPreprocessing.select_factual","text":"select_factual(counterfactual_data::CounterfactualData, index::Union{Vector{Int},UnitRange{Int}})\n\nA convenience method that can be used to access the feature matrix.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.DataPreprocessing.transformable_features-Tuple{CounterfactualData, Any}","page":"🧐 Reference","title":"CounterfactualExplanations.DataPreprocessing.transformable_features","text":"transformable_features(counterfactual_data::CounterfactualData, input_encoder::Any)\n\nBy default, all continuous features are transformable. This function returns the indices of all continuous features.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.DataPreprocessing.transformable_features-Tuple{CounterfactualData, Type{CausalInference.SCM}}","page":"🧐 Reference","title":"CounterfactualExplanations.DataPreprocessing.transformable_features","text":"transformable_features(\n counterfactual_data::CounterfactualData, input_encoder::Type{CausalInference.SCM}\n)\n\nReturns the indices of all features that have causal parents.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.DataPreprocessing.transformable_features-Tuple{CounterfactualData, Type{StatsBase.ZScoreTransform}}","page":"🧐 Reference","title":"CounterfactualExplanations.DataPreprocessing.transformable_features","text":"transformable_features(\n counterfactual_data::CounterfactualData, input_encoder::Type{ZScoreTransform}\n)\n\nReturns the indices of all continuous features that can be transformed. 
For constant features ZScoreTransform returns NaN.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.DataPreprocessing.transformable_features-Tuple{CounterfactualData}","page":"🧐 Reference","title":"CounterfactualExplanations.DataPreprocessing.transformable_features","text":"transformable_features(counterfactual_data::CounterfactualData)\n\nDispatches the transformable_features function to the appropriate method based on the type of the dt field.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.all_models_catalogue","page":"🧐 Reference","title":"CounterfactualExplanations.Models.all_models_catalogue","text":"all_models_catalogue\n\nA dictionary containing both differentiable and non-differentiable machine learning models.\n\n\n\n\n\n","category":"constant"},{"location":"reference/#CounterfactualExplanations.Models.standard_models_catalogue","page":"🧐 Reference","title":"CounterfactualExplanations.Models.standard_models_catalogue","text":"standard_models_catalogue\n\nA dictionary containing all differentiable machine learning models.\n\n\n\n\n\n","category":"constant"},{"location":"reference/#CounterfactualExplanations.AbstractModel-Tuple{AbstractArray}","page":"🧐 Reference","title":"CounterfactualExplanations.AbstractModel","text":"(model::AbstractModel)(X::AbstractArray)\n\nWhen called on data x, logits are returned.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.DeepEnsemble-Tuple{Any}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.DeepEnsemble","text":"DeepEnsemble(model; likelihood::Symbol=:classification_binary)\n\nAn outer constructor for a deep ensemble model.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.Linear-Tuple{Any}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.Linear","text":"Linear(model; likelihood::Symbol=:classification_binary)\n\nAn outer constructor for a linear model.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.MLP-Tuple{Any}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.MLP","text":"MLP(model; likelihood::Symbol=:classification_binary)\n\nAn outer constructor for a multi-layer perceptron (MLP) model.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.Model","page":"🧐 Reference","title":"CounterfactualExplanations.Models.Model","text":"Model <: AbstractModel\n\nConstructor for all models.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.Models.Model-Tuple{Any, CounterfactualExplanations.Models.AbstractFluxNN}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.Model","text":"Model(model, type::AbstractFluxNN; likelihood::Symbol=:classification_binary)\n\nOverloaded constructor for Flux models.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.Model-Tuple{Any, CounterfactualExplanations.Models.AbstractModelType}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.Model","text":"Model(model, type::AbstractModelType; likelihood::Symbol=:classification_binary)\n\nOuter constructor for Model where the atomic model is defined and assumed to be pre-trained.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.Model-Tuple{CounterfactualData, DeepEnsemble}","page":"🧐 
Reference","title":"CounterfactualExplanations.Models.Model","text":"(M::Model)(data::CounterfactualData, type::DeepEnsemble; kwargs...)\n\nConstructs a deep ensemble for the given data.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.Model-Tuple{CounterfactualData, Linear}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.Model","text":"(M::Model)(data::CounterfactualData, type::Linear; kwargs...)\n\nConstructs a model with one linear layer for the given data. If the output is binary, this corresponds to logistic regression, since model outputs are passed through the sigmoid function. If the output is multi-class, this corresponds to multinomial logistic regression, since model outputs are passed through the softmax function.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.Model-Tuple{CounterfactualData, MLP}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.Model","text":"(M::Model)(data::CounterfactualData, type::MLP; kwargs...)\n\nConstructs a multi-layer perceptron (MLP) for the given data.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.Model-Tuple{CounterfactualData}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.Model","text":"(M::Model)(data::CounterfactualData; kwargs...)\n\nWrap model M around the data in data.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.Model-Tuple{CounterfactualExplanations.Models.AbstractModelType}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.Model","text":"Model(type::AbstractModelType; likelihood::Symbol=:classification_binary)\n\nOuter constructor for Model where the atomic model is not yet defined.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.fit_model","page":"🧐 Reference","title":"CounterfactualExplanations.Models.fit_model","text":"fit_model(\n counterfactual_data::CounterfactualData, model::Symbol=:MLP;\n kwrgs...\n)\n\nFits one of the available default models to the counterfactual_data. The model argument can be used to specify the desired model. 
The available values correspond to the keys of the all_models_catalogue dictionary.\n\n\n\n\n\n","category":"function"},{"location":"reference/#CounterfactualExplanations.Models.fit_model-Tuple{CounterfactualData, CounterfactualExplanations.Models.AbstractModelType}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.fit_model","text":"fit_model(\n counterfactual_data::CounterfactualData, type::AbstractModelType; kwrgs...\n)\n\nA wrapper function to fit a model to the counterfactual_data for a given type of model.\n\nArguments\n\ncounterfactual_data::CounterfactualData: The data to be used for training the model.\ntype::AbstractModelType: The type of model to be trained, e.g., MLP, DecisionTreeModel, etc.\n\nExamples\n\njulia> using CounterfactualExplanations\n\njulia> using CounterfactualExplanations.Models\n\njulia> using TaijaData\n\njulia> data = CounterfactualData(load_linearly_separable()...);\n\njulia> M = fit_model(data, Linear())\nCounterfactualExplanations.Models.Model(Chain(Dense(2 => 2)), :classification_multi, CounterfactualExplanations.Models.Fitresult(Chain(Dense(2 => 2)), Dict{Any, Any}()), Linear())\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.logits-Tuple{CounterfactualExplanations.Models.Model, AbstractArray}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.logits","text":"logits(M::Model, X::AbstractArray)\n\nReturns the logits of the model.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.logits-Tuple{CounterfactualExplanations.Models.Model, CounterfactualExplanations.Models.AbstractFluxNN, AbstractArray}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.logits","text":"logits(M::Model, type::AbstractFluxNN, X::AbstractArray)\n\nOverloads the logits function for Flux models.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.logits-Tuple{CounterfactualExplanations.Models.Model, CounterfactualExplanations.Models.MLJModelType, AbstractArray}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.logits","text":"logits(M::Model, type::MLJModelType, X::AbstractArray)\n\nOverloads the logits method for MLJ models.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.logits-Tuple{CounterfactualExplanations.Models.Model, DeepEnsemble, AbstractArray}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.logits","text":"logits(M::Model, type::DeepEnsemble, X::AbstractArray)\n\nOverloads the logits function for deep ensembles.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.model_evaluation-Tuple{AbstractModel, CounterfactualData}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.model_evaluation","text":"model_evaluation(M::AbstractModel, test_data::CounterfactualData)\n\nHelper function to compute evaluation measures for an AbstractModel on a (test) data set. By default, it computes the accuracy. Any other measure, e.g. from the StatisticalMeasures package, can be passed as an argument. 
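A hypothetical call (the measure keyword name is an assumption here; M and test_data are assumed to be a fitted model and a test set):\n\nusing StatisticalMeasures: accuracy\nmodel_evaluation(M, test_data; measure=accuracy)\n\n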
Currently, only measures applicable to classification tasks are supported.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.predict_label-Tuple{AbstractModel, CounterfactualData, AbstractArray}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.predict_label","text":"predict_label(M::AbstractModel, counterfactual_data::CounterfactualData, X::AbstractArray)\n\nReturns the predicted output label for a given model M, data set counterfactual_data and input data X.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.predict_label-Tuple{AbstractModel, CounterfactualData}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.predict_label","text":"predict_label(M::AbstractModel, counterfactual_data::CounterfactualData)\n\nReturns the predicted output labels for all data points of data set counterfactual_data for a given model M.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.predict_proba-Tuple{AbstractModel, Union{Nothing, CounterfactualData}, Union{Nothing, AbstractArray}}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.predict_proba","text":"predict_proba(M::AbstractModel, counterfactual_data::CounterfactualData, X::Union{Nothing,AbstractArray})\n\nReturns the predicted output probabilities for a given model M, data set counterfactual_data and input data X.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.probs-Tuple{CounterfactualExplanations.Models.Model, AbstractArray}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.probs","text":"probs(M::Model, X::AbstractArray)\n\nReturns the probabilities of the model.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.probs-Tuple{CounterfactualExplanations.Models.Model, CounterfactualExplanations.Models.AbstractFluxNN, AbstractArray}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.probs","text":"probs(M::Model, type::AbstractFluxNN, X::AbstractArray)\n\nOverloads the probs function for Flux models.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.probs-Tuple{CounterfactualExplanations.Models.Model, CounterfactualExplanations.Models.MLJModelType, AbstractArray}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.probs","text":"probs(\n M::Model,\n type::MLJModelType,\n X::AbstractArray,\n)\n\nOverloads the probs method for MLJ models. \n\nNote for developers\n\nNote that currently the underlying MLJ methods (reformat, predict) are incompatible with Zygote's autodiff. 
For differentiable MLJ models, the probs and logits methods need to be overloaded.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.probs-Tuple{CounterfactualExplanations.Models.Model, DeepEnsemble, AbstractArray}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.probs","text":"probs(M::Model, type::DeepEnsemble, X::AbstractArray)\n\nOverloads the probs function for deep ensembles.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Generators.generator_catalogue","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.generator_catalogue","text":"A dictionary containing the constructors of all available counterfactual generators.\n\n\n\n\n\n","category":"constant"},{"location":"reference/#CounterfactualExplanations.Generators.AbstractGradientBasedGenerator","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.AbstractGradientBasedGenerator","text":"AbstractGradientBasedGenerator\n\nAn abstract type that serves as the base type for gradient-based counterfactual generators. \n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.Generators.AbstractNonGradientBasedGenerator","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.AbstractNonGradientBasedGenerator","text":"AbstractNonGradientBasedGenerator\n\nAn abstract type that serves as the base type for non-gradient-based counterfactual generators. \n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.Generators.FeatureTweakGenerator","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.FeatureTweakGenerator","text":"Feature Tweak counterfactual generator class.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.Generators.FeatureTweakGenerator-Tuple{}","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.FeatureTweakGenerator","text":"FeatureTweakGenerator(; penalty::Union{Nothing,Function,Vector{Function}}=Objectives.distance_l2, ϵ::AbstractFloat=0.1)\n\nConstructs a new Feature Tweak Generator object.\n\nUses the L2-norm as the penalty to measure the distance between the counterfactual and the factual. According to the paper by Tolomei et al., another recommended choice for the penalty in addition to the L2-norm is the L0-norm. The L0-norm simply minimizes the number of features that are changed through the tweak.\n\nArguments\n\npenalty::Union{Nothing,Function,Vector{Function}}: The penalty function to use for the generator. Defaults to distance_l2.\nϵ::AbstractFloat: The tolerance value for the feature tweaks. Described at length in Tolomei et al. (https://arxiv.org/pdf/1706.06691.pdf). 
Defaults to 0.1.\n\nReturns\n\ngenerator::FeatureTweakGenerator: A non-gradient-based generator that can be used to generate counterfactuals using the feature tweak method.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Generators.GradientBasedGenerator","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.GradientBasedGenerator","text":"Base class for gradient-based counterfactual generators.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.Generators.GradientBasedGenerator-Tuple{}","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.GradientBasedGenerator","text":"GradientBasedGenerator(;\n\tloss::Union{Nothing,Function}=nothing,\n\tpenalty::Penalty=nothing,\n\tλ::Union{Nothing,AbstractFloat,Vector{AbstractFloat}}=nothing,\n\tlatent_space::Bool=false,\n\topt::Flux.Optimise.AbstractOptimiser=Flux.Descent(),\n generative_model_params::NamedTuple=(;),\n)\n\nDefault outer constructor for GradientBasedGenerator.\n\nArguments\n\nloss::Union{Nothing,Function}=nothing: The loss function used by the model.\npenalty::Penalty=nothing: A penalty function for the generator to penalize counterfactuals too far from the original point.\nλ::Union{Nothing,AbstractFloat,Vector{AbstractFloat}}=nothing: The weight of the penalty function.\nlatent_space::Bool=false: Whether to use the latent space of a generative model to generate counterfactuals.\nopt::Flux.Optimise.AbstractOptimiser=Flux.Descent(): The optimizer to use for the generator.\ngenerative_model_params::NamedTuple: The parameters of the generative model associated with the generator.\n\nReturns\n\ngenerator::GradientBasedGenerator: A gradient-based counterfactual generator.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Generators.GrowingSpheresGenerator","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.GrowingSpheresGenerator","text":"Growing Spheres counterfactual generator class.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.Generators.GrowingSpheresGenerator-Tuple{}","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.GrowingSpheresGenerator","text":"GrowingSpheresGenerator(; n::Int=100, η::Float64=0.1, kwargs...)\n\nConstructs a new Growing Spheres Generator object.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Generators.JSMADescent","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.JSMADescent","text":"An optimisation rule that can be used to implement a Jacobian-based Saliency Map Attack.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.Generators.JSMADescent-Tuple{}","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.JSMADescent","text":"Outer constructor for the JSMADescent rule.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Generators.CLUEGenerator-Tuple{}","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.CLUEGenerator","text":"Constructor for CLUEGenerator.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Generators.ClaPROARGenerator-Tuple{}","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.ClaPROARGenerator","text":"Constructor for ClaPROARGenerator.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Generators.DiCEGenerator-Tuple{}","page":"🧐 
Reference","title":"CounterfactualExplanations.Generators.DiCEGenerator","text":"Constructor for DiCEGenerator.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Generators.ECCoGenerator-Tuple{}","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.ECCoGenerator","text":"Constructor for ECCoGenerator. This corresponds to the generator proposed in https://arxiv.org/abs/2312.10648, without the conformal set size penalty.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Generators.GenericGenerator-Tuple{}","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.GenericGenerator","text":"Constructor for GenericGenerator.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Generators.GravitationalGenerator-Tuple{}","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.GravitationalGenerator","text":"Constructor for GravitationalGenerator.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Generators.GreedyGenerator-Tuple{}","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.GreedyGenerator","text":"Constructor for GreedyGenerator.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Generators.ProbeGenerator-Tuple{}","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.ProbeGenerator","text":"Constructor for ProbeGenerator.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Generators.REVISEGenerator-Tuple{}","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.REVISEGenerator","text":"Constructor for REVISEGenerator.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Generators.WachterGenerator-Tuple{}","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.WachterGenerator","text":"Constructor for WachterGenerator.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Generators.conditions_satisfied-Tuple{AbstractGradientBasedGenerator, AbstractCounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.conditions_satisfied","text":"conditions_satisfied(generator::AbstractGradientBasedGenerator, ce::AbstractCounterfactualExplanation)\n\nThe default method to check if the all conditions for convergence of the counterfactual search have been satisified for gradient-based generators. 
By default, gradient-based search is considered to have converged as soon as the proposed changes for all features are smaller than one percent of their respective standard deviations.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Generators.generate_perturbations-Tuple{AbstractGenerator, AbstractCounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.generate_perturbations","text":"generate_perturbations(\n generator::AbstractGenerator, ce::AbstractCounterfactualExplanation\n)\n\nThe default method to generate feature perturbations for any generator.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Generators.generate_perturbations-Tuple{AbstractGradientBasedGenerator, AbstractCounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.generate_perturbations","text":"generate_perturbations(generator::AbstractGradientBasedGenerator, ce::AbstractCounterfactualExplanation)\n\nThe default method to generate feature perturbations for gradient-based generators through simple gradient descent.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Generators.hinge_loss-Tuple{AbstractConvergence, AbstractCounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.hinge_loss","text":"hinge_loss(convergence::AbstractConvergence, ce::AbstractCounterfactualExplanation)\n\nThe default hinge loss for any convergence criterion. Can be overridden inside the Convergence module as part of the definition of specific convergence criteria.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Generators.@objective-Tuple{Any, Any}","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.@objective","text":"objective(generator, ex)\n\nA macro that can be used to define the counterfactual search objective.\n\n\n\n\n\n","category":"macro"},{"location":"reference/#CounterfactualExplanations.Generators.@search_feature_space-Tuple{Any}","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.@search_feature_space","text":"search_feature_space(generator)\n\nA simple macro that can be used to specify feature space search.\n\n\n\n\n\n","category":"macro"},{"location":"reference/#CounterfactualExplanations.Generators.@search_latent_space-Tuple{Any}","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.@search_latent_space","text":"search_latent_space(generator)\n\nA simple macro that can be used to specify latent space search.\n\n\n\n\n\n","category":"macro"},{"location":"reference/#CounterfactualExplanations.Generators.@with_optimiser-Tuple{Any, Any}","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.@with_optimiser","text":"with_optimiser(generator, optimiser)\n\nA simple macro that can be used to specify the optimiser to be used.\n\n\n\n\n\n","category":"macro"},{"location":"reference/#CounterfactualExplanations.Objectives.ddp_diversity-Tuple{AbstractCounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.Objectives.ddp_diversity","text":"ddp_diversity(\n ce::AbstractCounterfactualExplanation;\n perturbation_size=1e-5\n)\n\nEvaluates how diverse the counterfactuals are using a Determinantal Point Process (DPP).\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Objectives.distance-Tuple{AbstractCounterfactualExplanation}","page":"🧐 
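For intuition on ddp_diversity above: the determinant of a similarity kernel over a set of counterfactuals grows as the points become more dissimilar. A minimal sketch using a hypothetical RBF kernel (not necessarily the package's internal kernel choice):

```julia
using LinearAlgebra

X = rand(2, 3)  # three candidate counterfactuals with two features each
K = [exp(-norm(xi .- xj)^2) for xi in eachcol(X), xj in eachcol(X)]
diversity = det(K + 1e-5 * I)  # jitter plays the role of perturbation_size
```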
Reference","title":"CounterfactualExplanations.Objectives.distance","text":"distance(\n ce::AbstractCounterfactualExplanation;\n from::Union{Nothing,AbstractArray}=nothing,\n agg=mean,\n p::Real=1,\n weights::Union{Nothing,AbstractArray}=nothing,\n)\n\nComputes the distance of the counterfactual to the original factual.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Objectives.distance_l0-Tuple{AbstractCounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.Objectives.distance_l0","text":"distance_l0(ce::AbstractCounterfactualExplanation)\n\nComputes the L0 distance of the counterfactual to the original factual.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Objectives.distance_l1-Tuple{AbstractCounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.Objectives.distance_l1","text":"distance_l1(ce::AbstractCounterfactualExplanation)\n\nComputes the L1 distance of the counterfactual to the original factual.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Objectives.distance_l2-Tuple{AbstractCounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.Objectives.distance_l2","text":"distance_l2(ce::AbstractCounterfactualExplanation)\n\nComputes the L2 (Euclidean) distance of the counterfactual to the original factual.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Objectives.distance_linf-Tuple{AbstractCounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.Objectives.distance_linf","text":"distance_linf(ce::AbstractCounterfactualExplanation)\n\nComputes the L-inf distance of the counterfactual to the original factual.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Objectives.distance_mad-Tuple{AbstractCounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.Objectives.distance_mad","text":"distance_mad(ce::AbstractCounterfactualExplanation; agg=mean)\n\nThis is the distance measure proposed by Wachter et al. (2017).\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Objectives.predictive_entropy-Tuple{AbstractCounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.Objectives.predictive_entropy","text":"predictive_entropy(ce::AbstractCounterfactualExplanation; agg=Statistics.mean)\n\nComputes the predictive entropy of the counterfactuals. 
Explained in https://arxiv.org/abs/1406.2541.\n\n\n\n\n\n","category":"method"},{"location":"reference/#Flux.Losses.logitbinarycrossentropy-Tuple{AbstractCounterfactualExplanation}","page":"🧐 Reference","title":"Flux.Losses.logitbinarycrossentropy","text":"Flux.Losses.logitbinarycrossentropy(ce::AbstractCounterfactualExplanation)\n\nSimply extends the logitbinarycrossentropy method to work with objects of type AbstractCounterfactualExplanation.\n\n\n\n\n\n","category":"method"},{"location":"reference/#Flux.Losses.logitcrossentropy-Tuple{AbstractCounterfactualExplanation}","page":"🧐 Reference","title":"Flux.Losses.logitcrossentropy","text":"Flux.Losses.logitcrossentropy(ce::AbstractCounterfactualExplanation)\n\nSimply extends the logitcrossentropy method to work with objects of type AbstractCounterfactualExplanation.\n\n\n\n\n\n","category":"method"},{"location":"reference/#Flux.Losses.mse-Tuple{AbstractCounterfactualExplanation}","page":"🧐 Reference","title":"Flux.Losses.mse","text":"Flux.Losses.mse(ce::AbstractCounterfactualExplanation)\n\nSimply extends the mse method to work with objects of type AbstractCounterfactualExplanation.\n\n\n\n\n\n","category":"method"},{"location":"reference/#Internal-functions","page":"🧐 Reference","title":"Internal functions","text":"","category":"section"},{"location":"reference/","page":"🧐 Reference","title":"🧐 Reference","text":"Modules = [\n CounterfactualExplanations, \n CounterfactualExplanations.Convergence,\n CounterfactualExplanations.Evaluation,\n CounterfactualExplanations.DataPreprocessing,\n CounterfactualExplanations.Models, \n CounterfactualExplanations.GenerativeModels,\n CounterfactualExplanations.Generators, \n CounterfactualExplanations.Objectives\n]\nPublic = false","category":"page"},{"location":"reference/#CounterfactualExplanations.CRE","page":"🧐 Reference","title":"CounterfactualExplanations.CRE","text":"CRE <: AbstractCounterfactualExplanation\n\nA Counterfactual Rule Explanation (CRE) is a global explanation for a given target, model M, data and generator.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.CRE-Tuple{AbstractArray}","page":"🧐 Reference","title":"CounterfactualExplanations.CRE","text":"(cre::CRE)(x::AbstractArray)\n\nGenerates a local counterfactual point explanation for x using the generator.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.DecisionTreeModel","page":"🧐 Reference","title":"CounterfactualExplanations.DecisionTreeModel","text":"DecisionTreeModel\n\nConcrete type for tree-based models from DecisionTree.jl. Since DecisionTree.jl has an MLJ interface, we subtype the MLJModelType model type.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.FluxModelParams","page":"🧐 Reference","title":"CounterfactualExplanations.FluxModelParams","text":"FluxModelParams\n\nDefault MLP training parameters.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.JEM","page":"🧐 Reference","title":"CounterfactualExplanations.JEM","text":"JEM\n\nConcrete type for joint-energy models from JointEnergyModels. Since JointEnergyModels has an MLJ interface, we subtype the MLJModelType model type.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.LaplaceReduxModel","page":"🧐 Reference","title":"CounterfactualExplanations.LaplaceReduxModel","text":"LaplaceReduxModel\n\nConcrete type for neural networks with Laplace Approximation from the LaplaceRedux package. 
Currently subtyping the AbstractFluxNN model type, although this may be changed to MLJ in the future.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.NeuroTreeModel","page":"🧐 Reference","title":"CounterfactualExplanations.NeuroTreeModel","text":"NeuroTreeModel\n\nConcrete type for differentiable tree-based models from NeuroTreeModels. Since NeuroTreeModels has an MLJ interface, we subtype the MLJModelType model type.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.RandomForestModel","page":"🧐 Reference","title":"CounterfactualExplanations.RandomForestModel","text":"RandomForestModel\n\nConcrete type for random forest model from DecisionTree.jl. Since the DecisionTree package has an MLJ interface, we subtype the MLJModelType model type.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.Rule","page":"🧐 Reference","title":"CounterfactualExplanations.Rule","text":"Rule\n\nA Rule is just a list of bounds for the different features. See also CRE.\n\n\n\n\n\n","category":"type"},{"location":"reference/#Base.Broadcast.broadcastable-Tuple{AbstractGenerator}","page":"🧐 Reference","title":"Base.Broadcast.broadcastable","text":"Treat AbstractGenerator as scalar when broadcasting.\n\n\n\n\n\n","category":"method"},{"location":"reference/#Base.Broadcast.broadcastable-Tuple{AbstractModel}","page":"🧐 Reference","title":"Base.Broadcast.broadcastable","text":"Treat AbstractModel as scalar when broadcasting.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.adjust_shape!-Tuple{CounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.adjust_shape!","text":"adjust_shape!(ce::CounterfactualExplanation)\n\nA convenience method that adjusts the dimensions of the counterfactual state and related fields.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.adjust_shape-Tuple{CounterfactualExplanation, AbstractArray}","page":"🧐 Reference","title":"CounterfactualExplanations.adjust_shape","text":"adjust_shape(\n ce::CounterfactualExplanation, \n x::AbstractArray\n)\n\nA convenience method that adjusts the dimensions of x.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.already_in_target_class-Tuple{AbstractCounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.already_in_target_class","text":"already_in_target_class(ce::CounterfactualExplanation)\n\nCheck if the factual is already in the target class.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.apply_domain_constraints!-Tuple{CounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.apply_domain_constraints!","text":"apply_domain_constraints!(ce::CounterfactualExplanation)\n\nWrapper function that applies underlying domain constraints.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.apply_mutability-Tuple{CounterfactualExplanation, AbstractArray}","page":"🧐 Reference","title":"CounterfactualExplanations.apply_mutability","text":"apply_mutability(\n ce::CounterfactualExplanation,\n Δs′::AbstractArray,\n)\n\nA subroutine that applies mutability constraints to the proposed vector of feature perturbations.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.counterfactual-Tuple{CounterfactualExplanation}","page":"🧐 
Reference","title":"CounterfactualExplanations.counterfactual","text":"counterfactual(ce::CounterfactualExplanation)\n\nA convenience method that returns the counterfactual.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.counterfactual_label-Tuple{CounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.counterfactual_label","text":"counterfactual_label(ce::CounterfactualExplanation)\n\nA convenience method that returns the predicted label of the counterfactual.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.counterfactual_label_path-Tuple{CounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.counterfactual_label_path","text":"counterfactual_label_path(ce::CounterfactualExplanation)\n\nReturns the counterfactual labels for each step of the search.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.counterfactual_probability","page":"🧐 Reference","title":"CounterfactualExplanations.counterfactual_probability","text":"counterfactual_probability(ce::CounterfactualExplanation)\n\nA convenience method that computes the class probabilities of the counterfactual.\n\n\n\n\n\n","category":"function"},{"location":"reference/#CounterfactualExplanations.counterfactual_probability_path-Tuple{CounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.counterfactual_probability_path","text":"counterfactual_probability_path(ce::CounterfactualExplanation)\n\nReturns the counterfactual probabilities for each step of the search.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.decode_array-Tuple{CounterfactualData, CausalInference.SCM, AbstractArray}","page":"🧐 Reference","title":"CounterfactualExplanations.decode_array","text":"decode_array(\n data::CounterfactualData,\n dt::CausalInference.SCM,\n x::AbstractArray,\n)\n\nHelper function to decode an array x using a data transform dt::GenerativeModels.AbstractGenerativeModel.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.decode_array-Tuple{CounterfactualData, CounterfactualExplanations.GenerativeModels.AbstractGenerativeModel, AbstractArray}","page":"🧐 Reference","title":"CounterfactualExplanations.decode_array","text":"decode_array(dt::GenerativeModels.AbstractGenerativeModel, x::AbstractArray)\n\nHelper function to decode an array x using a data transform dt::GenerativeModels.AbstractGenerativeModel.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.decode_array-Tuple{CounterfactualData, MultivariateStats.AbstractDimensionalityReduction, AbstractArray}","page":"🧐 Reference","title":"CounterfactualExplanations.decode_array","text":"decode_array(dt::MultivariateStats.AbstractDimensionalityReduction, x::AbstractArray)\n\nHelper function to decode an array x using a data transform dt::MultivariateStats.AbstractDimensionalityReduction.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.decode_array-Tuple{CounterfactualData, Nothing, AbstractArray}","page":"🧐 Reference","title":"CounterfactualExplanations.decode_array","text":"decode_array(dt::Nothing, x::AbstractArray)\n\nHelper function to decode an array x using a data transform dt::Nothing. 
This is a no-op.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.decode_array-Tuple{CounterfactualData, StatsBase.AbstractDataTransform, AbstractArray}","page":"🧐 Reference","title":"CounterfactualExplanations.decode_array","text":"decode_array(dt::StatsBase.AbstractDataTransform, x::AbstractArray)\n\nHelper function to decode an array x using a data transform dt::StatsBase.AbstractDataTransform.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.decode_state","page":"🧐 Reference","title":"CounterfactualExplanations.decode_state","text":"decode_state(ce::CounterfactualExplanation, x::Union{AbstractArray,Nothing}=nothing)\n\nApplies all the applicable decoding functions:\n\nIf applicable, map the state variable back from the latent space to the feature space.\nIf and where applicable, inverse-transform features.\nReconstruct all categorical encodings.\n\nFinally, the decoded counterfactual is returned.\n\n\n\n\n\n","category":"function"},{"location":"reference/#CounterfactualExplanations.decode_state!","page":"🧐 Reference","title":"CounterfactualExplanations.decode_state!","text":"decode_state!(ce::CounterfactualExplanation, x::Union{AbstractArray,Nothing}=nothing)\n\nIn-place version of decode_state.\n\n\n\n\n\n","category":"function"},{"location":"reference/#CounterfactualExplanations.encode_array-Tuple{CounterfactualData, CausalInference.SCM, AbstractArray}","page":"🧐 Reference","title":"CounterfactualExplanations.encode_array","text":"encode_array(data::CounterfactualData, dt::CausalInference.SCM, x::AbstractArray)\n\nHelper function to encode an array x using a data transform dt::CausalInference.SCM. This is a no-op.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.encode_array-Tuple{CounterfactualData, CounterfactualExplanations.GenerativeModels.AbstractGenerativeModel, AbstractArray}","page":"🧐 Reference","title":"CounterfactualExplanations.encode_array","text":"encode_array(dt::GenerativeModels.AbstractGenerativeModel, x::AbstractArray)\n\nHelper function to encode an array x using a data transform dt::GenerativeModels.AbstractGenerativeModel.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.encode_array-Tuple{CounterfactualData, MultivariateStats.AbstractDimensionalityReduction, AbstractArray}","page":"🧐 Reference","title":"CounterfactualExplanations.encode_array","text":"encode_array(dt::MultivariateStats.AbstractDimensionalityReduction, x::AbstractArray)\n\nHelper function to encode an array x using a data transform dt::MultivariateStats.AbstractDimensionalityReduction.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.encode_array-Tuple{CounterfactualData, Nothing, AbstractArray}","page":"🧐 Reference","title":"CounterfactualExplanations.encode_array","text":"encode_array(dt::Nothing, x::AbstractArray)\n\nHelper function to encode an array x using a data transform dt::Nothing. 
This is a no-op.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.encode_array-Tuple{CounterfactualData, StatsBase.AbstractDataTransform, AbstractArray}","page":"🧐 Reference","title":"CounterfactualExplanations.encode_array","text":"encode_array(dt::StatsBase.AbstractDataTransform, x::AbstractArray)\n\nHelper function to encode an array x using a data transform dt::StatsBase.AbstractDataTransform.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.encode_state","page":"🧐 Reference","title":"CounterfactualExplanations.encode_state","text":"encode_state(ce::CounterfactualExplanation, x::Union{AbstractArray,Nothing}=nothing)\n\nApplies all required encodings to x:\n\nIf applicable, it maps x to the latent space learned by the generative model.\nIf and where applicable, it rescales features. \n\nFinally, it returns the encoded state variable.\n\n\n\n\n\n","category":"function"},{"location":"reference/#CounterfactualExplanations.encode_state!","page":"🧐 Reference","title":"CounterfactualExplanations.encode_state!","text":"encode_state!(ce::CounterfactualExplanation, x::Union{AbstractArray,Nothing}=nothing)\n\nIn-place version of encode_state.\n\n\n\n\n\n","category":"function"},{"location":"reference/#CounterfactualExplanations.factual-Tuple{CounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.factual","text":"factual(ce::CounterfactualExplanation)\n\nA convenience method to retrieve the factual x.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.factual_label-Tuple{CounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.factual_label","text":"factual_label(ce::CounterfactualExplanation)\n\nA convenience method to get the predicted label associated with the factual.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.factual_probability-Tuple{CounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.factual_probability","text":"factual_probability(ce::CounterfactualExplanation)\n\nA convenience method to compute the class probabilities of the factual.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.find_potential_neighbours","page":"🧐 Reference","title":"CounterfactualExplanations.find_potential_neighbours","text":"find_potential_neighbours(ce::AbstractCounterfactualExplanation)\n\nFinds potential neighbours for the selected factual data point.\n\n\n\n\n\n","category":"function"},{"location":"reference/#CounterfactualExplanations.get_meta-Tuple{CounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.get_meta","text":"get_meta(ce::CounterfactualExplanation)\n\nReturns meta data for a counterfactual explanation.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.guess_likelihood-Tuple{Union{AbstractMatrix, AbstractVector}}","page":"🧐 Reference","title":"CounterfactualExplanations.guess_likelihood","text":"guess_likelihood(y::RawOutputArrayType)\n\nGuess the likelihood based on the scientific type of the output array. 
Returns a symbol indicating the guessed likelihood and the scientific type of the output array.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.guess_loss-Tuple{CounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.guess_loss","text":"guess_loss(ce::CounterfactualExplanation)\n\nGuesses the loss function to be used for the counterfactual search in case likelihood field is specified for the AbstractModel instance and no loss function was explicitly declared for AbstractGenerator instance.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.initialize!-Tuple{CounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.initialize!","text":"initialize!(ce::CounterfactualExplanation)\n\nInitializes the counterfactual explanation. This method is called by the constructor. It does the following:\n\nCreates a dictionary to store information about the search.\nInitializes the counterfactual state.\nInitializes the search path.\nInitializes the loss.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.initialize_state!-Tuple{CounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.initialize_state!","text":"initialize_state!(ce::CounterfactualExplanation)\n\nInitializes the starting point for the factual(s) in-place.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.initialize_state-Tuple{CounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.initialize_state","text":"initialize_state(ce::CounterfactualExplanation)\n\nInitializes the starting point for the factual(s):\n\nIf ce.initialization is set to :identity or counterfactuals are searched in a latent space, then nothing is done.\nIf ce.initialization is set to :add_perturbation, then a random perturbation is added to the factual following Slack (2021): https://arxiv.org/abs/2106.02666. The authors show that this improves adversarial robustness.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.outdim-Tuple{CounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.outdim","text":"outdim(ce::CounterfactualExplanation)\n\nA convenience method that returns the output dimension of the predictive model.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.polynomial_decay-Tuple{Real, Real, Real, Int64}","page":"🧐 Reference","title":"CounterfactualExplanations.polynomial_decay","text":"polynomial_decay(a::Real, b::Real, decay::Real, t::Int)\n\nComputes the polynomial decay function as in Welling et al. 
(2011): https://www.stats.ox.ac.uk/~teh/research/compstats/WelTeh2011a.pdf.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.reset!-Tuple{CounterfactualExplanations.FluxModelParams}","page":"🧐 Reference","title":"CounterfactualExplanations.reset!","text":"reset!(flux_training_params::FluxModelParams)\n\nRestores the default parameter values.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.steps_exhausted-Tuple{CounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.steps_exhausted","text":"steps_exhausted(ce::CounterfactualExplanation)\n\nA convenience method that checks if the number of maximum iterations has been exhausted.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.target_probs_path-Tuple{CounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.target_probs_path","text":"target_probs_path(ce::CounterfactualExplanation)\n\nReturns the target probabilities for each step of the search.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.update!-Tuple{CounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.update!","text":"update!(ce::CounterfactualExplanation)\n\nAn important subroutine that updates the counterfactual explanation. It takes a snapshot of the current counterfactual search state and passes it to the generator. Based on the current state the generator generates perturbations. Various constraints are then applied to the proposed vector of feature perturbations. Finally, the counterfactual search state is updated.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Convergence.max_iter-Tuple{AbstractConvergence}","page":"🧐 Reference","title":"CounterfactualExplanations.Convergence.max_iter","text":"max_iter(conv::AbstractConvergence)\n\nReturns the maximum number of iterations specified.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Evaluation.distance_measures","page":"🧐 Reference","title":"CounterfactualExplanations.Evaluation.distance_measures","text":"All distance measures.\n\n\n\n\n\n","category":"constant"},{"location":"reference/#CounterfactualExplanations.Evaluation.EnergySampler","page":"🧐 Reference","title":"CounterfactualExplanations.Evaluation.EnergySampler","text":"Base type that stores information relevant to energy-based posterior sampling from AbstractModel.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.Evaluation.EnergySampler-Tuple{AbstractModel, Distributions.Distribution, Distributions.Distribution, Tuple{Vararg{Int64, N}} where N, Int64}","page":"🧐 Reference","title":"CounterfactualExplanations.Evaluation.EnergySampler","text":"EnergySampler(\n model::AbstractModel,\n 𝒟x::Distribution,\n 𝒟y::Distribution,\n input_size::Dims,\n yidx::Int;\n opt::Union{Nothing,AbstractSamplingRule}=nothing,\n nsamples::Int=100,\n niter_final::Int=1000,\n ntransitions::Int=0,\n opt_warmup::Union{Nothing,AbstractSamplingRule}=nothing,\n niter::Int=20,\n batch_size::Int=50,\n prob_buffer::AbstractFloat=0.95,\n kwargs...,\n)\n\nConstructor for EnergySampler, which is used to sample from the posterior distribution of the model conditioned on y.\n\nArguments\n\nmodel::AbstractModel: The model to be used for sampling.\ndata::CounterfactualData: The data to be used for sampling.\ny::Any: The conditioning 
value.\nopt::AbstractSamplingRule=ImproperSGLD(): The sampling rule to be used. By default, SGLD is used with a = (2 / std(Uniform())) * std(𝒟x), b = 1 and γ = 0.9.\nnsamples::Int=100: The number of samples to include in the final empirical posterior distribution.\nniter_final::Int=1000: The number of iterations for generating samples from the posterior distribution. Typically, this number will be larger than the number of iterations during PMC training. \nntransitions::Int=0: The number of transitions for (optionally) warming up the sampler. By default, this is set to 0 and the sampler is not warmed up. For values larger than 0, the sampler is trained through PMC for niter iterations and ntransitions transitions to build a buffer of samples. The buffer is used for posterior sampling.\nopt_warmup::Union{Nothing,AbstractSamplingRule}=nothing: The sampling rule to be used for warm-up. By default, ImproperSGLD is used with α = (2 / std(Uniform())) * std(𝒟x) and γ = 0.005α.\nniter::Int=20: The number of iterations for training the sampler through PMC.\nbatch_size::Int=50: The batch size for training the sampler.\nprob_buffer::AbstractFloat=0.95: The probability of drawing samples from the replay buffer. Smaller values will result in more samples being drawn from the prior and typically lead to better mixing and diversity in the samples.\nkwargs...: Additional keyword arguments to be passed on to the sampler and PMC.\n\nReturns\n\nEnergySampler: An instance of EnergySampler.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Evaluation.EnergySampler-Tuple{CounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.Evaluation.EnergySampler","text":"EnergySampler(ce::CounterfactualExplanation; kwrgs...)\n\nOverloads the EnergySampler constructor to accept a CounterfactualExplanation object.\n\n\n\n\n\n","category":"method"},{"location":"reference/#Base.rand","page":"🧐 Reference","title":"Base.rand","text":"Base.rand(sampler::EnergySampler, n::Int=100; from_posterior=true, niter::Int=500)\n\nOverloads the rand method to randomly draw n samples from EnergySampler. If from_posterior is true, the samples are drawn from the posterior distribution. Otherwise, the samples are generated from the model conditioned on the target value using a single chain (see generate_posterior_samples).\n\nArguments\n\nsampler::EnergySampler: The EnergySampler object to be used for sampling.\nn::Int=100: The number of samples to draw.\nfrom_posterior::Bool=true: Whether to draw samples from the posterior distribution.\nniter::Int=500: The number of iterations for generating samples through Monte Carlo sampling (single chain).\n\nReturns\n\nAbstractArray: The samples.\n\n\n\n\n\n","category":"function"},{"location":"reference/#Base.vcat-Tuple{CounterfactualExplanations.Evaluation.Benchmark, CounterfactualExplanations.Evaluation.Benchmark}","page":"🧐 Reference","title":"Base.vcat","text":"Base.vcat(bmk1::Benchmark, bmk2::Benchmark)\n\nVertically concatenates two Benchmark objects.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Evaluation.compute_measure-Tuple{CounterfactualExplanation, Function, Function}","page":"🧐 Reference","title":"CounterfactualExplanations.Evaluation.compute_measure","text":"compute_measure(ce::CounterfactualExplanation, measure::Function, agg::Function)\n\nComputes a single measure for a counterfactual explanation. 
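Tying the EnergySampler methods documented above together, a hedged usage sketch that assumes a previously generated CounterfactualExplanation ce:

```julia
using CounterfactualExplanations.Evaluation: EnergySampler

sampler = EnergySampler(ce; nsamples=100)  # conditioned on the target of `ce`
X̂ = rand(sampler, 50)                      # draw 50 posterior samples
```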
The measure is applied to the counterfactual explanation ce and aggregated using the aggregation function agg.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Evaluation.define_prior-Tuple{CounterfactualData}","page":"🧐 Reference","title":"CounterfactualExplanations.Evaluation.define_prior","text":"define_prior(\n data::CounterfactualData;\n 𝒟x::Union{Nothing,Distribution}=nothing,\n 𝒟y::Union{Nothing,Distribution}=nothing,\n n_std::Int=3,\n)\n\nDefines the prior for the data. The space is defined as a uniform distribution with bounds defined by the mean and standard deviation of the data. The bounds are extended by n_std standard deviations.\n\nArguments\n\ndata::CounterfactualData: The data to be used for defining the prior sampling space.\nn_std::Int=3: The number of standard deviations to extend the bounds.\n\nReturns\n\nUniform: The uniform distribution defining the prior sampling space.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Evaluation.distance_from_posterior-Tuple{AbstractCounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.Evaluation.distance_from_posterior","text":"distance_from_posterior(ce::AbstractCounterfactualExplanation)\n\nComputes the distance from the counterfactual to generated conditional samples. The distance is computed as the mean distance from the counterfactual to the samples drawn from the posterior distribution of the model. By default, the cosine distance is used.\n\nArguments\n\nce::AbstractCounterfactualExplanation: The counterfactual explanation object.\nnsamples::Int=1000: The number of samples to draw.\nfrom_posterior::Bool=true: Whether to draw samples from the posterior distribution.\nagg: The aggregation function to use for computing the distance.\nchoose_lowest_energy::Bool=true: Whether to choose the samples with the lowest energy.\nchoose_random::Bool=false: Whether to choose random samples.\nnmin::Int=25: The minimum number of samples to choose.\np::Int=1: The norm to use for computing the distance.\ncosine::Bool=true: Whether to use the cosine distance.\nkwargs...: Additional keyword arguments to be passed on to the EnergySampler.\n\nReturns\n\nAbstractFloat: The distance from the counterfactual to the samples.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Evaluation.faithfulness-Tuple{CounterfactualExplanation, typeof(CounterfactualExplanations.Evaluation.distance_from_posterior)}","page":"🧐 Reference","title":"CounterfactualExplanations.Evaluation.faithfulness","text":"faithfulness(\n ce::CounterfactualExplanation,\n fun::typeof(distance_from_posterior);\n λ::AbstractFloat=1.0,\n kwrgs...,\n)\n\nComputes the faithfulness of a counterfactual explanation based on the cosine similarity between the counterfactual and samples drawn from the model posterior through SGLD (see distance_from_posterior).\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Evaluation.generate_posterior_samples","page":"🧐 Reference","title":"CounterfactualExplanations.Evaluation.generate_posterior_samples","text":"generate_posterior_samples(\n e::EnergySampler, n::Int=1000; niter::Int=1000, kwargs...\n)\n\nGenerates n samples from the posterior distribution of the model conditioned on the target value y. The samples are generated through (Persistent) Monte Carlo sampling using the EnergySampler object. If the replay buffer is not empty, the initial samples are drawn from the buffer. 
\n\nNote that the batch size of the sampler is set to round(Int, n / 100) by default for sampling. This is to ensure that the samples are drawn independently from the posterior distribution. It also helps to avoid vanishing gradients. \n\nThe chain is run persistently until n samples are generated. The number of transitions is set to ceil(Int, n / batch_size). Once the chain has been run, the last n samples from the replay buffer are returned.\n\nArguments\n\ne::EnergySampler: The EnergySampler object to be used for sampling.\nn::Int=100: The number of samples to generate.\nbatch_size::Int=round(Int, n / 100): The batch size for sampling.\nniter::Int=1000: The number of iterations for generating samples from the posterior distribution.\nkwargs...: Additional keyword arguments to be passed on to the sampler.\n\nReturns\n\nAbstractArray: The generated samples.\n\n\n\n\n\n","category":"function"},{"location":"reference/#CounterfactualExplanations.Evaluation.get_lowest_energy_sample-Tuple{CounterfactualExplanations.Evaluation.EnergySampler}","page":"🧐 Reference","title":"CounterfactualExplanations.Evaluation.get_lowest_energy_sample","text":"get_lowest_energy_sample(sampler::EnergySampler; n::Int=5)\n\nChooses the samples with the lowest energy (i.e. highest probability) from EnergySampler.\n\nArguments\n\nsampler::EnergySampler: The EnergySampler object to be used for sampling.\nn::Int=5: The number of samples to choose.\n\nReturns\n\nAbstractArray: The samples with the lowest energy.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Evaluation.get_sampler!-Tuple{AbstractCounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.Evaluation.get_sampler!","text":"get_sampler!(ce::AbstractCounterfactualExplanation; kwargs...)\n\nGets the EnergySampler object from the counterfactual explanation. 
If the sampler is not found, it is constructed and stored in the counterfactual explanation object.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Evaluation.plausibility-Tuple{CounterfactualExplanation, typeof(CounterfactualExplanations.Objectives.distance_from_target)}","page":"🧐 Reference","title":"CounterfactualExplanations.Evaluation.plausibility","text":"plausibility(\n ce::CounterfactualExplanation,\n fun::typeof(Objectives.distance_from_target);\n K=nothing,\n kwrgs...,\n)\n\nComputes the plausibility of a counterfactual explanation based on the cosine similarity between the counterfactual and samples drawn from the target distribution.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Evaluation.to_dataframe-Tuple{Vector, Any, Bool, Bool, Bool, CounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.Evaluation.to_dataframe","text":"to_dataframe(\n ce::CounterfactualExplanation,\n measure::Vector{Function},\n agg::Function,\n report_each::Bool,\n pivot_longer::Bool,\n store_ce::Bool,\n)\n\nEvaluates a counterfactual explanation and returns a dataframe of evaluation measures.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Evaluation.validity_strict-Tuple{CounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.Evaluation.validity_strict","text":"validity_strict(ce::CounterfactualExplanation)\n\nChecks if the counterfactual search has been strictly valid in the sense that it has converged with respect to the pre-specified target probability γ.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Evaluation.warmup!-Tuple{CounterfactualExplanations.Evaluation.EnergySampler, Int64}","page":"🧐 Reference","title":"CounterfactualExplanations.Evaluation.warmup!","text":"warmup!(\n e::EnergySampler,\n y::Int;\n niter::Int=20,\n ntransitions::Int=100,\n kwargs...,\n)\n\nWarms up the EnergySampler to the underlying model for conditioning value y. Specifically, this entails running PMC for niter iterations and ntransitions transitions to build a buffer of samples. The buffer is used for posterior sampling.\n\nArguments\n\ne::EnergySampler: The EnergySampler object to be trained.\ny::Int: The conditioning value.\nopt::Union{Nothing,AbstractSamplingRule}: The sampling rule to be used. By default, ImproperSGLD is used with α = (2 / std(Uniform())) * std(𝒟x) and γ = 0.005α.\nniter::Int=20: The number of iterations for training the sampler through PMC.\nntransitions::Int=100: The number of transitions for training the sampler. In each transition, the sampler is updated with a mini-batch of data. Data is either drawn from the replay buffer or reinitialized from the prior.\nkwargs...: Additional keyword arguments to be passed on to the sampler and PMC.\n\nReturns\n\nEnergySampler: The trained EnergySampler.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.DataPreprocessing.InputTransformer","page":"🧐 Reference","title":"CounterfactualExplanations.DataPreprocessing.InputTransformer","text":"InputTransformer\n\nAbstract type for data transformers. 
This can be any of the following:\n\nStatsBase.AbstractDataTransform: A data transformation object from the StatsBase package.\nMultivariateStats.AbstractDimensionalityReduction: A dimensionality reduction object from the MultivariateStats package.\nGenerativeModels.AbstractGenerativeModel: A generative model object from the GenerativeModels module.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.DataPreprocessing.TypedInputTransformer","page":"🧐 Reference","title":"CounterfactualExplanations.DataPreprocessing.TypedInputTransformer","text":"TypedInputTransformer\n\nAbstract type for data transformers.\n\n\n\n\n\n","category":"type"},{"location":"reference/#Base.Broadcast.broadcastable-Tuple{CounterfactualData}","page":"🧐 Reference","title":"Base.Broadcast.broadcastable","text":"Treat CounterfactualData as scalar when broadcasting.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.DataPreprocessing._subset-Tuple{CounterfactualData, Vector{Int64}}","page":"🧐 Reference","title":"CounterfactualExplanations.DataPreprocessing._subset","text":"_subset(data::CounterfactualData, idx::Vector{Int})\n\nCreates a subset of the data.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.DataPreprocessing.convert_to_1d-Tuple{Matrix, AbstractArray}","page":"🧐 Reference","title":"CounterfactualExplanations.DataPreprocessing.convert_to_1d","text":"convert_to_1d(y::Matrix, y_levels::AbstractArray)\n\nHelper function to convert a one-hot encoded matrix to a vector of labels. This is necessary because MLJ models require the labels to be represented as a vector, but the synthetic datasets in this package hold the labels in one-hot encoded form.\n\nArguments\n\ny::Matrix: The one-hot encoded matrix.\ny_levels::AbstractArray: The levels of the categorical variable.\n\nReturns\n\nlabels: A vector of labels.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.DataPreprocessing.input_dim-Tuple{CounterfactualData}","page":"🧐 Reference","title":"CounterfactualExplanations.DataPreprocessing.input_dim","text":"input_dim(counterfactual_data::CounterfactualData)\n\nHelper function that returns the input dimension (number of features) of the data. \n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.DataPreprocessing.mutability_constraints-Tuple{CounterfactualData}","page":"🧐 Reference","title":"CounterfactualExplanations.DataPreprocessing.mutability_constraints","text":"mutability_constraints(counterfactual_data::CounterfactualData)\n\nA convenience function that returns the mutability constraints. 
If none were specified, it is assumed that all features are mutable in :both directions.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.DataPreprocessing.outdim-Tuple{CounterfactualData}","page":"🧐 Reference","title":"CounterfactualExplanations.DataPreprocessing.outdim","text":"outdim(data::CounterfactualData)\n\nReturns the number of output classes.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.DataPreprocessing.preprocess_data_for_mlj-Tuple{CounterfactualData}","page":"🧐 Reference","title":"CounterfactualExplanations.DataPreprocessing.preprocess_data_for_mlj","text":"preprocess_data_for_mlj(data::CounterfactualData)\n\nHelper function to preprocess data::CounterfactualData for MLJ models.\n\nArguments\n\ndata::CounterfactualData: The data to be preprocessed.\n\nReturns\n\n(df_x, y): A tuple containing the preprocessed data, with df_x being a DataFrame object and y being a categorical vector.\n\nExample\n\nX, y = preprocess_data_for_mlj(data)\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.DataPreprocessing.reconstruct_cat_encoding-Tuple{CounterfactualData, AbstractArray}","page":"🧐 Reference","title":"CounterfactualExplanations.DataPreprocessing.reconstruct_cat_encoding","text":"reconstruct_cat_encoding(counterfactual_data::CounterfactualData, x::Vector)\n\nReconstruct the categorical encoding for a single instance.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.DataPreprocessing.subsample-Tuple{CounterfactualData, Int64}","page":"🧐 Reference","title":"CounterfactualExplanations.DataPreprocessing.subsample","text":"subsample(data::CounterfactualData, n::Int)\n\nHelper function to randomly subsample data::CounterfactualData.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.DataPreprocessing.train_test_split-Tuple{CounterfactualData}","page":"🧐 Reference","title":"CounterfactualExplanations.DataPreprocessing.train_test_split","text":"train_test_split(data::CounterfactualData;test_size=0.2,keep_class_ratio=false)\n\nSplits data into train and test split.\n\nArguments\n\ndata::CounterfactualData: The data to be preprocessed.\ntest_size=0.2: Proportion of the data to be used for testing. 
\nkeep_class_ratio=false: Decides whether to sample equally from each class, or keep their relative size.\n\nReturns\n\n(train_data::CounterfactualData, test_data::CounterfactualData): A tuple containing the train and test splits.\n\nExample\n\ntrain, test = train_test_split(data; test_size=0.1, keep_class_ratio=true)\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.DataPreprocessing.unpack_data-Tuple{CounterfactualData}","page":"🧐 Reference","title":"CounterfactualExplanations.DataPreprocessing.unpack_data","text":"unpack_data(data::CounterfactualData)\n\nHelper function that unpacks data.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.AbstractCustomDifferentiableModel","page":"🧐 Reference","title":"CounterfactualExplanations.Models.AbstractCustomDifferentiableModel","text":"Base type for custom differentiable models.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.Models.AbstractDifferentiableModel","page":"🧐 Reference","title":"CounterfactualExplanations.Models.AbstractDifferentiableModel","text":"Base type for differentiable models.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.Models.AbstractDifferentiableModelType","page":"🧐 Reference","title":"CounterfactualExplanations.Models.AbstractDifferentiableModelType","text":"Abstract types for differentiable models.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.Models.AbstractFluxModel","page":"🧐 Reference","title":"CounterfactualExplanations.Models.AbstractFluxModel","text":"Base type for differentiable models written in Flux.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.Models.AbstractFluxNN","page":"🧐 Reference","title":"CounterfactualExplanations.Models.AbstractFluxNN","text":"Abstract type for Flux models.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.Models.AbstractMLJModel","page":"🧐 Reference","title":"CounterfactualExplanations.Models.AbstractMLJModel","text":"Base type for differentiable models from the MLJ library.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.Models.AbstractModelType-Tuple{Any}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.AbstractModelType","text":"(type::AbstractModelType)(model; likelihood::Symbol=:classification_binary)\n\nWrap model type around the pre-trained model model.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.AbstractModelType-Tuple{CounterfactualData}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.AbstractModelType","text":"(type::AbstractModelType)(data::CounterfactualData; kwargs...)\n\nWrap model type around the data in data. 
This is a convenience function to avoid having to construct a Model object.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.Differentiability","page":"🧐 Reference","title":"CounterfactualExplanations.Models.Differentiability","text":"A base type for model differentiability.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.Models.Differentiability-Tuple{CounterfactualExplanations.Models.Model}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.Differentiability","text":"Dispatches on the type of model for the differentiability trait.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.Fitresult","page":"🧐 Reference","title":"CounterfactualExplanations.Models.Fitresult","text":"Fitresult\n\nA struct to hold the results of fitting a model.\n\nFields\n\nfitresult: The result of fitting the model to the data. This object should be callable on new data.\nother::Dict: A dictionary to hold any other relevant information.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.Models.Fitresult-Tuple{AbstractArray}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.Fitresult","text":"(fitresult::Fitresult)(newdata::AbstractArray)\n\nWhen called on new data, the Fitresult object returns the result of calling the fitresult on new data.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.Fitresult-Tuple{}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.Fitresult","text":"(fitresult::Fitresult)()\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.FluxNN","page":"🧐 Reference","title":"CounterfactualExplanations.Models.FluxNN","text":"Concrete type for Flux models.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.Models.IsDifferentiable","page":"🧐 Reference","title":"CounterfactualExplanations.Models.IsDifferentiable","text":"Struct for models that are differentiable.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.Models.MLJModelType","page":"🧐 Reference","title":"CounterfactualExplanations.Models.MLJModelType","text":"Abstract type for MLJ models.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.Models.NonDifferentiable","page":"🧐 Reference","title":"CounterfactualExplanations.Models.NonDifferentiable","text":"By default, models are assumed not to be differentiable.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.Models.binary_to_onehot-Tuple{Any}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.binary_to_onehot","text":"binary_to_onehot(p)\n\nHelper function to turn dummy-encoded variable into onehot-encoded variable.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.build_ensemble-Tuple{Int64}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.build_ensemble","text":"build_ensemble(K::Int;kw=(input_dim=2,n_hidden=32,output_dim=1))\n\nHelper function that builds an ensemble of K models.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.build_mlp-Tuple{}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.build_mlp","text":"build_mlp()\n\nHelper function to build simple MLP.\n\nExamples\n\nnn = 
build_mlp()\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.data_loader-Tuple{CounterfactualData}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.data_loader","text":"data_loader(data::CounterfactualData)\n\nPrepares counterfactual data for training in Flux.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.forward!-Tuple{Flux.Chain, Any}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.forward!","text":"forward!(model::Flux.Chain, data; loss::Symbol, opt::Symbol, n_epochs::Int=10, model_name=\"MLP\")\n\nForward pass for training a Flux.Chain model.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.load_mnist_model-Tuple{CounterfactualExplanations.Models.AbstractModelType}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.load_mnist_model","text":"load_mnist_model(type::AbstractModelType)\n\nEmpty function to be overloaded for loading a pre-trained model for the AbstractModelType model type.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.load_mnist_model-Tuple{DeepEnsemble}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.load_mnist_model","text":"load_mnist_model(type::DeepEnsemble)\n\nLoad a pre-trained deep ensemble model for the MNIST dataset.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.load_mnist_model-Tuple{MLP}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.load_mnist_model","text":"load_mnist_model(type::MLP)\n\nLoad a pre-trained MLP model for the MNIST dataset.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.load_mnist_vae-Tuple{}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.load_mnist_vae","text":"load_mnist_vae(; strong=true)\n\nLoad a pre-trained VAE model for the MNIST dataset.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.train-Tuple{CounterfactualExplanations.Models.Model, CounterfactualData}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.train","text":"train(M::Model, data::CounterfactualData)\n\nTrains the model M on the data in data.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.train-Tuple{CounterfactualExplanations.Models.Model, CounterfactualExplanations.Models.AbstractFluxNN, CounterfactualData}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.train","text":"train(M::FluxModel, data::CounterfactualData; kwargs...)\n\nWrapper function to train Flux models.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.train-Tuple{CounterfactualExplanations.Models.Model, CounterfactualExplanations.Models.MLJModelType, CounterfactualData}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.train","text":"train(\n M::Model,\n type::MLJModelType,\n data::CounterfactualData,\n)\n\nOverloads the train function for MLJ models.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.train-Tuple{CounterfactualExplanations.Models.Model, DeepEnsemble, CounterfactualData}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.train","text":"train(M::Model, type::DeepEnsemble, data::CounterfactualData; kwargs...)\n\nOverloads the train function for deep 
ensembles.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.GenerativeModels.AbstractGMParams","page":"🧐 Reference","title":"CounterfactualExplanations.GenerativeModels.AbstractGMParams","text":"Base type of generative model hyperparameter container.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.GenerativeModels.AbstractGenerativeModel","page":"🧐 Reference","title":"CounterfactualExplanations.GenerativeModels.AbstractGenerativeModel","text":"Base type for generative model.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.GenerativeModels.Encoder","page":"🧐 Reference","title":"CounterfactualExplanations.GenerativeModels.Encoder","text":"Encoder\n\nConstructs encoder part of VAE: a simple Flux neural network with one hidden layer and two linear output layers for the first two moments of the latent distribution.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.GenerativeModels.VAE","page":"🧐 Reference","title":"CounterfactualExplanations.GenerativeModels.VAE","text":"VAE <: AbstractGenerativeModel\n\nConstructs the Variational Autoencoder. The VAE is a subtype of AbstractGenerativeModel. Any (sub-)type of AbstractGenerativeModel is accepted by latent space generators. \n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.GenerativeModels.VAE-Tuple{Any}","page":"🧐 Reference","title":"CounterfactualExplanations.GenerativeModels.VAE","text":"VAE(input_dim;kws...)\n\nOuter method for instantiating a VAE.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.GenerativeModels.VAEParams","page":"🧐 Reference","title":"CounterfactualExplanations.GenerativeModels.VAEParams","text":"VAEParams <: AbstractGMParams\n\nThe default VAE parameters describing both the encoder/decoder architecture and the training process.\n\n\n\n\n\n","category":"type"},{"location":"reference/#Base.rand-2","page":"🧐 Reference","title":"Base.rand","text":"Random.rand(encoder::Encoder, x, device=cpu)\n\nDraws random samples from the latent distribution.\n\n\n\n\n\n","category":"function"},{"location":"reference/#CounterfactualExplanations.GenerativeModels.Decoder-Tuple{Int64, Int64, Int64}","page":"🧐 Reference","title":"CounterfactualExplanations.GenerativeModels.Decoder","text":"Decoder(input_dim::Int, latent_dim::Int, hidden_dim::Int; activation=relu)\n\nThe default decoder architecture is just a Flux Chain with one hidden layer and a linear output layer. \n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.GenerativeModels.decode-Tuple{CounterfactualExplanations.GenerativeModels.VAE, AbstractArray}","page":"🧐 Reference","title":"CounterfactualExplanations.GenerativeModels.decode","text":"decode(generative_model::VAE, x::AbstractArray)\n\nDecodes an array x using the VAE decoder.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.GenerativeModels.encode-Tuple{CounterfactualExplanations.GenerativeModels.VAE, AbstractArray}","page":"🧐 Reference","title":"CounterfactualExplanations.GenerativeModels.encode","text":"encode(generative_model::VAE, x::AbstractArray)\n\nEncodes an array x using the VAE encoder. Specifically, it samples from the latent distribution. It does so by first passing x through the encoder to obtain the mean and log-variance of the latent distribution. Then, it samples from the latent distribution using the reparameterization trick. 
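As a minimal, hypothetical sketch (assuming a feature matrix X whose columns are individual samples, and using only VAE, encode and decode as documented here), one could write vae = VAE(size(X, 1)) to instantiate the model, z = encode(vae, X) to sample latent codes, and decode(vae, z) to map those codes back to the feature space. 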
See Random.rand(encoder::Encoder, x, device=cpu) for more details.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.GenerativeModels.get_data-Tuple{AbstractArray, AbstractArray, Any}","page":"🧐 Reference","title":"CounterfactualExplanations.GenerativeModels.get_data","text":"get_data(X::AbstractArray, y::AbstractArray, batch_size)\n\nPrepares data for mini-batch training.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.GenerativeModels.get_data-Tuple{AbstractArray, Any}","page":"🧐 Reference","title":"CounterfactualExplanations.GenerativeModels.get_data","text":"get_data(X::AbstractArray, batch_size)\n\nPrepares data for mini-batch training.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.GenerativeModels.reconstruct","page":"🧐 Reference","title":"CounterfactualExplanations.GenerativeModels.reconstruct","text":"reconstruct(generative_model::VAE, x, device=cpu)\n\nImplements a full pass of some input x through the VAE: x ↦ x̂.\n\n\n\n\n\n","category":"function"},{"location":"reference/#CounterfactualExplanations.GenerativeModels.reparameterization_trick","page":"🧐 Reference","title":"CounterfactualExplanations.GenerativeModels.reparameterization_trick","text":"reparameterization_trick(μ,logσ,device=cpu)\n\nHelper function that implements the reparameterization trick: z ∼ 𝒩(μ,σ²) ⇔ z=μ + σ ⊙ ε, ε ∼ 𝒩(0,I).\n\n\n\n\n\n","category":"function"},{"location":"reference/#CounterfactualExplanations.Generators.Penalty","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.Penalty","text":"Type union for acceptable argument types for the penalty field of GradientBasedGenerator.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.Generators.TCRExGenerator","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.TCRExGenerator","text":"T-CREx counterfactual generator class.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.Generators._replace_nans","page":"🧐 Reference","title":"CounterfactualExplanations.Generators._replace_nans","text":"_replace_nans(Δs′::AbstractArray, old_new::Pair=(NaN => 0))\n\nHelper function to deal with exploding gradients. 
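For example, given the default replacement pair NaN => 0, one would expect _replace_nans([0.5, NaN, -0.1]) to return [0.5, 0.0, -0.1]. 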
This is only a temporary fix and will be improved.\n\n\n\n\n\n","category":"function"},{"location":"reference/#CounterfactualExplanations.Generators.feature_selection!-Tuple{AbstractCounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.feature_selection!","text":"feature_selection!(ce::AbstractCounterfactualExplanation)\n\nPerform feature selection to find the dimension with the closest (but not equal) values between the ce.x (factual) and ce.s′ (counterfactual) arrays.\n\nArguments\n\nce::AbstractCounterfactualExplanation: An instance of the AbstractCounterfactualExplanation type representing the counterfactual explanation.\n\nReturns\n\nnothing\n\nThe function iteratively modifies the ce.s′ counterfactual array by updating its elements to match the corresponding elements in the ce.x factual array, one dimension at a time, until the predicted label of the modified ce.s′ matches the predicted label of the ce.x array.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Generators.find_closest_dimension-Tuple{Any, Any}","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.find_closest_dimension","text":"find_closest_dimension(factual, counterfactual)\n\nFind the dimension with the closest (but not equal) values between the factual and counterfactual arrays.\n\nArguments\n\nfactual: The factual array.\ncounterfactual: The counterfactual array.\n\nReturns\n\nclosest_dimension: The index of the dimension with the closest values.\n\nThe function iterates over the indices of the factual array and calculates the absolute difference between the corresponding elements in the factual and counterfactual arrays. It returns the index of the dimension with the smallest difference, excluding dimensions where the values in factual and counterfactual are equal.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Generators.find_counterfactual-NTuple{4, Any}","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.find_counterfactual","text":"find_counterfactual(model, factual_class, counterfactual_data, counterfactual_candidates)\n\nFind the first counterfactual index by predicting labels.\n\nArguments\n\nmodel: The fitted model used for prediction.\ntarget_class: Expected target class.\ncounterfactual_data: Data required for counterfactual generation.\ncounterfactual_candidates: The array of counterfactual candidates.\n\nReturns\n\ncounterfactual: The index of the first counterfactual found.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Generators.growing_spheres_generation!-Tuple{AbstractCounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.growing_spheres_generation!","text":"growing_spheres_generation!(ce::AbstractCounterfactualExplanation)\n\nGenerate counterfactual candidates using the growing spheres generation algorithm.\n\nArguments\n\nce::AbstractCounterfactualExplanation: An instance of the AbstractCounterfactualExplanation type representing the counterfactual explanation.\n\nReturns\n\nnothing\n\nThis function applies the growing spheres generation algorithm to generate counterfactual candidates. It starts by generating random points uniformly on a sphere, gradually reducing the search space until no counterfactuals are found. 
Then it expands the search space until at least one counterfactual is found or the maximum number of iterations is reached.\n\nThe algorithm iteratively generates counterfactual candidates and predicts their labels using the model stored in ce.M. It checks if any of the predicted labels are different from the factual class. The process of reducing the search space involves halving the search radius, while the process of expanding the search space involves increasing the search radius.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Generators.h-Tuple{AbstractGenerator, AbstractCounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.h","text":"h(generator::AbstractGenerator, ce::AbstractCounterfactualExplanation)\n\nDispatches to the appropriate complexity function for any generator.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Generators.h-Tuple{AbstractGenerator, Function, AbstractCounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.h","text":"h(generator::AbstractGenerator, penalty::Function, ce::AbstractCounterfactualExplanation)\n\nOverloads the h function for the case where a single penalty function is provided.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Generators.h-Tuple{AbstractGenerator, Nothing, AbstractCounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.h","text":"h(generator::AbstractGenerator, penalty::Nothing, ce::AbstractCounterfactualExplanation)\n\nOverloads the h function for the case where no penalty is provided.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Generators.h-Tuple{AbstractGenerator, Vector{<:Tuple}, AbstractCounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.h","text":"h(generator::AbstractGenerator, penalty::Vector{<:Tuple}, ce::AbstractCounterfactualExplanation)\n\nOverloads the h function for the case where penalty functions are provided with additional keyword arguments.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Generators.h-Tuple{AbstractGenerator, Vector{Function}, AbstractCounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.h","text":"h(generator::AbstractGenerator, penalty::Vector{Function}, ce::AbstractCounterfactualExplanation)\n\nOverloads the h function for the case where multiple penalty functions are provided.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Generators.hyper_sphere_coordinates-Tuple{Integer, AbstractArray, AbstractFloat, AbstractFloat}","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.hyper_sphere_coordinates","text":"hyper_sphere_coordinates(n_search_samples::Integer, instance::AbstractArray, low::AbstractFloat, high::AbstractFloat; p_norm::Integer=2)\n\nGenerates candidate counterfactuals using the growing spheres method based on hyper-sphere coordinates.\n\nThe implementation follows the Random Point Picking over a sphere algorithm described in the paper: \"Learning Model-Agnostic Counterfactual Explanations for Tabular Data\" by Pawelczyk, Broelemann & Kasneci (2020), presented at The Web Conference 2020 (WWW). 
It ensures that points are sampled uniformly at random using insights from: http://mathworld.wolfram.com/HyperspherePointPicking.html\n\nThe growing spheres method is originally proposed in the paper: \"Comparison-based Inverse Classification for Interpretability in Machine Learning\" by Thibaut Laugel et al. (2018), presented at the International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems (2018).\n\nArguments\n\nn_search_samples::Int: The number of search samples (int > 0).\ninstance::AbstractArray: The input point array.\nlow::AbstractFloat: The lower bound (float >= 0, l < h).\nhigh::AbstractFloat: The upper bound (float >= 0, h > l).\np_norm::Integer: The norm parameter (int >= 1).\n\nReturns\n\ncandidate_counterfactuals::Array: An array of candidate counterfactuals.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Generators.incompatible-Tuple{Any, Any}","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.incompatible","text":"incompatible(AbstractGenerator, AbstractCounterfactualExplanation)\n\nChecks if the generator is incompatible with any of the additional specifications for the counterfactual explanations. By default, generators are assumed to be compatible.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Generators.propose_state-Tuple{CounterfactualExplanations.Models.IsDifferentiable, AbstractGradientBasedGenerator, AbstractCounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.propose_state","text":"propose_state(\n ::Models.IsDifferentiable,\n generator::AbstractGradientBasedGenerator,\n ce::AbstractCounterfactualExplanation,\n)\n\nProposes new state based on backpropagation for gradient-based generators and differentiable models.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Generators.total_loss-Tuple{AbstractCounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.total_loss","text":"total_loss(ce::AbstractCounterfactualExplanation)\n\nComputes the total loss of a counterfactual explanation with respect to the search objective.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Generators.ℓ-Tuple{AbstractGenerator, AbstractCounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.ℓ","text":"ℓ(generator::AbstractGenerator, ce::AbstractCounterfactualExplanation)\n\nDispatches to the appropriate loss function for any generator.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Generators.ℓ-Tuple{AbstractGenerator, Function, AbstractCounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.ℓ","text":"ℓ(generator::AbstractGenerator, loss::Function, ce::AbstractCounterfactualExplanation)\n\nOverloads the ℓ function for the case where a single loss function is provided.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Generators.ℓ-Tuple{AbstractGenerator, Nothing, AbstractCounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.ℓ","text":"ℓ(generator::AbstractGenerator, loss::Nothing, ce::AbstractCounterfactualExplanation)\n\nOverloads the ℓ function for the case where no loss function is 
provided.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Generators.∂h-Tuple{AbstractGradientBasedGenerator, AbstractCounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.∂h","text":"∂h(generator::AbstractGradientBasedGenerator, ce::AbstractCounterfactualExplanation)\n\nThe default method to compute the gradient of the complexity penalty at the current counterfactual state for gradient-based generators. It assumes that Zygote.jl has gradient access. \n\nIf the penalty is not provided, it returns 0.0. By default, Zygote never works out the gradient for constants and instead returns 'nothing', so we need to add a manual step to override this behaviour. See here: https://discourse.julialang.org/t/zygote-gradient/26715.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Generators.∂ℓ-Tuple{AbstractGradientBasedGenerator, AbstractCounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.∂ℓ","text":"∂ℓ(\n generator::AbstractGradientBasedGenerator,\n ce::AbstractCounterfactualExplanation,\n)\n\nThe default method to compute the gradient of the loss function at the current counterfactual state for gradient-based generators. It assumes that Zygote.jl has gradient access.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Generators.∇-Tuple{AbstractGradientBasedGenerator, AbstractCounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.∇","text":"∇(\n generator::AbstractGradientBasedGenerator,\n ce::AbstractCounterfactualExplanation,\n)\n\nThe default method to compute the gradient of the counterfactual search objective for gradient-based generators. It simply computes the weighted sum over partial derivatives. It assumes that Zygote.jl has gradient access. 
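Loosely speaking, the result can be thought of as the sum of the partial gradients defined above, i.e. something like ∂ℓ(generator, ce) + ∂h(generator, ce), where the relative weighting is governed by the generator's penalty strength. 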
If the counterfactual is being generated using Probe, the hinge loss is added to the gradient.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Objectives.NeedsNeighbours","page":"🧐 Reference","title":"CounterfactualExplanations.Objectives.NeedsNeighbours","text":"Penalties that need access to neighbors in the target class.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.Objectives.NoPenaltyRequirements","page":"🧐 Reference","title":"CounterfactualExplanations.Objectives.NoPenaltyRequirements","text":"By default, penalties have no extra requirements.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.Objectives.PenaltyRequirements","page":"🧐 Reference","title":"CounterfactualExplanations.Objectives.PenaltyRequirements","text":"A base type for penalty requirement traits.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.Objectives.PenaltyRequirements-Tuple{Type{<:typeof(CounterfactualExplanations.Objectives.distance_from_target)}}","page":"🧐 Reference","title":"CounterfactualExplanations.Objectives.PenaltyRequirements","text":"The distance_from_target method needs neighbors in the target class.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Objectives.cos_dist-Tuple{Any, Any}","page":"🧐 Reference","title":"CounterfactualExplanations.Objectives.cos_dist","text":"cos_dist(x,y)\n\nComputes the cosine distance between two vectors.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Objectives.distance_from_target-Tuple{AbstractCounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.Objectives.distance_from_target","text":"distance_from_target(\n ce::AbstractCounterfactualExplanation;\n K::Int=50\n)\n\nComputes the distance of the counterfactual from samples in the target manifold. If choose_randomly is true, the function will randomly sample K neighbours from the target manifold. Otherwise, it will compute the pairwise distances and select the K closest neighbours.\n\nArguments\n\nce::AbstractCounterfactualExplanation: The counterfactual explanation.\nK::Int=50: The number of neighbours to sample.\nchoose_randomly::Bool=true: Whether to sample neighbours randomly.\nkwargs...: Additional keyword arguments for the distance function.\n\nReturns\n\nΔ::AbstractFloat: The distance from the counterfactual to the target manifold.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Objectives.energy-Tuple{AbstractModel, AbstractArray, Int64}","page":"🧐 Reference","title":"CounterfactualExplanations.Objectives.energy","text":"energy(M::AbstractModel, x::AbstractArray, t::Int)\n\nComputes the energy of the model at a given state as in Altmeyer et al. (2024): https://scholar.google.com/scholar?cluster=3697701546144846732&hl=en&as_sdt=0,5.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Objectives.energy_constraint-Tuple{AbstractCounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.Objectives.energy_constraint","text":"energy_constraint(\n ce::AbstractCounterfactualExplanation;\n agg=mean,\n reg_strength::AbstractFloat=0.0,\n decay::AbstractFloat=0.9,\n kwargs...,\n)\n\nComputes the energy constraint for the counterfactual explanation as in Altmeyer et al. (2024): https://scholar.google.com/scholar?cluster=3697701546144846732&hl=en&as_sdt=0,5. 
The energy constraint is a regularization term that penalizes the energy of the counterfactuals. The energy is computed as the negative logit of the target class.\n\nArguments\n\nce::AbstractCounterfactualExplanation: The counterfactual explanation.\nagg::Function=mean: The aggregation function (only applicable in case num_counterfactuals > 1). Default is mean.\nreg_strength::AbstractFloat=0.0: The regularization strength.\ndecay::AbstractFloat=0.9: The decay rate for the polynomial decay function (defaults to 0.9). Parameter a is set to 1.0 / ce.generator.opt.eta, such that the initial step size is equal to 1.0, not accounting for b. Parameter b is set to round(Int, max_steps / 20), where max_steps is the maximum number of iterations.\nkwargs...: Additional keyword arguments.\n\nReturns\n\nℒ::AbstractFloat: The energy constraint.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Objectives.model_loss_penalty-Tuple{AbstractCounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.Objectives.model_loss_penalty","text":"function model_loss_penalty(\n ce::AbstractCounterfactualExplanation;\n agg=mean\n)\n\nAdditional penalty for ClaPROARGenerator.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Objectives.needs_neighbours-Tuple{AbstractCounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.Objectives.needs_neighbours","text":"needs_neighbours(ce::AbstractCounterfactualExplanation)\n\nCheck if a counterfactual explanation needs access to neighbors in the target class.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Objectives.needs_neighbours-Tuple{AbstractGenerator}","page":"🧐 Reference","title":"CounterfactualExplanations.Objectives.needs_neighbours","text":"needs_neighbours(gen::AbstractGenerator)\n\nCheck if a generator needs access to neighbors in the target class.\n\n\n\n\n\n","category":"method"},{"location":"reference/#Extensions","page":"🧐 Reference","title":"Extensions","text":"","category":"section"},{"location":"reference/","page":"🧐 Reference","title":"🧐 Reference","text":"Modules = [\n    Base.get_extension(CounterfactualExplanations, :DecisionTreeExt),\n    Base.get_extension(CounterfactualExplanations, :JEMExt),\n    Base.get_extension(CounterfactualExplanations, :LaplaceReduxExt),\n    Base.get_extension(CounterfactualExplanations, :NeuroTreeExt),\n]","category":"page"},{"location":"reference/#DecisionTreeExt.AtomicDecisionTree","page":"🧐 Reference","title":"DecisionTreeExt.AtomicDecisionTree","text":"Type union for DecisionTree decision tree classifiers and regressors.\n\n\n\n\n\n","category":"type"},{"location":"reference/#DecisionTreeExt.AtomicRandomForest","page":"🧐 Reference","title":"DecisionTreeExt.AtomicRandomForest","text":"Type union for DecisionTree random forest classifiers and regressors.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.DecisionTreeModel-Tuple{Union{MLJDecisionTreeInterface.DecisionTreeClassifier, MLJDecisionTreeInterface.DecisionTreeRegressor}}","page":"🧐 Reference","title":"CounterfactualExplanations.DecisionTreeModel","text":"CounterfactualExplanations.DecisionTreeModel(\n    model::AtomicDecisionTree; likelihood::Symbol=:classification_binary\n)\n\nOuter constructor for decision trees.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Generators.TCRExGenerator-Tuple{Union{Int64, AbstractFloat, String, Symbol}, 
CounterfactualData, AbstractModel}","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.TCRExGenerator","text":"(generator::Generators.TCRExGenerator)(\n    target::RawTargetType,\n    data::DataPreprocessing.CounterfactualData,\n    M::Models.AbstractModel\n)\n\nApplies the Generators.TCRExGenerator to a given target and data using the M model. For details see Bewley et al. (2024) [arXiv, PMLR].\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.Model-Tuple{CounterfactualData, CounterfactualExplanations.DecisionTreeModel}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.Model","text":"(M::Models.Model)(\n    data::CounterfactualData,\n    type::CounterfactualExplanations.DecisionTreeModel;\n    kwargs...,\n)\n\nConstructs a decision tree for the given data. This method is used internally when a decision-tree model is constructed to be trained from scratch (i.e. no pre-trained model is supplied by the user).\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.Model-Tuple{CounterfactualData, CounterfactualExplanations.RandomForestModel}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.Model","text":"(M::Models.Model)(\n    data::CounterfactualData, type::CounterfactualExplanations.RandomForestModel; kwargs...\n)\n\nConstructs a random forest for the given data.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.RandomForestModel-Tuple{Union{MLJDecisionTreeInterface.RandomForestClassifier, MLJDecisionTreeInterface.RandomForestRegressor}}","page":"🧐 Reference","title":"CounterfactualExplanations.RandomForestModel","text":"CounterfactualExplanations.RandomForestModel(\n    model::AtomicRandomForest; likelihood::Symbol=:classification_binary\n)\n\nOuter constructor for random forests.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Generators.incompatible-Tuple{FeatureTweakGenerator, CounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.incompatible","text":"Generators.incompatible(gen::FeatureTweakGenerator, ce::CounterfactualExplanation)\n\nOverloads the incompatible function for the FeatureTweakGenerator.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Generators.propose_state-Tuple{FeatureTweakGenerator, AbstractCounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.propose_state","text":"Generators.propose_state(\n    generator::Generators.FeatureTweakGenerator, ce::AbstractCounterfactualExplanation\n)\n\nOverloads the Generators.propose_state method for the FeatureTweakGenerator.\n\n\n\n\n\n","category":"method"},{"location":"reference/#DecisionTreeExt.calculate_delta-Tuple{AbstractCounterfactualExplanation}","page":"🧐 Reference","title":"DecisionTreeExt.calculate_delta","text":"calculate_delta(ce::AbstractCounterfactualExplanation)\n\nCalculates the penalty for the proposed feature tweak.\n\nArguments\n\nce::AbstractCounterfactualExplanation: The counterfactual explanation object.\n\nReturns\n\ndelta::Float64: The calculated penalty for the proposed feature tweak.\n\n\n\n\n\n","category":"method"},{"location":"reference/#DecisionTreeExt.classify_prototypes-Tuple{Any, Any, Any}","page":"🧐 Reference","title":"DecisionTreeExt.classify_prototypes","text":"classify_prototypes(prototypes, rule_assignments, bounds)\n\nBuilds the second tree model using the given prototypes as 
inputs and their corresponding rule_assignments as labels. Split thresholds are restricted to the bounds, which can be computed using partition_bounds(rules). For details see Bewley et al. (2024) [arXiv, PMLR].\n\n\n\n\n\n","category":"method"},{"location":"reference/#DecisionTreeExt.cre-Tuple{Any, Any, Any}","page":"🧐 Reference","title":"DecisionTreeExt.cre","text":"cre(rules, x, X)\n\nComputes the counterfactual rule explanations (CRE) for a given point x and a set of rules, where the rules correspond to the set of maximal-valid rules for some given target. For details see Bewley et al. (2024) [arXiv, PMLR].\n\n\n\n\n\n","category":"method"},{"location":"reference/#DecisionTreeExt.esatisfactory_instance-Tuple{FeatureTweakGenerator, AbstractArray, AbstractArray}","page":"🧐 Reference","title":"DecisionTreeExt.esatisfactory_instance","text":"esatisfactory_instance(generator::FeatureTweakGenerator, x::AbstractArray, paths::Dict{String, Dict{String, Any}})\n\nReturns an epsilon-satisfactory counterfactual for x based on the paths provided.\n\nArguments\n\ngenerator::FeatureTweakGenerator: The feature tweak generator.\nx::AbstractArray: The factual instance.\npaths::Dict{String, Dict{String, Any}}: A list of paths to the leaves of the tree to be used for tweaking the feature.\n\nReturns\n\nesatisfactory::AbstractArray: The epsilon-satisfactory instance.\n\nExample\n\nesatisfactory = esatisfactory_instance(generator, x, paths) # returns an epsilon-satisfactory counterfactual for x based on the paths provided\n\n\n\n\n\n","category":"method"},{"location":"reference/#DecisionTreeExt.extract_leaf_rules-Tuple{Root}","page":"🧐 Reference","title":"DecisionTreeExt.extract_leaf_rules","text":"extract_leaf_rules(root::DT.Root)\n\nExtracts leaf decision rules (i.e. hyperrectangles) from a decision tree (root). For a decision tree with L leaves this results in L hyperrectangles. The rules are returned as a vector of tuples containing 2-element tuples, where each 2-element tuple stores the lower and upper bound imposed by the given rule for a given feature. For details see Bewley et al. (2024) [arXiv, PMLR].\n\n\n\n\n\n","category":"method"},{"location":"reference/#DecisionTreeExt.extract_leaf_rules-Tuple{Union{Leaf, Node}, AbstractArray, AbstractArray}","page":"🧐 Reference","title":"DecisionTreeExt.extract_leaf_rules","text":"extract_leaf_rules(node::Union{DT.Leaf,DT.Node}, conditions::AbstractArray, decisions::AbstractArray)\n\nSee extract_leaf_rules(root::DT.Root) for details.\n\n\n\n\n\n","category":"method"},{"location":"reference/#DecisionTreeExt.extract_rules-Tuple{Root}","page":"🧐 Reference","title":"DecisionTreeExt.extract_rules","text":"extract_rules(root::DT.Root)\n\nExtracts decision rules (i.e. hyperrectangles) from a decision tree (root). For a decision tree with L leaves this results in 2L-1 hyperrectangles. The rules are returned as a vector of vectors of 2-element tuples, where each tuple stores the lower and upper bound imposed by the given rule for a given feature. For details see Bewley et al. 
(2024) [arXiv, PMLR].\n\n\n\n\n\n","category":"method"},{"location":"reference/#DecisionTreeExt.extract_rules-Tuple{Union{Leaf, Node}, AbstractArray}","page":"🧐 Reference","title":"DecisionTreeExt.extract_rules","text":"extract_rules(node::Union{DT.Leaf,DT.Node}, conditions::AbstractArray)\n\nSee extract_rules(root::DT.Root).\n\n\n\n\n\n","category":"method"},{"location":"reference/#DecisionTreeExt.get_individual_classifiers-Tuple{CounterfactualExplanations.Models.Model}","page":"🧐 Reference","title":"DecisionTreeExt.get_individual_classifiers","text":"get_individual_classifiers(M::Model)\n\nReturns the individual classifiers in the forest. If the input is a decision tree, the method returns the decision tree itself inside an array.\n\nArguments\n\nM::Model: The model selected by the user.\n\nReturns\n\nclassifiers::AbstractArray: An array of individual classifiers in the forest.\n\n\n\n\n\n","category":"method"},{"location":"reference/#DecisionTreeExt.grow_surrogate-Tuple{CounterfactualExplanations.Generators.TCRExGenerator, AbstractArray, AbstractArray}","page":"🧐 Reference","title":"DecisionTreeExt.grow_surrogate","text":"grow_surrogate(\n    generator::Generators.TCRExGenerator, X::AbstractArray, ŷ::AbstractArray\n)\n\nGrows the tree-based surrogate model for the Generators.TCRExGenerator. For details see Bewley et al. (2024) [arXiv, PMLR].\n\n\n\n\n\n","category":"method"},{"location":"reference/#DecisionTreeExt.grow_surrogate-Tuple{CounterfactualExplanations.Generators.TCRExGenerator, CounterfactualData, AbstractModel}","page":"🧐 Reference","title":"DecisionTreeExt.grow_surrogate","text":"grow_surrogate(\n    generator::Generators.TCRExGenerator, data::CounterfactualData, M::AbstractModel\n)\n\nOverloads the grow_surrogate function to accept a CounterfactualData object and an AbstractModel to grow a surrogate model. See grow_surrogate(generator::Generators.TCRExGenerator, X::AbstractArray, ŷ::AbstractArray).\n\n\n\n\n\n","category":"method"},{"location":"reference/#DecisionTreeExt.induced_grid-Tuple{Any}","page":"🧐 Reference","title":"DecisionTreeExt.induced_grid","text":"induced_grid(rules)\n\nComputes the induced grid of the given rules. For details see Bewley et al. (2024) [arXiv, PMLR].\n\n\n\n\n\n","category":"method"},{"location":"reference/#DecisionTreeExt.issubrule-Tuple{Any, Any}","page":"🧐 Reference","title":"DecisionTreeExt.issubrule","text":"issubrule(rule, otherrule)\n\nChecks if the rule hyperrectangle is a subset of the otherrule hyperrectangle. For details see Bewley et al. (2024) [arXiv, PMLR].\n\n\n\n\n\n","category":"method"},{"location":"reference/#DecisionTreeExt.max_valid-NTuple{5, Any}","page":"🧐 Reference","title":"DecisionTreeExt.max_valid","text":"max_valid(rules, X, fx, target, τ)\n\nReturns the maximal-valid rules for a given target and accuracy threshold τ. For details see Bewley et al. (2024) [arXiv, PMLR].\n\n\n\n\n\n","category":"method"},{"location":"reference/#DecisionTreeExt.partition_bounds-Tuple{Any, Int64}","page":"🧐 Reference","title":"DecisionTreeExt.partition_bounds","text":"partition_bounds(rules, dim::Int)\n\nComputes the set of (unique) bounds for each rule in rules along the dim-th dimension. For details see Bewley et al. (2024) [arXiv, PMLR].\n\n\n\n\n\n","category":"method"},{"location":"reference/#DecisionTreeExt.partition_bounds-Tuple{Any}","page":"🧐 Reference","title":"DecisionTreeExt.partition_bounds","text":"partition_bounds(rules)\n\nComputes the set of (unique) bounds for each rule in rules and all dimensions. 
For details see Bewley et al. (2024) [arXiv, PMLR].\n\n\n\n\n\n","category":"method"},{"location":"reference/#DecisionTreeExt.prototype-Tuple{Any, Any}","page":"🧐 Reference","title":"DecisionTreeExt.prototype","text":"prototype(rule, X; pick_arbitrary::Bool=true)\n\nPicks an arbitrary point x^C in X (i.e. prototype) from the subset of X that is contained by rule R_i. If pick_arbitrary is set to false, the prototype is instead computed as the average across all samples. For details see Bewley et al. (2024) [arXiv, PMLR].\n\n\n\n\n\n","category":"method"},{"location":"reference/#DecisionTreeExt.rule_accuracy-NTuple{4, Any}","page":"🧐 Reference","title":"DecisionTreeExt.rule_accuracy","text":"rule_accuracy(rule, X, fx, target)\n\nComputes the accuracy of the rule on the data X for predicted outputs fx and the target. Accuracy is defined as the fraction of points contained by the rule, for which predicted values match the target. For details see Bewley et al. (2024) [arXiv, PMLR].\n\n\n\n\n\n","category":"method"},{"location":"reference/#DecisionTreeExt.rule_changes-Tuple{Any, Any}","page":"🧐 Reference","title":"DecisionTreeExt.rule_changes","text":"rule_changes(rule, x)\n\nComputes the number of feature changes necessary for x to be contained by rule R_i. For details see Bewley et al. (2024) [arXiv, PMLR].\n\n\n\n\n\n","category":"method"},{"location":"reference/#DecisionTreeExt.rule_contains-Tuple{Any, Any}","page":"🧐 Reference","title":"DecisionTreeExt.rule_contains","text":"rule_contains(rule, X)\n\nReturns the subset of X that is contained by rule R_i. For details see Bewley et al. (2024) [arXiv, PMLR].\n\n\n\n\n\n","category":"method"},{"location":"reference/#DecisionTreeExt.rule_cost-Tuple{Any, Any, Any}","page":"🧐 Reference","title":"DecisionTreeExt.rule_cost","text":"rule_cost(rule, x, X)\n\nComputes the cost for x to be contained by rule R_i, where cost is defined as rule_changes(rule, x) - rule_feasibility(rule, X). For details see Bewley et al. (2024) [arXiv, PMLR]. \n\n\n\n\n\n","category":"method"},{"location":"reference/#DecisionTreeExt.rule_feasibility-Tuple{Any, Any}","page":"🧐 Reference","title":"DecisionTreeExt.rule_feasibility","text":"rule_feasibility(rule, X)\n\nComputes the feasibility of a rule R_i for a given dataset. Feasibility is defined as the fraction of the data points that satisfy the rule. For details see Bewley et al. 
(2024) [arXiv, PMLR].\n\n\n\n\n\n","category":"method"},{"location":"reference/#DecisionTreeExt.search_path","page":"🧐 Reference","title":"DecisionTreeExt.search_path","text":"search_path(tree::Union{DT.Leaf, DT.Node}, target::RawTargetType, path::AbstractArray)\n\nReturn a path index list with the inequality symbols, thresholds and feature indices.\n\nArguments\n\ntree::Union{DT.Leaf, DT.Node}: The root node of a decision tree.\ntarget::RawTargetType: The target class.\npath::AbstractArray: A list containing the paths found thus far.\n\nReturns\n\npaths::AbstractArray: A list of paths to the leaves of the tree to be used for tweaking the feature.\n\nExample\n\npaths = search_path(tree, target) # returns a list of paths to the leaves of the tree to be used for tweaking the feature\n\n\n\n\n\n","category":"function"},{"location":"reference/#DecisionTreeExt.wrap_decision_tree","page":"🧐 Reference","title":"DecisionTreeExt.wrap_decision_tree","text":"wrap_decision_tree(node::TreeNode, X, y)\n\nTurns a custom decision tree into a DecisionTree.Root object from the DecisionTree.jl package.\n\n\n\n\n\n","category":"function"},{"location":"reference/#DecisionTreeExt.wrap_decision_tree-Tuple{DecisionTreeExt.TreeNode}","page":"🧐 Reference","title":"DecisionTreeExt.wrap_decision_tree","text":"wrap_decision_tree(node::TreeNode)\n\nSee wrap_decision_tree(node::TreeNode, X, y).\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.JEM-Tuple{JointEnergyClassifier}","page":"🧐 Reference","title":"CounterfactualExplanations.JEM","text":"CounterfactualExplanations.JEM(\n    model::JointEnergyModels.JointEnergyClassifier; likelihood::Symbol=:classification_multi\n)\n\nOuter constructor for a joint energy model from JointEnergyModels.jl.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.Model-Tuple{Any, CounterfactualExplanations.JEM}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.Model","text":"Models.Model(model, type::CounterfactualExplanations.JEM; likelihood::Symbol=:classification_multi)\n\nOverloaded constructor for Flux models.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.Model-Tuple{CounterfactualData, CounterfactualExplanations.JEM}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.Model","text":"(M::Model)(data::CounterfactualData, type::JEM; kwargs...)\n\nConstructs a joint energy model for the given data.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.load_mnist_model-Tuple{CounterfactualExplanations.JEM}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.load_mnist_model","text":"Models.load_mnist_model(type::CounterfactualExplanations.JEM)\n\nOverload for loading a pre-trained model for the JEM model type.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.logits-Tuple{CounterfactualExplanations.Models.Model, CounterfactualExplanations.JEM, AbstractArray}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.logits","text":"Models.logits(M::JEM, X::AbstractArray)\n\nCalculates the logit scores output by the model M for the input data X.\n\nArguments\n\nM::JEM: The model selected by the user. 
Must be a model from the MLJ library.\nX::AbstractArray: The feature vector for which the logit scores are calculated.\n\nReturns\n\nlogits::Matrix: A matrix of logits for each output class for each data point in X.\n\nExample\n\nlogits = Models.logits(M, x) # calculates the logit scores for each output class for the data point x\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.probs-Tuple{CounterfactualExplanations.Models.Model, CounterfactualExplanations.JEM, AbstractArray}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.probs","text":"Models.probs(\n    M::Models.Model,\n    type::CounterfactualExplanations.JEM,\n    X::AbstractArray,\n)\n\nOverloads the Models.probs method for JEM models.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.train-Tuple{CounterfactualExplanations.Models.Model, CounterfactualExplanations.JEM, CounterfactualData}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.train","text":"train(M::JEM, data::CounterfactualData; kwargs...)\n\nFits the model M to the data in the CounterfactualData object. This method is not called by the user directly.\n\nArguments\n\nM::JEM: The wrapper for a JEM model.\ndata::CounterfactualData: The CounterfactualData object containing the data to be used for training the model.\n\nReturns\n\nM::JEM: The fitted JEM model.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.LaplaceReduxModel-Tuple{Laplace}","page":"🧐 Reference","title":"CounterfactualExplanations.LaplaceReduxModel","text":"CounterfactualExplanations.LaplaceReduxModel(\n    model::LaplaceRedux.Laplace; likelihood::Symbol=:classification_binary\n)\n\nOuter constructor for a neural network with Laplace Approximation from LaplaceRedux.jl.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.Model-Tuple{CounterfactualData, CounterfactualExplanations.LaplaceReduxModel}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.Model","text":"(M::Model)(data::CounterfactualData, type::LaplaceReduxModel; kwargs...)\n\nConstructs a neural network with Laplace approximation for the given data.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.logits-Tuple{CounterfactualExplanations.Models.Model, CounterfactualExplanations.LaplaceReduxModel, AbstractArray}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.logits","text":"logits(M::LaplaceReduxModel, X::AbstractArray)\n\nPredicts the logit scores for the input data X using the model M.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.probs-Tuple{CounterfactualExplanations.Models.Model, CounterfactualExplanations.LaplaceReduxModel, AbstractArray}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.probs","text":"probs(M::LaplaceReduxModel, X::AbstractArray)\n\nPredicts the probabilities of the classes for the input data X using the model M.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.train-Tuple{CounterfactualExplanations.Models.Model, CounterfactualExplanations.LaplaceReduxModel, CounterfactualData}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.train","text":"train(M::LaplaceReduxModel, data::CounterfactualData; kwargs...)\n\nFits the model M to the data in the CounterfactualData object. 
This method is not called by the user directly.\n\nArguments\n\nM::LaplaceReduxModel: The wrapper for a LaplaceReduxModel model.\ndata::CounterfactualData: The CounterfactualData object containing the data to be used for training the model.\n\nReturns\n\nM::LaplaceReduxModel: The fitted LaplaceReduxModel model.\n\n\n\n\n\n","category":"method"},{"location":"reference/#NeuroTreeExt.AtomicNeuroTree","page":"🧐 Reference","title":"NeuroTreeExt.AtomicNeuroTree","text":"Type union for NeuroTree classifiers and regressors.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.Models.Model-Tuple{CounterfactualData, CounterfactualExplanations.NeuroTreeModel}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.Model","text":"(M::Model)(data::CounterfactualData, type::NeuroTreeModel; kwargs...)\n\nConstructs a differentiable tree-based model for the given data.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.NeuroTreeModel-Tuple{Union{NeuroTreeClassifier, NeuroTreeRegressor}}","page":"🧐 Reference","title":"CounterfactualExplanations.NeuroTreeModel","text":"CounterfactualExplanations.NeuroTreeModel(\n    model::AtomicNeuroTree; likelihood::Symbol=:classification_binary\n)\n\nOuter constructor for a differentiable tree-based model from NeuroTreeModels.jl.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.logits-Tuple{CounterfactualExplanations.Models.Model, CounterfactualExplanations.NeuroTreeModel, AbstractArray}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.logits","text":"Models.logits(M::NeuroTreeModel, X::AbstractArray)\n\nCalculates the logit scores output by the model M for the input data X.\n\nArguments\n\nM::NeuroTreeModel: The model selected by the user. Must be a model from the MLJ library.\nX::AbstractArray: The feature vector for which the logit scores are calculated.\n\nReturns\n\nlogits::Matrix: A matrix of logits for each output class for each data point in X.\n\nExample\n\nlogits = Models.logits(M, x) # calculates the logit scores for each output class for the data point x\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.probs-Tuple{CounterfactualExplanations.Models.Model, CounterfactualExplanations.NeuroTreeModel, AbstractArray}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.probs","text":"Models.probs(\n    M::Models.Model,\n    type::CounterfactualExplanations.NeuroTreeModel,\n    X::AbstractArray,\n)\n\nOverloads the probs method for NeuroTree models.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.train-Tuple{CounterfactualExplanations.Models.Model, CounterfactualExplanations.NeuroTreeModel, CounterfactualData}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.train","text":"train(M::NeuroTreeModel, data::CounterfactualData; kwargs...)\n\nFits the model M to the data in the CounterfactualData object. 
This method is not called by the user directly.\n\nArguments\n\nM::NeuroTreeModel: The wrapper for a NeuroTree model.\ndata::CounterfactualData: The CounterfactualData object containing the data to be used for training the model.\n\nReturns\n\nM::NeuroTreeModel: The fitted NeuroTree model.\n\n\n\n\n\n","category":"method"},{"location":"tutorials/whistle_stop/","page":"Whistle-Stop Tour","title":"Whistle-Stop Tour","text":"CurrentModule = CounterfactualExplanations ","category":"page"},{"location":"tutorials/whistle_stop/#Whistle-Stop-Tour","page":"Whistle-Stop Tour","title":"Whistle-Stop Tour","text":"","category":"section"},{"location":"tutorials/whistle_stop/","page":"Whistle-Stop Tour","title":"Whistle-Stop Tour","text":"In this tutorial, we will go through a slightly more complex example involving synthetic data. We will generate Counterfactual Explanations using different generators and visualize the results.","category":"page"},{"location":"tutorials/whistle_stop/#Data-and-Classifier","page":"Whistle-Stop Tour","title":"Data and Classifier","text":"","category":"section"},{"location":"tutorials/whistle_stop/","page":"Whistle-Stop Tour","title":"Whistle-Stop Tour","text":"# Choose some values for data and a model:\nn_dim = 2\nn_classes = 4\nn_samples = 400\nmodel_name = :MLP","category":"page"},{"location":"tutorials/whistle_stop/","page":"Whistle-Stop Tour","title":"Whistle-Stop Tour","text":"The code chunk below generates synthetic data and uses it to fit a classifier. The outcome variable counterfactual_data.y consists of 4 classes. The input data counterfactual_data.X consists of 2 features. We generate a total of 400 samples. On the model side, we have specified model_name = :MLP. The fit_model function can be used to fit a number of default models.","category":"page"},{"location":"tutorials/whistle_stop/","page":"Whistle-Stop Tour","title":"Whistle-Stop Tour","text":"data = TaijaData.load_multi_class(n_samples)\ncounterfactual_data = DataPreprocessing.CounterfactualData(data...)\nM = fit_model(counterfactual_data, model_name)","category":"page"},{"location":"tutorials/whistle_stop/","page":"Whistle-Stop Tour","title":"Whistle-Stop Tour","text":"The chart below visualizes our data along with the model predictions. In particular, the contour indicates the predicted probabilities generated by our classifier. By default, these are the predicted probabilities for y=1, the first label. For multi-dimensional input, the data is compressed into two dimensions and the decision boundary is approximated using Nearest Neighbors (this is still somewhat experimental).","category":"page"},{"location":"tutorials/whistle_stop/","page":"Whistle-Stop Tour","title":"Whistle-Stop Tour","text":"plot(M, counterfactual_data)","category":"page"},{"location":"tutorials/whistle_stop/","page":"Whistle-Stop Tour","title":"Whistle-Stop Tour","text":"(Image: )","category":"page"},{"location":"tutorials/whistle_stop/#Counterfactual-Explanation","page":"Whistle-Stop Tour","title":"Counterfactual Explanation","text":"","category":"section"},{"location":"tutorials/whistle_stop/","page":"Whistle-Stop Tour","title":"Whistle-Stop Tour","text":"Next, we begin by specifying our target and factual label. 
We then draw a random sample from the non-target (factual) class.","category":"page"},{"location":"tutorials/whistle_stop/","page":"Whistle-Stop Tour","title":"Whistle-Stop Tour","text":"# Factual and target:\ntarget = 2\nfactual = 4\nchosen = rand(findall(predict_label(M, counterfactual_data) .== factual))\nx = select_factual(counterfactual_data, chosen)","category":"page"},{"location":"tutorials/whistle_stop/","page":"Whistle-Stop Tour","title":"Whistle-Stop Tour","text":"This sets the baseline for our counterfactual search: we plan to perturb the factual x to change the predicted label from y=4 to our target label target=2.","category":"page"},{"location":"tutorials/whistle_stop/","page":"Whistle-Stop Tour","title":"Whistle-Stop Tour","text":"Counterfactual generators accept several default parameters that can be used to adjust the counterfactual search at a high level: for example, a Flux.jl optimizer can be supplied to define how exactly gradient steps are performed. Importantly, one can also define the threshold probability at which the counterfactual search will converge. This relates to the probability predicted by the underlying black-box model that the counterfactual belongs to the target class. A higher decision threshold typically prolongs the counterfactual search.","category":"page"},{"location":"tutorials/whistle_stop/","page":"Whistle-Stop Tour","title":"Whistle-Stop Tour","text":"# Search params:\ndecision_threshold = 0.75\nnum_counterfactuals = 3","category":"page"},{"location":"tutorials/whistle_stop/","page":"Whistle-Stop Tour","title":"Whistle-Stop Tour","text":"The code below runs the counterfactual search for each generator available in the generator_catalogue. In each case, we also call the generic plot() method on the generated instance of type CounterfactualExplanation. This generates a simple plot that visualizes the entire counterfactual path. The chart below shows the results for all counterfactual generators: Factual: 4 → Target: 2.","category":"page"},{"location":"tutorials/whistle_stop/","page":"Whistle-Stop Tour","title":"Whistle-Stop Tour","text":"ces = Dict()\nplts = []\nplottable_generators = filter(((k,v),) -> k ∉ [:growing_spheres, :feature_tweak], generator_catalogue)\n# Search:\nfor (key, Generator) in plottable_generators\n    generator = Generator()\n    ce = generate_counterfactual(\n        x, target, counterfactual_data, M, generator;\n        num_counterfactuals = num_counterfactuals,\n        convergence=GeneratorConditionsConvergence(\n            decision_threshold=decision_threshold\n        )\n    )\n    ces[key] = ce\n    plts = [plts..., plot(ce; title=key, colorbar=false)]\nend","category":"page"},{"location":"tutorials/whistle_stop/","page":"Whistle-Stop Tour","title":"Whistle-Stop Tour","text":"(Image: )","category":"page"},{"location":"how_to_guides/custom_models/","page":"... add custom models","title":"... add custom models","text":"CurrentModule = CounterfactualExplanations ","category":"page"},{"location":"how_to_guides/custom_models/#How-to-add-Custom-Models","page":"... add custom models","title":"How to add Custom Models","text":"","category":"section"},{"location":"how_to_guides/custom_models/","page":"... add custom models","title":"... add custom models","text":"Adding custom models is possible and relatively straightforward, as we will demonstrate in this guide.","category":"page"},{"location":"how_to_guides/custom_models/#Custom-Models","page":"... add custom models","title":"Custom Models","text":"","category":"section"},{"location":"how_to_guides/custom_models/","page":"... 
add custom models","title":"... add custom models","text":"Apart from the default models you can use any arbitrary (differentiable) model and generate recourse in the same way as before. Only two steps are necessary to make your own Julia model compatible with this package:","category":"page"},{"location":"how_to_guides/custom_models/","page":"... add custom models","title":"... add custom models","text":"The model needs to be declared as a subtype of <:CounterfactualExplanations.Models.AbstractModel.\nYou need to extend the functions CounterfactualExplanations.Models.logits and CounterfactualExplanations.Models.probs for your custom model.","category":"page"},{"location":"how_to_guides/custom_models/#How-FluxModel-was-added","page":"... add custom models","title":"How FluxModel was added","text":"","category":"section"},{"location":"how_to_guides/custom_models/","page":"... add custom models","title":"... add custom models","text":"To demonstrate how this can be done in practice, we will reiterate here how native support for Flux.jl models was enabled (Innes 2018). Once again we use synthetic data for an illustrative example. The code below loads the data and builds a simple model architecture that can be used for a multi-class prediction task. Note how outputs from the final layer are not passed through a softmax activation function, since the counterfactual loss is evaluated with respect to logits. The model is trained with dropout.","category":"page"},{"location":"how_to_guides/custom_models/","page":"... add custom models","title":"... add custom models","text":"# Data:\nN = 200\ndata = TaijaData.load_blobs(N; centers=4, cluster_std=0.5)\ncounterfactual_data = DataPreprocessing.CounterfactualData(data...)\ny = counterfactual_data.y\nX = counterfactual_data.X\n\n# Flux model setup: \nusing Flux\ndata = Flux.DataLoader((X,y), batchsize=1)\nn_hidden = 32\noutput_dim = size(y,1)\ninput_dim = 2\nactivation = σ\nmodel = Chain(\n Dense(input_dim, n_hidden, activation),\n Dropout(0.1),\n Dense(n_hidden, output_dim)\n) \nloss(x, y) = Flux.Losses.logitcrossentropy(model(x), y)\n\n# Flux model training:\nusing Flux.Optimise: update!, Adam\nopt = Adam()\nepochs = 50\nfor epoch = 1:epochs\n for d in data\n gs = gradient(Flux.params(model)) do\n l = loss(d...)\n end\n update!(opt, Flux.params(model), gs)\n end\nend","category":"page"},{"location":"how_to_guides/custom_models/","page":"... add custom models","title":"... add custom models","text":"The code below implements the two steps that were necessary to make Flux models compatible with the package. We first declare our new struct as a subtype of <:AbstractDifferentiableModel, which itself is an abstract subtype of <:AbstractModel. Computing logits amounts to just calling the model on inputs. Predicted probabilities for labels can in this case be computed by passing predicted logits through the softmax function. Finally, we just instantiate our model in the same way as always.","category":"page"},{"location":"how_to_guides/custom_models/","page":"... add custom models","title":"... 
add custom models","text":"# Step 1)\nstruct MyFluxModel <: AbstractDifferentiableModel\n model::Any\n likelihood::Symbol\nend\n\n# Step 2)\n# import functions in order to extend\nimport CounterfactualExplanations.Models: logits\nimport CounterfactualExplanations.Models: probs \nlogits(M::MyFluxModel, X::AbstractArray) = M.model(X)\nprobs(M::MyFluxModel, X::AbstractArray) = softmax(logits(M, X))\nM = MyFluxModel(model, :classification_multi)","category":"page"},{"location":"how_to_guides/custom_models/","page":"... add custom models","title":"... add custom models","text":"The code below implements the counterfactual search and plots the results:","category":"page"},{"location":"how_to_guides/custom_models/","page":"... add custom models","title":"... add custom models","text":"factual_label = 4\ntarget = 2\nchosen = rand(findall(predict_label(M, counterfactual_data) .== factual_label))\nx = select_factual(counterfactual_data, chosen) \n\n# Counterfactual search:\ngenerator = GenericGenerator()\nce = generate_counterfactual(x, target, counterfactual_data, M, generator)\nplot(ce)","category":"page"},{"location":"how_to_guides/custom_models/","page":"... add custom models","title":"... add custom models","text":"(Image: )","category":"page"},{"location":"how_to_guides/custom_models/#References","page":"... add custom models","title":"References","text":"","category":"section"},{"location":"how_to_guides/custom_models/","page":"... add custom models","title":"... add custom models","text":"Innes, Mike. 2018. “Flux: Elegant Machine Learning with Julia.” Journal of Open Source Software 3 (25): 602. https://doi.org/10.21105/joss.00602.","category":"page"},{"location":"explanation/categorical/","page":"Categorical Features","title":"Categorical Features","text":"CurrentModule = CounterfactualExplanations ","category":"page"},{"location":"explanation/categorical/#Categorical-Features","page":"Categorical Features","title":"Categorical Features","text":"","category":"section"},{"location":"explanation/categorical/","page":"Categorical Features","title":"Categorical Features","text":"To illustrate how data is preprocessed under the hood, we consider a simple toy dataset with three categorical features (name, grade and sex) and one continuous feature (age):","category":"page"},{"location":"explanation/categorical/","page":"Categorical Features","title":"Categorical Features","text":"X = (\n name=categorical([\"Danesh\", \"Lee\", \"Mary\", \"John\"]),\n grade=categorical([\"A\", \"B\", \"A\", \"C\"], ordered=true),\n sex=categorical([\"male\",\"female\",\"male\",\"male\"]),\n height=[1.85, 1.67, 1.5, 1.67],\n)\nschema(X)","category":"page"},{"location":"explanation/categorical/","page":"Categorical Features","title":"Categorical Features","text":"Categorical features are expected to be one-hot or dummy encoded. 
To this end, we could use MLJ, for example:","category":"page"},{"location":"explanation/categorical/","page":"Categorical Features","title":"Categorical Features","text":"hot = OneHotEncoder()\nmach = fit!(machine(hot, X))\nW = transform(mach, X)\nschema(W)","category":"page"},{"location":"explanation/categorical/","page":"Categorical Features","title":"Categorical Features","text":"┌──────────────┬────────────┬─────────┐\n│ names │ scitypes │ types │\n├──────────────┼────────────┼─────────┤\n│ name__Danesh │ Continuous │ Float64 │\n│ name__John │ Continuous │ Float64 │\n│ name__Lee │ Continuous │ Float64 │\n│ name__Mary │ Continuous │ Float64 │\n│ grade__A │ Continuous │ Float64 │\n│ grade__B │ Continuous │ Float64 │\n│ grade__C │ Continuous │ Float64 │\n│ sex__female │ Continuous │ Float64 │\n│ sex__male │ Continuous │ Float64 │\n│ height │ Continuous │ Float64 │\n└──────────────┴────────────┴─────────┘","category":"page"},{"location":"explanation/categorical/","page":"Categorical Features","title":"Categorical Features","text":"The matrix that will be perturbed during the counterfactual search looks as follows:","category":"page"},{"location":"explanation/categorical/","page":"Categorical Features","title":"Categorical Features","text":"X = permutedims(MLJBase.matrix(W))","category":"page"},{"location":"explanation/categorical/","page":"Categorical Features","title":"Categorical Features","text":"10×4 Matrix{Float64}:\n 1.0 0.0 0.0 0.0\n 0.0 0.0 0.0 1.0\n 0.0 1.0 0.0 0.0\n 0.0 0.0 1.0 0.0\n 1.0 0.0 1.0 0.0\n 0.0 1.0 0.0 0.0\n 0.0 0.0 0.0 1.0\n 0.0 1.0 0.0 0.0\n 1.0 0.0 1.0 1.0\n 1.85 1.67 1.5 1.67","category":"page"},{"location":"explanation/categorical/","page":"Categorical Features","title":"Categorical Features","text":"The CounterfactualData constructor takes two optional arguments that can be used to specify the indices of categorical and continuous features. If nothing is supplied, all features are assumed to be continuous. For categorical features, the constructor expects an array of arrays of integers (Vector{Vector{Int}}) where each subarray includes the indices of all one-hot encoded rows related to a single categorical feature. 
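Putting this together for the toy data, the call might look like the following sketch (here the keyword names features_categorical and features_continuous are assumed to match the constructor's documented options, and y stands for a vector of labels that is not constructed on this page):\n\ncounterfactual_data = CounterfactualData(\n X, y;\n features_categorical=[[1,2,3,4], [5,6,7], [8,9]],\n features_continuous=[10],\n)\n\n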
In the example above, the name feature is one-hot encoded across rows 1 to 4 of X.","category":"page"},{"location":"explanation/categorical/","page":"Categorical Features","title":"Categorical Features","text":"features_categorical = [\n [1,2,3,4], # name\n [5,6,7], # grade\n [8,9] # sex\n]\nfeatures_continuous = [10]","category":"page"},{"location":"explanation/categorical/","page":"Categorical Features","title":"Categorical Features","text":"We propose the following simple logic for reconstructing categorical encodings after perturbations:","category":"page"},{"location":"explanation/categorical/","page":"Categorical Features","title":"Categorical Features","text":"For one-hot encoded features with multiple classes, choose the maximum.\nFor binary features, clip the perturbed value to fall into [0,1] and round to the nearest of the two integers.","category":"page"},{"location":"explanation/categorical/","page":"Categorical Features","title":"Categorical Features","text":"function reconstruct_cat_encoding(x)\n map(features_categorical) do cat_group_index\n if length(cat_group_index) > 1\n x[cat_group_index] = Int.(x[cat_group_index] .== maximum(x[cat_group_index]))\n if sum(x[cat_group_index]) > 1\n ties = findall(x[cat_group_index] .== 1)\n _x = zeros(length(x[cat_group_index]))\n winner = rand(ties,1)[1]\n _x[winner] = 1\n x[cat_group_index] = _x\n end\n else\n x[cat_group_index] = [round(clamp(x[cat_group_index][1],0,1))]\n end\n end\n return x\nend","category":"page"},{"location":"explanation/categorical/","page":"Categorical Features","title":"Categorical Features","text":"Let’s look at a few simple examples to see how this function works. Firstly, consider the case of perturbing a single element:","category":"page"},{"location":"explanation/categorical/","page":"Categorical Features","title":"Categorical Features","text":"x = X[:,1]\nx[1] = 1.1\nx","category":"page"},{"location":"explanation/categorical/","page":"Categorical Features","title":"Categorical Features","text":"10-element Vector{Float64}:\n 1.1\n 0.0\n 0.0\n 0.0\n 1.0\n 0.0\n 0.0\n 0.0\n 1.0\n 1.85","category":"page"},{"location":"explanation/categorical/","page":"Categorical Features","title":"Categorical Features","text":"The reconstructed one-hot-encoded vector will look like this:","category":"page"},{"location":"explanation/categorical/","page":"Categorical Features","title":"Categorical 
Features","text":"reconstruct_cat_encoding(x)","category":"page"},{"location":"explanation/categorical/","page":"Categorical Features","title":"Categorical Features","text":"10-element Vector{Float64}:\n 0.0\n 1.0\n 0.0\n 0.0\n 1.0\n 0.0\n 0.0\n 0.0\n 1.0\n 1.85","category":"page"},{"location":"explanation/categorical/","page":"Categorical Features","title":"Categorical Features","text":"Finally, let’s introduce a tie:","category":"page"},{"location":"explanation/categorical/","page":"Categorical Features","title":"Categorical Features","text":"x[1] = 1.0\nx","category":"page"},{"location":"explanation/categorical/","page":"Categorical Features","title":"Categorical Features","text":"10-element Vector{Float64}:\n 1.0\n 1.0\n 0.0\n 0.0\n 1.0\n 0.0\n 0.0\n 0.0\n 1.0\n 1.85","category":"page"},{"location":"explanation/categorical/","page":"Categorical Features","title":"Categorical Features","text":"The reconstructed one-hot-encoded vector will look like this:","category":"page"},{"location":"explanation/categorical/","page":"Categorical Features","title":"Categorical Features","text":"reconstruct_cat_encoding(x)","category":"page"},{"location":"explanation/categorical/","page":"Categorical Features","title":"Categorical Features","text":"10-element Vector{Float64}:\n 0.0\n 1.0\n 0.0\n 0.0\n 1.0\n 0.0\n 0.0\n 0.0\n 1.0\n 1.85","category":"page"},{"location":"explanation/evaluation/overview/","page":"Overview","title":"Overview","text":"CurrentModule = CounterfactualExplanations ","category":"page"},{"location":"explanation/evaluation/overview/#Evaluation","page":"Overview","title":"Evaluation","text":"","category":"section"},{"location":"explanation/evaluation/overview/","page":"Overview","title":"Overview","text":"Evaluation of counterfactual explanations is an integral part of the counterfactual explanation process. It is important to evaluate the quality of the generated counterfactual explanations to ensure that they are meaningful and useful. The tutorial provides an overview of the evaluation metrics and methods that can be used to evaluate counterfactual explanations. In this part of the documentation, we dive deeper into specific evaluation metrics and methods that can be used to evaluate counterfactual explanations.","category":"page"},{"location":"explanation/generators/dice/","page":"DiCE","title":"DiCE","text":"CurrentModule = CounterfactualExplanations ","category":"page"},{"location":"explanation/generators/dice/#DiCEGenerator","page":"DiCE","title":"DiCEGenerator","text":"","category":"section"},{"location":"explanation/generators/dice/","page":"DiCE","title":"DiCE","text":"The DiCEGenerator can be used to generate multiple diverse counterfactuals for a single factual.","category":"page"},{"location":"explanation/generators/dice/#Description","page":"DiCE","title":"Description","text":"","category":"section"},{"location":"explanation/generators/dice/","page":"DiCE","title":"DiCE","text":"Counterfactual Explanations are not unique and there are therefore many different ways through which valid counterfactuals can be generated. In the context of Algorithmic Recourse this can be leveraged to offer individuals not one, but possibly many different ways to change a negative outcome into a positive one. One might argue that it makes sense for those different options to be as diverse as possible. 
This idea is at the core of DiCE, a counterfactual generator introduced by Mothilal, Sharma, and Tan (2020) that generates a diverse set of counterfactual explanations.","category":"page"},{"location":"explanation/generators/dice/#Defining-Diversity","page":"DiCE","title":"Defining Diversity","text":"","category":"section"},{"location":"explanation/generators/dice/","page":"DiCE","title":"DiCE","text":"To ensure that the generated counterfactuals are diverse, Mothilal, Sharma, and Tan (2020) add a diversity constraint to the counterfactual search objective. In particular, diversity is explicitly proxied via Determinantal Point Processes (DPP).","category":"page"},{"location":"explanation/generators/dice/","page":"DiCE","title":"DiCE","text":"We can implement such a DPP-based diversity measure in Julia as follows:[1]","category":"page"},{"location":"explanation/generators/dice/","page":"DiCE","title":"DiCE","text":"using LinearAlgebra\nfunction ddp_diversity(X::AbstractArray{<:Real, 3})\n xs = eachslice(X, dims = ndims(X))\n K = [1/(1 + norm(x .- y)) for x in xs, y in xs]\n return det(K)\nend","category":"page"},{"location":"explanation/generators/dice/","page":"DiCE","title":"DiCE","text":"Below we generate some random points in $\mathbb{R}^2$ and apply gradient ascent on this function evaluated at the whole array of points. As we can see in the animation below, the points are sent away from each other. In other words, diversity across the array of points increases as we ascend the ddp_diversity function.","category":"page"},{"location":"explanation/generators/dice/","page":"DiCE","title":"DiCE","text":"lims = 5\nN = 5\nX = rand(2,1,N)\nT = 50\nη = 0.1\nanim = @animate for t in 1:T\n X .+= gradient(ddp_diversity, X)[1]\n Z = reshape(X,2,N)\n scatter(\n Z[1,:],Z[2,:],ms=25, \n xlims=(-lims,lims),ylims=(-lims,lims),\n label=\"\",colour=1:N,\n size=(500,500),\n title=\"Diverse Counterfactuals\"\n )\nend\ngif(anim, joinpath(www_path, \"dice_intro.gif\"))","category":"page"},{"location":"explanation/generators/dice/","page":"DiCE","title":"DiCE","text":"(Image: )","category":"page"},{"location":"explanation/generators/dice/#Usage","page":"DiCE","title":"Usage","text":"","category":"section"},{"location":"explanation/generators/dice/","page":"DiCE","title":"DiCE","text":"The approach can be used in our package as follows:","category":"page"},{"location":"explanation/generators/dice/","page":"DiCE","title":"DiCE","text":"generator = DiCEGenerator()\nconv = CounterfactualExplanations.Convergence.GeneratorConditionsConvergence()\nce = generate_counterfactual(\n x, target, counterfactual_data, M, generator; \n num_counterfactuals=5, convergence=conv\n)\nplot(ce)","category":"page"},{"location":"explanation/generators/dice/","page":"DiCE","title":"DiCE","text":"(Image: )","category":"page"},{"location":"explanation/generators/dice/#Effect-of-Penalty","page":"DiCE","title":"Effect of Penalty","text":"","category":"section"},{"location":"explanation/generators/dice/","page":"DiCE","title":"DiCE","text":"Λ₂ = [0.1, 1.0, 5.0]\nces = []\nn_cf = 5\nusing Flux\nfor λ₂ ∈ Λ₂ \n λ = [0.00, λ₂]\n generator = DiCEGenerator(λ=λ)\n ces = vcat(\n ces...,\n generate_counterfactual(\n x, target, counterfactual_data, M, generator; \n num_counterfactuals=n_cf, convergence=conv\n )\n )\nend","category":"page"},
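{"location":"explanation/generators/dice/","page":"DiCE","title":"DiCE","text":"To attach a number to the visual impression below, one could also score each result with the ddp_diversity function from above. The following is only a sketch: it assumes that CounterfactualExplanations.counterfactual(ce) returns the 3-dimensional array of counterfactual states that ddp_diversity expects.\n\nfor (λ₂, ce) in zip(Λ₂, ces)\n X′ = CounterfactualExplanations.counterfactual(ce)\n println(\"λ₂ = $λ₂: det(K) = $(ddp_diversity(X′))\")\nend","category":"page"},{"location":"explanation/generators/dice/","page":"DiCE","title":"DiCE","text":"The figure below shows the resulting counterfactual paths. 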
As expected, the resulting counterfactuals are more dispersed across the feature domain for higher choices of $\lambda_2$.","category":"page"},{"location":"explanation/generators/dice/","page":"DiCE","title":"DiCE","text":"(Image: )","category":"page"},{"location":"explanation/generators/dice/#References","page":"DiCE","title":"References","text":"","category":"section"},{"location":"explanation/generators/dice/","page":"DiCE","title":"DiCE","text":"Mothilal, Ramaravind K, Amit Sharma, and Chenhao Tan. 2020. “Explaining Machine Learning Classifiers Through Diverse Counterfactual Explanations.” In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 607–17. https://doi.org/10.1145/3351095.3372850.","category":"page"},{"location":"explanation/generators/dice/","page":"DiCE","title":"DiCE","text":"[1] With thanks to the respondents on Discourse","category":"page"},{"location":"how_to_guides/","page":"Overview","title":"Overview","text":"CurrentModule = CounterfactualExplanations","category":"page"},{"location":"how_to_guides/#How-To-Guides","page":"Overview","title":"How-To Guides","text":"","category":"section"},{"location":"how_to_guides/","page":"Overview","title":"Overview","text":"In this section, you will find a series of how-to guides that showcase specific use cases of counterfactual explanations (CE).","category":"page"},{"location":"how_to_guides/","page":"Overview","title":"Overview","text":"How-to guides are directions that take the reader through the steps required to solve a real-world problem. How-to guides are goal-oriented.— Diátaxis","category":"page"},{"location":"how_to_guides/","page":"Overview","title":"Overview","text":"In other words, you come here because you may have some particular problem in mind, would like to see how it can be solved using CE and then most likely head off again 🫡.","category":"page"},{"location":"explanation/optimisers/jsma/","page":"JSMA","title":"JSMA","text":"CurrentModule = CounterfactualExplanations ","category":"page"},{"location":"explanation/optimisers/jsma/#Jacobian-based-Saliency-Map-Attack","page":"JSMA","title":"Jacobian-based Saliency Map Attack","text":"","category":"section"},{"location":"explanation/optimisers/jsma/","page":"JSMA","title":"JSMA","text":"To search counterfactuals, Schut et al. (2021) propose to use a Jacobian-Based Saliency Map Attack (JSMA) inspired by the literature on adversarial attacks. It works by moving in the direction of the most salient feature at a fixed step size in each iteration. Schut et al. (2021) use this optimisation rule in the context of Bayesian classifiers and demonstrate good results in terms of plausibility — how realistic counterfactuals are — and redundancy — how sparse the proposed feature changes are.","category":"page"},{"location":"explanation/optimisers/jsma/#JSMADescent","page":"JSMA","title":"JSMADescent","text":"","category":"section"},{"location":"explanation/optimisers/jsma/","page":"JSMA","title":"JSMA","text":"To implement this approach in a reusable manner, we have added JSMA as a Flux optimiser. In particular, we have added a type JSMADescent<:Flux.Optimise.AbstractOptimiser, for which we have overloaded the Flux.Optimise.apply! method. 
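In generic terms, a custom Flux optimiser boils down to a struct holding the hyperparameters plus an apply! method that rescales the gradient before each update. The sketch below illustrates the pattern with a plain fixed-step variant; it is not the package's actual implementation, which additionally restricts the update to the most salient feature:\n\nusing Flux\n\nstruct FixedStepDescent <: Flux.Optimise.AbstractOptimiser\n η::Float64 # step size\nend\n\n# apply! receives the current gradient Δ for parameter x and returns the update:\nfunction Flux.Optimise.apply!(o::FixedStepDescent, x, Δ)\n Δ .= o.η .* sign.(Δ)\n return Δ\nend\n\n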
This makes it possible to reuse JSMADescent as an optimiser in composable generators.","category":"page"},{"location":"explanation/optimisers/jsma/","page":"JSMA","title":"JSMA","text":"The optimiser can be used with any generator as follows:","category":"page"},{"location":"explanation/optimisers/jsma/","page":"JSMA","title":"JSMA","text":"using CounterfactualExplanations.Generators: JSMADescent\ngenerator = GenericGenerator() |>\n gen -> @with_optimiser(gen,JSMADescent(;η=0.1))\nce = generate_counterfactual(x, target, counterfactual_data, M, generator)","category":"page"},{"location":"explanation/optimisers/jsma/","page":"JSMA","title":"JSMA","text":"The figure below compares the resulting counterfactual search outcome to the corresponding outcome with generic Descent.","category":"page"},{"location":"explanation/optimisers/jsma/","page":"JSMA","title":"JSMA","text":"plot(p1,p2,size=(1000,400))","category":"page"},{"location":"explanation/optimisers/jsma/","page":"JSMA","title":"JSMA","text":"(Image: )","category":"page"},{"location":"explanation/optimisers/jsma/","page":"JSMA","title":"JSMA","text":"Schut, Lisa, Oscar Key, Rory Mc Grath, Luca Costabello, Bogdan Sacaleanu, Yarin Gal, et al. 2021. “Generating Interpretable Counterfactual Explanations By Implicit Minimisation of Epistemic and Aleatoric Uncertainties.” In International Conference on Artificial Intelligence and Statistics, 1756–64. PMLR.","category":"page"},{"location":"tutorials/data_catalogue/","page":"Data Catalogue","title":"Data Catalogue","text":"CurrentModule = CounterfactualExplanations ","category":"page"},{"location":"tutorials/data_catalogue/#Data-Catalogue","page":"Data Catalogue","title":"Data Catalogue","text":"","category":"section"},{"location":"tutorials/data_catalogue/","page":"Data Catalogue","title":"Data Catalogue","text":"To allow researchers and practitioners to test and compare counterfactual generators, the Taija environment includes the package TaijaData.jl, which comes with pre-processed synthetic and real-world benchmark datasets from different domains. 
This page explains how to use TaijaData.jl in tandem with CounterfactualExplanations.jl.","category":"page"},{"location":"tutorials/data_catalogue/#Synthetic-Data","page":"Data Catalogue","title":"Synthetic Data","text":"","category":"section"},{"location":"tutorials/data_catalogue/","page":"Data Catalogue","title":"Data Catalogue","text":"The following dictionary can be used to inspect the available methods to generate synthetic datasets where the key indicates the name of the data and the value is the corresponding method:","category":"page"},{"location":"tutorials/data_catalogue/","page":"Data Catalogue","title":"Data Catalogue","text":"TaijaData.data_catalogue[:synthetic]","category":"page"},{"location":"tutorials/data_catalogue/","page":"Data Catalogue","title":"Data Catalogue","text":"Dict{Symbol, Function} with 6 entries:\n :overlapping => load_overlapping\n :linearly_separable => load_linearly_separable\n :blobs => load_blobs\n :moons => load_moons\n :circles => load_circles\n :multi_class => load_multi_class","category":"page"},{"location":"tutorials/data_catalogue/","page":"Data Catalogue","title":"Data Catalogue","text":"The chart below shows the generated data using default parameters:","category":"page"},{"location":"tutorials/data_catalogue/","page":"Data Catalogue","title":"Data Catalogue","text":"plts = []\n_height = 200\n_n = length(keys(data_catalogue[:synthetic]))\nfor (key, fun) in data_catalogue[:synthetic]\n data = fun()\n counterfactual_data = DataPreprocessing.CounterfactualData(data...)\n plt = plot()\n scatter!(counterfactual_data, title=key)\n plts = [plts..., plt]\nend\nplot(plts..., size=(_n * _height, _height), layout=(1, _n))","category":"page"},{"location":"tutorials/data_catalogue/","page":"Data Catalogue","title":"Data Catalogue","text":"(Image: )","category":"page"},{"location":"tutorials/data_catalogue/#Real-World-Data","page":"Data Catalogue","title":"Real-World Data","text":"","category":"section"},{"location":"tutorials/data_catalogue/","page":"Data Catalogue","title":"Data Catalogue","text":"As for real-world data, the same dictionary can be used to inspect the available data from different domains.","category":"page"},{"location":"tutorials/data_catalogue/","page":"Data Catalogue","title":"Data Catalogue","text":"TaijaData.data_catalogue[:tabular]","category":"page"},{"location":"tutorials/data_catalogue/","page":"Data Catalogue","title":"Data Catalogue","text":"Dict{Symbol, Function} with 5 entries:\n :german_credit => load_german_credit\n :california_housing => load_california_housing\n :credit_default => load_credit_default\n :adult => load_uci_adult\n :gmsc => load_gmsc","category":"page"},{"location":"tutorials/data_catalogue/","page":"Data Catalogue","title":"Data Catalogue","text":"TaijaData.data_catalogue[:vision]","category":"page"},{"location":"tutorials/data_catalogue/","page":"Data Catalogue","title":"Data Catalogue","text":"Dict{Symbol, Function} with 3 entries:\n :fashion_mnist => load_fashion_mnist\n :mnist => load_mnist\n :cifar_10 => load_cifar_10","category":"page"},{"location":"tutorials/data_catalogue/#Loading-Data","page":"Data Catalogue","title":"Loading Data","text":"","category":"section"},{"location":"tutorials/data_catalogue/","page":"Data Catalogue","title":"Data Catalogue","text":"To load or generate any of the datasets listed above, you can just use the corresponding method, for example:","category":"page"},{"location":"tutorials/data_catalogue/","page":"Data Catalogue","title":"Data Catalogue","text":"data = 
TaijaData.load_linearly_separable()\ncounterfactual_data = DataPreprocessing.CounterfactualData(data...)","category":"page"},{"location":"tutorials/data_catalogue/","page":"Data Catalogue","title":"Data Catalogue","text":"Optionally, you can specify how many samples you want to generate like so:","category":"page"},{"location":"tutorials/data_catalogue/","page":"Data Catalogue","title":"Data Catalogue","text":"n = 100\ndata = TaijaData.load_overlapping(n)\ncounterfactual_data = DataPreprocessing.CounterfactualData(data...)","category":"page"},{"location":"tutorials/data_catalogue/","page":"Data Catalogue","title":"Data Catalogue","text":"This also applies to real-world datasets, which by default are loaded in their entirety. If n is supplied, the dataset will be randomly undersampled:","category":"page"},{"location":"tutorials/data_catalogue/","page":"Data Catalogue","title":"Data Catalogue","text":"data = TaijaData.load_mnist(n)\ncounterfactual_data = DataPreprocessing.CounterfactualData(data...)","category":"page"},{"location":"tutorials/data_catalogue/","page":"Data Catalogue","title":"Data Catalogue","text":"The undersampled dataset is automatically balanced:","category":"page"},{"location":"tutorials/data_catalogue/","page":"Data Catalogue","title":"Data Catalogue","text":"sum(counterfactual_data.y; dims=2)","category":"page"},{"location":"tutorials/data_catalogue/","page":"Data Catalogue","title":"Data Catalogue","text":"10×1 Matrix{Int64}:\n 10\n 10\n 10\n 10\n 10\n 10\n 10\n 10\n 10\n 10","category":"page"},{"location":"tutorials/data_catalogue/","page":"Data Catalogue","title":"Data Catalogue","text":"We can also use a helper function to split the data into train and test sets:","category":"page"},{"location":"tutorials/data_catalogue/","page":"Data Catalogue","title":"Data Catalogue","text":"train_data, test_data = \n CounterfactualExplanations.DataPreprocessing.train_test_split(counterfactual_data)","category":"page"},{"location":"explanation/generators/clap_roar/","page":"ClaPROAR","title":"ClaPROAR","text":"CurrentModule = CounterfactualExplanations ","category":"page"},{"location":"explanation/generators/clap_roar/#ClaPROARGenerator","page":"ClaPROAR","title":"ClaPROARGenerator","text":"","category":"section"},{"location":"explanation/generators/clap_roar/","page":"ClaPROAR","title":"ClaPROAR","text":"The ClaPROARGenerator was introduced in Altmeyer et al. (2023).","category":"page"},{"location":"explanation/generators/clap_roar/#Description","page":"ClaPROAR","title":"Description","text":"","category":"section"},{"location":"explanation/generators/clap_roar/","page":"ClaPROAR","title":"ClaPROAR","text":"The acronym Clap stands for classifier-preserving. The approach is loosely inspired by ROAR (Upadhyay, Joshi, and Lakkaraju 2021). Altmeyer et al. (2023) propose to explicitly penalize the loss incurred by the classifier when evaluated on the counterfactual $x^\prime$ at given parameter values. Formally, we have","category":"page"},{"location":"explanation/generators/clap_roar/","page":"ClaPROAR","title":"ClaPROAR","text":"\begin{aligned}\n\text{extcost}(f(\mathbf{s}^\prime)) = l(M(f(\mathbf{s}^\prime)), y^\prime)\n\end{aligned}","category":"page"},{"location":"explanation/generators/clap_roar/","page":"ClaPROAR","title":"ClaPROAR","text":"for each counterfactual $k$, where $l$ denotes the loss function used to train $M$. This approach is based on the intuition that (endogenous) model shifts will be triggered by counterfactuals that increase classifier loss (Altmeyer et al. 
2023).","category":"page"},{"location":"explanation/generators/clap_roar/#Usage","page":"ClaPROAR","title":"Usage","text":"","category":"section"},{"location":"explanation/generators/clap_roar/","page":"ClaPROAR","title":"ClaPROAR","text":"The approach can be used in our package as follows:","category":"page"},{"location":"explanation/generators/clap_roar/","page":"ClaPROAR","title":"ClaPROAR","text":"generator = ClaPROARGenerator()\nce = generate_counterfactual(x, target, counterfactual_data, M, generator)\nplot(ce)","category":"page"},{"location":"explanation/generators/clap_roar/","page":"ClaPROAR","title":"ClaPROAR","text":"(Image: )","category":"page"},{"location":"explanation/generators/clap_roar/#Comparison-to-GenericGenerator","page":"ClaPROAR","title":"Comparison to GenericGenerator","text":"","category":"section"},{"location":"explanation/generators/clap_roar/","page":"ClaPROAR","title":"ClaPROAR","text":"The figure below compares the outcome for the GenericGenerator and the ClaPROARGenerator.","category":"page"},{"location":"explanation/generators/clap_roar/","page":"ClaPROAR","title":"ClaPROAR","text":"(Image: )","category":"page"},{"location":"explanation/generators/clap_roar/#References","page":"ClaPROAR","title":"References","text":"","category":"section"},{"location":"explanation/generators/clap_roar/","page":"ClaPROAR","title":"ClaPROAR","text":"Altmeyer, Patrick, Giovan Angela, Aleksander Buszydlik, Karol Dobiczek, Arie van Deursen, and Cynthia Liem. 2023. “Endogenous Macrodynamics in Algorithmic Recourse.” In First IEEE Conference on Secure and Trustworthy Machine Learning. https://doi.org/10.1109/satml54575.2023.00036.","category":"page"},{"location":"explanation/generators/clap_roar/","page":"ClaPROAR","title":"ClaPROAR","text":"Upadhyay, Sohini, Shalmali Joshi, and Himabindu Lakkaraju. 2021. “Towards Robust and Reliable Algorithmic Recourse.” https://arxiv.org/abs/2102.13620.","category":"page"},{"location":"explanation/","page":"Overview","title":"Overview","text":"CurrentModule = CounterfactualExplanations","category":"page"},{"location":"explanation/#Explanation","page":"Overview","title":"Explanation","text":"","category":"section"},{"location":"explanation/","page":"Overview","title":"Overview","text":"In this section you will find detailed explanations about the methodology and code.","category":"page"},{"location":"explanation/","page":"Overview","title":"Overview","text":"Explanation clarifies, deepens and broadens the reader’s understanding of a subject.— Diátaxis","category":"page"},{"location":"explanation/","page":"Overview","title":"Overview","text":"In other words, you come here because you are interested in understanding how all of this actually works 🤓.","category":"page"},{"location":"explanation/generators/feature_tweak/","page":"FeatureTweak","title":"FeatureTweak","text":"CurrentModule = CounterfactualExplanations ","category":"page"},{"location":"explanation/generators/feature_tweak/#FeatureTweakGenerator","page":"FeatureTweak","title":"FeatureTweakGenerator","text":"","category":"section"},{"location":"explanation/generators/feature_tweak/","page":"FeatureTweak","title":"FeatureTweak","text":"warning: Moved to extension\nAs of version 1.1.6, the functionality of the FeatureTweakGenerator has been moved to the DecisionTreeExt extension. This means it is lazily loaded only if the DecisionTree.jl package is loaded by the user, since the FeatureTweakGenerator is only compatible with tree-based models. 
","category":"page"},{"location":"explanation/generators/feature_tweak/","page":"FeatureTweak","title":"FeatureTweak","text":"Feature Tweak refers to the generator introduced by Tolomei et al. (2017). Our implementation takes inspiration from the featureTweakPy library.","category":"page"},{"location":"explanation/generators/feature_tweak/#Description","page":"FeatureTweak","title":"Description","text":"","category":"section"},{"location":"explanation/generators/feature_tweak/","page":"FeatureTweak","title":"FeatureTweak","text":"Feature Tweak is a powerful recourse algorithm for ensembles of tree-based classifiers such as random forests. Though the problem of understanding how an input to an ensemble model could be transformed in such a way that the model changes its original prediction has been proven to be NP-hard (Tolomei et al. 2017), Feature Tweak provides an algorithm that manages to tractably solve this problem in multiple real-world applications. An example of a problem Feature Tweak is able to efficiently solve, explored in depth in Tolomei et al. (2017) is the problem of transforming an advertisement that has been classified by the ensemble model as a low-quality advertisement to a high-quality one through small changes to its features. With the help of Feature Tweak, advertisers can both learn about the reasons a particular ad was marked to have a low quality, as well as receive actionable suggestions about how to convert a low-quality ad into a high-quality one.","category":"page"},{"location":"explanation/generators/feature_tweak/","page":"FeatureTweak","title":"FeatureTweak","text":"Though Feature Tweak is a powerful way of avoiding brute-force search in an exponential search space, it does not come without disadvantages. The primary limitations of the approach are that it’s currently only applicable to tree-based classifiers and works only in the setting of binary classification. Another problem is that though the algorithm avoids exponential-time search, it is often still computationally expensive. The algorithm may be improved in the future to tackle all of these shortcomings.","category":"page"},{"location":"explanation/generators/feature_tweak/","page":"FeatureTweak","title":"FeatureTweak","text":"The following equation displays how a true negative instance x can be transformed into a positively predicted instance x’. To be more precise, x’ is the best possible transformation among all transformations **x***, computed with a cost function δ.","category":"page"},{"location":"explanation/generators/feature_tweak/","page":"FeatureTweak","title":"FeatureTweak","text":"beginaligned\nmathbfx^prime = arg_mathbfx^* min delta(mathbfx mathbfx^*) hatf(mathbfx) = -1 wedge hatf(mathbfx^*) = +1 \nendaligned","category":"page"},{"location":"explanation/generators/feature_tweak/#Example","page":"FeatureTweak","title":"Example","text":"","category":"section"},{"location":"explanation/generators/feature_tweak/","page":"FeatureTweak","title":"FeatureTweak","text":"To make use of the FeatureTweakGenerator, you need to have the DecisionTree.jl package installed. 
Loading the package will load the functionality of the FeatureTweakGenerator through the DecisionTreeExt extension:","category":"page"},{"location":"explanation/generators/feature_tweak/","page":"FeatureTweak","title":"FeatureTweak","text":"using DecisionTree","category":"page"},{"location":"explanation/generators/feature_tweak/","page":"FeatureTweak","title":"FeatureTweak","text":"In this example, we apply the Feature Tweak algorithm to a decision tree and a random forest trained on the moons dataset. We first load the data and fit the models:","category":"page"},{"location":"explanation/generators/feature_tweak/","page":"FeatureTweak","title":"FeatureTweak","text":"n = 500\ncounterfactual_data = CounterfactualData(TaijaData.load_moons(n)...)\n\n# Classifiers\ndecision_tree = CounterfactualExplanations.Models.fit_model(\n counterfactual_data, :DecisionTree; max_depth=5, min_samples_leaf=3\n)\nforest = Models.fit_model(counterfactual_data, :RandomForest)","category":"page"},{"location":"explanation/generators/feature_tweak/","page":"FeatureTweak","title":"FeatureTweak","text":"Next, we select a point to explain and a target class to transform the point to. We then search for counterfactuals using the FeatureTweakGenerator:","category":"page"},{"location":"explanation/generators/feature_tweak/","page":"FeatureTweak","title":"FeatureTweak","text":"# Select a point to explain:\nx = Float32.([1, -0.5])[:,:]\nfactual = Models.predict_label(forest, counterfactual_data, x)\ntarget = counterfactual_data.y_levels[findall(counterfactual_data.y_levels .!= factual)][1]\n\n# Search for counterfactuals:\ngenerator = FeatureTweakGenerator(ϵ=0.1)\ntree_counterfactual = generate_counterfactual(\n x, target, counterfactual_data, decision_tree, generator\n)\nforest_counterfactual = generate_counterfactual(\n x, target, counterfactual_data, forest, generator\n)","category":"page"},{"location":"explanation/generators/feature_tweak/","page":"FeatureTweak","title":"FeatureTweak","text":"The resulting counterfactuals are shown below:","category":"page"},{"location":"explanation/generators/feature_tweak/","page":"FeatureTweak","title":"FeatureTweak","text":"p1 = plot(\n tree_counterfactual;\n colorbar=false,\n title=\"Decision Tree\",\n)\n\np2 = plot(\n forest_counterfactual; title=\"Random Forest\",\n colorbar=false,\n)\n\ndisplay(plot(p1, p2; size=(800, 400)))","category":"page"},{"location":"explanation/generators/feature_tweak/#References","page":"FeatureTweak","title":"References","text":"","category":"section"},{"location":"explanation/generators/feature_tweak/","page":"FeatureTweak","title":"FeatureTweak","text":"Tolomei, Gabriele, Fabrizio Silvestri, Andrew Haines, and Mounia Lalmas. 2017. “Interpretable Predictions of Tree-Based Ensembles via Actionable Feature Tweaking.” In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 465–74. 
https://doi.org/10.1145/3097983.3098039.","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"CurrentModule = CounterfactualExplanations ","category":"page"},{"location":"tutorials/model_catalogue/#Model-Catalogue","page":"Model Catalogue","title":"Model Catalogue","text":"","category":"section"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"While in general it is assumed that users will use this package to explain their pre-trained models, we provide out-of-the-box functionality to train various simple default models. In this tutorial, we will see how these models can be fitted to CounterfactualData.","category":"page"},{"location":"tutorials/model_catalogue/#Available-Models","page":"Model Catalogue","title":"Available Models","text":"","category":"section"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"The standard_models_catalogue can be used to inspect the available default models:","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"standard_models_catalogue","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"Dict{Symbol, DataType} with 3 entries:\n :Linear => Linear\n :DeepEnsemble => FluxEnsemble\n :MLP => FluxModel","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"The dictionary keys correspond to the model names. In this case, the dictionary values are constructors that can be called on instances of type CounterfactualData to fit the corresponding model. In most cases, users will find it more convenient to use the fit_model API call instead.","category":"page"},{"location":"tutorials/model_catalogue/#Fitting-Models","page":"Model Catalogue","title":"Fitting Models","text":"","category":"section"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"Models from the standard model catalogue are a core part of the package and thus compatible with all offered counterfactual generators and functionalities.","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"The all_models_catalogue can be used to inspect all models offered by the package:","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"all_models_catalogue","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"However, when using models not included in the standard_models_catalogue, additional caution is advised: they might not be supported by all counterfactual generators or they might not be models native to Julia. Thus, a more thorough reading of their documentation may be necessary to make sure that they are used correctly.","category":"page"},{"location":"tutorials/model_catalogue/#Fitting-Flux-Models","page":"Model Catalogue","title":"Fitting Flux Models","text":"","category":"section"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"First, let’s load one of the synthetic datasets. 
For this, we’ll first need to import the TaijaData.jl package:","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"n = 500\ndata = TaijaData.load_multi_class(n)\ncounterfactual_data = DataPreprocessing.CounterfactualData(data...)","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"We could use a Deep Ensemble (Lakshminarayanan, Pritzel, and Blundell 2017) as follows:","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"M = fit_model(counterfactual_data, :DeepEnsemble)","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"The returned object is an instance of type FluxEnsemble <: AbstractModel and can be used in downstream tasks without further ado. For example, the resulting fit can be visualised using the generic plot() method as:","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"plts = []\nfor target in counterfactual_data.y_levels\n plt = plot(M, counterfactual_data; target=target, title=\"p(y=$(target)|x,θ)\")\n plts = [plts..., plt]\nend\nplot(plts...)","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"(Image: )","category":"page"},{"location":"tutorials/model_catalogue/#Importing-PyTorch-models","page":"Model Catalogue","title":"Importing PyTorch models","text":"","category":"section"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"The package supports generating counterfactuals for any neural network that has been previously defined and trained using PyTorch, regardless of the specific architectural details of the model. 
To generate counterfactuals for a PyTorch model, save the model inside a .pt file and call the following function:","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"model_loaded = TaijaInteroperability.pytorch_model_loader(\n \"$(pwd())/docs/src/tutorials/miscellaneous\",\n \"neural_network_class\",\n \"NeuralNetwork\",\n \"$(pwd())/docs/src/tutorials/miscellaneous/pretrained_model.pt\"\n)","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"The method pytorch_model_loader requires four arguments:","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"The path to the folder with a .py file where the PyTorch model is defined\nThe name of the file where the PyTorch model is defined\nThe name of the class of the PyTorch model\nThe path to the Pickle file that holds the model weights","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"In the above case:","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"The file defining the model is inside $(pwd())/docs/src/tutorials/miscellaneous\nThe name of the .py file holding the model definition is neural_network_class\nThe name of the model class is NeuralNetwork\nThe Pickle file is located at $(pwd())/docs/src/tutorials/miscellaneous/pretrained_model.pt","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"Though the model file and Pickle file are inside the same directory in this tutorial, this does not necessarily have to be the case.","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"The reason why the model file and Pickle file have to be provided separately is that the package expects an already trained PyTorch model as input. It is also possible to define new PyTorch models within the package, but since this is not the expected use of our package, special support is not offered for that. A guide for defining Python and PyTorch classes in Julia through PythonCall.jl can be found here.","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"Once the PyTorch model has been loaded into the package, wrap it inside the PyTorchModel class:","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"model_pytorch = TaijaInteroperability.PyTorchModel(model_loaded, counterfactual_data.likelihood)","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"This model can now be passed into the generators like any other.","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"Please note that the functionality for generating counterfactuals for Python models is only available if your Julia version is 1.8 or above. 
For Julia 1.7 users, we recommend upgrading the version to 1.8 or 1.9 before loading a PyTorch model into the package.","category":"page"},{"location":"tutorials/model_catalogue/#Importing-R-torch-models","page":"Model Catalogue","title":"Importing R torch models","text":"","category":"section"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"warning: Not fully tested\nPlease note that due to the incompatibility between RCall and PythonCall, it is not feasible to test both PyTorch and RTorch implementations within the same pipeline. While the RTorch implementation has been manually tested, we cannot ensure its consistent functionality as it is inherently susceptible to bugs.","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"The CounterfactualExplanations package supports generating counterfactuals for neural networks that have been defined and trained using R torch. Regardless of the specific architectural details of the model, you can easily generate counterfactual explanations by following these steps.","category":"page"},{"location":"tutorials/model_catalogue/#Saving-the-R-torch-model","page":"Model Catalogue","title":"Saving the R torch model","text":"","category":"section"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"First, save your trained R torch model as a .pt file using the torch_save() function provided by the R torch library. This function allows you to serialize the model and save it to a file. For example:","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"torch_save(model, file = \"$(pwd())/docs/src/tutorials/miscellaneous/r_model.pt\")","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"Make sure to specify the correct file path where you want to save the model.","category":"page"},{"location":"tutorials/model_catalogue/#Loading-the-R-torch-model","page":"Model Catalogue","title":"Loading the R torch model","text":"","category":"section"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"To import the R torch model into the CounterfactualExplanations package, use the rtorch_model_loader() function. This function loads the model from the previously saved .pt file. Here is an example of how to load the R torch model:","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"model_loaded = TaijaInteroperability.rtorch_model_loader(\"$(pwd())/docs/src/tutorials/miscellaneous/r_model.pt\")","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"The rtorch_model_loader() function requires only one argument:","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"model_path: The path to the .pt file that contains the trained R torch model.","category":"page"},{"location":"tutorials/model_catalogue/#Wrapping-the-R-torch-model","page":"Model Catalogue","title":"Wrapping the R torch model","text":"","category":"section"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"Once the R torch model has been loaded into the package, wrap it inside the RTorchModel class. 
This step prepares the model to be used by the counterfactual generators. Here is an example:","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"model_R = TaijaInteroperability.RTorchModel(model_loaded, counterfactual_data.likelihood)","category":"page"},{"location":"tutorials/model_catalogue/#Generating-counterfactuals-with-the-R-torch-model","page":"Model Catalogue","title":"Generating counterfactuals with the R torch model","text":"","category":"section"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"Now that the R torch model has been wrapped inside the RTorchModel class, you can pass it into the counterfactual generators as you would with any other model.","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"Please note that RCall is not fully compatible with PythonCall. Therefore, it is advisable not to import both R torch and PyTorch models within the same Julia session. Additionally, it’s worth mentioning that the R torch integration is still untested in the CounterfactualExplanations package.","category":"page"},{"location":"tutorials/model_catalogue/#Tuning-Flux-Models","page":"Model Catalogue","title":"Tuning Flux Models","text":"","category":"section"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"By default, model architectures are very simple. Through optional arguments, users have some control over the neural network architecture and can choose to impose regularization through dropout. Let’s tackle a more challenging dataset: MNIST (LeCun 1998).","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"data = TaijaData.load_mnist(10000)\ncounterfactual_data = DataPreprocessing.CounterfactualData(data...)\ntrain_data, test_data = \n CounterfactualExplanations.DataPreprocessing.train_test_split(counterfactual_data)","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"(Image: )","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"In this case, we will use a Multi-Layer Perceptron (MLP) but we will adjust the model and training hyperparameters. Parameters related to training of Flux.jl models are currently stored in a mutable container:","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"flux_training_params","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"CounterfactualExplanations.FluxModelParams(:logitbinarycrossentropy, :Adam, 100, 1, false)","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"In cases like this one, where model training can be expected to take a few moments, it can be useful to activate verbosity, so let’s set the corresponding field value to true. 
We’ll also impose mini-batch training:","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"flux_training_params.verbose = true\nflux_training_params.batchsize = round(size(train_data.X,2)/10)","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"To account for the fact that this is a slightly more challenging task, we will use an appropriate number of hidden neurons per layer. We will also activate dropout regularization. To scale networks up further, it is also possible to adjust the number of hidden layers, which we will not do here.","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"model_params = (\n n_hidden = 32,\n dropout = true\n)","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"The model_params can be supplied to the familiar API call:","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"M = fit_model(train_data, :MLP; model_params...)","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"CounterfactualExplanations.Models.Model(Chain(Dense(784 => 32, relu), Dropout(0.25, active=false), Dense(32 => 10)), :classification_multi, Chain(Dense(784 => 32, relu), Dropout(0.25, active=false), Dense(32 => 10)), MLP())","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"The model performance on our test set can be evaluated as follows:","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"model_evaluation(M, test_data)","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"1-element Vector{Float64}:\n 0.9185","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"Finally, let’s restore the default training parameters:","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"CounterfactualExplanations.reset!(flux_training_params)","category":"page"},{"location":"tutorials/model_catalogue/#References","page":"Model Catalogue","title":"References","text":"","category":"section"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"Lakshminarayanan, Balaji, Alexander Pritzel, and Charles Blundell. 2017. “Simple and Scalable Predictive Uncertainty Estimation Using Deep Ensembles.” Advances in Neural Information Processing Systems 30.","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"LeCun, Yann. 1998. 
“The MNIST Database of Handwritten Digits.” http://yann.lecun.com/exdb/mnist/.","category":"page"},{"location":"tutorials/evaluation/","page":"Evaluating Explanations","title":"Evaluating Explanations","text":"CurrentModule = CounterfactualExplanations ","category":"page"},{"location":"tutorials/evaluation/#evaluation","page":"Evaluating Explanations","title":"Evaluation","text":"","category":"section"},{"location":"tutorials/evaluation/","page":"Evaluating Explanations","title":"Evaluating Explanations","text":"Now that we know how to generate counterfactual explanations in Julia, you may have a few follow-up questions: How do I know if the counterfactual search has been successful? How good is my counterfactual explanation? What does ‘good’ even mean in this context? In this tutorial, we will see how counterfactual explanations can be evaluated with respect to their performance.","category":"page"},{"location":"tutorials/evaluation/#Default-Measures","page":"Evaluating Explanations","title":"Default Measures","text":"","category":"section"},{"location":"tutorials/evaluation/","page":"Evaluating Explanations","title":"Evaluating Explanations","text":"Numerous evaluation measures for counterfactual explanations have been proposed. In what follows, we will cover some of the most important measures.","category":"page"},{"location":"tutorials/evaluation/#Single-Measure,-Single-Counterfactual","page":"Evaluating Explanations","title":"Single Measure, Single Counterfactual","text":"","category":"section"},{"location":"tutorials/evaluation/","page":"Evaluating Explanations","title":"Evaluating Explanations","text":"One of the most important measures is validity, which simply determines whether or not a counterfactual explanation $x^\prime$ is valid in the sense that it yields the target prediction: $M(x^\prime)=t$. We can evaluate the validity of a single counterfactual explanation ce using the Evaluation.evaluate function as follows:","category":"page"},{"location":"tutorials/evaluation/","page":"Evaluating Explanations","title":"Evaluating Explanations","text":"using CounterfactualExplanations.Evaluation: evaluate, validity\nevaluate(ce; measure=validity)","category":"page"},{"location":"tutorials/evaluation/","page":"Evaluating Explanations","title":"Evaluating Explanations","text":"1-element Vector{Vector{Float64}}:\n [1.0]","category":"page"},{"location":"tutorials/evaluation/","page":"Evaluating Explanations","title":"Evaluating Explanations","text":"For a single counterfactual explanation, this evaluation measure can only take two values: it is either equal to 1 if the explanation is valid, or 0 otherwise. Another important measure is distance, which relates to the distance between the factual $x$ and the counterfactual $x^\prime$. 
In the context of Algorithmic Recourse, higher distances are typically associated with higher costs to individuals seeking recourse.","category":"page"},{"location":"tutorials/evaluation/","page":"Evaluating Explanations","title":"Evaluating Explanations","text":"using CounterfactualExplanations.Objectives: distance\nevaluate(ce; measure=distance)","category":"page"},{"location":"tutorials/evaluation/","page":"Evaluating Explanations","title":"Evaluating Explanations","text":"1-element Vector{Vector{Float32}}:\n [3.2160978]","category":"page"},{"location":"tutorials/evaluation/","page":"Evaluating Explanations","title":"Evaluating Explanations","text":"By default, distance computes the L2 (Euclidean) distance.","category":"page"},{"location":"tutorials/evaluation/#Multiple-Measures,-Single-Counterfactual","page":"Evaluating Explanations","title":"Multiple Measures, Single Counterfactual","text":"","category":"section"},{"location":"tutorials/evaluation/","page":"Evaluating Explanations","title":"Evaluating Explanations","text":"You might be interested in computing not just the L2 distance, but various LP norms. This can be done by supplying a vector of functions to the measure key argument. For convenience, all default distance measures have already been collected in a vector:","category":"page"},{"location":"tutorials/evaluation/","page":"Evaluating Explanations","title":"Evaluating Explanations","text":"using CounterfactualExplanations.Evaluation: distance_measures\ndistance_measures","category":"page"},{"location":"tutorials/evaluation/","page":"Evaluating Explanations","title":"Evaluating Explanations","text":"4-element Vector{Function}:\n distance_l0 (generic function with 1 method)\n distance_l1 (generic function with 1 method)\n distance_l2 (generic function with 1 method)\n distance_linf (generic function with 1 method)","category":"page"},{"location":"tutorials/evaluation/","page":"Evaluating Explanations","title":"Evaluating Explanations","text":"We can use this vector of evaluation measures as follows:","category":"page"},{"location":"tutorials/evaluation/","page":"Evaluating Explanations","title":"Evaluating Explanations","text":"evaluate(ce; measure=distance_measures)","category":"page"},{"location":"tutorials/evaluation/","page":"Evaluating Explanations","title":"Evaluating Explanations","text":"4-element Vector{Vector{Float32}}:\n [2.0]\n [3.2160978]\n [2.782144]\n [2.7413368]","category":"page"},{"location":"tutorials/evaluation/","page":"Evaluating Explanations","title":"Evaluating Explanations","text":"If no measure is specified, the evaluate method will return all default measures,","category":"page"},{"location":"tutorials/evaluation/","page":"Evaluating Explanations","title":"Evaluating Explanations","text":"evaluate(ce)","category":"page"},{"location":"tutorials/evaluation/","page":"Evaluating Explanations","title":"Evaluating Explanations","text":"3-element Vector{Vector}:\n [1.0]\n Float32[3.2160978]\n [0.0]","category":"page"},{"location":"tutorials/evaluation/","page":"Evaluating Explanations","title":"Evaluating Explanations","text":"which include:","category":"page"},{"location":"tutorials/evaluation/","page":"Evaluating Explanations","title":"Evaluating Explanations","text":"CounterfactualExplanations.Evaluation.default_measures","category":"page"},{"location":"tutorials/evaluation/","page":"Evaluating Explanations","title":"Evaluating Explanations","text":"3-element Vector{Function}:\n validity (generic function with 1 method)\n distance (generic function with 
1 method)\n redundancy (generic function with 1 method)","category":"page"},{"location":"tutorials/evaluation/#Multiple-Measures-and-Counterfactuals","page":"Evaluating Explanations","title":"Multiple Measures and Counterfactuals","text":"","category":"section"},{"location":"tutorials/evaluation/","page":"Evaluating Explanations","title":"Evaluating Explanations","text":"We can also evaluate multiple counterfactual explanations at once:","category":"page"},{"location":"tutorials/evaluation/","page":"Evaluating Explanations","title":"Evaluating Explanations","text":"generator = DiCEGenerator()\nces = generate_counterfactual(x, target, counterfactual_data, M, generator; num_counterfactuals=5)\nevaluate(ces)","category":"page"},{"location":"tutorials/evaluation/","page":"Evaluating Explanations","title":"Evaluating Explanations","text":"3-element Vector{Vector}:\n [1.0]\n Float32[3.2186122]\n [[0.0, 0.0, 0.0, 0.0, 0.0]]","category":"page"},{"location":"tutorials/evaluation/","page":"Evaluating Explanations","title":"Evaluating Explanations","text":"By default, each evaluation measure is aggregated across all counterfactual explanations. To return individual measures for each counterfactual explanation you can specify report_each=true","category":"page"},{"location":"tutorials/evaluation/","page":"Evaluating Explanations","title":"Evaluating Explanations","text":"evaluate(ces; report_each=true)","category":"page"},{"location":"tutorials/evaluation/","page":"Evaluating Explanations","title":"Evaluating Explanations","text":"3-element Vector{Vector}:\n BitVector[[1, 1, 1, 1, 1]]\n Vector{Float32}[[3.2230358, 3.1825113, 3.2527277, 3.2267833, 3.208004]]\n [[0.0, 0.0, 0.0, 0.0, 0.0]]","category":"page"},{"location":"tutorials/evaluation/#Custom-Measures","page":"Evaluating Explanations","title":"Custom Measures","text":"","category":"section"},{"location":"tutorials/evaluation/","page":"Evaluating Explanations","title":"Evaluating Explanations","text":"A measure is just a method that takes a CounterfactualExplanation as its only positional argument and agg::Function as a key argument specifying how measures should be aggregated across counterfactuals. Defining custom measures is therefore straightforward. For example, we could define a measure to compute the inverse target probability as follows:","category":"page"},{"location":"tutorials/evaluation/","page":"Evaluating Explanations","title":"Evaluating Explanations","text":"my_measure(ce::CounterfactualExplanation; agg=mean) = agg(1 .- CounterfactualExplanations.target_probs(ce))\nevaluate(ce; measure=my_measure)","category":"page"},{"location":"tutorials/evaluation/","page":"Evaluating Explanations","title":"Evaluating Explanations","text":"1-element Vector{Vector{Float32}}:\n [0.40882105]","category":"page"},{"location":"tutorials/evaluation/#Tidy-Output","page":"Evaluating Explanations","title":"Tidy Output","text":"","category":"section"},{"location":"tutorials/evaluation/","page":"Evaluating Explanations","title":"Evaluating Explanations","text":"By default, evaluate returns vectors of evaluation measures. 
The optional key argument output_format::Symbol can be used to post-process the output in two ways: firstly, to return the output as a dictionary, specify output_format=:Dict:","category":"page"},{"location":"tutorials/evaluation/","page":"Evaluating Explanations","title":"Evaluating Explanations","text":"evaluate(ces; output_format=:Dict, report_each=true)","category":"page"},{"location":"tutorials/evaluation/","page":"Evaluating Explanations","title":"Evaluating Explanations","text":"Dict{Symbol, Vector} with 3 entries:\n :validity => BitVector[[1, 1, 1, 1, 1]]\n :redundancy => [[0.0, 0.0, 0.0, 0.0, 0.0]]\n :distance => Vector{Float32}[[3.22304, 3.18251, 3.25273, 3.22678, 3.208]]","category":"page"},{"location":"tutorials/evaluation/","page":"Evaluating Explanations","title":"Evaluating Explanations","text":"Secondly, to return the output as a data frame, specify output_format=:DataFrame.","category":"page"},{"location":"tutorials/evaluation/","page":"Evaluating Explanations","title":"Evaluating Explanations","text":"evaluate(ces; output_format=:DataFrame, report_each=true)","category":"page"},{"location":"tutorials/evaluation/","page":"Evaluating Explanations","title":"Evaluating Explanations","text":"By default, data frames are pivoted to long format using individual counterfactuals as the id column. This behaviour can be suppressed by specifying pivot_longer=false.","category":"page"},{"location":"tutorials/evaluation/#Multiple-Counterfactual-Explanations","page":"Evaluating Explanations","title":"Multiple Counterfactual Explanations","text":"","category":"section"},{"location":"tutorials/evaluation/","page":"Evaluating Explanations","title":"Evaluating Explanations","text":"It may be necessary to generate counterfactual explanations for multiple individuals.","category":"page"},{"location":"tutorials/evaluation/","page":"Evaluating Explanations","title":"Evaluating Explanations","text":"Below, for example, we first select multiple samples (5) from the non-target class and then generate counterfactual explanations for all of them.","category":"page"},{"location":"tutorials/evaluation/","page":"Evaluating Explanations","title":"Evaluating Explanations","text":"This can be done using broadcasting:","category":"page"},{"location":"tutorials/evaluation/","page":"Evaluating Explanations","title":"Evaluating Explanations","text":"# Factual and target:\nn_individuals = 5\nids = rand(findall(predict_label(M, counterfactual_data) .== factual), n_individuals)\nxs = select_factual(counterfactual_data, ids)\nces = generate_counterfactual(xs, target, counterfactual_data, M, generator; num_counterfactuals=5)\nevaluation = evaluate.(ces)","category":"page"},{"location":"tutorials/evaluation/","page":"Evaluating Explanations","title":"Evaluating Explanations","text":"5-element Vector{Vector{Vector}}:\n [[0.8], Float32[3.2487042], [[0.0, 0.0, 0.0, 0.0, 0.0]]]\n [[0.8], Float32[4.185718], [[0.0, 0.0, 0.0, 0.0, 0.0]]]\n [[1.0], Float32[4.0083566], [[0.0, 0.0, 0.0, 0.0, 0.0]]]\n [[1.0], Float32[2.9578466], [[0.0, 0.0, 0.0, 0.0, 0.0]]]\n [[0.8], Float32[2.6089585], [[0.0, 0.0, 0.0, 0.0, 0.0]]]\n\nVector{Vector}[[[0.8], Float32[3.2487042], [[0.0, 0.0, 0.0, 0.0, 0.0]]], [[0.8], Float32[4.185718], [[0.0, 0.0, 0.0, 0.0, 0.0]]], [[1.0], Float32[4.0083566], [[0.0, 0.0, 0.0, 0.0, 0.0]]], [[1.0], Float32[2.9578466], [[0.0, 0.0, 0.0, 0.0, 0.0]]], [[0.8], Float32[2.6089585], [[0.0, 0.0, 0.0, 0.0, 0.0]]]]","category":"page"},{"location":"tutorials/evaluation/","page":"Evaluating Explanations","title":"Evaluating 
Explanations","text":"This leads us to our next topic: Performance Benchmarks.","category":"page"},{"location":"extensions/","page":"Overview","title":"Overview","text":"CurrentModule = CounterfactualExplanations","category":"page"},{"location":"extensions/#Extensions","page":"Overview","title":"⛓️ Extensions","text":"","category":"section"},{"location":"extensions/","page":"Overview","title":"Overview","text":"In this section, you will find information about package extensions of the CounterfactualExplanations package. Extensions are a relatively new feature of Julia that allows users to conditionally load code based on the presence of other packages. This is useful for creating packages that extend the functionality of other packages, without requiring the user to install the package being extended.","category":"page"},{"location":"explanation/evaluation/faithfulness/","page":"Plausibility and Faithfulness","title":"Plausibility and Faithfulness","text":"CurrentModule = CounterfactualExplanations ","category":"page"},{"location":"explanation/evaluation/faithfulness/#Faithfulness-and-Plausibility","page":"Plausibility and Faithfulness","title":"Faithfulness and Plausibility","text":"","category":"section"},{"location":"explanation/evaluation/faithfulness/","page":"Plausibility and Faithfulness","title":"Plausibility and Faithfulness","text":"warning: Warning\nThe implementation of our faithfulness and plausibility metrics is based on our AAAI 2024 paper. There is no consensus on the best way to measure faithfulness and plausibility and we are still conducting research on this. This tutorial is therefore also a work in progress. Current limitations are discussed below.","category":"page"},{"location":"explanation/evaluation/faithfulness/","page":"Plausibility and Faithfulness","title":"Plausibility and Faithfulness","text":"We begin by loading some dependencies:","category":"page"},{"location":"explanation/evaluation/faithfulness/","page":"Plausibility and Faithfulness","title":"Plausibility and Faithfulness","text":"# Packages\nusing CounterfactualExplanations\nusing CounterfactualExplanations.Evaluation\nusing CounterfactualExplanations.Convergence\nusing CounterfactualExplanations.Models\nusing Flux\nusing JointEnergyModels\nusing MLJFlux\nusing EnergySamplers: PMC, SGLD, ImproperSGLD\nusing TaijaData","category":"page"},{"location":"explanation/evaluation/faithfulness/#Sample-Based-Metrics","page":"Plausibility and Faithfulness","title":"Sample-Based Metrics","text":"","category":"section"},{"location":"explanation/evaluation/faithfulness/","page":"Plausibility and Faithfulness","title":"Plausibility and Faithfulness","text":"In Altmeyer et al. (2024), we defined two sample-based metrics for plausibility and faithfulness. The metrics rely on the premise of comparing the counterfactual to samples drawn from some target distribution. To assess plausibility, we compare the counterfactual to samples drawn from the training data that fall into the target class. To assess faithfulness, we compare the counterfactual to samples drawn from the model posterior conditional through Stochastic Gradient Langevin Dynamics (SGLD). For details specific to posterior sampling, please consult our documentation Taija’s EnergySamplers.jl. For broader details on this topic, please consult Altmeyer et al. 
(2024).","category":"page"},{"location":"explanation/evaluation/faithfulness/#Simple-Example","page":"Plausibility and Faithfulness","title":"Simple Example","text":"","category":"section"},{"location":"explanation/evaluation/faithfulness/","page":"Plausibility and Faithfulness","title":"Plausibility and Faithfulness","text":"Below we generate a simple synthetic dataset with two output classes, both Gaussian clusters with different centers. We then train a joint energy-based model (JEM) using Taija’s JointEnergyModels.jl package to both discriminate between output classes and generate inputs.","category":"page"},{"location":"explanation/evaluation/faithfulness/","page":"Plausibility and Faithfulness","title":"Plausibility and Faithfulness","text":"n_obs = 1000\nX, y = TaijaData.load_blobs(n_obs; cluster_std=0.1, center_box=(-1. => 1.))\ndata = CounterfactualData(X, y)\n\nn_hidden = 16\n_batch_size = Int(round(n_obs/10))\nepochs = 100\nM = Models.fit_model(\n data,:JEM;\n builder=MLJFlux.MLP(\n hidden=(n_hidden, n_hidden, n_hidden), \n σ=Flux.swish\n ),\n batch_size=_batch_size,\n finaliser=Flux.softmax,\n loss=Flux.Losses.crossentropy,\n jem_training_params=(\n α=[1.0,1.0,1e-1],\n verbosity=10,\n ),\n epochs=epochs,\n sampling_steps=30,\n)","category":"page"},{"location":"explanation/evaluation/faithfulness/","page":"Plausibility and Faithfulness","title":"Plausibility and Faithfulness","text":"Next, we generate counterfactuals for a randomly drawn sampler using two different generators: firstly, the GenericGenerator and, secondly, the ECCoGenerator. The latter was proposed in Altmeyer et al. (2024) to generate faithful counterfactuals by constraining their energy with respect to the model. In both cases, we generate multiple counterfactuals for the same factual. Each time the search is initialized by adding a small random perturbation to the features following (Slack et al. 2021). For both generators, we then compute the average plausibility and faithfulness of the generated counterfactuals as defined above and plot the counterfactual paths in the figure below. The estimated values for the plausibility and faithfulness are shown in the plot titles and indicate that the ECCoGenerator performs better in both regards.","category":"page"},{"location":"explanation/evaluation/faithfulness/","page":"Plausibility and Faithfulness","title":"Plausibility and Faithfulness","text":"To better understand why the ECCoGenerator generates more faithful counterfactuals, we have also plotted samples drawn from the model posterior p_theta(Xy=1) in green: these largely overlap with training data in the target distribution, which indicates that the JEM has succeeded on both tasks—discriminating and generating—for this simple data set. The energy constraint of the ECCoGenerator ensures that counterfactuals remain anchored by the learned model posterior conditional distribution. As demonstrated in Altmeyer et al. (2024), faithful counterfactuals will also be plausible if the underlying model has learned plausible explanations for the data as in this case. 
For the GenericGenerator, counterfactuals end up outside of that target distribution, because the distance penalty pulls counterfactuals back to their original starting values.","category":"page"},{"location":"explanation/evaluation/faithfulness/","page":"Plausibility and Faithfulness","title":"Plausibility and Faithfulness","text":"using Measures\n\n# Select a factual instance:\ntarget = 1\nfactual = 2\nchosen = rand(findall(predict_label(M, data) .== factual))\nx = select_factual(data, chosen)\n\n# Search parameters:\nopt = Adam(0.005)\nconv = GeneratorConditionsConvergence()\n\n# Generic Generator:\nλ₁ = 0.1\ngenerator = GenericGenerator(opt=opt, λ=λ₁)\nce = generate_counterfactual(x, target, data, M, generator; convergence=conv, num_counterfactuals=5)\nfaith = Evaluation.faithfulness(ce)\nplaus = Evaluation.plausibility(ce)\np1 = plot(ce; zoom=-1, target=target)\nX̂ = ce.search[:energy_sampler][ce.target].posterior\ntitle = \"Generic Generator\\nplaus.: $(round(plaus, digits=2)); faith.: $(round(faith, digits=2))\"\nscatter!(X̂[1, :], X̂[2, :]; label=\"X|y=$target\", shape=:star5, ms=10, title=title, color=3, alpha=0.1)\nscatter!(ce.x′[1,:], ce.x′[2,:]; label=\"Counterfactual\", shape=:star1, ms=20, color=4)\n\n# ECCo Generator:\nλ₂ = 1.0\ngenerator = ECCoGenerator(opt=opt; λ=[λ₁, λ₂])\nce = generate_counterfactual(x, target, data, M, generator; convergence=conv, num_counterfactuals=5)\nfaith = Evaluation.faithfulness(ce)\nplaus = Evaluation.plausibility(ce)\np2 = plot(ce; zoom=-1, target=target)\nX̂ = ce.search[:energy_sampler][ce.target].posterior\ntitle = \"ECCo Generator\\nplaus.: $(round(plaus, digits=2)); faith.: $(round(faith, digits=2))\"\nscatter!(X̂[1, :], X̂[2, :]; label=\"X|y=$target\", shape=:star5, ms=10, title=title, color=3, alpha=0.1)\nscatter!(ce.x′[1,:], ce.x′[2,:]; label=\"Counterfactual\", shape=:star1, ms=20, color=4)\n\nplot(p1, p2; size=(1000, 400), topmargin=5mm)","category":"page"},{"location":"explanation/evaluation/faithfulness/","page":"Plausibility and Faithfulness","title":"Plausibility and Faithfulness","text":"(Image: )","category":"page"},{"location":"explanation/evaluation/faithfulness/#Current-Limitations","page":"Plausibility and Faithfulness","title":"Current Limitations","text":"","category":"section"},{"location":"explanation/evaluation/faithfulness/","page":"Plausibility and Faithfulness","title":"Plausibility and Faithfulness","text":"But things do not always turn out this well. Our next example demonstrates an important shortcoming of the framework proposed in Altmeyer et al. (2024). Instead of training a JEM, we now train a simpler, purely discriminative model:","category":"page"},{"location":"explanation/evaluation/faithfulness/","page":"Plausibility and Faithfulness","title":"Plausibility and Faithfulness","text":"n_obs = 1000\nX, y = TaijaData.load_blobs(n_obs; cluster_std=0.1, center_box=(-1. => 1.))\ndata = CounterfactualData(X, y)\nflux_training_params.n_epochs = 1\nM = Models.fit_model(data,:DeepEnsemble)\nCounterfactualExplanations.reset!(flux_training_params)","category":"page"},{"location":"explanation/evaluation/faithfulness/","page":"Plausibility and Faithfulness","title":"Plausibility and Faithfulness","text":"Next, we repeat the same process above for generating counterfactuals. This time we can observe in the figure below that the GenericGenerator produces much more plausible though apparently less faithful counterfactuals than the ECCoGenerator. 
Looking at the top row only, it is not obvious why the counterfactual produced by the GenericGenerator should be considered less faithful to the model: conditional samples drawn from p_theta(X|y=1) through SGLD are just scattered all across the target domain on the expected side of the decision boundary. When zooming out (bottom row), it becomes clear that the learned posterior conditional is far away from the observed training data in the target class. Our definition and measure of faithfulness are in that sense very strict, quite possibly too strict in some cases.","category":"page"},{"location":"explanation/evaluation/faithfulness/","page":"Plausibility and Faithfulness","title":"Plausibility and Faithfulness","text":"# Select a factual instance:\ntarget = 2\nfactual = 1\nchosen = rand(findall(predict_label(M, data) .== factual))\nx = select_factual(data, chosen)\n\n# Search parameters:\nopt = Adam(0.1)\nconv = GeneratorConditionsConvergence()\n\n# Generic Generator:\ngenerator = GenericGenerator(opt=opt)\nce = generate_counterfactual(x, target, data, M, generator; convergence=conv, initialization=:identity)\nplaus = Evaluation.plausibility(ce)\nfaith = Evaluation.faithfulness(ce)\nX̂ = ce.search[:energy_sampler][ce.target].posterior\ntitle = \"Generic Generator\\nplaus.: $(round(plaus, digits=2)); faith.: $(round(faith, digits=2))\"\np1 = plot(ce, zoom=-1, target=target)\nscatter!(X̂[1, :], X̂[2, :]; label=\"X|y=$target\", shape=:star5, ms=10, title=title, color=3, alpha=0.2)\nscatter!(ce.x′[1,:], ce.x′[2,:]; label=\"Counterfactual\", shape=:star1, ms=20, color=4)\n_lim = maximum(abs.(X̂))\nxlims, ylims = (-_lim, _lim), (-_lim, _lim)\np3 = plot(ce; xlims=xlims, ylims=ylims, target=target)\nscatter!(X̂[1, :], X̂[2, :]; label=\"X|y=$target\", shape=:star5, ms=10, title=title, color=3, alpha=0.2)\nscatter!(ce.x′[1,:], ce.x′[2,:]; label=\"Counterfactual\", shape=:star1, ms=20, color=4)\n\n# ECCo Generator:\ngenerator = ECCoGenerator(opt=opt; λ=[0.1, 1.0])\nce = generate_counterfactual(x, target, data, M, generator; convergence=conv, initialization=:identity)\nplaus = Evaluation.plausibility(ce)\nfaith = Evaluation.faithfulness(ce)\nX̂ = ce.search[:energy_sampler][ce.target].posterior\ntitle = \"ECCo Generator\\nplaus.: $(round(plaus, digits=2)); faith.: $(round(faith, digits=2))\"\np2 = plot(ce, zoom=-1, target=target)\nscatter!(X̂[1, :], X̂[2, :]; label=\"X|y=$target\", shape=:star5, ms=10, title=title, color=3, alpha=0.2)\nscatter!(ce.x′[1,:], ce.x′[2,:]; label=\"Counterfactual\", shape=:star1, ms=20, color=4)\n_lim = maximum(abs.(X̂))\nxlims, ylims = (-_lim, _lim), (-_lim, _lim)\np4 = plot(ce; xlims=xlims, ylims=ylims, target=target)\nscatter!(X̂[1, :], X̂[2, :]; label=\"X|y=$target\", shape=:star5, ms=10, title=title, color=3, alpha=0.2)\nscatter!(ce.x′[1,:], ce.x′[2,:]; label=\"Counterfactual\", shape=:star1, ms=20, color=4)\n\nplot(p1, p2, p3, p4; size=(1000, 800), topmargin=5mm)","category":"page"},{"location":"explanation/evaluation/faithfulness/","page":"Plausibility and Faithfulness","title":"Plausibility and Faithfulness","text":"(Image: )","category":"page"},{"location":"explanation/evaluation/faithfulness/","page":"Plausibility and Faithfulness","title":"Plausibility and Faithfulness","text":"Looking at a different domain like images demonstrates another limitation of the sample-based metrics. Below we generate counterfactuals for turning an 8 into a 3 using our two generators from above for a simple MNIST (LeCun 1998) classifier. 
Looking at the figure below, arguably the ECCoGenerator generates a more plausible counterfactual in this case. Unfortunately, according to the sample-based plausibility metric, this is not the case.","category":"page"},{"location":"explanation/evaluation/faithfulness/","page":"Plausibility and Faithfulness","title":"Plausibility and Faithfulness","text":"using MLDatasets: convert2image, MNIST\nusing Random\n\n_nrow = 3\n\nRandom.seed!(42)\nX, y = TaijaData.load_mnist()\ndata = CounterfactualData(X, y)\n\nusing CounterfactualExplanations.Models: load_mnist_model\nusing CounterfactualExplanations: JEM\nM = load_mnist_model(MLP())\n\n# Select a factual instance:\ntarget = 3\nfactual = 8\nchosen = rand(findall(predict_label(M, data) .== factual))\nx = select_factual(data, chosen)\n\n# Search parameters:\nopt = Adam(0.1)\nconv = GeneratorConditionsConvergence()\nλ₁ = 0.0\nλ₂ = 0.5\n\n# Factual:\nfactual = convert2image(MNIST, reshape(x, 28, 28))\np1 = plot(factual; title=\"\\nFactual\", axis=([], false))\n\n# Generic Generator:\ngenerator = GenericGenerator(opt=opt; λ=λ₁)\nce = generate_counterfactual(x, target, data, M, generator; convergence=conv, initialization=:identity)\nfaith = Evaluation.faithfulness(ce; nsamples=_nrow^2, niter_final=10000)\nplaus = Evaluation.plausibility(ce)\nimg = convert2image(MNIST, reshape(ce.x′, 28, 28))\ntitle = \"Generic Generator\\nplaus.: $(round(plaus, digits=2))\\nfaith.: $(round(faith, digits=2))\"\np2 = plot(img, title=title, axis=([], false))\n\n# ECCo Generator:\ngenerator = ECCoGenerator(opt=opt; λ=[λ₁, λ₂])\nce = generate_counterfactual(x, target, data, M, generator; convergence=conv, initialization=:identity)\nfaith = Evaluation.faithfulness(ce; nsamples=_nrow^2, niter_final=10000)\nplaus = Evaluation.plausibility(ce)\nimg = convert2image(MNIST, reshape(ce.x′, 28, 28))\ntitle = \"ECCo Generator\\nplaus.: $(round(plaus, digits=2))\\nfaith.: $(round(faith, digits=2))\"\np3 = plot(img, title=title, axis=([], false))\n\nplot(p1, p2, p3; size=(600, 200), layout=(1, 3), topmargin=15mm)","category":"page"},{"location":"explanation/evaluation/faithfulness/","page":"Plausibility and Faithfulness","title":"Plausibility and Faithfulness","text":"(Image: )","category":"page"},{"location":"explanation/evaluation/faithfulness/#References","page":"Plausibility and Faithfulness","title":"References","text":"","category":"section"},{"location":"explanation/evaluation/faithfulness/","page":"Plausibility and Faithfulness","title":"Plausibility and Faithfulness","text":"Altmeyer, Patrick, Mojtaba Farmanbar, Arie van Deursen, and Cynthia CS Liem. 2024. “Faithful Model Explanations Through Energy-Constrained Conformal Counterfactuals.” In Proceedings of the AAAI Conference on Artificial Intelligence, 38 (10): 10829–37.","category":"page"},{"location":"explanation/evaluation/faithfulness/","page":"Plausibility and Faithfulness","title":"Plausibility and Faithfulness","text":"LeCun, Yann. 1998. “The MNIST Database of Handwritten Digits.” http://yann.lecun.com/exdb/mnist/.","category":"page"},{"location":"explanation/evaluation/faithfulness/","page":"Plausibility and Faithfulness","title":"Plausibility and Faithfulness","text":"Slack, Dylan, Anna Hilgard, Himabindu Lakkaraju, and Sameer Singh. 2021. 
“Counterfactual Explanations Can Be Manipulated.” Advances in Neural Information Processing Systems 34.","category":"page"},{"location":"explanation/generators/generic/","page":"Generic","title":"Generic","text":"CurrentModule = CounterfactualExplanations ","category":"page"},{"location":"explanation/generators/generic/#GenericGenerator","page":"Generic","title":"GenericGenerator","text":"","category":"section"},{"location":"explanation/generators/generic/","page":"Generic","title":"Generic","text":"We use the term generic to relate to the basic counterfactual generator proposed by Wachter, Mittelstadt, and Russell (2017) with L1-norm regularization. There is also a variant of this generator that uses the distance metric proposed in Wachter, Mittelstadt, and Russell (2017), which we call WachterGenerator.","category":"page"},{"location":"explanation/generators/generic/#Description","page":"Generic","title":"Description","text":"","category":"section"},{"location":"explanation/generators/generic/","page":"Generic","title":"Generic","text":"As the term indicates, this approach is simple: it forms the baseline approach for gradient-based counterfactual generators. Wachter, Mittelstadt, and Russell (2017) were among the first to realise that","category":"page"},{"location":"explanation/generators/generic/","page":"Generic","title":"Generic","text":"[…] explanations can, in principle, be offered without opening the “black box.”— Wachter, Mittelstadt, and Russell (2017)","category":"page"},{"location":"explanation/generators/generic/","page":"Generic","title":"Generic","text":"Gradient descent is performed directly in the feature space. Concerning the cost heuristic, the authors choose to penalize the distance of counterfactuals from the factual value. This is based on the intuitive notion that larger feature perturbations require greater effort.","category":"page"},{"location":"explanation/generators/generic/#Usage","page":"Generic","title":"Usage","text":"","category":"section"},{"location":"explanation/generators/generic/","page":"Generic","title":"Generic","text":"The approach can be used in our package as follows:","category":"page"},{"location":"explanation/generators/generic/","page":"Generic","title":"Generic","text":"generator = GenericGenerator()\nce = generate_counterfactual(x, target, counterfactual_data, M, generator)\nplot(ce)","category":"page"},{"location":"explanation/generators/generic/","page":"Generic","title":"Generic","text":"(Image: )","category":"page"},{"location":"explanation/generators/generic/#References","page":"Generic","title":"References","text":"","category":"section"},{"location":"explanation/generators/generic/","page":"Generic","title":"Generic","text":"Wachter, Sandra, Brent Mittelstadt, and Chris Russell. 2017. “Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR.” Harv. JL & Tech. 31: 841. https://doi.org/10.2139/ssrn.3063289.","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"CurrentModule = CounterfactualExplanations ","category":"page"},{"location":"tutorials/benchmarking/#Performance-Benchmarks","page":"Benchmarking Explanations","title":"Performance Benchmarks","text":"","category":"section"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"In the previous tutorial, we have seen how counterfactual explanations can be evaluated. 
An important follow-up task is to compare the performance of different counterfactual generators. Researchers can use benchmarks to test new ideas they want to implement. Practitioners can find the right counterfactual generator for their specific use case through benchmarks. In this tutorial, we will see how to run benchmarks for counterfactual generators.","category":"page"},{"location":"tutorials/benchmarking/#Post-Hoc-Benchmarking","page":"Benchmarking Explanations","title":"Post Hoc Benchmarking","text":"","category":"section"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"We begin by continuing the discussion from the previous tutorial: suppose you have generated multiple counterfactual explanations for multiple individuals, like below:","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"# Factual and target:\nn_individuals = 5\nids = rand(findall(predict_label(M, counterfactual_data) .== factual), n_individuals)\nxs = select_factual(counterfactual_data, ids)\nces = generate_counterfactual(xs, target, counterfactual_data, M, generator; num_counterfactuals=5)","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"You may be interested in comparing the outcomes across individuals. To benchmark the various counterfactual explanations using default evaluation measures, you can simply proceed as follows:","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"bmk = benchmark(ces)","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"Under the hood, benchmark(counterfactual_explanations::Vector{CounterfactualExplanation}) uses CounterfactualExplanations.Evaluation.evaluate(ce::CounterfactualExplanation) to generate a Benchmark object, which contains the evaluation in its most granular form as a DataFrame.","category":"page"},{"location":"tutorials/benchmarking/#Working-with-Benchmarks","page":"Benchmarking Explanations","title":"Working with Benchmarks","text":"","category":"section"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"For convenience, the DataFrame containing the evaluation can be returned by simply calling the Benchmark object. 
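Calling an object like a function is plain Julia: types can be made callable (so-called functors). The following is a minimal, hypothetical sketch of that pattern, not the package’s actual definition (MiniBenchmark and its field are our own names):

using DataFrames, Statistics

struct MiniBenchmark
    evaluation::DataFrame   # granular, per-counterfactual evaluation results
end

# Make instances callable: aggregate across counterfactuals unless agg=nothing.
function (bmk::MiniBenchmark)(; agg=mean)
    isnothing(agg) && return bmk.evaluation
    return combine(groupby(bmk.evaluation, [:sample, :variable]), :value => agg => :value)
end
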
By default, it returns the evaluation measures aggregated across id (in line with the default behaviour of evaluate).","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"bmk()","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"15×7 DataFrame\n Row │ sample variable value generator ⋯\n │ Base.UUID String Float64 Symbol ⋯\n─────┼──────────────────────────────────────────────────────────────────────────\n 1 │ 239104d0-f59f-11ee-3d0c-d1db071927ff distance 3.17243 GradientBase ⋯\n 2 │ 239104d0-f59f-11ee-3d0c-d1db071927ff redundancy 0.0 GradientBase\n 3 │ 239104d0-f59f-11ee-3d0c-d1db071927ff validity 1.0 GradientBase\n 4 │ 2398b3e2-f59f-11ee-3323-13d53fb7e75b distance 3.07148 GradientBase\n 5 │ 2398b3e2-f59f-11ee-3323-13d53fb7e75b redundancy 0.0 GradientBase ⋯\n 6 │ 2398b3e2-f59f-11ee-3323-13d53fb7e75b validity 1.0 GradientBase\n 7 │ 2398b916-f59f-11ee-3f13-bd00858a39af distance 3.62159 GradientBase\n 8 │ 2398b916-f59f-11ee-3f13-bd00858a39af redundancy 0.0 GradientBase\n 9 │ 2398b916-f59f-11ee-3f13-bd00858a39af validity 1.0 GradientBase ⋯\n 10 │ 2398bce8-f59f-11ee-37c1-ef7c6de27b6b distance 2.62783 GradientBase\n 11 │ 2398bce8-f59f-11ee-37c1-ef7c6de27b6b redundancy 0.0 GradientBase\n 12 │ 2398bce8-f59f-11ee-37c1-ef7c6de27b6b validity 1.0 GradientBase\n 13 │ 2398c08a-f59f-11ee-175b-81c155750752 distance 2.91985 GradientBase ⋯\n 14 │ 2398c08a-f59f-11ee-175b-81c155750752 redundancy 0.0 GradientBase\n 15 │ 2398c08a-f59f-11ee-175b-81c155750752 validity 1.0 GradientBase\n 4 columns omitted","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"To retrieve the granular dataset, simply do:","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"bmk(agg=nothing)","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"75×8 DataFrame\n Row │ sample num_counterfactual variable v ⋯\n │ Base.UUID Int64 String F ⋯\n─────┼──────────────────────────────────────────────────────────────────────────\n 1 │ 239104d0-f59f-11ee-3d0c-d1db071927ff 1 distance 3 ⋯\n 2 │ 239104d0-f59f-11ee-3d0c-d1db071927ff 2 distance 3\n 3 │ 239104d0-f59f-11ee-3d0c-d1db071927ff 3 distance 3\n 4 │ 239104d0-f59f-11ee-3d0c-d1db071927ff 4 distance 3\n 5 │ 239104d0-f59f-11ee-3d0c-d1db071927ff 5 distance 3 ⋯\n 6 │ 239104d0-f59f-11ee-3d0c-d1db071927ff 1 redundancy 0\n 7 │ 239104d0-f59f-11ee-3d0c-d1db071927ff 2 redundancy 0\n 8 │ 239104d0-f59f-11ee-3d0c-d1db071927ff 3 redundancy 0\n 9 │ 239104d0-f59f-11ee-3d0c-d1db071927ff 4 redundancy 0 ⋯\n 10 │ 239104d0-f59f-11ee-3d0c-d1db071927ff 5 redundancy 0\n 11 │ 239104d0-f59f-11ee-3d0c-d1db071927ff 1 validity 1\n ⋮ │ ⋮ ⋮ ⋮ ⋱\n 66 │ 2398c08a-f59f-11ee-175b-81c155750752 1 redundancy 0\n 67 │ 2398c08a-f59f-11ee-175b-81c155750752 2 redundancy 0 ⋯\n 68 │ 2398c08a-f59f-11ee-175b-81c155750752 3 redundancy 0\n 69 │ 2398c08a-f59f-11ee-175b-81c155750752 4 redundancy 0\n 70 │ 2398c08a-f59f-11ee-175b-81c155750752 5 redundancy 0\n 71 │ 2398c08a-f59f-11ee-175b-81c155750752 1 validity 1 ⋯\n 72 │ 2398c08a-f59f-11ee-175b-81c155750752 2 validity 1\n 73 │ 2398c08a-f59f-11ee-175b-81c155750752 3 validity 1\n 74 │ 2398c08a-f59f-11ee-175b-81c155750752 4 validity 1\n 75 │ 2398c08a-f59f-11ee-175b-81c155750752 5 validity 1 ⋯\n 5 
columns and 54 rows omitted","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"Since benchmarks return a DataFrame object on call, post-processing is straightforward. For example, we could use Tidier.jl:","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"using Tidier\n@chain bmk() begin\n @filter(variable == \"distance\")\n @select(sample, variable, value)\nend","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"5×3 DataFrame\n Row │ sample variable value \n │ Base.UUID String Float64 \n─────┼─────────────────────────────────────────────────────────\n 1 │ 239104d0-f59f-11ee-3d0c-d1db071927ff distance 3.17243\n 2 │ 2398b3e2-f59f-11ee-3323-13d53fb7e75b distance 3.07148\n 3 │ 2398b916-f59f-11ee-3f13-bd00858a39af distance 3.62159\n 4 │ 2398bce8-f59f-11ee-37c1-ef7c6de27b6b distance 2.62783\n 5 │ 2398c08a-f59f-11ee-175b-81c155750752 distance 2.91985","category":"page"},{"location":"tutorials/benchmarking/#Metadata-for-Counterfactual-Explanations","page":"Benchmarking Explanations","title":"Metadata for Counterfactual Explanations","text":"","category":"section"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"Benchmarks always report metadata for each counterfactual explanation, which is automatically inferred by default. The default metadata concerns the explained model and the employed generator. In the current example, we used the same model and generator for each individual:","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"@chain bmk() begin\n @group_by(sample)\n @select(sample, model, generator)\n @summarize(model=first(model),generator=first(generator))\n @ungroup\nend","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"5×3 DataFrame\n Row │ sample model ⋯\n │ Base.UUID Symbol ⋯\n─────┼──────────────────────────────────────────────────────────────────────────\n 1 │ 239104d0-f59f-11ee-3d0c-d1db071927ff FluxModel(Chain(Dense(2 => 2)), … ⋯\n 2 │ 2398b3e2-f59f-11ee-3323-13d53fb7e75b FluxModel(Chain(Dense(2 => 2)), …\n 3 │ 2398b916-f59f-11ee-3f13-bd00858a39af FluxModel(Chain(Dense(2 => 2)), …\n 4 │ 2398bce8-f59f-11ee-37c1-ef7c6de27b6b FluxModel(Chain(Dense(2 => 2)), …\n 5 │ 2398c08a-f59f-11ee-175b-81c155750752 FluxModel(Chain(Dense(2 => 2)), … ⋯\n 1 column omitted","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"Metadata can also be provided as an optional key argument.","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"meta_data = Dict(\n :generator => \"Generic\",\n :model => \"MLP\",\n)\nmeta_data = [meta_data for i in 1:length(ces)]\nbmk = benchmark(ces; meta_data=meta_data)\n@chain bmk() begin\n @group_by(sample)\n @select(sample, model, generator)\n @summarize(model=first(model),generator=first(generator))\n @ungroup\nend","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"5×3 DataFrame\n Row │ sample model generator \n │ Base.UUID String String 
\n─────┼─────────────────────────────────────────────────────────\n 1 │ 27fae496-f59f-11ee-2c30-f35d1025a6d4 MLP Generic\n 2 │ 27fdcc6a-f59f-11ee-030b-152c9794c5f1 MLP Generic\n 3 │ 27fdd04a-f59f-11ee-2010-e1732ff5d8d2 MLP Generic\n 4 │ 27fdd340-f59f-11ee-1d20-050a69dcacef MLP Generic\n 5 │ 27fdd5fc-f59f-11ee-02e8-d198e436abb3 MLP Generic","category":"page"},{"location":"tutorials/benchmarking/#Ad-Hoc-Benchmarking","page":"Benchmarking Explanations","title":"Ad Hoc Benchmarking","text":"","category":"section"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"So far we have assumed the following workflow:","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"Fit some machine learning model.\nGenerate counterfactual explanations for some individual(s) (generate_counterfactual).\nEvaluate and benchmark them (benchmark(ces::Vector{CounterfactualExplanation})).","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"In many cases, it may be preferable to combine these steps. To this end, we have added support for two scenarios of Ad Hoc Benchmarking.","category":"page"},{"location":"tutorials/benchmarking/#Pre-trained-Models","page":"Benchmarking Explanations","title":"Pre-trained Models","text":"","category":"section"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"In the first scenario, it is assumed that the machine learning models have been pre-trained and so the workflow can be summarized as follows:","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"Fit some machine learning model(s).\nGenerate counterfactual explanations and benchmark them.","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"We suspect that this is the most common workflow for practitioners who are interested in benchmarking counterfactual explanations for pre-trained machine learning models. Let’s go through this workflow using a simple example. 
We first train some models and store them in a dictionary:","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"models = Dict(\n :MLP => fit_model(counterfactual_data, :MLP),\n :Linear => fit_model(counterfactual_data, :Linear),\n)","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"Next, we store the counterfactual generators of interest in a dictionary as well:","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"generators = Dict(\n :Generic => GenericGenerator(),\n :Gravitational => GravitationalGenerator(),\n :Wachter => WachterGenerator(),\n :ClaPROAR => ClaPROARGenerator(),\n)","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"Then we can run a benchmark for individual(s) x, a pre-specified target and counterfactual_data as follows:","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"bmk = benchmark(x, target, counterfactual_data; models=models, generators=generators)","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"In this case, metadata is automatically inferred from the dictionaries:","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"@chain bmk() begin\n @filter(variable == \"distance\")\n @select(sample, variable, value, model, generator)\nend","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"8×5 DataFrame\n Row │ sample variable value model ⋯\n │ Base.UUID String Float64 Tuple… ⋯\n─────┼──────────────────────────────────────────────────────────────────────────\n 1 │ 2cba5eee-f59f-11ee-1844-cbc7a8372a38 distance 4.38877 (:Linear, Flux ⋯\n 2 │ 2cd740fe-f59f-11ee-35c3-1157eb1b7583 distance 4.17021 (:Linear, Flux\n 3 │ 2cd741e2-f59f-11ee-2b09-0d55ef9892b9 distance 4.31145 (:Linear, Flux\n 4 │ 2cd7420c-f59f-11ee-1996-6fa75e23bb57 distance 4.17035 (:Linear, Flux\n 5 │ 2cd74234-f59f-11ee-0ad0-9f21949f5932 distance 5.73182 (:MLP, FluxMod ⋯\n 6 │ 2cd7425c-f59f-11ee-3eb4-af34f85ffd3d distance 5.50606 (:MLP, FluxMod\n 7 │ 2cd7427a-f59f-11ee-10d3-a1df6c8dc125 distance 5.2114 (:MLP, FluxMod\n 8 │ 2cd74298-f59f-11ee-32d1-f501c104fea8 distance 5.3623 (:MLP, FluxMod\n 2 columns omitted","category":"page"},{"location":"tutorials/benchmarking/#Everything-at-once","page":"Benchmarking Explanations","title":"Everything at once","text":"","category":"section"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"Researchers, in particular, may be interested in combining all steps into one. 
This is the second scenario of Ad Hoc Benchmarking:","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"Fit some machine learning model(s), generate counterfactual explanations and benchmark them.","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"It involves calling benchmark directly on counterfactual data (the only positional argument):","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"bmk = benchmark(counterfactual_data)","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"This will use the default models from standard_models_catalogue and train them on the data. All available generators from generator_catalogue will also be used:","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"@chain bmk() begin\n @filter(variable == \"validity\")\n @select(sample, variable, value, model, generator)\nend","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"200×5 DataFrame\n Row │ sample variable value model genera ⋯\n │ Base.UUID String Float64 Symbol Symbol ⋯\n─────┼──────────────────────────────────────────────────────────────────────────\n 1 │ 32d1817e-f59f-11ee-152f-a30b18c2e6f7 validity 1.0 Linear gravit ⋯\n 2 │ 32d1817e-f59f-11ee-152f-a30b18c2e6f7 validity 1.0 Linear growin\n 3 │ 32d1817e-f59f-11ee-152f-a30b18c2e6f7 validity 1.0 Linear revise\n 4 │ 32d1817e-f59f-11ee-152f-a30b18c2e6f7 validity 1.0 Linear clue\n 5 │ 32d1817e-f59f-11ee-152f-a30b18c2e6f7 validity 1.0 Linear probe ⋯\n 6 │ 32d1817e-f59f-11ee-152f-a30b18c2e6f7 validity 1.0 Linear dice\n 7 │ 32d1817e-f59f-11ee-152f-a30b18c2e6f7 validity 1.0 Linear clapro\n 8 │ 32d1817e-f59f-11ee-152f-a30b18c2e6f7 validity 1.0 Linear wachte\n 9 │ 32d1817e-f59f-11ee-152f-a30b18c2e6f7 validity 1.0 Linear generi ⋯\n 10 │ 32d1817e-f59f-11ee-152f-a30b18c2e6f7 validity 1.0 Linear greedy\n 11 │ 32d255e8-f59f-11ee-3e8d-a9e9f6e23ea8 validity 1.0 Linear gravit\n ⋮ │ ⋮ ⋮ ⋮ ⋮ ⋱\n 191 │ 3382d08a-f59f-11ee-10b3-f7d18cf7d3b5 validity 1.0 MLP gravit\n 192 │ 3382d08a-f59f-11ee-10b3-f7d18cf7d3b5 validity 1.0 MLP growin ⋯\n 193 │ 3382d08a-f59f-11ee-10b3-f7d18cf7d3b5 validity 1.0 MLP revise\n 194 │ 3382d08a-f59f-11ee-10b3-f7d18cf7d3b5 validity 1.0 MLP clue\n 195 │ 3382d08a-f59f-11ee-10b3-f7d18cf7d3b5 validity 1.0 MLP probe\n 196 │ 3382d08a-f59f-11ee-10b3-f7d18cf7d3b5 validity 1.0 MLP dice ⋯\n 197 │ 3382d08a-f59f-11ee-10b3-f7d18cf7d3b5 validity 1.0 MLP clapro\n 198 │ 3382d08a-f59f-11ee-10b3-f7d18cf7d3b5 validity 1.0 MLP wachte\n 199 │ 3382d08a-f59f-11ee-10b3-f7d18cf7d3b5 validity 1.0 MLP generi\n 200 │ 3382d08a-f59f-11ee-10b3-f7d18cf7d3b5 validity 1.0 MLP greedy ⋯\n 1 column and 179 rows omitted","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"Optionally, you can instead provide a dictionary of models and generators as before. 
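For instance, reusing the models and generators dictionaries from the pre-trained scenario above, the call becomes (same keyword interface as in the multiple-datasets example further below):

bmk = benchmark(counterfactual_data; models=models, generators=generators)
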
Each value in the models dictionary should be one of two things:","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"An object M of type AbstractModel that implements the Models.train method.\nA DataType that can be called on CounterfactualData to create such an object M.","category":"page"},{"location":"tutorials/benchmarking/#Multiple-Datasets","page":"Benchmarking Explanations","title":"Multiple Datasets","text":"","category":"section"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"Benchmarks are run on single instances of type CounterfactualData. This is our design choice for two reasons:","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"We want to avoid the loops inside the benchmark method(s) getting too nested and convoluted.\nWhile it is straightforward to infer metadata for models and generators, this is not the case for datasets.","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"Fortunately, it is very easy to run benchmarks for multiple datasets anyway, since Benchmark instances can be concatenated. To see how, let’s consider an example involving multiple datasets, models and generators:","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"# Data:\ndatasets = Dict(\n :moons => CounterfactualData(load_moons()...),\n :circles => CounterfactualData(load_circles()...),\n)\n\n# Models:\nmodels = Dict(\n :MLP => FluxModel,\n :Linear => Linear,\n)\n\n# Generators:\ngenerators = Dict(\n :Generic => GenericGenerator(),\n :Greedy => GreedyGenerator(),\n)","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"Then we can simply loop over the datasets and eventually concatenate the results like so:","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"using CounterfactualExplanations.Evaluation: distance_measures\nbmks = []\nfor (dataname, dataset) in datasets\n bmk = benchmark(dataset; models=models, generators=generators, measure=distance_measures)\n push!(bmks, bmk)\nend\nbmk = vcat(bmks[1], bmks[2]; ids=collect(keys(datasets)))","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"When ids are supplied, a new id column is added to the evaluation data frame that contains unique identifiers for the different benchmarks. 
The optional idcol_name argument can be used to specify the name for that indicator column (defaults to \"dataset\"):","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"@chain bmk() begin\n @group_by(dataset, generator)\n @filter(model == :MLP)\n @filter(variable == \"distance_l1\")\n @summarize(L1_norm=mean(value))\n @ungroup\nend","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"4×3 DataFrame\n Row │ dataset generator L1_norm \n │ Symbol Symbol Float32 \n─────┼──────────────────────────────\n 1 │ moons Generic 1.56555\n 2 │ moons Greedy 0.819269\n 3 │ circles Generic 1.83524\n 4 │ circles Greedy 0.498953","category":"page"},{"location":"tutorials/models/","page":"Handling Models","title":"Handling Models","text":"CurrentModule = CounterfactualExplanations ","category":"page"},{"location":"tutorials/models/#Handling-Models","page":"Handling Models","title":"Handling Models","text":"","category":"section"},{"location":"tutorials/models/","page":"Handling Models","title":"Handling Models","text":"The typical use-case for Counterfactual Explanations and Algorithmic Recourse is as follows: users have trained some supervised model that is not inherently interpretable and are looking for a way to explain it. In this tutorial, we will see how pre-trained models can be used with this package.","category":"page"},{"location":"tutorials/models/#Models-trained-in-Flux.jl","page":"Handling Models","title":"Models trained in Flux.jl","text":"","category":"section"},{"location":"tutorials/models/","page":"Handling Models","title":"Handling Models","text":"We will train a simple binary classifier in Flux.jl on the popular Moons dataset:","category":"page"},{"location":"tutorials/models/","page":"Handling Models","title":"Handling Models","text":"n = 500\ndata = TaijaData.load_moons(n)\ncounterfactual_data = DataPreprocessing.CounterfactualData(data...)\nX = counterfactual_data.X\ny = counterfactual_data.y\nplt = plot()\nscatter!(counterfactual_data)","category":"page"},{"location":"tutorials/models/","page":"Handling Models","title":"Handling Models","text":"(Image: )","category":"page"},{"location":"tutorials/models/","page":"Handling Models","title":"Handling Models","text":"The following code chunk sets up a Deep Neural Network for the task at hand:","category":"page"},{"location":"tutorials/models/","page":"Handling Models","title":"Handling Models","text":"data = Flux.DataLoader((X,y),batchsize=1)\ninput_dim = size(X,1)\nn_hidden = 32\nactivation = relu\noutput_dim = 2\nnn = Chain(\n Dense(input_dim, n_hidden, activation),\n Dropout(0.1),\n Dense(n_hidden, output_dim)\n)\nloss(x, y) = Flux.Losses.logitcrossentropy(nn(x), y)","category":"page"},{"location":"tutorials/models/","page":"Handling Models","title":"Handling Models","text":"Next, we fit the network to the data:","category":"page"},{"location":"tutorials/models/","page":"Handling Models","title":"Handling Models","text":"using Flux.Optimise: update!, Adam\nopt = Adam()\nepochs = 100\navg_loss(data) = mean(map(d -> loss(d[1],d[2]), data))\nshow_every = epochs/5\n# Training:\nfor epoch = 1:epochs\n for d in data\n gs = gradient(Flux.params(nn)) do\n l = loss(d...)\n end\n update!(opt, Flux.params(nn), gs)\n end\n if epoch % show_every == 0\n println(\"Epoch \" * string(epoch))\n @show avg_loss(data)\n 
end\nend","category":"page"},{"location":"tutorials/models/","page":"Handling Models","title":"Handling Models","text":"Epoch 20\navg_loss(data) = 0.1407434f0\nEpoch 40\navg_loss(data) = 0.11345118f0\nEpoch 60\navg_loss(data) = 0.046319224f0\nEpoch 80\navg_loss(data) = 0.011847609f0\nEpoch 100\navg_loss(data) = 0.007242911f0","category":"page"},{"location":"tutorials/models/","page":"Handling Models","title":"Handling Models","text":"To prepare the fitted model for use with our package, we need to wrap it inside a container. For plain-vanilla models trained in Flux.jl, the corresponding constructor is called MLP. There is also a separate constructor called DeepEnsemble, which applies to Deep Ensembles. Deep Ensembles are a popular approach to approximate Bayesian Deep Learning and have been shown to generate good predictive uncertainty estimates (Lakshminarayanan, Pritzel, and Blundell 2017).","category":"page"},{"location":"tutorials/models/","page":"Handling Models","title":"Handling Models","text":"The appropriate API call to wrap our simple network in a container follows below:","category":"page"},{"location":"tutorials/models/","page":"Handling Models","title":"Handling Models","text":"M = MLP(nn)","category":"page"},{"location":"tutorials/models/","page":"Handling Models","title":"Handling Models","text":"CounterfactualExplanations.Models.Model(Chain(Dense(2 => 32, relu), Dropout(0.1, active=false), Dense(32 => 2)), :classification_binary, Chain(Dense(2 => 32, relu), Dropout(0.1, active=false), Dense(32 => 2)), MLP())","category":"page"},{"location":"tutorials/models/","page":"Handling Models","title":"Handling Models","text":"The likelihood function of the output variable is automatically inferred from the data. The generic plot() method can be called on the model and data to visualise the results:","category":"page"},{"location":"tutorials/models/","page":"Handling Models","title":"Handling Models","text":"plot(M, counterfactual_data)","category":"page"},{"location":"tutorials/models/","page":"Handling Models","title":"Handling Models","text":"(Image: )","category":"page"},{"location":"tutorials/models/","page":"Handling Models","title":"Handling Models","text":"Our model M is now ready for use with the package.","category":"page"},{"location":"tutorials/models/#References","page":"Handling Models","title":"References","text":"","category":"section"},{"location":"tutorials/models/","page":"Handling Models","title":"Handling Models","text":"Lakshminarayanan, Balaji, Alexander Pritzel, and Charles Blundell. 2017. 
“Simple and Scalable Predictive Uncertainty Estimation Using Deep Ensembles.” Advances in Neural Information Processing Systems 30.","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"CurrentModule = CounterfactualExplanations","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"(Image: )","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"Documentation for CounterfactualExplanations.jl.","category":"page"},{"location":"#CounterfactualExplanations","page":"🏠 Home","title":"CounterfactualExplanations","text":"","category":"section"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"Counterfactual Explanations and Algorithmic Recourse in Julia.","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"(Image: Stable) (Image: Dev) (Image: Build Status) (Image: Coverage) (Image: Code Style: Blue) (Image: License) (Image: Package Downloads) (Image: Aqua QA)","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"CounterfactualExplanations.jl is a package for generating Counterfactual Explanations (CE) and Algorithmic Recourse (AR) for black-box algorithms. Both CE and AR are related tools for explainable artificial intelligence (XAI). While the package is written purely in Julia, it can be used to explain machine learning algorithms developed and trained in other popular programming languages like Python and R. See below for a short introduction and other resources or dive straight into the docs.","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"There is also a corresponding paper, Explaining Black-Box Models through Counterfactuals, which has been published in JuliaCon Proceedings. Please consider citing the paper, if you use this package in your work:","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"(Image: DOI) (Image: DOI)","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"@article{Altmeyer2023,\n doi = {10.21105/jcon.00130},\n url = {https://doi.org/10.21105/jcon.00130},\n year = {2023},\n publisher = {The Open Journal},\n volume = {1},\n number = {1},\n pages = {130},\n author = {Patrick Altmeyer and Arie van Deursen and Cynthia C. S. Liem},\n title = {Explaining Black-Box Models through Counterfactuals},\n journal = {Proceedings of the JuliaCon Conferences}\n}","category":"page"},{"location":"#Installation","page":"🏠 Home","title":"🚩 Installation","text":"","category":"section"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"You can install the stable release from Julia’s General Registry as follows:","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"using Pkg\nPkg.add(\"CounterfactualExplanations\")","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"CounterfactualExplanations.jl is under active development. To install the development version of the package you can run the following command:","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"using Pkg\nPkg.add(url=\"https://github.com/juliatrustworthyai/CounterfactualExplanations.jl\")","category":"page"},{"location":"#Background-and-Motivation","page":"🏠 Home","title":"🤔 Background and Motivation","text":"","category":"section"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"Machine learning models like Deep Neural Networks have become so complex, opaque and underspecified in the data that they are generally considered Black Boxes. 
Nonetheless, such models often play a key role in data-driven decision-making systems. This creates the following problem: human operators in charge of such systems have to rely on them blindly, while those individuals subject to them generally have no way of challenging an undesirable outcome:","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"“You cannot appeal to (algorithms). They do not listen. Nor do they bend.”— Cathy O’Neil in Weapons of Math Destruction, 2016","category":"page"},{"location":"#Enter:-Counterfactual-Explanations","page":"🏠 Home","title":"🔮 Enter: Counterfactual Explanations","text":"","category":"section"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"Counterfactual Explanations can help human stakeholders make sense of the systems they develop, use or endure: they explain how inputs into a system need to change for it to produce different decisions. Explainability benefits internal as well as external quality assurance.","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"Counterfactual Explanations have a few properties that are desirable in the context of Explainable Artificial Intelligence (XAI). These include:","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"Full fidelity to the black-box model, since no proxy is involved.\nNo need for (reasonably) interpretable features as opposed to LIME and SHAP.\nClear link to Algorithmic Recourse and Causal Inference.\nLess susceptible to adversarial attacks than LIME and SHAP.","category":"page"},{"location":"#Simple-Usage-Example","page":"🏠 Home","title":"Simple Usage Example","text":"","category":"section"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"To get started, try out this simple usage example with synthetic data:","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"using CounterfactualExplanations\nusing CounterfactualExplanations.Models\nusing Plots\nusing TaijaData\nusing TaijaPlotting\n\n# Data and Model:\ndata = load_linearly_separable()\ncounterfactual_data = CounterfactualData(data...)\nM = fit_model(counterfactual_data, :Linear)\n\n# Choose factual:\ntarget = 2\nfactual = 1\nchosen = findall(predict_label(M, counterfactual_data) .== factual) |>\n rand\nx = select_factual(counterfactual_data, chosen)\n\n# Generate counterfactuals\ngenerator = WachterGenerator()\nce = generate_counterfactual(\n x, # factual\n target, # target\n counterfactual_data, # data\n M, # model\n generator # counterfactual generator\n)\nplot(ce)","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"(Image: )","category":"page"},{"location":"#Example:-Give-Me-Some-Credit","page":"🏠 Home","title":"Example: Give Me Some Credit","text":"","category":"section"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"Consider the following real-world scenario: a retail bank is using a black-box model trained on their clients’ credit history to decide whether they will provide credit to new applicants. 
To simulate this scenario, we have pre-trained a binary classifier on the publicly available Give Me Some Credit dataset that ships with this package (Kaggle 2011).","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"The figure below shows counterfactuals for 10 randomly chosen individuals that would have been denied credit initially.","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"(Image: )","category":"page"},{"location":"#Example:-MNIST","page":"🏠 Home","title":"Example: MNIST","text":"","category":"section"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"The figure below shows a counterfactual generated for an image classifier trained on MNIST: in particular, it demonstrates which pixels need to change in order for the classifier to predict 3 instead of 8.","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"Since v0.1.9 counterfactual generators are fully composable. Here we have composed a generator that combines ideas from Wachter, Mittelstadt, and Russell (2017) and Altmeyer et al. (2023):","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"# Compose generator:\nusing CounterfactualExplanations.Objectives: distance_mad, distance_from_target\ngenerator = GradientBasedGenerator()\n@chain generator begin\n    @objective logitcrossentropy + 0.2distance_mad + 0.1distance_from_target\n    @with_optimiser Adam(0.1) \nend","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"(Image: )","category":"page"},{"location":"#Usage-example","page":"🏠 Home","title":"🔍 Usage example","text":"","category":"section"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"Generating counterfactuals will typically look as follows. Below we first fit a simple model to a synthetic dataset with linearly separable features and then draw a random sample:","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"# Data and Classifier:\ncounterfactual_data = CounterfactualData(load_linearly_separable()...)\nM = fit_model(counterfactual_data, :Linear)\n\n# Select random sample:\ntarget = 2\nfactual = 1\nchosen = rand(findall(predict_label(M, counterfactual_data) .== factual))\nx = select_factual(counterfactual_data, chosen)","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"Next, we specify a counterfactual generator of our choice:","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"# Counterfactual search:\ngenerator = DiCEGenerator(λ=[0.1,0.3])","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"Here, we have chosen to use the DiCEGenerator, a gradient-based generator, to move the individual from its factual label 1 to the target label 2.","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"With all of our ingredients specified, we finally generate counterfactuals using a simple API call:","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"conv = CounterfactualExplanations.Convergence.GeneratorConditionsConvergence()\nce = generate_counterfactual(\n    x, target, counterfactual_data, M, generator; \n    num_counterfactuals=3, convergence=conv,\n)","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"The plot below shows the resulting counterfactual path:","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"(Image: )","category":"page"},{"location":"#Implemented-Counterfactual-Generators","page":"🏠 Home","title":"☑️ 
Implemented Counterfactual Generators","text":"","category":"section"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"Currently, the following counterfactual generators are implemented:","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"ClaPROAR (Altmeyer et al. 2023)\nCLUE (Antorán et al. 2020)\nDiCE (Mothilal, Sharma, and Tan 2020)\nECCCo (Altmeyer et al. 2024)\nFeatureTweak (Tolomei et al. 2017)\nGeneric\nGravitationalGenerator (Altmeyer et al. 2023)\nGreedy (Schut et al. 2021)\nGrowingSpheres (Laugel et al. 2017)\nMINT (Karimi et al. 2020) (causal CE)\nPROBE (Pawelczyk et al. 2023)\nREVISE (Joshi et al. 2019)\nT-CREx (Bewley et al. 2024) (global CE)\nWachter (Wachter, Mittelstadt, and Russell 2017)","category":"page"},{"location":"#Goals-and-limitations","page":"🏠 Home","title":"🎯 Goals and limitations","text":"","category":"section"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"The goal of this library is to contribute to efforts towards trustworthy machine learning in Julia. The Julia language has an edge when it comes to trustworthiness: it is very transparent. Packages like this one are generally written in pure Julia, which makes it easy for users and developers to understand and contribute to open-source code. Eventually, this project aims to offer a one-stop-shop of counterfactual explanations.","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"Our ambition is to enhance the package through the following features:","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"Support for all supervised machine learning models trained in MLJ.jl.\nSupport for regression models.","category":"page"},{"location":"#Contribute","page":"🏠 Home","title":"🛠 Contribute","text":"","category":"section"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"Contributions of any kind are very much welcome! Take a look at the issue tracker to see what we are currently working on. If you have an idea for a new feature or want to report a bug, please open a new issue.","category":"page"},{"location":"#Development","page":"🏠 Home","title":"Development","text":"","category":"section"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"If you're looking to contribute code, it may be helpful to check out the Explanation section of the docs.","category":"page"},{"location":"#Testing","page":"🏠 Home","title":"Testing","text":"","category":"section"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"Please always make sure to add tests for any new features or changes.","category":"page"},{"location":"#Documentation","page":"🏠 Home","title":"Documentation","text":"","category":"section"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"If you add new features or change existing ones, please make sure to update the documentation accordingly. The documentation is written in Documenter.jl and is located in the docs/src folder.","category":"page"},{"location":"#Log-Changes","page":"🏠 Home","title":"Log Changes","text":"","category":"section"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"As of version 1.1.1, we have tried to be more stringent about logging changes. Please make sure to add a note to the CHANGELOG.md file for any changes you make. 
It is sufficient to add a note under the Unreleased section.","category":"page"},{"location":"#General-Pointers","page":"🏠 Home","title":"General Pointers","text":"","category":"section"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"There are also some general pointers for people looking to contribute to any of our Taija packages here.","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"Please follow the SciML ColPrac guide.","category":"page"},{"location":"#Citation","page":"🏠 Home","title":"🎓 Citation","text":"","category":"section"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"If you want to use this codebase, please consider citing the corresponding paper:","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"@article{Altmeyer2023,\n  doi = {10.21105/jcon.00130},\n  url = {https://doi.org/10.21105/jcon.00130},\n  year = {2023},\n  publisher = {The Open Journal},\n  volume = {1},\n  number = {1},\n  pages = {130},\n  author = {Patrick Altmeyer and Arie van Deursen and Cynthia C. S. Liem},\n  title = {Explaining Black-Box Models through Counterfactuals},\n  journal = {Proceedings of the JuliaCon Conferences}\n}","category":"page"},{"location":"#References","page":"🏠 Home","title":"📚 References","text":"","category":"section"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"Altmeyer, Patrick, Giovan Angela, Aleksander Buszydlik, Karol Dobiczek, Arie van Deursen, and Cynthia CS Liem. 2023. “Endogenous Macrodynamics in Algorithmic Recourse.” In 2023 IEEE Conference on Secure and Trustworthy Machine Learning (SaTML), 418–31. IEEE.","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"Altmeyer, Patrick, Mojtaba Farmanbar, Arie van Deursen, and Cynthia CS Liem. 2024. “Faithful Model Explanations Through Energy-Constrained Conformal Counterfactuals.” In Proceedings of the AAAI Conference on Artificial Intelligence, 38:10829–37. 10.","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"Antorán, Javier, Umang Bhatt, Tameem Adel, Adrian Weller, and José Miguel Hernández-Lobato. 2020. “Getting a Clue: A Method for Explaining Uncertainty Estimates.” https://arxiv.org/abs/2006.06848.","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"Bewley, Tom, Salim I. Amoukou, Saumitra Mishra, Daniele Magazzeni, and Manuela Veloso. 2024. “Counterfactual Metarules for Local and Global Recourse.” https://arxiv.org/abs/2405.18875.","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"Joshi, Shalmali, Oluwasanmi Koyejo, Warut Vijitbenjaronk, Been Kim, and Joydeep Ghosh. 2019. “Towards Realistic Individual Recourse and Actionable Explanations in Black-Box Decision Making Systems.” https://arxiv.org/abs/1907.09615.","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"Kaggle. 2011. “Give Me Some Credit, Improve on the State of the Art in Credit Scoring by Predicting the Probability That Somebody Will Experience Financial Distress in the Next Two Years.” https://www.kaggle.com/c/GiveMeSomeCredit; Kaggle. https://www.kaggle.com/c/GiveMeSomeCredit.","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"Karimi, Amir-Hossein, Julius Von Kügelgen, Bernhard Schölkopf, and Isabel Valera. 2020. 
“Algorithmic Recourse Under Imperfect Causal Knowledge: A Probabilistic Approach.” https://arxiv.org/abs/2006.06831.","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"Laugel, Thibault, Marie-Jeanne Lesot, Christophe Marsala, Xavier Renard, and Marcin Detyniecki. 2017. “Inverse Classification for Comparison-Based Interpretability in Machine Learning.” https://arxiv.org/abs/1712.08443.","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"Mothilal, Ramaravind K, Amit Sharma, and Chenhao Tan. 2020. “Explaining Machine Learning Classifiers Through Diverse Counterfactual Explanations.” In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 607–17. https://doi.org/10.1145/3351095.3372850.","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"Pawelczyk, Martin, Teresa Datta, Johannes van-den-Heuvel, Gjergji Kasneci, and Himabindu Lakkaraju. 2023. “Probabilistically Robust Recourse: Navigating the Trade-Offs Between Costs and Robustness in Algorithmic Recourse.” https://arxiv.org/abs/2203.06768.","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"Schut, Lisa, Oscar Key, Rory Mc Grath, Luca Costabello, Bogdan Sacaleanu, Yarin Gal, et al. 2021. “Generating Interpretable Counterfactual Explanations By Implicit Minimisation of Epistemic and Aleatoric Uncertainties.” In International Conference on Artificial Intelligence and Statistics, 1756–64. PMLR.","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"Tolomei, Gabriele, Fabrizio Silvestri, Andrew Haines, and Mounia Lalmas. 2017. “Interpretable Predictions of Tree-Based Ensembles via Actionable Feature Tweaking.” In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 465–74. https://doi.org/10.1145/3097983.3098039.","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"Wachter, Sandra, Brent Mittelstadt, and Chris Russell. 2017. “Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR.” Harv. JL & Tech. 31: 841. https://doi.org/10.2139/ssrn.3063289.","category":"page"},{"location":"tutorials/","page":"Overview","title":"Overview","text":"CurrentModule = CounterfactualExplanations","category":"page"},{"location":"tutorials/#Tutorials","page":"Overview","title":"Tutorials","text":"","category":"section"},{"location":"tutorials/","page":"Overview","title":"Overview","text":"In this section, you will find a series of tutorials that should help you gain a basic understanding of Counterfactual Explanations and how to apply them in Julia using this package.","category":"page"},{"location":"tutorials/","page":"Overview","title":"Overview","text":"Tutorials are lessons that take the reader by the hand through a series of steps to complete a project of some kind. 
Tutorials are learning-oriented.— Diátaxis","category":"page"},{"location":"tutorials/","page":"Overview","title":"Overview","text":"In other words, you come here because you are new to this topic and are looking for a first peek at the methodology and code 🫣.","category":"page"},{"location":"explanation/generators/growing_spheres/","page":"GrowingSpheres","title":"GrowingSpheres","text":"CurrentModule = CounterfactualExplanations","category":"page"},{"location":"explanation/generators/growing_spheres/#GrowingSpheres","page":"GrowingSpheres","title":"GrowingSpheres","text":"","category":"section"},{"location":"explanation/generators/growing_spheres/","page":"GrowingSpheres","title":"GrowingSpheres","text":"Growing Spheres refers to the generator introduced by Laugel et al. (2017). Our implementation takes inspiration from the CARLA library.","category":"page"},{"location":"explanation/generators/growing_spheres/#Principle-of-the-Proposed-Approach","page":"GrowingSpheres","title":"Principle of the Proposed Approach","text":"","category":"section"},{"location":"explanation/generators/growing_spheres/","page":"GrowingSpheres","title":"GrowingSpheres","text":"In order to interpret a prediction through comparison, the Growing Spheres algorithm focuses on finding an observation belonging to the other class and answers the question: “Considering an observation and a classifier, what is the minimal change we need to apply in order to change the prediction of this observation?”. This problem is similar to inverse classification but applied to interpretability.","category":"page"},{"location":"explanation/generators/growing_spheres/","page":"GrowingSpheres","title":"GrowingSpheres","text":"Explaining how to change a prediction can help the user understand what the model considers as locally important. The Growing Spheres approach provides insights into the classifier’s behavior without claiming any causal knowledge. It differs from other interpretability approaches and is not concerned with the global behavior of the model. Instead, it aims to provide local insights into the classifier’s decision-making process.","category":"page"},{"location":"explanation/generators/growing_spheres/","page":"GrowingSpheres","title":"GrowingSpheres","text":"The algorithm finds the closest “ennemy” observation, which is an observation classified into a different class than the input observation. 
The final explanation is the difference vector between the input observation and the ennemy.","category":"page"},{"location":"explanation/generators/growing_spheres/#Finding-the-Closest-Ennemy","page":"GrowingSpheres","title":"Finding the Closest Ennemy","text":"","category":"section"},{"location":"explanation/generators/growing_spheres/","page":"GrowingSpheres","title":"GrowingSpheres","text":"The algorithm solves the following minimization problem to find the closest ennemy:","category":"page"},{"location":"explanation/generators/growing_spheres/","page":"GrowingSpheres","title":"GrowingSpheres","text":"e^* = \\arg\\min_{e \\in X} \\{ c(x, e) \\mid f(e) \\neq f(x) \\}","category":"page"},{"location":"explanation/generators/growing_spheres/","page":"GrowingSpheres","title":"GrowingSpheres","text":"The cost function c(x, e) is defined as:","category":"page"},{"location":"explanation/generators/growing_spheres/","page":"GrowingSpheres","title":"GrowingSpheres","text":"c(x, e) = \\|x - e\\|_2 + \\gamma \\|x - e\\|_0","category":"page"},{"location":"explanation/generators/growing_spheres/","page":"GrowingSpheres","title":"GrowingSpheres","text":"where \\|\\cdot\\|_2 is the Euclidean norm and \\|\\cdot\\|_0 is the sparsity measure. The weight \\gamma balances the importance of sparsity in the cost function.","category":"page"},{"location":"explanation/generators/growing_spheres/","page":"GrowingSpheres","title":"GrowingSpheres","text":"To approximate the solution, the Growing Spheres algorithm uses a two-step heuristic approach. The first step is the Generation phase, where observations are generated in spherical layers around the input observation. The second step is the Feature Selection phase, where the generated observation with the smallest change in each feature is selected.","category":"page"},{"location":"explanation/generators/growing_spheres/#Example","page":"GrowingSpheres","title":"Example","text":"","category":"section"},{"location":"explanation/generators/growing_spheres/","page":"GrowingSpheres","title":"GrowingSpheres","text":"generator = GrowingSpheresGenerator()\nM = fit_model(counterfactual_data, :DeepEnsemble)\nce = generate_counterfactual(\n    x, target, counterfactual_data, M, generator)\nplot(ce)","category":"page"},{"location":"explanation/generators/growing_spheres/","page":"GrowingSpheres","title":"GrowingSpheres","text":"(Image: )","category":"page"},{"location":"explanation/generators/growing_spheres/#References","page":"GrowingSpheres","title":"References","text":"","category":"section"},{"location":"explanation/generators/growing_spheres/","page":"GrowingSpheres","title":"GrowingSpheres","text":"Laugel, Thibault, Marie-Jeanne Lesot, Christophe Marsala, Xavier Renard, and Marcin Detyniecki. 2017. “Inverse Classification for Comparison-Based Interpretability in Machine Learning.” arXiv. https://doi.org/10.48550/arXiv.1712.08443.","category":"page"},{"location":"contribute/#Contribute","page":"🛠 Contribute","title":"🛠 Contribute","text":"","category":"section"},{"location":"contribute/","page":"🛠 Contribute","title":"🛠 Contribute","text":"Contributions of any kind are very much welcome! Take a look at the issue tracker to see what we are currently working on. 
If you have an idea for a new feature or want to report a bug, please open a new issue.","category":"page"},{"location":"contribute/#Development","page":"🛠 Contribute","title":"Development","text":"","category":"section"},{"location":"contribute/","page":"🛠 Contribute","title":"🛠 Contribute","text":"If you're looking to contribute code, it may be helpful to check out the Explanation section of the docs.","category":"page"},{"location":"contribute/#Testing","page":"🛠 Contribute","title":"Testing","text":"","category":"section"},{"location":"contribute/","page":"🛠 Contribute","title":"🛠 Contribute","text":"Please always make sure to add tests for any new features or changes.","category":"page"},{"location":"contribute/#Documentation","page":"🛠 Contribute","title":"Documentation","text":"","category":"section"},{"location":"contribute/","page":"🛠 Contribute","title":"🛠 Contribute","text":"If you add new features or change existing ones, please make sure to update the documentation accordingly. The documentation is written in Documenter.jl and is located in the docs/src folder.","category":"page"},{"location":"contribute/#Log-Changes","page":"🛠 Contribute","title":"Log Changes","text":"","category":"section"},{"location":"contribute/","page":"🛠 Contribute","title":"🛠 Contribute","text":"As of version 1.1.1, we have tried to be more stringent about logging changes. Please make sure to add a note to the CHANGELOG.md file for any changes you make. It is sufficient to add a note under the Unreleased section.","category":"page"},{"location":"contribute/#General-Pointers","page":"🛠 Contribute","title":"General Pointers","text":"","category":"section"},{"location":"contribute/","page":"🛠 Contribute","title":"🛠 Contribute","text":"There are also some general pointers for people looking to contribute to any of our Taija packages here.","category":"page"},{"location":"contribute/","page":"🛠 Contribute","title":"🛠 Contribute","text":"Please follow the SciML ColPrac guide.","category":"page"},{"location":"explanation/architecture/","page":"Package Architecture","title":"Package Architecture","text":"CurrentModule = CounterfactualExplanations ","category":"page"},{"location":"explanation/architecture/#Package-Architecture","page":"Package Architecture","title":"Package Architecture","text":"","category":"section"},{"location":"explanation/architecture/","page":"Package Architecture","title":"Package Architecture","text":"The diagram below provides an overview of the package architecture. It is built around two core modules that are designed to be as extensible as possible through dispatch: 1) Models is concerned with making any arbitrary model compatible with the package; 2) Generators is used to implement arbitrary counterfactual search algorithms.[1]","category":"page"},{"location":"explanation/architecture/","page":"Package Architecture","title":"Package Architecture","text":"The core function of the package, generate_counterfactual, uses an instance of type AbstractModel produced by the Models module and an instance of type AbstractGenerator produced by the Generators module.","category":"page"},{"location":"explanation/architecture/","page":"Package Architecture","title":"Package Architecture","text":"Metapackages from the Taija ecosystem provide additional functionality such as datasets, language interoperability, parallelization, and plotting. 
The CounterfactualExplanations package is designed to be used in conjunction with these metapackages, but can also be used as a standalone package.","category":"page"},{"location":"explanation/architecture/","page":"Package Architecture","title":"Package Architecture","text":"(Image: )","category":"page"},{"location":"explanation/architecture/","page":"Package Architecture","title":"Package Architecture","text":"[1] We have made an effort to keep the code base as flexible and extensible as possible, but cannot guarantee at this point that any counterfactual generator can be implemented without further adaptation.","category":"page"},{"location":"explanation/optimisers/overview/","page":"Overview","title":"Overview","text":"CurrentModule = CounterfactualExplanations ","category":"page"},{"location":"explanation/optimisers/overview/#Optimisation-Rules","page":"Overview","title":"Optimisation Rules","text":"","category":"section"},{"location":"explanation/optimisers/overview/","page":"Overview","title":"Overview","text":"Counterfactual search is an optimization problem. Consequently, the choice of the optimisation rule affects the generated counterfactuals. In the short term, we aim to enable users to choose any of the available Flux optimisers. This has not been sufficiently tested yet, and you may run into issues.","category":"page"},{"location":"explanation/optimisers/overview/#Custom-Optimisation-Rules","page":"Overview","title":"Custom Optimisation Rules","text":"","category":"section"},{"location":"explanation/optimisers/overview/","page":"Overview","title":"Overview","text":"Flux optimisers are specifically designed for deep learning, and in particular, for learning model parameters. In counterfactual search, the features are the free parameters that we are optimising over. To this end, some custom optimisation rules are necessary to incorporate ideas presented in the literature. In the following, we introduce those rules.","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"CurrentModule = CounterfactualExplanations ","category":"page"},{"location":"tutorials/data_preprocessing/#Handling-Data","page":"Handling Data","title":"Handling Data","text":"","category":"section"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"The package works with custom data containers that contain the input and output data as well as information about the type and mutability of features. In this tutorial, we will see how data can be prepared for use with the package.","category":"page"},{"location":"tutorials/data_preprocessing/#Basic-Functionality","page":"Handling Data","title":"Basic Functionality","text":"","category":"section"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"To demonstrate the basic way to prepare data, let’s look at a standard benchmark dataset: Fisher’s classic iris dataset. 
We can use MLDatasets to load this data.","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"dataset = Iris()","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"Our data constructor CounterfactualData needs at least two inputs: features X and targets y.","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"X = dataset.features\ny = dataset.targets","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"Next, we convert the input data to a Tables.MatrixTable (following the MLJ.jl convention). Concerning the target variable, we simply grab the first column of the data frame.","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"X = table(Tables.matrix(X))\ny = y[:,1]","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"Now we can feed these two ingredients to our constructor:","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"counterfactual_data = CounterfactualData(X, y)","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"Under the hood, the constructor performs basic preprocessing steps. For example, the output variable y is automatically one-hot encoded:","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"counterfactual_data.y","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"3×150 Matrix{Bool}:\n 1  1  1  1  1  1  1  1  1  1  1  1  1  …  0  0  0  0  0  0  0  0  0  0  0  0\n 0  0  0  0  0  0  0  0  0  0  0  0  0     0  0  0  0  0  0  0  0  0  0  0  0\n 0  0  0  0  0  0  0  0  0  0  0  0  0     1  1  1  1  1  1  1  1  1  1  1  1","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"Similarly, a transformer used to scale continuous input features is automatically fitted:","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"counterfactual_data.input_encoder","category":"page"},{"location":"tutorials/data_preprocessing/#Categorical-Features","page":"Handling Data","title":"Categorical Features","text":"","category":"section"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"For the counterfactual search, it is important to distinguish between continuous and categorical features. 
This is because categorical features cannot be perturbed arbitrarily: they can take specific discrete values, but not just any value on the real line.","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"Consider the following example:","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"y = rand([1,0],4)\nX = (\n    name=categorical([\"Danesh\", \"Lee\", \"Mary\", \"John\"]),\n    grade=categorical([\"A\", \"B\", \"A\", \"C\"], ordered=true),\n    sex=categorical([\"male\",\"female\",\"male\",\"male\"]),\n    height=[1.85, 1.67, 1.5, 1.67],\n)\nschema(X)","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"┌────────┬──────────────────┬──────────────────────────────────┐\n│ names  │ scitypes         │ types                            │\n├────────┼──────────────────┼──────────────────────────────────┤\n│ name   │ Multiclass{4}    │ CategoricalValue{String, UInt32} │\n│ grade  │ OrderedFactor{3} │ CategoricalValue{String, UInt32} │\n│ sex    │ Multiclass{2}    │ CategoricalValue{String, UInt32} │\n│ height │ Continuous       │ Float64                          │\n└────────┴──────────────────┴──────────────────────────────────┘","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"Typically, in the context of Unsupervised Learning, categorical features are one-hot or dummy encoded. To this end, we could use MLJ, for example:","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"hot = OneHotEncoder()\nmach = MLJBase.fit!(machine(hot, X))\nW = MLJBase.transform(mach, X)\nX = permutedims(MLJBase.matrix(W))","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"In all likelihood, this pre-processing step already happens at the stage when the supervised model is trained. Since our counterfactual generators need to work in the same feature domain as the model they are intended to explain, we assume that categorical features are already encoded.","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"The CounterfactualData constructor takes two optional arguments that can be used to specify the indices of categorical and continuous features. By default, all features are assumed to be continuous. For categorical features, the constructor expects an array of arrays of integers (Vector{Vector{Int}}) where each subarray includes the indices of all one-hot encoded rows related to a single categorical feature. 
In the example above, the name feature is one-hot encoded across rows 1, 2, 3 and 4 of X, the grade feature is encoded across the following three rows, etc.","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"schema(W)","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"┌──────────────┬────────────┬─────────┐\n│ names │ scitypes │ types │\n├──────────────┼────────────┼─────────┤\n│ name__Danesh │ Continuous │ Float64 │\n│ name__John │ Continuous │ Float64 │\n│ name__Lee │ Continuous │ Float64 │\n│ name__Mary │ Continuous │ Float64 │\n│ grade__A │ Continuous │ Float64 │\n│ grade__B │ Continuous │ Float64 │\n│ grade__C │ Continuous │ Float64 │\n│ sex__female │ Continuous │ Float64 │\n│ sex__male │ Continuous │ Float64 │\n│ height │ Continuous │ Float64 │\n└──────────────┴────────────┴─────────┘","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"The code chunk below assigns the categorical and continuous feature indices:","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"features_categorical = [\n [1,2,3,4], # name\n [5,6,7], # grade\n [8,9] # sex\n]\nfeatures_continuous = [10]","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"When instantiating the data container, these indices just need to be supplied as keyword arguments:","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"counterfactual_data = CounterfactualData(\n X,y;\n features_categorical = features_categorical,\n features_continuous = features_continuous\n)","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"This will ensure that the discrete domain of categorical features is respected in the counterfactual search. We achieve this through a form of Projected Gradient Descent and it works for any of our counterfactual generators.","category":"page"},{"location":"tutorials/data_preprocessing/#Example","page":"Handling Data","title":"Example","text":"","category":"section"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"To see this in action, let’s load some synthetic data using MLJ:","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"N = 1000\nX, ys = MLJBase.make_blobs(N, 2; centers=2, as_table=false, center_box=(-5 => 5), cluster_std=0.5)\nys .= ys.==2","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"Next, we generate a synthetic categorical feature based on the output variable. 
First, we define the discrete levels:","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"cat_values = [\"X\",\"Y\",\"Z\"]","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"Next, we impose that the categorical feature is most likely to take the first discrete level, namely X, whenever y is equal to 1.","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"xcat = map(ys) do y\n if y==1\n x = sample(cat_values, Weights([0.8,0.1,0.1]))\n else\n x = sample(cat_values, Weights([0.1,0.1,0.8]))\n end\nend\nxcat = categorical(xcat)\nX = (\n x1 = X[:,1],\n x2 = X[:,2],\n x3 = xcat\n)\nschema(X)","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"As above, we use a OneHotEncoder to transform the data:","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"hot = OneHotEncoder()\nmach = MLJBase.fit!(machine(hot, X))\nW = MLJBase.transform(mach, X)\nschema(W)\nX = permutedims(MLJBase.matrix(W))","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"Finally, we assign the categorical indices and instantiate our data container:","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"features_categorical = [collect(3:size(X,1))]\ncounterfactual_data = CounterfactualData(\n X,ys';\n features_categorical = features_categorical,\n)","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"With the data pre-processed we can use the fit_model function to train a simple classifier:","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"M = fit_model(counterfactual_data, :Linear)","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"Now it is finally time to generate counterfactuals. 
We first define 1 as our target and then choose a random sample from the non-target class:","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"target = 1\nfactual = 0\nchosen = rand(findall(predict_label(M, counterfactual_data) .== factual))\nx = select_factual(counterfactual_data, chosen) ","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"5×1 Matrix{Float32}:\n -3.879591\n 1.7199689\n 0.0\n 0.0\n 1.0","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"The factual x belongs to group Z.","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"We generate a counterfactual for x using the standard API call:","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"generator = GenericGenerator()\nce = generate_counterfactual(x, target, counterfactual_data, M, generator)","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"CounterfactualExplanation\nConvergence: ✅ after 1 steps.","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"The search yields the following counterfactual:","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"x′ = counterfactual(ce)","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"5-element Vector{Float32}:\n -3.89187\n 0.25591564\n 1.0\n 0.0\n 0.0","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"It belongs to group X.","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"This is intuitive because by construction the categorical variable is most likely to take that value when y is equal to the target outcome.","category":"page"},{"location":"tutorials/data_preprocessing/#Immutable-Features","page":"Handling Data","title":"Immutable Features","text":"","category":"section"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"In practice, features usually cannot be perturbed arbitrarily. Suppose, for example, that one of the features used by a bank to predict the creditworthiness of its clients is gender. If a counterfactual explanation for the prediction model indicates that female clients should change their gender to improve their creditworthiness, then this is an interesting insight (it reveals gender bias), but it is not usually an actionable transformation in practice. In such cases, we may want to constrain the mutability of features to ensure actionable and realistic recourse.","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"To illustrate how this can be implemented in CounterfactualExplanations.jl we will continue to work with the synthetic data from the previous section. Mutability of features can be defined in terms of four different options: 1) the feature is mutable in both directions, 2) the feature can only increase (e.g. age), 3) the feature can only decrease (e.g. 
time left until your next deadline) and 4) the feature is not mutable (e.g. skin colour, ethnicity, …). To specify which category a feature belongs to, you can pass a vector of symbols containing the mutability constraints at the pre-processing stage. For each feature you can choose from these four options: :both (mutable in both directions), :increase (only up), :decrease (only down) and :none (immutable). By default, nothing is passed to that keyword argument and it is assumed that all features are mutable in both directions.","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"Below we impose that the second feature is immutable.","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"counterfactual_data = CounterfactualData(load_linearly_separable()...)\nM = fit_model(counterfactual_data, :Linear)\ncounterfactual_data.mutability = [:both, :none]","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"target = 2\nfactual = 1\nchosen = rand(findall(predict_label(M, counterfactual_data) .== factual))\nx = select_factual(counterfactual_data, chosen) \nce = generate_counterfactual(x, target, counterfactual_data, M, generator)","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"The resulting counterfactual path is shown in the chart below. Since only the first feature can be perturbed, the sample can only move along the horizontal axis.","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"plot(ce)","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"(Image: )","category":"page"},{"location":"tutorials/data_preprocessing/#Domain-constraints","page":"Handling Data","title":"Domain constraints","text":"","category":"section"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"In some cases, we may also want to constrain the domain of some feature. For example, age as a feature is constrained to a range from 0 to some upper bound corresponding perhaps to the average life expectancy of humans. 
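For such a bounded feature, the constraint could be encoded roughly as follows (a minimal sketch, assuming a hypothetical data container with two continuous features of which the first is age; the bounds of 0 and 85 are purely illustrative):\n\n# Hypothetical sketch: age bounded between 0 and 85, second feature unconstrained:\ncounterfactual_data.domain = [(0.0, 85.0), (-Inf, Inf)]\n\n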
Below, for example, we impose a lower bound of 0.5 for our two features.","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"counterfactual_data.mutability = [:both, :both]\ncounterfactual_data.domain = [(0.5,Inf) for var in counterfactual_data.features_continuous]","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"This results in the counterfactual path shown below: since features are not allowed to be perturbed below the lower bound, the resulting counterfactual falls just short of the threshold probability gamma.","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"ce = generate_counterfactual(x, target, counterfactual_data, M, generator)\nplot(ce)","category":"page"},{"location":"assets/resources/#Further-Resources","page":"📚 Additional Resources","title":"Further Resources","text":"","category":"section"},{"location":"assets/resources/#JuliaCon-2022","page":"📚 Additional Resources","title":"JuliaCon 2022","text":"","category":"section"},{"location":"assets/resources/","page":"📚 Additional Resources","title":"📚 Additional Resources","text":"Slides: link","category":"page"},{"location":"assets/resources/#JuliaCon-Proceedings-Paper","page":"📚 Additional Resources","title":"JuliaCon Proceedings Paper","text":"","category":"section"},{"location":"assets/resources/","page":"📚 Additional Resources","title":"📚 Additional Resources","text":"TBD","category":"page"},{"location":"explanation/generators/greedy/","page":"Greedy","title":"Greedy","text":"CurrentModule = CounterfactualExplanations ","category":"page"},{"location":"explanation/generators/greedy/#GreedyGenerator","page":"Greedy","title":"GreedyGenerator","text":"","category":"section"},{"location":"explanation/generators/greedy/","page":"Greedy","title":"Greedy","text":"We use the term greedy to describe the counterfactual generator introduced by Schut et al. (2021).","category":"page"},{"location":"explanation/generators/greedy/#Description","page":"Greedy","title":"Description","text":"","category":"section"},{"location":"explanation/generators/greedy/","page":"Greedy","title":"Greedy","text":"The Greedy generator works under the premise of generating realistic counterfactuals by minimizing predictive uncertainty. Schut et al. (2021) show that for models that incorporate predictive uncertainty in their predictions, maximizing the predictive probability corresponds to minimizing the predictive uncertainty: by construction, the generated counterfactual will therefore be realistic (low epistemic uncertainty) and unambiguous (low aleatoric uncertainty).","category":"page"},{"location":"explanation/generators/greedy/","page":"Greedy","title":"Greedy","text":"For the counterfactual search, Schut et al. (2021) propose using a Jacobian-based Saliency Map Attack (JSMA). It is greedy in the sense that it is an “iterative algorithm that updates the most salient feature, i.e. the feature that has the largest influence on the classification, by delta at each step” (Schut et al. 
2021).","category":"page"},{"location":"explanation/generators/greedy/#Usage","page":"Greedy","title":"Usage","text":"","category":"section"},{"location":"explanation/generators/greedy/","page":"Greedy","title":"Greedy","text":"The approach can be used in our package as follows:","category":"page"},{"location":"explanation/generators/greedy/","page":"Greedy","title":"Greedy","text":"M = fit_model(counterfactual_data, :DeepEnsemble)\ngenerator = GreedyGenerator()\nce = generate_counterfactual(x, target, counterfactual_data, M, generator)\nplot(ce)","category":"page"},{"location":"explanation/generators/greedy/","page":"Greedy","title":"Greedy","text":"(Image: )","category":"page"},{"location":"explanation/generators/greedy/#References","page":"Greedy","title":"References","text":"","category":"section"},{"location":"explanation/generators/greedy/","page":"Greedy","title":"Greedy","text":"Schut, Lisa, Oscar Key, Rory Mc Grath, Luca Costabello, Bogdan Sacaleanu, Yarin Gal, et al. 2021. “Generating Interpretable Counterfactual Explanations By Implicit Minimisation of Epistemic and Aleatoric Uncertainties.” In International Conference on Artificial Intelligence and Statistics, 1756–64. PMLR.","category":"page"},{"location":"explanation/generators/probe/","page":"PROBE","title":"PROBE","text":"CurrentModule = CounterfactualExplanations ","category":"page"},{"location":"explanation/generators/probe/#ProbeGenerator","page":"PROBE","title":"ProbeGenerator","text":"","category":"section"},{"location":"explanation/generators/probe/","page":"PROBE","title":"PROBE","text":"The ProbeGenerator is designed to navigate the trade-offs between costs and robustness in Algorithmic Recourse (Pawelczyk et al. 2022).","category":"page"},{"location":"explanation/generators/probe/#Description","page":"PROBE","title":"Description","text":"","category":"section"},{"location":"explanation/generators/probe/","page":"PROBE","title":"PROBE","text":"The goal of ProbeGenerator is to find a recourse x’ whose prediction at any point y within some set around x’ belongs to the positive class with probability 1 - r, where r is the recourse invalidation rate. It minimizes the gap between the achieved and desired recourse invalidation rates, minimizes recourse costs, and also ensures that the resulting recourse achieves a positive model prediction.","category":"page"},{"location":"explanation/generators/probe/#Explanation","page":"PROBE","title":"Explanation","text":"","category":"section"},{"location":"explanation/generators/probe/","page":"PROBE","title":"PROBE","text":"The loss function of this generator is defined below. R is a hinge loss parameter which helps control for robustness. The loss and penalty functions can still be chosen freely.","category":"page"},{"location":"explanation/generators/probe/","page":"PROBE","title":"PROBE","text":"\\begin{aligned}\nR(x', \\sigma^2 I) + \\ell(f(x'), s) + \\lambda d_c(x', x)\n\\end{aligned}","category":"page"},{"location":"explanation/generators/probe/","page":"PROBE","title":"PROBE","text":"R uses the following formula to control for noise. 
It generates small perturbations and checks how often the counterfactual explanation flips back to a factual one when small amounts of noise are added to it.","category":"page"},{"location":"explanation/generators/probe/","page":"PROBE","title":"PROBE","text":"\\begin{aligned}\n\\Delta(x^{\\hat{E}}) = \\mathbb{E}_{\\varepsilon}\\left[h(x^{\\hat{E}}) - h(x^{\\hat{E}} + \\varepsilon)\\right]\n\\end{aligned}","category":"page"},{"location":"explanation/generators/probe/","page":"PROBE","title":"PROBE","text":"The above formula is not differentiable. For this reason, the generator uses the closed-form version of the formula below.","category":"page"},{"location":"explanation/generators/probe/","page":"PROBE","title":"PROBE","text":"\\begin{equation}\n\\tilde{\\Delta}(x^{\\hat{E}}, \\sigma^2 I) = 1 - \\Phi\\left(\\frac{\\sqrt{f(x^{\\hat{E}})}}{\\sqrt{\\nabla f(x^{\\hat{E}})^T \\sigma^2 I \\nabla f(x^{\\hat{E}})}}\\right) \n\\end{equation}","category":"page"},{"location":"explanation/generators/probe/#Usage","page":"PROBE","title":"Usage","text":"","category":"section"},{"location":"explanation/generators/probe/","page":"PROBE","title":"PROBE","text":"Generating a counterfactual with the data loaded and generator chosen works as follows:","category":"page"},{"location":"explanation/generators/probe/","page":"PROBE","title":"PROBE","text":"Note: It is important to set the convergence to “:invalidation_rate” here.","category":"page"},{"location":"explanation/generators/probe/","page":"PROBE","title":"PROBE","text":"M = fit_model(counterfactual_data, :DeepEnsemble)\nopt = Descent(0.01)\ngenerator = CounterfactualExplanations.Generators.ProbeGenerator(opt=opt)\nconv = CounterfactualExplanations.Convergence.InvalidationRateConvergence(;invalidation_rate=0.5)\nce = generate_counterfactual(x, target, counterfactual_data, M, generator, convergence=conv)\nplot(ce)","category":"page"},{"location":"explanation/generators/probe/","page":"PROBE","title":"PROBE","text":"Choosing different invalidation rates makes the counterfactual more or less robust. The following plot shows the counterfactuals generated for different invalidation rates.","category":"page"},{"location":"explanation/generators/probe/","page":"PROBE","title":"PROBE","text":"(Image: )","category":"page"},{"location":"explanation/generators/probe/#References","page":"PROBE","title":"References","text":"","category":"section"},{"location":"explanation/generators/probe/","page":"PROBE","title":"PROBE","text":"Pawelczyk, Martin, Teresa Datta, Johannes van-den-Heuvel, Gjergji Kasneci, and Himabindu Lakkaraju. 2022. “Probabilistically Robust Recourse: Navigating the Trade-Offs Between Costs and Robustness in Algorithmic Recourse.” arXiv Preprint arXiv:2203.06768.","category":"page"},{"location":"extensions/neurotree/","page":"NeuroTrees","title":"NeuroTrees","text":"CurrentModule = CounterfactualExplanations ","category":"page"},{"location":"extensions/neurotree/#[NeuroTreeModels.jl](https://evovest.github.io/NeuroTreeModels.jl/dev/)","page":"NeuroTrees","title":"NeuroTreeModels.jl","text":"","category":"section"},{"location":"extensions/neurotree/","page":"NeuroTrees","title":"NeuroTrees","text":"NeuroTreeModels.jl is a package that provides a framework for training differentiable tree-based models. This is relevant to the work on counterfactual explanations (CE), which often assumes that the underlying black-box model is differentiable with respect to its input. The literature on CE therefore regularly focuses exclusively on explaining deep learning models. 
This is at odds with the fact that the literature also typically focuses on tabular data, which is often best modeled by tree-based models (Grinsztajn, Oyallon, and Varoquaux 2022). The extension for NeuroTreeModels.jl provides a way to bridge this gap by allowing users to apply existing gradient-based CE methods to differentiable tree-based models.","category":"page"},{"location":"extensions/neurotree/","page":"NeuroTrees","title":"NeuroTrees","text":"warning: Experimental Feature\nPlease note that this extension is still experimental. Neither the behaviour of differentiable tree-based models nor their interplay with counterfactual explanations is well understood at this point. If you encounter any issues, please report them to the package maintainers. Your feedback is highly appreciated.Please also note that this extension is only tested on Julia 1.9 and higher, due to compatibility issues.","category":"page"},{"location":"extensions/neurotree/#Example","page":"NeuroTrees","title":"Example","text":"","category":"section"},{"location":"extensions/neurotree/","page":"NeuroTrees","title":"NeuroTrees","text":"The extension will be loaded automatically when loading the NeuroTreeModels package (assuming the CounterfactualExplanations package is also loaded).","category":"page"},{"location":"extensions/neurotree/","page":"NeuroTrees","title":"NeuroTrees","text":"using NeuroTreeModels","category":"page"},{"location":"extensions/neurotree/","page":"NeuroTrees","title":"NeuroTrees","text":"Next, we will fit a NeuroTree model to the moons dataset using our standard package API for doing so.","category":"page"},{"location":"extensions/neurotree/","page":"NeuroTrees","title":"NeuroTrees","text":"# Fit model to data:\ndata = CounterfactualData(load_moons()...)\nM = fit_model(\n data, :NeuroTree; \n depth=2, lr=5e-2, nrounds=50, batchsize=10\n)","category":"page"},{"location":"extensions/neurotree/","page":"NeuroTrees","title":"NeuroTrees","text":"NeuroTreeExt.NeuroTreeModel(NeuroTreeRegressor(loss = mlogloss, …), :classification_multi, NeuroTreeModels.NeuroTreeModel{NeuroTreeModels.MLogLoss, Chain{Tuple{BatchNorm{typeof(identity), Vector{Float32}, Float32, Vector{Float32}}, NeuroTreeModels.StackTree}}}(NeuroTreeModels.MLogLoss, Chain(BatchNorm(2, active=false), NeuroTreeModels.StackTree(NeuroTree[NeuroTree{Matrix{Float32}, Vector{Float32}, Array{Float32, 3}}(Float32[1.8824593 -0.28222033; -2.680499 0.67347014; … ; -1.0722864 1.3651229; -2.0926774 1.63557], Float32[-3.4070241, 4.545113, 1.0882677, -0.3497498, -2.766766, 1.9072449, -0.9736261, 3.9750721, 1.726214, 3.7279263 … -0.0664266, -0.4214582, -2.3816268, -3.1371245, 0.76548636, 2.636373, 2.4558601, 0.893434, -1.9484522, 4.793434], Float32[3.44271 -6.334693 -0.6308845 3.385659; -3.4316056 6.297003 0.7254221 -3.3283486;;; -3.7011054 -0.17596768 0.15429471 2.270125; 3.4926674 0.026218029 -0.19753197 -2.2337704;;; 1.1795454 -4.315231 0.28486454 1.9995956; -0.9651108 4.0999455 -0.05312265 -1.8039354;;; … ;;; 2.5076811 -0.46358463 -3.5438805 0.0686823; -2.592356 0.47884527 3.781507 -0.022692114;;; -0.59115165 -3.234046 0.09896194 2.375202; 0.5592871 3.3082843 -0.014032216 -2.1876256;;; 2.039389 -0.10134532 2.6637273 -4.999703; -2.0289893 0.3368772 -2.5739825 5.069934], tanh)])), Dict{Symbol, Any}(:feature_names => [:x1, :x2], :nrounds => 50, :device => :cpu)))","category":"page"},{"location":"extensions/neurotree/","page":"NeuroTrees","title":"NeuroTrees","text":"Finally, we select a factual instance and generate a counterfactual explanation for it 
using the generic gradient-based CE method.","category":"page"},{"location":"extensions/neurotree/","page":"NeuroTrees","title":"NeuroTrees","text":"# Select a factual instance:\ntarget = 1\nfactual = 0\nchosen = rand(findall(predict_label(M, data) .== factual))\nx = select_factual(data, chosen)\n\n# Generate counterfactual explanation:\nη = 0.01\ngenerator = GenericGenerator(; opt=Descent(η), λ=0.01)\nconv = CounterfactualExplanations.Convergence.DecisionThresholdConvergence(;\n decision_threshold=0.9, max_iter=100\n)\nce = generate_counterfactual(x, target, data, M, generator; convergence=conv)\nplot(ce, alpha=0.1)","category":"page"},{"location":"extensions/neurotree/","page":"NeuroTrees","title":"NeuroTrees","text":"(Image: )","category":"page"},{"location":"extensions/neurotree/#References","page":"NeuroTrees","title":"References","text":"","category":"section"},{"location":"extensions/neurotree/","page":"NeuroTrees","title":"NeuroTrees","text":"Grinsztajn, Léo, Edouard Oyallon, and Gaël Varoquaux. 2022. “Why Do Tree-Based Models Still Outperform Deep Learning on Tabular Data?” https://arxiv.org/abs/2207.08815.","category":"page"},{"location":"tutorials/generators/","page":"Handling Generators","title":"Handling Generators","text":"CurrentModule = CounterfactualExplanations ","category":"page"},{"location":"tutorials/generators/#Handling-Generators","page":"Handling Generators","title":"Handling Generators","text":"","category":"section"},{"location":"tutorials/generators/","page":"Handling Generators","title":"Handling Generators","text":"Generating Counterfactual Explanations can be seen as a generative modelling task because it involves generating samples in the input space: x \\sim \\mathcal{X}. In this tutorial, we will introduce how counterfactual GradientBasedGenerators are used. They are discussed in more detail in the explanatory section of the documentation.","category":"page"},{"location":"tutorials/generators/#Composable-Generators","page":"Handling Generators","title":"Composable Generators","text":"","category":"section"},{"location":"tutorials/generators/","page":"Handling Generators","title":"Handling Generators","text":"warning: Breaking Changes Expected\nWork on this feature is still in its very early stages and breaking changes should be expected. ","category":"page"},{"location":"tutorials/generators/","page":"Handling Generators","title":"Handling Generators","text":"One of the key objectives for this package is Composability. It turns out that many of the various counterfactual generators that have been proposed in the literature essentially do the same thing: they optimize an objective function. Formally we have,","category":"page"},{"location":"tutorials/generators/","page":"Handling Generators","title":"Handling Generators","text":"\n\\begin{aligned}\n\\mathbf{s}^\\prime = \\arg \\min_{\\mathbf{s}^\\prime \\in \\mathcal{S}} \\left\\{ \\text{yloss}(M(f(\\mathbf{s}^\\prime)), y^*) + \\lambda \\text{cost}(f(\\mathbf{s}^\\prime)) \\right\\}\n\\end{aligned} \n\\qquad(1)","category":"page"},{"location":"tutorials/generators/","page":"Handling Generators","title":"Handling Generators","text":"where \\text{yloss} denotes the main loss function and \\text{cost} is a penalty term (Altmeyer et al. 2023).","category":"page"},{"location":"tutorials/generators/","page":"Handling Generators","title":"Handling Generators","text":"Without going into further detail here, the important thing to mention is that Equation 1 very closely describes how counterfactual search is actually implemented in the package. 
In other words, all off-the-shelf generators currently implemented work with that same objective. They just vary in the way that penalties are defined, for example. This gives rise to an interesting idea:","category":"page"},{"location":"tutorials/generators/","page":"Handling Generators","title":"Handling Generators","text":"Why not compose generators that combine ideas from different off-the-shelf generators?","category":"page"},{"location":"tutorials/generators/","page":"Handling Generators","title":"Handling Generators","text":"The GradientBasedGenerator class provides a straightforward way to do this, without requiring users to build custom GradientBasedGenerators from scratch. It can be instantiated as follows:","category":"page"},{"location":"tutorials/generators/","page":"Handling Generators","title":"Handling Generators","text":"generator = GradientBasedGenerator()","category":"page"},{"location":"tutorials/generators/","page":"Handling Generators","title":"Handling Generators","text":"By default, this creates a generator that simply performs gradient descent without any penalties. To modify the behaviour of the generator, you can define the counterfactual search objective function using the @objective macro:","category":"page"},{"location":"tutorials/generators/","page":"Handling Generators","title":"Handling Generators","text":"@objective(generator, logitbinarycrossentropy + 0.1distance_l2 + 1.0ddp_diversity)","category":"page"},{"location":"tutorials/generators/","page":"Handling Generators","title":"Handling Generators","text":"Here we have essentially created a version of the DiCEGenerator:","category":"page"},{"location":"tutorials/generators/","page":"Handling Generators","title":"Handling Generators","text":"ce = generate_counterfactual(x, target, counterfactual_data, M, generator; num_counterfactuals=5)\nplot(ce)","category":"page"},{"location":"tutorials/generators/","page":"Handling Generators","title":"Handling Generators","text":"(Image: )","category":"page"},{"location":"tutorials/generators/","page":"Handling Generators","title":"Handling Generators","text":"Multiple macros can be chained using Chain.jl, making it easy to create entirely new flavours of counterfactual generators. The following generator, for example, combines ideas from DiCE (Mothilal, Sharma, and Tan 2020) and REVISE (Joshi et al. 2019):","category":"page"},{"location":"tutorials/generators/","page":"Handling Generators","title":"Handling Generators","text":"@chain generator begin\n    @objective logitcrossentropy + 1.0ddp_diversity     # DiCE (Mothilal et al. 2020)\n    @with_optimiser Flux.Adam(0.1)                      \n    @search_latent_space                                # REVISE (Joshi et al. 2019)\nend","category":"page"},{"location":"tutorials/generators/","page":"Handling Generators","title":"Handling Generators","text":"Let’s take this generator to our MNIST dataset and generate a counterfactual explanation for turning a 0 into an 8.","category":"page"},{"location":"tutorials/generators/","page":"Handling Generators","title":"Handling Generators","text":"(Image: )","category":"page"},{"location":"tutorials/generators/#Off-the-Shelf-Generators","page":"Handling Generators","title":"Off-the-Shelf Generators","text":"","category":"section"},{"location":"tutorials/generators/","page":"Handling Generators","title":"Handling Generators","text":"Off-the-shelf generators are just default recipes for counterfactual generators. 
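Conversely, this also means that many of the default recipes can be approximated through composition. The following is a minimal sketch that roughly mirrors a Wachter-style generator by dropping the diversity term (the penalty weight 0.1 and the use of an L2 distance penalty are illustrative assumptions rather than the package defaults; the original Wachter et al. objective uses a MAD-weighted L1 distance):","category":"page"},{"location":"tutorials/generators/","page":"Handling Generators","title":"Handling Generators","text":"# Minimal sketch: hand-composing a Wachter-style generator\ngenerator = GradientBasedGenerator()\n@objective(generator, logitbinarycrossentropy + 0.1distance_l2)  # yloss + λ cost, cf. Equation 1\nce = generate_counterfactual(x, target, counterfactual_data, M, generator)","category":"page"},{"location":"tutorials/generators/","page":"Handling Generators","title":"Handling Generators","text":"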
Currently, the following off-the-shelf counterfactual generators are implemented in the package:","category":"page"},{"location":"tutorials/generators/","page":"Handling Generators","title":"Handling Generators","text":"generator_catalogue","category":"page"},{"location":"tutorials/generators/","page":"Handling Generators","title":"Handling Generators","text":"Dict{Symbol, Any} with 11 entries:\n :gravitational => GravitationalGenerator\n :growing_spheres => GrowingSpheresGenerator\n :revise => REVISEGenerator\n :clue => CLUEGenerator\n :probe => ProbeGenerator\n :dice => DiCEGenerator\n :feature_tweak => FeatureTweakGenerator\n :claproar => ClaPROARGenerator\n :wachter => WachterGenerator\n :generic => GenericGenerator\n :greedy => GreedyGenerator","category":"page"},{"location":"tutorials/generators/","page":"Handling Generators","title":"Handling Generators","text":"To specify the type of generator you want to use, you can simply instantiate it:","category":"page"},{"location":"tutorials/generators/","page":"Handling Generators","title":"Handling Generators","text":"# Search:\ngenerator = GenericGenerator()\nce = generate_counterfactual(x, target, counterfactual_data, M, generator)\nplot(ce)","category":"page"},{"location":"tutorials/generators/","page":"Handling Generators","title":"Handling Generators","text":"(Image: )","category":"page"},{"location":"tutorials/generators/","page":"Handling Generators","title":"Handling Generators","text":"We generally make an effort to follow the literature as closely as possible when implementing off-the-shelf generators.","category":"page"},{"location":"tutorials/generators/#References","page":"Handling Generators","title":"References","text":"","category":"section"},{"location":"tutorials/generators/","page":"Handling Generators","title":"Handling Generators","text":"Altmeyer, Patrick, Giovan Angela, Aleksander Buszydlik, Karol Dobiczek, Arie van Deursen, and Cynthia Liem. 2023. “Endogenous Macrodynamics in Algorithmic Recourse.” In First IEEE Conference on Secure and Trustworthy Machine Learning. https://doi.org/10.1109/satml54575.2023.00036.","category":"page"},{"location":"tutorials/generators/","page":"Handling Generators","title":"Handling Generators","text":"Joshi, Shalmali, Oluwasanmi Koyejo, Warut Vijitbenjaronk, Been Kim, and Joydeep Ghosh. 2019. “Towards Realistic Individual Recourse and Actionable Explanations in Black-Box Decision Making Systems.” https://arxiv.org/abs/1907.09615.","category":"page"},{"location":"tutorials/generators/","page":"Handling Generators","title":"Handling Generators","text":"Mothilal, Ramaravind K, Amit Sharma, and Chenhao Tan. 2020. “Explaining Machine Learning Classifiers Through Diverse Counterfactual Explanations.” In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 607–17. 
https://doi.org/10.1145/3351095.3372850.","category":"page"},{"location":"explanation/generators/mint/","page":"MINT","title":"MINT","text":"CurrentModule = CounterfactualExplanations ","category":"page"},{"location":"explanation/generators/mint/#MINT-Generator","page":"MINT","title":"MINT Generator","text":"","category":"section"},{"location":"explanation/generators/mint/","page":"MINT","title":"MINT","text":"In this tutorial, we introduce the MINT generator, a counterfactual generator based on the Recourse through Minimal Intervention (MINT) method proposed by Karimi, Schölkopf, and Valera (2021).","category":"page"},{"location":"explanation/generators/mint/","page":"MINT","title":"MINT","text":"note: Note\nThere is currently no custom type for this generator, because we anticipate changes to the API for composable generators. This tutorial explains how counterfactuals can nonetheless be generated consistently with the MINT framework.","category":"page"},{"location":"explanation/generators/mint/#Description","page":"MINT","title":"Description","text":"","category":"section"},{"location":"explanation/generators/mint/","page":"MINT","title":"MINT","text":"The MINT generator incorporates causal reasoning into algorithmic recourse to achieve minimal interventions when generating a counterfactual explanation. The main idea is that merely perturbing the inputs of a black-box model without taking the causal relations in the data into account can lead to misleading recommendations. We therefore shift to a perspective where every action/perturbation is an intervention in the causal graph of the problem, so that a change affects not only the intervened-upon variable but also its children in the causal structure. The generator utilizes a Structural Causal Model (SCM) to encode the variables in a way that propagates causal effects, and uses a generic gradient-based generator to create the search path; that is, any gradient-based generator (ECCCo, REVISE, Wachter, …) can be used with the MINT SCM encoder to generate counterfactual samples in latent space for algorithmic recourse with minimal interventions.","category":"page"},{"location":"explanation/generators/mint/","page":"MINT","title":"MINT","text":"The MINT algorithm minimizes a loss function that combines the causal constraints of the SCM and the distance between the generated counterfactual and the original input. Since we want a gradient-based generator, we need to turn the constrained optimization problem into an unconstrained one, which we do by using the Lagrangian. Initially, as defined in Karimi, Schölkopf, and Valera (2021), we aim to find the minimal-cost set of actions A (in the form of structural interventions) that results in a counterfactual instance yielding the favorable output from h,","category":"page"},{"location":"explanation/generators/mint/","page":"MINT","title":"MINT","text":"\\begin{aligned}\nA^* \\in \\arg\\min_A \\text{cost}(A; \\mathbf{x}_F) \\\\\\\\\n\\textrm{s.t.} \\quad h(\\mathbf{x}_{SCF}) \\neq h(\\mathbf{x}_F)\n\\end{aligned}","category":"page"},{"location":"explanation/generators/mint/","page":"MINT","title":"MINT","text":"where \\mathbf{x}_F is the original input, \\mathbf{x}_{SCF} is the counterfactual instance, and h is the black-box model. 
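To build intuition for how an intervention propagates through the causal structure before formalizing it, consider the following hand-rolled toy example (a minimal sketch: the two-variable SCM x2 := 2x1 + u2 is a hypothetical illustration, not part of the package API):","category":"page"},{"location":"explanation/generators/mint/","page":"MINT","title":"MINT","text":"# Toy SCM (hypothetical): x1 := u1 and x2 := 2 * x1 + u2\nf2(x1) = 2 * x1                        # structural equation for the child x2\nu1, u2 = 0.5, -0.2                     # exogenous noise terms\nx1_F = u1                              # factual values\nx2_F = f2(x1_F) + u2\nδ = 1.0                                # intervention on x1 (so x1 is in I)\nx1_SCF = x1_F + δ                      # the intervened-upon variable changes directly\nx2_SCF = x2_F + f2(x1_SCF) - f2(x1_F)  # the child (not in I) updates via its structural equation","category":"page"},{"location":"explanation/generators/mint/","page":"MINT","title":"MINT","text":"This is exactly the update rule formalized next. 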
We use the \\mathbf{x}_{SCF} terminology because the counterfactual is derived from the SCM,","category":"page"},{"location":"explanation/generators/mint/","page":"MINT","title":"MINT","text":"\\begin{equation}\nx_{SCF_i} =\n\\begin{cases}\nx_{F_i} + \\delta_i & \\text{if } i \\in I \\\\\\\\\nx_{F_i} + f_i(\\text{pa}_{SCF_i}) - f_i(\\text{pa}_{F_i}) & \\text{if } i \\notin I\n\\end{cases}\n\\end{equation}","category":"page"},{"location":"explanation/generators/mint/","page":"MINT","title":"MINT","text":"where I is the set of intervened-upon variables, f_i is the function that generates the value of variable i given its parents, and \\text{pa}_{SCF_i} and \\text{pa}_{F_i} are the parents of variable i in the counterfactual and the original instance, respectively. This closed-form expression for the decision variable \\mathbf{x}_{SCF} is what makes it possible to use a gradient-based generator, since the Lagrangian is differentiable,","category":"page"},{"location":"explanation/generators/mint/","page":"MINT","title":"MINT","text":"\\begin{equation}\n\\mathcal{L}_{\\texttt{MINT}}(\\mathbf{x}_{SCF}) = \\lambda \\text{cost}(\\mathbf{x}_{SCF}, \\mathbf{x}_F) + \\text{yloss}(\\mathbf{x}_{SCF}, y^*).\n\\end{equation}","category":"page"},{"location":"explanation/generators/mint/#Usage","page":"MINT","title":"Usage","text":"","category":"section"},{"location":"explanation/generators/mint/","page":"MINT","title":"MINT","text":"As we already stated, the MINT generator is not yet implemented as a custom type in the package. However, the MINT algorithm can be implemented using the generic generator and the SCM encoder, which we implement using the CausalInference.jl package. The following code snippet shows how to use the MINT algorithm to generate counterfactuals using any gradient-based generator:","category":"page"},{"location":"explanation/generators/mint/","page":"MINT","title":"MINT","text":"using CausalInference\nusing CounterfactualExplanations\nusing CounterfactualExplanations.DataPreprocessing: fit_transformer\nusing Tables # needed for Tables.matrix below\n\nN = 2000\ndf = (\n    x = randn(N), \n    v = randn(N) .^ 2 + randn(N) * 0.25, \n    w = cos.(randn(N)) + randn(N) * 0.25, \n    z = randn(N) .^ 2 + cos.(randn(N)) + randn(N) * 0.25 + randn(N) * 0.25, \n    s = sin.(randn(N) .^ 2 + cos.(randn(N)) + randn(N) * 0.25 + randn(N) * 0.25) + randn(N) * 0.25\n)\ny_lab = rand(0:2, N)\ncounterfactual_data_scm = CounterfactualData(Tables.matrix(df; transpose=true), y_lab)\n\nM = fit_model(counterfactual_data_scm, :Linear)\nchosen = rand(findall(predict_label(M, counterfactual_data_scm) .== 1))\nx = select_factual(counterfactual_data_scm, chosen)\n\ndata_scm = deepcopy(counterfactual_data_scm)\ndata_scm.input_encoder = fit_transformer(data_scm, CausalInference.SCM)\n\nce = generate_counterfactual(x, 2, data_scm, M, GenericGenerator(); initialization=:identity)","category":"page"},{"location":"explanation/generators/mint/","page":"MINT","title":"MINT","text":"CounterfactualExplanation\nConvergence: ❌ after 100 steps.","category":"page"},{"location":"explanation/generators/mint/","page":"MINT","title":"MINT","text":"note: Note\nThe above documentation is based on the information provided in the MINT paper. Please refer to the original paper for more detailed explanations and implementation specifics.","category":"page"},{"location":"explanation/generators/mint/#References","page":"MINT","title":"References","text":"","category":"section"},{"location":"explanation/generators/mint/","page":"MINT","title":"MINT","text":"Karimi, Amir-Hossein, Bernhard Schölkopf, and Isabel Valera. 2021. 
“Algorithmic Recourse: From Counterfactual Explanations to Interventions.” In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 353–62.","category":"page"},{"location":"tutorials/parallelization/","page":"Parallelization","title":"Parallelization","text":"CurrentModule = CounterfactualExplanations ","category":"page"},{"location":"tutorials/parallelization/#Parallelization","page":"Parallelization","title":"Parallelization","text":"","category":"section"},{"location":"tutorials/parallelization/","page":"Parallelization","title":"Parallelization","text":"Version 0.1.15 adds support for parallelization through multi-processing. Currently, the only available backend for multi-processing is MPI.jl; multi-threading is also supported (see below).","category":"page"},{"location":"tutorials/parallelization/#Available-functions","page":"Parallelization","title":"Available functions","text":"","category":"section"},{"location":"tutorials/parallelization/","page":"Parallelization","title":"Parallelization","text":"Parallelization is only available for certain functions. To check if a function is parallelizable, you can use the parallelizable function:","category":"page"},{"location":"tutorials/parallelization/","page":"Parallelization","title":"Parallelization","text":"using CounterfactualExplanations.Evaluation: evaluate, benchmark\nprintln(parallelizable(generate_counterfactual))\nprintln(parallelizable(evaluate))\nprintln(parallelizable(predict_label))","category":"page"},{"location":"tutorials/parallelization/","page":"Parallelization","title":"Parallelization","text":"true\ntrue\nfalse","category":"page"},{"location":"tutorials/parallelization/","page":"Parallelization","title":"Parallelization","text":"In the following, we will generate multiple counterfactuals and evaluate them in parallel:","category":"page"},{"location":"tutorials/parallelization/","page":"Parallelization","title":"Parallelization","text":"chosen = rand(findall(predict_label(M, counterfactual_data) .== factual), 1000)\nxs = select_factual(counterfactual_data, chosen)","category":"page"},{"location":"tutorials/parallelization/#Multi-threading","page":"Parallelization","title":"Multi-threading","text":"","category":"section"},{"location":"tutorials/parallelization/","page":"Parallelization","title":"Parallelization","text":"We first instantiate a ThreadsParallelizer object:","category":"page"},{"location":"tutorials/parallelization/","page":"Parallelization","title":"Parallelization","text":"parallelizer = ThreadsParallelizer()","category":"page"},{"location":"tutorials/parallelization/","page":"Parallelization","title":"Parallelization","text":"ThreadsParallelizer()","category":"page"},{"location":"tutorials/parallelization/","page":"Parallelization","title":"Parallelization","text":"To generate counterfactuals in parallel, we use the parallelize function, invoked here through the @with_parallelizer macro:","category":"page"},{"location":"tutorials/parallelization/","page":"Parallelization","title":"Parallelization","text":"ces = @with_parallelizer parallelizer begin\n    generate_counterfactual(\n        xs,\n        target,\n        counterfactual_data,\n        M,\n        generator\n    )\nend","category":"page"},{"location":"tutorials/parallelization/","page":"Parallelization","title":"Parallelization","text":"Generating counterfactuals ...   0%|           |  ETA: 0:01:29 (89.14 ms/it)Generating counterfactuals ... 
100%|███████| Time: 0:00:01 ( 1.59 ms/it)\n\n1000-element Vector{AbstractCounterfactualExplanation}:\n CounterfactualExplanation\nConvergence: ✅ after 7 steps.\n CounterfactualExplanation\nConvergence: ✅ after 7 steps.\n CounterfactualExplanation\nConvergence: ✅ after 8 steps.\n CounterfactualExplanation\nConvergence: ✅ after 6 steps.\n CounterfactualExplanation\nConvergence: ✅ after 7 steps.\n CounterfactualExplanation\nConvergence: ✅ after 8 steps.\n CounterfactualExplanation\nConvergence: ✅ after 8 steps.\n CounterfactualExplanation\nConvergence: ✅ after 8 steps.\n CounterfactualExplanation\nConvergence: ✅ after 7 steps.\n CounterfactualExplanation\nConvergence: ✅ after 7 steps.\n CounterfactualExplanation\nConvergence: ✅ after 7 steps.\n CounterfactualExplanation\nConvergence: ✅ after 8 steps.\n CounterfactualExplanation\nConvergence: ✅ after 6 steps.\n ⋮\n CounterfactualExplanation\nConvergence: ✅ after 9 steps.\n CounterfactualExplanation\nConvergence: ✅ after 7 steps.\n CounterfactualExplanation\nConvergence: ✅ after 6 steps.\n CounterfactualExplanation\nConvergence: ✅ after 7 steps.\n CounterfactualExplanation\nConvergence: ✅ after 7 steps.\n CounterfactualExplanation\nConvergence: ✅ after 8 steps.\n CounterfactualExplanation\nConvergence: ✅ after 7 steps.\n CounterfactualExplanation\nConvergence: ✅ after 8 steps.\n CounterfactualExplanation\nConvergence: ✅ after 7 steps.\n CounterfactualExplanation\nConvergence: ✅ after 7 steps.\n CounterfactualExplanation\nConvergence: ✅ after 8 steps.\n CounterfactualExplanation\nConvergence: ✅ after 7 steps.","category":"page"},{"location":"tutorials/parallelization/","page":"Parallelization","title":"Parallelization","text":"To evaluate counterfactuals in parallel, we again use the parallelize function:","category":"page"},{"location":"tutorials/parallelization/","page":"Parallelization","title":"Parallelization","text":"@with_parallelizer parallelizer evaluate(ces)","category":"page"},{"location":"tutorials/parallelization/","page":"Parallelization","title":"Parallelization","text":"Evaluating counterfactuals ... 0%| | ETA: 0:07:03 ( 0.42 s/it)Evaluating counterfactuals ... 
100%|███████| Time: 0:00:00 ( 0.86 ms/it)\n\n1000-element Vector{Any}:\n Vector[[1.0], Float32[3.2939816], [0.0]]\n Vector[[1.0], Float32[3.019046], [0.0]]\n Vector[[1.0], Float32[3.701171], [0.0]]\n Vector[[1.0], Float32[2.5611918], [0.0]]\n Vector[[1.0], Float32[2.9027307], [0.0]]\n Vector[[1.0], Float32[3.7893882], [0.0]]\n Vector[[1.0], Float32[3.5026522], [0.0]]\n Vector[[1.0], Float32[3.6317568], [0.0]]\n Vector[[1.0], Float32[3.084984], [0.0]]\n Vector[[1.0], Float32[3.2268934], [0.0]]\n Vector[[1.0], Float32[2.834947], [0.0]]\n Vector[[1.0], Float32[3.656587], [0.0]]\n Vector[[1.0], Float32[2.5985842], [0.0]]\n ⋮\n Vector[[1.0], Float32[4.067538], [0.0]]\n Vector[[1.0], Float32[3.02231], [0.0]]\n Vector[[1.0], Float32[2.748292], [0.0]]\n Vector[[1.0], Float32[2.9483426], [0.0]]\n Vector[[1.0], Float32[3.066149], [0.0]]\n Vector[[1.0], Float32[3.6018147], [0.0]]\n Vector[[1.0], Float32[3.0138078], [0.0]]\n Vector[[1.0], Float32[3.5724509], [0.0]]\n Vector[[1.0], Float32[3.117551], [0.0]]\n Vector[[1.0], Float32[2.9670508], [0.0]]\n Vector[[1.0], Float32[3.4107168], [0.0]]\n Vector[[1.0], Float32[3.0252533], [0.0]]","category":"page"},{"location":"tutorials/parallelization/","page":"Parallelization","title":"Parallelization","text":"Benchmarks can also be run with parallelization by specifying the parallelizer argument:","category":"page"},{"location":"tutorials/parallelization/","page":"Parallelization","title":"Parallelization","text":"# Models:\nbmk = benchmark(counterfactual_data; parallelizer = parallelizer)","category":"page"},{"location":"tutorials/parallelization/#MPI","page":"Parallelization","title":"MPI","text":"","category":"section"},{"location":"tutorials/parallelization/","page":"Parallelization","title":"Parallelization","text":"note: Note\nTo use MPI, you need to have MPI installed on your machine. Running the following code straight from a running Julia session will work, but it will be run on a single process. To execute the code on multiple processes, you need to run it from the command line with mpirun or mpiexec. For example, to run a script on 4 processes, you can run the following command from the command line:\n\nmpiexecjl --project -n 4 julia -e 'include(\"docs/src/srcipts/mpi.jl\")'For more information, see MPI.jl. ","category":"page"},{"location":"tutorials/parallelization/","page":"Parallelization","title":"Parallelization","text":"We first instantiate an MPIParallelizer object:","category":"page"},{"location":"tutorials/parallelization/","page":"Parallelization","title":"Parallelization","text":"import MPI\nMPI.Init()\nparallelizer = MPIParallelizer(MPI.COMM_WORLD; threaded=true)","category":"page"},{"location":"tutorials/parallelization/","page":"Parallelization","title":"Parallelization","text":"Precompiling MPIExt\n  ✓ TaijaParallel → MPIExt\n  1 dependency successfully precompiled in 3 seconds. 255 already precompiled.\n[ Info: Precompiling MPIExt [48137b38-b316-530b-be8a-261f41e68c23]\n┌ Warning: Module TaijaParallel with build ID ffffffff-ffff-ffff-0001-2d458926c256 is missing from the cache.\n│ This may mean TaijaParallel [bf1c2c22-5e42-4e78-8b6b-92e6c673eeb0] does not support precompilation but is imported by a module that does.\n└ @ Base loading.jl:1948\n[ Info: Skipping precompilation since __precompile__(false). 
Importing MPIExt [48137b38-b316-530b-be8a-261f41e68c23].\n[ Info: Using `MPI.jl` for multi-processing.\n\nRunning on 1 processes.\n\nMPIExt.MPIParallelizer(MPI.Comm(1140850688), 0, 1, nothing, true)","category":"page"},{"location":"tutorials/parallelization/","page":"Parallelization","title":"Parallelization","text":"To generate counterfactuals in parallel, we use the parallelize function:","category":"page"},{"location":"tutorials/parallelization/","page":"Parallelization","title":"Parallelization","text":"ces = @with_parallelizer parallelizer begin\n generate_counterfactual(\n xs,\n target,\n counterfactual_data,\n M,\n generator\n )\nend","category":"page"},{"location":"tutorials/parallelization/","page":"Parallelization","title":"Parallelization","text":"Generating counterfactuals ... 9%|▋ | ETA: 0:00:01 ( 1.15 ms/it)Generating counterfactuals ... 19%|█▍ | ETA: 0:00:01 ( 1.07 ms/it)Generating counterfactuals ... 29%|██ | ETA: 0:00:01 ( 1.10 ms/it)Generating counterfactuals ... 39%|██▊ | ETA: 0:00:01 ( 1.08 ms/it)Generating counterfactuals ... 49%|███▍ | ETA: 0:00:01 ( 1.08 ms/it)Generating counterfactuals ... 59%|████▏ | ETA: 0:00:00 ( 1.08 ms/it)Generating counterfactuals ... 69%|████▊ | ETA: 0:00:00 ( 1.08 ms/it)Generating counterfactuals ... 79%|█████▌ | ETA: 0:00:00 ( 1.07 ms/it)Generating counterfactuals ... 89%|██████▎| ETA: 0:00:00 ( 1.07 ms/it)Generating counterfactuals ... 99%|██████▉| ETA: 0:00:00 ( 1.06 ms/it)Generating counterfactuals ... 100%|███████| Time: 0:00:01 ( 1.06 ms/it)\n\n1000-element Vector{AbstractCounterfactualExplanation}:\n CounterfactualExplanation\nConvergence: ✅ after 7 steps.\n CounterfactualExplanation\nConvergence: ✅ after 7 steps.\n CounterfactualExplanation\nConvergence: ✅ after 8 steps.\n CounterfactualExplanation\nConvergence: ✅ after 6 steps.\n CounterfactualExplanation\nConvergence: ✅ after 7 steps.\n CounterfactualExplanation\nConvergence: ✅ after 8 steps.\n CounterfactualExplanation\nConvergence: ✅ after 7 steps.\n CounterfactualExplanation\nConvergence: ✅ after 8 steps.\n CounterfactualExplanation\nConvergence: ✅ after 7 steps.\n CounterfactualExplanation\nConvergence: ✅ after 7 steps.\n CounterfactualExplanation\nConvergence: ✅ after 6 steps.\n CounterfactualExplanation\nConvergence: ✅ after 8 steps.\n CounterfactualExplanation\nConvergence: ✅ after 6 steps.\n ⋮\n CounterfactualExplanation\nConvergence: ✅ after 9 steps.\n CounterfactualExplanation\nConvergence: ✅ after 7 steps.\n CounterfactualExplanation\nConvergence: ✅ after 6 steps.\n CounterfactualExplanation\nConvergence: ✅ after 7 steps.\n CounterfactualExplanation\nConvergence: ✅ after 7 steps.\n CounterfactualExplanation\nConvergence: ✅ after 8 steps.\n CounterfactualExplanation\nConvergence: ✅ after 7 steps.\n CounterfactualExplanation\nConvergence: ✅ after 8 steps.\n CounterfactualExplanation\nConvergence: ✅ after 7 steps.\n CounterfactualExplanation\nConvergence: ✅ after 7 steps.\n CounterfactualExplanation\nConvergence: ✅ after 7 steps.\n CounterfactualExplanation\nConvergence: ✅ after 7 steps.","category":"page"},{"location":"tutorials/parallelization/","page":"Parallelization","title":"Parallelization","text":"To evaluate counterfactuals in parallel, we again use the parallelize function:","category":"page"},{"location":"tutorials/parallelization/","page":"Parallelization","title":"Parallelization","text":"@with_parallelizer parallelizer 
evaluate(ces)","category":"page"},{"location":"tutorials/parallelization/","page":"Parallelization","title":"Parallelization","text":"1000-element Vector{Any}:\n Vector[[1.0], Float32[3.0941274], [0.0]]\n Vector[[1.0], Float32[3.0894346], [0.0]]\n Vector[[1.0], Float32[3.5737448], [0.0]]\n Vector[[1.0], Float32[2.6201036], [0.0]]\n Vector[[1.0], Float32[2.8519764], [0.0]]\n Vector[[1.0], Float32[3.7762523], [0.0]]\n Vector[[1.0], Float32[3.4162796], [0.0]]\n Vector[[1.0], Float32[3.6095932], [0.0]]\n Vector[[1.0], Float32[3.1347957], [0.0]]\n Vector[[1.0], Float32[3.0313473], [0.0]]\n Vector[[1.0], Float32[2.7612567], [0.0]]\n Vector[[1.0], Float32[3.6191392], [0.0]]\n Vector[[1.0], Float32[2.610616], [0.0]]\n ⋮\n Vector[[1.0], Float32[4.0844703], [0.0]]\n Vector[[1.0], Float32[3.0119], [0.0]]\n Vector[[1.0], Float32[2.4461186], [0.0]]\n Vector[[1.0], Float32[3.071967], [0.0]]\n Vector[[1.0], Float32[3.132917], [0.0]]\n Vector[[1.0], Float32[3.5403214], [0.0]]\n Vector[[1.0], Float32[3.0588162], [0.0]]\n Vector[[1.0], Float32[3.5600657], [0.0]]\n Vector[[1.0], Float32[3.2205954], [0.0]]\n Vector[[1.0], Float32[2.896302], [0.0]]\n Vector[[1.0], Float32[3.2603998], [0.0]]\n Vector[[1.0], Float32[3.1369917], [0.0]]","category":"page"},{"location":"tutorials/parallelization/","page":"Parallelization","title":"Parallelization","text":"tip: Tip\nNote that parallelizable processes can be supplied as input to the macro either as a block or directly as an expression.","category":"page"},{"location":"tutorials/parallelization/","page":"Parallelization","title":"Parallelization","text":"Benchmarks can also be run with parallelization by specifying parallelizer argument:","category":"page"},{"location":"tutorials/parallelization/","page":"Parallelization","title":"Parallelization","text":"# Models:\nbmk = benchmark(counterfactual_data; parallelizer = parallelizer)","category":"page"},{"location":"tutorials/parallelization/","page":"Parallelization","title":"Parallelization","text":"The following code snippet shows a complete example script that uses MPI for running a benchmark in parallel:","category":"page"},{"location":"tutorials/parallelization/","page":"Parallelization","title":"Parallelization","text":"using CounterfactualExplanations\nusing CounterfactualExplanations.Evaluation: benchmark\nusing CounterfactualExplanations.Models\nimport MPI\n\nMPI.Init()\n\ndata = TaijaData.load_linearly_separable()\ncounterfactual_data = DataPreprocessing.CounterfactualData(data...)\nM = fit_model(counterfactual_data, :Linear)\nfactual = 1\ntarget = 2\nchosen = rand(findall(predict_label(M, counterfactual_data) .== factual), 100)\nxs = select_factual(counterfactual_data, chosen)\ngenerator = GenericGenerator()\n\nparallelizer = MPIParallelizer(MPI.COMM_WORLD)\n\nbmk = benchmark(counterfactual_data; parallelizer=parallelizer)\n\nMPI.Finalize()","category":"page"},{"location":"tutorials/parallelization/","page":"Parallelization","title":"Parallelization","text":"The file can be executed from the command line as follows:","category":"page"},{"location":"tutorials/parallelization/","page":"Parallelization","title":"Parallelization","text":"mpiexecjl --project -n 4 julia -e 'include(\"docs/src/srcipts/mpi.jl\")'","category":"page"},{"location":"release-notes/","page":"Release Notes","title":"Release Notes","text":"EditURL = \"https://github.com/juliatrustworthyai/CounterfactualExplanations.jl/blob/master/CHANGELOG.md\"","category":"page"},{"location":"release-notes/#Changelog","page":"Release 
Notes","title":"Changelog","text":"","category":"section"},{"location":"release-notes/","page":"Release Notes","title":"Release Notes","text":"All notable changes to this project will be documented in this file.","category":"page"},{"location":"release-notes/","page":"Release Notes","title":"Release Notes","text":"The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.","category":"page"},{"location":"release-notes/","page":"Release Notes","title":"Release Notes","text":"Note: We try to adhere to these practices as of version v1.1.1.","category":"page"},{"location":"release-notes/#Version-[1.3.3]-2024-09-30","page":"Release Notes","title":"Version [1.3.3] - 2024-09-30","text":"","category":"section"},{"location":"release-notes/#Changed","page":"Release Notes","title":"Changed","text":"","category":"section"},{"location":"release-notes/","page":"Release Notes","title":"Release Notes","text":"Fixed a remaining bug in NeuroTreeExt extensions. #475","category":"page"},{"location":"release-notes/#Version-[1.3.2]-2024-09-24","page":"Release Notes","title":"Version [1.3.2] - 2024-09-24","text":"","category":"section"},{"location":"release-notes/#Added","page":"Release Notes","title":"Added","text":"","category":"section"},{"location":"release-notes/","page":"Release Notes","title":"Release Notes","text":"Added support for using a random forest as a surrogate model for the T-CREx generator. #483","category":"page"},{"location":"release-notes/#Changed-2","page":"Release Notes","title":"Changed","text":"","category":"section"},{"location":"release-notes/","page":"Release Notes","title":"Release Notes","text":"Improved the T-CREx documentation further by bringing example even closer to the example in the paper. #483\nInclude citation linking to ICML paper in T-CREx documentation and docstrings. #480","category":"page"},{"location":"release-notes/#Version-[1.3.1]-2024-09-24","page":"Release Notes","title":"Version [1.3.1] - 2024-09-24","text":"","category":"section"},{"location":"release-notes/#Changed-3","page":"Release Notes","title":"Changed","text":"","category":"section"},{"location":"release-notes/","page":"Release Notes","title":"Release Notes","text":"Fixed a remaining bug in NeuroTreeExt extensions. #475","category":"page"},{"location":"release-notes/#Version-[1.3.0]-2024-09-16","page":"Release Notes","title":"Version [1.3.0] - 2024-09-16","text":"","category":"section"},{"location":"release-notes/#Changed-4","page":"Release Notes","title":"Changed","text":"","category":"section"},{"location":"release-notes/","page":"Release Notes","title":"Release Notes","text":"Fixed bug in NeuroTreeExt extensions. #475","category":"page"},{"location":"release-notes/#Added-2","page":"Release Notes","title":"Added","text":"","category":"section"},{"location":"release-notes/","page":"Release Notes","title":"Release Notes","text":"Added basic support for the T-CREx counterfactual generator. #473\nAdded docstrings for package extensions to documentation. #475","category":"page"},{"location":"release-notes/#Version-[1.2.0]-2024-09-10","page":"Release Notes","title":"Version [1.2.0] - 2024-09-10","text":"","category":"section"},{"location":"release-notes/#Added-3","page":"Release Notes","title":"Added","text":"","category":"section"},{"location":"release-notes/","page":"Release Notes","title":"Release Notes","text":"Added documentation for generating counterfactuals consistent with the MINT framework. #467\nAdded tests for new evaluation metrics and JEM extension. 
#471\nAdded support for gradient-based causal algorithm-recourse (MNIT) as described in Karimi et al. (2020). This incorporates an input encoder that is based on a Structural Causal Model #457 \nAdded out-of-the-box support for training joint energy models (JEM). #454\nAdded new evaluation metric to measure faithfulness of counterfactual explanations as in Altmeyer et al. (2024). #454\nA tutorial in the documentation (\"Explanation\" section) explaining the faithfulness metric in detail. #454\nAdded support for an energy constraint as in Altmeyer et al. (2024). This is the first step towards adding functionality for ECCCo. #387 ","category":"page"},{"location":"release-notes/#Changed-5","page":"Release Notes","title":"Changed","text":"","category":"section"},{"location":"release-notes/","page":"Release Notes","title":"Release Notes","text":"The fitresult field of Model now takes a concrete Fitresult type, for which some basic methods have been defined. This mutable struct has a field called other that accepts a dictionary Dict that can be filled with additional objects. #454\nRegenerated pre-trained model artifacts. #454\nUpdated the tutorial on \"Handling Data\". #454","category":"page"},{"location":"release-notes/#Removed","page":"Release Notes","title":"Removed","text":"","category":"section"},{"location":"release-notes/","page":"Release Notes","title":"Release Notes","text":"Removed bug in find_potential_neighbours method. #454","category":"page"},{"location":"release-notes/#Version-[1.1.6]-2024-05-19","page":"Release Notes","title":"Version [1.1.6] - 2024-05-19","text":"","category":"section"},{"location":"release-notes/#Removed-2","page":"Release Notes","title":"Removed","text":"","category":"section"},{"location":"release-notes/","page":"Release Notes","title":"Release Notes","text":"Removed the call to the Iris function in the test suite because of HTTPs issues. #452\nRemoved the mlj_models_catalogue because it served no obvious purpose. In the future, we may instead add meta information to the all_models_catalogue. #444","category":"page"},{"location":"release-notes/#Added-4","page":"Release Notes","title":"Added","text":"","category":"section"},{"location":"release-notes/","page":"Release Notes","title":"Release Notes","text":"New general Model struct that wraps empty concrete types. This adds a more general interface that is still flexible enough by simply using multiple dispatch on the empty concrete types. #444\nA new incompatible(::AbstractGenerator, ::AbstractCounterfactualExplanation) function has been added to avoid running a counterfactual search if the generator is incompatible with any other specification (e.g. the model). #444","category":"page"},{"location":"release-notes/#Changed-6","page":"Release Notes","title":"Changed","text":"","category":"section"},{"location":"release-notes/","page":"Release Notes","title":"Release Notes","text":"No longer exporting many of the deprecated functions. #452\nUpdated pre-trained model artifacts. #444\nSome function signatures have been deprecated, e.g. NeuroTreeModel to NeuroTree, LaplaceReduxModel to LaplaceNN. #444\nSupport for DecisionTree.jl models and the FeatureTweakGenerator have been moved to an extension (DecisionTreeExt). #444\nUpdates to NeuroTreeModels extensions to incorporate breaking changes to package. #444\nNo longer running alloc test on Windows. #441\nSlight change to doctests. 
#447","category":"page"},{"location":"release-notes/#Version-[v1.1.5](https://github.com/juliatrustworthyai/CounterfactualExplanations.jl/releases/tag/v1.1.5)-2024-04-30","page":"Release Notes","title":"Version v1.1.5 - 2024-04-30","text":"","category":"section"},{"location":"release-notes/#Added-5","page":"Release Notes","title":"Added","text":"","category":"section"},{"location":"release-notes/","page":"Release Notes","title":"Release Notes","text":"Unit tests: adds a simple performance benchmark to test that for a small problem, generating a counterfactual using the generic generator takes at most 4700 allocations. Only run on julia v1.10 and higher. #436","category":"page"},{"location":"release-notes/#Changed-7","page":"Release Notes","title":"Changed","text":"","category":"section"},{"location":"release-notes/","page":"Release Notes","title":"Release Notes","text":"The find_potential_neighbours is now only triggered if one of the penalties of the generator requires access to samples from the target domain. This improves scalability because calling the function can be computationally costly (forward-pass). #436 \nThe target variable encodings are now handled more efficiently. Previously certain tasks were repeated, which was not necessary. #436","category":"page"},{"location":"release-notes/#Removed-3","page":"Release Notes","title":"Removed","text":"","category":"section"},{"location":"release-notes/","page":"Release Notes","title":"Release Notes","text":"Removed the assertion checking that the model ever predicts the target value. While this assertion is useful, it is not essential. For large enough models and datasets, this forward pass can be very costly. #436\nRemoved redundant distance_from_targets function. #436","category":"page"},{"location":"release-notes/#Version-[v1.1.4](https://github.com/juliatrustworthyai/CounterfactualExplanations.jl/releases/tag/v1.1.4)-2024-04-25","page":"Release Notes","title":"Version v1.1.4 - 2024-04-25","text":"","category":"section"},{"location":"release-notes/#Changed-8","page":"Release Notes","title":"Changed","text":"","category":"section"},{"location":"release-notes/","page":"Release Notes","title":"Release Notes","text":"Refactors the encodings and decodings such that it is now more streamlined. Instead of conditional statements, encodings are now dispatched on the type of a new unifying data.input_encoder field. #432\nRefactors the check for redundancy. This is now based on the convergence type and done right before the counterfactual search begins, if not redundant. #432","category":"page"},{"location":"release-notes/#Added-6","page":"Release Notes","title":"Added","text":"","category":"section"},{"location":"release-notes/","page":"Release Notes","title":"Release Notes","text":"Added additional unit tests. #437","category":"page"},{"location":"release-notes/#Version-[v1.1.3](https://github.com/juliatrustworthyai/CounterfactualExplanations.jl/releases/tag/v1.1.3)-2024-04-17","page":"Release Notes","title":"Version v1.1.3 - 2024-04-17","text":"","category":"section"},{"location":"release-notes/#Added-7","page":"Release Notes","title":"Added","text":"","category":"section"},{"location":"release-notes/","page":"Release Notes","title":"Release Notes","text":"Adds a section on Convergence to the documentation, Changelog.jl functionality and a few doc tests. 
#429","category":"page"},{"location":"release-notes/#Changed-9","page":"Release Notes","title":"Changed","text":"","category":"section"},{"location":"release-notes/","page":"Release Notes","title":"Release Notes","text":"Changes style of taking gradients for the counterfactual search from implicit to explicit. #430\nRemoved all implicit imports. #430","category":"page"},{"location":"release-notes/#Removed-4","page":"Release Notes","title":"Removed","text":"","category":"section"},{"location":"release-notes/","page":"Release Notes","title":"Release Notes","text":"Removed CUDA.jl dependency, because redundant. #430\nRemoved Parameters.jl dependency, because redundant. #430","category":"page"},{"location":"release-notes/#Version-[v1.1.2](https://github.com/juliatrustworthyai/CounterfactualExplanations.jl/releases/tag/v1.1.2)-2024-04-16","page":"Release Notes","title":"Version v1.1.2 - 2024-04-16","text":"","category":"section"},{"location":"release-notes/#Changed-10","page":"Release Notes","title":"Changed","text":"","category":"section"},{"location":"release-notes/","page":"Release Notes","title":"Release Notes","text":"Replaces the GIF in the README and introduction of docs for a static image. ","category":"page"},{"location":"release-notes/#Version-[v1.1.1](https://github.com/juliatrustworthyai/CounterfactualExplanations.jl/releases/tag/v1.1.1)-2024-04-15","page":"Release Notes","title":"Version v1.1.1 - 2024-04-15","text":"","category":"section"},{"location":"release-notes/#Added-8","page":"Release Notes","title":"Added","text":"","category":"section"},{"location":"release-notes/","page":"Release Notes","title":"Release Notes","text":"Added tests for LaplaceRedux extension. Bumped upper compat bound for LaplaceRedux.jl. #428","category":"page"},{"location":"explanation/generators/revise/","page":"REVISE","title":"REVISE","text":"CurrentModule = CounterfactualExplanations ","category":"page"},{"location":"explanation/generators/revise/#REVISEGenerator","page":"REVISE","title":"REVISEGenerator","text":"","category":"section"},{"location":"explanation/generators/revise/","page":"REVISE","title":"REVISE","text":"REVISE is a Latent Space generator introduced by Joshi et al. (2019).","category":"page"},{"location":"explanation/generators/revise/#Description","page":"REVISE","title":"Description","text":"","category":"section"},{"location":"explanation/generators/revise/","page":"REVISE","title":"REVISE","text":"The current consensus in the literature is that Counterfactual Explanations should be realistic: the generated counterfactuals should look like they were generated by the data-generating process (DGP) that governs the problem at hand. With respect to Algorithmic Recourse, it is certainly true that counterfactuals should be realistic in order to be actionable for individuals.[1] To address this need, researchers have come up with various approaches in recent years. Among the most popular approaches is Latent Space Search, which was first proposed in Joshi et al. (2019): instead of traversing the feature space directly, this approach relies on a separate generative model that learns a latent space representation of the DGP. 
Assuming the generative model is well-specified, access to the learned latent embeddings of the data comes with two advantages:","category":"page"},{"location":"explanation/generators/revise/","page":"REVISE","title":"REVISE","text":"Since the learned DGP is encoded in the latent space, the generated counterfactuals will respect the learned representation of the data. In practice, this means that counterfactuals will be realistic.\nThe latent space is typically a compressed (i.e. lower-dimensional) version of the feature space. This makes the counterfactual search less costly.","category":"page"},{"location":"explanation/generators/revise/","page":"REVISE","title":"REVISE","text":"There are also certain disadvantages though:","category":"page"},{"location":"explanation/generators/revise/","page":"REVISE","title":"REVISE","text":"Learning generative models is (typically) an expensive task, which may well outweigh the benefits associated with ultimately traversing a lower-dimensional space.\nIf the generative model is poorly specified, this will affect the quality of the counterfactuals.[2]","category":"page"},{"location":"explanation/generators/revise/","page":"REVISE","title":"REVISE","text":"Nonetheless, traversing latent embeddings is a powerful idea that may be very useful depending on the specific context. This tutorial introduces the concept and how it is implemented in this package.","category":"page"},{"location":"explanation/generators/revise/#Usage","page":"REVISE","title":"Usage","text":"","category":"section"},{"location":"explanation/generators/revise/","page":"REVISE","title":"REVISE","text":"The approach can be used in our package as follows:","category":"page"},{"location":"explanation/generators/revise/","page":"REVISE","title":"REVISE","text":"generator = REVISEGenerator()\nce = generate_counterfactual(x, target, counterfactual_data, M, generator)\nplot(ce)","category":"page"},{"location":"explanation/generators/revise/","page":"REVISE","title":"REVISE","text":"(Image: )","category":"page"},{"location":"explanation/generators/revise/#Worked-2D-Examples","page":"REVISE","title":"Worked 2D Examples","text":"","category":"section"},{"location":"explanation/generators/revise/","page":"REVISE","title":"REVISE","text":"Below we load 2D data, train a VAE on it, and plot the original samples against their reconstructions.","category":"page"},{"location":"explanation/generators/revise/","page":"REVISE","title":"REVISE","text":"# output: true\n\ncounterfactual_data = CounterfactualData(load_overlapping()...)\nX = counterfactual_data.X\ny = counterfactual_data.y\ninput_dim = size(X, 1)\nusing CounterfactualExplanations.GenerativeModels: VAE, train!, reconstruct\nvae = VAE(input_dim; nll=Flux.Losses.mse, epochs=100, λ=0.01, latent_dim=2, hidden_dim=32)\nflux_training_params.verbose = true\ntrain!(vae, X)\nX̂ = reconstruct(vae, X)[1]\np0 = scatter(X[1, :], X[2, :], color=:blue, label=\"Original\", xlab=\"x₁\", ylab=\"x₂\")\nscatter!(X̂[1, :], X̂[2, :], color=:orange, label=\"Reconstructed\", xlab=\"x₁\", ylab=\"x₂\")\np1 = scatter(X[1, :], X̂[1, :], color=:purple, label=\"\", xlab=\"x₁\", ylab=\"x̂₁\")\np2 = scatter(X[2, :], X̂[2, :], color=:purple, label=\"\", xlab=\"x₂\", ylab=\"x̂₂\")\nplt2 = plot(p1,p2, layout=(1,2), size=(800, 400))\nplot(p0, plt2, layout=(2,1), size=(800, 600))","category":"page"},{"location":"explanation/generators/revise/","page":"REVISE","title":"REVISE","text":"Next, we train a simple MLP for the classification task. 
Then we determine a target and factual class for our counterfactual search and select a random factual instance to explain.","category":"page"},{"location":"explanation/generators/revise/","page":"REVISE","title":"REVISE","text":"M = fit_model(counterfactual_data, :MLP)\ntarget = 2\nfactual = 1\nchosen = rand(findall(predict_label(M, counterfactual_data) .== factual))\nx = select_factual(counterfactual_data, chosen)","category":"page"},{"location":"explanation/generators/revise/","page":"REVISE","title":"REVISE","text":"Finally, we generate and visualize the generated counterfactual:","category":"page"},{"location":"explanation/generators/revise/","page":"REVISE","title":"REVISE","text":"# Search:\ngenerator = REVISEGenerator()\nce = generate_counterfactual(x, target, counterfactual_data, M, generator)\nplot(ce)","category":"page"},{"location":"explanation/generators/revise/","page":"REVISE","title":"REVISE","text":"(Image: )","category":"page"},{"location":"explanation/generators/revise/#3D-Example","page":"REVISE","title":"3D Example","text":"","category":"section"},{"location":"explanation/generators/revise/","page":"REVISE","title":"REVISE","text":"To illustrate the notion of Latent Space search, let’s look at an example involving 3-dimensional input data, which we can still visualize. The code chunk below loads the data and implements the counterfactual search.","category":"page"},{"location":"explanation/generators/revise/","page":"REVISE","title":"REVISE","text":"# Data and Classifier:\ncounterfactual_data = CounterfactualData(load_blobs(k=3)...)\nX = counterfactual_data.X\nys = counterfactual_data.output_encoder.labels.refs\nM = fit_model(counterfactual_data, :MLP)\n\n# Randomly selected factual:\nx = select_factual(counterfactual_data,rand(1:size(counterfactual_data.X,2)))\ny = predict_label(M, counterfactual_data, x)[1]\ntarget = counterfactual_data.y_levels[counterfactual_data.y_levels .!= y][1]\n\n# Generate recourse:\nce = generate_counterfactual(x, target, counterfactual_data, M, generator)","category":"page"},{"location":"explanation/generators/revise/","page":"REVISE","title":"REVISE","text":"The figure below demonstrates the idea of searching counterfactuals in a lower-dimensional latent space: on the left, we can see the counterfactual search in the 3-dimensional feature space, while on the right we can see the corresponding search in the latent space.","category":"page"},{"location":"explanation/generators/revise/","page":"REVISE","title":"REVISE","text":"(Image: )","category":"page"},{"location":"explanation/generators/revise/#MNIST-data","page":"REVISE","title":"MNIST data","text":"","category":"section"},{"location":"explanation/generators/revise/","page":"REVISE","title":"REVISE","text":"Let’s carry the ideas introduced above over to a more complex example. 
The code below loads MNIST data as well as a pre-trained classifier and generative model for the data.","category":"page"},{"location":"explanation/generators/revise/","page":"REVISE","title":"REVISE","text":"using CounterfactualExplanations.Models: load_mnist_mlp, load_mnist_ensemble, load_mnist_vae\ncounterfactual_data = CounterfactualData(load_mnist()...)\nX, y = CounterfactualExplanations.DataPreprocessing.unpack_data(counterfactual_data)\ninput_dim, n_obs = size(counterfactual_data.X)\nM = load_mnist_mlp()\nvae = load_mnist_vae()","category":"page"},{"location":"explanation/generators/revise/","page":"REVISE","title":"REVISE","text":"The F1-score of our pre-trained image classifier on test data is: 0.94","category":"page"},{"location":"explanation/generators/revise/","page":"REVISE","title":"REVISE","text":"Before continuing, we supply the pre-trained generative model to our data container:","category":"page"},{"location":"explanation/generators/revise/","page":"REVISE","title":"REVISE","text":"counterfactual_data.input_encoder = vae # assign generative model","category":"page"},{"location":"explanation/generators/revise/","page":"REVISE","title":"REVISE","text":"Now let’s define a factual and target label:","category":"page"},{"location":"explanation/generators/revise/","page":"REVISE","title":"REVISE","text":"# Randomly selected factual:\nRandom.seed!(2023)\nfactual_label = 8\nx = reshape(X[:,rand(findall(predict_label(M, counterfactual_data).==factual_label))],input_dim,1)\ntarget = 3\nfactual = predict_label(M, counterfactual_data, x)[1]","category":"page"},{"location":"explanation/generators/revise/","page":"REVISE","title":"REVISE","text":"Using REVISE, we are going to turn a randomly drawn 8 into a 3.","category":"page"},{"location":"explanation/generators/revise/","page":"REVISE","title":"REVISE","text":"The API call is the same as always:","category":"page"},{"location":"explanation/generators/revise/","page":"REVISE","title":"REVISE","text":"γ = 0.95\nconv = \n CounterfactualExplanations.Convergence.DecisionThresholdConvergence(decision_threshold=γ)\n# Define generator:\ngenerator = REVISEGenerator(opt=Flux.Adam(0.1))\n# Generate recourse:\nce = generate_counterfactual(x, target, counterfactual_data, M, generator; convergence=conv)","category":"page"},{"location":"explanation/generators/revise/","page":"REVISE","title":"REVISE","text":"The chart below shows the results:","category":"page"},{"location":"explanation/generators/revise/","page":"REVISE","title":"REVISE","text":"(Image: )","category":"page"},{"location":"explanation/generators/revise/#References","page":"REVISE","title":"References","text":"","category":"section"},{"location":"explanation/generators/revise/","page":"REVISE","title":"REVISE","text":"Joshi, Shalmali, Oluwasanmi Koyejo, Warut Vijitbenjaronk, Been Kim, and Joydeep Ghosh. 2019. “Towards Realistic Individual Recourse and Actionable Explanations in Black-Box Decision Making Systems.” https://arxiv.org/abs/1907.09615.","category":"page"},{"location":"explanation/generators/revise/","page":"REVISE","title":"REVISE","text":"[1] In general, we believe that there may be a trade-off between creating counterfactuals that respect the DGP vs. 
counterfactuals that reflect the behaviour of the black-box model in question - both accurately and completely.","category":"page"},{"location":"explanation/generators/revise/","page":"REVISE","title":"REVISE","text":"[2] We believe that there is another potentially crucial disadvantage of relying on a separate generative model: it reallocates the task of learning realistic explanations for the data from the black-box model to the generative model.","category":"page"},{"location":"explanation/generators/overview/","page":"Overview","title":"Overview","text":"CurrentModule = CounterfactualExplanations ","category":"page"},{"location":"explanation/generators/overview/#generators_explanation","page":"Overview","title":"Counterfactual Generators","text":"","category":"section"},{"location":"explanation/generators/overview/","page":"Overview","title":"Overview","text":"Counterfactual generators form the very core of this package. The generator_catalogue can be used to inspect the available generators:","category":"page"},{"location":"explanation/generators/overview/","page":"Overview","title":"Overview","text":"generator_catalogue","category":"page"},{"location":"explanation/generators/overview/","page":"Overview","title":"Overview","text":"Dict{Symbol, Any} with 11 entries:\n :gravitational => GravitationalGenerator\n :growing_spheres => GrowingSpheresGenerator\n :revise => REVISEGenerator\n :clue => CLUEGenerator\n :probe => ProbeGenerator\n :dice => DiCEGenerator\n :feature_tweak => FeatureTweakGenerator\n :claproar => ClaPROARGenerator\n :wachter => WachterGenerator\n :generic => GenericGenerator\n :greedy => GreedyGenerator","category":"page"},{"location":"explanation/generators/overview/","page":"Overview","title":"Overview","text":"The following sections provide brief descriptions of all of them.","category":"page"},{"location":"explanation/generators/overview/#Gradient-based-Counterfactual-Generators","page":"Overview","title":"Gradient-based Counterfactual Generators","text":"","category":"section"},{"location":"explanation/generators/overview/","page":"Overview","title":"Overview","text":"At the time of writing, most of the available generators are gradient-based: that is, counterfactuals are searched through gradient descent. In Altmeyer et al. (2023) we lay out a general methodological framework that can be applied to all of these generators:","category":"page"},{"location":"explanation/generators/overview/","page":"Overview","title":"Overview","text":"beginaligned\nmathbfs^prime = arg min_mathbfs^prime in mathcalS left textyloss(M(f(mathbfs^prime))y^*)+ lambda textcost(f(mathbfs^prime)) right \nendaligned ","category":"page"},{"location":"explanation/generators/overview/","page":"Overview","title":"Overview","text":"“Here mathbfs^prime=lefts_k^primeright_K is a K-dimensional array of counterfactual states and f mathcalS mapsto mathcalX maps from the counterfactual state space to the feature space.” (Altmeyer et al. 2023)","category":"page"},{"location":"explanation/generators/overview/","page":"Overview","title":"Overview","text":"For most generators, the state space is the feature space (f is the identity function) and the number of counterfactuals K is one. Latent Space generators instead search counterfactuals in some latent space mathcalS, as sketched below. 
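The following is a minimal conceptual sketch of a single latent-space update, not the package internals: the names latent_step, decoder and yloss are placeholders, and Flux-style gradients are assumed.

```julia
# Conceptual sketch of one latent-space search step (hypothetical names,
# not the package internals): gradients are taken with respect to the
# latent state z and candidates are decoded back into feature space.
using Flux

function latent_step(decoder, yloss, z; η=0.1)
    g = Flux.gradient(z) do z
        yloss(decoder(z))  # stands in for yloss(M(f(s′)), y⁺) with f = decoder
    end
    return z .- η .* g[1]  # one gradient-descent step in the latent space S
end
```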
In this case, f corresponds to the decoder part of the generative model, that is the function that maps back from the latent space to inputs.","category":"page"},{"location":"explanation/generators/overview/#References","page":"Overview","title":"References","text":"","category":"section"},{"location":"explanation/generators/overview/","page":"Overview","title":"Overview","text":"Altmeyer, Patrick, Giovan Angela, Aleksander Buszydlik, Karol Dobiczek, Arie van Deursen, and Cynthia Liem. 2023. “Endogenous Macrodynamics in Algorithmic Recourse.” In First IEEE Conference on Secure and Trustworthy Machine Learning. https://doi.org/10.1109/satml54575.2023.00036.","category":"page"},{"location":"explanation/generators/gravitational/","page":"Gravitational","title":"Gravitational","text":"CurrentModule = CounterfactualExplanations ","category":"page"},{"location":"explanation/generators/gravitational/#GravitationalGenerator","page":"Gravitational","title":"GravitationalGenerator","text":"","category":"section"},{"location":"explanation/generators/gravitational/","page":"Gravitational","title":"Gravitational","text":"The GravitationalGenerator was introduced in Altmeyer et al. (2023). It is named so because it generates counterfactuals that gravitate towards some sensible point in the target domain.","category":"page"},{"location":"explanation/generators/gravitational/#Description","page":"Gravitational","title":"Description","text":"","category":"section"},{"location":"explanation/generators/gravitational/","page":"Gravitational","title":"Gravitational","text":"Altmeyer et al. (2023) extend the general framework as follows,","category":"page"},{"location":"explanation/generators/gravitational/","page":"Gravitational","title":"Gravitational","text":"beginaligned\nmathbfs^prime = arg min_mathbfs^prime in mathcalS textyloss(M(f(mathbfs^prime))y^*) + lambda_1 textcost(f(mathbfs^prime)) + lambda_2 textextcost(f(mathbfs^prime)) \nendaligned ","category":"page"},{"location":"explanation/generators/gravitational/","page":"Gravitational","title":"Gravitational","text":"where textcost(f(mathbfs^prime)) denotes the proxy for costs faced by the individual. “The newly introduced term textextcost(f(mathbfs^prime)) is meant to capture and address external costs incurred by the collective of individuals in response to changes in mathbfs^prime.” (Altmeyer et al. 2023)","category":"page"},{"location":"explanation/generators/gravitational/","page":"Gravitational","title":"Gravitational","text":"For the GravitationalGenerator we have,","category":"page"},{"location":"explanation/generators/gravitational/","page":"Gravitational","title":"Gravitational","text":"beginaligned\ntextextcost(f(mathbfs^prime)) = textdist(f(mathbfs^prime)barx^*) \nendaligned","category":"page"},{"location":"explanation/generators/gravitational/","page":"Gravitational","title":"Gravitational","text":"where barx is some sensible point in the target domain, for example, the subsample average barx^*=textmean(x), x in mathcalD_1.","category":"page"},{"location":"explanation/generators/gravitational/","page":"Gravitational","title":"Gravitational","text":"There is a tradeoff then, between the distance of counterfactuals from their factual value and the chosen point in the target domain. 
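To make the external cost term concrete, here is a minimal sketch of how such a penalty could be computed; the helper name external_cost, the choice of an L2 distance and the assumption that target-class samples are stored as the columns of X_target are ours, not the package implementation.

```julia
# Minimal sketch (not the package implementation): the external cost is a
# distance between the (decoded) counterfactual and some sensible point in
# the target domain, here the subsample average x̄* of the target class.
using Statistics, LinearAlgebra

function external_cost(x_prime::AbstractVector, X_target::AbstractMatrix)
    x_bar = vec(mean(X_target; dims=2))  # x̄*: mean over target-class columns
    return norm(x_prime .- x_bar)        # dist(f(s′), x̄*)
end
```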
The chart below illustrates how the counterfactual outcome changes as the penalty lambda_2 on the distance to the point in the target domain is increased from left to right (holding the other penalty term constant).","category":"page"},{"location":"explanation/generators/gravitational/","page":"Gravitational","title":"Gravitational","text":"(Image: )","category":"page"},{"location":"explanation/generators/gravitational/#Usage","page":"Gravitational","title":"Usage","text":"","category":"section"},{"location":"explanation/generators/gravitational/","page":"Gravitational","title":"Gravitational","text":"The approach can be used in our package as follows:","category":"page"},{"location":"explanation/generators/gravitational/","page":"Gravitational","title":"Gravitational","text":"generator = GravitationalGenerator()\nce = generate_counterfactual(x, target, counterfactual_data, M, generator)\ndisplay(plot(ce))","category":"page"},{"location":"explanation/generators/gravitational/","page":"Gravitational","title":"Gravitational","text":"(Image: )","category":"page"},{"location":"explanation/generators/gravitational/#Comparison-to-GenericGenerator","page":"Gravitational","title":"Comparison to GenericGenerator","text":"","category":"section"},{"location":"explanation/generators/gravitational/","page":"Gravitational","title":"Gravitational","text":"The figure below compares the outcome for the GenericGenerator and the GravitationalGenerator.","category":"page"},{"location":"explanation/generators/gravitational/","page":"Gravitational","title":"Gravitational","text":"(Image: )","category":"page"},{"location":"explanation/generators/gravitational/#References","page":"Gravitational","title":"References","text":"","category":"section"},{"location":"explanation/generators/gravitational/","page":"Gravitational","title":"Gravitational","text":"Altmeyer, Patrick, Giovan Angela, Aleksander Buszydlik, Karol Dobiczek, Arie van Deursen, and Cynthia Liem. 2023. “Endogenous Macrodynamics in Algorithmic Recourse.” In First IEEE Conference on Secure and Trustworthy Machine Learning. https://doi.org/10.1109/satml54575.2023.00036.","category":"page"}] +[{"location":"tutorials/simple_example/","page":"Simple Example","title":"Simple Example","text":"CurrentModule = CounterfactualExplanations ","category":"page"},{"location":"tutorials/simple_example/#Simple-Example","page":"Simple Example","title":"Simple Example","text":"","category":"section"},{"location":"tutorials/simple_example/","page":"Simple Example","title":"Simple Example","text":"In this tutorial, we will go through a simple example involving synthetic data and a generic counterfactual generator.","category":"page"},{"location":"tutorials/simple_example/#Data-and-Classifier","page":"Simple Example","title":"Data and Classifier","text":"","category":"section"},{"location":"tutorials/simple_example/","page":"Simple Example","title":"Simple Example","text":"Below we generate some linearly separable data and fit a simple MLP classifier with batch normalization to it. 
For more information on generating data and models, refer to the Handling Data and Handling Models tutorials respectively.","category":"page"},{"location":"tutorials/simple_example/","page":"Simple Example","title":"Simple Example","text":"# Counterfactual data and model:\nflux_training_params.batchsize = 10\ndata = TaijaData.load_linearly_separable()\ncounterfactual_data = DataPreprocessing.CounterfactualData(data...)\ncounterfactual_data.standardize = true\nM = fit_model(counterfactual_data, :MLP, batch_norm=true)","category":"page"},{"location":"tutorials/simple_example/#Counterfactual-Search","page":"Simple Example","title":"Counterfactual Search","text":"","category":"section"},{"location":"tutorials/simple_example/","page":"Simple Example","title":"Simple Example","text":"Next, we determine a target and factual class for our counterfactual search and select a random factual instance to explain.","category":"page"},{"location":"tutorials/simple_example/","page":"Simple Example","title":"Simple Example","text":"target = 2\nfactual = 1\nchosen = rand(findall(predict_label(M, counterfactual_data) .== factual))\nx = select_factual(counterfactual_data, chosen)","category":"page"},{"location":"tutorials/simple_example/","page":"Simple Example","title":"Simple Example","text":"Finally, we generate and visualize the generated counterfactual:","category":"page"},{"location":"tutorials/simple_example/","page":"Simple Example","title":"Simple Example","text":"# Search:\ngenerator = WachterGenerator()\nce = generate_counterfactual(x, target, counterfactual_data, M, generator)\nplot(ce)","category":"page"},{"location":"tutorials/simple_example/","page":"Simple Example","title":"Simple Example","text":"(Image: )","category":"page"},{"location":"how_to_guides/custom_generators/","page":"... add custom generators","title":"... add custom generators","text":"CurrentModule = CounterfactualExplanations ","category":"page"},{"location":"how_to_guides/custom_generators/#How-to-add-Custom-Generators","page":"... add custom generators","title":"How to add Custom Generators","text":"","category":"section"},{"location":"how_to_guides/custom_generators/","page":"... add custom generators","title":"... add custom generators","text":"As we will see in this short tutorial, building custom counterfactual generators is straightforward. We hope that this will facilitate contributions from the community.","category":"page"},{"location":"how_to_guides/custom_generators/#Generic-generator-with-dropout","page":"... add custom generators","title":"Generic generator with dropout","text":"","category":"section"},{"location":"how_to_guides/custom_generators/","page":"... add custom generators","title":"... add custom generators","text":"To illustrate how custom generators can be implemented, we will consider a simple example of a generator that extends the functionality of our GenericGenerator. We have noted elsewhere that the effectiveness of counterfactual explanations depends to some degree on the quality of the fitted model. Another, perhaps trivial, thing to note is that counterfactual explanations are not unique: there are potentially many valid counterfactual paths. One interesting (or silly) idea following these two observations might be to introduce some form of regularization in the counterfactual search. For example, we could use dropout to randomly switch features on and off in each iteration. 
Without dwelling further on the usefulness of this idea, let us see how it can be implemented.","category":"page"},{"location":"how_to_guides/custom_generators/","page":"... add custom generators","title":"... add custom generators","text":"The first code chunk below implements two important steps: 1) create an abstract subtype of the AbstractGradientBasedGenerator and 2) create a constructor similar to that of the GenericGenerator, but with one additional field for the probability of dropout.","category":"page"},{"location":"how_to_guides/custom_generators/","page":"... add custom generators","title":"... add custom generators","text":"# Abstract subtype:\nabstract type AbstractDropoutGenerator <: AbstractGradientBasedGenerator end\n\n# Constructor:\nstruct DropoutGenerator <: AbstractDropoutGenerator\n loss::Function # loss function\n penalty::Function\n λ::AbstractFloat # strength of penalty\n latent_space::Bool\n opt::Any # optimizer\n generative_model_params::NamedTuple\n p_dropout::AbstractFloat # dropout rate\nend\n\n# Instantiate:\ngenerator = DropoutGenerator(\n Flux.logitbinarycrossentropy,\n CounterfactualExplanations.Objectives.distance_l1,\n 0.1,\n false,\n Flux.Optimise.Descent(0.1),\n (;),\n 0.5,\n)","category":"page"},{"location":"how_to_guides/custom_generators/","page":"... add custom generators","title":"... add custom generators","text":"Next, we define how feature perturbations are generated for our dropout generator: in particular, we extend the relevant function through a method that implements the dropout logic.","category":"page"},{"location":"how_to_guides/custom_generators/","page":"... add custom generators","title":"... add custom generators","text":"using CounterfactualExplanations.Generators\nusing StatsBase\nfunction Generators.generate_perturbations(\n generator::AbstractDropoutGenerator, \n ce::CounterfactualExplanation\n)\n s′ = deepcopy(ce.s′)\n new_s′ = Generators.propose_state(generator, ce)\n Δs′ = new_s′ - s′ # gradient step\n\n # Dropout:\n set_to_zero = sample(\n 1:length(Δs′),\n Int(round(generator.p_dropout*length(Δs′))),\n replace=false\n )\n Δs′[set_to_zero] .= 0\n return Δs′\nend","category":"page"},{"location":"how_to_guides/custom_generators/","page":"... add custom generators","title":"... add custom generators","text":"Finally, we proceed to generate counterfactuals in the same way we always do:","category":"page"},{"location":"how_to_guides/custom_generators/","page":"... add custom generators","title":"... add custom generators","text":"# Data and Classifier:\nM = fit_model(counterfactual_data, :DeepEnsemble)\n\n# Factual and Target:\nyhat = predict_label(M, counterfactual_data)\ntarget = 2 # target label\ncandidates = findall(vec(yhat) .!= target)\nchosen = rand(candidates)\nx = select_factual(counterfactual_data, chosen)\n\n# Counterfactual search:\nce = generate_counterfactual(\n x, target, counterfactual_data, M, generator;\n num_counterfactuals=5)\n\nplot(ce)","category":"page"},{"location":"how_to_guides/custom_generators/","page":"... add custom generators","title":"... 
add custom generators","text":"(Image: )","category":"page"},{"location":"extensions/laplace_redux/","page":"LaplaceRedux","title":"LaplaceRedux","text":"CurrentModule = CounterfactualExplanations ","category":"page"},{"location":"extensions/laplace_redux/#[LaplaceRedux.jl](https://github.com/JuliaTrustworthyAI/LaplaceRedux.jl)","page":"LaplaceRedux","title":"LaplaceRedux.jl","text":"","category":"section"},{"location":"extensions/laplace_redux/","page":"LaplaceRedux","title":"LaplaceRedux","text":"LaplaceRedux.jl is one of Taija’s own packages that provides a framework for Effortless Bayesian Deep Learning through Laplace Approximation for Flux.jl neural networks. The methodology was first proposed by Immer, Korzepa, and Bauer (2020) and implemented in Python by Daxberger et al. (2021). This is relevant to the work on counterfactual explanations (CE), because research has shown that counterfactual explanations for Bayesian models are typically more plausible, because Bayesian models are able to capture the uncertainty in the data (Schut et al. 2021).","category":"page"},{"location":"extensions/laplace_redux/","page":"LaplaceRedux","title":"LaplaceRedux","text":"tip: Read More\nTo learn more about Laplace Redux, head over to the official documentation.","category":"page"},{"location":"extensions/laplace_redux/#Example","page":"LaplaceRedux","title":"Example","text":"","category":"section"},{"location":"extensions/laplace_redux/","page":"LaplaceRedux","title":"LaplaceRedux","text":"The extension will be loaded automatically when loading the LaplaceRedux package (assuming the CounterfactualExplanations package is also loaded).","category":"page"},{"location":"extensions/laplace_redux/","page":"LaplaceRedux","title":"LaplaceRedux","text":"using LaplaceRedux","category":"page"},{"location":"extensions/laplace_redux/","page":"LaplaceRedux","title":"LaplaceRedux","text":"Next, we will fit a neural network with Laplace Approximation to the moons dataset using our standard package API for doing so. 
By default, the Bayesian prior is optimized through empirical Bayes using the LaplaceRedux package.","category":"page"},{"location":"extensions/laplace_redux/","page":"LaplaceRedux","title":"LaplaceRedux","text":"# Fit model to data:\ndata = CounterfactualData(load_moons()...)\nM = fit_model(data, :LaplaceRedux; n_hidden=16)","category":"page"},{"location":"extensions/laplace_redux/","page":"LaplaceRedux","title":"LaplaceRedux","text":"LaplaceReduxExt.LaplaceNN(Laplace(Chain(Dense(2 => 16, relu), Dense(16 => 2)), :classification, :all, nothing, :full, LaplaceRedux.Curvature.GGN(Chain(Dense(2 => 16, relu), Dense(16 => 2)), :classification, Flux.Losses.logitcrossentropy, Array{Float32}[[-1.3098596 0.59241515; 0.91760206 0.02950162; … ; -0.018356863 0.12850936; -0.5381665 -0.7872097], [-0.2581085, -0.90997887, -0.5418944, -0.23735572, 0.81020063, -0.3033359, -0.47902864, -0.6432098, -0.038013518, 0.028280666, 0.009903266, -0.8796683, 0.41090682, 0.011093224, -0.1580453, 0.7911349], [3.092321 -2.4660816 … -0.3446268 -1.465249; -2.9468734 3.167357 … 0.31758657 1.7140366], [-0.3107697, 0.31076983]], 1.0, :all, nothing), 1.0, 0.0, Float32[-1.3098596, 0.91760206, 0.5239727, -1.1579771, -0.851813, -1.9411169, 0.47409698, 0.6679365, 0.8944433, 0.663116 … -0.3172857, 0.15530388, 1.3264753, -0.3506721, -0.3446268, 0.31758657, -1.465249, 1.7140366, -0.3107697, 0.31076983], [0.10530027048093525 0.0 … 0.0 0.0; 0.0 0.10530027048093525 … 0.0 0.0; … ; 0.0 0.0 … 0.10530027048093525 0.0; 0.0 0.0 … 0.0 0.10530027048093525], [0.10066431429751965 0.0 … -0.030656783425475176 0.030656334963944154; 0.0 20.93513766443357 … -2.3185940232360736 2.3185965484008193; … ; -0.030656783425475176 -2.3185940232360736 … 1.0101450999063672 -1.0101448118057204; 0.030656334963944154 2.3185965484008193 … -1.0101448118057204 1.0101451389641771], [1.1006643142975197 0.0 … -0.030656783425475176 0.030656334963944154; 0.0 21.93513766443357 … -2.3185940232360736 2.3185965484008193; … ; -0.030656783425475176 -2.3185940232360736 … 2.0101450999063672 -1.0101448118057204; 0.030656334963944154 2.3185965484008193 … -1.0101448118057204 2.010145138964177], [0.9412600568016627 0.003106911671721699 … 0.003743740333409532 -0.003743452315572739; 0.003106912946573237 0.6539263732691709 … 0.0030385955287734246 -0.0030390041204196414; … ; 0.0037437406323562283 0.003038591829991259 … 0.9624905710233649 0.03750911813897676; -0.0037434526145225856 -0.0030390004216833593 … 0.03750911813898124 0.9624905774453485], 82, 250, 2, 997.8087484836578), :classification_multi)","category":"page"},{"location":"extensions/laplace_redux/","page":"LaplaceRedux","title":"LaplaceRedux","text":"Finally, we select a factual instance and generate a counterfactual explanation for it using the generic gradient-based CE method.","category":"page"},{"location":"extensions/laplace_redux/","page":"LaplaceRedux","title":"LaplaceRedux","text":"# Select a factual instance:\ntarget = 1\nfactual = 0\nchosen = rand(findall(predict_label(M, data) .== factual))\nx = select_factual(data, chosen)\n\n# Generate counterfactual explanation:\nη = 0.01\ngenerator = GenericGenerator(; opt=Descent(η), λ=0.01)\nconv = CounterfactualExplanations.Convergence.DecisionThresholdConvergence(;\n decision_threshold=0.9, max_iter=100\n)\nce = generate_counterfactual(x, target, data, M, generator; convergence=conv)\nplot(ce, alpha=0.1)","category":"page"},{"location":"extensions/laplace_redux/","page":"LaplaceRedux","title":"LaplaceRedux","text":"(Image: 
)","category":"page"},{"location":"extensions/laplace_redux/#References","page":"LaplaceRedux","title":"References","text":"","category":"section"},{"location":"extensions/laplace_redux/","page":"LaplaceRedux","title":"LaplaceRedux","text":"Daxberger, Erik, Agustinus Kristiadi, Alexander Immer, Runa Eschenhagen, Matthias Bauer, and Philipp Hennig. 2021. “Laplace Redux-Effortless Bayesian Deep Learning.” Advances in Neural Information Processing Systems 34.","category":"page"},{"location":"extensions/laplace_redux/","page":"LaplaceRedux","title":"LaplaceRedux","text":"Immer, Alexander, Maciej Korzepa, and Matthias Bauer. 2020. “Improving Predictions of Bayesian Neural Networks via Local Linearization.” https://arxiv.org/abs/2008.08400.","category":"page"},{"location":"extensions/laplace_redux/","page":"LaplaceRedux","title":"LaplaceRedux","text":"Schut, Lisa, Oscar Key, Rory Mc Grath, Luca Costabello, Bogdan Sacaleanu, Yarin Gal, et al. 2021. “Generating Interpretable Counterfactual Explanations By Implicit Minimisation of Epistemic and Aleatoric Uncertainties.” In International Conference on Artificial Intelligence and Statistics, 1756–64. PMLR.","category":"page"},{"location":"contribute/performance/","page":"-","title":"-","text":"Random.seed!(42)\n# Counteractual data and model:\ndata = TaijaData.load_linearly_separable()\ncounterfactual_data = DataPreprocessing.CounterfactualData(data...)\nM = fit_model(counterfactual_data, :Linear)\ntarget = 2\nfactual = 1\nchosen = rand(findall(predict_label(M, counterfactual_data) .== factual))\nx = select_factual(counterfactual_data, chosen)\n\n# Search:\ngenerator = GenericGenerator()\nce = generate_counterfactual(x, target, counterfactual_data, M, generator)","category":"page"},{"location":"contribute/performance/","page":"-","title":"-","text":"data_large = TaijaData.load_linearly_separable(100000)\ncounterfactual_data_large = DataPreprocessing.CounterfactualData(data_large...)","category":"page"},{"location":"contribute/performance/","page":"-","title":"-","text":"@time generate_counterfactual(x, target, counterfactual_data, M, generator)","category":"page"},{"location":"contribute/performance/","page":"-","title":"-","text":"@time generate_counterfactual(x, target, counterfactual_data_large, M, generator)","category":"page"},{"location":"explanation/generators/clue/","page":"CLUE","title":"CLUE","text":"CurrentModule = CounterfactualExplanations ","category":"page"},{"location":"explanation/generators/clue/#CLUEGenerator","page":"CLUE","title":"CLUEGenerator","text":"","category":"section"},{"location":"explanation/generators/clue/","page":"CLUE","title":"CLUE","text":"In this tutorial, we introduce the CLUEGenerator, a counterfactual generator based on the Counterfactual Latent Uncertainty Explanations (CLUE) method proposed by Antorán et al. (2020).","category":"page"},{"location":"explanation/generators/clue/#Description","page":"CLUE","title":"Description","text":"","category":"section"},{"location":"explanation/generators/clue/","page":"CLUE","title":"CLUE","text":"The CLUEGenerator leverages differentiable probabilistic models, such as Bayesian Neural Networks (BNNs), to estimate uncertainty in predictions. It aims to provide interpretable counterfactual explanations by identifying input patterns that lead to predictive uncertainty. 
The generator utilizes a latent variable framework and employs a decoder from a variational autoencoder (VAE) to generate counterfactual samples in latent space.","category":"page"},{"location":"explanation/generators/clue/","page":"CLUE","title":"CLUE","text":"The CLUE algorithm minimizes a loss function that combines uncertainty estimates and the distance between the generated counterfactual and the original input. By optimizing this loss function iteratively, the CLUEGenerator generates counterfactuals that are similar to the original observation but are assigned low uncertainty.","category":"page"},{"location":"explanation/generators/clue/","page":"CLUE","title":"CLUE","text":"The formula for predictive entropy is as follows:","category":"page"},{"location":"explanation/generators/clue/","page":"CLUE","title":"CLUE","text":"beginaligned\nH(y^*x^* D) = - sum_k=1^K p(y^*=c_kx^* D) log p(y^*=c_kx^* D)\nendaligned","category":"page"},{"location":"explanation/generators/clue/#Usage","page":"CLUE","title":"Usage","text":"","category":"section"},{"location":"explanation/generators/clue/","page":"CLUE","title":"CLUE","text":"When using this generator, one must keep in mind that the CLUE algorithm is meant to find a more robust datapoint of the same class: without any additional penalties/losses, the CLUEGenerator is therefore not a counterfactual generator in the strict sense. The generated result will be of the same class as the original input, but a more robust datapoint.","category":"page"},{"location":"explanation/generators/clue/","page":"CLUE","title":"CLUE","text":"CLUE works best for BNNs. The CLUEGenerator can be used with any differentiable probabilistic model, but the results may not be as good as with BNNs.","category":"page"},{"location":"explanation/generators/clue/","page":"CLUE","title":"CLUE","text":"The CLUEGenerator can be used in the following manner:","category":"page"},{"location":"explanation/generators/clue/","page":"CLUE","title":"CLUE","text":"generator = CLUEGenerator()\nM = fit_model(counterfactual_data, :DeepEnsemble)\nconv = CounterfactualExplanations.Convergence.MaxIterConvergence(max_iter=1000)\nce = generate_counterfactual(\n x, target, counterfactual_data, M, generator;\n convergence=conv)\nplot(ce)","category":"page"},{"location":"explanation/generators/clue/","page":"CLUE","title":"CLUE","text":"(Image: )","category":"page"},{"location":"explanation/generators/clue/","page":"CLUE","title":"CLUE","text":"Extra: The CLUE generator can also be applied after a counterfactual has already been found with a different generator. In this case, CLUE can be used to make that counterfactual more robust.","category":"page"},{"location":"explanation/generators/clue/","page":"CLUE","title":"CLUE","text":"Note: The above documentation is based on the information provided in the CLUE paper. Please refer to the original paper for more detailed explanations and implementation specifics.","category":"page"},{"location":"explanation/generators/clue/#References","page":"CLUE","title":"References","text":"","category":"section"},{"location":"explanation/generators/clue/","page":"CLUE","title":"CLUE","text":"Antorán, Javier, Umang Bhatt, Tameem Adel, Adrian Weller, and José Miguel Hernández-Lobato. 2020. 
“Getting a Clue: A Method for Explaining Uncertainty Estimates.” https://arxiv.org/abs/2006.06848.","category":"page"},{"location":"CHANGELOG/#Changelog","page":"Changelog","title":"Changelog","text":"","category":"section"},{"location":"CHANGELOG/","page":"Changelog","title":"Changelog","text":"All notable changes to this project will be documented in this file.","category":"page"},{"location":"CHANGELOG/","page":"Changelog","title":"Changelog","text":"The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.","category":"page"},{"location":"CHANGELOG/","page":"Changelog","title":"Changelog","text":"Note: We try to adhere to these practices as of version [v1.1.1].","category":"page"},{"location":"CHANGELOG/#Version-[1.3.4]-2024-10-22","page":"Changelog","title":"Version [1.3.4] - 2024-10-22","text":"","category":"section"},{"location":"CHANGELOG/#Changed","page":"Changelog","title":"Changed","text":"","category":"section"},{"location":"CHANGELOG/","page":"Changelog","title":"Changelog","text":"Fixed a bug in the find_potential_neighbours method. ","category":"page"},{"location":"CHANGELOG/#Version-[1.3.3]-2024-09-30","page":"Changelog","title":"Version [1.3.3] - 2024-09-30","text":"","category":"section"},{"location":"CHANGELOG/#Changed-2","page":"Changelog","title":"Changed","text":"","category":"section"},{"location":"CHANGELOG/","page":"Changelog","title":"Changelog","text":"Fixed a remaining bug in NeuroTreeExt extensions. [#475]","category":"page"},{"location":"CHANGELOG/#Version-[1.3.2]-2024-09-24","page":"Changelog","title":"Version [1.3.2] - 2024-09-24","text":"","category":"section"},{"location":"CHANGELOG/#Added","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"CHANGELOG/","page":"Changelog","title":"Changelog","text":"Added support for using a random forest as a surrogate model for the T-CREx generator. [#483]","category":"page"},{"location":"CHANGELOG/#Changed-3","page":"Changelog","title":"Changed","text":"","category":"section"},{"location":"CHANGELOG/","page":"Changelog","title":"Changelog","text":"Improved the T-CREx documentation further by bringing example even closer to the example in the paper. [#483]\nInclude citation linking to ICML paper in T-CREx documentation and docstrings. [#480]","category":"page"},{"location":"CHANGELOG/#Version-[1.3.1]-2024-09-24","page":"Changelog","title":"Version [1.3.1] - 2024-09-24","text":"","category":"section"},{"location":"CHANGELOG/#Changed-4","page":"Changelog","title":"Changed","text":"","category":"section"},{"location":"CHANGELOG/","page":"Changelog","title":"Changelog","text":"Fixed a remaining bug in NeuroTreeExt extensions. [#475]","category":"page"},{"location":"CHANGELOG/#Version-[1.3.0]-2024-09-16","page":"Changelog","title":"Version [1.3.0] - 2024-09-16","text":"","category":"section"},{"location":"CHANGELOG/#Changed-5","page":"Changelog","title":"Changed","text":"","category":"section"},{"location":"CHANGELOG/","page":"Changelog","title":"Changelog","text":"Fixed bug in NeuroTreeExt extensions. [#475]","category":"page"},{"location":"CHANGELOG/#Added-2","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"CHANGELOG/","page":"Changelog","title":"Changelog","text":"Added basic support for the T-CREx counterfactual generator. [#473]\nAdded docstrings for package extensions to documentation. 
[#475]","category":"page"},{"location":"CHANGELOG/#Version-[1.2.0]-2024-09-10","page":"Changelog","title":"Version [1.2.0] - 2024-09-10","text":"","category":"section"},{"location":"CHANGELOG/#Added-3","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"CHANGELOG/","page":"Changelog","title":"Changelog","text":"Added documentation for generating counterfactuals consistent with the MINT framework. [#467]\nAdded tests for new evaluation metrics and JEM extension. [#471]\nAdded support for gradient-based causal algorithm-recourse (MNIT) as described in Karimi et al. (2020). This incorporates an input encoder that is based on a Structural Causal Model [#457] \nAdded out-of-the-box support for training joint energy models (JEM). [#454]\nAdded new evaluation metric to measure faithfulness of counterfactual explanations as in Altmeyer et al. (2024). [#454]\nA tutorial in the documentation (\"Explanation\" section) explaining the faithfulness metric in detail. [#454]\nAdded support for an energy constraint as in Altmeyer et al. (2024). This is the first step towards adding functionality for ECCCo. [#387] ","category":"page"},{"location":"CHANGELOG/#Changed-6","page":"Changelog","title":"Changed","text":"","category":"section"},{"location":"CHANGELOG/","page":"Changelog","title":"Changelog","text":"The fitresult field of Model now takes a concrete Fitresult type, for which some basic methods have been defined. This mutable struct has a field called other that accepts a dictionary Dict that can be filled with additional objects. [#454]\nRegenerated pre-trained model artifacts. [#454]\nUpdated the tutorial on \"Handling Data\". [#454]","category":"page"},{"location":"CHANGELOG/#Removed","page":"Changelog","title":"Removed","text":"","category":"section"},{"location":"CHANGELOG/","page":"Changelog","title":"Changelog","text":"Removed bug in find_potential_neighbours method. [#454]","category":"page"},{"location":"CHANGELOG/#Version-[1.1.6]-2024-05-19","page":"Changelog","title":"Version [1.1.6] - 2024-05-19","text":"","category":"section"},{"location":"CHANGELOG/#Removed-2","page":"Changelog","title":"Removed","text":"","category":"section"},{"location":"CHANGELOG/","page":"Changelog","title":"Changelog","text":"Removed the call to the Iris function in the test suite because of HTTPs issues. [#452]\nRemoved the mlj_models_catalogue because it served no obvious purpose. In the future, we may instead add meta information to the all_models_catalogue. [#444]","category":"page"},{"location":"CHANGELOG/#Added-4","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"CHANGELOG/","page":"Changelog","title":"Changelog","text":"New general Model struct that wraps empty concrete types. This adds a more general interface that is still flexible enough by simply using multiple dispatch on the empty concrete types. [#444]\nA new incompatible(::AbstractGenerator, ::AbstractCounterfactualExplanation) function has been added to avoid running a counterfactual search if the generator is incompatible with any other specification (e.g. the model). [#444]","category":"page"},{"location":"CHANGELOG/#Changed-7","page":"Changelog","title":"Changed","text":"","category":"section"},{"location":"CHANGELOG/","page":"Changelog","title":"Changelog","text":"No longer exporting many of the deprecated functions. [#452]\nUpdated pre-trained model artifacts. [#444]\nSome function signatures have been deprecated, e.g. NeuroTreeModel to NeuroTree, LaplaceReduxModel to LaplaceNN. 
[#444]\nSupport for DecisionTree.jl models and the FeatureTweakGenerator has been moved to an extension (DecisionTreeExt). [#444]\nUpdates to NeuroTreeModels extensions to incorporate breaking changes to the package. [#444]\nNo longer running alloc test on Windows. [#441]\nSlight change to doctests. [#447]","category":"page"},{"location":"CHANGELOG/#Version-[v1.1.5]-2024-04-30","page":"Changelog","title":"Version [v1.1.5] - 2024-04-30","text":"","category":"section"},{"location":"CHANGELOG/#Added-5","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"CHANGELOG/","page":"Changelog","title":"Changelog","text":"Unit tests: adds a simple performance benchmark to test that for a small problem, generating a counterfactual using the generic generator takes at most 4700 allocations. Only run on julia v1.10 and higher. [#436]","category":"page"},{"location":"CHANGELOG/#Changed-8","page":"Changelog","title":"Changed","text":"","category":"section"},{"location":"CHANGELOG/","page":"Changelog","title":"Changelog","text":"The find_potential_neighbours method is now only triggered if one of the penalties of the generator requires access to samples from the target domain. This improves scalability because calling the function can be computationally costly (forward-pass). [#436] \nThe target variable encodings are now handled more efficiently. Previously certain tasks were repeated, which was not necessary. [#436]","category":"page"},{"location":"CHANGELOG/#Removed-3","page":"Changelog","title":"Removed","text":"","category":"section"},{"location":"CHANGELOG/","page":"Changelog","title":"Changelog","text":"Removed the assertion checking that the model ever predicts the target value. While this assertion is useful, it is not essential. For large enough models and datasets, this forward pass can be very costly. [#436]\nRemoved redundant distance_from_targets function. [#436]","category":"page"},{"location":"CHANGELOG/#Version-[v1.1.4]-2024-04-25","page":"Changelog","title":"Version [v1.1.4] - 2024-04-25","text":"","category":"section"},{"location":"CHANGELOG/#Changed-9","page":"Changelog","title":"Changed","text":"","category":"section"},{"location":"CHANGELOG/","page":"Changelog","title":"Changelog","text":"Refactors the encodings and decodings such that they are now more streamlined. Instead of conditional statements, encodings are now dispatched on the type of a new unifying data.input_encoder field. [#432]\nRefactors the check for redundancy. This is now based on the convergence type and done right before the counterfactual search begins, if not redundant. [#432]","category":"page"},{"location":"CHANGELOG/#Added-6","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"CHANGELOG/","page":"Changelog","title":"Changelog","text":"Added additional unit tests. [#437]","category":"page"},{"location":"CHANGELOG/#Version-[v1.1.3]-2024-04-17","page":"Changelog","title":"Version [v1.1.3] - 2024-04-17","text":"","category":"section"},{"location":"CHANGELOG/#Added-7","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"CHANGELOG/","page":"Changelog","title":"Changelog","text":"Adds a section on Convergence to the documentation, Changelog.jl functionality and a few doc tests. 
[#429]","category":"page"},{"location":"CHANGELOG/#Changed-10","page":"Changelog","title":"Changed","text":"","category":"section"},{"location":"CHANGELOG/","page":"Changelog","title":"Changelog","text":"Changes style of taking gradients for the counterfactual search from implicit to explicit. [#430]\nRemoved all implicit imports. [#430]","category":"page"},{"location":"CHANGELOG/#Removed-4","page":"Changelog","title":"Removed","text":"","category":"section"},{"location":"CHANGELOG/","page":"Changelog","title":"Changelog","text":"Removed CUDA.jl dependency, because redundant. [#430]\nRemoved Parameters.jl dependency, because redundant. [#430]","category":"page"},{"location":"CHANGELOG/#Version-[v1.1.2]-2024-04-16","page":"Changelog","title":"Version [v1.1.2] - 2024-04-16","text":"","category":"section"},{"location":"CHANGELOG/#Changed-11","page":"Changelog","title":"Changed","text":"","category":"section"},{"location":"CHANGELOG/","page":"Changelog","title":"Changelog","text":"Replaces the GIF in the README and introduction of docs for a static image. ","category":"page"},{"location":"CHANGELOG/#Version-[v1.1.1]-2024-04-15","page":"Changelog","title":"Version [v1.1.1] - 2024-04-15","text":"","category":"section"},{"location":"CHANGELOG/#Added-8","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"CHANGELOG/","page":"Changelog","title":"Changelog","text":"Added tests for LaplaceRedux extension. Bumped upper compat bound for LaplaceRedux.jl. [#428]","category":"page"},{"location":"CHANGELOG/","page":"Changelog","title":"Changelog","text":"","category":"page"},{"location":"CHANGELOG/","page":"Changelog","title":"Changelog","text":"[#428]: https://github.com/juliatrustworthyai/CounterfactualExplanations.jl/issues/428 [#429]: https://github.com/juliatrustworthyai/CounterfactualExplanations.jl/issues/429","category":"page"},{"location":"tutorials/convergence/","page":"Convergence","title":"Convergence","text":"CurrentModule = CounterfactualExplanations ","category":"page"},{"location":"tutorials/convergence/#convergence","page":"Convergence","title":"Convergence","text":"","category":"section"},{"location":"tutorials/convergence/","page":"Convergence","title":"Convergence","text":"The search for counterfactuals can be seen as an optimization problem, where the goal is to find a point in the input space. One questions that has received surprisingly little attention is how to determine when the search has converged. In a recent paper, we have briefly discussed why it is important to consider convergence (Altmeyer et al. 2024):","category":"page"},{"location":"tutorials/convergence/","page":"Convergence","title":"Convergence","text":"One intuitive way to specify convergence is in terms of threshold probabilities: once the predicted probability p(y^+x^prime) exceeds some user-defined threshold γ such that the counterfactual is valid, we could consider the search to have converged. In the binary case, for example, convergence could be defined as p(y^+x^prime) 05 in this sense. Note, however, how this can be expected to yield counterfactuals in the proximity of the decision boundary, a region characterized by high aleatoric uncertainty. In other words, counterfactuals generated in this way would generally not be plausible. To avoid this from happening, we specify convergence in terms of gradients approaching zero for all our experiments and all of our generators. 
This allows us to get a cleaner read on how the different counterfactual search objectives affect counterfactual outcomes.","category":"page"},{"location":"tutorials/convergence/","page":"Convergence","title":"Convergence","text":"In the paper, we were primarily interested in benchmarking counterfactuals generated by different search objectives. In other contexts, however, it may be more appropriate to specify convergence in terms of threshold probabilities. Our package allows you to specify convergence in terms of gradients, threshold probabilities or simply in terms of the total number of iterations. In this section, we will show you how to do this.","category":"page"},{"location":"tutorials/convergence/","page":"Convergence","title":"Convergence","text":"using CounterfactualExplanations.Convergence\ngenerator = GenericGenerator(λ=0.01)","category":"page"},{"location":"tutorials/convergence/","page":"Convergence","title":"Convergence","text":"GradientBasedGenerator(nothing, CounterfactualExplanations.Objectives.distance_l1, 0.01, false, false, Descent(0.1), NamedTuple())","category":"page"},{"location":"tutorials/convergence/#Convergence-in-terms-of-gradients","page":"Convergence","title":"Convergence in terms of gradients","text":"","category":"section"},{"location":"tutorials/convergence/","page":"Convergence","title":"Convergence","text":"As gradients approach zero, the conditions defined by the search objective and hence the generator are satisfied. We therefore refer to this type of convergence criterion as GeneratorConditionsConvergence.","category":"page"},{"location":"tutorials/convergence/","page":"Convergence","title":"Convergence","text":"conv = GeneratorConditionsConvergence(gradient_tol=0.01, max_iter=1000)\nce_gen = generate_counterfactual(x, target, counterfactual_data, M, generator; convergence = conv)","category":"page"},{"location":"tutorials/convergence/","page":"Convergence","title":"Convergence","text":"CounterfactualExplanation\nConvergence: ✅ after 179 steps.","category":"page"},{"location":"tutorials/convergence/#Convergence-in-terms-of-threshold-probabilities","page":"Convergence","title":"Convergence in terms of threshold probabilities","text":"","category":"section"},{"location":"tutorials/convergence/","page":"Convergence","title":"Convergence","text":"In this case, the search is considered to have converged once the predicted probability p(y^+x^prime) exceeds some user-defined threshold γ such that the counterfactual is valid. We refer to this type of convergence criterion as DecisionThresholdConvergence.","category":"page"},{"location":"tutorials/convergence/","page":"Convergence","title":"Convergence","text":"conv = DecisionThresholdConvergence(decision_threshold=0.75)\nce_dec = generate_counterfactual(x, target, counterfactual_data, M, generator; convergence = conv)","category":"page"},{"location":"tutorials/convergence/","page":"Convergence","title":"Convergence","text":"CounterfactualExplanation\nConvergence: ✅ after 9 steps.","category":"page"},{"location":"tutorials/convergence/#Convergence-in-terms-of-the-total-number-of-iterations","page":"Convergence","title":"Convergence in terms of the total number of iterations","text":"","category":"section"},{"location":"tutorials/convergence/","page":"Convergence","title":"Convergence","text":"In this case, the search is considered to have converged once the total number of iterations exceeds some user-defined threshold max_iter. 
We refer to this type of convergence criterion as MaxIterConvergence.","category":"page"},{"location":"tutorials/convergence/","page":"Convergence","title":"Convergence","text":"conv = MaxIterConvergence(max_iter=25)\nce_max = generate_counterfactual(x, target, counterfactual_data, M, generator; convergence = conv)","category":"page"},{"location":"tutorials/convergence/","page":"Convergence","title":"Convergence","text":"CounterfactualExplanation\nConvergence: ✅ after 25 steps.","category":"page"},{"location":"tutorials/convergence/#Comparison","page":"Convergence","title":"Comparison","text":"","category":"section"},{"location":"tutorials/convergence/","page":"Convergence","title":"Convergence","text":"plts = []\nfor (ce, titl) in zip([ce_gen, ce_dec, ce_max], [\"Gradient Convergence\", \"Decision Threshold Convergence\", \"Max Iterations Convergence\"])\n push!(plts, plot(ce; title=titl, cbar=false))\nend\nplot(plts..., layout=(1,3), size=(1200, 380))","category":"page"},{"location":"tutorials/convergence/","page":"Convergence","title":"Convergence","text":"(Image: )","category":"page"},{"location":"tutorials/convergence/#References","page":"Convergence","title":"References","text":"","category":"section"},{"location":"tutorials/convergence/","page":"Convergence","title":"Convergence","text":"Altmeyer, Patrick, Mojtaba Farmanbar, Arie van Deursen, and Cynthia CS Liem. 2024. “Faithful Model Explanations Through Energy-Constrained Conformal Counterfactuals.” In Proceedings of the AAAI Conference on Artificial Intelligence, 38 (10): 10829–37.","category":"page"},{"location":"explanation/generators/tcrex/","page":"T-CREx","title":"T-CREx","text":"CurrentModule = CounterfactualExplanations ","category":"page"},{"location":"explanation/generators/tcrex/#T-CREx-Generator","page":"T-CREx","title":"T-CREx Generator","text":"","category":"section"},{"location":"explanation/generators/tcrex/","page":"T-CREx","title":"T-CREx","text":"The T-CREx is a novel model-agnostic counterfactual generator that can be used to generate local and global Counterfactual Rule Explanations (CREx) (Bewley et al. 2024).","category":"page"},{"location":"explanation/generators/tcrex/","page":"T-CREx","title":"T-CREx","text":"warning: Breaking Changes Expected\nWork on this feature is still in its very early stages and breaking changes should be expected. This new generator introduces concepts, such as global counterfactual explanations, that are not explained anywhere else in this documentation. If you want to use this generator, please make sure you are familiar with the related literature. ","category":"page"},{"location":"explanation/generators/tcrex/#Usage","page":"T-CREx","title":"Usage","text":"","category":"section"},{"location":"explanation/generators/tcrex/","page":"T-CREx","title":"T-CREx","text":"The implementation of the TCRExGenerator depends on DecisionTree.jl. For the time being, we have decided not to add a strong dependency on DecisionTree.jl to the package. Instead, the functionality of the TCRExGenerator is made available through the DecisionTreeExt extension, which will be loaded conditionally upon loading DecisionTree.jl (see the Julia docs on extensions for more details):","category":"page"},{"location":"explanation/generators/tcrex/","page":"T-CREx","title":"T-CREx","text":"using DecisionTree","category":"page"},{"location":"explanation/generators/tcrex/","page":"T-CREx","title":"T-CREx","text":"Let us first set up the problem by loading some data. 
To reproduce the example in Bewley et al. (2024) as accurately as possible, we use Python’s scikit-learn to load the synthetic data:","category":"page"},{"location":"explanation/generators/tcrex/","page":"T-CREx","title":"T-CREx","text":"using CondaPkg; CondaPkg.add(\"scikit-learn\");\nusing PythonCall;\nskd = pyimport(\"sklearn.datasets\");\nn = 5000\nX, y = skd.make_moons(n_samples=n, noise=0.3, random_state=0)\nX = pyconvert(Matrix, X) |> permutedims |> x -> Float32.(x)\ny = pyconvert(Vector, y)\n# Setting up color palette as in paper:\ncol_pal = palette(:seaborn_bright)[[4,1,2,3,6,5,7,8,9]];","category":"page"},{"location":"explanation/generators/tcrex/","page":"T-CREx","title":"T-CREx","text":"Next, we wrap the data in a CounterfactualData container, fit a simple classification model to the data and store the model prediction for the entire training dataset (we need those to train the tree-based surrogate model).","category":"page"},{"location":"explanation/generators/tcrex/","page":"T-CREx","title":"T-CREx","text":"# Counterfactual data and model:\ndata = CounterfactualData(X, y)\nflux_training_params.batchsize = 100\nM = fit_model(data, :MLP)","category":"page"},{"location":"explanation/generators/tcrex/","page":"T-CREx","title":"T-CREx","text":"Finally, we determine a target and factual class and choose a random sample from the factual class:","category":"page"},{"location":"explanation/generators/tcrex/","page":"T-CREx","title":"T-CREx","text":"target = 1\nfactual = 0\nchosen = rand(findall(predict_label(M, data) .== factual))\nx = select_factual(data, chosen) ","category":"page"},{"location":"explanation/generators/tcrex/","page":"T-CREx","title":"T-CREx","text":"Next, we instantiate the generator much like any other counterfactual generator in our package:","category":"page"},{"location":"explanation/generators/tcrex/","page":"T-CREx","title":"T-CREx","text":"ρ = 0.02 # feasibility threshold (see Bewley et al. (2024))\nτ = 0.9 # accuracy threshold (see Bewley et al. (2024))\ngenerator = Generators.TCRExGenerator(ρ=ρ, τ=τ)","category":"page"},{"location":"explanation/generators/tcrex/","page":"T-CREx","title":"T-CREx","text":"Finally, we can use the TCRExGenerator instance to generate a (global) counterfactual rule explanation (CRE) for the given target, data and model as follows:","category":"page"},{"location":"explanation/generators/tcrex/","page":"T-CREx","title":"T-CREx","text":"cre = generator(target, data, M) # counterfactual rule explanation (global)","category":"page"},{"location":"explanation/generators/tcrex/","page":"T-CREx","title":"T-CREx","text":"The CRE can be applied to our factual x to derive a (local) counterfactual point explanation (CPE):","category":"page"},{"location":"explanation/generators/tcrex/","page":"T-CREx","title":"T-CREx","text":"idx, optimal_rule = cre(x) # counterfactual point explanation (local)","category":"page"},{"location":"explanation/generators/tcrex/#Worked-Example-from-Bewley-et-al.-(2024)","page":"T-CREx","title":"Worked Example from Bewley et al. (2024)","text":"","category":"section"},{"location":"explanation/generators/tcrex/","page":"T-CREx","title":"T-CREx","text":"To make better sense of this, we will now go through the worked example presented in Bewley et al. (2024). 
For this purpose, we need to make the functions of the DecisionTreeExt extension available.","category":"page"},{"location":"explanation/generators/tcrex/","page":"T-CREx","title":"T-CREx","text":"warning: Private API\nPlease note that the DecisionTreeExt extension is loaded here purely for demonstrative purposes. You should not load the extension like this in your own work.","category":"page"},{"location":"explanation/generators/tcrex/","page":"T-CREx","title":"T-CREx","text":"DTExt = Base.get_extension(CounterfactualExplanations, :DecisionTreeExt)","category":"page"},{"location":"explanation/generators/tcrex/#(a)-Tree-based-surrogate-model","page":"T-CREx","title":"(a) Tree-based surrogate model","text":"","category":"section"},{"location":"explanation/generators/tcrex/","page":"T-CREx","title":"T-CREx","text":"In the first step, we train a tree-based surrogate model based on the data and the black-box model M. Specifically, the surrogate model is trained on pairs of observed input data and the labels predicted by the black-box model: (x M(x))_1leq i leq n.","category":"page"},{"location":"explanation/generators/tcrex/","page":"T-CREx","title":"T-CREx","text":"note: Oracle Black-Box\nAs in the paper, we assume here that the black-box model is an oracle with perfect accuracy. This is done purely to stay as close as possible to the example in the paper.","category":"page"},{"location":"explanation/generators/tcrex/","page":"T-CREx","title":"T-CREx","text":"Following Bewley et al. (2024), we impose a minimum number of samples per leaf to ensure counterfactual feasibility (also often referred to as plausibility). This number is computed under the hood and based on the generator.ρ field of the TCRExGenerator, which can be used to specify the minimum fraction of all samples that is contained by any given rule. For instance, with ρ = 0.02 and n = 5000 as in this example, every rule must cover at least 100 samples.","category":"page"},{"location":"explanation/generators/tcrex/","page":"T-CREx","title":"T-CREx","text":"# Surrogate:\nXtrain = permutedims(X)\nytrain = categorical(y)\nfx = ytrain # assume perfect accuracy\nmodel, fitresult = DTExt.grow_surrogate(generator, Xtrain, fx)\nM_sur = CounterfactualExplanations.DecisionTreeModel(model; fitresult=fitresult)","category":"page"},{"location":"explanation/generators/tcrex/","page":"T-CREx","title":"T-CREx","text":"We can reassure ourselves that the feasibility constraint is indeed respected:","category":"page"},{"location":"explanation/generators/tcrex/","page":"T-CREx","title":"T-CREx","text":"# Extract rules:\nR = DTExt.extract_rules(fitresult[1])\n\n# Compute feasibility and accuracy:\nfeas = DTExt.rule_feasibility.(R, (X,))\n@assert minimum(feas) >= ρ\n@info \"Minimum fraction of samples across all rules is $(round(minimum(feas), digits=3))\"\nacc_factual = DTExt.rule_accuracy.(R, (X,), (fx,), (factual,))\nacc_target = DTExt.rule_accuracy.(R, (X,), (fx,), (target,))\n@assert all(acc_target .+ acc_factual .== 1.0)","category":"page"},{"location":"explanation/generators/tcrex/","page":"T-CREx","title":"T-CREx","text":"plt = plot(data; ms=2, markerstrokewidth=0, size=(500, 500), palette=col_pal, alpha=0.5)\nrectangle(w, h, x, y) = Shape(x .+ [0,w,w,0], y .+ [0,0,h,h])\nfunction plot_grid!(p, grid)\n for (i, (bounds_x, bounds_y)) in enumerate(grid)\n lbx, ubx = bounds_x\n lby, uby = bounds_y\n lbx = maximum([lbx, minimum(X[1, :])])\n lby = maximum([lby, minimum(X[2, :])])\n ubx = minimum([ubx, maximum(X[1, :])])\n uby = minimum([uby, maximum(X[2, :])])\n plot!(\n p,\n rectangle(ubx - lbx, uby - lby, lbx, lby);\n fillcolor=\"black\",\n 
fillalpha=0.0,\n label=nothing,\n lw=2, palette=col_pal\n )\n end\nend\nplot_grid!(plt, R)\nplt","category":"page"},{"location":"explanation/generators/tcrex/","page":"T-CREx","title":"T-CREx","text":"(Image: )","category":"page"},{"location":"explanation/generators/tcrex/#(b)-Maximal-valid-rules","page":"T-CREx","title":"(b) Maximal-valid rules","text":"","category":"section"},{"location":"explanation/generators/tcrex/","page":"T-CREx","title":"T-CREx","text":"From the complete set of rules derived from the surrogate tree, we can derive the maximal-valid rules next. Intuitively, “a maximal-valid rule is one that cannot be made any larger without violating the validity conditions”, where validity is defined in terms of both feasibility (generator.ρ) and accuracy (generator.τ).","category":"page"},{"location":"explanation/generators/tcrex/","page":"T-CREx","title":"T-CREx","text":"R_max = DTExt.max_valid(R, X, fx, target, τ)\nfeas_max = DTExt.rule_feasibility.(R_max, (X,))\nacc_max = DTExt.rule_accuracy.(R_max, (X,), (fx,), (target,))\np1 = deepcopy(plt)\nfunction plot_surr!(plt)\n for (i, rule) in enumerate(R_max)\n ubx, uby = minimum([rule[1][2], maximum(X[1, :])]),\n minimum([rule[2][2], maximum(X[2, :])])\n lbx, lby = maximum([rule[1][1], minimum(X[1, :])]),\n maximum([rule[2][1], minimum(X[2, :])])\n _feas = round(feas_max[i]; digits=2)\n _n = Int(round(feas_max[i] * n; digits=2))\n _acc = round(acc_max[i]; digits=2)\n @info \"Rectangle R$i with feasibility $(_feas) (n≈$(_n)) and accuracy $(_acc)\"\n lab = \"R$i (ρ̂=$(_feas), τ̂=$(_acc))\"\n plot!(plt, rectangle(ubx-lbx,uby-lby,lbx,lby), opacity=.5, color=i+2, label=lab, palette=col_pal)\n end\nend\nplot_surr!(p1)\np1","category":"page"},{"location":"explanation/generators/tcrex/","page":"T-CREx","title":"T-CREx","text":"(Image: )","category":"page"},{"location":"explanation/generators/tcrex/#(c)-Induced-grid-partition","page":"T-CREx","title":"(c) Induced grid partition","text":"","category":"section"},{"location":"explanation/generators/tcrex/","page":"T-CREx","title":"T-CREx","text":"Based on the set of maximal-valid rules, we compute and plot the induced grid partition below.","category":"page"},{"location":"explanation/generators/tcrex/","page":"T-CREx","title":"T-CREx","text":"_grid = DTExt.induced_grid(R_max)\n\nplt = plot(data; ms=2, markerstrokewidth=0, size=(500, 500), palette=col_pal, alpha=0.1)\np2 = deepcopy(plt)\nplot_surr!(p2)\nplot_grid!(p2, _grid)\np2","category":"page"},{"location":"explanation/generators/tcrex/","page":"T-CREx","title":"T-CREx","text":"(Image: )","category":"page"},{"location":"explanation/generators/tcrex/#(d)-Grid-cell-prototypes","page":"T-CREx","title":"(d) Grid cell prototypes","text":"","category":"section"},{"location":"explanation/generators/tcrex/","page":"T-CREx","title":"T-CREx","text":"Next, we pick prototypes from each cell in the induced grid. By setting pick_arbitrary=false here we enforce that prototypes correspond to cell centroids, which is not necessary in general. 
For each prototype, we compute the corresponding CRE, which is indicated by the color of the large markers in the figure below:","category":"page"},{"location":"explanation/generators/tcrex/","page":"T-CREx","title":"T-CREx","text":"xs = DTExt.prototype.(_grid, (X,); pick_arbitrary=false)\nRᶜ = DTExt.cre.((R_max,), xs, (X,); return_index=true) \np3 = deepcopy(p2)\nscatter!(p3, eachrow(hcat(xs...))..., ms=10, label=nothing, color=Rᶜ.+2)\np3","category":"page"},{"location":"explanation/generators/tcrex/","page":"T-CREx","title":"T-CREx","text":"(Image: )","category":"page"},{"location":"explanation/generators/tcrex/#(e)-(f)-Global-CE-representation","page":"T-CREx","title":"(e) - (f) Global CE representation","text":"","category":"section"},{"location":"explanation/generators/tcrex/","page":"T-CREx","title":"T-CREx","text":"Based on the prototypes and their corresponding rule assignments, we fit a CART classification tree with restricted feature thresholds. Specifically, feature thresholds are restricted to the partition bounds induced by the set of maximal-valid rules as in Bewley et al. (2024). The figure below shows the resulting global CE representation (i.e. the metarules).","category":"page"},{"location":"explanation/generators/tcrex/","page":"T-CREx","title":"T-CREx","text":"bounds = DTExt.partition_bounds(R_max)\ntree = DTExt.classify_prototypes(hcat(xs...)', Rᶜ, bounds)\nR_final, labels = DTExt.extract_leaf_rules(tree) \np4 = deepcopy(plt)\nfor (i, rule) in enumerate(R_final)\n ubx, uby = minimum([rule[1][2], maximum(X[1, :])]),\n minimum([rule[2][2], maximum(X[2, :])])\n lbx, lby = maximum([rule[1][1], minimum(X[1, :])]),\n maximum([rule[2][1], minimum(X[2, :])])\n plot!(\n p4,\n rectangle(ubx - lbx, uby - lby, lbx, lby);\n fillalpha=0.5,\n label=nothing,\n color=labels[i] + 2\n )\nend\np4","category":"page"},{"location":"explanation/generators/tcrex/","page":"T-CREx","title":"T-CREx","text":"(Image: )","category":"page"},{"location":"explanation/generators/tcrex/#(g)-Local-CE-example","page":"T-CREx","title":"(g) Local CE example","text":"","category":"section"},{"location":"explanation/generators/tcrex/","page":"T-CREx","title":"T-CREx","text":"To generate a local explanation based on the global CE representation, we simply apply the CART decision tree classifier from the previous step to our factual:","category":"page"},{"location":"explanation/generators/tcrex/","page":"T-CREx","title":"T-CREx","text":"optimal_rule = apply_tree(tree, vec(x))\np5 = deepcopy(p2)\nscatter!(p5, [x[1]], [x[2]], ms=10, color=2+optimal_rule, label=\"Local CE (move to R$optimal_rule)\")\np5","category":"page"},{"location":"explanation/generators/tcrex/","page":"T-CREx","title":"T-CREx","text":"(Image: )","category":"page"},{"location":"explanation/generators/tcrex/#References","page":"T-CREx","title":"References","text":"","category":"section"},{"location":"explanation/generators/tcrex/","page":"T-CREx","title":"T-CREx","text":"Bewley, Tom, Salim I. Amoukou, Saumitra Mishra, Daniele Magazzeni, and Manuela Veloso. 2024. “Counterfactual Metarules for Local and Global Recourse.” In Proceedings of the 41st International Conference on Machine Learning, edited by Ruslan Salakhutdinov, Zico Kolter, Katherine Heller, Adrian Weller, Nuria Oliver, Jonathan Scarlett, and Felix Berkenkamp, 235:3707–24. Proceedings of Machine Learning Research. PMLR. 
https://proceedings.mlr.press/v235/bewley24a.html.","category":"page"},{"location":"reference/","page":"🧐 Reference","title":"🧐 Reference","text":"CurrentModule = CounterfactualExplanations ","category":"page"},{"location":"reference/#Reference","page":"🧐 Reference","title":"Reference","text":"","category":"section"},{"location":"reference/","page":"🧐 Reference","title":"🧐 Reference","text":"In this reference, you will find a detailed overview of the package API.","category":"page"},{"location":"reference/","page":"🧐 Reference","title":"🧐 Reference","text":"Reference guides are technical descriptions of the machinery and how to operate it. Reference material is information-oriented.— Diátaxis","category":"page"},{"location":"reference/","page":"🧐 Reference","title":"🧐 Reference","text":"In other words, you come here because you want to take a very close look at the code 🧐.","category":"page"},{"location":"reference/#Content","page":"🧐 Reference","title":"Content","text":"","category":"section"},{"location":"reference/","page":"🧐 Reference","title":"🧐 Reference","text":"Pages = [\"reference.md\"]\nDepth = 2:3","category":"page"},{"location":"reference/#Exported-functions","page":"🧐 Reference","title":"Exported functions","text":"","category":"section"},{"location":"reference/","page":"🧐 Reference","title":"🧐 Reference","text":"Modules = [\n CounterfactualExplanations, \n CounterfactualExplanations.Convergence,\n CounterfactualExplanations.Evaluation,\n CounterfactualExplanations.DataPreprocessing,\n CounterfactualExplanations.Models,\n CounterfactualExplanations.GenerativeModels, \n CounterfactualExplanations.Generators, \n CounterfactualExplanations.Objectives\n]\nPrivate = false","category":"page"},{"location":"reference/#CounterfactualExplanations.RawOutputArrayType","page":"🧐 Reference","title":"CounterfactualExplanations.RawOutputArrayType","text":"RawOutputArrayType\n\nA type union for the allowed type for the output array y.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.RawTargetType","page":"🧐 Reference","title":"CounterfactualExplanations.RawTargetType","text":"RawTargetType\n\nA type union for the allowed types for the target variable.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.flux_training_params","page":"🧐 Reference","title":"CounterfactualExplanations.flux_training_params","text":"flux_training_params\n\nThe default training parameter for FluxModels etc.\n\n\n\n\n\n","category":"constant"},{"location":"reference/#CounterfactualExplanations.AbstractConvergence","page":"🧐 Reference","title":"CounterfactualExplanations.AbstractConvergence","text":"An abstract type that serves as the base type for convergence objects.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.AbstractCounterfactualExplanation","page":"🧐 Reference","title":"CounterfactualExplanations.AbstractCounterfactualExplanation","text":"Base type for counterfactual explanations.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.AbstractGenerator","page":"🧐 Reference","title":"CounterfactualExplanations.AbstractGenerator","text":"An abstract type that serves as the base type for counterfactual generators.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.AbstractModel","page":"🧐 Reference","title":"CounterfactualExplanations.AbstractModel","text":"Base type for 
models.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.CounterfactualExplanation","page":"🧐 Reference","title":"CounterfactualExplanations.CounterfactualExplanation","text":"A struct that collects all information relevant to a specific counterfactual explanation for a single individual.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.CounterfactualExplanation-Tuple{AbstractArray, Union{Int64, AbstractFloat, String, Symbol}, CounterfactualData, AbstractModel, AbstractGenerator}","page":"🧐 Reference","title":"CounterfactualExplanations.CounterfactualExplanation","text":"function CounterfactualExplanation(;\n\tx::AbstractArray,\n\ttarget::RawTargetType,\n\tdata::CounterfactualData,\n\tM::Models.AbstractModel,\n\tgenerator::Generators.AbstractGenerator,\n\tnum_counterfactuals::Int = 1,\n\tinitialization::Symbol = :add_perturbation,\n convergence::Union{AbstractConvergence,Symbol}=:decision_threshold,\n)\n\nOuter method to construct a CounterfactualExplanation structure.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.EncodedOutputArrayType","page":"🧐 Reference","title":"CounterfactualExplanations.EncodedOutputArrayType","text":"EncodedOutputArrayType\n\nType of encoded output array.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.EncodedTargetType","page":"🧐 Reference","title":"CounterfactualExplanations.EncodedTargetType","text":"EncodedTargetType\n\nType of encoded target variable.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.OutputEncoder","page":"🧐 Reference","title":"CounterfactualExplanations.OutputEncoder","text":"OutputEncoder\n\nThe OutputEncoder takes a raw output array (y) and encodes it.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.OutputEncoder-Tuple{Union{Int64, AbstractFloat, String, Symbol}}","page":"🧐 Reference","title":"CounterfactualExplanations.OutputEncoder","text":"(encoder::OutputEncoder)(ynew::RawTargetType)\n\nWhen called on a new value ynew, the OutputEncoder encodes it based on the initial encoding.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.OutputEncoder-Tuple{}","page":"🧐 Reference","title":"CounterfactualExplanations.OutputEncoder","text":"(encoder::OutputEncoder)()\n\nOn call, the OutputEncoder returns the encoded output array.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.generate_counterfactual-Tuple{Base.Iterators.Zip, Union{Int64, AbstractFloat, String, Symbol}, CounterfactualData, AbstractModel, AbstractGenerator}","page":"🧐 Reference","title":"CounterfactualExplanations.generate_counterfactual","text":"generate_counterfactual(\n x::Base.Iterators.Zip,\n target::RawTargetType,\n data::CounterfactualData,\n M::Models.AbstractModel,\n generator::AbstractGenerator;\n kwargs...,\n)\n\nOverloads the generate_counterfactual method to accept a zip of factuals x and return a vector of counterfactuals.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.generate_counterfactual-Tuple{Matrix, Union{Int64, AbstractFloat, String, Symbol}, CounterfactualData, AbstractModel, AbstractGenerator}","page":"🧐 Reference","title":"CounterfactualExplanations.generate_counterfactual","text":"generate_counterfactual(\n x::Matrix,\n target::RawTargetType,\n data::CounterfactualData,\n M::Models.AbstractModel,\n generator::AbstractGenerator;\n 
num_counterfactuals::Int=1,\n initialization::Symbol=:add_perturbation,\n convergence::Union{AbstractConvergence,Symbol}=:decision_threshold,\n timeout::Union{Nothing,Real}=nothing,\n)\n\nThe core function that is used to run counterfactual search for a given factual x, target, counterfactual data, model and generator. Keywords can be used to specify the desired threshold for the predicted target class probability and the maximum number of iterations.\n\nArguments\n\nx::Matrix: Factual data point.\ntarget::RawTargetType: Target class.\ndata::CounterfactualData: Counterfactual data.\nM::Models.AbstractModel: Fitted model.\ngenerator::AbstractGenerator: Generator.\nnum_counterfactuals::Int=1: Number of counterfactuals to generate for factual.\ninitialization::Symbol=:add_perturbation: Initialization method. By default, the initialization is done by adding a small random perturbation to the factual to achieve more robustness.\nconvergence::Union{AbstractConvergence,Symbol}=:decision_threshold: Convergence criterion. By default, the convergence is based on the decision threshold. Possible values are :decision_threshold, :max_iter, :generator_conditions or a concrete convergence object (e.g. DecisionThresholdConvergence). \ntimeout::Union{Nothing,Real}=nothing: Timeout in seconds.\n\nExamples\n\nGeneric generator\n\njulia> using CounterfactualExplanations\n\njulia> using TaijaData\n \n # Counterfactual data and model:\n\njulia> counterfactual_data = CounterfactualData(load_linearly_separable()...);\n\njulia> M = fit_model(counterfactual_data, :Linear);\n\njulia> target = 2;\n\njulia> factual = 1;\n\njulia> chosen = rand(findall(predict_label(M, counterfactual_data) .== factual));\n\njulia> x = select_factual(counterfactual_data, chosen);\n \n # Search:\n\njulia> generator = Generators.GenericGenerator();\n\njulia> ce = generate_counterfactual(x, target, counterfactual_data, M, generator);\n\njulia> converged(ce.convergence, ce)\ntrue\n\nBroadcasting\n\nThe generate_counterfactual method can also be broadcasted over a tuple containing an array. This allows for generating multiple counterfactuals in parallel. 
\n\njulia> chosen = rand(findall(predict_label(M, counterfactual_data) .== factual), 5);\n\njulia> xs = select_factual(counterfactual_data, chosen);\n\njulia> ces = generate_counterfactual.(xs, target, counterfactual_data, M, generator);\n\njulia> converged(ces[1].convergence, ces[1])\ntrue\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.generate_counterfactual-Tuple{Matrix, Union{Int64, AbstractFloat, String, Symbol}, CounterfactualData, AbstractModel, GrowingSpheresGenerator}","page":"🧐 Reference","title":"CounterfactualExplanations.generate_counterfactual","text":"generate_counterfactual(\n x::Matrix,\n target::RawTargetType,\n data::DataPreprocessing.CounterfactualData,\n M::Models.AbstractModel,\n generator::Generators.GrowingSpheresGenerator;\n num_counterfactuals::Int=1,\n convergence::Union{AbstractConvergence,Symbol}=Convergence.DecisionThresholdConvergence(;\n decision_threshold=(1 / length(data.y_levels)), max_iter=1000\n ),\n kwrgs...,\n)\n\nOverloads the generate_counterfactual method for the GrowingSpheresGenerator generator.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.generate_counterfactual-Tuple{Tuple{AbstractArray}, Vararg{Any}}","page":"🧐 Reference","title":"CounterfactualExplanations.generate_counterfactual","text":"generate_counterfactual(x::Tuple{<:AbstractArray}, args...; kwargs...)\n\nOverloads the generate_counterfactual method to accept a tuple containing an array. This allows for broadcasting over Zip iterators.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.generate_counterfactual-Tuple{Vector{<:Matrix}, Union{Int64, AbstractFloat, String, Symbol}, CounterfactualData, AbstractModel, AbstractGenerator}","page":"🧐 Reference","title":"CounterfactualExplanations.generate_counterfactual","text":"generate_counterfactual(\n x::Vector{<:Matrix},\n target::RawTargetType,\n data::CounterfactualData,\n M::Models.AbstractModel,\n generator::AbstractGenerator;\n kwargs...,\n)\n\nOverloads the generate_counterfactual method to accept a vector of factuals x and return a vector of counterfactuals.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.get_target_index-Tuple{Any, Any}","page":"🧐 Reference","title":"CounterfactualExplanations.get_target_index","text":"get_target_index(y_levels, target)\n\nUtility that returns the index of target in y_levels.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.path-Tuple{CounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.path","text":"path(ce::CounterfactualExplanation)\n\nA convenience method that returns the entire counterfactual path.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.target_probs","page":"🧐 Reference","title":"CounterfactualExplanations.target_probs","text":"target_probs(\n ce::CounterfactualExplanation,\n x::Union{AbstractArray,Nothing}=nothing,\n)\n\nReturns the predicted probability of the target class for x. 
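A minimal usage sketch (reusing the ce object from the generate_counterfactual examples above): \n\np = target_probs(ce)  # probability of the target class at the current counterfactual state\n\n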
If x is nothing, the predicted probability corresponding to the counterfactual value is returned.\n\n\n\n\n\n","category":"function"},{"location":"reference/#CounterfactualExplanations.terminated-Tuple{CounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.terminated","text":"terminated(ce::CounterfactualExplanation)\n\nA convenience method that checks if the counterfactual search has terminated.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.total_steps-Tuple{CounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.total_steps","text":"total_steps(ce::CounterfactualExplanation)\n\nA convenience method that returns the total number of steps of the counterfactual search.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Convergence.convergence_catalogue","page":"🧐 Reference","title":"CounterfactualExplanations.Convergence.convergence_catalogue","text":"convergence_catalogue\n\nA dictionary containing all convergence criteria.\n\n\n\n\n\n","category":"constant"},{"location":"reference/#CounterfactualExplanations.Convergence.DecisionThresholdConvergence","page":"🧐 Reference","title":"CounterfactualExplanations.Convergence.DecisionThresholdConvergence","text":"DecisionThresholdConvergence\n\nConvergence criterion based on the target class probability threshold. The search stops when the target class probability exceeds the predefined threshold.\n\nFields\n\ndecision_threshold::AbstractFloat: The predefined threshold for the target class probability.\nmax_iter::Int: The maximum number of iterations.\nmin_success_rate::AbstractFloat: The minimum success rate for the target class probability.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.Convergence.GeneratorConditionsConvergence","page":"🧐 Reference","title":"CounterfactualExplanations.Convergence.GeneratorConditionsConvergence","text":"GeneratorConditionsConvergence\n\nConvergence criterion for counterfactual explanations based on the generator conditions. 
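As a hedged sketch (the keyword values here are illustrative, not defaults), such a criterion can be built via the outer constructor documented below and passed to generate_counterfactual through its convergence keyword: \n\nusing CounterfactualExplanations\nconv = Convergence.GeneratorConditionsConvergence(; gradient_tol=1e-3, max_iter=250)\nce = generate_counterfactual(x, target, counterfactual_data, M, generator; convergence=conv)\n\n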
The search stops when the gradients of the search objective are below a certain threshold and the generator conditions are satisfied.\n\nFields\n\ndecision_threshold::AbstractFloat: The threshold for the decision probability.\ngradient_tol::AbstractFloat: The tolerance for the gradients of the search objective.\nmax_iter::Int: The maximum number of iterations.\nmin_success_rate::AbstractFloat: The minimum success rate for the generator conditions (across counterfactuals).\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.Convergence.GeneratorConditionsConvergence-Tuple{}","page":"🧐 Reference","title":"CounterfactualExplanations.Convergence.GeneratorConditionsConvergence","text":"GeneratorConditionsConvergence(; decision_threshold=0.5, gradient_tol=1e-2, max_iter=100, min_success_rate=0.75, y_levels=nothing)\n\nOuter constructor for GeneratorConditionsConvergence.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Convergence.MaxIterConvergence","page":"🧐 Reference","title":"CounterfactualExplanations.Convergence.MaxIterConvergence","text":"MaxIterConvergence\n\nConvergence criterion based on the maximum number of iterations.\n\nFields\n\nmax_iter::Int: The maximum number of iterations.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.Convergence.converged","page":"🧐 Reference","title":"CounterfactualExplanations.Convergence.converged","text":"converged(\n convergence::DecisionThresholdConvergence,\n ce::AbstractCounterfactualExplanation,\n x::Union{AbstractArray,Nothing}=nothing,\n)\n\nChecks if the counterfactual search has converged when the convergence criterion is the decision threshold.\n\n\n\n\n\n","category":"function"},{"location":"reference/#CounterfactualExplanations.Convergence.converged-2","page":"🧐 Reference","title":"CounterfactualExplanations.Convergence.converged","text":"converged(\n convergence::GeneratorConditionsConvergence,\n ce::AbstractCounterfactualExplanation,\n x::Union{AbstractArray,Nothing}=nothing,\n)\n\nChecks if the counterfactual search has converged when the convergence criterion is generator_conditions.\n\n\n\n\n\n","category":"function"},{"location":"reference/#CounterfactualExplanations.Convergence.converged-3","page":"🧐 Reference","title":"CounterfactualExplanations.Convergence.converged","text":"converged(\n convergence::MaxIterConvergence,\n ce::AbstractCounterfactualExplanation,\n x::Union{AbstractArray,Nothing}=nothing,\n)\n\nChecks if the counterfactual search has converged when the convergence criterion is maximum iterations. 
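For illustration, the symbol shorthand documented under generate_counterfactual can be used to select this criterion: \n\nce = generate_counterfactual(x, target, counterfactual_data, M, generator; convergence=:max_iter)\n\n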
This means the counterfactual search will not terminate until the maximum number of iterations has been reached, regardless of the other convergence criteria.\n\n\n\n\n\n","category":"function"},{"location":"reference/#CounterfactualExplanations.Convergence.converged-4","page":"🧐 Reference","title":"CounterfactualExplanations.Convergence.converged","text":"converged(\n convergence::InvalidationRateConvergence,\n ce::AbstractCounterfactualExplanation,\n x::Union{AbstractArray,Nothing}=nothing,\n)\n\nChecks if the counterfactual search has converged when the convergence criterion is invalidation rate.\n\n\n\n\n\n","category":"function"},{"location":"reference/#CounterfactualExplanations.Convergence.converged-Tuple{AbstractCounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.Convergence.converged","text":"converged(ce::AbstractCounterfactualExplanation)\n\nReturns true if the counterfactual explanation has converged.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Convergence.get_convergence_type-Tuple{AbstractConvergence, AbstractVector}","page":"🧐 Reference","title":"CounterfactualExplanations.Convergence.get_convergence_type","text":"get_convergence_type(convergence::AbstractConvergence)\n\nReturns the convergence object.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Convergence.get_convergence_type-Tuple{Symbol, AbstractVector}","page":"🧐 Reference","title":"CounterfactualExplanations.Convergence.get_convergence_type","text":"get_convergence_type(convergence::Symbol)\n\nReturns the convergence object from the dictionary of default convergence types.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Convergence.hinge_loss-Tuple{CounterfactualExplanations.Convergence.InvalidationRateConvergence, AbstractCounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.Convergence.hinge_loss","text":"hinge_loss(convergence::InvalidationRateConvergence, ce::AbstractCounterfactualExplanation)\n\nCalculates the hinge loss of a counterfactual explanation.\n\nArguments\n\nconvergence::InvalidationRateConvergence: The convergence criterion to use.\nce::AbstractCounterfactualExplanation: The counterfactual explanation to calculate the hinge loss for.\n\nReturns\n\nThe hinge loss of the counterfactual explanation.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Convergence.invalidation_rate-Tuple{AbstractCounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.Convergence.invalidation_rate","text":"invalidation_rate(ce::AbstractCounterfactualExplanation)\n\nCalculates the invalidation rate of a counterfactual explanation.\n\nArguments\n\nce::AbstractCounterfactualExplanation: The counterfactual explanation to calculate the invalidation rate for.\nkwargs: Additional keyword arguments to pass to the function.\n\nReturns\n\nThe invalidation rate of the counterfactual explanation.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Convergence.threshold_reached","page":"🧐 Reference","title":"CounterfactualExplanations.Convergence.threshold_reached","text":"threshold_reached(ce::AbstractCounterfactualExplanation, x::Union{AbstractArray,Nothing}=nothing)\n\nDetermines if the predefined threshold for the target class probability has been 
reached.\n\n\n\n\n\n","category":"function"},{"location":"reference/#CounterfactualExplanations.Evaluation.default_measures","page":"🧐 Reference","title":"CounterfactualExplanations.Evaluation.default_measures","text":"The default evaluation measures.\n\n\n\n\n\n","category":"constant"},{"location":"reference/#CounterfactualExplanations.Evaluation.Benchmark","page":"🧐 Reference","title":"CounterfactualExplanations.Evaluation.Benchmark","text":"A container for benchmarks of counterfactual explanations. Instead of subtyping DataFrame, it contains a DataFrame of evaluation measures (see this discussion for why we don't subtype DataFrame directly).\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.Evaluation.Benchmark-Tuple{}","page":"🧐 Reference","title":"CounterfactualExplanations.Evaluation.Benchmark","text":"(bmk::Benchmark)(; agg=mean)\n\nReturns a DataFrame containing evaluation measures aggregated by num_counterfactual.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Evaluation.benchmark-Tuple{CounterfactualData}","page":"🧐 Reference","title":"CounterfactualExplanations.Evaluation.benchmark","text":"benchmark(\n data::CounterfactualData;\n models::Dict{<:Any,<:Any}=standard_models_catalogue,\n generators::Union{Nothing,Dict{<:Any,<:AbstractGenerator}}=nothing,\n measure::Union{Function,Vector{Function}}=default_measures,\n n_individuals::Int=5,\n suppress_training::Bool=false,\n factual::Union{Nothing,RawTargetType}=nothing,\n target::Union{Nothing,RawTargetType}=nothing,\n store_ce::Bool=false,\n parallelizer::Union{Nothing,AbstractParallelizer}=nothing,\n kwrgs...,\n)\n\nRuns the benchmarking exercise as follows:\n\nRandomly choose a factual and target label unless specified. \nIf no pretrained models are provided, it is assumed that a dictionary of callable model objects is provided (by default using the standard_models_catalogue). \nEach of these models is then trained on the data. \nFor each model separately choose n_individuals randomly from the non-target (factual) class. For each generator create a benchmark as in benchmark(xs::Union{AbstractArray,Base.Iterators.Zip}).\nFinally, concatenate the results.\n\nIf vertical_splits is specified as an integer, the computations are split vertically into vertical_splits chunks. In this case, the results are stored in a temporary directory and concatenated afterwards. \n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Evaluation.benchmark-Tuple{Union{AbstractArray, Base.Iterators.Zip}, Union{Int64, AbstractFloat, String, Symbol}, CounterfactualData}","page":"🧐 Reference","title":"CounterfactualExplanations.Evaluation.benchmark","text":"benchmark(\n x::Union{AbstractArray,Base.Iterators.Zip},\n target::RawTargetType,\n data::CounterfactualData;\n models::Dict{<:Any,<:AbstractModel},\n generators::Dict{<:Any,<:AbstractGenerator},\n measure::Union{Function,Vector{Function}}=default_measures,\n xids::Union{Nothing,AbstractArray}=nothing,\n dataname::Union{Nothing,Symbol,String}=nothing,\n verbose::Bool=true,\n store_ce::Bool=false,\n parallelizer::Union{Nothing,AbstractParallelizer}=nothing,\n kwrgs...,\n)\n\nFirst generates counterfactual explanations for factual x, the target and data using each of the provided models and generators. 
Then generates a Benchmark for the vector of counterfactual explanations as in benchmark(counterfactual_explanations::Vector{CounterfactualExplanation}).\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Evaluation.benchmark-Tuple{Vector{CounterfactualExplanation}}","page":"🧐 Reference","title":"CounterfactualExplanations.Evaluation.benchmark","text":"benchmark(\n counterfactual_explanations::Vector{CounterfactualExplanation};\n meta_data::Union{Nothing,<:Vector{<:Dict}}=nothing,\n measure::Union{Function,Vector{Function}}=default_measures,\n store_ce::Bool=false,\n)\n\nGenerates a Benchmark for a vector of counterfactual explanations. Optionally meta_data describing each individual counterfactual explanation can be supplied. This should be a vector of dictionaries of the same length as the vector of counterfactuals. If no meta_data is supplied, it will be automatically inferred. All measure functions are applied to each counterfactual explanation. If store_ce=true, the counterfactual explanations are stored in the benchmark.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Evaluation.evaluate","page":"🧐 Reference","title":"CounterfactualExplanations.Evaluation.evaluate","text":"evaluate(\n ce::CounterfactualExplanation;\n measure::Union{Function,Vector{Function}}=default_measures,\n agg::Function=mean,\n report_each::Bool=false,\n output_format::Symbol=:Vector,\n pivot_longer::Bool=true\n)\n\nComputes evaluation measures for the counterfactual explanation. By default, no meta data is reported. For report_meta=true, meta data is automatically inferred, unless this is overwritten by meta_data. The optional meta_data argument should be a vector of dictionaries of the same length as the vector of counterfactual explanations. \n\n\n\n\n\n","category":"function"},{"location":"reference/#CounterfactualExplanations.Evaluation.redundancy-Tuple{CounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.Evaluation.redundancy","text":"redundancy(ce::CounterfactualExplanation)\n\nComputes the feature redundancy: that is, the number of features that remain unchanged from their original, factual values.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Evaluation.validity-Tuple{CounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.Evaluation.validity","text":"validity(ce::CounterfactualExplanation; γ=0.5)\n\nChecks if the counterfactual search has been successful with respect to the probability threshold γ. In case multiple counterfactuals were generated, the function returns the proportion of successful counterfactuals.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.DataPreprocessing.CounterfactualData-Tuple{AbstractMatrix, Union{AbstractMatrix, AbstractVector}}","page":"🧐 Reference","title":"CounterfactualExplanations.DataPreprocessing.CounterfactualData","text":"CounterfactualData(\n X::AbstractMatrix,\n y::RawOutputArrayType;\n mutability::Union{Vector{Symbol},Nothing}=nothing,\n domain::Union{Any,Nothing}=nothing,\n features_categorical::Union{Vector{Vector{Int}},Nothing}=nothing,\n features_continuous::Union{Vector{Int},Nothing}=nothing,\n input_encoder::Union{Nothing,InputTransformer,TypedInputTransformer}=nothing,\n)\n\nThis outer constructor method prepares features X and labels y to be used with the package. Mutability and domain constraints can be added for the features. 
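For instance (a hedged sketch; the domain bounds shown are purely illustrative, not defaults): \n\ncounterfactual_data = CounterfactualData(X, y; domain=(0.0, 1.0))\n\n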
The function also accepts arguments that specify which features are categorical and which are continuous. These arguments are currently not used. \n\nExamples\n\nusing CounterfactualExplanations.Data\nx, y = toy_data_linear()\nX = hcat(x...)\ncounterfactual_data = CounterfactualData(X,y')\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.DataPreprocessing.CounterfactualData-Tuple{Tables.MatrixTable, Union{AbstractMatrix, AbstractVector}}","page":"🧐 Reference","title":"CounterfactualExplanations.DataPreprocessing.CounterfactualData","text":"function CounterfactualData(\n X::Tables.MatrixTable,\n y::RawOutputArrayType;\n kwrgs...\n)\n\nOuter constructor method that accepts a Tables.MatrixTable. By default, the indices of categorical and continuous features are automatically inferred from the features' scitype.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.DataPreprocessing.apply_domain_constraints-Tuple{CounterfactualData, AbstractArray}","page":"🧐 Reference","title":"CounterfactualExplanations.DataPreprocessing.apply_domain_constraints","text":"apply_domain_constraints(counterfactual_data::CounterfactualData, x::AbstractArray)\n\nA subroutine that is used to apply the predetermined domain constraints.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.DataPreprocessing.fit_transformer!-Tuple{CounterfactualData, Union{Nothing, CausalInference.SCM, CounterfactualExplanations.GenerativeModels.AbstractGenerativeModel, MultivariateStats.AbstractDimensionalityReduction, StatsBase.AbstractDataTransform, Type{<:StatsBase.AbstractDataTransform}, Type{<:MultivariateStats.AbstractDimensionalityReduction}, Type{<:CounterfactualExplanations.GenerativeModels.AbstractGenerativeModel}, Type{<:CausalInference.SCM}}}","page":"🧐 Reference","title":"CounterfactualExplanations.DataPreprocessing.fit_transformer!","text":"fit_transformer!(\n data::CounterfactualData,\n input_encoder::Union{Nothing,InputTransformer,TypedInputTransformer};\n kwargs...,\n)\n\nFit a transformer to the data in place.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.DataPreprocessing.fit_transformer-Tuple{CounterfactualData, Nothing}","page":"🧐 Reference","title":"CounterfactualExplanations.DataPreprocessing.fit_transformer","text":"fit_transformer(data::CounterfactualData, input_encoder::Nothing; kwargs...)\n\nFit a transformer to the data. 
This is a no-op if input_encoder is Nothing.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.DataPreprocessing.fit_transformer-Tuple{CounterfactualData, Type{<:CausalInference.SCM}}","page":"🧐 Reference","title":"CounterfactualExplanations.DataPreprocessing.fit_transformer","text":"fit_transformer(\n data::CounterfactualData,\n input_encoder::Type{<:CausalInference.SCM};\n kwargs...,\n)\n\nFit a transformer to the data for a SCM object.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.DataPreprocessing.fit_transformer-Tuple{CounterfactualData, Type{<:CounterfactualExplanations.GenerativeModels.AbstractGenerativeModel}}","page":"🧐 Reference","title":"CounterfactualExplanations.DataPreprocessing.fit_transformer","text":"fit_transformer(\n data::CounterfactualData,\n input_encoder::Type{GenerativeModels.AbstractGenerativeModel};\n kwargs...,\n)\n\nFit a transformer to the data for a GenerativeModels.AbstractGenerativeModel object.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.DataPreprocessing.fit_transformer-Tuple{CounterfactualData, Type{<:MultivariateStats.AbstractDimensionalityReduction}}","page":"🧐 Reference","title":"CounterfactualExplanations.DataPreprocessing.fit_transformer","text":"fit_transformer(\n data::CounterfactualData,\n input_encoder::Type{MultivariateStats.AbstractDimensionalityReduction};\n kwargs...,\n)\n\nFit a transformer to the data for a MultivariateStats.AbstractDimensionalityReduction object.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.DataPreprocessing.fit_transformer-Tuple{CounterfactualData, Type{<:StatsBase.AbstractDataTransform}}","page":"🧐 Reference","title":"CounterfactualExplanations.DataPreprocessing.fit_transformer","text":"fit_transformer(\n data::CounterfactualData,\n input_encoder::Type{StatsBase.AbstractDataTransform};\n kwargs...,\n)\n\nFit a transformer to the data for a StatsBase.AbstractDataTransform object.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.DataPreprocessing.fit_transformer-Tuple{CounterfactualData, Union{CausalInference.SCM, CounterfactualExplanations.GenerativeModels.AbstractGenerativeModel, MultivariateStats.AbstractDimensionalityReduction, StatsBase.AbstractDataTransform}}","page":"🧐 Reference","title":"CounterfactualExplanations.DataPreprocessing.fit_transformer","text":"fit_transformer(data::CounterfactualData, input_encoder::InputTransformer; kwargs...)\n\nFit a transformer to the data for an InputTransformer object. 
This is a no-op.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.DataPreprocessing.select_factual-Tuple{CounterfactualData, Int64}","page":"🧐 Reference","title":"CounterfactualExplanations.DataPreprocessing.select_factual","text":"select_factual(counterfactual_data::CounterfactualData, index::Int)\n\nA convenience method that can be used to access the feature matrix.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.DataPreprocessing.select_factual-Tuple{CounterfactualData, Union{UnitRange{Int64}, Vector{Int64}}}","page":"🧐 Reference","title":"CounterfactualExplanations.DataPreprocessing.select_factual","text":"select_factual(counterfactual_data::CounterfactualData, index::Union{Vector{Int},UnitRange{Int}})\n\nA convenience method that can be used to access the feature matrix.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.DataPreprocessing.transformable_features-Tuple{CounterfactualData, Any}","page":"🧐 Reference","title":"CounterfactualExplanations.DataPreprocessing.transformable_features","text":"transformable_features(counterfactual_data::CounterfactualData, input_encoder::Any)\n\nBy default, all continuous features are transformable. This function returns the indices of all continuous features.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.DataPreprocessing.transformable_features-Tuple{CounterfactualData, Type{CausalInference.SCM}}","page":"🧐 Reference","title":"CounterfactualExplanations.DataPreprocessing.transformable_features","text":"transformable_features(\n counterfactual_data::CounterfactualData, input_encoder::Type{CausalInference.SCM}\n)\n\nReturns the indices of all features that have causal parents.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.DataPreprocessing.transformable_features-Tuple{CounterfactualData, Type{StatsBase.ZScoreTransform}}","page":"🧐 Reference","title":"CounterfactualExplanations.DataPreprocessing.transformable_features","text":"transformable_features(\n counterfactual_data::CounterfactualData, input_encoder::Type{ZScoreTransform}\n)\n\nReturns the indices of all continuous features that can be transformed. 
For constant features ZScoreTransform returns NaN.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.DataPreprocessing.transformable_features-Tuple{CounterfactualData}","page":"🧐 Reference","title":"CounterfactualExplanations.DataPreprocessing.transformable_features","text":"transformable_features(counterfactual_data::CounterfactualData)\n\nDispatches the transformable_features function to the appropriate method based on the type of the dt field.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.all_models_catalogue","page":"🧐 Reference","title":"CounterfactualExplanations.Models.all_models_catalogue","text":"all_models_catalogue\n\nA dictionary containing both differentiable and non-differentiable machine learning models.\n\n\n\n\n\n","category":"constant"},{"location":"reference/#CounterfactualExplanations.Models.standard_models_catalogue","page":"🧐 Reference","title":"CounterfactualExplanations.Models.standard_models_catalogue","text":"standard_models_catalogue\n\nA dictionary containing all differentiable machine learning models.\n\n\n\n\n\n","category":"constant"},{"location":"reference/#CounterfactualExplanations.AbstractModel-Tuple{AbstractArray}","page":"🧐 Reference","title":"CounterfactualExplanations.AbstractModel","text":"(model::AbstractModel)(X::AbstractArray)\n\nWhen called on data x, logits are returned.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.DeepEnsemble-Tuple{Any}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.DeepEnsemble","text":"DeepEnsemble(model; likelihood::Symbol=:classification_binary)\n\nAn outer constructor for a deep ensemble model.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.Linear-Tuple{Any}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.Linear","text":"Linear(model; likelihood::Symbol=:classification_binary)\n\nAn outer constructor for a linear model.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.MLP-Tuple{Any}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.MLP","text":"MLP(model; likelihood::Symbol=:classification_binary)\n\nAn outer constructor for a multi-layer perceptron (MLP) model.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.Model","page":"🧐 Reference","title":"CounterfactualExplanations.Models.Model","text":"Model <: AbstractModel\n\nConstructor for all models.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.Models.Model-Tuple{Any, CounterfactualExplanations.Models.AbstractFluxNN}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.Model","text":"Model(model, type::AbstractFluxNN; likelihood::Symbol=:classification_binary)\n\nOverloaded constructor for Flux models.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.Model-Tuple{Any, CounterfactualExplanations.Models.AbstractModelType}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.Model","text":"Model(model, type::AbstractModelType; likelihood::Symbol=:classification_binary)\n\nOuter constructor for Model where the atomic model is defined and assumed to be pre-trained.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.Model-Tuple{CounterfactualData, DeepEnsemble}","page":"🧐 
Reference","title":"CounterfactualExplanations.Models.Model","text":"(M::Model)(data::CounterfactualData, type::DeepEnsemble; kwargs...)\n\nConstructs a deep ensemble for the given data.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.Model-Tuple{CounterfactualData, Linear}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.Model","text":"(M::Model)(data::CounterfactualData, type::Linear; kwargs...)\n\nConstructs a model with one linear layer for the given data. If the output is binary, this corresponds to logistic regression, since model outputs are passed through the sigmoid function. If the output is multi-class, this corresponds to multinomial logistic regression, since model outputs are passed through the softmax function.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.Model-Tuple{CounterfactualData, MLP}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.Model","text":"(M::Model)(data::CounterfactualData, type::MLP; kwargs...)\n\nConstructs a multi-layer perceptron (MLP) for the given data.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.Model-Tuple{CounterfactualData}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.Model","text":"(M::Model)(data::CounterfactualData; kwargs...)\n\nWrap model M around the data in data.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.Model-Tuple{CounterfactualExplanations.Models.AbstractModelType}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.Model","text":"Model(type::AbstractModelType; likelihood::Symbol=:classification_binary)\n\nOuter constructor for Model where the atomic model is not yet defined.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.fit_model","page":"🧐 Reference","title":"CounterfactualExplanations.Models.fit_model","text":"fit_model(\n counterfactual_data::CounterfactualData, model::Symbol=:MLP;\n kwrgs...\n)\n\nFits one of the available default models to the counterfactual_data. The model argument can be used to specify the desired model. 
The available values correspond to the keys of the all_models_catalogue dictionary.\n\n\n\n\n\n","category":"function"},{"location":"reference/#CounterfactualExplanations.Models.fit_model-Tuple{CounterfactualData, CounterfactualExplanations.Models.AbstractModelType}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.fit_model","text":"fit_model(\n counterfactual_data::CounterfactualData, type::AbstractModelType; kwrgs...\n)\n\nA wrapper function to fit a model to the counterfactual_data for a given type of model.\n\nArguments\n\ncounterfactual_data::CounterfactualData: The data to be used for training the model.\ntype::AbstractModelType: The type of model to be trained, e.g., MLP, DecisionTreeModel, etc.\n\nExamples\n\njulia> using CounterfactualExplanations\n\njulia> using CounterfactualExplanations.Models\n\njulia> using TaijaData\n\njulia> data = CounterfactualData(load_linearly_separable()...);\n\njulia> M = fit_model(data, Linear())\nCounterfactualExplanations.Models.Model(Chain(Dense(2 => 2)), :classification_multi, CounterfactualExplanations.Models.Fitresult(Chain(Dense(2 => 2)), Dict{Any, Any}()), Linear())\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.logits-Tuple{CounterfactualExplanations.Models.Model, AbstractArray}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.logits","text":"logits(M::Model, X::AbstractArray)\n\nReturns the logits of the model.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.logits-Tuple{CounterfactualExplanations.Models.Model, CounterfactualExplanations.Models.AbstractFluxNN, AbstractArray}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.logits","text":"logits(M::Model, type::AbstractFluxNN, X::AbstractArray)\n\nOverloads the logits function for Flux models.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.logits-Tuple{CounterfactualExplanations.Models.Model, CounterfactualExplanations.Models.MLJModelType, AbstractArray}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.logits","text":"logits(M::Model, type::MLJModelType, X::AbstractArray)\n\nOverloads the logits method for MLJ models.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.logits-Tuple{CounterfactualExplanations.Models.Model, DeepEnsemble, AbstractArray}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.logits","text":"logits(M::Model, type::DeepEnsemble, X::AbstractArray)\n\nOverloads the logits function for deep ensembles.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.model_evaluation-Tuple{AbstractModel, CounterfactualData}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.model_evaluation","text":"model_evaluation(M::AbstractModel, test_data::CounterfactualData)\n\nHelper function to compute evaluation measures for AbstractModel on a (test) data set. By default, it computes the accuracy. Any other measure, e.g. from the StatisticalMeasures package, can be passed as an argument. 
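A minimal sketch of the default (accuracy) call, assuming a fitted model M and a held-out test_data object: \n\nacc = model_evaluation(M, test_data)\n\n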
Currently, only measures applicable to classification tasks are supported.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.predict_label-Tuple{AbstractModel, CounterfactualData, AbstractArray}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.predict_label","text":"predict_label(M::AbstractModel, counterfactual_data::CounterfactualData, X::AbstractArray)\n\nReturns the predicted output label for a given model M, data set counterfactual_data and input data X.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.predict_label-Tuple{AbstractModel, CounterfactualData}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.predict_label","text":"predict_label(M::AbstractModel, counterfactual_data::CounterfactualData)\n\nReturns the predicted output labels for all data points of data set counterfactual_data for a given model M.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.predict_proba-Tuple{AbstractModel, Union{Nothing, CounterfactualData}, Union{Nothing, AbstractArray}}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.predict_proba","text":"predict_proba(M::AbstractModel, counterfactual_data::CounterfactualData, X::Union{Nothing,AbstractArray})\n\nReturns the predicted output probabilities for a given model M, data set counterfactual_data and input data X.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.probs-Tuple{CounterfactualExplanations.Models.Model, AbstractArray}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.probs","text":"probs(M::Model, X::AbstractArray)\n\nReturns the probabilities of the model.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.probs-Tuple{CounterfactualExplanations.Models.Model, CounterfactualExplanations.Models.AbstractFluxNN, AbstractArray}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.probs","text":"probs(M::Model, type::AbstractFluxNN, X::AbstractArray)\n\nOverloads the probs function for Flux models.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.probs-Tuple{CounterfactualExplanations.Models.Model, CounterfactualExplanations.Models.MLJModelType, AbstractArray}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.probs","text":"probs(\n M::Model,\n type::MLJModelType,\n X::AbstractArray,\n)\n\nOverloads the probs method for MLJ models. \n\nNote for developers\n\nNote that currently the underlying MLJ methods (reformat, predict) are incompatible with Zygote's autodiff. 
For differentiable MLJ models, the probs and logits methods need to be overloaded.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.probs-Tuple{CounterfactualExplanations.Models.Model, DeepEnsemble, AbstractArray}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.probs","text":"probs(M::Model, type::DeepEnsemble, X::AbstractArray)\n\nOverloads the probs function for deep ensembles.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Generators.generator_catalogue","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.generator_catalogue","text":"A dictionary containing the constructors of all available counterfactual generators.\n\n\n\n\n\n","category":"constant"},{"location":"reference/#CounterfactualExplanations.Generators.AbstractGradientBasedGenerator","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.AbstractGradientBasedGenerator","text":"AbstractGradientBasedGenerator\n\nAn abstract type that serves as the base type for gradient-based counterfactual generators. \n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.Generators.AbstractNonGradientBasedGenerator","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.AbstractNonGradientBasedGenerator","text":"AbstractNonGradientBasedGenerator\n\nAn abstract type that serves as the base type for non-gradient-based counterfactual generators. \n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.Generators.FeatureTweakGenerator","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.FeatureTweakGenerator","text":"Feature Tweak counterfactual generator class.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.Generators.FeatureTweakGenerator-Tuple{}","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.FeatureTweakGenerator","text":"FeatureTweakGenerator(; penalty::Union{Nothing,Function,Vector{Function}}=Objectives.distance_l2, ϵ::AbstractFloat=0.1)\n\nConstructs a new Feature Tweak Generator object.\n\nUses the L2-norm as the penalty to measure the distance between the counterfactual and the factual. According to the paper by Tolomei et al., another recommended choice for the penalty in addition to the L2-norm is the L0-norm. The L0-norm simply minimizes the number of features that are changed through the tweak.\n\nArguments\n\npenalty::Union{Nothing,Function,Vector{Function}}: The penalty function to use for the generator. Defaults to distance_l2.\nϵ::AbstractFloat: The tolerance value for the feature tweaks. Described at length in Tolomei et al. (https://arxiv.org/pdf/1706.06691.pdf). 
Defaults to 0.1.\n\nReturns\n\ngenerator::FeatureTweakGenerator: A non-gradient-based generator that can be used to generate counterfactuals using the feature tweak method.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Generators.GradientBasedGenerator","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.GradientBasedGenerator","text":"Base class for gradient-based counterfactual generators.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.Generators.GradientBasedGenerator-Tuple{}","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.GradientBasedGenerator","text":"GradientBasedGenerator(;\n\tloss::Union{Nothing,Function}=nothing,\n\tpenalty::Penalty=nothing,\n\tλ::Union{Nothing,AbstractFloat,Vector{AbstractFloat}}=nothing,\n\tlatent_space::Bool=false,\n\topt::Flux.Optimise.AbstractOptimiser=Flux.Descent(),\n generative_model_params::NamedTuple=(;),\n)\n\nDefault outer constructor for GradientBasedGenerator.\n\nArguments\n\nloss::Union{Nothing,Function}=nothing: The loss function used by the model.\npenalty::Penalty=nothing: A penalty function for the generator to penalize counterfactuals too far from the original point.\nλ::Union{Nothing,AbstractFloat,Vector{AbstractFloat}}=nothing: The weight of the penalty function.\nlatent_space::Bool=false: Whether to use the latent space of a generative model to generate counterfactuals.\nopt::Flux.Optimise.AbstractOptimiser=Flux.Descent(): The optimizer to use for the generator.\ngenerative_model_params::NamedTuple: The parameters of the generative model associated with the generator.\n\nReturns\n\ngenerator::GradientBasedGenerator: A gradient-based counterfactual generator.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Generators.GrowingSpheresGenerator","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.GrowingSpheresGenerator","text":"Growing Spheres counterfactual generator class.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.Generators.GrowingSpheresGenerator-Tuple{}","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.GrowingSpheresGenerator","text":"GrowingSpheresGenerator(; n::Int=100, η::Float64=0.1, kwargs...)\n\nConstructs a new Growing Spheres Generator object.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Generators.JSMADescent","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.JSMADescent","text":"An optimisation rule that can be used to implement a Jacobian-based Saliency Map Attack.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.Generators.JSMADescent-Tuple{}","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.JSMADescent","text":"Outer constructor for the JSMADescent rule.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Generators.CLUEGenerator-Tuple{}","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.CLUEGenerator","text":"Constructor for CLUEGenerator.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Generators.ClaPROARGenerator-Tuple{}","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.ClaPROARGenerator","text":"Constructor for ClaPROARGenerator.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Generators.DiCEGenerator-Tuple{}","page":"🧐 
Reference","title":"CounterfactualExplanations.Generators.DiCEGenerator","text":"Constructor for DiCEGenerator.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Generators.ECCoGenerator-Tuple{}","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.ECCoGenerator","text":"Constructor for ECCoGenerator. This corresponds to the generator proposed in https://arxiv.org/abs/2312.10648, without the conformal set size penalty.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Generators.GenericGenerator-Tuple{}","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.GenericGenerator","text":"Constructor for GenericGenerator.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Generators.GravitationalGenerator-Tuple{}","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.GravitationalGenerator","text":"Constructor for GravitationalGenerator.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Generators.GreedyGenerator-Tuple{}","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.GreedyGenerator","text":"Constructor for GreedyGenerator.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Generators.ProbeGenerator-Tuple{}","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.ProbeGenerator","text":"Constructor for ProbeGenerator.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Generators.REVISEGenerator-Tuple{}","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.REVISEGenerator","text":"Constructor for REVISEGenerator.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Generators.WachterGenerator-Tuple{}","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.WachterGenerator","text":"Constructor for WachterGenerator.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Generators.conditions_satisfied-Tuple{AbstractGradientBasedGenerator, AbstractCounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.conditions_satisfied","text":"conditions_satisfied(generator::AbstractGradientBasedGenerator, ce::AbstractCounterfactualExplanation)\n\nThe default method to check if the all conditions for convergence of the counterfactual search have been satisified for gradient-based generators. 
By default, gradient-based search is considered to have converged as soon as the proposed feature changes for all features are smaller than one percent of their respective standard deviations.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Generators.generate_perturbations-Tuple{AbstractGenerator, AbstractCounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.generate_perturbations","text":"generate_perturbations(\n generator::AbstractGenerator, ce::AbstractCounterfactualExplanation\n)\n\nThe default method to generate feature perturbations for any generator.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Generators.generate_perturbations-Tuple{AbstractGradientBasedGenerator, AbstractCounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.generate_perturbations","text":"generate_perturbations(generator::AbstractGradientBasedGenerator, ce::AbstractCounterfactualExplanation)\n\nThe default method to generate feature perturbations for gradient-based generators through simple gradient descent.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Generators.hinge_loss-Tuple{AbstractConvergence, AbstractCounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.hinge_loss","text":"hinge_loss(convergence::AbstractConvergence, ce::AbstractCounterfactualExplanation)\n\nThe default hinge loss for any convergence criterion. Can be overridden inside the Convergence module as part of the definition of specific convergence criteria.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Generators.@objective-Tuple{Any, Any}","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.@objective","text":"objective(generator, ex)\n\nA macro that can be used to define the counterfactual search objective.\n\n\n\n\n\n","category":"macro"},{"location":"reference/#CounterfactualExplanations.Generators.@search_feature_space-Tuple{Any}","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.@search_feature_space","text":"search_feature_space(generator)\n\nA simple macro that can be used to specify feature space search.\n\n\n\n\n\n","category":"macro"},{"location":"reference/#CounterfactualExplanations.Generators.@search_latent_space-Tuple{Any}","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.@search_latent_space","text":"search_latent_space(generator)\n\nA simple macro that can be used to specify latent space search.\n\n\n\n\n\n","category":"macro"},{"location":"reference/#CounterfactualExplanations.Generators.@with_optimiser-Tuple{Any, Any}","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.@with_optimiser","text":"with_optimiser(generator, optimiser)\n\nA simple macro that can be used to specify the optimiser to be used.\n\n\n\n\n\n","category":"macro"},{"location":"reference/#CounterfactualExplanations.Objectives.ddp_diversity-Tuple{AbstractCounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.Objectives.ddp_diversity","text":"ddp_diversity(\n ce::AbstractCounterfactualExplanation;\n perturbation_size=1e-5\n)\n\nEvaluates how diverse the counterfactuals are using a Determinantal Point Process (DPP).\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Objectives.distance-Tuple{AbstractCounterfactualExplanation}","page":"🧐 
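A sketch of how the macros documented above compose into a custom generator; the composed objective follows the pattern used in this package's how-to guides, and the penalty weight and optimiser are illustrative assumptions:

```julia
using CounterfactualExplanations.Generators
import Flux

# Sketch: start from a blank gradient-based generator and customise it
# with the macros documented above (weight 0.1 is illustrative).
generator = GradientBasedGenerator()
@objective(generator, logitbinarycrossentropy + 0.1distance_l2)
@with_optimiser(generator, Flux.Adam(0.01))
@search_latent_space(generator)
```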
Reference","title":"CounterfactualExplanations.Objectives.distance","text":"distance(\n ce::AbstractCounterfactualExplanation;\n from::Union{Nothing,AbstractArray}=nothing,\n agg=mean,\n p::Real=1,\n weights::Union{Nothing,AbstractArray}=nothing,\n)\n\nComputes the distance of the counterfactual to the original factual.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Objectives.distance_l0-Tuple{AbstractCounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.Objectives.distance_l0","text":"distance_l0(ce::AbstractCounterfactualExplanation)\n\nComputes the L0 distance of the counterfactual to the original factual.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Objectives.distance_l1-Tuple{AbstractCounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.Objectives.distance_l1","text":"distance_l1(ce::AbstractCounterfactualExplanation)\n\nComputes the L1 distance of the counterfactual to the original factual.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Objectives.distance_l2-Tuple{AbstractCounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.Objectives.distance_l2","text":"distance_l2(ce::AbstractCounterfactualExplanation)\n\nComputes the L2 (Euclidean) distance of the counterfactual to the original factual.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Objectives.distance_linf-Tuple{AbstractCounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.Objectives.distance_linf","text":"distance_linf(ce::AbstractCounterfactualExplanation)\n\nComputes the L-inf distance of the counterfactual to the original factual.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Objectives.distance_mad-Tuple{AbstractCounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.Objectives.distance_mad","text":"distance_mad(ce::AbstractCounterfactualExplanation; agg=mean)\n\nThis is the distance measure proposed by Wachter et al. (2017).\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Objectives.predictive_entropy-Tuple{AbstractCounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.Objectives.predictive_entropy","text":"predictive_entropy(ce::AbstractCounterfactualExplanation; agg=Statistics.mean)\n\nComputes the predictive entropy of the counterfactuals. 
Explained in https://arxiv.org/abs/1406.2541.\n\n\n\n\n\n","category":"method"},{"location":"reference/#Flux.Losses.logitbinarycrossentropy-Tuple{AbstractCounterfactualExplanation}","page":"🧐 Reference","title":"Flux.Losses.logitbinarycrossentropy","text":"Flux.Losses.logitbinarycrossentropy(ce::AbstractCounterfactualExplanation)\n\nSimply extends the logitbinarycrossentropy method to work with objects of type AbstractCounterfactualExplanation.\n\n\n\n\n\n","category":"method"},{"location":"reference/#Flux.Losses.logitcrossentropy-Tuple{AbstractCounterfactualExplanation}","page":"🧐 Reference","title":"Flux.Losses.logitcrossentropy","text":"Flux.Losses.logitcrossentropy(ce::AbstractCounterfactualExplanation)\n\nSimply extends the logitcrossentropy method to work with objects of type AbstractCounterfactualExplanation.\n\n\n\n\n\n","category":"method"},{"location":"reference/#Flux.Losses.mse-Tuple{AbstractCounterfactualExplanation}","page":"🧐 Reference","title":"Flux.Losses.mse","text":"Flux.Losses.mse(ce::AbstractCounterfactualExplanation)\n\nSimply extends the mse method to work with objects of type AbstractCounterfactualExplanation.\n\n\n\n\n\n","category":"method"},{"location":"reference/#Internal-functions","page":"🧐 Reference","title":"Internal functions","text":"","category":"section"},{"location":"reference/","page":"🧐 Reference","title":"🧐 Reference","text":"Modules = [\n CounterfactualExplanations, \n CounterfactualExplanations.Convergence,\n CounterfactualExplanations.Evaluation,\n CounterfactualExplanations.DataPreprocessing,\n CounterfactualExplanations.Models, \n CounterfactualExplanations.GenerativeModels,\n CounterfactualExplanations.Generators, \n CounterfactualExplanations.Objectives\n]\nPublic = false","category":"page"},{"location":"reference/#CounterfactualExplanations.CRE","page":"🧐 Reference","title":"CounterfactualExplanations.CRE","text":"CRE <: AbstractCounterfactualExplanation\n\nA Counterfactual Rule Explanation (CRE) is a global explanation for a given target, model M, data and generator.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.CRE-Tuple{AbstractArray}","page":"🧐 Reference","title":"CounterfactualExplanations.CRE","text":"(cre::CRE)(x::AbstractArray)\n\nGenerates a local counterfactual point explanation for x using the generator.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.DecisionTreeModel","page":"🧐 Reference","title":"CounterfactualExplanations.DecisionTreeModel","text":"DecisionTreeModel\n\nConcrete type for tree-based models from DecisionTree.jl. Since DecisionTree.jl has an MLJ interface, we subtype the MLJModelType model type.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.FluxModelParams","page":"🧐 Reference","title":"CounterfactualExplanations.FluxModelParams","text":"FluxModelParams\n\nDefault MLP training parameters.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.JEM","page":"🧐 Reference","title":"CounterfactualExplanations.JEM","text":"JEM\n\nConcrete type for joint-energy models from JointEnergyModels. Since JointEnergyModels has an MLJ interface, we subtype the MLJModelType model type.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.LaplaceReduxModel","page":"🧐 Reference","title":"CounterfactualExplanations.LaplaceReduxModel","text":"LaplaceReduxModel\n\nConcrete type for neural networks with Laplace Approximation from the LaplaceRedux package. 
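Because the extended loss methods above take the explanation object directly, they can plausibly be passed as a generator's loss function; a hedged sketch of that assumed usage:

```julia
import Flux
using CounterfactualExplanations.Generators

# Sketch (assumed usage pattern): plug an extended Flux loss into the
# generator's `loss` field.
generator = GradientBasedGenerator(; loss=Flux.Losses.logitcrossentropy)
```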
Currently subtyping the AbstractFluxNN model type, although this may be changed to MLJ in the future.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.NeuroTreeModel","page":"🧐 Reference","title":"CounterfactualExplanations.NeuroTreeModel","text":"NeuroTreeModel\n\nConcrete type for differentiable tree-based models from NeuroTreeModels. Since NeuroTreeModels has an MLJ interface, we subtype the MLJModelType model type.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.RandomForestModel","page":"🧐 Reference","title":"CounterfactualExplanations.RandomForestModel","text":"RandomForestModel\n\nConcrete type for random forest model from DecisionTree.jl. Since the DecisionTree package has an MLJ interface, we subtype the MLJModelType model type.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.Rule","page":"🧐 Reference","title":"CounterfactualExplanations.Rule","text":"Rule\n\nA Rule is just a list of bounds for the different features. See also CRE.\n\n\n\n\n\n","category":"type"},{"location":"reference/#Base.Broadcast.broadcastable-Tuple{AbstractGenerator}","page":"🧐 Reference","title":"Base.Broadcast.broadcastable","text":"Treat AbstractGenerator as scalar when broadcasting.\n\n\n\n\n\n","category":"method"},{"location":"reference/#Base.Broadcast.broadcastable-Tuple{AbstractModel}","page":"🧐 Reference","title":"Base.Broadcast.broadcastable","text":"Treat AbstractModel as scalar when broadcasting.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.adjust_shape!-Tuple{CounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.adjust_shape!","text":"adjust_shape!(ce::CounterfactualExplanation)\n\nA convenience method that adjusts the dimensions of the counterfactual state and related fields.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.adjust_shape-Tuple{CounterfactualExplanation, AbstractArray}","page":"🧐 Reference","title":"CounterfactualExplanations.adjust_shape","text":"adjust_shape(\n ce::CounterfactualExplanation, \n x::AbstractArray\n)\n\nA convenience method that adjusts the dimensions of x.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.already_in_target_class-Tuple{AbstractCounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.already_in_target_class","text":"already_in_target_class(ce::CounterfactualExplanation)\n\nCheck if the factual is already in the target class.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.apply_domain_constraints!-Tuple{CounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.apply_domain_constraints!","text":"apply_domain_constraints!(ce::CounterfactualExplanation)\n\nWrapper function that applies underlying domain constraints.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.apply_mutability-Tuple{CounterfactualExplanation, AbstractArray}","page":"🧐 Reference","title":"CounterfactualExplanations.apply_mutability","text":"apply_mutability(\n ce::CounterfactualExplanation,\n Δs′::AbstractArray,\n)\n\nA subroutine that applies mutability constraints to the proposed vector of feature perturbations.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.counterfactual-Tuple{CounterfactualExplanation}","page":"🧐 
Reference","title":"CounterfactualExplanations.counterfactual","text":"counterfactual(ce::CounterfactualExplanation)\n\nA convenience method that returns the counterfactual.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.counterfactual_label-Tuple{CounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.counterfactual_label","text":"counterfactual_label(ce::CounterfactualExplanation)\n\nA convenience method that returns the predicted label of the counterfactual.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.counterfactual_label_path-Tuple{CounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.counterfactual_label_path","text":"counterfactual_label_path(ce::CounterfactualExplanation)\n\nReturns the counterfactual labels for each step of the search.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.counterfactual_probability","page":"🧐 Reference","title":"CounterfactualExplanations.counterfactual_probability","text":"counterfactual_probability(ce::CounterfactualExplanation)\n\nA convenience method that computes the class probabilities of the counterfactual.\n\n\n\n\n\n","category":"function"},{"location":"reference/#CounterfactualExplanations.counterfactual_probability_path-Tuple{CounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.counterfactual_probability_path","text":"counterfactual_probability_path(ce::CounterfactualExplanation)\n\nReturns the counterfactual probabilities for each step of the search.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.decode_array-Tuple{CounterfactualData, CausalInference.SCM, AbstractArray}","page":"🧐 Reference","title":"CounterfactualExplanations.decode_array","text":"decode_array(\n data::CounterfactualData,\n dt::CausalInference.SCM,\n x::AbstractArray,\n)\n\nHelper function to decode an array x using a data transform dt::GenerativeModels.AbstractGenerativeModel.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.decode_array-Tuple{CounterfactualData, CounterfactualExplanations.GenerativeModels.AbstractGenerativeModel, AbstractArray}","page":"🧐 Reference","title":"CounterfactualExplanations.decode_array","text":"decode_array(dt::GenerativeModels.AbstractGenerativeModel, x::AbstractArray)\n\nHelper function to decode an array x using a data transform dt::GenerativeModels.AbstractGenerativeModel.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.decode_array-Tuple{CounterfactualData, MultivariateStats.AbstractDimensionalityReduction, AbstractArray}","page":"🧐 Reference","title":"CounterfactualExplanations.decode_array","text":"decode_array(dt::MultivariateStats.AbstractDimensionalityReduction, x::AbstractArray)\n\nHelper function to decode an array x using a data transform dt::MultivariateStats.AbstractDimensionalityReduction.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.decode_array-Tuple{CounterfactualData, Nothing, AbstractArray}","page":"🧐 Reference","title":"CounterfactualExplanations.decode_array","text":"decode_array(dt::Nothing, x::AbstractArray)\n\nHelper function to decode an array x using a data transform dt::Nothing. 
This is a no-op.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.decode_array-Tuple{CounterfactualData, StatsBase.AbstractDataTransform, AbstractArray}","page":"🧐 Reference","title":"CounterfactualExplanations.decode_array","text":"decode_array(dt::StatsBase.AbstractDataTransform, x::AbstractArray)\n\nHelper function to decode an array x using a data transform dt::StatsBase.AbstractDataTransform.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.decode_state","page":"🧐 Reference","title":"CounterfactualExplanations.decode_state","text":"function decode_state( ce::CounterfactualExplanation, x::Union{AbstractArray,Nothing}=nothing, )\n\nApplies all the applicable decoding functions:\n\nIf applicable, map the state variable back from the latent space to the feature space.\nIf and where applicable, inverse-transform features.\nReconstruct all categorical encodings.\n\nFinally, the decoded counterfactual is returned.\n\n\n\n\n\n","category":"function"},{"location":"reference/#CounterfactualExplanations.decode_state!","page":"🧐 Reference","title":"CounterfactualExplanations.decode_state!","text":"decode_state!(ce::CounterfactualExplanation, x::Union{AbstractArray,Nothing}=nothing)\n\nIn-place version of decode_state.\n\n\n\n\n\n","category":"function"},{"location":"reference/#CounterfactualExplanations.encode_array-Tuple{CounterfactualData, CausalInference.SCM, AbstractArray}","page":"🧐 Reference","title":"CounterfactualExplanations.encode_array","text":"encode_array(data::CounterfactualData, dt::CausalInference.SCM, x::AbstractArray)\n\nHelper function to encode an array x using a data transform dt::CausalInference.SCM. This is a no-op.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.encode_array-Tuple{CounterfactualData, CounterfactualExplanations.GenerativeModels.AbstractGenerativeModel, AbstractArray}","page":"🧐 Reference","title":"CounterfactualExplanations.encode_array","text":"encode_array(dt::GenerativeModels.AbstractGenerativeModel, x::AbstractArray)\n\nHelper function to encode an array x using a data transform dt::GenerativeModels.AbstractGenerativeModel.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.encode_array-Tuple{CounterfactualData, MultivariateStats.AbstractDimensionalityReduction, AbstractArray}","page":"🧐 Reference","title":"CounterfactualExplanations.encode_array","text":"encode_array(dt::MultivariateStats.AbstractDimensionalityReduction, x::AbstractArray)\n\nHelper function to encode an array x using a data transform dt::MultivariateStats.AbstractDimensionalityReduction.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.encode_array-Tuple{CounterfactualData, Nothing, AbstractArray}","page":"🧐 Reference","title":"CounterfactualExplanations.encode_array","text":"encode_array(dt::Nothing, x::AbstractArray)\n\nHelper function to encode an array x using a data transform dt::Nothing. 
This is a no-op.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.encode_array-Tuple{CounterfactualData, StatsBase.AbstractDataTransform, AbstractArray}","page":"🧐 Reference","title":"CounterfactualExplanations.encode_array","text":"encode_array(dt::StatsBase.AbstractDataTransform, x::AbstractArray)\n\nHelper function to encode an array x using a data transform dt::StatsBase.AbstractDataTransform.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.encode_state","page":"🧐 Reference","title":"CounterfactualExplanations.encode_state","text":"function encode_state( ce::CounterfactualExplanation, x::Union{AbstractArray,Nothing} = nothing, )\n\nApplies all required encodings to x:\n\nIf applicable, it maps x to the latent space learned by the generative model.\nIf and where applicable, it rescales features. \n\nFinally, it returns the encoded state variable.\n\n\n\n\n\n","category":"function"},{"location":"reference/#CounterfactualExplanations.encode_state!","page":"🧐 Reference","title":"CounterfactualExplanations.encode_state!","text":"encode_state!(ce::CounterfactualExplanation, x::Union{AbstractArray,Nothing}=nothing)\n\nIn-place version of encode_state.\n\n\n\n\n\n","category":"function"},{"location":"reference/#CounterfactualExplanations.factual-Tuple{CounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.factual","text":"factual(ce::CounterfactualExplanation)\n\nA convenience method to retrieve the factual x.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.factual_label-Tuple{CounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.factual_label","text":"factual_label(ce::CounterfactualExplanation)\n\nA convenience method to get the predicted label associated with the factual.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.factual_probability-Tuple{CounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.factual_probability","text":"factual_probability(ce::CounterfactualExplanation)\n\nA convenience method to compute the class probabilities of the factual.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.find_potential_neighbours","page":"🧐 Reference","title":"CounterfactualExplanations.find_potential_neighbours","text":"find_potential_neighbours(ce::AbstractCounterfactualExplanation)\n\nFinds potential neighbours for the selected factual data point.\n\n\n\n\n\n","category":"function"},{"location":"reference/#CounterfactualExplanations.get_meta-Tuple{CounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.get_meta","text":"get_meta(ce::CounterfactualExplanation)\n\nReturns meta data for a counterfactual explanation.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.guess_likelihood-Tuple{Union{AbstractMatrix, AbstractVector}}","page":"🧐 Reference","title":"CounterfactualExplanations.guess_likelihood","text":"guess_likelihood(y::RawOutputArrayType)\n\nGuess the likelihood based on the scientific type of the output array. 
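To make the encode_array/decode_array dispatch concrete, here is a round-trip with the StatsBase transform type they handle; this illustrates the transform API itself, not the package's internal code:

```julia
using StatsBase

# A z-score transform of the kind `encode_array`/`decode_array` dispatch on.
X  = rand(3, 100)                        # 3 features, 100 observations
dt = StatsBase.fit(ZScoreTransform, X; dims=2)
Z  = StatsBase.transform(dt, X)          # roughly what encoding amounts to here
X2 = StatsBase.reconstruct(dt, Z)        # roughly what decoding amounts to
```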
Returns a symbol indicating the guessed likelihood and the scientific type of the output array.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.guess_loss-Tuple{CounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.guess_loss","text":"guess_loss(ce::CounterfactualExplanation)\n\nGuesses the loss function to be used for the counterfactual search in case likelihood field is specified for the AbstractModel instance and no loss function was explicitly declared for AbstractGenerator instance.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.initialize!-Tuple{CounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.initialize!","text":"initialize!(ce::CounterfactualExplanation)\n\nInitializes the counterfactual explanation. This method is called by the constructor. It does the following:\n\nCreates a dictionary to store information about the search.\nInitializes the counterfactual state.\nInitializes the search path.\nInitializes the loss.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.initialize_state!-Tuple{CounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.initialize_state!","text":"initialize_state!(ce::CounterfactualExplanation)\n\nInitializes the starting point for the factual(s) in-place.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.initialize_state-Tuple{CounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.initialize_state","text":"initialize_state(ce::CounterfactualExplanation)\n\nInitializes the starting point for the factual(s):\n\nIf ce.initialization is set to :identity or counterfactuals are searched in a latent space, then nothing is done.\nIf ce.initialization is set to :add_perturbation, then a random perturbation is added to the factual following Slack (2021): https://arxiv.org/abs/2106.02666. The authors show that this improves adversarial robustness.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.outdim-Tuple{CounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.outdim","text":"outdim(ce::CounterfactualExplanation)\n\nA convenience method that returns the output dimension of the predictive model.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.polynomial_decay-Tuple{Real, Real, Real, Int64}","page":"🧐 Reference","title":"CounterfactualExplanations.polynomial_decay","text":"polynomial_decay(a::Real, b::Real, decay::Real, t::Int)\n\nComputes the polynomial decay function as in Welling and Teh 
(2011): https://www.stats.ox.ac.uk/~teh/research/compstats/WelTeh2011a.pdf.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.reset!-Tuple{CounterfactualExplanations.FluxModelParams}","page":"🧐 Reference","title":"CounterfactualExplanations.reset!","text":"reset!(flux_training_params::FluxModelParams)\n\nRestores the default parameter values.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.steps_exhausted-Tuple{CounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.steps_exhausted","text":"steps_exhausted(ce::CounterfactualExplanation)\n\nA convenience method that checks if the number of maximum iterations has been exhausted.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.target_probs_path-Tuple{CounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.target_probs_path","text":"target_probs_path(ce::CounterfactualExplanation)\n\nReturns the target probabilities for each step of the search.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.update!-Tuple{CounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.update!","text":"update!(ce::CounterfactualExplanation)\n\nAn important subroutine that updates the counterfactual explanation. It takes a snapshot of the current counterfactual search state and passes it to the generator. Based on the current state the generator generates perturbations. Various constraints are then applied to the proposed vector of feature perturbations. Finally, the counterfactual search state is updated.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Convergence.max_iter-Tuple{AbstractConvergence}","page":"🧐 Reference","title":"CounterfactualExplanations.Convergence.max_iter","text":"max_iter(conv::AbstractConvergence)\n\nReturns the maximum number of iterations specified.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Evaluation.distance_measures","page":"🧐 Reference","title":"CounterfactualExplanations.Evaluation.distance_measures","text":"All distance measures.\n\n\n\n\n\n","category":"constant"},{"location":"reference/#CounterfactualExplanations.Evaluation.EnergySampler","page":"🧐 Reference","title":"CounterfactualExplanations.Evaluation.EnergySampler","text":"Base type that stores information relevant to energy-based posterior sampling from AbstractModel.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.Evaluation.EnergySampler-Tuple{AbstractModel, Distributions.Distribution, Distributions.Distribution, NTuple{N, Int64} where N, Int64}","page":"🧐 Reference","title":"CounterfactualExplanations.Evaluation.EnergySampler","text":"EnergySampler(\n model::AbstractModel,\n 𝒟x::Distribution,\n 𝒟y::Distribution,\n input_size::Dims,\n yidx::Int;\n opt::Union{Nothing,AbstractSamplingRule}=nothing,\n nsamples::Int=100,\n niter_final::Int=1000,\n ntransitions::Int=0,\n opt_warmup::Union{Nothing,AbstractSamplingRule}=nothing,\n niter::Int=20,\n batch_size::Int=50,\n prob_buffer::AbstractFloat=0.95,\n kwargs...,\n)\n\nConstructor for EnergySampler, which is used to sample from the posterior distribution of the model conditioned on y.\n\nArguments\n\nmodel::AbstractModel: The model to be used for sampling.\ndata::CounterfactualData: The data to be used for sampling.\ny::Any: The conditioning value.\nopt::AbstractSamplingRule=ImproperSGLD(): 
The sampling rule to be used. By default, SGLD is used with a = (2 / std(Uniform())) * std(𝒟x) and b = 1 and γ=0.9.\nnsamples::Int=100: The number of samples to include in the final empirical posterior distribution.\nniter_final::Int=1000: The number of iterations for generating samples from the posterior distribution. Typically, this number will be larger than the number of iterations during PMC training. \nntransitions::Int=0: The number of transitions for (optionally) warming up the sampler. By default, this is set to 0 and the sampler is not warmed up. For values larger than 0, the sampler is trained through PMC for niter iterations and ntransitions transitions to build a buffer of samples. The buffer is used for posterior sampling.\nopt_warmup::Union{Nothing,AbstractSamplingRule}=nothing: The sampling rule to be used for warm-up. By default, ImproperSGLD is used with α = (2 / std(Uniform())) * std(𝒟x) and γ = 0.005α.\nniter::Int=100: The number of iterations for training the sampler through PMC.\nbatch_size::Int=50: The batch size for training the sampler.\nprob_buffer::AbstractFloat=0.5: The probability of drawing samples from the replay buffer. Smaller values will result in more samples being drawn from the prior and typically lead to better mixing and diversity in the samples.\nkwargs...: Additional keyword arguments to be passed on to the sampler and PMC.\n\nReturns\n\nEnergySampler: An instance of EnergySampler.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Evaluation.EnergySampler-Tuple{CounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.Evaluation.EnergySampler","text":"EnergySampler(ce::CounterfactualExplanation; kwrgs...)\n\nOverloads the EnergySampler constructor to accept a CounterfactualExplanation object.\n\n\n\n\n\n","category":"method"},{"location":"reference/#Base.rand","page":"🧐 Reference","title":"Base.rand","text":"Base.rand(sampler::EnergySampler, n::Int=100; from_posterior=true, niter=500)\n\nOverloads the rand method to randomly draw n samples from EnergySampler. If from_posterior is true, the samples are drawn from the posterior distribution. Otherwise, the samples are generated from the model conditioned on the target value using a single chain (see generate_posterior_samples).\n\nArguments\n\nsampler::EnergySampler: The EnergySampler object to be used for sampling.\nn::Int=100: The number of samples to draw.\nfrom_posterior::Bool=true: Whether to draw samples from the posterior distribution.\nniter::Int=500: The number of iterations for generating samples through Monte Carlo sampling (single chain).\n\nReturns\n\nAbstractArray: The samples.\n\n\n\n\n\n","category":"function"},{"location":"reference/#Base.vcat-Tuple{CounterfactualExplanations.Evaluation.Benchmark, CounterfactualExplanations.Evaluation.Benchmark}","page":"🧐 Reference","title":"Base.vcat","text":"Base.vcat(bmk1::Benchmark, bmk2::Benchmark)\n\nVertically concatenates two Benchmark objects.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Evaluation.compute_measure-Tuple{CounterfactualExplanation, Function, Function}","page":"🧐 Reference","title":"CounterfactualExplanations.Evaluation.compute_measure","text":"compute_measure(ce::CounterfactualExplanation, measure::Function, agg::Function)\n\nComputes a single measure for a counterfactual explanation. 
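A hedged usage sketch for the EnergySampler constructor and the overloaded rand documented above; `ce` is assumed to be an existing counterfactual explanation:

```julia
using CounterfactualExplanations.Evaluation: EnergySampler

# Sketch: build a sampler from an existing counterfactual explanation and
# draw samples conditioned on its target class (`ce` is assumed to exist).
sampler = EnergySampler(ce)
X̂ = rand(sampler, 100)   # 100 posterior samples, per the docstring above
```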
The measure is applied to the counterfactual explanation ce and aggregated using the aggregation function agg.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Evaluation.define_prior-Tuple{CounterfactualData}","page":"🧐 Reference","title":"CounterfactualExplanations.Evaluation.define_prior","text":"define_prior(\n data::CounterfactualData;\n 𝒟x::Union{Nothing,Distribution}=nothing,\n 𝒟y::Union{Nothing,Distribution}=nothing,\n n_std::Int=3,\n)\n\nDefines the prior for the data. The prior sampling space is a uniform distribution with bounds derived from the mean and standard deviation of the data, extended by n_std standard deviations.\n\nArguments\n\ndata::CounterfactualData: The data to be used for defining the prior sampling space.\nn_std::Int=3: The number of standard deviations to extend the bounds.\n\nReturns\n\nUniform: The uniform distribution defining the prior sampling space.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Evaluation.distance_from_posterior-Tuple{AbstractCounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.Evaluation.distance_from_posterior","text":"distance_from_posterior(ce::AbstractCounterfactualExplanation)\n\nComputes the distance from the counterfactual to generated conditional samples. The distance is computed as the mean distance from the counterfactual to the samples drawn from the posterior distribution of the model. By default, the cosine distance is used.\n\nArguments\n\nce::AbstractCounterfactualExplanation: The counterfactual explanation object.\nnsamples::Int=1000: The number of samples to draw.\nfrom_posterior::Bool=true: Whether to draw samples from the posterior distribution.\nagg: The aggregation function to use for computing the distance.\nchoose_lowest_energy::Bool=true: Whether to choose the samples with the lowest energy.\nchoose_random::Bool=false: Whether to choose random samples.\nnmin::Int=25: The minimum number of samples to choose.\np::Int=1: The norm to use for computing the distance.\ncosine::Bool=true: Whether to use the cosine distance.\nkwargs...: Additional keyword arguments to be passed on to the EnergySampler.\n\nReturns\n\nAbstractFloat: The distance from the counterfactual to the samples.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Evaluation.faithfulness-Tuple{CounterfactualExplanation, typeof(CounterfactualExplanations.Evaluation.distance_from_posterior)}","page":"🧐 Reference","title":"CounterfactualExplanations.Evaluation.faithfulness","text":"faithfulness(\n ce::CounterfactualExplanation,\n fun::typeof(distance_from_posterior);\n λ::AbstractFloat=1.0,\n kwrgs...,\n)\n\nComputes the faithfulness of a counterfactual explanation based on the cosine similarity between the counterfactual and samples drawn from the model posterior through SGLD (see distance_from_posterior).\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Evaluation.generate_posterior_samples","page":"🧐 Reference","title":"CounterfactualExplanations.Evaluation.generate_posterior_samples","text":"generate_posterior_samples(\n e::EnergySampler, n::Int=1000; niter::Int=1000, kwargs...\n)\n\nGenerates n samples from the posterior distribution of the model conditioned on the target value y. The samples are generated through (Persistent) Monte Carlo sampling using the EnergySampler object. If the replay buffer is not empty, the initial samples are drawn from the buffer. 
\n\nNote that the batch size of the sampler is set to round(Int, n / 100) by default for sampling. This is to ensure that the samples are drawn independently from the posterior distribution. It also helps to avoid vanishing gradients. \n\nThe chain is run persistently until n samples are generated. The number of transitions is set to ceil(Int, n / batch_size). Once the chain is run, the last n samples from the replay buffer are returned.\n\nArguments\n\ne::EnergySampler: The EnergySampler object to be used for sampling.\nn::Int=100: The number of samples to generate.\nbatch_size::Int=round(Int, n / 100): The batch size for sampling.\nniter::Int=1000: The number of iterations for generating samples from the posterior distribution.\nkwargs...: Additional keyword arguments to be passed on to the sampler.\n\nReturns\n\nAbstractArray: The generated samples.\n\n\n\n\n\n","category":"function"},{"location":"reference/#CounterfactualExplanations.Evaluation.get_lowest_energy_sample-Tuple{CounterfactualExplanations.Evaluation.EnergySampler}","page":"🧐 Reference","title":"CounterfactualExplanations.Evaluation.get_lowest_energy_sample","text":"get_lowest_energy_sample(sampler::EnergySampler; n::Int=5)\n\nChooses the samples with the lowest energy (i.e. highest probability) from EnergySampler.\n\nArguments\n\nsampler::EnergySampler: The EnergySampler object to be used for sampling.\nn::Int=5: The number of samples to choose.\n\nReturns\n\nAbstractArray: The samples with the lowest energy.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Evaluation.get_sampler!-Tuple{AbstractCounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.Evaluation.get_sampler!","text":"get_sampler!(ce::AbstractCounterfactualExplanation; kwargs...)\n\nGets the EnergySampler object from the counterfactual explanation. 
If the sampler is not found, it is constructed and stored in the counterfactual explanation object.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Evaluation.plausibility-Tuple{CounterfactualExplanation, typeof(CounterfactualExplanations.Objectives.distance_from_target)}","page":"🧐 Reference","title":"CounterfactualExplanations.Evaluation.plausibility","text":"plausibility(\n ce::CounterfactualExplanation,\n fun::typeof(Objectives.distance_from_target);\n K=nothing,\n kwrgs...,\n)\n\nComputes the plausibility of a counterfactual explanation based on the cosine similarity between the counterfactual and samples drawn from the target distribution.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Evaluation.to_dataframe-Tuple{Vector, Any, Bool, Bool, Bool, CounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.Evaluation.to_dataframe","text":"to_dataframe(\n ce::CounterfactualExplanation,\n measure::Vector{Function},\n agg::Function,\n report_each::Bool,\n pivot_longer::Bool,\n store_ce::Bool,\n)\n\nEvaluates a counterfactual explanation and returns a dataframe of evaluation measures.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Evaluation.validity_strict-Tuple{CounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.Evaluation.validity_strict","text":"validity_strict(ce::CounterfactualExplanation)\n\nChecks if the counterfactual search has been strictly valid in the sense that it has converged with respect to the pre-specified target probability γ.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Evaluation.warmup!-Tuple{CounterfactualExplanations.Evaluation.EnergySampler, Int64}","page":"🧐 Reference","title":"CounterfactualExplanations.Evaluation.warmup!","text":"warmup!(\n e::EnergySampler,\n y::Int;\n niter::Int=20,\n ntransitions::Int=100,\n kwargs...,\n)\n\nWarms up the EnergySampler to the underlying model for conditioning value y. Specifically, this entails running PMC for niter iterations and ntransitions transitions to build a buffer of samples. The buffer is used for posterior sampling.\n\nArguments\n\ne::EnergySampler: The EnergySampler object to be trained.\ny::Int: The conditioning value.\nopt::Union{Nothing,AbstractSamplingRule}: The sampling rule to be used. By default, ImproperSGLD is used with α = 2 * std(Uniform(𝒟x)) and γ = 0.005α.\nniter::Int=20: The number of iterations for training the sampler through PMC.\nntransitions::Int=100: The number of transitions for training the sampler. In each transition, the sampler is updated with a mini-batch of data. Data is either drawn from the replay buffer or reinitialized from the prior.\nkwargs...: Additional keyword arguments to be passed on to the sampler and PMC.\n\nReturns\n\nEnergySampler: The trained EnergySampler.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.DataPreprocessing.InputTransformer","page":"🧐 Reference","title":"CounterfactualExplanations.DataPreprocessing.InputTransformer","text":"InputTransformer\n\nAbstract type for data transformers. 
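The faithfulness and plausibility measures above are typically consumed through the evaluation API; a sketch, assuming evaluate accepts a measure keyword as shown elsewhere in these docs and that `ce` exists:

```julia
using CounterfactualExplanations.Evaluation: evaluate, faithfulness, plausibility

# Sketch (assumed call pattern): evaluate a counterfactual explanation
# against both energy-based measures at once.
evaluate(ce; measure=[faithfulness, plausibility])
```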
This can be any of the following:\n\nStatsBase.AbstractDataTransform: A data transformation object from the StatsBase package.\nMultivariateStats.AbstractDimensionalityReduction: A dimensionality reduction object from the MultivariateStats package.\nGenerativeModels.AbstractGenerativeModel: A generative model object from the GenerativeModels module.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.DataPreprocessing.TypedInputTransformer","page":"🧐 Reference","title":"CounterfactualExplanations.DataPreprocessing.TypedInputTransformer","text":"TypedInputTransformer\n\nAbstract type for data transformers.\n\n\n\n\n\n","category":"type"},{"location":"reference/#Base.Broadcast.broadcastable-Tuple{CounterfactualData}","page":"🧐 Reference","title":"Base.Broadcast.broadcastable","text":"Treat CounterfactualData as scalar when broadcasting.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.DataPreprocessing._subset-Tuple{CounterfactualData, Vector{Int64}}","page":"🧐 Reference","title":"CounterfactualExplanations.DataPreprocessing._subset","text":"_subset(data::CounterfactualData, idx::Vector{Int})\n\nCreates a subset of the data.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.DataPreprocessing.convert_to_1d-Tuple{Matrix, AbstractArray}","page":"🧐 Reference","title":"CounterfactualExplanations.DataPreprocessing.convert_to_1d","text":"convert_to_1d(y::Matrix, y_levels::AbstractArray)\n\nHelper function to convert a one-hot encoded matrix to a vector of labels. This is necessary because MLJ models require the labels to be represented as a vector, but the synthetic datasets in this package hold the labels in one-hot encoded form.\n\nArguments\n\ny::Matrix: The one-hot encoded matrix.\ny_levels::AbstractArray: The levels of the categorical variable.\n\nReturns\n\nlabels: A vector of labels.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.DataPreprocessing.input_dim-Tuple{CounterfactualData}","page":"🧐 Reference","title":"CounterfactualExplanations.DataPreprocessing.input_dim","text":"input_dim(counterfactual_data::CounterfactualData)\n\nHelper function that returns the input dimension (number of features) of the data. \n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.DataPreprocessing.mutability_constraints-Tuple{CounterfactualData}","page":"🧐 Reference","title":"CounterfactualExplanations.DataPreprocessing.mutability_constraints","text":"mutability_constraints(counterfactual_data::CounterfactualData)\n\nA convenience function that returns the mutability constraints. 
If none were specified, it is assumed that all features are mutable in :both directions.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.DataPreprocessing.outdim-Tuple{CounterfactualData}","page":"🧐 Reference","title":"CounterfactualExplanations.DataPreprocessing.outdim","text":"outdim(data::CounterfactualData)\n\nReturns the number of output classes.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.DataPreprocessing.preprocess_data_for_mlj-Tuple{CounterfactualData}","page":"🧐 Reference","title":"CounterfactualExplanations.DataPreprocessing.preprocess_data_for_mlj","text":"preprocess_data_for_mlj(data::CounterfactualData)\n\nHelper function to preprocess data::CounterfactualData for MLJ models.\n\nArguments\n\ndata::CounterfactualData: The data to be preprocessed.\n\nReturns\n\n(df_x, y): A tuple containing the preprocessed data, with df_x being a DataFrame object and y being a categorical vector.\n\nExample\n\nX, y = preprocess_data_for_mlj(data)\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.DataPreprocessing.reconstruct_cat_encoding-Tuple{CounterfactualData, AbstractArray}","page":"🧐 Reference","title":"CounterfactualExplanations.DataPreprocessing.reconstruct_cat_encoding","text":"reconstruct_cat_encoding(counterfactual_data::CounterfactualData, x::Vector)\n\nReconstruct the categorical encoding for a single instance.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.DataPreprocessing.subsample-Tuple{CounterfactualData, Int64}","page":"🧐 Reference","title":"CounterfactualExplanations.DataPreprocessing.subsample","text":"subsample(data::CounterfactualData, n::Int)\n\nHelper function to randomly subsample data::CounterfactualData.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.DataPreprocessing.train_test_split-Tuple{CounterfactualData}","page":"🧐 Reference","title":"CounterfactualExplanations.DataPreprocessing.train_test_split","text":"train_test_split(data::CounterfactualData; test_size=0.2, keep_class_ratio=false)\n\nSplits data into train and test splits.\n\nArguments\n\ndata::CounterfactualData: The data to be preprocessed.\ntest_size=0.2: Proportion of the data to be used for testing. 
\nkeep_class_ratio=false: Decides whether to sample equally from each class, or keep their relative size.\n\nReturns\n\n(train_data::CounterfactualData, test_data::CounterfactualData): A tuple containing the train and test splits.\n\nExample\n\ntrain, test = train_test_split(data; test_size=0.1, keep_class_ratio=true)\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.DataPreprocessing.unpack_data-Tuple{CounterfactualData}","page":"🧐 Reference","title":"CounterfactualExplanations.DataPreprocessing.unpack_data","text":"unpack_data(data::CounterfactualData)\n\nHelper function that unpacks data.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.AbstractCustomDifferentiableModel","page":"🧐 Reference","title":"CounterfactualExplanations.Models.AbstractCustomDifferentiableModel","text":"Base type for custom differentiable models.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.Models.AbstractDifferentiableModel","page":"🧐 Reference","title":"CounterfactualExplanations.Models.AbstractDifferentiableModel","text":"Base type for differentiable models.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.Models.AbstractDifferentiableModelType","page":"🧐 Reference","title":"CounterfactualExplanations.Models.AbstractDifferentiableModelType","text":"Abstract types for differentiable models.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.Models.AbstractFluxModel","page":"🧐 Reference","title":"CounterfactualExplanations.Models.AbstractFluxModel","text":"Base type for differentiable models written in Flux.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.Models.AbstractFluxNN","page":"🧐 Reference","title":"CounterfactualExplanations.Models.AbstractFluxNN","text":"Abstract type for Flux models.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.Models.AbstractMLJModel","page":"🧐 Reference","title":"CounterfactualExplanations.Models.AbstractMLJModel","text":"Base type for differentiable models from the MLJ library.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.Models.AbstractModelType-Tuple{Any}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.AbstractModelType","text":"(type::AbstractModelType)(model; likelihood::Symbol=:classification_binary)\n\nWrap model type around the pre-trained model model.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.AbstractModelType-Tuple{CounterfactualData}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.AbstractModelType","text":"(type::AbstractModelType)(data::CounterfactualData; kwargs...)\n\nWrap model type around the data in data. 
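A sketch of the model-wrapping convenience described above; fit_model and the :MLP symbol follow the usage shown in this package's tutorials, and `data` is assumed to be an existing CounterfactualData object:

```julia
using CounterfactualExplanations.Models

# Sketch: fit a default MLP to the data (`data::CounterfactualData` assumed).
M = Models.fit_model(data, :MLP)
```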
This is a convenience function to avoid having to construct a Model object.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.Differentiability","page":"🧐 Reference","title":"CounterfactualExplanations.Models.Differentiability","text":"A base type for model differentiability.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.Models.Differentiability-Tuple{CounterfactualExplanations.Models.Model}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.Differentiability","text":"Dispatches on the type of model for the differentiability trait.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.Fitresult","page":"🧐 Reference","title":"CounterfactualExplanations.Models.Fitresult","text":"Fitresult\n\nA struct to hold the results of fitting a model.\n\nFields\n\nfitresult: The result of fitting the model to the data. This object should be callable on new data.\nother::Dict: A dictionary to hold any other relevant information.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.Models.Fitresult-Tuple{AbstractArray}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.Fitresult","text":"(fitresult::Fitresult)(newdata::AbstractArray)\n\nWhen called on new data, the Fitresult object returns the result of calling the fitresult on new data.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.Fitresult-Tuple{}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.Fitresult","text":"(fitresult::Fitresult)()\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.FluxNN","page":"🧐 Reference","title":"CounterfactualExplanations.Models.FluxNN","text":"Concrete type for Flux models.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.Models.IsDifferentiable","page":"🧐 Reference","title":"CounterfactualExplanations.Models.IsDifferentiable","text":"Struct for models that are differentiable.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.Models.MLJModelType","page":"🧐 Reference","title":"CounterfactualExplanations.Models.MLJModelType","text":"Abstract type for MLJ models.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.Models.NonDifferentiable","page":"🧐 Reference","title":"CounterfactualExplanations.Models.NonDifferentiable","text":"By default, models are assumed not to be differentiable.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.Models.binary_to_onehot-Tuple{Any}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.binary_to_onehot","text":"binary_to_onehot(p)\n\nHelper function to turn dummy-encoded variable into onehot-encoded variable.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.build_ensemble-Tuple{Int64}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.build_ensemble","text":"build_ensemble(K::Int;kw=(input_dim=2,n_hidden=32,output_dim=1))\n\nHelper function that builds an ensemble of K models.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.build_mlp-Tuple{}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.build_mlp","text":"build_mlp()\n\nHelper function to build simple MLP.\n\nExamples\n\nnn = 
build_mlp()\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.data_loader-Tuple{CounterfactualData}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.data_loader","text":"data_loader(data::CounterfactualData)\n\nPrepares counterfactual data for training in Flux.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.forward!-Tuple{Flux.Chain, Any}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.forward!","text":"forward!(model::Flux.Chain, data; loss::Symbol, opt::Symbol, n_epochs::Int=10, model_name=\"MLP\")\n\nForward pass for training a Flux.Chain model.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.load_mnist_model-Tuple{CounterfactualExplanations.Models.AbstractModelType}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.load_mnist_model","text":"load_mnist_model(type::AbstractModelType)\n\nEmpty function to be overloaded for loading a pre-trained model for the AbstractModelType model type.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.load_mnist_model-Tuple{DeepEnsemble}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.load_mnist_model","text":"load_mnist_model(type::DeepEnsemble)\n\nLoad a pre-trained deep ensemble model for the MNIST dataset.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.load_mnist_model-Tuple{MLP}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.load_mnist_model","text":"load_mnist_model(type::MLP)\n\nLoad a pre-trained MLP model for the MNIST dataset.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.load_mnist_vae-Tuple{}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.load_mnist_vae","text":"load_mnist_vae(; strong=true)\n\nLoad a pre-trained VAE model for the MNIST dataset.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.train-Tuple{CounterfactualExplanations.Models.Model, CounterfactualData}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.train","text":"train(M::Model, data::CounterfactualData)\n\nTrains the model M on the data in data.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.train-Tuple{CounterfactualExplanations.Models.Model, CounterfactualExplanations.Models.AbstractFluxNN, CounterfactualData}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.train","text":"train(M::FluxModel, data::CounterfactualData; kwargs...)\n\nWrapper function to train Flux models.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.train-Tuple{CounterfactualExplanations.Models.Model, CounterfactualExplanations.Models.MLJModelType, CounterfactualData}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.train","text":"train(\n M::Model,\n type::MLJModelType,\n data::CounterfactualData,\n)\n\nOverloads the train function for MLJ models.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.train-Tuple{CounterfactualExplanations.Models.Model, DeepEnsemble, CounterfactualData}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.train","text":"train(M::Model, type::DeepEnsemble, data::CounterfactualData; kwargs...)\n\nOverloads the train function for deep 
ensembles.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.GenerativeModels.AbstractGMParams","page":"🧐 Reference","title":"CounterfactualExplanations.GenerativeModels.AbstractGMParams","text":"Base type of generative model hyperparameter container.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.GenerativeModels.AbstractGenerativeModel","page":"🧐 Reference","title":"CounterfactualExplanations.GenerativeModels.AbstractGenerativeModel","text":"Base type for generative model.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.GenerativeModels.Encoder","page":"🧐 Reference","title":"CounterfactualExplanations.GenerativeModels.Encoder","text":"Encoder\n\nConstructs encoder part of VAE: a simple Flux neural network with one hidden layer and two linear output layers for the first two moments of the latent distribution.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.GenerativeModels.VAE","page":"🧐 Reference","title":"CounterfactualExplanations.GenerativeModels.VAE","text":"VAE <: AbstractGenerativeModel\n\nConstructs the Variational Autoencoder. The VAE is a subtype of AbstractGenerativeModel. Any (sub-)type of AbstractGenerativeModel is accepted by latent space generators. \n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.GenerativeModels.VAE-Tuple{Any}","page":"🧐 Reference","title":"CounterfactualExplanations.GenerativeModels.VAE","text":"VAE(input_dim;kws...)\n\nOuter method for instantiating a VAE.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.GenerativeModels.VAEParams","page":"🧐 Reference","title":"CounterfactualExplanations.GenerativeModels.VAEParams","text":"VAEParams <: AbstractGMParams\n\nThe default VAE parameters describing both the encoder/decoder architecture and the training process.\n\n\n\n\n\n","category":"type"},{"location":"reference/#Base.rand-2","page":"🧐 Reference","title":"Base.rand","text":"Random.rand(encoder::Encoder, x, device=cpu)\n\nDraws random samples from the latent distribution.\n\n\n\n\n\n","category":"function"},{"location":"reference/#CounterfactualExplanations.GenerativeModels.Decoder-Tuple{Int64, Int64, Int64}","page":"🧐 Reference","title":"CounterfactualExplanations.GenerativeModels.Decoder","text":"Decoder(input_dim::Int, latent_dim::Int, hidden_dim::Int; activation=relu)\n\nThe default decoder architecture is just a Flux Chain with one hidden layer and a linear output layer. \n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.GenerativeModels.decode-Tuple{CounterfactualExplanations.GenerativeModels.VAE, AbstractArray}","page":"🧐 Reference","title":"CounterfactualExplanations.GenerativeModels.decode","text":"decode(generative_model::VAE, x::AbstractArray)\n\nDecodes an array x using the VAE decoder.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.GenerativeModels.encode-Tuple{CounterfactualExplanations.GenerativeModels.VAE, AbstractArray}","page":"🧐 Reference","title":"CounterfactualExplanations.GenerativeModels.encode","text":"encode(generative_model::VAE, x::AbstractArray)\n\nEncodes an array x using the VAE encoder. Specifically, it samples from the latent distribution. It does so by first passing x through the encoder to obtain the mean and log-variance of the latent distribution. Then, it samples from the latent distribution using the reparameterization trick. 
See Random.rand(encoder::Encoder, x, device=cpu) for more details.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.GenerativeModels.get_data-Tuple{AbstractArray, AbstractArray, Any}","page":"🧐 Reference","title":"CounterfactualExplanations.GenerativeModels.get_data","text":"get_data(X::AbstractArray, y::AbstractArray, batch_size)\n\nPrepares data for mini-batch training.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.GenerativeModels.get_data-Tuple{AbstractArray, Any}","page":"🧐 Reference","title":"CounterfactualExplanations.GenerativeModels.get_data","text":"get_data(X::AbstractArray, batch_size)\n\nPrepares data for mini-batch training.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.GenerativeModels.reconstruct","page":"🧐 Reference","title":"CounterfactualExplanations.GenerativeModels.reconstruct","text":"reconstruct(generative_model::VAE, x, device=cpu)\n\nImplements a full pass of some input x through the VAE: x ↦ x̂.\n\n\n\n\n\n","category":"function"},{"location":"reference/#CounterfactualExplanations.GenerativeModels.reparameterization_trick","page":"🧐 Reference","title":"CounterfactualExplanations.GenerativeModels.reparameterization_trick","text":"reparameterization_trick(μ,logσ,device=cpu)\n\nHelper function that implements the reparameterization trick: z ∼ 𝒩(μ,σ²) ⇔ z=μ + σ ⊙ ε, ε ∼ 𝒩(0,I).\n\n\n\n\n\n","category":"function"},{"location":"reference/#CounterfactualExplanations.Generators.Penalty","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.Penalty","text":"Type union for acceptable argument types for the penalty field of GradientBasedGenerator.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.Generators.TCRExGenerator","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.TCRExGenerator","text":"T-CREx counterfactual generator class.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.Generators._replace_nans","page":"🧐 Reference","title":"CounterfactualExplanations.Generators._replace_nans","text":"_replace_nans(Δs′::AbstractArray, old_new::Pair=(NaN => 0))\n\nHelper function to deal with exploding gradients. 
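For instance, with the default pair NaN => 0, a gradient array [0.5, NaN, 1.0] would become [0.5, 0.0, 1.0] (an illustration of the intended behaviour, assuming element-wise replacement). 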
This is only a temporary fix and will be improved.\n\n\n\n\n\n","category":"function"},{"location":"reference/#CounterfactualExplanations.Generators.feature_selection!-Tuple{AbstractCounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.feature_selection!","text":"feature_selection!(ce::AbstractCounterfactualExplanation)\n\nPerform feature selection to find the dimension with the closest (but not equal) values between the ce.x (factual) and ce.s′ (counterfactual) arrays.\n\nArguments\n\nce::AbstractCounterfactualExplanation: An instance of the AbstractCounterfactualExplanation type representing the counterfactual explanation.\n\nReturns\n\nnothing\n\nThe function iteratively modifies the ce.s′ counterfactual array by updating its elements to match the corresponding elements in the ce.x factual array, one dimension at a time, until the predicted label of the modified ce.s′ matches the predicted label of the ce.x array.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Generators.find_closest_dimension-Tuple{Any, Any}","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.find_closest_dimension","text":"find_closest_dimension(factual, counterfactual)\n\nFind the dimension with the closest (but not equal) values between the factual and counterfactual arrays.\n\nArguments\n\nfactual: The factual array.\ncounterfactual: The counterfactual array.\n\nReturns\n\nclosest_dimension: The index of the dimension with the closest values.\n\nThe function iterates over the indices of the factual array and calculates the absolute difference between the corresponding elements in the factual and counterfactual arrays. It returns the index of the dimension with the smallest difference, excluding dimensions where the values in factual and counterfactual are equal.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Generators.find_counterfactual-NTuple{4, Any}","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.find_counterfactual","text":"find_counterfactual(model, target_class, counterfactual_data, counterfactual_candidates)\n\nFind the first counterfactual index by predicting labels.\n\nArguments\n\nmodel: The fitted model used for prediction.\ntarget_class: Expected target class.\ncounterfactual_data: Data required for counterfactual generation.\ncounterfactual_candidates: The array of counterfactual candidates.\n\nReturns\n\ncounterfactual: The index of the first counterfactual found.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Generators.growing_spheres_generation!-Tuple{AbstractCounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.growing_spheres_generation!","text":"growing_spheres_generation!(ce::AbstractCounterfactualExplanation)\n\nGenerate counterfactual candidates using the growing spheres generation algorithm.\n\nArguments\n\nce::AbstractCounterfactualExplanation: An instance of the AbstractCounterfactualExplanation type representing the counterfactual explanation.\n\nReturns\n\nnothing\n\nThis function applies the growing spheres generation algorithm to generate counterfactual candidates. It starts by generating random points uniformly on a sphere, gradually reducing the search space until no counterfactuals are found. Then it expands the search space until at least one counterfactual is found or the maximum number of iterations is reached.\n\nThe algorithm iteratively generates counterfactual candidates and predicts their labels using the model stored in ce.M. It checks if any of the predicted labels are different from the factual class. The process of reducing the search space involves halving the search radius, while the process of expanding the search space involves increasing the search radius.
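\n\nA rough sketch of that radius schedule (hypothetical variable names, not the package implementation):\n\n# shrink: halve the radius while the current sphere still contains counterfactuals\nhigh = high / 2\n# expand: move on to the next annulus once no counterfactuals are found\nlow = high\nhigh = 2 * high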
\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Generators.h-Tuple{AbstractGenerator, AbstractCounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.h","text":"h(generator::AbstractGenerator, ce::AbstractCounterfactualExplanation)\n\nDispatches to the appropriate complexity function for any generator.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Generators.h-Tuple{AbstractGenerator, Function, AbstractCounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.h","text":"h(generator::AbstractGenerator, penalty::Function, ce::AbstractCounterfactualExplanation)\n\nOverloads the h function for the case where a single penalty function is provided.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Generators.h-Tuple{AbstractGenerator, Nothing, AbstractCounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.h","text":"h(generator::AbstractGenerator, penalty::Nothing, ce::AbstractCounterfactualExplanation)\n\nOverloads the h function for the case where no penalty is provided.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Generators.h-Tuple{AbstractGenerator, Vector{<:Tuple}, AbstractCounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.h","text":"h(generator::AbstractGenerator, penalty::Vector{<:Tuple}, ce::AbstractCounterfactualExplanation)\n\nOverloads the h function for the case where penalty functions are provided with additional keyword arguments.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Generators.h-Tuple{AbstractGenerator, Vector{Function}, AbstractCounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.h","text":"h(generator::AbstractGenerator, penalty::Vector{Function}, ce::AbstractCounterfactualExplanation)\n\nOverloads the h function for the case where multiple penalty functions are provided.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Generators.hyper_sphere_coordinates-Tuple{Integer, AbstractArray, AbstractFloat, AbstractFloat}","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.hyper_sphere_coordinates","text":"hyper_sphere_coordinates(n_search_samples::Integer, instance::AbstractArray, low::AbstractFloat, high::AbstractFloat; p_norm::Integer=2)\n\nGenerates candidate counterfactuals using the growing spheres method based on hyper-sphere coordinates.\n\nThe implementation follows the Random Point Picking over a sphere algorithm described in the paper: \"Learning Model-Agnostic Counterfactual Explanations for Tabular Data\" by Pawelczyk, Broelemann & Kasneci (2020), presented at The Web Conference 2020 (WWW). It ensures that points are sampled uniformly at random using insights from: http://mathworld.wolfram.com/HyperspherePointPicking.html
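\n\nA rough sketch of this sampling step for p_norm=2 (hypothetical code, not the package implementation):\n\nusing LinearAlgebra\nd = length(instance)\nz = randn(d, n_search_samples) # Gaussian directions\nz = z ./ mapslices(norm, z; dims=1) # project onto the unit sphere\nr = (low^d .+ (high^d - low^d) .* rand(1, n_search_samples)) .^ (1 / d) # radii uniform over the annulus volume\ncandidates = instance .+ z .* r # shift the sphere to the factual instance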
\n\nThe growing spheres method is originally proposed in the paper: \"Comparison-based Inverse Classification for Interpretability in Machine Learning\" by Thibaut Laugel et al. (2018), presented at the International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems (2018).\n\nArguments\n\nn_search_samples::Int: The number of search samples (int > 0).\ninstance::AbstractArray: The input point array.\nlow::AbstractFloat: The lower bound (float >= 0, l < h).\nhigh::AbstractFloat: The upper bound (float >= 0, h > l).\np_norm::Integer: The norm parameter (int >= 1).\n\nReturns\n\ncandidate_counterfactuals::Array: An array of candidate counterfactuals.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Generators.incompatible-Tuple{Any, Any}","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.incompatible","text":"incompatible(AbstractGenerator, AbstractCounterfactualExplanation)\n\nChecks if the generator is incompatible with any of the additional specifications for the counterfactual explanations. By default, generators are assumed to be compatible.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Generators.propose_state-Tuple{CounterfactualExplanations.Models.IsDifferentiable, AbstractGradientBasedGenerator, AbstractCounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.propose_state","text":"propose_state(\n    ::Models.IsDifferentiable,\n    generator::AbstractGradientBasedGenerator,\n    ce::AbstractCounterfactualExplanation,\n)\n\nProposes new state based on backpropagation for gradient-based generators and differentiable models.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Generators.total_loss-Tuple{AbstractCounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.total_loss","text":"total_loss(ce::AbstractCounterfactualExplanation)\n\nComputes the total loss of a counterfactual explanation with respect to the search objective.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Generators.ℓ-Tuple{AbstractGenerator, AbstractCounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.ℓ","text":"ℓ(generator::AbstractGenerator, ce::AbstractCounterfactualExplanation)\n\nDispatches to the appropriate loss function for any generator.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Generators.ℓ-Tuple{AbstractGenerator, Function, AbstractCounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.ℓ","text":"ℓ(generator::AbstractGenerator, loss::Function, ce::AbstractCounterfactualExplanation)\n\nOverloads the ℓ function for the case where a single loss function is provided.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Generators.ℓ-Tuple{AbstractGenerator, Nothing, AbstractCounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.ℓ","text":"ℓ(generator::AbstractGenerator, loss::Nothing, ce::AbstractCounterfactualExplanation)\n\nOverloads the ℓ function for the case where no loss function is 
provided.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Generators.∂h-Tuple{AbstractGradientBasedGenerator, AbstractCounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.∂h","text":"∂h(generator::AbstractGradientBasedGenerator, ce::AbstractCounterfactualExplanation)\n\nThe default method to compute the gradient of the complexity penalty at the current counterfactual state for gradient-based generators. It assumes that Zygote.jl has gradient access. \n\nIf the penalty is not provided, it returns 0.0. By default, Zygote never works out the gradient for constants and instead returns 'nothing', so we need to add a manual step to override this behaviour. See here: https://discourse.julialang.org/t/zygote-gradient/26715.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Generators.∂ℓ-Tuple{AbstractGradientBasedGenerator, AbstractCounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.∂ℓ","text":"∂ℓ(\n    generator::AbstractGradientBasedGenerator,\n    ce::AbstractCounterfactualExplanation,\n)\n\nThe default method to compute the gradient of the loss function at the current counterfactual state for gradient-based generators. It assumes that Zygote.jl has gradient access.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Generators.∇-Tuple{AbstractGradientBasedGenerator, AbstractCounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.∇","text":"∇(\n    generator::AbstractGradientBasedGenerator,\n    ce::AbstractCounterfactualExplanation,\n)\n\nThe default method to compute the gradient of the counterfactual search objective for gradient-based generators. It simply computes the weighted sum over partial derivatives. It assumes that Zygote.jl has gradient access. 
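Conceptually, the result is the sum of the loss and penalty gradients (a sketch, not the exact implementation):\n\n∇(generator, ce) ≈ ∂ℓ(generator, ce) .+ ∂h(generator, ce)\n\n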
If the counterfactual is being generated using Probe, the hinge loss is added to the gradient.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Objectives.NeedsNeighbours","page":"🧐 Reference","title":"CounterfactualExplanations.Objectives.NeedsNeighbours","text":"Penalties that need access to neighbors in the target class.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.Objectives.NoPenaltyRequirements","page":"🧐 Reference","title":"CounterfactualExplanations.Objectives.NoPenaltyRequirements","text":"By default, penalties have no extra requirements.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.Objectives.PenaltyRequirements","page":"🧐 Reference","title":"CounterfactualExplanations.Objectives.PenaltyRequirements","text":"A base type for a style of process.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.Objectives.PenaltyRequirements-Tuple{Type{<:typeof(CounterfactualExplanations.Objectives.distance_from_target)}}","page":"🧐 Reference","title":"CounterfactualExplanations.Objectives.PenaltyRequirements","text":"The distance_from_target method needs neighbors in the target class.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Objectives.cos_dist-Tuple{Any, Any}","page":"🧐 Reference","title":"CounterfactualExplanations.Objectives.cos_dist","text":"cos_dist(x,y)\n\nComputes the cosine distance between two vectors.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Objectives.distance_from_target-Tuple{AbstractCounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.Objectives.distance_from_target","text":"distance_from_target(\n    ce::AbstractCounterfactualExplanation;\n    K::Int=50\n)\n\nComputes the distance of the counterfactual from samples in the target manifold. If choose_randomly is true, the function will randomly sample K neighbours from the target manifold. Otherwise, it will compute the pairwise distances and select the K closest neighbours.\n\nArguments\n\nce::AbstractCounterfactualExplanation: The counterfactual explanation.\nK::Int=50: The number of neighbours to sample.\nchoose_randomly::Bool=true: Whether to sample neighbours randomly.\nkwargs...: Additional keyword arguments for the distance function.\n\nReturns\n\nΔ::AbstractFloat: The distance from the counterfactual to the target manifold.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Objectives.energy-Tuple{AbstractModel, AbstractArray, Int64}","page":"🧐 Reference","title":"CounterfactualExplanations.Objectives.energy","text":"energy(M::AbstractModel, x::AbstractArray, t::Int)\n\nComputes the energy of the model at a given state as in Altmeyer et al. (2024): https://scholar.google.com/scholar?cluster=3697701546144846732&hl=en&as_sdt=0,5.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Objectives.energy_constraint-Tuple{AbstractCounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.Objectives.energy_constraint","text":"energy_constraint(\n    ce::AbstractCounterfactualExplanation;\n    agg=mean,\n    reg_strength::AbstractFloat=0.0,\n    decay::AbstractFloat=0.9,\n    kwargs...,\n)\n\nComputes the energy constraint for the counterfactual explanation as in Altmeyer et al. (2024): https://scholar.google.com/scholar?cluster=3697701546144846732&hl=en&as_sdt=0,5. The energy constraint is a regularization term that penalizes the energy of the counterfactuals. The energy is computed as the negative logit of the target class.
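 In symbols, for target class t and counterfactual x′, E(x′) = -f_t(x′), where f_t denotes the model logit for class t (a restatement of the definition above).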
\n\nArguments\n\nce::AbstractCounterfactualExplanation: The counterfactual explanation.\nagg::Function=mean: The aggregation function (only applicable in case num_counterfactuals > 1). Default is mean.\nreg_strength::AbstractFloat=0.0: The regularization strength.\ndecay::AbstractFloat=0.9: The decay rate for the polynomial decay function (defaults to 0.9). Parameter a is set to 1.0 / ce.generator.opt.eta, such that the initial step size is equal to 1.0, not accounting for b. Parameter b is set to round(Int, max_steps / 20), where max_steps is the maximum number of iterations.\nkwargs...: Additional keyword arguments.\n\nReturns\n\nℒ::AbstractFloat: The energy constraint.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Objectives.model_loss_penalty-Tuple{AbstractCounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.Objectives.model_loss_penalty","text":"function model_loss_penalty(\n    ce::AbstractCounterfactualExplanation;\n    agg=mean\n)\n\nAdditional penalty for ClaPROARGenerator.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Objectives.needs_neighbours-Tuple{AbstractCounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.Objectives.needs_neighbours","text":"needs_neighbours(ce::AbstractCounterfactualExplanation)\n\nCheck if a counterfactual explanation needs access to neighbors in the target class.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Objectives.needs_neighbours-Tuple{AbstractGenerator}","page":"🧐 Reference","title":"CounterfactualExplanations.Objectives.needs_neighbours","text":"needs_neighbours(gen::AbstractGenerator)\n\nCheck if a generator needs access to neighbors in the target class.\n\n\n\n\n\n","category":"method"},{"location":"reference/#Extensions","page":"🧐 Reference","title":"Extensions","text":"","category":"section"},{"location":"reference/","page":"🧐 Reference","title":"🧐 Reference","text":"Modules = [\n    Base.get_extension(CounterfactualExplanations, :DecisionTreeExt),\n    Base.get_extension(CounterfactualExplanations, :JEMExt),\n    Base.get_extension(CounterfactualExplanations, :LaplaceReduxExt),\n    Base.get_extension(CounterfactualExplanations, :NeuroTreeExt),\n]","category":"page"},{"location":"reference/#DecisionTreeExt.AtomicDecisionTree","page":"🧐 Reference","title":"DecisionTreeExt.AtomicDecisionTree","text":"Type union for DecisionTree decision tree classifiers and regressors.\n\n\n\n\n\n","category":"type"},{"location":"reference/#DecisionTreeExt.AtomicRandomForest","page":"🧐 Reference","title":"DecisionTreeExt.AtomicRandomForest","text":"Type union for DecisionTree random forest classifiers and regressors.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.DecisionTreeModel-Tuple{Union{MLJDecisionTreeInterface.DecisionTreeClassifier, MLJDecisionTreeInterface.DecisionTreeRegressor}}","page":"🧐 Reference","title":"CounterfactualExplanations.DecisionTreeModel","text":"CounterfactualExplanations.DecisionTreeModel(\n    model::AtomicDecisionTree; likelihood::Symbol=:classification_binary\n)\n\nOuter constructor for decision trees.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Generators.TCRExGenerator-Tuple{Union{Int64, AbstractFloat, String, Symbol}, 
CounterfactualData, AbstractModel}","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.TCRExGenerator","text":"(generator::Generators.TCRExGenerator)(\n target::RawTargetType,\n data::DataPreprocessing.CounterfactualData,\n M::Models.AbstractModel\n)\n\nApplies the Generators.TCRExGenerator to a given target and data using the M model. For details see Bewley et al. (2024) [arXiv, PMLR].\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.Model-Tuple{CounterfactualData, CounterfactualExplanations.DecisionTreeModel}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.Model","text":"(M::Models.Model)(\n data::CounterfactualData,\n type::CounterfactualExplanations.DecisionTreeModel;\n kwargs...,\n)\n\nConstructs a decision tree for the given data. This method is used internally when a decision-tree model is constructed to be trained from scratch (i.e. no pre-trained model is supplied by the user).\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.Model-Tuple{CounterfactualData, CounterfactualExplanations.RandomForestModel}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.Model","text":"(M::Models.Model)(\n data::CounterfactualData, type::CounterfactualExplanations.RandomForestModel; kwargs...\n)\n\nConstructs a random forest for the given data.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.RandomForestModel-Tuple{Union{MLJDecisionTreeInterface.RandomForestClassifier, MLJDecisionTreeInterface.RandomForestRegressor}}","page":"🧐 Reference","title":"CounterfactualExplanations.RandomForestModel","text":"CounterfactualExplanations.RandomForestModel(\n model::AtomicRandomForest; likelihood::Symbol=:classification_binary\n)\n\nOuter constructor for random forests.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Generators.incompatible-Tuple{FeatureTweakGenerator, CounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.incompatible","text":"Generators.incompatible(gen::FeatureTweakGenerator, ce::CounterfactualExplanation)\n\nOverloads the incompatible function for the FeatureTweakGenerator.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Generators.propose_state-Tuple{FeatureTweakGenerator, AbstractCounterfactualExplanation}","page":"🧐 Reference","title":"CounterfactualExplanations.Generators.propose_state","text":"Generators.propose_state(\n generator::Generators.FeatureTweakGenerator, ce::AbstractCounterfactualExplanation\n)\n\nOverloads the Generators.propose_state method for the FeatureTweakGenerator.\n\n\n\n\n\n","category":"method"},{"location":"reference/#DecisionTreeExt.calculate_delta-Tuple{AbstractCounterfactualExplanation}","page":"🧐 Reference","title":"DecisionTreeExt.calculate_delta","text":"calculate_delta(ce::AbstractCounterfactualExplanation, penalty::Vector{Function})\n\nCalculates the penalty for the proposed feature tweak.\n\nArguments\n\nce::AbstractCounterfactualExplanation: The counterfactual explanation object.\n\nReturns\n\ndelta::Float64: The calculated penalty for the proposed feature tweak.\n\n\n\n\n\n","category":"method"},{"location":"reference/#DecisionTreeExt.classify_prototypes-Tuple{Any, Any, Any}","page":"🧐 Reference","title":"DecisionTreeExt.classify_prototypes","text":"classify_prototypes(prototypes, rule_assignments, bounds)\n\nBuilds the second tree model using the given prototypes as 
inputs and their corresponding rule_assignments as labels. Split thresholds are restricted to the bounds, which can be computed using partition_bounds(rules). For details see Bewley et al. (2024) [arXiv, PMLR].\n\n\n\n\n\n","category":"method"},{"location":"reference/#DecisionTreeExt.cre-Tuple{Any, Any, Any}","page":"🧐 Reference","title":"DecisionTreeExt.cre","text":"cre(rules, x, X)\n\nComputes the counterfactual rule explanations (CRE) for a given point x and a set of rules, where the rules correspond to the set of maximal-valid rules for some given target. For details see Bewley et al. (2024) [arXiv, PMLR].\n\n\n\n\n\n","category":"method"},{"location":"reference/#DecisionTreeExt.esatisfactory_instance-Tuple{FeatureTweakGenerator, AbstractArray, AbstractArray}","page":"🧐 Reference","title":"DecisionTreeExt.esatisfactory_instance","text":"esatisfactory_instance(generator::FeatureTweakGenerator, x::AbstractArray, paths::Dict{String, Dict{String, Any}})\n\nReturns an epsilon-satisfactory counterfactual for x based on the paths provided.\n\nArguments\n\ngenerator::FeatureTweakGenerator: The feature tweak generator.\nx::AbstractArray: The factual instance.\npaths::Dict{String, Dict{String, Any}}: A list of paths to the leaves of the tree to be used for tweaking the feature.\n\nReturns\n\nesatisfactory::AbstractArray: The epsilon-satisfactory instance.\n\nExample\n\nesatisfactory = esatisfactory_instance(generator, x, paths) # returns an epsilon-satisfactory counterfactual for x based on the paths provided\n\n\n\n\n\n","category":"method"},{"location":"reference/#DecisionTreeExt.extract_leaf_rules-Tuple{Root}","page":"🧐 Reference","title":"DecisionTreeExt.extract_leaf_rules","text":"extract_leaf_rules(root::DT.Root)\n\nExtracts leaf decision rules (i.e. hyperrectangles) from a decision tree (root). For a decision tree with L leaves this results in L hyperrectangles. The rules are returned as a vector of tuples containing 2-element tuples, where each 2-element tuple stores the lower and upper bound imposed by the given rule for a given feature. For details see Bewley et al. (2024) [arXiv, PMLR].\n\n\n\n\n\n","category":"method"},{"location":"reference/#DecisionTreeExt.extract_leaf_rules-Tuple{Union{Leaf, Node}, AbstractArray, AbstractArray}","page":"🧐 Reference","title":"DecisionTreeExt.extract_leaf_rules","text":"extract_leaf_rules(node::Union{DT.Leaf,DT.Node}, conditions::AbstractArray, decisions::AbstractArray)\n\nSee extract_leaf_rules(root::DT.Root) for details.\n\n\n\n\n\n","category":"method"},{"location":"reference/#DecisionTreeExt.extract_rules-Tuple{Root}","page":"🧐 Reference","title":"DecisionTreeExt.extract_rules","text":"extract_rules(root::DT.Root)\n\nExtracts decision rules (i.e. hyperrectangles) from a decision tree (root). For a decision tree with L leaves this results in 2L-1 hyperrectangles. The rules are returned as a vector of vectors of 2-element tuples, where each tuple stores the lower and upper bound imposed by the given rule for a given feature. For details see Bewley et al. 
(2024) [arXiv, PMLR].\n\n\n\n\n\n","category":"method"},{"location":"reference/#DecisionTreeExt.extract_rules-Tuple{Union{Leaf, Node}, AbstractArray}","page":"🧐 Reference","title":"DecisionTreeExt.extract_rules","text":"extract_rules(node::DT.Node, conditions::AbstractArray)\n\nSee extract_rules(root::DT.Root).\n\n\n\n\n\n","category":"method"},{"location":"reference/#DecisionTreeExt.get_individual_classifiers-Tuple{CounterfactualExplanations.Models.Model}","page":"🧐 Reference","title":"DecisionTreeExt.get_individual_classifiers","text":"get_individual_classifiers(M::Model)\n\nReturns the individual classifiers in the forest. If the input is a decision tree, the method returns the decision tree itself inside an array.\n\nArguments\n\nM::Model: The model selected by the user.\n\nReturns\n\nclassifiers::AbstractArray: An array of individual classifiers in the forest.\n\n\n\n\n\n","category":"method"},{"location":"reference/#DecisionTreeExt.grow_surrogate-Tuple{CounterfactualExplanations.Generators.TCRExGenerator, AbstractArray, AbstractArray}","page":"🧐 Reference","title":"DecisionTreeExt.grow_surrogate","text":"grow_surrogate(\n    generator::Generators.TCRExGenerator, X::AbstractArray, ŷ::AbstractArray\n)\n\nGrows the tree-based surrogate model for the Generators.TCRExGenerator. For details see Bewley et al. (2024) [arXiv, PMLR].\n\n\n\n\n\n","category":"method"},{"location":"reference/#DecisionTreeExt.grow_surrogate-Tuple{CounterfactualExplanations.Generators.TCRExGenerator, CounterfactualData, AbstractModel}","page":"🧐 Reference","title":"DecisionTreeExt.grow_surrogate","text":"grow_surrogate(\n    generator::Generators.TCRExGenerator, data::CounterfactualData, M::AbstractModel\n)\n\nOverloads the grow_surrogate function to accept a CounterfactualData and an AbstractModel to grow a surrogate model. See grow_surrogate(generator::Generators.TCRExGenerator, X::AbstractArray, ŷ::AbstractArray).\n\n\n\n\n\n","category":"method"},{"location":"reference/#DecisionTreeExt.induced_grid-Tuple{Any}","page":"🧐 Reference","title":"DecisionTreeExt.induced_grid","text":"induced_grid(rules)\n\nComputes the induced grid of the given rules. For details see Bewley et al. (2024) [arXiv, PMLR].\n\n\n\n\n\n","category":"method"},{"location":"reference/#DecisionTreeExt.issubrule-Tuple{Any, Any}","page":"🧐 Reference","title":"DecisionTreeExt.issubrule","text":"issubrule(rule, otherrule)\n\nChecks if the rule hyperrectangle is a subset of the otherrule hyperrectangle. For details see Bewley et al. (2024) [arXiv, PMLR].\n\n\n\n\n\n","category":"method"},{"location":"reference/#DecisionTreeExt.max_valid-NTuple{5, Any}","page":"🧐 Reference","title":"DecisionTreeExt.max_valid","text":"max_valid(rules, X, fx, target, τ)\n\nReturns the maximal-valid rules for a given target and accuracy threshold τ. For details see Bewley et al. (2024) [arXiv, PMLR].\n\n\n\n\n\n","category":"method"},{"location":"reference/#DecisionTreeExt.partition_bounds-Tuple{Any, Int64}","page":"🧐 Reference","title":"DecisionTreeExt.partition_bounds","text":"partition_bounds(rules, dim::Int)\n\nComputes the set of (unique) bounds for each rule in rules along the dim-th dimension. For details see Bewley et al. (2024) [arXiv, PMLR].\n\n\n\n\n\n","category":"method"},{"location":"reference/#DecisionTreeExt.partition_bounds-Tuple{Any}","page":"🧐 Reference","title":"DecisionTreeExt.partition_bounds","text":"partition_bounds(rules)\n\nComputes the set of (unique) bounds for each rule in rules across all dimensions. For details see Bewley et al. (2024) [arXiv, PMLR].
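 For instance, for two rules with feature-1 intervals (0.0, 0.5) and (0.5, 1.0), partition_bounds(rules, 1) would collect the unique bounds 0.0, 0.5 and 1.0 along dimension 1 (an illustration of the stated behaviour; the treatment of infinite bounds is an implementation detail).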
\n\n\n\n\n\n","category":"method"},{"location":"reference/#DecisionTreeExt.prototype-Tuple{Any, Any}","page":"🧐 Reference","title":"DecisionTreeExt.prototype","text":"prototype(rule, X; pick_arbitrary::Bool=true)\n\nPicks an arbitrary point x^C in X (i.e. prototype) from the subset of X that is contained by rule R_i. If pick_arbitrary is set to false, the prototype is instead computed as the average across all samples. For details see Bewley et al. (2024) [arXiv, PMLR].\n\n\n\n\n\n","category":"method"},{"location":"reference/#DecisionTreeExt.rule_accuracy-NTuple{4, Any}","page":"🧐 Reference","title":"DecisionTreeExt.rule_accuracy","text":"rule_accuracy(rule, X, fx, target)\n\nComputes the accuracy of the rule on the data X for predicted outputs fx and the target. Accuracy is defined as the fraction of points contained by the rule, for which predicted values match the target. For details see Bewley et al. (2024) [arXiv, PMLR].\n\n\n\n\n\n","category":"method"},{"location":"reference/#DecisionTreeExt.rule_changes-Tuple{Any, Any}","page":"🧐 Reference","title":"DecisionTreeExt.rule_changes","text":"rule_changes(rule, x)\n\nComputes the number of feature changes necessary for x to be contained by rule R_i. For details see Bewley et al. (2024) [arXiv, PMLR].\n\n\n\n\n\n","category":"method"},{"location":"reference/#DecisionTreeExt.rule_contains-Tuple{Any, Any}","page":"🧐 Reference","title":"DecisionTreeExt.rule_contains","text":"rule_contains(rule, X)\n\nReturns the subset of X that is contained by rule R_i. For details see Bewley et al. (2024) [arXiv, PMLR].\n\n\n\n\n\n","category":"method"},{"location":"reference/#DecisionTreeExt.rule_cost-Tuple{Any, Any, Any}","page":"🧐 Reference","title":"DecisionTreeExt.rule_cost","text":"rule_cost(rule, x, X)\n\nComputes the cost for x to be contained by rule R_i, where cost is defined as rule_changes(rule, x) - rule_feasibility(rule, X). For details see Bewley et al. (2024) [arXiv, PMLR]. \n\n\n\n\n\n","category":"method"},{"location":"reference/#DecisionTreeExt.rule_feasibility-Tuple{Any, Any}","page":"🧐 Reference","title":"DecisionTreeExt.rule_feasibility","text":"rule_feasibility(rule, X)\n\nComputes the feasibility of a rule R_i for a given dataset. Feasibility is defined as the fraction of the data points that satisfy the rule. For details see Bewley et al. (2024) [arXiv, PMLR].
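 For example, if 40 out of 100 samples in X fall inside the hyperrectangle R_i, the feasibility of R_i is 0.4; combined with rule_changes above, a rule requiring 2 feature changes would then have rule_cost 2 - 0.4 = 1.6 (an illustration of the stated definitions).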
\n\n\n\n\n\n","category":"method"},{"location":"reference/#DecisionTreeExt.search_path","page":"🧐 Reference","title":"DecisionTreeExt.search_path","text":"search_path(tree::Union{DT.Leaf, DT.Node}, target::RawTargetType, path::AbstractArray)\n\nReturn a path index list with the inequality symbols, thresholds and feature indices.\n\nArguments\n\ntree::Union{DT.Leaf, DT.Node}: The root node of a decision tree.\ntarget::RawTargetType: The target class.\npath::AbstractArray: A list containing the paths found thus far.\n\nReturns\n\npaths::AbstractArray: A list of paths to the leaves of the tree to be used for tweaking the feature.\n\nExample\n\npaths = search_path(tree, target) # returns a list of paths to the leaves of the tree to be used for tweaking the feature\n\n\n\n\n\n","category":"function"},{"location":"reference/#DecisionTreeExt.wrap_decision_tree","page":"🧐 Reference","title":"DecisionTreeExt.wrap_decision_tree","text":"wrap_decision_tree(node::TreeNode, X, y)\n\nTurns a custom decision tree into a DecisionTree.Root object from the DecisionTree.jl package.\n\n\n\n\n\n","category":"function"},{"location":"reference/#DecisionTreeExt.wrap_decision_tree-Tuple{DecisionTreeExt.TreeNode}","page":"🧐 Reference","title":"DecisionTreeExt.wrap_decision_tree","text":"wrap_decision_tree(node::TreeNode)\n\nSee wrap_decision_tree(node::TreeNode, X, y).\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.JEM-Tuple{JointEnergyClassifier}","page":"🧐 Reference","title":"CounterfactualExplanations.JEM","text":"CounterfactualExplanations.JEM(\n    model::JointEnergyModels.JointEnergyClassifier; likelihood::Symbol=:classification_multi\n)\n\nOuter constructor for a joint energy model from JointEnergyModels.jl.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.Model-Tuple{Any, CounterfactualExplanations.JEM}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.Model","text":"Models.Model(model, type::CounterfactualExplanations.JEM; likelihood::Symbol=:classification_multi)\n\nOverloaded constructor for JEM models.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.Model-Tuple{CounterfactualData, CounterfactualExplanations.JEM}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.Model","text":"(M::Model)(data::CounterfactualData, type::JEM; kwargs...)\n\nConstructs a joint energy model for the given data.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.load_mnist_model-Tuple{CounterfactualExplanations.JEM}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.load_mnist_model","text":"Models.load_mnist_model(type::CounterfactualExplanations.JEM)\n\nOverload for loading a pre-trained model for the JEM model type.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.logits-Tuple{CounterfactualExplanations.Models.Model, CounterfactualExplanations.JEM, AbstractArray}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.logits","text":"Models.logits(M::JEM, X::AbstractArray)\n\nCalculates the logit scores output by the model M for the input data X.\n\nArguments\n\nM::JEM: The model selected by the user. 
Must be a model from the MLJ library.\nX::AbstractArray: The feature vector for which the logit scores are calculated.\n\nReturns\n\nlogits::Matrix: A matrix of logits for each output class for each data point in X.\n\nExample\n\nlogits = Models.logits(M, x) # calculates the logit scores for each output class for the data point x\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.probs-Tuple{CounterfactualExplanations.Models.Model, CounterfactualExplanations.JEM, AbstractArray}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.probs","text":"Models.probs(\n    M::Models.Model,\n    type::CounterfactualExplanations.JEM,\n    X::AbstractArray,\n)\n\nOverloads the Models.probs method for JEM models.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.train-Tuple{CounterfactualExplanations.Models.Model, CounterfactualExplanations.JEM, CounterfactualData}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.train","text":"train(M::JEM, data::CounterfactualData; kwargs...)\n\nFits the model M to the data in the CounterfactualData object. This method is not called by the user directly.\n\nArguments\n\nM::JEM: The wrapper for a JEM model.\ndata::CounterfactualData: The CounterfactualData object containing the data to be used for training the model.\n\nReturns\n\nM::JEM: The fitted JEM model.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.LaplaceReduxModel-Tuple{Laplace}","page":"🧐 Reference","title":"CounterfactualExplanations.LaplaceReduxModel","text":"CounterfactualExplanations.LaplaceReduxModel(\n    model::LaplaceRedux.Laplace; likelihood::Symbol=:classification_binary\n)\n\nOuter constructor for a neural network with Laplace Approximation from LaplaceRedux.jl.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.Model-Tuple{CounterfactualData, CounterfactualExplanations.LaplaceReduxModel}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.Model","text":"(M::Model)(data::CounterfactualData, type::LaplaceReduxModel; kwargs...)\n\nConstructs a neural network with Laplace approximation for the given data.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.logits-Tuple{CounterfactualExplanations.Models.Model, CounterfactualExplanations.LaplaceReduxModel, AbstractArray}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.logits","text":"logits(M::LaplaceReduxModel, X::AbstractArray)\n\nPredicts the logit scores for the input data X using the model M.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.probs-Tuple{CounterfactualExplanations.Models.Model, CounterfactualExplanations.LaplaceReduxModel, AbstractArray}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.probs","text":"probs(M::LaplaceReduxModel, X::AbstractArray)\n\nPredicts the probabilities of the classes for the input data X using the model M.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.train-Tuple{CounterfactualExplanations.Models.Model, CounterfactualExplanations.LaplaceReduxModel, CounterfactualData}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.train","text":"train(M::LaplaceReduxModel, data::CounterfactualData; kwargs...)\n\nFits the model M to the data in the CounterfactualData object. 
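(Internally this amounts to a call of the form M = train(M, data); a hedged sketch, assuming a wrapped model M.) 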
This method is not called by the user directly.\n\nArguments\n\nM::LaplaceReduxModel: The wrapper for a LaplaceReduxModel model.\ndata::CounterfactualData: The CounterfactualData object containing the data to be used for training the model.\n\nReturns\n\nM::LaplaceReduxModel: The fitted LaplaceReduxModel model.\n\n\n\n\n\n","category":"method"},{"location":"reference/#NeuroTreeExt.AtomicNeuroTree","page":"🧐 Reference","title":"NeuroTreeExt.AtomicNeuroTree","text":"Type union for NeuroTree classifiers and regressors.\n\n\n\n\n\n","category":"type"},{"location":"reference/#CounterfactualExplanations.Models.Model-Tuple{CounterfactualData, CounterfactualExplanations.NeuroTreeModel}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.Model","text":"(M::Model)(data::CounterfactualData, type::NeuroTreeModel; kwargs...)\n\nConstructs a differentiable tree-based model for the given data.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.NeuroTreeModel-Tuple{Union{NeuroTreeClassifier, NeuroTreeRegressor}}","page":"🧐 Reference","title":"CounterfactualExplanations.NeuroTreeModel","text":"CounterfactualExplanations.NeuroTreeModel(\n    model::AtomicNeuroTree; likelihood::Symbol=:classification_binary\n)\n\nOuter constructor for a differentiable tree-based model from NeuroTreeModels.jl.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.logits-Tuple{CounterfactualExplanations.Models.Model, CounterfactualExplanations.NeuroTreeModel, AbstractArray}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.logits","text":"Models.logits(M::NeuroTreeModel, X::AbstractArray)\n\nCalculates the logit scores output by the model M for the input data X.\n\nArguments\n\nM::NeuroTreeModel: The model selected by the user. Must be a model from the MLJ library.\nX::AbstractArray: The feature vector for which the logit scores are calculated.\n\nReturns\n\nlogits::Matrix: A matrix of logits for each output class for each data point in X.\n\nExample\n\nlogits = Models.logits(M, x) # calculates the logit scores for each output class for the data point x\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.probs-Tuple{CounterfactualExplanations.Models.Model, CounterfactualExplanations.NeuroTreeModel, AbstractArray}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.probs","text":"Models.probs(\n    M::Models.Model,\n    type::CounterfactualExplanations.NeuroTreeModel,\n    X::AbstractArray,\n)\n\nOverloads the probs method for NeuroTree models.\n\n\n\n\n\n","category":"method"},{"location":"reference/#CounterfactualExplanations.Models.train-Tuple{CounterfactualExplanations.Models.Model, CounterfactualExplanations.NeuroTreeModel, CounterfactualData}","page":"🧐 Reference","title":"CounterfactualExplanations.Models.train","text":"train(M::NeuroTreeModel, data::CounterfactualData; kwargs...)\n\nFits the model M to the data in the CounterfactualData object. 
This method is not called by the user directly.\n\nArguments\n\nM::NeuroTreeModel: The wrapper for a NeuroTree model.\ndata::CounterfactualData: The CounterfactualData object containing the data to be used for training the model.\n\nReturns\n\nM::NeuroTreeModel: The fitted NeuroTree model.\n\n\n\n\n\n","category":"method"},{"location":"tutorials/whistle_stop/","page":"Whistle-Stop Tour","title":"Whistle-Stop Tour","text":"CurrentModule = CounterfactualExplanations ","category":"page"},{"location":"tutorials/whistle_stop/#Whistle-Stop-Tour","page":"Whistle-Stop Tour","title":"Whistle-Stop Tour","text":"","category":"section"},{"location":"tutorials/whistle_stop/","page":"Whistle-Stop Tour","title":"Whistle-Stop Tour","text":"In this tutorial, we will go through a slightly more complex example involving synthetic data. We will generate Counterfactual Explanations using different generators and visualize the results.","category":"page"},{"location":"tutorials/whistle_stop/#Data-and-Classifier","page":"Whistle-Stop Tour","title":"Data and Classifier","text":"","category":"section"},{"location":"tutorials/whistle_stop/","page":"Whistle-Stop Tour","title":"Whistle-Stop Tour","text":"# Choose some values for data and a model:\nn_dim = 2\nn_classes = 4\nn_samples = 400\nmodel_name = :MLP","category":"page"},{"location":"tutorials/whistle_stop/","page":"Whistle-Stop Tour","title":"Whistle-Stop Tour","text":"The code chunk below generates synthetic data and uses it to fit a classifier. The outcome variable counterfactual_data.y consists of 4 classes. The input data counterfactual_data.X consists of 2 features. We generate a total of 400 samples. On the model side, we have specified model_name = :MLP. The fit_model function can be used to fit a number of default models.","category":"page"},{"location":"tutorials/whistle_stop/","page":"Whistle-Stop Tour","title":"Whistle-Stop Tour","text":"data = TaijaData.load_multi_class(n_samples)\ncounterfactual_data = DataPreprocessing.CounterfactualData(data...)\nM = fit_model(counterfactual_data, model_name)","category":"page"},{"location":"tutorials/whistle_stop/","page":"Whistle-Stop Tour","title":"Whistle-Stop Tour","text":"The chart below visualizes our data along with the model predictions. In particular, the contour indicates the predicted probabilities generated by our classifier. By default, these are the predicted probabilities for y=1, the first label. Multi-dimensional input data is compressed into two dimensions and the decision boundary is approximated using Nearest Neighbors (this is still somewhat experimental).","category":"page"},{"location":"tutorials/whistle_stop/","page":"Whistle-Stop Tour","title":"Whistle-Stop Tour","text":"plot(M, counterfactual_data)","category":"page"},{"location":"tutorials/whistle_stop/","page":"Whistle-Stop Tour","title":"Whistle-Stop Tour","text":"(Image: )","category":"page"},{"location":"tutorials/whistle_stop/#Counterfactual-Explanation","page":"Whistle-Stop Tour","title":"Counterfactual Explanation","text":"","category":"section"},{"location":"tutorials/whistle_stop/","page":"Whistle-Stop Tour","title":"Whistle-Stop Tour","text":"Next, we begin by specifying our target and factual label. 
We then draw a random sample from the non-target (factual) class.","category":"page"},{"location":"tutorials/whistle_stop/","page":"Whistle-Stop Tour","title":"Whistle-Stop Tour","text":"# Factual and target:\ntarget = 2\nfactual = 4\nchosen = rand(findall(predict_label(M, counterfactual_data) .== factual))\nx = select_factual(counterfactual_data, chosen)","category":"page"},{"location":"tutorials/whistle_stop/","page":"Whistle-Stop Tour","title":"Whistle-Stop Tour","text":"This sets the baseline for our counterfactual search: we plan to perturb the factual x to change the predicted label from y=4 to our target label target=2.","category":"page"},{"location":"tutorials/whistle_stop/","page":"Whistle-Stop Tour","title":"Whistle-Stop Tour","text":"Counterfactual generators accept several default parameters that can be used to adjust the counterfactual search at a high level: for example, a Flux.jl optimizer can be supplied to define how exactly gradient steps are performed. Importantly, one can also define the threshold probability at which the counterfactual search will converge. This relates to the probability, predicted by the underlying black-box model, that the counterfactual belongs to the target class. A higher decision threshold typically prolongs the counterfactual search.","category":"page"},{"location":"tutorials/whistle_stop/","page":"Whistle-Stop Tour","title":"Whistle-Stop Tour","text":"# Search params:\ndecision_threshold = 0.75\nnum_counterfactuals = 3","category":"page"},{"location":"tutorials/whistle_stop/","page":"Whistle-Stop Tour","title":"Whistle-Stop Tour","text":"The code below runs the counterfactual search for each generator available in the generator_catalogue. In each case, we also call the generic plot() method on the generated instance of type CounterfactualExplanation. This generates a simple plot that visualizes the entire counterfactual path. The chart below shows the results for all counterfactual generators: Factual: 4 → Target: 2.","category":"page"},{"location":"tutorials/whistle_stop/","page":"Whistle-Stop Tour","title":"Whistle-Stop Tour","text":"ces = Dict()\nplts = []\nplottable_generators = filter(((k,v),) -> k ∉ [:growing_spheres, :feature_tweak], generator_catalogue)\n# Search:\nfor (key, Generator) in plottable_generators\n    generator = Generator()\n    ce = generate_counterfactual(\n        x, target, counterfactual_data, M, generator;\n        num_counterfactuals = num_counterfactuals,\n        convergence=GeneratorConditionsConvergence(\n            decision_threshold=decision_threshold\n        )\n    )\n    ces[key] = ce\n    plts = [plts..., plot(ce; title=key, colorbar=false)]\nend","category":"page"},{"location":"tutorials/whistle_stop/","page":"Whistle-Stop Tour","title":"Whistle-Stop Tour","text":"(Image: )","category":"page"},{"location":"how_to_guides/custom_models/","page":"... 
add custom models","title":"... add custom models","text":"Apart from the default models you can use any arbitrary (differentiable) model and generate recourse in the same way as before. Only two steps are necessary to make your own Julia model compatible with this package:","category":"page"},{"location":"how_to_guides/custom_models/","page":"... add custom models","title":"... add custom models","text":"The model needs to be declared as a subtype of <:CounterfactualExplanations.Models.AbstractModel.\nYou need to extend the functions CounterfactualExplanations.Models.logits and CounterfactualExplanations.Models.probs for your custom model.","category":"page"},{"location":"how_to_guides/custom_models/#How-FluxModel-was-added","page":"... add custom models","title":"How FluxModel was added","text":"","category":"section"},{"location":"how_to_guides/custom_models/","page":"... add custom models","title":"... add custom models","text":"To demonstrate how this can be done in practice, we will reiterate here how native support for Flux.jl models was enabled (Innes 2018). Once again we use synthetic data for an illustrative example. The code below loads the data and builds a simple model architecture that can be used for a multi-class prediction task. Note how outputs from the final layer are not passed through a softmax activation function, since the counterfactual loss is evaluated with respect to logits. The model is trained with dropout.","category":"page"},{"location":"how_to_guides/custom_models/","page":"... add custom models","title":"... add custom models","text":"# Data:\nN = 200\ndata = TaijaData.load_blobs(N; centers=4, cluster_std=0.5)\ncounterfactual_data = DataPreprocessing.CounterfactualData(data...)\ny = counterfactual_data.y\nX = counterfactual_data.X\n\n# Flux model setup: \nusing Flux\ndata = Flux.DataLoader((X,y), batchsize=1)\nn_hidden = 32\noutput_dim = size(y,1)\ninput_dim = 2\nactivation = σ\nmodel = Chain(\n Dense(input_dim, n_hidden, activation),\n Dropout(0.1),\n Dense(n_hidden, output_dim)\n) \nloss(x, y) = Flux.Losses.logitcrossentropy(model(x), y)\n\n# Flux model training:\nusing Flux.Optimise: update!, Adam\nopt = Adam()\nepochs = 50\nfor epoch = 1:epochs\n for d in data\n gs = gradient(Flux.params(model)) do\n l = loss(d...)\n end\n update!(opt, Flux.params(model), gs)\n end\nend","category":"page"},{"location":"how_to_guides/custom_models/","page":"... add custom models","title":"... add custom models","text":"The code below implements the two steps that were necessary to make Flux models compatible with the package. We first declare our new struct as a subtype of <:AbstractDifferentiableModel, which itself is an abstract subtype of <:AbstractModel. Computing logits amounts to just calling the model on inputs. Predicted probabilities for labels can in this case be computed by passing predicted logits through the softmax function. Finally, we just instantiate our model in the same way as always.","category":"page"},{"location":"how_to_guides/custom_models/","page":"... add custom models","title":"... 
add custom models","text":"# Step 1)\nstruct MyFluxModel <: AbstractDifferentiableModel\n model::Any\n likelihood::Symbol\nend\n\n# Step 2)\n# import functions in order to extend\nimport CounterfactualExplanations.Models: logits\nimport CounterfactualExplanations.Models: probs \nlogits(M::MyFluxModel, X::AbstractArray) = M.model(X)\nprobs(M::MyFluxModel, X::AbstractArray) = softmax(logits(M, X))\nM = MyFluxModel(model, :classification_multi)","category":"page"},{"location":"how_to_guides/custom_models/","page":"... add custom models","title":"... add custom models","text":"The code below implements the counterfactual search and plots the results:","category":"page"},{"location":"how_to_guides/custom_models/","page":"... add custom models","title":"... add custom models","text":"factual_label = 4\ntarget = 2\nchosen = rand(findall(predict_label(M, counterfactual_data) .== factual_label))\nx = select_factual(counterfactual_data, chosen) \n\n# Counterfactual search:\ngenerator = GenericGenerator()\nce = generate_counterfactual(x, target, counterfactual_data, M, generator)\nplot(ce)","category":"page"},{"location":"how_to_guides/custom_models/","page":"... add custom models","title":"... add custom models","text":"(Image: )","category":"page"},{"location":"how_to_guides/custom_models/#References","page":"... add custom models","title":"References","text":"","category":"section"},{"location":"how_to_guides/custom_models/","page":"... add custom models","title":"... add custom models","text":"Innes, Mike. 2018. “Flux: Elegant Machine Learning with Julia.” Journal of Open Source Software 3 (25): 602. https://doi.org/10.21105/joss.00602.","category":"page"},{"location":"explanation/categorical/","page":"Categorical Features","title":"Categorical Features","text":"CurrentModule = CounterfactualExplanations ","category":"page"},{"location":"explanation/categorical/#Categorical-Features","page":"Categorical Features","title":"Categorical Features","text":"","category":"section"},{"location":"explanation/categorical/","page":"Categorical Features","title":"Categorical Features","text":"To illustrate how data is preprocessed under the hood, we consider a simple toy dataset with three categorical features (name, grade and sex) and one continuous feature (age):","category":"page"},{"location":"explanation/categorical/","page":"Categorical Features","title":"Categorical Features","text":"X = (\n name=categorical([\"Danesh\", \"Lee\", \"Mary\", \"John\"]),\n grade=categorical([\"A\", \"B\", \"A\", \"C\"], ordered=true),\n sex=categorical([\"male\",\"female\",\"male\",\"male\"]),\n height=[1.85, 1.67, 1.5, 1.67],\n)\nschema(X)","category":"page"},{"location":"explanation/categorical/","page":"Categorical Features","title":"Categorical Features","text":"Categorical features are expected to be one-hot or dummy encoded. 
To this end, we could use MLJ, for example:","category":"page"},{"location":"explanation/categorical/","page":"Categorical Features","title":"Categorical Features","text":"hot = OneHotEncoder()\nmach = fit!(machine(hot, X))\nW = transform(mach, X)\nschema(W)","category":"page"},{"location":"explanation/categorical/","page":"Categorical Features","title":"Categorical Features","text":"┌──────────────┬────────────┬─────────┐\n│ names │ scitypes │ types │\n├──────────────┼────────────┼─────────┤\n│ name__Danesh │ Continuous │ Float64 │\n│ name__John │ Continuous │ Float64 │\n│ name__Lee │ Continuous │ Float64 │\n│ name__Mary │ Continuous │ Float64 │\n│ grade__A │ Continuous │ Float64 │\n│ grade__B │ Continuous │ Float64 │\n│ grade__C │ Continuous │ Float64 │\n│ sex__female │ Continuous │ Float64 │\n│ sex__male │ Continuous │ Float64 │\n│ height │ Continuous │ Float64 │\n└──────────────┴────────────┴─────────┘","category":"page"},{"location":"explanation/categorical/","page":"Categorical Features","title":"Categorical Features","text":"The matrix that will be perturbed during the counterfactual search looks as follows:","category":"page"},{"location":"explanation/categorical/","page":"Categorical Features","title":"Categorical Features","text":"X = permutedims(MLJBase.matrix(W))","category":"page"},{"location":"explanation/categorical/","page":"Categorical Features","title":"Categorical Features","text":"10×4 Matrix{Float64}:\n 1.0 0.0 0.0 0.0\n 0.0 0.0 0.0 1.0\n 0.0 1.0 0.0 0.0\n 0.0 0.0 1.0 0.0\n 1.0 0.0 1.0 0.0\n 0.0 1.0 0.0 0.0\n 0.0 0.0 0.0 1.0\n 0.0 1.0 0.0 0.0\n 1.0 0.0 1.0 1.0\n 1.85 1.67 1.5 1.67","category":"page"},{"location":"explanation/categorical/","page":"Categorical Features","title":"Categorical Features","text":"The CounterfactualData constructor takes two optional arguments that can be used to specify the indices of categorical and continuous features. If nothing is supplied, all features are assumed to be continuous. For categorical features, the constructor expects an array of arrays of integers (Vector{Vector{Int}}), where each subarray includes the indices of all one-hot encoded rows related to a single categorical feature. 
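A hedged sketch of the resulting constructor call, using the index vectors defined in the next code chunk and assuming a suitable target vector y (keyword names as documented):","category":"page"},{"location":"explanation/categorical/","page":"Categorical Features","title":"Categorical Features","text":"counterfactual_data = CounterfactualData(\n    X, y;\n    features_categorical=features_categorical,\n    features_continuous=features_continuous,\n)","category":"page"},{"location":"explanation/categorical/","page":"Categorical Features","title":"Categorical Features","text":"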
In the example above, the name feature is one-hot encoded across rows 1, 2, 3 and 4 of X.","category":"page"},{"location":"explanation/categorical/","page":"Categorical Features","title":"Categorical Features","text":"features_categorical = [\n [1,2,3,4], # name\n [5,6,7], # grade\n [8,9] # sex\n]\nfeatures_continuous = [10]","category":"page"},{"location":"explanation/categorical/","page":"Categorical Features","title":"Categorical Features","text":"We propose the following simple logic for reconstructing categorical encodings after perturbations:","category":"page"},{"location":"explanation/categorical/","page":"Categorical Features","title":"Categorical Features","text":"For one-hot encoded features with multiple classes, choose the maximum.\nFor binary features, clip the perturbed value to fall into $[0,1]$ and round to the nearest of the two integers.","category":"page"},{"location":"explanation/categorical/","page":"Categorical Features","title":"Categorical Features","text":"function reconstruct_cat_encoding(x)\n map(features_categorical) do cat_group_index\n if length(cat_group_index) > 1\n x[cat_group_index] = Int.(x[cat_group_index] .== maximum(x[cat_group_index]))\n if sum(x[cat_group_index]) > 1\n ties = findall(x[cat_group_index] .== 1)\n _x = zeros(length(x[cat_group_index]))\n winner = rand(ties,1)[1]\n _x[winner] = 1\n x[cat_group_index] = _x\n end\n else\n x[cat_group_index] = [round(clamp(x[cat_group_index][1],0,1))]\n end\n end\n return x\nend","category":"page"},{"location":"explanation/categorical/","page":"Categorical Features","title":"Categorical Features","text":"Let’s look at a few simple examples to see how this function works. Firstly, consider the case of perturbing a single element:","category":"page"},{"location":"explanation/categorical/","page":"Categorical Features","title":"Categorical Features","text":"x = X[:,1]\nx[1] = 1.1\nx","category":"page"},{"location":"explanation/categorical/","page":"Categorical Features","title":"Categorical Features","text":"10-element Vector{Float64}:\n 1.1\n 0.0\n 0.0\n 0.0\n 1.0\n 0.0\n 0.0\n 0.0\n 1.0\n 1.85","category":"page"},{"location":"explanation/categorical/","page":"Categorical Features","title":"Categorical Features","text":"The reconstructed one-hot-encoded vector will look like this:","category":"page"},{"location":"explanation/categorical/","page":"Categorical Features","title":"Categorical Features","text":"reconstruct_cat_encoding(x)","category":"page"},{"location":"explanation/categorical/","page":"Categorical Features","title":"Categorical Features","text":"10-element Vector{Float64}:\n 1.0\n 0.0\n 0.0\n 0.0\n 1.0\n 0.0\n 0.0\n 0.0\n 1.0\n 1.85","category":"page"},{"location":"explanation/categorical/","page":"Categorical Features","title":"Categorical Features","text":"Next, consider the case of perturbing multiple elements:","category":"page"},{"location":"explanation/categorical/","page":"Categorical Features","title":"Categorical Features","text":"x[2] = 1.1\nx[3] = -1.2\nx","category":"page"},{"location":"explanation/categorical/","page":"Categorical Features","title":"Categorical Features","text":"10-element Vector{Float64}:\n 1.0\n 1.1\n -1.2\n 0.0\n 1.0\n 0.0\n 0.0\n 0.0\n 1.0\n 1.85","category":"page"},{"location":"explanation/categorical/","page":"Categorical Features","title":"Categorical Features","text":"The reconstructed one-hot-encoded vector will look like this:","category":"page"},{"location":"explanation/categorical/","page":"Categorical 
Features","text":"reconstruct_cat_encoding(x)","category":"page"},{"location":"explanation/categorical/","page":"Categorical Features","title":"Categorical Features","text":"10-element Vector{Float64}:\n 0.0\n 1.0\n 0.0\n 0.0\n 1.0\n 0.0\n 0.0\n 0.0\n 1.0\n 1.85","category":"page"},{"location":"explanation/categorical/","page":"Categorical Features","title":"Categorical Features","text":"Finally, let’s introduce a tie:","category":"page"},{"location":"explanation/categorical/","page":"Categorical Features","title":"Categorical Features","text":"x[1] = 1.0\nx","category":"page"},{"location":"explanation/categorical/","page":"Categorical Features","title":"Categorical Features","text":"10-element Vector{Float64}:\n 1.0\n 1.0\n 0.0\n 0.0\n 1.0\n 0.0\n 0.0\n 0.0\n 1.0\n 1.85","category":"page"},{"location":"explanation/categorical/","page":"Categorical Features","title":"Categorical Features","text":"The reconstructed one-hot-encoded vector will look like this:","category":"page"},{"location":"explanation/categorical/","page":"Categorical Features","title":"Categorical Features","text":"reconstruct_cat_encoding(x)","category":"page"},{"location":"explanation/categorical/","page":"Categorical Features","title":"Categorical Features","text":"10-element Vector{Float64}:\n 0.0\n 1.0\n 0.0\n 0.0\n 1.0\n 0.0\n 0.0\n 0.0\n 1.0\n 1.85","category":"page"},{"location":"explanation/evaluation/overview/","page":"Overview","title":"Overview","text":"CurrentModule = CounterfactualExplanations ","category":"page"},{"location":"explanation/evaluation/overview/#Evaluation","page":"Overview","title":"Evaluation","text":"","category":"section"},{"location":"explanation/evaluation/overview/","page":"Overview","title":"Overview","text":"Evaluation of counterfactual explanations is an integral part of the counterfactual explanation process. It is important to evaluate the quality of the generated counterfactual explanations to ensure that they are meaningful and useful. The tutorial provides an overview of the evaluation metrics and methods that can be used to evaluate counterfactual explanations. In this part of the documentation, we dive deeper into specific evaluation metrics and methods that can be used to evaluate counterfactual explanations.","category":"page"},{"location":"explanation/generators/dice/","page":"DiCE","title":"DiCE","text":"CurrentModule = CounterfactualExplanations ","category":"page"},{"location":"explanation/generators/dice/#DiCEGenerator","page":"DiCE","title":"DiCEGenerator","text":"","category":"section"},{"location":"explanation/generators/dice/","page":"DiCE","title":"DiCE","text":"The DiCEGenerator can be used to generate multiple diverse counterfactuals for a single factual.","category":"page"},{"location":"explanation/generators/dice/#Description","page":"DiCE","title":"Description","text":"","category":"section"},{"location":"explanation/generators/dice/","page":"DiCE","title":"DiCE","text":"Counterfactual Explanations are not unique and there are therefore many different ways through which valid counterfactuals can be generated. In the context of Algorithmic Recourse this can be leveraged to offer individuals not one, but possibly many different ways to change a negative outcome into a positive one. One might argue that it makes sense for those different options to be as diverse as possible. 
This idea is at the core of DiCE, a counterfactual generator introduced by Mothilal, Sharma, and Tan (2020) that generates a diverse set of counterfactual explanations.","category":"page"},{"location":"explanation/generators/dice/#Defining-Diversity","page":"DiCE","title":"Defining Diversity","text":"","category":"section"},{"location":"explanation/generators/dice/","page":"DiCE","title":"DiCE","text":"To ensure that the generated counterfactuals are diverse, Mothilal, Sharma, and Tan (2020) add a diversity constraint to the counterfactual search objective. In particular, diversity is explicitly proxied via Determinantal Point Processes (DPP).","category":"page"},{"location":"explanation/generators/dice/","page":"DiCE","title":"DiCE","text":"We can implement DPP in Julia as follows:[1]","category":"page"},{"location":"explanation/generators/dice/","page":"DiCE","title":"DiCE","text":"using LinearAlgebra\nfunction ddp_diversity(X::AbstractArray{<:Real, 3})\n xs = eachslice(X, dims = ndims(X))\n K = [1/(1 + norm(x .- y)) for x in xs, y in xs]\n return det(K)\nend","category":"page"},{"location":"explanation/generators/dice/","page":"DiCE","title":"DiCE","text":"Below we generate some random points in $\mathbb{R}^2$ and apply gradient ascent on this function evaluated at the whole array of points. As we can see in the animation below, the points are sent away from each other. In other words, diversity across the array of points increases as we ascend the ddp_diversity function.","category":"page"},{"location":"explanation/generators/dice/","page":"DiCE","title":"DiCE","text":"lims = 5\nN = 5\nX = rand(2,1,N)\nT = 50\nη = 0.1\nanim = @animate for t in 1:T\n X .+= gradient(ddp_diversity, X)[1]\n Z = reshape(X,2,N)\n scatter(\n Z[1,:],Z[2,:],ms=25, \n xlims=(-lims,lims),ylims=(-lims,lims),\n label=\"\",colour=1:N,\n size=(500,500),\n title=\"Diverse Counterfactuals\"\n )\nend\ngif(anim, joinpath(www_path, \"dice_intro.gif\"))","category":"page"},{"location":"explanation/generators/dice/","page":"DiCE","title":"DiCE","text":"(Image: )","category":"page"},{"location":"explanation/generators/dice/#Usage","page":"DiCE","title":"Usage","text":"","category":"section"},{"location":"explanation/generators/dice/","page":"DiCE","title":"DiCE","text":"The approach can be used in our package as follows:","category":"page"},{"location":"explanation/generators/dice/","page":"DiCE","title":"DiCE","text":"generator = DiCEGenerator()\nconv = CounterfactualExplanations.Convergence.GeneratorConditionsConvergence()\nce = generate_counterfactual(\n x, target, counterfactual_data, M, generator; \n num_counterfactuals=5, convergence=conv\n)\nplot(ce)","category":"page"},{"location":"explanation/generators/dice/","page":"DiCE","title":"DiCE","text":"(Image: )","category":"page"},{"location":"explanation/generators/dice/#Effect-of-Penalty","page":"DiCE","title":"Effect of Penalty","text":"","category":"section"},{"location":"explanation/generators/dice/","page":"DiCE","title":"DiCE","text":"Λ₂ = [0.1, 1.0, 5.0]\nces = []\nn_cf = 5\nusing Flux\nfor λ₂ ∈ Λ₂ \n λ = [0.00, λ₂]\n generator = DiCEGenerator(λ=λ)\n ces = vcat(\n ces...,\n generate_counterfactual(\n x, target, counterfactual_data, M, generator; \n num_counterfactuals=n_cf, convergence=conv\n )\n )\nend","category":"page"},{"location":"explanation/generators/dice/","page":"DiCE","title":"DiCE","text":"The figure below shows the resulting counterfactual paths. 
As expected, the resulting counterfactuals are more dispersed across the feature domain for higher choices of $\lambda_2$.","category":"page"},{"location":"explanation/generators/dice/","page":"DiCE","title":"DiCE","text":"(Image: )","category":"page"},{"location":"explanation/generators/dice/#References","page":"DiCE","title":"References","text":"","category":"section"},{"location":"explanation/generators/dice/","page":"DiCE","title":"DiCE","text":"Mothilal, Ramaravind K, Amit Sharma, and Chenhao Tan. 2020. “Explaining Machine Learning Classifiers Through Diverse Counterfactual Explanations.” In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 607–17. https://doi.org/10.1145/3351095.3372850.","category":"page"},{"location":"explanation/generators/dice/","page":"DiCE","title":"DiCE","text":"[1] With thanks to the respondents on Discourse","category":"page"},{"location":"how_to_guides/","page":"Overview","title":"Overview","text":"CurrentModule = CounterfactualExplanations","category":"page"},{"location":"how_to_guides/#How-To-Guides","page":"Overview","title":"How-To Guides","text":"","category":"section"},{"location":"how_to_guides/","page":"Overview","title":"Overview","text":"In this section, you will find a series of how-to-guides that showcase specific use cases of counterfactual explanations (CE).","category":"page"},{"location":"how_to_guides/","page":"Overview","title":"Overview","text":"How-to guides are directions that take the reader through the steps required to solve a real-world problem. How-to guides are goal-oriented.— Diátaxis","category":"page"},{"location":"how_to_guides/","page":"Overview","title":"Overview","text":"In other words, you come here because you may have some particular problem in mind, would like to see how it can be solved using CE and then most likely head off again 🫡.","category":"page"},{"location":"explanation/optimisers/jsma/","page":"JSMA","title":"JSMA","text":"CurrentModule = CounterfactualExplanations ","category":"page"},{"location":"explanation/optimisers/jsma/#Jacobian-based-Saliency-Map-Attack","page":"JSMA","title":"Jacobian-based Saliency Map Attack","text":"","category":"section"},{"location":"explanation/optimisers/jsma/","page":"JSMA","title":"JSMA","text":"To search counterfactuals, Schut et al. (2021) propose to use a Jacobian-Based Saliency Map Attack (JSMA) inspired by the literature on adversarial attacks. It works by moving in the direction of the most salient feature at a fixed step size in each iteration. Schut et al. (2021) use this optimisation rule in the context of Bayesian classifiers and demonstrate good results in terms of plausibility — how realistic counterfactuals are — and redundancy — how sparse the proposed feature changes are.","category":"page"},{"location":"explanation/optimisers/jsma/#JSMADescent","page":"JSMA","title":"JSMADescent","text":"","category":"section"},{"location":"explanation/optimisers/jsma/","page":"JSMA","title":"JSMA","text":"To implement this approach in a reusable manner, we have added JSMA as a Flux optimiser. In particular, we have added a class JSMADescent<:Flux.Optimise.AbstractOptimiser, for which we have overloaded the Flux.Optimise.apply! method. 
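In spirit, the update rule can be sketched as follows (an illustrative stand-in, not the package’s internal implementation; jsma_step and its arguments are hypothetical):\n\n# Take a fixed-size step along the most salient feature only:\nfunction jsma_step(x::AbstractVector, g::AbstractVector; η=0.1)\n i = argmax(abs.(g)) # most salient feature (largest absolute gradient)\n Δ = zero(x)\n Δ[i] = η * sign(g[i]) # fixed step size η along that feature\n return x .- Δ # descent-style update of the counterfactual\nend 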
This makes it possible to reuse JSMADescent as an optimiser in composable generators.","category":"page"},{"location":"explanation/optimisers/jsma/","page":"JSMA","title":"JSMA","text":"The optimiser can be used with any generator as follows:","category":"page"},{"location":"explanation/optimisers/jsma/","page":"JSMA","title":"JSMA","text":"using CounterfactualExplanations.Generators: JSMADescent\ngenerator = GenericGenerator() |>\n gen -> @with_optimiser(gen,JSMADescent(;η=0.1))\nce = generate_counterfactual(x, target, counterfactual_data, M, generator)","category":"page"},{"location":"explanation/optimisers/jsma/","page":"JSMA","title":"JSMA","text":"The figure below compares the resulting counterfactual search outcome to the corresponding outcome with generic Descent.","category":"page"},{"location":"explanation/optimisers/jsma/","page":"JSMA","title":"JSMA","text":"plot(p1,p2,size=(1000,400))","category":"page"},{"location":"explanation/optimisers/jsma/","page":"JSMA","title":"JSMA","text":"(Image: )","category":"page"},{"location":"explanation/optimisers/jsma/","page":"JSMA","title":"JSMA","text":"Schut, Lisa, Oscar Key, Rory Mc Grath, Luca Costabello, Bogdan Sacaleanu, Yarin Gal, et al. 2021. “Generating Interpretable Counterfactual Explanations By Implicit Minimisation of Epistemic and Aleatoric Uncertainties.” In International Conference on Artificial Intelligence and Statistics, 1756–64. PMLR.","category":"page"},{"location":"tutorials/data_catalogue/","page":"Data Catalogue","title":"Data Catalogue","text":"CurrentModule = CounterfactualExplanations ","category":"page"},{"location":"tutorials/data_catalogue/#Data-Catalogue","page":"Data Catalogue","title":"Data Catalogue","text":"","category":"section"},{"location":"tutorials/data_catalogue/","page":"Data Catalogue","title":"Data Catalogue","text":"To allow researchers and practitioners to test and compare counterfactual generators, the Taija environment includes the package TaijaData.jl, which comes with pre-processed synthetic and real-world benchmark datasets from different domains. 
This page explains how to use TaijaData.jl in tandem with CounterfactualExplanations.jl.","category":"page"},{"location":"tutorials/data_catalogue/#Synthetic-Data","page":"Data Catalogue","title":"Synthetic Data","text":"","category":"section"},{"location":"tutorials/data_catalogue/","page":"Data Catalogue","title":"Data Catalogue","text":"The following dictionary can be used to inspect the available methods to generate synthetic datasets where the key indicates the name of the data and the value is the corresponding method:","category":"page"},{"location":"tutorials/data_catalogue/","page":"Data Catalogue","title":"Data Catalogue","text":"TaijaData.data_catalogue[:synthetic]","category":"page"},{"location":"tutorials/data_catalogue/","page":"Data Catalogue","title":"Data Catalogue","text":"Dict{Symbol, Function} with 6 entries:\n :overlapping => load_overlapping\n :linearly_separable => load_linearly_separable\n :blobs => load_blobs\n :moons => load_moons\n :circles => load_circles\n :multi_class => load_multi_class","category":"page"},{"location":"tutorials/data_catalogue/","page":"Data Catalogue","title":"Data Catalogue","text":"The chart below shows the generated data using default parameters:","category":"page"},{"location":"tutorials/data_catalogue/","page":"Data Catalogue","title":"Data Catalogue","text":"plts = []\n_height = 200\n_n = length(keys(data_catalogue[:synthetic]))\nfor (key, fun) in data_catalogue[:synthetic]\n data = fun()\n counterfactual_data = DataPreprocessing.CounterfactualData(data...)\n plt = plot()\n scatter!(counterfactual_data, title=key)\n plts = [plts..., plt]\nend\nplot(plts..., size=(_n * _height, _height), layout=(1, _n))","category":"page"},{"location":"tutorials/data_catalogue/","page":"Data Catalogue","title":"Data Catalogue","text":"(Image: )","category":"page"},{"location":"tutorials/data_catalogue/#Real-World-Data","page":"Data Catalogue","title":"Real-World Data","text":"","category":"section"},{"location":"tutorials/data_catalogue/","page":"Data Catalogue","title":"Data Catalogue","text":"As for real-world data, the same dictionary can be used to inspect the available data from different domains.","category":"page"},{"location":"tutorials/data_catalogue/","page":"Data Catalogue","title":"Data Catalogue","text":"TaijaData.data_catalogue[:tabular]","category":"page"},{"location":"tutorials/data_catalogue/","page":"Data Catalogue","title":"Data Catalogue","text":"Dict{Symbol, Function} with 5 entries:\n :german_credit => load_german_credit\n :california_housing => load_california_housing\n :credit_default => load_credit_default\n :adult => load_uci_adult\n :gmsc => load_gmsc","category":"page"},{"location":"tutorials/data_catalogue/","page":"Data Catalogue","title":"Data Catalogue","text":"TaijaData.data_catalogue[:vision]","category":"page"},{"location":"tutorials/data_catalogue/","page":"Data Catalogue","title":"Data Catalogue","text":"Dict{Symbol, Function} with 3 entries:\n :fashion_mnist => load_fashion_mnist\n :mnist => load_mnist\n :cifar_10 => load_cifar_10","category":"page"},{"location":"tutorials/data_catalogue/#Loading-Data","page":"Data Catalogue","title":"Loading Data","text":"","category":"section"},{"location":"tutorials/data_catalogue/","page":"Data Catalogue","title":"Data Catalogue","text":"To load or generate any of the datasets listed above, you can just use the corresponding method, for example:","category":"page"},{"location":"tutorials/data_catalogue/","page":"Data Catalogue","title":"Data Catalogue","text":"data = 
TaijaData.load_linearly_separable()\ncounterfactual_data = DataPreprocessing.CounterfactualData(data...)","category":"page"},{"location":"tutorials/data_catalogue/","page":"Data Catalogue","title":"Data Catalogue","text":"Optionally, you can specify how many samples you want to generate like so:","category":"page"},{"location":"tutorials/data_catalogue/","page":"Data Catalogue","title":"Data Catalogue","text":"n = 100\ndata = TaijaData.load_overlapping(n)\ncounterfactual_data = DataPreprocessing.CounterfactualData(data...)","category":"page"},{"location":"tutorials/data_catalogue/","page":"Data Catalogue","title":"Data Catalogue","text":"This also applies to real-world datasets, which by default are loaded in their entirety. If n is supplied, the dataset will be randomly undersampled:","category":"page"},{"location":"tutorials/data_catalogue/","page":"Data Catalogue","title":"Data Catalogue","text":"data = TaijaData.load_mnist(n)\ncounterfactual_data = DataPreprocessing.CounterfactualData(data...)","category":"page"},{"location":"tutorials/data_catalogue/","page":"Data Catalogue","title":"Data Catalogue","text":"The undersampled dataset is automatically balanced:","category":"page"},{"location":"tutorials/data_catalogue/","page":"Data Catalogue","title":"Data Catalogue","text":"sum(counterfactual_data.y; dims=2)","category":"page"},{"location":"tutorials/data_catalogue/","page":"Data Catalogue","title":"Data Catalogue","text":"10×1 Matrix{Int64}:\n 10\n 10\n 10\n 10\n 10\n 10\n 10\n 10\n 10\n 10","category":"page"},{"location":"tutorials/data_catalogue/","page":"Data Catalogue","title":"Data Catalogue","text":"We can also use a helper function to split the data into train and test sets:","category":"page"},{"location":"tutorials/data_catalogue/","page":"Data Catalogue","title":"Data Catalogue","text":"train_data, test_data = \n CounterfactualExplanations.DataPreprocessing.train_test_split(counterfactual_data)","category":"page"},{"location":"explanation/generators/clap_roar/","page":"ClaPROAR","title":"ClaPROAR","text":"CurrentModule = CounterfactualExplanations ","category":"page"},{"location":"explanation/generators/clap_roar/#ClaPROARGenerator","page":"ClaPROAR","title":"ClaPROARGenerator","text":"","category":"section"},{"location":"explanation/generators/clap_roar/","page":"ClaPROAR","title":"ClaPROAR","text":"The ClaPROARGenerator was introduced in Altmeyer et al. (2023).","category":"page"},{"location":"explanation/generators/clap_roar/#Description","page":"ClaPROAR","title":"Description","text":"","category":"section"},{"location":"explanation/generators/clap_roar/","page":"ClaPROAR","title":"ClaPROAR","text":"The acronym Clap stands for classifier-preserving. The approach is loosely inspired by ROAR (Upadhyay, Joshi, and Lakkaraju 2021). Altmeyer et al. (2023) propose to explicitly penalize the loss incurred by the classifier when evaluated on the counterfactual $x^\prime$ at given parameter values. Formally, we have","category":"page"},{"location":"explanation/generators/clap_roar/","page":"ClaPROAR","title":"ClaPROAR","text":"$$\begin{aligned}\n\text{extcost}(f(\mathbf{s}^\prime)) = l(M(f(\mathbf{s}^\prime)), y^\prime)\n\end{aligned}$$","category":"page"},{"location":"explanation/generators/clap_roar/","page":"ClaPROAR","title":"ClaPROAR","text":"for each counterfactual $k$, where $l$ denotes the loss function used to train $M$. This approach is based on the intuition that (endogenous) model shifts will be triggered by counterfactuals that increase classifier loss (Altmeyer et al. 
2023).","category":"page"},{"location":"explanation/generators/clap_roar/#Usage","page":"ClaPROAR","title":"Usage","text":"","category":"section"},{"location":"explanation/generators/clap_roar/","page":"ClaPROAR","title":"ClaPROAR","text":"The approach can be used in our package as follows:","category":"page"},{"location":"explanation/generators/clap_roar/","page":"ClaPROAR","title":"ClaPROAR","text":"generator = ClaPROARGenerator()\nce = generate_counterfactual(x, target, counterfactual_data, M, generator)\nplot(ce)","category":"page"},{"location":"explanation/generators/clap_roar/","page":"ClaPROAR","title":"ClaPROAR","text":"(Image: )","category":"page"},{"location":"explanation/generators/clap_roar/#Comparison-to-GenericGenerator","page":"ClaPROAR","title":"Comparison to GenericGenerator","text":"","category":"section"},{"location":"explanation/generators/clap_roar/","page":"ClaPROAR","title":"ClaPROAR","text":"The figure below compares the outcome for the GenericGenerator and the ClaPROARGenerator.","category":"page"},{"location":"explanation/generators/clap_roar/","page":"ClaPROAR","title":"ClaPROAR","text":"(Image: )","category":"page"},{"location":"explanation/generators/clap_roar/#References","page":"ClaPROAR","title":"References","text":"","category":"section"},{"location":"explanation/generators/clap_roar/","page":"ClaPROAR","title":"ClaPROAR","text":"Altmeyer, Patrick, Giovan Angela, Aleksander Buszydlik, Karol Dobiczek, Arie van Deursen, and Cynthia Liem. 2023. “Endogenous Macrodynamics in Algorithmic Recourse.” In First IEEE Conference on Secure and Trustworthy Machine Learning. https://doi.org/10.1109/satml54575.2023.00036.","category":"page"},{"location":"explanation/generators/clap_roar/","page":"ClaPROAR","title":"ClaPROAR","text":"Upadhyay, Sohini, Shalmali Joshi, and Himabindu Lakkaraju. 2021. “Towards Robust and Reliable Algorithmic Recourse.” https://arxiv.org/abs/2102.13620.","category":"page"},{"location":"explanation/","page":"Overview","title":"Overview","text":"CurrentModule = CounterfactualExplanations","category":"page"},{"location":"explanation/#Explanation","page":"Overview","title":"Explanation","text":"","category":"section"},{"location":"explanation/","page":"Overview","title":"Overview","text":"In this section you will find detailed explanations about the methodology and code.","category":"page"},{"location":"explanation/","page":"Overview","title":"Overview","text":"Explanation clarifies, deepens and broadens the reader’s understanding of a subject.— Diátaxis","category":"page"},{"location":"explanation/","page":"Overview","title":"Overview","text":"In other words, you come here because you are interested in understanding how all of this actually works 🤓.","category":"page"},{"location":"explanation/generators/feature_tweak/","page":"FeatureTweak","title":"FeatureTweak","text":"CurrentModule = CounterfactualExplanations ","category":"page"},{"location":"explanation/generators/feature_tweak/#FeatureTweakGenerator","page":"FeatureTweak","title":"FeatureTweakGenerator","text":"","category":"section"},{"location":"explanation/generators/feature_tweak/","page":"FeatureTweak","title":"FeatureTweak","text":"warning: Moved to extension\nAs of version 1.1.6, the functionality of the FeatureTweakGenerator has been moved to the DecisionTreeExt extension. This means it is lazily loaded only if the DecisionTree.jl package is loaded by the user, since the FeatureTweakGenerator is only compatible with tree-based models. 
","category":"page"},{"location":"explanation/generators/feature_tweak/","page":"FeatureTweak","title":"FeatureTweak","text":"Feature Tweak refers to the generator introduced by Tolomei et al. (2017). Our implementation takes inspiration from the featureTweakPy library.","category":"page"},{"location":"explanation/generators/feature_tweak/#Description","page":"FeatureTweak","title":"Description","text":"","category":"section"},{"location":"explanation/generators/feature_tweak/","page":"FeatureTweak","title":"FeatureTweak","text":"Feature Tweak is a powerful recourse algorithm for ensembles of tree-based classifiers such as random forests. Though the problem of understanding how an input to an ensemble model could be transformed in such a way that the model changes its original prediction has been proven to be NP-hard (Tolomei et al. 2017), Feature Tweak provides an algorithm that manages to tractably solve this problem in multiple real-world applications. An example of a problem Feature Tweak is able to efficiently solve, explored in depth in Tolomei et al. (2017) is the problem of transforming an advertisement that has been classified by the ensemble model as a low-quality advertisement to a high-quality one through small changes to its features. With the help of Feature Tweak, advertisers can both learn about the reasons a particular ad was marked to have a low quality, as well as receive actionable suggestions about how to convert a low-quality ad into a high-quality one.","category":"page"},{"location":"explanation/generators/feature_tweak/","page":"FeatureTweak","title":"FeatureTweak","text":"Though Feature Tweak is a powerful way of avoiding brute-force search in an exponential search space, it does not come without disadvantages. The primary limitations of the approach are that it’s currently only applicable to tree-based classifiers and works only in the setting of binary classification. Another problem is that though the algorithm avoids exponential-time search, it is often still computationally expensive. The algorithm may be improved in the future to tackle all of these shortcomings.","category":"page"},{"location":"explanation/generators/feature_tweak/","page":"FeatureTweak","title":"FeatureTweak","text":"The following equation displays how a true negative instance x can be transformed into a positively predicted instance x’. To be more precise, x’ is the best possible transformation among all transformations **x***, computed with a cost function δ.","category":"page"},{"location":"explanation/generators/feature_tweak/","page":"FeatureTweak","title":"FeatureTweak","text":"beginaligned\nmathbfx^prime = arg_mathbfx^* min delta(mathbfx mathbfx^*) hatf(mathbfx) = -1 wedge hatf(mathbfx^*) = +1 \nendaligned","category":"page"},{"location":"explanation/generators/feature_tweak/#Example","page":"FeatureTweak","title":"Example","text":"","category":"section"},{"location":"explanation/generators/feature_tweak/","page":"FeatureTweak","title":"FeatureTweak","text":"To make use of the FeatureTweakGenerator, you need to have the DecisionTree.jl package installed. 
Loading the package will load the functionality of the FeatureTweakGenerator through the DecisionTreeExt extension:","category":"page"},{"location":"explanation/generators/feature_tweak/","page":"FeatureTweak","title":"FeatureTweak","text":"using DecisionTree","category":"page"},{"location":"explanation/generators/feature_tweak/","page":"FeatureTweak","title":"FeatureTweak","text":"In this example we apply the Feature Tweak algorithm to a decision tree and a random forest trained on the moons dataset. We first load the data and fit the models:","category":"page"},{"location":"explanation/generators/feature_tweak/","page":"FeatureTweak","title":"FeatureTweak","text":"n = 500\ncounterfactual_data = CounterfactualData(TaijaData.load_moons(n)...)\n\n# Classifiers\ndecision_tree = CounterfactualExplanations.Models.fit_model(\n counterfactual_data, :DecisionTree; max_depth=5, min_samples_leaf=3\n)\nforest = Models.fit_model(counterfactual_data, :RandomForest)","category":"page"},{"location":"explanation/generators/feature_tweak/","page":"FeatureTweak","title":"FeatureTweak","text":"Next, we select a point to explain and a target class to transform the point to. We then search for counterfactuals using the FeatureTweakGenerator:","category":"page"},{"location":"explanation/generators/feature_tweak/","page":"FeatureTweak","title":"FeatureTweak","text":"# Select a point to explain:\nx = Float32.([1, -0.5])[:,:]\nfactual = Models.predict_label(forest, counterfactual_data, x)\ntarget = counterfactual_data.y_levels[findall(counterfactual_data.y_levels .!= factual)][1]\n\n# Search for counterfactuals:\ngenerator = FeatureTweakGenerator(ϵ=0.1)\ntree_counterfactual = generate_counterfactual(\n x, target, counterfactual_data, decision_tree, generator\n)\nforest_counterfactual = generate_counterfactual(\n x, target, counterfactual_data, forest, generator\n)","category":"page"},{"location":"explanation/generators/feature_tweak/","page":"FeatureTweak","title":"FeatureTweak","text":"The resulting counterfactuals are shown below:","category":"page"},{"location":"explanation/generators/feature_tweak/","page":"FeatureTweak","title":"FeatureTweak","text":"p1 = plot(\n tree_counterfactual;\n colorbar=false,\n title=\"Decision Tree\",\n)\n\np2 = plot(\n forest_counterfactual; title=\"Random Forest\",\n colorbar=false,\n)\n\ndisplay(plot(p1, p2; size=(800, 400)))","category":"page"},{"location":"explanation/generators/feature_tweak/","page":"FeatureTweak","title":"FeatureTweak","text":"(Image: )","category":"page"},{"location":"explanation/generators/feature_tweak/#References","page":"FeatureTweak","title":"References","text":"","category":"section"},{"location":"explanation/generators/feature_tweak/","page":"FeatureTweak","title":"FeatureTweak","text":"Tolomei, Gabriele, Fabrizio Silvestri, Andrew Haines, and Mounia Lalmas. 2017. “Interpretable Predictions of Tree-Based Ensembles via Actionable Feature Tweaking.” In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 465–74. 
https://doi.org/10.1145/3097983.3098039.","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"CurrentModule = CounterfactualExplanations ","category":"page"},{"location":"tutorials/model_catalogue/#Model-Catalogue","page":"Model Catalogue","title":"Model Catalogue","text":"","category":"section"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"While in general it is assumed that users will use this package to explain their pre-trained models, we provide out-of-the-box functionality to train various simple default models. In this tutorial, we will see how these models can be fitted to CounterfactualData.","category":"page"},{"location":"tutorials/model_catalogue/#Available-Models","page":"Model Catalogue","title":"Available Models","text":"","category":"section"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"The standard_models_catalogue can be used to inspect the available default models:","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"standard_models_catalogue","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"Dict{Symbol, DataType} with 3 entries:\n :Linear => Linear\n :DeepEnsemble => FluxEnsemble\n :MLP => FluxModel","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"The dictionary keys correspond to the model names. In this case, the dictionary values are constructors that can be called on instances of type CounterfactualData to fit the corresponding model. In most cases, users will find it most convenient to use the fit_model API call instead.","category":"page"},{"location":"tutorials/model_catalogue/#Fitting-Models","page":"Model Catalogue","title":"Fitting Models","text":"","category":"section"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"Models from the standard model catalogue are a core part of the package and thus compatible with all offered counterfactual generators and functionalities.","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"The all_models_catalogue can be used to inspect all models offered by the package:","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"all_models_catalogue","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"However, when using models not included in the standard_models_catalogue, additional caution is advised: they might not be supported by all counterfactual generators or they might not be models native to Julia. Thus, a more thorough reading of their documentation may be necessary to make sure that they are used correctly.","category":"page"},{"location":"tutorials/model_catalogue/#Fitting-Flux-Models","page":"Model Catalogue","title":"Fitting Flux Models","text":"","category":"section"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"First, let’s load one of the synthetic datasets. 
For this, we’ll first need to import the TaijaData.jl package:","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"n = 500\ndata = TaijaData.load_multi_class(n)\ncounterfactual_data = DataPreprocessing.CounterfactualData(data...)","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"We could use a Deep Ensemble (Lakshminarayanan, Pritzel, and Blundell 2017) as follows:","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"M = fit_model(counterfactual_data, :DeepEnsemble)","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"The returned object is an instance of type FluxEnsemble <: AbstractModel and can be used in downstream tasks without further ado. For example, the resulting fit can be visualised using the generic plot() method as:","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"plts = []\nfor target in counterfactual_data.y_levels\n plt = plot(M, counterfactual_data; target=target, title=\"p(y=$(target)|x,θ)\")\n plts = [plts..., plt]\nend\nplot(plts...)","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"(Image: )","category":"page"},{"location":"tutorials/model_catalogue/#Importing-PyTorch-models","page":"Model Catalogue","title":"Importing PyTorch models","text":"","category":"section"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"The package supports generating counterfactuals for any neural network that has been previously defined and trained using PyTorch, regardless of the specific architectural details of the model. 
To generate counterfactuals for a PyTorch model, save the model inside a .pt file and call the following function:","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"model_loaded = TaijaInteroperability.pytorch_model_loader(\n \"$(pwd())/docs/src/tutorials/miscellaneous\",\n \"neural_network_class\",\n \"NeuralNetwork\",\n \"$(pwd())/docs/src/tutorials/miscellaneous/pretrained_model.pt\"\n)","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"The method pytorch_model_loader requires four arguments:","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"The path to the folder with a .py file where the PyTorch model is defined\nThe name of the file where the PyTorch model is defined\nThe name of the class of the PyTorch model\nThe path to the Pickle file that holds the model weights","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"In the above case:","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"The file defining the model is inside $(pwd())/docs/src/tutorials/miscellaneous\nThe name of the .py file holding the model definition is neural_network_class\nThe name of the model class is NeuralNetwork\nThe Pickle file is located at $(pwd())/docs/src/tutorials/miscellaneous/pretrained_model.pt","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"Though the model file and Pickle file are inside the same directory in this tutorial, this does not necessarily have to be the case.","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"The reason why the model file and Pickle file have to be provided separately is that the package expects an already trained PyTorch model as input. It is also possible to define new PyTorch models within the package, but since this is not the expected use of our package, special support is not offered for that. A guide for defining Python and PyTorch classes in Julia through PythonCall.jl can be found here.","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"Once the PyTorch model has been loaded into the package, wrap it inside the PyTorchModel class:","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"model_pytorch = TaijaInteroperability.PyTorchModel(model_loaded, counterfactual_data.likelihood)","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"This model can now be passed into the generators like any other.","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"Please note that the functionality for generating counterfactuals for Python models is only available if your Julia version is 1.8 or above. 
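A quick way to check this is the built-in VERSION constant in the REPL:\n\nVERSION >= v\"1.8\" # should evaluate to true 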
For Julia 1.7 users, we recommend upgrading the version to 1.8 or 1.9 before loading a PyTorch model into the package.","category":"page"},{"location":"tutorials/model_catalogue/#Importing-R-torch-models","page":"Model Catalogue","title":"Importing R torch models","text":"","category":"section"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"warning: Not fully tested\nPlease note that due to the incompatibility between RCall and PythonCall, it is not feasible to test both PyTorch and RTorch implementations within the same pipeline. While the RTorch implementation has been manually tested, we cannot ensure its consistent functionality as it is inherently susceptible to bugs.","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"The CounterfactualExplanations package supports generating counterfactuals for neural networks that have been defined and trained using R torch. Regardless of the specific architectural details of the model, you can easily generate counterfactual explanations by following these steps.","category":"page"},{"location":"tutorials/model_catalogue/#Saving-the-R-torch-model","page":"Model Catalogue","title":"Saving the R torch model","text":"","category":"section"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"First, save your trained R torch model as a .pt file using the torch_save() function provided by the R torch library. This function allows you to serialize the model and save it to a file. For example:","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"torch_save(model, file = \"$(pwd())/docs/src/tutorials/miscellaneous/r_model.pt\")","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"Make sure to specify the correct file path where you want to save the model.","category":"page"},{"location":"tutorials/model_catalogue/#Loading-the-R-torch-model","page":"Model Catalogue","title":"Loading the R torch model","text":"","category":"section"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"To import the R torch model into the CounterfactualExplanations package, use the rtorch_model_loader() function. This function loads the model from the previously saved .pt file. Here is an example of how to load the R torch model:","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"model_loaded = TaijaInteroperability.rtorch_model_loader(\"$(pwd())/docs/src/tutorials/miscellaneous/r_model.pt\")","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"The rtorch_model_loader() function requires only one argument:","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"model_path: The path to the .pt file that contains the trained R torch model.","category":"page"},{"location":"tutorials/model_catalogue/#Wrapping-the-R-torch-model","page":"Model Catalogue","title":"Wrapping the R torch model","text":"","category":"section"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"Once the R torch model has been loaded into the package, wrap it inside the RTorchModel class. 
This step prepares the model to be used by the counterfactual generators. Here is an example:","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"model_R = TaijaInteroperability.RTorchModel(model_loaded, counterfactual_data.likelihood)","category":"page"},{"location":"tutorials/model_catalogue/#Generating-counterfactuals-with-the-R-torch-model","page":"Model Catalogue","title":"Generating counterfactuals with the R torch model","text":"","category":"section"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"Now that the R torch model has been wrapped inside the RTorchModel class, you can pass it into the counterfactual generators as you would with any other model.","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"Please note that RCall is not fully compatible with PythonCall. Therefore, it is advisable not to import both R torch and PyTorch models within the same Julia session. Additionally, it’s worth mentioning that the R torch integration is still untested in the CounterfactualExplanations package.","category":"page"},{"location":"tutorials/model_catalogue/#Tuning-Flux-Models","page":"Model Catalogue","title":"Tuning Flux Models","text":"","category":"section"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"By default, model architectures are very simple. Through optional arguments, users have some control over the neural network architecture and can choose to impose regularization through dropout. Let’s tackle a more challenging dataset: MNIST (LeCun 1998).","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"data = TaijaData.load_mnist(10000)\ncounterfactual_data = DataPreprocessing.CounterfactualData(data...)\ntrain_data, test_data = \n CounterfactualExplanations.DataPreprocessing.train_test_split(counterfactual_data)","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"(Image: )","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"In this case, we will use a Multi-Layer Perceptron (MLP) but we will adjust the model and training hyperparameters. Parameters related to training of Flux.jl models are currently stored in a mutable container:","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"flux_training_params","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"CounterfactualExplanations.FluxModelParams(:logitbinarycrossentropy, :Adam, 100, 1, false)","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"In cases like this one, where model training can be expected to take a few moments, it can be useful to activate verbosity, so let’s set the corresponding field value to true. 
We’ll also impose mini-batch training:","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"flux_training_params.verbose = true\nflux_training_params.batchsize = round(size(train_data.X,2)/10)","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"To account for the fact that this is a slightly more challenging task, we will use an appropriate number of hidden neurons per layer. We will also activate dropout regularization. To scale networks up further, it is also possible to adjust the number of hidden layers, which we will not do here.","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"model_params = (\n n_hidden = 32,\n dropout = true\n)","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"The model_params can be supplied to the familiar API call:","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"M = fit_model(train_data, :MLP; model_params...)","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"CounterfactualExplanations.Models.Model(Chain(Dense(784 => 32, relu), Dropout(0.25, active=false), Dense(32 => 10)), :classification_multi, Chain(Dense(784 => 32, relu), Dropout(0.25, active=false), Dense(32 => 10)), MLP())","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"The model performance on our test set can be evaluated as follows:","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"model_evaluation(M, test_data)","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"1-element Vector{Float64}:\n 0.9185","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"Finally, let’s restore the default training parameters:","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"CounterfactualExplanations.reset!(flux_training_params)","category":"page"},{"location":"tutorials/model_catalogue/#References","page":"Model Catalogue","title":"References","text":"","category":"section"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"Lakshminarayanan, Balaji, Alexander Pritzel, and Charles Blundell. 2017. “Simple and Scalable Predictive Uncertainty Estimation Using Deep Ensembles.” Advances in Neural Information Processing Systems 30.","category":"page"},{"location":"tutorials/model_catalogue/","page":"Model Catalogue","title":"Model Catalogue","text":"LeCun, Yann. 1998. 
“The MNIST Database of Handwritten Digits.” http://yann.lecun.com/exdb/mnist/.","category":"page"},{"location":"tutorials/evaluation/","page":"Evaluating Explanations","title":"Evaluating Explanations","text":"CurrentModule = CounterfactualExplanations ","category":"page"},{"location":"tutorials/evaluation/#evaluation","page":"Evaluating Explanations","title":"Evaluation","text":"","category":"section"},{"location":"tutorials/evaluation/","page":"Evaluating Explanations","title":"Evaluating Explanations","text":"Now that we know how to generate counterfactual explanations in Julia, you may have a few follow-up questions: How do I know if the counterfactual search has been successful? How good is my counterfactual explanation? What does ‘good’ even mean in this context? In this tutorial, we will see how counterfactual explanations can be evaluated with respect to their performance.","category":"page"},{"location":"tutorials/evaluation/#Default-Measures","page":"Evaluating Explanations","title":"Default Measures","text":"","category":"section"},{"location":"tutorials/evaluation/","page":"Evaluating Explanations","title":"Evaluating Explanations","text":"Numerous evaluation measures for counterfactual explanations have been proposed. In what follows, we will cover some of the most important measures.","category":"page"},{"location":"tutorials/evaluation/#Single-Measure,-Single-Counterfactual","page":"Evaluating Explanations","title":"Single Measure, Single Counterfactual","text":"","category":"section"},{"location":"tutorials/evaluation/","page":"Evaluating Explanations","title":"Evaluating Explanations","text":"One of the most important measures is validity, which simply determines whether or not a counterfactual explanation $x^\prime$ is valid in the sense that it yields the target prediction: $M(x^\prime)=t$. We can evaluate the validity of a single counterfactual explanation ce using the Evaluation.evaluate function as follows:","category":"page"},{"location":"tutorials/evaluation/","page":"Evaluating Explanations","title":"Evaluating Explanations","text":"using CounterfactualExplanations.Evaluation: evaluate, validity\nevaluate(ce; measure=validity)","category":"page"},{"location":"tutorials/evaluation/","page":"Evaluating Explanations","title":"Evaluating Explanations","text":"1-element Vector{Vector{Float64}}:\n [1.0]","category":"page"},{"location":"tutorials/evaluation/","page":"Evaluating Explanations","title":"Evaluating Explanations","text":"For a single counterfactual explanation, this evaluation measure can only take two values: it is either equal to 1, if the explanation is valid, or 0 otherwise. Another important measure is distance, which relates to the distance between the factual $x$ and the counterfactual $x^\prime$. 
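Conceptually, such a measure boils down to a norm of the difference between the two feature vectors, along the lines of this sketch (illustrative only; x and x_prime stand in for the two vectors rather than for the package’s accessors):\n\nusing LinearAlgebra\n\n# Lp distance between factual and counterfactual (p=2 gives the Euclidean case):\ndist(x, x_prime; p=2) = norm(x_prime .- x, p) 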
In the context of Algorithmic Recourse, higher distances are typically associated with higher costs to individuals seeking recourse.","category":"page"},{"location":"tutorials/evaluation/","page":"Evaluating Explanations","title":"Evaluating Explanations","text":"using CounterfactualExplanations.Objectives: distance\nevaluate(ce; measure=distance)","category":"page"},{"location":"tutorials/evaluation/","page":"Evaluating Explanations","title":"Evaluating Explanations","text":"1-element Vector{Vector{Float32}}:\n [3.2160978]","category":"page"},{"location":"tutorials/evaluation/","page":"Evaluating Explanations","title":"Evaluating Explanations","text":"By default, distance computes the L2 (Euclidean) distance.","category":"page"},{"location":"tutorials/evaluation/#Multiple-Measures,-Single-Counterfactual","page":"Evaluating Explanations","title":"Multiple Measures, Single Counterfactual","text":"","category":"section"},{"location":"tutorials/evaluation/","page":"Evaluating Explanations","title":"Evaluating Explanations","text":"You might be interested in computing not just the L2 distance, but various LP norms. This can be done by supplying a vector of functions to the measure key argument. For convenience, all default distance measures have already been collected in a vector:","category":"page"},{"location":"tutorials/evaluation/","page":"Evaluating Explanations","title":"Evaluating Explanations","text":"using CounterfactualExplanations.Evaluation: distance_measures\ndistance_measures","category":"page"},{"location":"tutorials/evaluation/","page":"Evaluating Explanations","title":"Evaluating Explanations","text":"4-element Vector{Function}:\n distance_l0 (generic function with 1 method)\n distance_l1 (generic function with 1 method)\n distance_l2 (generic function with 1 method)\n distance_linf (generic function with 1 method)","category":"page"},{"location":"tutorials/evaluation/","page":"Evaluating Explanations","title":"Evaluating Explanations","text":"We can use this vector of evaluation measures as follows:","category":"page"},{"location":"tutorials/evaluation/","page":"Evaluating Explanations","title":"Evaluating Explanations","text":"evaluate(ce; measure=distance_measures)","category":"page"},{"location":"tutorials/evaluation/","page":"Evaluating Explanations","title":"Evaluating Explanations","text":"4-element Vector{Vector{Float32}}:\n [2.0]\n [3.2160978]\n [2.782144]\n [2.7413368]","category":"page"},{"location":"tutorials/evaluation/","page":"Evaluating Explanations","title":"Evaluating Explanations","text":"If no measure is specified, the evaluate method will return all default measures,","category":"page"},{"location":"tutorials/evaluation/","page":"Evaluating Explanations","title":"Evaluating Explanations","text":"evaluate(ce)","category":"page"},{"location":"tutorials/evaluation/","page":"Evaluating Explanations","title":"Evaluating Explanations","text":"3-element Vector{Vector}:\n [1.0]\n Float32[3.2160978]\n [0.0]","category":"page"},{"location":"tutorials/evaluation/","page":"Evaluating Explanations","title":"Evaluating Explanations","text":"which include:","category":"page"},{"location":"tutorials/evaluation/","page":"Evaluating Explanations","title":"Evaluating Explanations","text":"CounterfactualExplanations.Evaluation.default_measures","category":"page"},{"location":"tutorials/evaluation/","page":"Evaluating Explanations","title":"Evaluating Explanations","text":"3-element Vector{Function}:\n validity (generic function with 1 method)\n distance (generic function with 
1 method)\n redundancy (generic function with 1 method)","category":"page"},{"location":"tutorials/evaluation/#Multiple-Measures-and-Counterfactuals","page":"Evaluating Explanations","title":"Multiple Measures and Counterfactuals","text":"","category":"section"},{"location":"tutorials/evaluation/","page":"Evaluating Explanations","title":"Evaluating Explanations","text":"We can also evaluate multiple counterfactual explanations at once:","category":"page"},{"location":"tutorials/evaluation/","page":"Evaluating Explanations","title":"Evaluating Explanations","text":"generator = DiCEGenerator()\nces = generate_counterfactual(x, target, counterfactual_data, M, generator; num_counterfactuals=5)\nevaluate(ces)","category":"page"},{"location":"tutorials/evaluation/","page":"Evaluating Explanations","title":"Evaluating Explanations","text":"3-element Vector{Vector}:\n [1.0]\n Float32[3.2186122]\n [[0.0, 0.0, 0.0, 0.0, 0.0]]","category":"page"},{"location":"tutorials/evaluation/","page":"Evaluating Explanations","title":"Evaluating Explanations","text":"By default, each evaluation measure is aggregated across all counterfactual explanations. To return individual measures for each counterfactual explanation you can specify report_each=true","category":"page"},{"location":"tutorials/evaluation/","page":"Evaluating Explanations","title":"Evaluating Explanations","text":"evaluate(ces; report_each=true)","category":"page"},{"location":"tutorials/evaluation/","page":"Evaluating Explanations","title":"Evaluating Explanations","text":"3-element Vector{Vector}:\n BitVector[[1, 1, 1, 1, 1]]\n Vector{Float32}[[3.2230358, 3.1825113, 3.2527277, 3.2267833, 3.208004]]\n [[0.0, 0.0, 0.0, 0.0, 0.0]]","category":"page"},{"location":"tutorials/evaluation/#Custom-Measures","page":"Evaluating Explanations","title":"Custom Measures","text":"","category":"section"},{"location":"tutorials/evaluation/","page":"Evaluating Explanations","title":"Evaluating Explanations","text":"A measure is just a method that takes a CounterfactualExplanation as its only positional argument and agg::Function as a key argument specifying how measures should be aggregated across counterfactuals. Defining custom measures is therefore straightforward. For example, we could define a measure to compute the inverse target probability as follows:","category":"page"},{"location":"tutorials/evaluation/","page":"Evaluating Explanations","title":"Evaluating Explanations","text":"my_measure(ce::CounterfactualExplanation; agg=mean) = agg(1 .- CounterfactualExplanations.target_probs(ce))\nevaluate(ce; measure=my_measure)","category":"page"},{"location":"tutorials/evaluation/","page":"Evaluating Explanations","title":"Evaluating Explanations","text":"1-element Vector{Vector{Float32}}:\n [0.40882105]","category":"page"},{"location":"tutorials/evaluation/#Tidy-Output","page":"Evaluating Explanations","title":"Tidy Output","text":"","category":"section"},{"location":"tutorials/evaluation/","page":"Evaluating Explanations","title":"Evaluating Explanations","text":"By default, evaluate returns vectors of evaluation measures. 
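","category":"page"},{"location":"tutorials/evaluation/","page":"Evaluating Explanations","title":"Evaluating Explanations","text":"Because the output is a plain vector ordered as validity, distance, redundancy, it can also be unpacked directly. The following is a minimal usage sketch; the variable names are ours:","category":"page"},{"location":"tutorials/evaluation/","page":"Evaluating Explanations","title":"Evaluating Explanations","text":"# Unpack the three default measures for a single counterfactual:\nvalidity_m, distance_m, redundancy_m = evaluate(ce)","category":"page"},{"location":"tutorials/evaluation/","page":"Evaluating Explanations","title":"Evaluating Explanations","text":"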
The optional key argument output_format::Symbol can be used to post-process the output in two ways: firstly, to return the output as a dictionary, specify output_format=:Dict:","category":"page"},{"location":"tutorials/evaluation/","page":"Evaluating Explanations","title":"Evaluating Explanations","text":"evaluate(ces; output_format=:Dict, report_each=true)","category":"page"},{"location":"tutorials/evaluation/","page":"Evaluating Explanations","title":"Evaluating Explanations","text":"Dict{Symbol, Vector} with 3 entries:\n :validity => BitVector[[1, 1, 1, 1, 1]]\n :redundancy => [[0.0, 0.0, 0.0, 0.0, 0.0]]\n :distance => Vector{Float32}[[3.22304, 3.18251, 3.25273, 3.22678, 3.208]]","category":"page"},{"location":"tutorials/evaluation/","page":"Evaluating Explanations","title":"Evaluating Explanations","text":"Secondly, to return the output as a data frame, specify output_format=:DataFrame.","category":"page"},{"location":"tutorials/evaluation/","page":"Evaluating Explanations","title":"Evaluating Explanations","text":"evaluate(ces; output_format=:DataFrame, report_each=true)","category":"page"},{"location":"tutorials/evaluation/","page":"Evaluating Explanations","title":"Evaluating Explanations","text":"By default, data frames are pivoted to long format using individual counterfactuals as the id column. This behaviour can be suppressed by specifying pivot_longer=false.","category":"page"},{"location":"tutorials/evaluation/#Multiple-Counterfactual-Explanations","page":"Evaluating Explanations","title":"Multiple Counterfactual Explanations","text":"","category":"section"},{"location":"tutorials/evaluation/","page":"Evaluating Explanations","title":"Evaluating Explanations","text":"It may be necessary to generate counterfactual explanations for multiple individuals.","category":"page"},{"location":"tutorials/evaluation/","page":"Evaluating Explanations","title":"Evaluating Explanations","text":"Below, for example, we first select multiple samples (5) from the non-target class and then generate counterfactual explanations for all of them.","category":"page"},{"location":"tutorials/evaluation/","page":"Evaluating Explanations","title":"Evaluating Explanations","text":"This can be done using broadcasting:","category":"page"},{"location":"tutorials/evaluation/","page":"Evaluating Explanations","title":"Evaluating Explanations","text":"# Factual and target:\nn_individuals = 5\nids = rand(findall(predict_label(M, counterfactual_data) .== factual), n_individuals)\nxs = select_factual(counterfactual_data, ids)\nces = generate_counterfactual(xs, target, counterfactual_data, M, generator; num_counterfactuals=5)\nevaluation = evaluate.(ces)","category":"page"},{"location":"tutorials/evaluation/","page":"Evaluating Explanations","title":"Evaluating Explanations","text":"5-element Vector{Vector{Vector}}:\n [[0.8], Float32[3.2487042], [[0.0, 0.0, 0.0, 0.0, 0.0]]]\n [[0.8], Float32[4.185718], [[0.0, 0.0, 0.0, 0.0, 0.0]]]\n [[1.0], Float32[4.0083566], [[0.0, 0.0, 0.0, 0.0, 0.0]]]\n [[1.0], Float32[2.9578466], [[0.0, 0.0, 0.0, 0.0, 0.0]]]\n [[0.8], Float32[2.6089585], [[0.0, 0.0, 0.0, 0.0, 0.0]]]","category":"page"},{"location":"tutorials/evaluation/","page":"Evaluating Explanations","title":"Evaluating 
Explanations","text":"This leads us to our next topic: Performance Benchmarks.","category":"page"},{"location":"extensions/","page":"Overview","title":"Overview","text":"CurrentModule = CounterfactualExplanations","category":"page"},{"location":"extensions/#Extensions","page":"Overview","title":"⛓️ Extensions","text":"","category":"section"},{"location":"extensions/","page":"Overview","title":"Overview","text":"In this section, you will find information about package extensions of the CounterfactualExplanations package. Extensions are a relatively new feature of Julia that allows users to conditionally load code based on the presence of other packages. This is useful for creating packages that extend the functionality of other packages, without requiring the user to install the package being extended.","category":"page"},{"location":"explanation/evaluation/faithfulness/","page":"Plausibility and Faithfulness","title":"Plausibility and Faithfulness","text":"CurrentModule = CounterfactualExplanations ","category":"page"},{"location":"explanation/evaluation/faithfulness/#Faithfulness-and-Plausibility","page":"Plausibility and Faithfulness","title":"Faithfulness and Plausibility","text":"","category":"section"},{"location":"explanation/evaluation/faithfulness/","page":"Plausibility and Faithfulness","title":"Plausibility and Faithfulness","text":"warning: Warning\nThe implementation of our faithfulness and plausibility metrics is based on our AAAI 2024 paper. There is no consensus on the best way to measure faithfulness and plausibility and we are still conducting research on this. This tutorial is therefore also a work in progress. Current limitations are discussed below.","category":"page"},{"location":"explanation/evaluation/faithfulness/","page":"Plausibility and Faithfulness","title":"Plausibility and Faithfulness","text":"We begin by loading some dependencies:","category":"page"},{"location":"explanation/evaluation/faithfulness/","page":"Plausibility and Faithfulness","title":"Plausibility and Faithfulness","text":"# Packages\nusing CounterfactualExplanations\nusing CounterfactualExplanations.Evaluation\nusing CounterfactualExplanations.Convergence\nusing CounterfactualExplanations.Models\nusing Flux\nusing JointEnergyModels\nusing MLJFlux\nusing EnergySamplers: PMC, SGLD, ImproperSGLD\nusing TaijaData","category":"page"},{"location":"explanation/evaluation/faithfulness/#Sample-Based-Metrics","page":"Plausibility and Faithfulness","title":"Sample-Based Metrics","text":"","category":"section"},{"location":"explanation/evaluation/faithfulness/","page":"Plausibility and Faithfulness","title":"Plausibility and Faithfulness","text":"In Altmeyer et al. (2024), we defined two sample-based metrics for plausibility and faithfulness. The metrics rely on the premise of comparing the counterfactual to samples drawn from some target distribution. To assess plausibility, we compare the counterfactual to samples drawn from the training data that fall into the target class. To assess faithfulness, we compare the counterfactual to samples drawn from the model posterior conditional through Stochastic Gradient Langevin Dynamics (SGLD). For details specific to posterior sampling, please consult our documentation Taija’s EnergySamplers.jl. For broader details on this topic, please consult Altmeyer et al. 
(2024).","category":"page"},{"location":"explanation/evaluation/faithfulness/#Simple-Example","page":"Plausibility and Faithfulness","title":"Simple Example","text":"","category":"section"},{"location":"explanation/evaluation/faithfulness/","page":"Plausibility and Faithfulness","title":"Plausibility and Faithfulness","text":"Below we generate a simple synthetic dataset with two output classes, both Gaussian clusters with different centers. We then train a joint energy-based model (JEM) using Taija’s JointEnergyModels.jl package to both discriminate between output classes and generate inputs.","category":"page"},{"location":"explanation/evaluation/faithfulness/","page":"Plausibility and Faithfulness","title":"Plausibility and Faithfulness","text":"n_obs = 1000\nX, y = TaijaData.load_blobs(n_obs; cluster_std=0.1, center_box=(-1. => 1.))\ndata = CounterfactualData(X, y)\n\nn_hidden = 16\n_batch_size = Int(round(n_obs/10))\nepochs = 100\nM = Models.fit_model(\n data,:JEM;\n builder=MLJFlux.MLP(\n hidden=(n_hidden, n_hidden, n_hidden), \n σ=Flux.swish\n ),\n batch_size=_batch_size,\n finaliser=Flux.softmax,\n loss=Flux.Losses.crossentropy,\n jem_training_params=(\n α=[1.0,1.0,1e-1],\n verbosity=10,\n ),\n epochs=epochs,\n sampling_steps=30,\n)","category":"page"},{"location":"explanation/evaluation/faithfulness/","page":"Plausibility and Faithfulness","title":"Plausibility and Faithfulness","text":"Next, we generate counterfactuals for a randomly drawn sampler using two different generators: firstly, the GenericGenerator and, secondly, the ECCoGenerator. The latter was proposed in Altmeyer et al. (2024) to generate faithful counterfactuals by constraining their energy with respect to the model. In both cases, we generate multiple counterfactuals for the same factual. Each time the search is initialized by adding a small random perturbation to the features following (Slack et al. 2021). For both generators, we then compute the average plausibility and faithfulness of the generated counterfactuals as defined above and plot the counterfactual paths in the figure below. The estimated values for the plausibility and faithfulness are shown in the plot titles and indicate that the ECCoGenerator performs better in both regards.","category":"page"},{"location":"explanation/evaluation/faithfulness/","page":"Plausibility and Faithfulness","title":"Plausibility and Faithfulness","text":"To better understand why the ECCoGenerator generates more faithful counterfactuals, we have also plotted samples drawn from the model posterior p_theta(Xy=1) in green: these largely overlap with training data in the target distribution, which indicates that the JEM has succeeded on both tasks—discriminating and generating—for this simple data set. The energy constraint of the ECCoGenerator ensures that counterfactuals remain anchored by the learned model posterior conditional distribution. As demonstrated in Altmeyer et al. (2024), faithful counterfactuals will also be plausible if the underlying model has learned plausible explanations for the data as in this case. 
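","category":"page"},{"location":"explanation/evaluation/faithfulness/","page":"Plausibility and Faithfulness","title":"Plausibility and Faithfulness","text":"Conceptually, both sample-based metrics boil down to an average distance between the counterfactual and a set of reference samples: training samples in the target class for plausibility, and SGLD samples from the model posterior for faithfulness. The following minimal sketch illustrates that idea only; it is not the package’s exact implementation, and the function name is ours:","category":"page"},{"location":"explanation/evaluation/faithfulness/","page":"Plausibility and Faithfulness","title":"Plausibility and Faithfulness","text":"using LinearAlgebra, Statistics\n\n# Average Euclidean distance between a counterfactual x′ (a vector) and\n# reference samples stored as columns of X̂ (lower values are better):\navg_dist(x′, X̂) = mean(norm(x′ .- X̂[:, j]) for j in axes(X̂, 2))","category":"page"},{"location":"explanation/evaluation/faithfulness/","page":"Plausibility and Faithfulness","title":"Plausibility and Faithfulness","text":"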
For the GenericGenerator, counterfactuals end up outside of that target distribution, because the distance penalty pulls counterfactuals back to their original starting values.","category":"page"},{"location":"explanation/evaluation/faithfulness/","page":"Plausibility and Faithfulness","title":"Plausibility and Faithfulness","text":"using Measures\n\n# Select a factual instance:\ntarget = 1\nfactual = 2\nchosen = rand(findall(predict_label(M, data) .== factual))\nx = select_factual(data, chosen)\n\n# Search parameters:\nopt = Adam(0.005)\nconv = GeneratorConditionsConvergence()\n\n# Generic Generator:\nλ₁ = 0.1\ngenerator = GenericGenerator(opt=opt, λ=λ₁)\nce = generate_counterfactual(x, target, data, M, generator; convergence=conv, num_counterfactuals=5)\nfaith = Evaluation.faithfulness(ce)\nplaus = Evaluation.plausibility(ce)\np1 = plot(ce; zoom=-1, target=target)\nX̂ = ce.search[:energy_sampler][ce.target].posterior\ntitle = \"Generic Generator\\nplaus.: $(round(plaus, digits=2)); faith.: $(round(faith, digits=2))\"\nscatter!(X̂[1, :], X̂[2, :]; label=\"X|y=$target\", shape=:star5, ms=10, title=title, color=3, alpha=0.1)\nscatter!(ce.x′[1,:], ce.x′[2,:]; label=\"Counterfactual\", shape=:star1, ms=20, color=4)\n\n# Search:\nλ₂ = 1.0\ngenerator = ECCoGenerator(opt=opt; λ=[λ₁, λ₂])\nce = generate_counterfactual(x, target, data, M, generator; convergence=conv, num_counterfactuals=5)\nfaith = Evaluation.faithfulness(ce)\nplaus = Evaluation.plausibility(ce)\np2 = plot(ce; zoom=-1, target=target)\nX̂ = ce.search[:energy_sampler][ce.target].posterior\ntitle = \"ECCo Generator\\nplaus.: $(round(plaus, digits=2)); faith.: $(round(faith, digits=2))\"\nscatter!(X̂[1, :], X̂[2, :]; label=\"X|y=$target\", shape=:star5, ms=10, title=title, color=3, alpha=0.1)\nscatter!(ce.x′[1,:], ce.x′[2,:]; label=\"Counterfactual\", shape=:star1, ms=20, color=4)\n\nplot(p1, p2; size=(1000, 400), topmargin=5mm)","category":"page"},{"location":"explanation/evaluation/faithfulness/","page":"Plausibility and Faithfulness","title":"Plausibility and Faithfulness","text":"(Image: )","category":"page"},{"location":"explanation/evaluation/faithfulness/#Current-Limitations","page":"Plausibility and Faithfulness","title":"Current Limitations","text":"","category":"section"},{"location":"explanation/evaluation/faithfulness/","page":"Plausibility and Faithfulness","title":"Plausibility and Faithfulness","text":"But things do not always turn out this well. Our next example demonstrates an important shortcoming of the framework proposed in Altmeyer et al. (2024). Instead of training a JEM, we now train a simpler, purely discriminative model:","category":"page"},{"location":"explanation/evaluation/faithfulness/","page":"Plausibility and Faithfulness","title":"Plausibility and Faithfulness","text":"n_obs = 1000\nX, y = TaijaData.load_blobs(n_obs; cluster_std=0.1, center_box=(-1. => 1.))\ndata = CounterfactualData(X, y)\nflux_training_params.n_epochs = 1\nM = Models.fit_model(data,:DeepEnsemble)\nCounterfactualExplanations.reset!(flux_training_params)","category":"page"},{"location":"explanation/evaluation/faithfulness/","page":"Plausibility and Faithfulness","title":"Plausibility and Faithfulness","text":"Next, we repeat the same process above for generating counterfactuals. This time we can observe in the figure below that the GenericGenerator produces much more plausible though apparently less faithful counterfactuals than the ECCoGenerator. 
Looking at the top row only, it is not obvious why the counterfactual produced by the GenericGenerator should be considered less faithful to the model: conditional samples drawn from p_theta(X|y=1) through SGLD are scattered all across the target domain on the expected side of the decision boundary. When zooming out (bottom row), it becomes clear that the learned posterior conditional is far away from the observed training data in the target class. Our definition and measure of faithfulness are in that sense very strict, quite possibly too strict in some cases.","category":"page"},{"location":"explanation/evaluation/faithfulness/","page":"Plausibility and Faithfulness","title":"Plausibility and Faithfulness","text":"# Select a factual instance:\ntarget = 2\nfactual = 1\nchosen = rand(findall(predict_label(M, data) .== factual))\nx = select_factual(data, chosen)\n\n# Search parameters:\nopt = Adam(0.1)\nconv = GeneratorConditionsConvergence()\n\n# Generic Generator:\ngenerator = GenericGenerator(opt=opt)\nce = generate_counterfactual(x, target, data, M, generator; convergence=conv, initialization=:identity)\nplaus = Evaluation.plausibility(ce)\nfaith = Evaluation.faithfulness(ce)\nX̂ = ce.search[:energy_sampler][ce.target].posterior\ntitle = \"Generic Generator\\nplaus.: $(round(plaus, digits=2)); faith.: $(round(faith, digits=2))\"\np1 = plot(ce, zoom=-1, target=target)\nscatter!(X̂[1, :], X̂[2, :]; label=\"X|y=$target\", shape=:star5, ms=10, title=title, color=3, alpha=0.2)\nscatter!(ce.x′[1,:], ce.x′[2,:]; label=\"Counterfactual\", shape=:star1, ms=20, color=4)\n_lim = maximum(abs.(X̂))\nxlims, ylims = (-_lim, _lim), (-_lim, _lim)\np3 = plot(ce; xlims=xlims, ylims=ylims, target=target)\nscatter!(X̂[1, :], X̂[2, :]; label=\"X|y=$target\", shape=:star5, ms=10, title=title, color=3, alpha=0.2)\nscatter!(ce.x′[1,:], ce.x′[2,:]; label=\"Counterfactual\", shape=:star1, ms=20, color=4)\n\n# Search:\ngenerator = ECCoGenerator(opt=opt; λ=[0.1, 1.0])\nce = generate_counterfactual(x, target, data, M, generator; convergence=conv, initialization=:identity)\nplaus = Evaluation.plausibility(ce)\nfaith = Evaluation.faithfulness(ce)\nX̂ = ce.search[:energy_sampler][ce.target].posterior\ntitle = \"ECCo Generator\\nplaus.: $(round(plaus, digits=2)); faith.: $(round(faith, digits=2))\"\np2 = plot(ce, zoom=-1, target=target)\nscatter!(X̂[1, :], X̂[2, :]; label=\"X|y=$target\", shape=:star5, ms=10, title=title, color=3, alpha=0.2)\nscatter!(ce.x′[1,:], ce.x′[2,:]; label=\"Counterfactual\", shape=:star1, ms=20, color=4)\n_lim = maximum(abs.(X̂))\nxlims, ylims = (-_lim, _lim), (-_lim, _lim)\np4 = plot(ce; xlims=xlims, ylims=ylims, target=target)\nscatter!(X̂[1, :], X̂[2, :]; label=\"X|y=$target\", shape=:star5, ms=10, title=title, color=3, alpha=0.2)\nscatter!(ce.x′[1,:], ce.x′[2,:]; label=\"Counterfactual\", shape=:star1, ms=20, color=4)\n\nplot(p1, p2, p3, p4; size=(1000, 800), topmargin=5mm)","category":"page"},{"location":"explanation/evaluation/faithfulness/","page":"Plausibility and Faithfulness","title":"Plausibility and Faithfulness","text":"(Image: )","category":"page"},{"location":"explanation/evaluation/faithfulness/","page":"Plausibility and Faithfulness","title":"Plausibility and Faithfulness","text":"Looking at a different domain like images demonstrates another limitation of the sample-based metrics. Below we generate counterfactuals for turning an 8 into a 3 using our two generators from above for a simple MNIST (LeCun 1998) classifier. 
Looking at the figure below, the ECCoGenerator arguably generates a more plausible counterfactual in this case. Unfortunately, the sample-based plausibility metric does not reflect this.","category":"page"},{"location":"explanation/evaluation/faithfulness/","page":"Plausibility and Faithfulness","title":"Plausibility and Faithfulness","text":"using MLDatasets: convert2image, MNIST\n\n_nrow = 3\n\nRandom.seed!(42)\nX, y = TaijaData.load_mnist()\ndata = CounterfactualData(X, y)\n\nusing CounterfactualExplanations.Models: load_mnist_model\nM = load_mnist_model(MLP())\n\n# Select a factual instance:\ntarget = 3\nfactual = 8\nchosen = rand(findall(predict_label(M, data) .== factual))\nx = select_factual(data, chosen)\n\n# Search parameters:\nopt = Adam(0.1)\nconv = GeneratorConditionsConvergence()\nλ₁ = 0.0\nλ₂ = 0.5\n\n# Factual:\nfactual = convert2image(MNIST, reshape(x, 28, 28))\np1 = plot(factual; title=\"\\nFactual\", axis=([], false))\n\n# Generic Generator:\ngenerator = GenericGenerator(opt=opt; λ=λ₁)\nce = generate_counterfactual(x, target, data, M, generator; convergence=conv, initialization=:identity)\nfaith = Evaluation.faithfulness(ce; nsamples=_nrow^2, niter_final=10000)\nplaus = Evaluation.plausibility(ce)\nimg = convert2image(MNIST, reshape(ce.x′, 28, 28))\ntitle = \"Generic Generator\\nplaus.: $(round(plaus, digits=2))\\nfaith.: $(round(faith, digits=2))\"\np2 = plot(img, title=title, axis=([], false))\n\n# Search:\ngenerator = ECCoGenerator(opt=opt; λ=[λ₁, λ₂])\nce = generate_counterfactual(x, target, data, M, generator; convergence=conv, initialization=:identity)\nfaith = Evaluation.faithfulness(ce; nsamples=_nrow^2, niter_final=10000)\nplaus = Evaluation.plausibility(ce)\nimg = convert2image(MNIST, reshape(ce.x′, 28, 28))\ntitle = \"ECCo Generator\\nplaus.: $(round(plaus, digits=2))\\nfaith.: $(round(faith, digits=2))\"\np3 = plot(img, title=title, axis=([], false))\n\nplot(p1, p2, p3; size=(600, 200), layout=(1, 3), topmargin=15mm)","category":"page"},{"location":"explanation/evaluation/faithfulness/","page":"Plausibility and Faithfulness","title":"Plausibility and Faithfulness","text":"(Image: )","category":"page"},{"location":"explanation/evaluation/faithfulness/#References","page":"Plausibility and Faithfulness","title":"References","text":"","category":"section"},{"location":"explanation/evaluation/faithfulness/","page":"Plausibility and Faithfulness","title":"Plausibility and Faithfulness","text":"Altmeyer, Patrick, Mojtaba Farmanbar, Arie van Deursen, and Cynthia CS Liem. 2024. “Faithful Model Explanations Through Energy-Constrained Conformal Counterfactuals.” In Proceedings of the AAAI Conference on Artificial Intelligence, 38 (10): 10829–37.","category":"page"},{"location":"explanation/evaluation/faithfulness/","page":"Plausibility and Faithfulness","title":"Plausibility and Faithfulness","text":"LeCun, Yann. 1998. “The MNIST Database of Handwritten Digits.” http://yann.lecun.com/exdb/mnist/.","category":"page"},{"location":"explanation/evaluation/faithfulness/","page":"Plausibility and Faithfulness","title":"Plausibility and Faithfulness","text":"Slack, Dylan, Anna Hilgard, Himabindu Lakkaraju, and Sameer Singh. 2021. 
“Counterfactual Explanations Can Be Manipulated.” Advances in Neural Information Processing Systems 34.","category":"page"},{"location":"explanation/generators/generic/","page":"Generic","title":"Generic","text":"CurrentModule = CounterfactualExplanations ","category":"page"},{"location":"explanation/generators/generic/#GenericGenerator","page":"Generic","title":"GenericGenerator","text":"","category":"section"},{"location":"explanation/generators/generic/","page":"Generic","title":"Generic","text":"We use the term generic to refer to the basic counterfactual generator proposed by Wachter, Mittelstadt, and Russell (2017) with L1-norm regularization. There is also a variant of this generator that uses the distance metric proposed in Wachter, Mittelstadt, and Russell (2017), which we call WachterGenerator.","category":"page"},{"location":"explanation/generators/generic/#Description","page":"Generic","title":"Description","text":"","category":"section"},{"location":"explanation/generators/generic/","page":"Generic","title":"Generic","text":"As the term indicates, this approach is simple: it forms the baseline approach for gradient-based counterfactual generators. Wachter, Mittelstadt, and Russell (2017) were among the first to realise that","category":"page"},{"location":"explanation/generators/generic/","page":"Generic","title":"Generic","text":"[…] explanations can, in principle, be offered without opening the “black box.”— Wachter, Mittelstadt, and Russell (2017)","category":"page"},{"location":"explanation/generators/generic/","page":"Generic","title":"Generic","text":"Gradient descent is performed directly in the feature space. Concerning the cost heuristic, the authors choose to penalize the distance of counterfactuals from the factual value. This is based on the intuitive notion that larger feature perturbations require greater effort. Formally, the search thus minimises a loss of the form yloss(M(x′), y⁺) + λ dist(x′, x), where the first term pushes the model output towards the target and the second term penalizes the distance from the factual x.","category":"page"},{"location":"explanation/generators/generic/#Usage","page":"Generic","title":"Usage","text":"","category":"section"},{"location":"explanation/generators/generic/","page":"Generic","title":"Generic","text":"The approach can be used in our package as follows:","category":"page"},{"location":"explanation/generators/generic/","page":"Generic","title":"Generic","text":"generator = GenericGenerator()\nce = generate_counterfactual(x, target, counterfactual_data, M, generator)\nplot(ce)","category":"page"},{"location":"explanation/generators/generic/","page":"Generic","title":"Generic","text":"(Image: )","category":"page"},{"location":"explanation/generators/generic/#References","page":"Generic","title":"References","text":"","category":"section"},{"location":"explanation/generators/generic/","page":"Generic","title":"Generic","text":"Wachter, Sandra, Brent Mittelstadt, and Chris Russell. 2017. “Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR.” Harv. JL & Tech. 31: 841. https://doi.org/10.2139/ssrn.3063289.","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"CurrentModule = CounterfactualExplanations ","category":"page"},{"location":"tutorials/benchmarking/#Performance-Benchmarks","page":"Benchmarking Explanations","title":"Performance Benchmarks","text":"","category":"section"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"In the previous tutorial, we have seen how counterfactual explanations can be evaluated. 
An important follow-up task is to compare the performance of different counterfactual generators. Researchers can use benchmarks to test new ideas they want to implement. Practitioners can find the right counterfactual generator for their specific use case through benchmarks. In this tutorial, we will see how to run benchmarks for counterfactual generators.","category":"page"},{"location":"tutorials/benchmarking/#Post-Hoc-Benchmarking","page":"Benchmarking Explanations","title":"Post Hoc Benchmarking","text":"","category":"section"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"We begin by continuing the discussion from the previous tutorial: suppose you have generated multiple counterfactual explanations for multiple individuals, like below:","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"# Factual and target:\nn_individuals = 5\nids = rand(findall(predict_label(M, counterfactual_data) .== factual), n_individuals)\nxs = select_factual(counterfactual_data, ids)\nces = generate_counterfactual(xs, target, counterfactual_data, M, generator; num_counterfactuals=5)","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"You may be interested in comparing the outcomes across individuals. To benchmark the various counterfactual explanations using default evaluation measures, you can simply proceed as follows:","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"bmk = benchmark(ces)","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"Under the hood, the benchmark(counterfactual_explanations::Vector{CounterfactualExplanation}) method uses CounterfactualExplanations.Evaluation.evaluate(ce::CounterfactualExplanation) to generate a Benchmark object, which contains the evaluation in its most granular form as a DataFrame.","category":"page"},{"location":"tutorials/benchmarking/#Working-with-Benchmarks","page":"Benchmarking Explanations","title":"Working with Benchmarks","text":"","category":"section"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"For convenience, the DataFrame containing the evaluation can be returned by simply calling the Benchmark object. 
By default, the evaluation measures are aggregated across counterfactuals for each sample id (in line with the default behaviour of evaluate).","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"bmk()","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"15×7 DataFrame\n Row │ sample variable value generator ⋯\n │ Base.UUID String Float64 Symbol ⋯\n─────┼──────────────────────────────────────────────────────────────────────────\n 1 │ 239104d0-f59f-11ee-3d0c-d1db071927ff distance 3.17243 GradientBase ⋯\n 2 │ 239104d0-f59f-11ee-3d0c-d1db071927ff redundancy 0.0 GradientBase\n 3 │ 239104d0-f59f-11ee-3d0c-d1db071927ff validity 1.0 GradientBase\n 4 │ 2398b3e2-f59f-11ee-3323-13d53fb7e75b distance 3.07148 GradientBase\n 5 │ 2398b3e2-f59f-11ee-3323-13d53fb7e75b redundancy 0.0 GradientBase ⋯\n 6 │ 2398b3e2-f59f-11ee-3323-13d53fb7e75b validity 1.0 GradientBase\n 7 │ 2398b916-f59f-11ee-3f13-bd00858a39af distance 3.62159 GradientBase\n 8 │ 2398b916-f59f-11ee-3f13-bd00858a39af redundancy 0.0 GradientBase\n 9 │ 2398b916-f59f-11ee-3f13-bd00858a39af validity 1.0 GradientBase ⋯\n 10 │ 2398bce8-f59f-11ee-37c1-ef7c6de27b6b distance 2.62783 GradientBase\n 11 │ 2398bce8-f59f-11ee-37c1-ef7c6de27b6b redundancy 0.0 GradientBase\n 12 │ 2398bce8-f59f-11ee-37c1-ef7c6de27b6b validity 1.0 GradientBase\n 13 │ 2398c08a-f59f-11ee-175b-81c155750752 distance 2.91985 GradientBase ⋯\n 14 │ 2398c08a-f59f-11ee-175b-81c155750752 redundancy 0.0 GradientBase\n 15 │ 2398c08a-f59f-11ee-175b-81c155750752 validity 1.0 GradientBase\n 4 columns omitted","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"To retrieve the granular dataset, simply do:","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"bmk(agg=nothing)","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"75×8 DataFrame\n Row │ sample num_counterfactual variable v ⋯\n │ Base.UUID Int64 String F ⋯\n─────┼──────────────────────────────────────────────────────────────────────────\n 1 │ 239104d0-f59f-11ee-3d0c-d1db071927ff 1 distance 3 ⋯\n 2 │ 239104d0-f59f-11ee-3d0c-d1db071927ff 2 distance 3\n 3 │ 239104d0-f59f-11ee-3d0c-d1db071927ff 3 distance 3\n 4 │ 239104d0-f59f-11ee-3d0c-d1db071927ff 4 distance 3\n 5 │ 239104d0-f59f-11ee-3d0c-d1db071927ff 5 distance 3 ⋯\n 6 │ 239104d0-f59f-11ee-3d0c-d1db071927ff 1 redundancy 0\n 7 │ 239104d0-f59f-11ee-3d0c-d1db071927ff 2 redundancy 0\n 8 │ 239104d0-f59f-11ee-3d0c-d1db071927ff 3 redundancy 0\n 9 │ 239104d0-f59f-11ee-3d0c-d1db071927ff 4 redundancy 0 ⋯\n 10 │ 239104d0-f59f-11ee-3d0c-d1db071927ff 5 redundancy 0\n 11 │ 239104d0-f59f-11ee-3d0c-d1db071927ff 1 validity 1\n ⋮ │ ⋮ ⋮ ⋮ ⋱\n 66 │ 2398c08a-f59f-11ee-175b-81c155750752 1 redundancy 0\n 67 │ 2398c08a-f59f-11ee-175b-81c155750752 2 redundancy 0 ⋯\n 68 │ 2398c08a-f59f-11ee-175b-81c155750752 3 redundancy 0\n 69 │ 2398c08a-f59f-11ee-175b-81c155750752 4 redundancy 0\n 70 │ 2398c08a-f59f-11ee-175b-81c155750752 5 redundancy 0\n 71 │ 2398c08a-f59f-11ee-175b-81c155750752 1 validity 1 ⋯\n 72 │ 2398c08a-f59f-11ee-175b-81c155750752 2 validity 1\n 73 │ 2398c08a-f59f-11ee-175b-81c155750752 3 validity 1\n 74 │ 2398c08a-f59f-11ee-175b-81c155750752 4 validity 1\n 75 │ 2398c08a-f59f-11ee-175b-81c155750752 5 validity 1 ⋯\n 5 
columns and 54 rows omitted","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"Since benchmarks return a DataFrame object on call, post-processing is straightforward. For example, we could use Tidier.jl:","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"using Tidier\n@chain bmk() begin\n @filter(variable == \"distance\")\n @select(sample, variable, value)\nend","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"5×3 DataFrame\n Row │ sample variable value \n │ Base.UUID String Float64 \n─────┼─────────────────────────────────────────────────────────\n 1 │ 239104d0-f59f-11ee-3d0c-d1db071927ff distance 3.17243\n 2 │ 2398b3e2-f59f-11ee-3323-13d53fb7e75b distance 3.07148\n 3 │ 2398b916-f59f-11ee-3f13-bd00858a39af distance 3.62159\n 4 │ 2398bce8-f59f-11ee-37c1-ef7c6de27b6b distance 2.62783\n 5 │ 2398c08a-f59f-11ee-175b-81c155750752 distance 2.91985","category":"page"},{"location":"tutorials/benchmarking/#Metadata-for-Counterfactual-Explanations","page":"Benchmarking Explanations","title":"Metadata for Counterfactual Explanations","text":"","category":"section"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"Benchmarks always report metadata for each counterfactual explanation, which is automatically inferred by default. The default metadata concerns the explained model and the employed generator. In the current example, we used the same model and generator for each individual:","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"@chain bmk() begin\n @group_by(sample)\n @select(sample, model, generator)\n @summarize(model=first(model),generator=first(generator))\n @ungroup\nend","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"5×3 DataFrame\n Row │ sample model ⋯\n │ Base.UUID Symbol ⋯\n─────┼──────────────────────────────────────────────────────────────────────────\n 1 │ 239104d0-f59f-11ee-3d0c-d1db071927ff FluxModel(Chain(Dense(2 => 2)), … ⋯\n 2 │ 2398b3e2-f59f-11ee-3323-13d53fb7e75b FluxModel(Chain(Dense(2 => 2)), …\n 3 │ 2398b916-f59f-11ee-3f13-bd00858a39af FluxModel(Chain(Dense(2 => 2)), …\n 4 │ 2398bce8-f59f-11ee-37c1-ef7c6de27b6b FluxModel(Chain(Dense(2 => 2)), …\n 5 │ 2398c08a-f59f-11ee-175b-81c155750752 FluxModel(Chain(Dense(2 => 2)), … ⋯\n 1 column omitted","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"Metadata can also be provided as an optional key argument.","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"meta_data = Dict(\n :generator => \"Generic\",\n :model => \"MLP\",\n)\nmeta_data = [meta_data for i in 1:length(ces)]\nbmk = benchmark(ces; meta_data=meta_data)\n@chain bmk() begin\n @group_by(sample)\n @select(sample, model, generator)\n @summarize(model=first(model),generator=first(generator))\n @ungroup\nend","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"5×3 DataFrame\n Row │ sample model generator \n │ Base.UUID String String 
\n─────┼─────────────────────────────────────────────────────────\n 1 │ 27fae496-f59f-11ee-2c30-f35d1025a6d4 MLP Generic\n 2 │ 27fdcc6a-f59f-11ee-030b-152c9794c5f1 MLP Generic\n 3 │ 27fdd04a-f59f-11ee-2010-e1732ff5d8d2 MLP Generic\n 4 │ 27fdd340-f59f-11ee-1d20-050a69dcacef MLP Generic\n 5 │ 27fdd5fc-f59f-11ee-02e8-d198e436abb3 MLP Generic","category":"page"},{"location":"tutorials/benchmarking/#Ad-Hoc-Benchmarking","page":"Benchmarking Explanations","title":"Ad Hoc Benchmarking","text":"","category":"section"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"So far we have assumed the following workflow:","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"Fit some machine learning model.\nGenerate counterfactual explanations for some individual(s) (generate_counterfactual).\nEvaluate and benchmark them (benchmark(ces::Vector{CounterfactualExplanation})).","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"In many cases, it may be preferable to combine these steps. To this end, we have added support for two scenarios of Ad Hoc Benchmarking.","category":"page"},{"location":"tutorials/benchmarking/#Pre-trained-Models","page":"Benchmarking Explanations","title":"Pre-trained Models","text":"","category":"section"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"In the first scenario, it is assumed that the machine learning models have been pre-trained, so the workflow can be summarized as follows:","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"Fit some machine learning model(s).\nGenerate counterfactual explanations and benchmark them.","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"We suspect that this is the most common workflow for practitioners who are interested in benchmarking counterfactual explanations for pre-trained machine learning models. Let’s go through this workflow using a simple example. 
We first train some models and store them in a dictionary:","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"models = Dict(\n :MLP => fit_model(counterfactual_data, :MLP),\n :Linear => fit_model(counterfactual_data, :Linear),\n)","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"Next, we store the counterfactual generators of interest in a dictionary as well:","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"generators = Dict(\n :Generic => GenericGenerator(),\n :Gravitational => GravitationalGenerator(),\n :Wachter => WachterGenerator(),\n :ClaPROAR => ClaPROARGenerator(),\n)","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"Then we can run a benchmark for individual(s) x, a pre-specified target and counterfactual_data as follows:","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"bmk = benchmark(x, target, counterfactual_data; models=models, generators=generators)","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"In this case, metadata is automatically inferred from the dictionaries:","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"@chain bmk() begin\n @filter(variable == \"distance\")\n @select(sample, variable, value, model, generator)\nend","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"8×5 DataFrame\n Row │ sample variable value model ⋯\n │ Base.UUID String Float64 Tuple… ⋯\n─────┼──────────────────────────────────────────────────────────────────────────\n 1 │ 2cba5eee-f59f-11ee-1844-cbc7a8372a38 distance 4.38877 (:Linear, Flux ⋯\n 2 │ 2cd740fe-f59f-11ee-35c3-1157eb1b7583 distance 4.17021 (:Linear, Flux\n 3 │ 2cd741e2-f59f-11ee-2b09-0d55ef9892b9 distance 4.31145 (:Linear, Flux\n 4 │ 2cd7420c-f59f-11ee-1996-6fa75e23bb57 distance 4.17035 (:Linear, Flux\n 5 │ 2cd74234-f59f-11ee-0ad0-9f21949f5932 distance 5.73182 (:MLP, FluxMod ⋯\n 6 │ 2cd7425c-f59f-11ee-3eb4-af34f85ffd3d distance 5.50606 (:MLP, FluxMod\n 7 │ 2cd7427a-f59f-11ee-10d3-a1df6c8dc125 distance 5.2114 (:MLP, FluxMod\n 8 │ 2cd74298-f59f-11ee-32d1-f501c104fea8 distance 5.3623 (:MLP, FluxMod\n 2 columns omitted","category":"page"},{"location":"tutorials/benchmarking/#Everything-at-once","page":"Benchmarking Explanations","title":"Everything at once","text":"","category":"section"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"Researchers, in particular, may be interested in combining all steps into one. 
This is the second scenario of Ad Hoc Benchmarking:","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"Fit some machine learning model(s), generate counterfactual explanations and benchmark them.","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"It involves calling benchmark directly on counterfactual data (the only positional argument):","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"bmk = benchmark(counterfactual_data)","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"This will use the default models from standard_models_catalogue and train them on the data. All available generators from generator_catalogue will also be used:","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"@chain bmk() begin\n @filter(variable == \"validity\")\n @select(sample, variable, value, model, generator)\nend","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"200×5 DataFrame\n Row │ sample variable value model genera ⋯\n │ Base.UUID String Float64 Symbol Symbol ⋯\n─────┼──────────────────────────────────────────────────────────────────────────\n 1 │ 32d1817e-f59f-11ee-152f-a30b18c2e6f7 validity 1.0 Linear gravit ⋯\n 2 │ 32d1817e-f59f-11ee-152f-a30b18c2e6f7 validity 1.0 Linear growin\n 3 │ 32d1817e-f59f-11ee-152f-a30b18c2e6f7 validity 1.0 Linear revise\n 4 │ 32d1817e-f59f-11ee-152f-a30b18c2e6f7 validity 1.0 Linear clue\n 5 │ 32d1817e-f59f-11ee-152f-a30b18c2e6f7 validity 1.0 Linear probe ⋯\n 6 │ 32d1817e-f59f-11ee-152f-a30b18c2e6f7 validity 1.0 Linear dice\n 7 │ 32d1817e-f59f-11ee-152f-a30b18c2e6f7 validity 1.0 Linear clapro\n 8 │ 32d1817e-f59f-11ee-152f-a30b18c2e6f7 validity 1.0 Linear wachte\n 9 │ 32d1817e-f59f-11ee-152f-a30b18c2e6f7 validity 1.0 Linear generi ⋯\n 10 │ 32d1817e-f59f-11ee-152f-a30b18c2e6f7 validity 1.0 Linear greedy\n 11 │ 32d255e8-f59f-11ee-3e8d-a9e9f6e23ea8 validity 1.0 Linear gravit\n ⋮ │ ⋮ ⋮ ⋮ ⋮ ⋱\n 191 │ 3382d08a-f59f-11ee-10b3-f7d18cf7d3b5 validity 1.0 MLP gravit\n 192 │ 3382d08a-f59f-11ee-10b3-f7d18cf7d3b5 validity 1.0 MLP growin ⋯\n 193 │ 3382d08a-f59f-11ee-10b3-f7d18cf7d3b5 validity 1.0 MLP revise\n 194 │ 3382d08a-f59f-11ee-10b3-f7d18cf7d3b5 validity 1.0 MLP clue\n 195 │ 3382d08a-f59f-11ee-10b3-f7d18cf7d3b5 validity 1.0 MLP probe\n 196 │ 3382d08a-f59f-11ee-10b3-f7d18cf7d3b5 validity 1.0 MLP dice ⋯\n 197 │ 3382d08a-f59f-11ee-10b3-f7d18cf7d3b5 validity 1.0 MLP clapro\n 198 │ 3382d08a-f59f-11ee-10b3-f7d18cf7d3b5 validity 1.0 MLP wachte\n 199 │ 3382d08a-f59f-11ee-10b3-f7d18cf7d3b5 validity 1.0 MLP generi\n 200 │ 3382d08a-f59f-11ee-10b3-f7d18cf7d3b5 validity 1.0 MLP greedy ⋯\n 1 column and 179 rows omitted","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"Optionally, you can instead provide a dictionary of models and generators as before. 
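","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"For example, reusing the models and generators dictionaries from above, a usage sketch might look like this (the list below describes what counts as a valid model value):","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"# Ad hoc benchmark with custom models and generators:\nbmk = benchmark(counterfactual_data; models=models, generators=generators)","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"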
Each value in the models dictionary should be one of two things:","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"An object M of type AbstractModel that implements the Models.train method.\nA DataType that can be called on CounterfactualData to create an object M as in the first case.","category":"page"},{"location":"tutorials/benchmarking/#Multiple-Datasets","page":"Benchmarking Explanations","title":"Multiple Datasets","text":"","category":"section"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"Benchmarks are run on single instances of type CounterfactualData. This is our design choice for two reasons:","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"We want to prevent the loops inside the benchmark method(s) from getting too nested and convoluted.\nWhile it is straightforward to infer metadata for models and generators, this is not the case for datasets.","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"Fortunately, it is very easy to run benchmarks for multiple datasets anyway, since Benchmark instances can be concatenated. To see how, let’s consider an example involving multiple datasets, models and generators:","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"# Data:\ndatasets = Dict(\n :moons => CounterfactualData(load_moons()...),\n :circles => CounterfactualData(load_circles()...),\n)\n\n# Models:\nmodels = Dict(\n :MLP => FluxModel,\n :Linear => Linear,\n)\n\n# Generators:\ngenerators = Dict(\n :Generic => GenericGenerator(),\n :Greedy => GreedyGenerator(),\n)","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"Then we can simply loop over the datasets and eventually concatenate the results like so:","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"using CounterfactualExplanations.Evaluation: distance_measures\nbmks = []\nfor (dataname, dataset) in datasets\n bmk = benchmark(dataset; models=models, generators=generators, measure=distance_measures)\n push!(bmks, bmk)\nend\nbmk = vcat(bmks[1], bmks[2]; ids=collect(keys(datasets)))","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"When ids are supplied, a new id column is added to the evaluation data frame that contains unique identifiers for the different benchmarks. 
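","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"For instance, the concatenation above could name the indicator column explicitly; this is a usage sketch of the optional argument described next:","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"# Concatenate benchmarks and name the indicator column explicitly:\nbmk = vcat(bmks[1], bmks[2]; ids=collect(keys(datasets)), idcol_name=\"dataset\")","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"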
The optional idcol_name argument can be used to specify the name for that indicator column (defaults to \"dataset\"):","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"@chain bmk() begin\n @group_by(dataset, generator)\n @filter(model == :MLP)\n @filter(variable == \"distance_l1\")\n @summarize(L1_norm=mean(value))\n @ungroup\nend","category":"page"},{"location":"tutorials/benchmarking/","page":"Benchmarking Explanations","title":"Benchmarking Explanations","text":"4×3 DataFrame\n Row │ dataset generator L1_norm \n │ Symbol Symbol Float32 \n─────┼──────────────────────────────\n 1 │ moons Generic 1.56555\n 2 │ moons Greedy 0.819269\n 3 │ circles Generic 1.83524\n 4 │ circles Greedy 0.498953","category":"page"},{"location":"tutorials/models/","page":"Handling Models","title":"Handling Models","text":"CurrentModule = CounterfactualExplanations ","category":"page"},{"location":"tutorials/models/#Handling-Models","page":"Handling Models","title":"Handling Models","text":"","category":"section"},{"location":"tutorials/models/","page":"Handling Models","title":"Handling Models","text":"The typical use-case for Counterfactual Explanations and Algorithmic Recourse is as follows: users have trained some supervised model that is not inherently interpretable and are looking for a way to explain it. In this tutorial, we will see how pre-trained models can be used with this package.","category":"page"},{"location":"tutorials/models/#Models-trained-in-Flux.jl","page":"Handling Models","title":"Models trained in Flux.jl","text":"","category":"section"},{"location":"tutorials/models/","page":"Handling Models","title":"Handling Models","text":"We will train a simple binary classifier in Flux.jl on the popular Moons dataset:","category":"page"},{"location":"tutorials/models/","page":"Handling Models","title":"Handling Models","text":"n = 500\ndata = TaijaData.load_moons(n)\ncounterfactual_data = DataPreprocessing.CounterfactualData(data...)\nX = counterfactual_data.X\ny = counterfactual_data.y\nplt = plot()\nscatter!(counterfactual_data)","category":"page"},{"location":"tutorials/models/","page":"Handling Models","title":"Handling Models","text":"(Image: )","category":"page"},{"location":"tutorials/models/","page":"Handling Models","title":"Handling Models","text":"The following code chunk sets up a Deep Neural Network for the task at hand:","category":"page"},{"location":"tutorials/models/","page":"Handling Models","title":"Handling Models","text":"data = Flux.DataLoader((X,y),batchsize=1)\ninput_dim = size(X,1)\nn_hidden = 32\nactivation = relu\noutput_dim = 2\nnn = Chain(\n Dense(input_dim, n_hidden, activation),\n Dropout(0.1),\n Dense(n_hidden, output_dim)\n)\n# Cross-entropy loss on the logits for input x and one-hot label y:\nloss(x, y) = Flux.Losses.logitcrossentropy(nn(x), y)","category":"page"},{"location":"tutorials/models/","page":"Handling Models","title":"Handling Models","text":"Next, we fit the network to the data:","category":"page"},{"location":"tutorials/models/","page":"Handling Models","title":"Handling Models","text":"using Flux.Optimise: update!, Adam\nopt = Adam()\nepochs = 100\navg_loss(data) = mean(map(d -> loss(d[1],d[2]), data))\nshow_every = epochs/5\n# Training:\nfor epoch = 1:epochs\n for d in data\n gs = gradient(Flux.params(nn)) do\n l = loss(d...)\n end\n update!(opt, Flux.params(nn), gs)\n end\n if epoch % show_every == 0\n println(\"Epoch \" * string(epoch))\n @show avg_loss(data)\n 
end\nend","category":"page"},{"location":"tutorials/models/","page":"Handling Models","title":"Handling Models","text":"Epoch 20\navg_loss(data) = 0.1407434f0\nEpoch 40\navg_loss(data) = 0.11345118f0\nEpoch 60\navg_loss(data) = 0.046319224f0\nEpoch 80\navg_loss(data) = 0.011847609f0\nEpoch 100\navg_loss(data) = 0.007242911f0","category":"page"},{"location":"tutorials/models/","page":"Handling Models","title":"Handling Models","text":"To prepare the fitted model for use with our package, we need to wrap it inside a container. For plain-vanilla models trained in Flux.jl, the corresponding constructor is called MLP. There is also a separate constructor called DeepEnsemble, which applies to Deep Ensembles. Deep Ensembles are a popular approach to approximate Bayesian Deep Learning and have been shown to generate good predictive uncertainty estimates (Lakshminarayanan, Pritzel, and Blundell 2017).","category":"page"},{"location":"tutorials/models/","page":"Handling Models","title":"Handling Models","text":"The appropriate API call to wrap our simple network in a container follows below:","category":"page"},{"location":"tutorials/models/","page":"Handling Models","title":"Handling Models","text":"M = MLP(nn)","category":"page"},{"location":"tutorials/models/","page":"Handling Models","title":"Handling Models","text":"CounterfactualExplanations.Models.Model(Chain(Dense(2 => 32, relu), Dropout(0.1, active=false), Dense(32 => 2)), :classification_binary, Chain(Dense(2 => 32, relu), Dropout(0.1, active=false), Dense(32 => 2)), MLP())","category":"page"},{"location":"tutorials/models/","page":"Handling Models","title":"Handling Models","text":"The likelihood function of the output variable is automatically inferred from the data. The generic plot() method can be called on the model and data to visualise the results:","category":"page"},{"location":"tutorials/models/","page":"Handling Models","title":"Handling Models","text":"plot(M, counterfactual_data)","category":"page"},{"location":"tutorials/models/","page":"Handling Models","title":"Handling Models","text":"(Image: )","category":"page"},{"location":"tutorials/models/","page":"Handling Models","title":"Handling Models","text":"Our model M is now ready for use with the package.","category":"page"},{"location":"tutorials/models/#References","page":"Handling Models","title":"References","text":"","category":"section"},{"location":"tutorials/models/","page":"Handling Models","title":"Handling Models","text":"Lakshminarayanan, Balaji, Alexander Pritzel, and Charles Blundell. 2017. 
“Simple and Scalable Predictive Uncertainty Estimation Using Deep Ensembles.” Advances in Neural Information Processing Systems 30.","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"CurrentModule = CounterfactualExplanations","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"(Image: )","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"Documentation for CounterfactualExplanations.jl.","category":"page"},{"location":"#CounterfactualExplanations","page":"🏠 Home","title":"CounterfactualExplanations","text":"","category":"section"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"Counterfactual Explanations and Algorithmic Recourse in Julia.","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"(Image: Stable) (Image: Dev) (Image: Build Status) (Image: Coverage) (Image: Code Style: Blue) (Image: License) (Image: Package Downloads) (Image: Aqua QA)","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"CounterfactualExplanations.jl is a package for generating Counterfactual Explanations (CE) and Algorithmic Recourse (AR) for black-box algorithms. Both CE and AR are related tools for explainable artificial intelligence (XAI). While the package is written purely in Julia, it can be used to explain machine learning algorithms developed and trained in other popular programming languages like Python and R. See below for a short introduction and other resources or dive straight into the docs.","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"There is also a corresponding paper, Explaining Black-Box Models through Counterfactuals, which has been published in JuliaCon Proceedings. Please consider citing the paper, if you use this package in your work:","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"(Image: DOI) (Image: DOI)","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"@article{Altmeyer2023,\n doi = {10.21105/jcon.00130},\n url = {https://doi.org/10.21105/jcon.00130},\n year = {2023},\n publisher = {The Open Journal},\n volume = {1},\n number = {1},\n pages = {130},\n author = {Patrick Altmeyer and Arie van Deursen and Cynthia C. S. Liem},\n title = {Explaining Black-Box Models through Counterfactuals},\n journal = {Proceedings of the JuliaCon Conferences}\n}","category":"page"},{"location":"#Installation","page":"🏠 Home","title":"🚩 Installation","text":"","category":"section"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"You can install the stable release from Julia’s General Registry as follows:","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"using Pkg\nPkg.add(\"CounterfactualExplanations\")","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"CounterfactualExplanations.jl is under active development. To install the development version of the package you can run the following command:","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"using Pkg\nPkg.add(url=\"https://github.com/juliatrustworthyai/CounterfactualExplanations.jl\")","category":"page"},{"location":"#Background-and-Motivation","page":"🏠 Home","title":"🤔 Background and Motivation","text":"","category":"section"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"Machine learning models like Deep Neural Networks have become so complex, opaque and underspecified in the data that they are generally considered Black Boxes. 
Nonetheless, such models often play a key role in data-driven decision-making systems. This creates the following problem: human operators in charge of such systems have to rely on them blindly, while those individuals subject to them generally have no way of challenging an undesirable outcome:","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"“You cannot appeal to (algorithms). They do not listen. Nor do they bend.”— Cathy O’Neil in Weapons of Math Destruction, 2016","category":"page"},{"location":"#Enter:-Counterfactual-Explanations","page":"🏠 Home","title":"🔮 Enter: Counterfactual Explanations","text":"","category":"section"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"Counterfactual Explanations can help human stakeholders make sense of the systems they develop, use or endure: they explain how inputs into a system need to change for it to produce different decisions. Explainability benefits internal as well as external quality assurance.","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"Counterfactual Explanations have a few properties that are desirable in the context of Explainable Artificial Intelligence (XAI). These include:","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"Full fidelity to the black-box model, since no proxy is involved.\nNo need for (reasonably) interpretable features as opposed to LIME and SHAP.\nClear link to Algorithmic Recourse and Causal Inference.\nLess susceptible to adversarial attacks than LIME and SHAP.","category":"page"},{"location":"#Simple-Usage-Example","page":"🏠 Home","title":"Simple Usage Example","text":"","category":"section"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"To get started, try out this simple usage example with synthetic data:","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"using CounterfactualExplanations\nusing CounterfactualExplanations.Models\nusing Plots\nusing TaijaData\nusing TaijaPlotting\n\n# Data and Model:\ndata = load_linearly_separable()\ncounterfactual_data = CounterfactualData(data...)\nM = fit_model(counterfactual_data, :Linear)\n\n# Choose factual:\ntarget = 2\nfactual = 1\nchosen = findall(predict_label(M, counterfactual_data) .== factual) |>\n rand\nx = select_factual(counterfactual_data, chosen)\n\n# Generate counterfactuals\ngenerator = WachterGenerator()\nce = generate_counterfactual(\n x, # factual\n target, # target\n counterfactual_data, # data\n M, # model\n generator # counterfactual generator\n)\nplot(ce)","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"(Image: )","category":"page"},{"location":"#Example:-Give-Me-Some-Credit","page":"🏠 Home","title":"Example: Give Me Some Credit","text":"","category":"section"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"Consider the following real-world scenario: a retail bank is using a black-box model trained on their clients’ credit history to decide whether they will provide credit to new applicants. 
To simulate this scenario, we have pre-trained a binary classifier on the publicly available Give Me Some Credit dataset that ships with this package (Kaggle 2011).","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"The figure below shows counterfactuals for 10 randomly chosen individuals that would have been denied credit initially.","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"(Image: )","category":"page"},{"location":"#Example:-MNIST","page":"🏠 Home","title":"Example: MNIST","text":"","category":"section"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"The figure below shows a counterfactual generated for an image classifier trained on MNIST: in particular, it demonstrates which pixels need to change in order for the classifier to predict 3 instead of 8.","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"Since v0.1.9 counterfactual generators are fully composable. Here we have composed a generator that combines ideas from Wachter, Mittelstadt, and Russell (2017) and Altmeyer et al. (2023):","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"# Compose generator:\nusing CounterfactualExplanations.Objectives: distance_mad, distance_from_target\ngenerator = GradientBasedGenerator()\n@chain generator begin\n @objective logitcrossentropy + 0.2distance_mad + 0.1distance_from_target\n @with_optimiser Adam(0.1) \nend","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"(Image: )","category":"page"},{"location":"#Usage-example","page":"🏠 Home","title":"🔍 Usage example","text":"","category":"section"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"Generating counterfactuals will typically look as follows. Below we first fit a simple model to a synthetic dataset with linearly separable features and then draw a random sample:","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"# Data and Classifier:\ncounterfactual_data = CounterfactualData(load_linearly_separable()...)\nM = fit_model(counterfactual_data, :Linear)\n\n# Select random sample:\ntarget = 2\nfactual = 1\nchosen = rand(findall(predict_label(M, counterfactual_data) .== factual))\nx = select_factual(counterfactual_data, chosen)","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"To this end, we specify a counterfactual generator of our choice:","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"# Counterfactual search:\ngenerator = DiCEGenerator(λ=[0.1,0.3])","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"Here, we have chosen to use the DiCEGenerator to move the individual from its factual label 1 to the target label 2.","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"With all of our ingredients specified, we finally generate counterfactuals using a simple API call:","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"conv = CounterfactualExplanations.Convergence.GeneratorConditionsConvergence()\nce = generate_counterfactual(\n x, target, counterfactual_data, M, generator; \n num_counterfactuals=3, convergence=conv,\n)","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"The plot below shows the resulting counterfactual path:","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"(Image: )","category":"page"},{"location":"#Implemented-Counterfactual-Generators","page":"🏠 Home","title":"☑️ 
Implemented Counterfactual Generators","text":"","category":"section"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"Currently, the following counterfactual generators are implemented:","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"ClaPROAR (Altmeyer et al. 2023)\nCLUE (Antorán et al. 2020)\nDiCE (Mothilal, Sharma, and Tan 2020)\nECCCo (Altmeyer et al. 2024)\nFeatureTweak (Tolomei et al. 2017)\nGeneric\nGravitationalGenerator (Altmeyer et al. 2023)\nGreedy (Schut et al. 2021)\nGrowingSpheres (Laugel et al. 2017)\nMINT (Karimi et al. 2020) (causal CE)\nPROBE (Pawelczyk et al. 2023)\nREVISE (Joshi et al. 2019)\nT-CREx (Bewley et al. 2024) (global CE)\nWachter (Wachter, Mittelstadt, and Russell 2017)","category":"page"},{"location":"#Goals-and-limitations","page":"🏠 Home","title":"🎯 Goals and limitations","text":"","category":"section"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"The goal of this library is to contribute to efforts towards trustworthy machine learning in Julia. The Julia language has an edge when it comes to trustworthiness: it is very transparent. Packages like this one are generally written in pure Julia, which makes it easy for users and developers to understand and contribute to open-source code. Eventually, this project aims to offer a one-stop-shop of counterfactual explanations.","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"Our ambition is to enhance the package through the following features:","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"Support for all supervised machine learning models trained in MLJ.jl.\nSupport for regression models.","category":"page"},{"location":"#Contribute","page":"🏠 Home","title":"🛠 Contribute","text":"","category":"section"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"Contributions of any kind are very much welcome! Take a look at the issue tracker to see what things we are currently working on. If you have an idea for a new feature or want to report a bug, please open a new issue.","category":"page"},{"location":"#Development","page":"🏠 Home","title":"Development","text":"","category":"section"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"If you're looking to contribute code, it may be helpful to check out the Explanation section of the docs.","category":"page"},{"location":"#Testing","page":"🏠 Home","title":"Testing","text":"","category":"section"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"Please always make sure to add tests for any new features or changes.","category":"page"},{"location":"#Documentation","page":"🏠 Home","title":"Documentation","text":"","category":"section"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"If you add new features or change existing ones, please make sure to update the documentation accordingly. The documentation is written in Documenter.jl and is located in the docs/src folder.","category":"page"},{"location":"#Log-Changes","page":"🏠 Home","title":"Log Changes","text":"","category":"section"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"As of version 1.1.1, we have tried to be more stringent about logging changes. Please make sure to add a note to the CHANGELOG.md file for any changes you make. 
It is sufficient to add a note under the Unreleased section.","category":"page"},{"location":"#General-Pointers","page":"🏠 Home","title":"General Pointers","text":"","category":"section"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"There are also some general pointers for people looking to contribute to any of our Taija packages here.","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"Please follow the SciML ColPrac guide.","category":"page"},{"location":"#Citation","page":"🏠 Home","title":"🎓 Citation","text":"","category":"section"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"If you want to use this codebase, please consider citing the corresponding paper:","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"@article{Altmeyer2023,\n doi = {10.21105/jcon.00130},\n url = {https://doi.org/10.21105/jcon.00130},\n year = {2023},\n publisher = {The Open Journal},\n volume = {1},\n number = {1},\n pages = {130},\n author = {Patrick Altmeyer and Arie van Deursen and Cynthia C. S. Liem},\n title = {Explaining Black-Box Models through Counterfactuals},\n journal = {Proceedings of the JuliaCon Conferences}\n}","category":"page"},{"location":"#References","page":"🏠 Home","title":"📚 References","text":"","category":"section"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"Altmeyer, Patrick, Giovan Angela, Aleksander Buszydlik, Karol Dobiczek, Arie van Deursen, and Cynthia CS Liem. 2023. “Endogenous Macrodynamics in Algorithmic Recourse.” In 2023 IEEE Conference on Secure and Trustworthy Machine Learning (SaTML), 418–31. IEEE.","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"Altmeyer, Patrick, Mojtaba Farmanbar, Arie van Deursen, and Cynthia CS Liem. 2024. “Faithful Model Explanations Through Energy-Constrained Conformal Counterfactuals.” In Proceedings of the AAAI Conference on Artificial Intelligence, 38:10829–37. 10.","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"Antorán, Javier, Umang Bhatt, Tameem Adel, Adrian Weller, and José Miguel Hernández-Lobato. 2020. “Getting a Clue: A Method for Explaining Uncertainty Estimates.” https://arxiv.org/abs/2006.06848.","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"Bewley, Tom, Salim I. Amoukou, Saumitra Mishra, Daniele Magazzeni, and Manuela Veloso. 2024. “Counterfactual Metarules for Local and Global Recourse.” https://arxiv.org/abs/2405.18875.","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"Joshi, Shalmali, Oluwasanmi Koyejo, Warut Vijitbenjaronk, Been Kim, and Joydeep Ghosh. 2019. “Towards Realistic Individual Recourse and Actionable Explanations in Black-Box Decision Making Systems.” https://arxiv.org/abs/1907.09615.","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"Kaggle. 2011. “Give Me Some Credit, Improve on the State of the Art in Credit Scoring by Predicting the Probability That Somebody Will Experience Financial Distress in the Next Two Years.” https://www.kaggle.com/c/GiveMeSomeCredit; Kaggle. https://www.kaggle.com/c/GiveMeSomeCredit.","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"Karimi, Amir-Hossein, Julius Von Kügelgen, Bernhard Schölkopf, and Isabel Valera. 2020. 
“Algorithmic Recourse Under Imperfect Causal Knowledge: A Probabilistic Approach.” https://arxiv.org/abs/2006.06831.","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"Laugel, Thibault, Marie-Jeanne Lesot, Christophe Marsala, Xavier Renard, and Marcin Detyniecki. 2017. “Inverse Classification for Comparison-Based Interpretability in Machine Learning.” https://arxiv.org/abs/1712.08443.","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"Mothilal, Ramaravind K, Amit Sharma, and Chenhao Tan. 2020. “Explaining Machine Learning Classifiers Through Diverse Counterfactual Explanations.” In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 607–17. https://doi.org/10.1145/3351095.3372850.","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"Pawelczyk, Martin, Teresa Datta, Johannes van-den-Heuvel, Gjergji Kasneci, and Himabindu Lakkaraju. 2023. “Probabilistically Robust Recourse: Navigating the Trade-Offs Between Costs and Robustness in Algorithmic Recourse.” https://arxiv.org/abs/2203.06768.","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"Schut, Lisa, Oscar Key, Rory Mc Grath, Luca Costabello, Bogdan Sacaleanu, Yarin Gal, et al. 2021. “Generating Interpretable Counterfactual Explanations By Implicit Minimisation of Epistemic and Aleatoric Uncertainties.” In International Conference on Artificial Intelligence and Statistics, 1756–64. PMLR.","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"Tolomei, Gabriele, Fabrizio Silvestri, Andrew Haines, and Mounia Lalmas. 2017. “Interpretable Predictions of Tree-Based Ensembles via Actionable Feature Tweaking.” In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 465–74. https://doi.org/10.1145/3097983.3098039.","category":"page"},{"location":"","page":"🏠 Home","title":"🏠 Home","text":"Wachter, Sandra, Brent Mittelstadt, and Chris Russell. 2017. “Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR.” Harv. JL & Tech. 31: 841. https://doi.org/10.2139/ssrn.3063289.","category":"page"},{"location":"tutorials/","page":"Overview","title":"Overview","text":"CurrentModule = CounterfactualExplanations","category":"page"},{"location":"tutorials/#Tutorials","page":"Overview","title":"Tutorials","text":"","category":"section"},{"location":"tutorials/","page":"Overview","title":"Overview","text":"In this section, you will find a series of tutorials that should help you gain a basic understanding of Counterfactual Explanations and how to apply them in Julia using this package.","category":"page"},{"location":"tutorials/","page":"Overview","title":"Overview","text":"Tutorials are lessons that take the reader by the hand through a series of steps to complete a project of some kind. 
Tutorials are learning-oriented.— Diátaxis","category":"page"},{"location":"tutorials/","page":"Overview","title":"Overview","text":"In other words, you come here because you are new to this topic and are looking for a first peek at the methodology and code 🫣.","category":"page"},{"location":"explanation/generators/growing_spheres/","page":"GrowingSpheres","title":"GrowingSpheres","text":"CurrentModule = CounterfactualExplanations","category":"page"},{"location":"explanation/generators/growing_spheres/#GrowingSpheres","page":"GrowingSpheres","title":"GrowingSpheres","text":"","category":"section"},{"location":"explanation/generators/growing_spheres/","page":"GrowingSpheres","title":"GrowingSpheres","text":"Growing Spheres refers to the generator introduced by Laugel et al. (2017). Our implementation takes inspiration from the CARLA library.","category":"page"},{"location":"explanation/generators/growing_spheres/#Principle-of-the-Proposed-Approach","page":"GrowingSpheres","title":"Principle of the Proposed Approach","text":"","category":"section"},{"location":"explanation/generators/growing_spheres/","page":"GrowingSpheres","title":"GrowingSpheres","text":"In order to interpret a prediction through comparison, the Growing Spheres algorithm focuses on finding an observation belonging to the other class and answers the question: “Considering an observation and a classifier, what is the minimal change we need to apply in order to change the prediction of this observation?”. This problem is similar to inverse classification but applied to interpretability.","category":"page"},{"location":"explanation/generators/growing_spheres/","page":"GrowingSpheres","title":"GrowingSpheres","text":"Explaining how to change a prediction can help the user understand what the model considers as locally important. The Growing Spheres approach provides insights into the classifier’s behavior without claiming any causal knowledge. It differs from other interpretability approaches and is not concerned with the global behavior of the model. Instead, it aims to provide local insights into the classifier’s decision-making process.","category":"page"},{"location":"explanation/generators/growing_spheres/","page":"GrowingSpheres","title":"GrowingSpheres","text":"The algorithm finds the closest “ennemy” observation, which is an observation classified into a different class than the input observation. 
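To make this concrete, the sketch below mimics the generation phase of this search. It is purely illustrative: the function closest_ennemy, its keyword arguments and the plain rejection-sampling loop are assumptions made for this example, not the package's internals.\nusing LinearAlgebra: norm\n\n# Illustrative sketch only: grow spherical layers around x until an\n# observation from the other class (an ennemy) is found.\nfunction closest_ennemy(f, x; η=0.1, n=1000, max_radius=10.0)\n    a₀ = 0.0 # inner radius of the current layer\n    while a₀ < max_radius\n        a₁ = a₀ + η # outer radius of the current layer\n        # Sample n candidates from the spherical layer (a₀, a₁):\n        candidates = map(1:n) do _\n            d = randn(length(x))\n            d ./= norm(d) # random direction\n            x .+ (a₀ + (a₁ - a₀) * rand()) .* d # random radius in layer\n        end\n        ennemies = filter(e -> f(e) != f(x), candidates)\n        # Return the ennemy closest to x, if any was found in this layer:\n        isempty(ennemies) || return argmin(e -> norm(e .- x), ennemies)\n        a₀ = a₁ # grow the sphere and try again\n    end\n    return nothing # no ennemy found within max_radius\nend\nIn the package itself, this search is handled by the GrowingSpheresGenerator used in the example further below. 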
The final explanation is the difference vector between the input observation and the ennemy.","category":"page"},{"location":"explanation/generators/growing_spheres/#Finding-the-Closest-Ennemy","page":"GrowingSpheres","title":"Finding the Closest Ennemy","text":"","category":"section"},{"location":"explanation/generators/growing_spheres/","page":"GrowingSpheres","title":"GrowingSpheres","text":"The algorithm solves the following minimization problem to find the closest ennemy:","category":"page"},{"location":"explanation/generators/growing_spheres/","page":"GrowingSpheres","title":"GrowingSpheres","text":"e^* = \arg \min_{e \in X} \left\{ c(x, e) \mid f(e) \neq f(x) \right\}","category":"page"},{"location":"explanation/generators/growing_spheres/","page":"GrowingSpheres","title":"GrowingSpheres","text":"The cost function c(x, e) is defined as:","category":"page"},{"location":"explanation/generators/growing_spheres/","page":"GrowingSpheres","title":"GrowingSpheres","text":"c(x, e) = \|x - e\|_2 + \gamma \|x - e\|_0","category":"page"},{"location":"explanation/generators/growing_spheres/","page":"GrowingSpheres","title":"GrowingSpheres","text":"where ||.||_2 is the Euclidean norm and ||.||_0 is the sparsity measure. The weight gamma balances the importance of sparsity in the cost function.","category":"page"},{"location":"explanation/generators/growing_spheres/","page":"GrowingSpheres","title":"GrowingSpheres","text":"To approximate the solution, the Growing Spheres algorithm uses a two-step heuristic approach. The first step is the Generation phase, where observations are generated in spherical layers around the input observation. The second step is the Feature Selection phase, where the generated observation with the smallest change in each feature is selected.","category":"page"},{"location":"explanation/generators/growing_spheres/#Example","page":"GrowingSpheres","title":"Example","text":"","category":"section"},{"location":"explanation/generators/growing_spheres/","page":"GrowingSpheres","title":"GrowingSpheres","text":"generator = GrowingSpheresGenerator()\nM = fit_model(counterfactual_data, :DeepEnsemble)\nce = generate_counterfactual(\n x, target, counterfactual_data, M, generator)\nplot(ce)","category":"page"},{"location":"explanation/generators/growing_spheres/","page":"GrowingSpheres","title":"GrowingSpheres","text":"(Image: )","category":"page"},{"location":"explanation/generators/growing_spheres/#References","page":"GrowingSpheres","title":"References","text":"","category":"section"},{"location":"explanation/generators/growing_spheres/","page":"GrowingSpheres","title":"GrowingSpheres","text":"Laugel, Thibault, Marie-Jeanne Lesot, Christophe Marsala, Xavier Renard, and Marcin Detyniecki. 2017. “Inverse Classification for Comparison-Based Interpretability in Machine Learning.” arXiv. https://doi.org/10.48550/arXiv.1712.08443.","category":"page"},{"location":"contribute/#Contribute","page":"🛠 Contribute","title":"🛠 Contribute","text":"","category":"section"},{"location":"contribute/","page":"🛠 Contribute","title":"🛠 Contribute","text":"Contributions of any kind are very much welcome! Take a look at the issue tracker to see what things we are currently working on. 
If you have an idea for a new feature or want to report a bug, please open a new issue.","category":"page"},{"location":"contribute/#Development","page":"🛠 Contribute","title":"Development","text":"","category":"section"},{"location":"contribute/","page":"🛠 Contribute","title":"🛠 Contribute","text":"If you're looking to contribute code, it may be helpful to check out the Explanation section of the docs.","category":"page"},{"location":"contribute/#Testing","page":"🛠 Contribute","title":"Testing","text":"","category":"section"},{"location":"contribute/","page":"🛠 Contribute","title":"🛠 Contribute","text":"Please always make sure to add tests for any new features or changes.","category":"page"},{"location":"contribute/#Documentation","page":"🛠 Contribute","title":"Documentation","text":"","category":"section"},{"location":"contribute/","page":"🛠 Contribute","title":"🛠 Contribute","text":"If you add new features or change existing ones, please make sure to update the documentation accordingly. The documentation is written in Documenter.jl and is located in the docs/src folder.","category":"page"},{"location":"contribute/#Log-Changes","page":"🛠 Contribute","title":"Log Changes","text":"","category":"section"},{"location":"contribute/","page":"🛠 Contribute","title":"🛠 Contribute","text":"As of version 1.1.1, we have tried to be more stringent about logging changes. Please make sure to add a note to the CHANGELOG.md file for any changes you make. It is sufficient to add a note under the Unreleased section.","category":"page"},{"location":"contribute/#General-Pointers","page":"🛠 Contribute","title":"General Pointers","text":"","category":"section"},{"location":"contribute/","page":"🛠 Contribute","title":"🛠 Contribute","text":"There are also some general pointers for people looking to contribute to any of our Taija packages here.","category":"page"},{"location":"contribute/","page":"🛠 Contribute","title":"🛠 Contribute","text":"Please follow the SciML ColPrac guide.","category":"page"},{"location":"explanation/architecture/","page":"Package Architecture","title":"Package Architecture","text":"CurrentModule = CounterfactualExplanations ","category":"page"},{"location":"explanation/architecture/#Package-Architecture","page":"Package Architecture","title":"Package Architecture","text":"","category":"section"},{"location":"explanation/architecture/","page":"Package Architecture","title":"Package Architecture","text":"The diagram below provides an overview of the package architecture. It is built around two core modules that are designed to be as extensible as possible through dispatch: 1) Models is concerned with making any arbitrary model compatible with the package; 2) Generators is used to implement arbitrary counterfactual search algorithms.[1]","category":"page"},{"location":"explanation/architecture/","page":"Package Architecture","title":"Package Architecture","text":"The core function of the package, generate_counterfactual, uses an instance of type AbstractModel produced by the Models module and an instance of type AbstractGenerator produced by the Generators module.","category":"page"},{"location":"explanation/architecture/","page":"Package Architecture","title":"Package Architecture","text":"Metapackages from the Taija ecosystem provide additional functionality such as datasets, language interoperability, parallelization, and plotting. 
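For example, the synthetic datasets and plotting recipes used throughout these docs come from TaijaData and TaijaPlotting, respectively. The snippet below is a minimal sketch of this interplay; it only combines calls that already appear elsewhere in these docs.\nusing CounterfactualExplanations\nusing CounterfactualExplanations.Models\nusing Plots\nusing TaijaData # Taija metapackage providing datasets\nusing TaijaPlotting # Taija metapackage providing plotting recipes\n\ndata = CounterfactualData(load_linearly_separable()...) # dataset from TaijaData\nM = fit_model(data, :Linear) # model container from the Models module\nplot(M, data) # plot recipe supplied by TaijaPlotting\n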
The CounterfactualExplanations package is designed to be used in conjunction with these metapackages, but can also be used as a standalone package.","category":"page"},{"location":"explanation/architecture/","page":"Package Architecture","title":"Package Architecture","text":"(Image: )","category":"page"},{"location":"explanation/architecture/","page":"Package Architecture","title":"Package Architecture","text":"[1] We have made an effort to keep the code base as flexible and extensible as possible, but cannot guarantee at this point that any counterfactual generator can be implemented without further adaptation.","category":"page"},{"location":"explanation/optimisers/overview/","page":"Overview","title":"Overview","text":"CurrentModule = CounterfactualExplanations ","category":"page"},{"location":"explanation/optimisers/overview/#Optimisation-Rules","page":"Overview","title":"Optimisation Rules","text":"","category":"section"},{"location":"explanation/optimisers/overview/","page":"Overview","title":"Overview","text":"Counterfactual search is an optimization problem. Consequently, the choice of the optimisation rule affects the generated counterfactuals. In the short term, we aim to enable users to choose any of the available Flux optimisers. This has not been sufficiently tested yet, and you may run into issues.","category":"page"},{"location":"explanation/optimisers/overview/#Custom-Optimisation-Rules","page":"Overview","title":"Custom Optimisation Rules","text":"","category":"section"},{"location":"explanation/optimisers/overview/","page":"Overview","title":"Overview","text":"Flux optimisers are specifically designed for deep learning, and in particular, for learning model parameters. In counterfactual search, the features are the free parameters that we are optimising over. To this end, some custom optimisation rules are necessary to incorporate ideas presented in the literature. In the following, we introduce those rules.","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"CurrentModule = CounterfactualExplanations ","category":"page"},{"location":"tutorials/data_preprocessing/#Handling-Data","page":"Handling Data","title":"Handling Data","text":"","category":"section"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"The package works with custom data containers that contain the input and output data as well as information about the type and mutability of features. In this tutorial, we will see how data can be prepared for use with the package.","category":"page"},{"location":"tutorials/data_preprocessing/#Basic-Functionality","page":"Handling Data","title":"Basic Functionality","text":"","category":"section"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"To demonstrate the basic way to prepare data, let’s look at a standard benchmark dataset: Fisher’s classic iris dataset. 
We can use MLDatasets to load this data.","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"dataset = Iris()","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"Our data constructor CounterfactualData needs at least two inputs: features X and targets y.","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"X = dataset.features\ny = dataset.targets","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"Next, we convert the input data to a Tables.MatrixTable (following the MLJ.jl convention). Concerning the target variable, we just grab the first column of the data frame.","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"X = table(Tables.matrix(X))\ny = y[:,1]","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"Now we can feed these two ingredients to our constructor:","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"counterfactual_data = CounterfactualData(X, y)","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"Under the hood, the constructor performs basic preprocessing steps. For example, the output variable y is automatically one-hot encoded:","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"counterfactual_data.y","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"3×150 Matrix{Bool}:\n 1 1 1 1 1 1 1 1 1 1 1 1 1 … 0 0 0 0 0 0 0 0 0 0 0 0\n 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"Similarly, a transformer used to scale continuous input features is automatically fitted:","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"counterfactual_data.input_encoder","category":"page"},{"location":"tutorials/data_preprocessing/#Categorical-Features","page":"Handling Data","title":"Categorical Features","text":"","category":"section"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"For the counterfactual search, it is important to distinguish between continuous and categorical features. 
This is because categorical features cannot be perturbed arbitrarily: they can take specific discrete values, but not just any value on the real line.","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"Consider the following example:","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"y = rand([1,0],4)\nX = (\n name=categorical([\"Danesh\", \"Lee\", \"Mary\", \"John\"]),\n grade=categorical([\"A\", \"B\", \"A\", \"C\"], ordered=true),\n sex=categorical([\"male\",\"female\",\"male\",\"male\"]),\n height=[1.85, 1.67, 1.5, 1.67],\n)\nschema(X)","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"┌────────┬──────────────────┬──────────────────────────────────┐\n│ names │ scitypes │ types │\n├────────┼──────────────────┼──────────────────────────────────┤\n│ name │ Multiclass{4} │ CategoricalValue{String, UInt32} │\n│ grade │ OrderedFactor{3} │ CategoricalValue{String, UInt32} │\n│ sex │ Multiclass{2} │ CategoricalValue{String, UInt32} │\n│ height │ Continuous │ Float64 │\n└────────┴──────────────────┴──────────────────────────────────┘","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"Typically, in the context of Unsupervised Learning, categorical features are one-hot or dummy encoded. To this end, we could use MLJ, for example:","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"hot = OneHotEncoder()\nmach = MLJBase.fit!(machine(hot, X))\nW = MLJBase.transform(mach, X)\nX = permutedims(MLJBase.matrix(W))","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"In all likelihood, this pre-processing step already happens at the stage when the supervised model is trained. Since our counterfactual generators need to work in the same feature domain as the model they are intended to explain, we assume that categorical features are already encoded.","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"The CounterfactualData constructor takes two optional arguments that can be used to specify the indices of categorical and continuous features. By default, all features are assumed to be continuous. For categorical features, the constructor expects an array of arrays of integers (Vector{Vector{Int}}) where each subarray includes the indices of all one-hot encoded rows related to a single categorical feature. 
In the example above, the name feature is one-hot encoded across rows 1, 2, 3 and 4 of X, the grade feature is encoded across the following three rows, etc.","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"schema(W)","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"┌──────────────┬────────────┬─────────┐\n│ names │ scitypes │ types │\n├──────────────┼────────────┼─────────┤\n│ name__Danesh │ Continuous │ Float64 │\n│ name__John │ Continuous │ Float64 │\n│ name__Lee │ Continuous │ Float64 │\n│ name__Mary │ Continuous │ Float64 │\n│ grade__A │ Continuous │ Float64 │\n│ grade__B │ Continuous │ Float64 │\n│ grade__C │ Continuous │ Float64 │\n│ sex__female │ Continuous │ Float64 │\n│ sex__male │ Continuous │ Float64 │\n│ height │ Continuous │ Float64 │\n└──────────────┴────────────┴─────────┘","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"The code chunk below assigns the categorical and continuous feature indices:","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"features_categorical = [\n [1,2,3,4], # name\n [5,6,7], # grade\n [8,9] # sex\n]\nfeatures_continuous = [10]","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"When instantiating the data container, these indices just need to be supplied as keyword arguments:","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"counterfactual_data = CounterfactualData(\n X,y;\n features_categorical = features_categorical,\n features_continuous = features_continuous\n)","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"This will ensure that the discrete domain of categorical features is respected in the counterfactual search. We achieve this through a form of Projected Gradient Descent and it works for any of our counterfactual generators.","category":"page"},{"location":"tutorials/data_preprocessing/#Example","page":"Handling Data","title":"Example","text":"","category":"section"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"To see this in action, let’s load some synthetic data using MLJ:","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"N = 1000\nX, ys = MLJBase.make_blobs(N, 2; centers=2, as_table=false, center_box=(-5 => 5), cluster_std=0.5)\nys .= ys.==2","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"Next, we generate a synthetic categorical feature based on the output variable. 
First, we define the discrete levels:","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"cat_values = [\"X\",\"Y\",\"Z\"]","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"Next, we impose that the categorical feature is most likely to take the first discrete level, namely X, whenever y is equal to 1.","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"xcat = map(ys) do y\n if y==1\n x = sample(cat_values, Weights([0.8,0.1,0.1]))\n else\n x = sample(cat_values, Weights([0.1,0.1,0.8]))\n end\nend\nxcat = categorical(xcat)\nX = (\n x1 = X[:,1],\n x2 = X[:,2],\n x3 = xcat\n)\nschema(X)","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"As above, we use a OneHotEncoder to transform the data:","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"hot = OneHotEncoder()\nmach = MLJBase.fit!(machine(hot, X))\nW = MLJBase.transform(mach, X)\nschema(W)\nX = permutedims(MLJBase.matrix(W))","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"Finally, we assign the categorical indices and instantiate our data container:","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"features_categorical = [collect(3:size(X,1))]\ncounterfactual_data = CounterfactualData(\n X,ys';\n features_categorical = features_categorical,\n)","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"With the data pre-processed we can use the fit_model function to train a simple classifier:","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"M = fit_model(counterfactual_data, :Linear)","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"Now it is finally time to generate counterfactuals. 
We first define 1 as our target and then choose a random sample from the non-target class:","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"target = 1\nfactual = 0\nchosen = rand(findall(predict_label(M, counterfactual_data) .== factual))\nx = select_factual(counterfactual_data, chosen) ","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"5×1 Matrix{Float32}:\n -3.879591\n 1.7199689\n 0.0\n 0.0\n 1.0","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"The factual x belongs to group Z.","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"We generate a counterfactual for x using the standard API call:","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"generator = GenericGenerator()\nce = generate_counterfactual(x, target, counterfactual_data, M, generator)","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"CounterfactualExplanation\nConvergence: ✅ after 1 steps.","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"The search yields the following counterfactual:","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"x′ = counterfactual(ce)","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"5-element Vector{Float32}:\n -3.89187\n 0.25591564\n 1.0\n 0.0\n 0.0","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"It belongs to group X.","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"This is intuitive because by construction the categorical variable is most likely to take that value when y is equal to the target outcome.","category":"page"},{"location":"tutorials/data_preprocessing/#Immutable-Features","page":"Handling Data","title":"Immutable Features","text":"","category":"section"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"In practice, features usually cannot be perturbed arbitrarily. Suppose, for example, that one of the features used by a bank to predict the creditworthiness of its clients is gender. If a counterfactual explanation for the prediction model indicates that female clients should change their gender to improve their creditworthiness, then this is an interesting insight (it reveals gender bias), but it is not usually an actionable transformation in practice. In such cases, we may want to constrain the mutability of features to ensure actionable and realistic recourse.","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"To illustrate how this can be implemented in CounterfactualExplanations.jl we will continue to work with the synthetic data from the previous section. Mutability of features can be defined in terms of four different options: 1) the feature is mutable in both directions, 2) the feature can only increase (e.g. age), 3) the feature can only decrease (e.g. 
time left until your next deadline) and 4) the feature is not mutable (e.g. skin colour, ethnicity, …). To specify which category a feature belongs to, you can pass a vector of symbols containing the mutability constraints at the pre-processing stage. For each feature you can choose from these four options: :both (mutable in both directions), :increase (only up), :decrease (only down) and :none (immutable). By default, nothing is passed to that keyword argument and it is assumed that all features are mutable in both directions.","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"Below we impose that the second feature is immutable.","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"counterfactual_data = CounterfactualData(load_linearly_separable()...)\nM = fit_model(counterfactual_data, :Linear)\ncounterfactual_data.mutability = [:both, :none]","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"target = 2\nfactual = 1\nchosen = rand(findall(predict_label(M, counterfactual_data) .== factual))\nx = select_factual(counterfactual_data, chosen) \nce = generate_counterfactual(x, target, counterfactual_data, M, generator)","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"The resulting counterfactual path is shown in the chart below. Since only the first feature can be perturbed, the sample can only move along the horizontal axis.","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"plot(ce)","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"(Image: )","category":"page"},{"location":"tutorials/data_preprocessing/#Domain-constraints","page":"Handling Data","title":"Domain constraints","text":"","category":"section"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"In some cases, we may also want to constrain the domain of some feature. For example, age as a feature is constrained to a range from 0 to some upper bound corresponding perhaps to the average life expectancy of humans. 
Below, for example, we impose a lower bound of 0.5 for our two features.","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"counterfactual_data.mutability = [:both, :both]\ncounterfactual_data.domain = [(0.5,Inf) for var in counterfactual_data.features_continuous]","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"This results in the counterfactual path shown below: since features are not allowed to be perturbed below the lower bound, the resulting counterfactual falls just short of the threshold probability gamma.","category":"page"},{"location":"tutorials/data_preprocessing/","page":"Handling Data","title":"Handling Data","text":"ce = generate_counterfactual(x, target, counterfactual_data, M, generator)\nplot(ce)","category":"page"},{"location":"assets/resources/#Further-Resources","page":"📚 Additional Resources","title":"Further Resources","text":"","category":"section"},{"location":"assets/resources/#JuliaCon-2022","page":"📚 Additional Resources","title":"JuliaCon 2022","text":"","category":"section"},{"location":"assets/resources/","page":"📚 Additional Resources","title":"📚 Additional Resources","text":"Slides: link","category":"page"},{"location":"assets/resources/#JuliaCon-Proceedings-Paper","page":"📚 Additional Resources","title":"JuliaCon Proceedings Paper","text":"","category":"section"},{"location":"assets/resources/","page":"📚 Additional Resources","title":"📚 Additional Resources","text":"TBD","category":"page"},{"location":"explanation/generators/greedy/","page":"Greedy","title":"Greedy","text":"CurrentModule = CounterfactualExplanations ","category":"page"},{"location":"explanation/generators/greedy/#GreedyGenerator","page":"Greedy","title":"GreedyGenerator","text":"","category":"section"},{"location":"explanation/generators/greedy/","page":"Greedy","title":"Greedy","text":"We use the term greedy to describe the counterfactual generator introduced by Schut et al. (2021).","category":"page"},{"location":"explanation/generators/greedy/#Description","page":"Greedy","title":"Description","text":"","category":"section"},{"location":"explanation/generators/greedy/","page":"Greedy","title":"Greedy","text":"The Greedy generator works under the premise of generating realistic counterfactuals by minimizing predictive uncertainty. Schut et al. (2021) show that for models that incorporate predictive uncertainty in their predictions, maximizing the predictive probability corresponds to minimizing the predictive uncertainty: by construction, the generated counterfactual will therefore be realistic (low epistemic uncertainty) and unambiguous (low aleatoric uncertainty).","category":"page"},{"location":"explanation/generators/greedy/","page":"Greedy","title":"Greedy","text":"For the counterfactual search Schut et al. (2021) propose using a Jacobian-based Saliency Map Attack (JSMA). It is greedy in the sense that it is an “iterative algorithm that updates the most salient feature, i.e. the feature that has the largest influence on the classification, by delta at each step” (Schut et al. 
2021).","category":"page"},{"location":"explanation/generators/greedy/#Usage","page":"Greedy","title":"Usage","text":"","category":"section"},{"location":"explanation/generators/greedy/","page":"Greedy","title":"Greedy","text":"The approach can be used in our package as follows:","category":"page"},{"location":"explanation/generators/greedy/","page":"Greedy","title":"Greedy","text":"M = fit_model(counterfactual_data, :DeepEnsemble)\ngenerator = GreedyGenerator()\nce = generate_counterfactual(x, target, counterfactual_data, M, generator)\nplot(ce)","category":"page"},{"location":"explanation/generators/greedy/","page":"Greedy","title":"Greedy","text":"(Image: )","category":"page"},{"location":"explanation/generators/greedy/#References","page":"Greedy","title":"References","text":"","category":"section"},{"location":"explanation/generators/greedy/","page":"Greedy","title":"Greedy","text":"Schut, Lisa, Oscar Key, Rory Mc Grath, Luca Costabello, Bogdan Sacaleanu, Yarin Gal, et al. 2021. “Generating Interpretable Counterfactual Explanations By Implicit Minimisation of Epistemic and Aleatoric Uncertainties.” In International Conference on Artificial Intelligence and Statistics, 1756–64. PMLR.","category":"page"},{"location":"explanation/generators/probe/","page":"PROBE","title":"PROBE","text":"CurrentModule = CounterfactualExplanations ","category":"page"},{"location":"explanation/generators/probe/#ProbeGenerator","page":"PROBE","title":"ProbeGenerator","text":"","category":"section"},{"location":"explanation/generators/probe/","page":"PROBE","title":"PROBE","text":"The ProbeGenerator is designed to navigate the trade-offs between costs and robustness in Algorithmic Recourse (Pawelczyk et al. 2022).","category":"page"},{"location":"explanation/generators/probe/#Description","page":"PROBE","title":"Description","text":"","category":"section"},{"location":"explanation/generators/probe/","page":"PROBE","title":"PROBE","text":"The goal of ProbeGenerator is to find a recourse x’ whose prediction at any point y within some set around x’ belongs to the positive class with probability 1 - r, where r is the recourse invalidation rate. It minimizes the gap between the achieved and desired recourse invalidation rates, minimizes recourse costs, and also ensures that the resulting recourse achieves a positive model prediction.","category":"page"},{"location":"explanation/generators/probe/#Explanation","page":"PROBE","title":"Explanation","text":"","category":"section"},{"location":"explanation/generators/probe/","page":"PROBE","title":"PROBE","text":"The loss function of this generator is defined below. R is a hinge loss parameter which helps control for robustness. The loss and penalty functions can still be chosen freely.","category":"page"},{"location":"explanation/generators/probe/","page":"PROBE","title":"PROBE","text":"\begin{aligned}\nR(x^\prime; \sigma^2 I) + l(f(x^\prime), s) + \lambda d_c(x^\prime, x)\n\end{aligned}","category":"page"},{"location":"explanation/generators/probe/","page":"PROBE","title":"PROBE","text":"R uses the following formula to control for noise. 
It generates small perturbations and checks how often the counterfactual explanation flips back to a factual one when small amounts of noise are added to it.","category":"page"},{"location":"explanation/generators/probe/","page":"PROBE","title":"PROBE","text":"\begin{aligned}\n\Delta(x^{\hat{E}}) = \mathbb{E}_{\varepsilon}\left[h(x^{\hat{E}}) - h(x^{\hat{E}} + \varepsilon)\right]\n\end{aligned}","category":"page"},{"location":"explanation/generators/probe/","page":"PROBE","title":"PROBE","text":"The above formula is not differentiable. For this reason the generator uses the closed form version of the formula below.","category":"page"},{"location":"explanation/generators/probe/","page":"PROBE","title":"PROBE","text":"\begin{equation}\n\tilde{\Delta}(x^{\hat{E}}, \sigma^2 I) = 1 - \Phi\left(\frac{\sqrt{f(x^{\hat{E}})}}{\sqrt{\nabla f(x^{\hat{E}})^T \sigma^2 I \nabla f(x^{\hat{E}})}}\right)\n\end{equation}","category":"page"},{"location":"explanation/generators/probe/#Usage","page":"PROBE","title":"Usage","text":"","category":"section"},{"location":"explanation/generators/probe/","page":"PROBE","title":"PROBE","text":"Generating a counterfactual with the data loaded and generator chosen works as follows:","category":"page"},{"location":"explanation/generators/probe/","page":"PROBE","title":"PROBE","text":"Note: It is important to set the convergence to “:invalidation_rate” here.","category":"page"},{"location":"explanation/generators/probe/","page":"PROBE","title":"PROBE","text":"M = fit_model(counterfactual_data, :DeepEnsemble)\nopt = Descent(0.01)\ngenerator = CounterfactualExplanations.Generators.ProbeGenerator(opt=opt)\nconv = CounterfactualExplanations.Convergence.InvalidationRateConvergence(;invalidation_rate=0.5)\nce = generate_counterfactual(x, target, counterfactual_data, M, generator, convergence=conv)\nplot(ce)","category":"page"},{"location":"explanation/generators/probe/","page":"PROBE","title":"PROBE","text":"Choosing different invalidation rates makes the counterfactual more or less robust. The following plot shows the counterfactuals generated for different invalidation rates.","category":"page"},{"location":"explanation/generators/probe/","page":"PROBE","title":"PROBE","text":"(Image: )","category":"page"},{"location":"explanation/generators/probe/#References","page":"PROBE","title":"References","text":"","category":"section"},{"location":"explanation/generators/probe/","page":"PROBE","title":"PROBE","text":"Pawelczyk, Martin, Teresa Datta, Johannes van-den-Heuvel, Gjergji Kasneci, and Himabindu Lakkaraju. 2022. “Probabilistically Robust Recourse: Navigating the Trade-Offs Between Costs and Robustness in Algorithmic Recourse.” arXiv Preprint arXiv:2203.06768.","category":"page"},{"location":"extensions/neurotree/","page":"NeuroTrees","title":"NeuroTrees","text":"CurrentModule = CounterfactualExplanations ","category":"page"},{"location":"extensions/neurotree/#[NeuroTreeModels.jl](https://evovest.github.io/NeuroTreeModels.jl/dev/)","page":"NeuroTrees","title":"NeuroTreeModels.jl","text":"","category":"section"},{"location":"extensions/neurotree/","page":"NeuroTrees","title":"NeuroTrees","text":"NeuroTreeModels.jl is a package that provides a framework for training differentiable tree-based models. This is relevant to the work on counterfactual explanations (CE), which often assumes that the underlying black-box model is differentiable with respect to its input. The literature on CE therefore regularly focuses exclusively on explaining deep learning models. 
This is at odds with the fact that the literature also typically focuses on tabular data, which is often best modeled by tree-based models (Grinsztajn, Oyallon, and Varoquaux 2022). The extension for NeuroTreeModels.jl provides a way to bridge this gap by allowing users to apply existing gradient-based CE methods to differentiable tree-based models.","category":"page"},{"location":"extensions/neurotree/","page":"NeuroTrees","title":"NeuroTrees","text":"warning: Experimental Feature\nPlease note that this extension is still experimental. Neither the behaviour of differentiable tree-based models nor their interplay with counterfactual explanations is well understood at this point. If you encounter any issues, please report them to the package maintainers. Your feedback is highly appreciated.Please also note that this extension is only tested on Julia 1.9 and higher, due to compatibility issues.","category":"page"},{"location":"extensions/neurotree/#Example","page":"NeuroTrees","title":"Example","text":"","category":"section"},{"location":"extensions/neurotree/","page":"NeuroTrees","title":"NeuroTrees","text":"The extension will be loaded automatically when loading the NeuroTreeModels package (assuming the CounterfactualExplanations package is also loaded).","category":"page"},{"location":"extensions/neurotree/","page":"NeuroTrees","title":"NeuroTrees","text":"using NeuroTreeModels","category":"page"},{"location":"extensions/neurotree/","page":"NeuroTrees","title":"NeuroTrees","text":"Next, we will fit a NeuroTree model to the moons dataset using our standard package API for doing so.","category":"page"},{"location":"extensions/neurotree/","page":"NeuroTrees","title":"NeuroTrees","text":"# Fit model to data:\ndata = CounterfactualData(load_moons()...)\nM = fit_model(\n data, :NeuroTree; \n depth=2, lr=5e-2, nrounds=50, batchsize=10\n)","category":"page"},{"location":"extensions/neurotree/","page":"NeuroTrees","title":"NeuroTrees","text":"NeuroTreeExt.NeuroTreeModel(NeuroTreeRegressor(loss = mlogloss, …), :classification_multi, NeuroTreeModels.NeuroTreeModel{NeuroTreeModels.MLogLoss, Chain{Tuple{BatchNorm{typeof(identity), Vector{Float32}, Float32, Vector{Float32}}, NeuroTreeModels.StackTree}}}(NeuroTreeModels.MLogLoss, Chain(BatchNorm(2, active=false), NeuroTreeModels.StackTree(NeuroTree[NeuroTree{Matrix{Float32}, Vector{Float32}, Array{Float32, 3}}(Float32[1.8824593 -0.28222033; -2.680499 0.67347014; … ; -1.0722864 1.3651229; -2.0926774 1.63557], Float32[-3.4070241, 4.545113, 1.0882677, -0.3497498, -2.766766, 1.9072449, -0.9736261, 3.9750721, 1.726214, 3.7279263 … -0.0664266, -0.4214582, -2.3816268, -3.1371245, 0.76548636, 2.636373, 2.4558601, 0.893434, -1.9484522, 4.793434], Float32[3.44271 -6.334693 -0.6308845 3.385659; -3.4316056 6.297003 0.7254221 -3.3283486;;; -3.7011054 -0.17596768 0.15429471 2.270125; 3.4926674 0.026218029 -0.19753197 -2.2337704;;; 1.1795454 -4.315231 0.28486454 1.9995956; -0.9651108 4.0999455 -0.05312265 -1.8039354;;; … ;;; 2.5076811 -0.46358463 -3.5438805 0.0686823; -2.592356 0.47884527 3.781507 -0.022692114;;; -0.59115165 -3.234046 0.09896194 2.375202; 0.5592871 3.3082843 -0.014032216 -2.1876256;;; 2.039389 -0.10134532 2.6637273 -4.999703; -2.0289893 0.3368772 -2.5739825 5.069934], tanh)])), Dict{Symbol, Any}(:feature_names => [:x1, :x2], :nrounds => 50, :device => :cpu)))","category":"page"},{"location":"extensions/neurotree/","page":"NeuroTrees","title":"NeuroTrees","text":"Finally, we select a factual instance and generate a counterfactual explanation for it 
using the generic gradient-based CE method.","category":"page"},{"location":"extensions/neurotree/","page":"NeuroTrees","title":"NeuroTrees","text":"# Select a factual instance:\ntarget = 1\nfactual = 0\nchosen = rand(findall(predict_label(M, data) .== factual))\nx = select_factual(data, chosen)\n\n# Generate counterfactual explanation:\nη = 0.01\ngenerator = GenericGenerator(; opt=Descent(η), λ=0.01)\nconv = CounterfactualExplanations.Convergence.DecisionThresholdConvergence(;\n decision_threshold=0.9, max_iter=100\n)\nce = generate_counterfactual(x, target, data, M, generator; convergence=conv)\nplot(ce, alpha=0.1)","category":"page"},{"location":"extensions/neurotree/","page":"NeuroTrees","title":"NeuroTrees","text":"(Image: )","category":"page"},{"location":"extensions/neurotree/#References","page":"NeuroTrees","title":"References","text":"","category":"section"},{"location":"extensions/neurotree/","page":"NeuroTrees","title":"NeuroTrees","text":"Grinsztajn, Léo, Edouard Oyallon, and Gaël Varoquaux. 2022. “Why Do Tree-Based Models Still Outperform Deep Learning on Tabular Data?” https://arxiv.org/abs/2207.08815.","category":"page"},{"location":"tutorials/generators/","page":"Handling Generators","title":"Handling Generators","text":"CurrentModule = CounterfactualExplanations ","category":"page"},{"location":"tutorials/generators/#Handling-Generators","page":"Handling Generators","title":"Handling Generators","text":"","category":"section"},{"location":"tutorials/generators/","page":"Handling Generators","title":"Handling Generators","text":"Generating Counterfactual Explanations can be seen as a generative modelling task because it involves generating samples in the input space: x \\sim \\mathcal{X}. In this tutorial, we will introduce how counterfactual GradientBasedGenerators are used. They are discussed in more detail in the explanatory section of the documentation.","category":"page"},{"location":"tutorials/generators/#Composable-Generators","page":"Handling Generators","title":"Composable Generators","text":"","category":"section"},{"location":"tutorials/generators/","page":"Handling Generators","title":"Handling Generators","text":"warning: Breaking Changes Expected\nWork on this feature is still in its very early stages and breaking changes should be expected. ","category":"page"},{"location":"tutorials/generators/","page":"Handling Generators","title":"Handling Generators","text":"One of the key objectives for this package is Composability. It turns out that many of the various counterfactual generators that have been proposed in the literature essentially do the same thing: they optimize an objective function. Formally we have,","category":"page"},{"location":"tutorials/generators/","page":"Handling Generators","title":"Handling Generators","text":"\n\\begin{aligned}\n\\mathbf{s}^\\prime = \\arg \\min_{\\mathbf{s}^\\prime \\in \\mathcal{S}} \\left\\{ \\text{yloss}(M(f(\\mathbf{s}^\\prime)), y^*) + \\lambda \\, \\text{cost}(f(\\mathbf{s}^\\prime)) \\right\\}\n\\end{aligned} \n\\qquad(1)","category":"page"},{"location":"tutorials/generators/","page":"Handling Generators","title":"Handling Generators","text":"where \\text{yloss} denotes the main loss function and \\text{cost} is a penalty term (Altmeyer et al. 2023).","category":"page"},
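{"location":"tutorials/generators/","page":"Handling Generators","title":"Handling Generators","text":"To make the structure of this objective concrete, the snippet below spells it out as plain Julia code. This is a minimal sketch rather than the package's internal implementation; the names yloss, cost and objective simply mirror the notation in Equation 1, and the concrete loss and penalty are arbitrary choices.","category":"page"},{"location":"tutorials/generators/","page":"Handling Generators","title":"Handling Generators","text":"using Flux\n\n# Sketch of Equation 1: counterfactual search minimizes a loss plus a weighted penalty.\nyloss(M, x_cf, y_target) = Flux.Losses.logitbinarycrossentropy(M(x_cf), y_target)\ncost(x, x_cf) = sum(abs.(x_cf .- x))  # penalty: L1 distance from the factual\nobjective(M, x, x_cf, y_target; λ=0.1) = yloss(M, x_cf, y_target) + λ * cost(x, x_cf)","category":"page"},{"location":"tutorials/generators/","page":"Handling Generators","title":"Handling Generators","text":"Without going into further detail here, the important thing to mention is that Equation 1 very closely describes how counterfactual search is actually implemented in the package. 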
In other words, all off-the-shelf generators currently implemented work with that same objective. They just vary in the way that penalties are defined, for example. This gives rise to an interesting idea:","category":"page"},{"location":"tutorials/generators/","page":"Handling Generators","title":"Handling Generators","text":"Why not compose generators that combine ideas from different off-the-shelf generators?","category":"page"},{"location":"tutorials/generators/","page":"Handling Generators","title":"Handling Generators","text":"The GradientBasedGenerator class provides a straightforward way to do this, without requiring users to build custom GradientBasedGenerators from scratch. It can be instantiated as follows:","category":"page"},{"location":"tutorials/generators/","page":"Handling Generators","title":"Handling Generators","text":"generator = GradientBasedGenerator()","category":"page"},{"location":"tutorials/generators/","page":"Handling Generators","title":"Handling Generators","text":"By default, this creates a generator that simply performs gradient descent without any penalties. To modify the behaviour of the generator, you can define the counterfactual search objective function using the @objective macro:","category":"page"},{"location":"tutorials/generators/","page":"Handling Generators","title":"Handling Generators","text":"@objective(generator, logitbinarycrossentropy + 0.1distance_l2 + 1.0ddp_diversity)","category":"page"},{"location":"tutorials/generators/","page":"Handling Generators","title":"Handling Generators","text":"Here we have essentially created a version of the DiCEGenerator:","category":"page"},{"location":"tutorials/generators/","page":"Handling Generators","title":"Handling Generators","text":"ce = generate_counterfactual(x, target, counterfactual_data, M, generator; num_counterfactuals=5)\nplot(ce)","category":"page"},{"location":"tutorials/generators/","page":"Handling Generators","title":"Handling Generators","text":"(Image: )","category":"page"},{"location":"tutorials/generators/","page":"Handling Generators","title":"Handling Generators","text":"Multiple macros can be chained using Chain.jl, making it easy to create entirely new flavours of counterfactual generators. The following generator, for example, combines ideas from DiCE (Mothilal, Sharma, and Tan 2020) and REVISE (Joshi et al. 2019):","category":"page"},{"location":"tutorials/generators/","page":"Handling Generators","title":"Handling Generators","text":"@chain generator begin\n @objective logitcrossentropy + 1.0ddp_diversity # DiCE (Mothilal et al. 2020)\n @with_optimiser Flux.Adam(0.1) \n @search_latent_space # REVISE (Joshi et al. 2019)\nend","category":"page"},{"location":"tutorials/generators/","page":"Handling Generators","title":"Handling Generators","text":"Let’s take this generator to our MNIST dataset and generate a counterfactual explanation for turning a 0 into an 8.","category":"page"},{"location":"tutorials/generators/","page":"Handling Generators","title":"Handling Generators","text":"(Image: )","category":"page"},{"location":"tutorials/generators/#Off-the-Shelf-Generators","page":"Handling Generators","title":"Off-the-Shelf Generators","text":"","category":"section"},{"location":"tutorials/generators/","page":"Handling Generators","title":"Handling Generators","text":"Off-the-shelf generators are just default recipes for counterfactual generators. 
Currently, the following off-the-shelf counterfactual generators are implemented in the package:","category":"page"},{"location":"tutorials/generators/","page":"Handling Generators","title":"Handling Generators","text":"generator_catalogue","category":"page"},{"location":"tutorials/generators/","page":"Handling Generators","title":"Handling Generators","text":"Dict{Symbol, Any} with 11 entries:\n :gravitational => GravitationalGenerator\n :growing_spheres => GrowingSpheresGenerator\n :revise => REVISEGenerator\n :clue => CLUEGenerator\n :probe => ProbeGenerator\n :dice => DiCEGenerator\n :feature_tweak => FeatureTweakGenerator\n :claproar => ClaPROARGenerator\n :wachter => WachterGenerator\n :generic => GenericGenerator\n :greedy => GreedyGenerator","category":"page"},{"location":"tutorials/generators/","page":"Handling Generators","title":"Handling Generators","text":"To specify the type of generator you want to use, you can simply instantiate it:","category":"page"},{"location":"tutorials/generators/","page":"Handling Generators","title":"Handling Generators","text":"# Search:\ngenerator = GenericGenerator()\nce = generate_counterfactual(x, target, counterfactual_data, M, generator)\nplot(ce)","category":"page"},{"location":"tutorials/generators/","page":"Handling Generators","title":"Handling Generators","text":"(Image: )","category":"page"},{"location":"tutorials/generators/","page":"Handling Generators","title":"Handling Generators","text":"We generally make an effort to follow the literature as closely as possible when implementing off-the-shelf generators.","category":"page"},{"location":"tutorials/generators/#References","page":"Handling Generators","title":"References","text":"","category":"section"},{"location":"tutorials/generators/","page":"Handling Generators","title":"Handling Generators","text":"Altmeyer, Patrick, Giovan Angela, Aleksander Buszydlik, Karol Dobiczek, Arie van Deursen, and Cynthia Liem. 2023. “Endogenous Macrodynamics in Algorithmic Recourse.” In First IEEE Conference on Secure and Trustworthy Machine Learning. https://doi.org/10.1109/satml54575.2023.00036.","category":"page"},{"location":"tutorials/generators/","page":"Handling Generators","title":"Handling Generators","text":"Joshi, Shalmali, Oluwasanmi Koyejo, Warut Vijitbenjaronk, Been Kim, and Joydeep Ghosh. 2019. “Towards Realistic Individual Recourse and Actionable Explanations in Black-Box Decision Making Systems.” https://arxiv.org/abs/1907.09615.","category":"page"},{"location":"tutorials/generators/","page":"Handling Generators","title":"Handling Generators","text":"Mothilal, Ramaravind K, Amit Sharma, and Chenhao Tan. 2020. “Explaining Machine Learning Classifiers Through Diverse Counterfactual Explanations.” In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 607–17. 
https://doi.org/10.1145/3351095.3372850.","category":"page"},{"location":"explanation/generators/mint/","page":"MINT","title":"MINT","text":"CurrentModule = CounterfactualExplanations ","category":"page"},{"location":"explanation/generators/mint/#MINT-Generator","page":"MINT","title":"MINT Generator","text":"","category":"section"},{"location":"explanation/generators/mint/","page":"MINT","title":"MINT","text":"In this tutorial, we introduce the MINT generator, a counterfactual generator based on the Recourse through Minimal Intervention (MINT) method proposed by Karimi, Schölkopf, and Valera (2021).","category":"page"},{"location":"explanation/generators/mint/","page":"MINT","title":"MINT","text":"note: Note\nThere is currently no custom type for this generator, because we anticipate changes to the API for composable generators. This tutorial explains how counterfactuals can nonetheless be generated consistently with the MINT framework.","category":"page"},{"location":"explanation/generators/mint/#Description","page":"MINT","title":"Description","text":"","category":"section"},{"location":"explanation/generators/mint/","page":"MINT","title":"MINT","text":"The MINT generator incorporates causal reasoning into algorithmic recourse to achieve minimal interventions when generating a counterfactual explanation. The main idea is that merely perturbing the inputs of a black-box model without taking into account the causal relations in the data can lead to misleading recommendations. We therefore shift to a perspective where every action/perturbation is an intervention in the causal graph of the problem: the change affects not just the intervened-upon variable, but also its children in the causal structure. The generator utilizes a Structural Causal Model (SCM) to encode the variables in a way that propagates causal effects, and it uses a generic gradient-based generator to create the search path. That is, any gradient-based generator (ECCCo, REVISE, Wachter, …) can be used with the MINT SCM encoder to generate counterfactual samples in latent space for minimal-intervention algorithmic recourse.","category":"page"},{"location":"explanation/generators/mint/","page":"MINT","title":"MINT","text":"The MINT algorithm minimizes a loss function that combines the causal constraints of the SCM and the distance between the generated counterfactual and the original input. Since we want a gradient-based generator, we need to turn the constrained optimization problem into an unconstrained one, which we do by using the Lagrangian. Initially, as defined in Karimi, Schölkopf, and Valera (2021), we aim to find the minimal-cost set of actions A (in the form of structural interventions) that results in a counterfactual instance yielding the favorable output from h,","category":"page"},{"location":"explanation/generators/mint/","page":"MINT","title":"MINT","text":"\\begin{aligned}\nA^* \\in \\arg\\min_A \\text{cost}(A; \\mathbf{x}_F) \\\\\n\\textrm{s.t.} \\quad h(\\mathbf{x}_{SCF}) \\neq h(\\mathbf{x}_F)\n\\end{aligned}","category":"page"},{"location":"explanation/generators/mint/","page":"MINT","title":"MINT","text":"where \\mathbf{x}_F is the original input, \\mathbf{x}_{SCF} is the counterfactual instance, and h is the black-box model. 
We use the \mathbf{x}_{SCF} terminology because the counterfactual is derived from the SCM,","category":"page"},{"location":"explanation/generators/mint/","page":"MINT","title":"MINT","text":"\\begin{equation}\nx_{SCF_i} = \n\\begin{cases}\nx_{F_i} + \\delta_i & \\text{if } i \\in I \\\\\nx_{F_i} + f_i(\\text{pa}_{SCF_i}) - f_i(\\text{pa}_{F_i}) & \\text{if } i \\notin I\n\\end{cases}\n\\end{equation}","category":"page"},{"location":"explanation/generators/mint/","page":"MINT","title":"MINT","text":"where I is the set of intervened-upon variables, f_i is the function that generates the value of variable i given its parents, and \\text{pa}_{SCF_i} and \\text{pa}_{F_i} are the parents of variable i in the counterfactual and original instance, respectively. This closed-form expression for the decision variable \\mathbf{x}_{SCF} is what makes it possible to use a gradient-based generator, since the Lagrangian is differentiable,","category":"page"},{"location":"explanation/generators/mint/","page":"MINT","title":"MINT","text":"\\begin{equation}\n\\mathcal{L}_{\\texttt{MINT}}(\\mathbf{x}_{SCF}) = \\lambda \\, \\text{cost}(\\mathbf{x}_{SCF}, \\mathbf{x}_F) + \\text{yloss}(\\mathbf{x}_{SCF}, y^*)\n\\end{equation}","category":"page"},{"location":"explanation/generators/mint/#Usage","page":"MINT","title":"Usage","text":"","category":"section"},{"location":"explanation/generators/mint/","page":"MINT","title":"MINT","text":"As we already stated, the MINT generator is not yet implemented as a custom type in the package. However, the MINT algorithm can be implemented using the generic generator and the SCM encoder, which we implement using the CausalInference.jl package. The following code snippet shows how to use the MINT algorithm to generate counterfactuals using any gradient-based generator:","category":"page"},{"location":"explanation/generators/mint/","page":"MINT","title":"MINT","text":"using CausalInference\nusing CounterfactualExplanations\nusing CounterfactualExplanations.DataPreprocessing: fit_transformer\nusing Tables # needed for Tables.matrix below\n\nN = 2000\ndf = (\n x = randn(N), \n v = randn(N) .^ 2 + randn(N) * 0.25, \n w = cos.(randn(N)) + randn(N) * 0.25, \n z = randn(N) .^ 2 + cos.(randn(N)) + randn(N) * 0.25 + randn(N) * 0.25, \n s = sin.(randn(N) .^ 2 + cos.(randn(N)) + randn(N) * 0.25 + randn(N) * 0.25) + randn(N) * 0.25\n)\ny_lab = rand(0:2, N)\ncounterfactual_data_scm = CounterfactualData(Tables.matrix(df; transpose=true), y_lab)\n\nM = fit_model(counterfactual_data_scm, :Linear)\nchosen = rand(findall(predict_label(M, counterfactual_data_scm) .== 1))\nx = select_factual(counterfactual_data_scm, chosen)\n\ndata_scm = deepcopy(counterfactual_data_scm)\ndata_scm.input_encoder = fit_transformer(data_scm, CausalInference.SCM)\n\nce = generate_counterfactual(x, 2, data_scm, M, GenericGenerator(); initialization=:identity)","category":"page"},{"location":"explanation/generators/mint/","page":"MINT","title":"MINT","text":"CounterfactualExplanation\nConvergence: ❌ after 100 steps.","category":"page"},
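{"location":"explanation/generators/mint/","page":"MINT","title":"MINT","text":"Since the SCM encoder is independent of the generator, other gradient-based generators can in principle be swapped in. The line below is a sketch under that assumption, reusing the objects defined above; DiCEGenerator is just one example of an alternative.","category":"page"},{"location":"explanation/generators/mint/","page":"MINT","title":"MINT","text":"# Sketch: the SCM encoder should work with any gradient-based generator (setup from above):\nce_dice = generate_counterfactual(x, 2, data_scm, M, DiCEGenerator(); initialization=:identity)","category":"page"},{"location":"explanation/generators/mint/","page":"MINT","title":"MINT","text":"note: Note\nThe above documentation is based on the information provided in the MINT paper. Please refer to the original paper for more detailed explanations and implementation specifics.","category":"page"},{"location":"explanation/generators/mint/#References","page":"MINT","title":"References","text":"","category":"section"},{"location":"explanation/generators/mint/","page":"MINT","title":"MINT","text":"Karimi, Amir-Hossein, Bernhard Schölkopf, and Isabel Valera. 2021. 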
“Algorithmic Recourse: From Counterfactual Explanations to Interventions.” In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 353–62.","category":"page"},{"location":"tutorials/parallelization/","page":"Parallelization","title":"Parallelization","text":"CurrentModule = CounterfactualExplanations ","category":"page"},{"location":"tutorials/parallelization/#Parallelization","page":"Parallelization","title":"Parallelization","text":"","category":"section"},{"location":"tutorials/parallelization/","page":"Parallelization","title":"Parallelization","text":"Version 0.1.15 adds support for parallelization through multi-processing. Currently, the only available backend for multi-processing is MPI.jl.","category":"page"},{"location":"tutorials/parallelization/#Available-functions","page":"Parallelization","title":"Available functions","text":"","category":"section"},{"location":"tutorials/parallelization/","page":"Parallelization","title":"Parallelization","text":"Parallelization is only available for certain functions. To check if a function is parallelizable, you can use the parallelizable function:","category":"page"},{"location":"tutorials/parallelization/","page":"Parallelization","title":"Parallelization","text":"using CounterfactualExplanations.Evaluation: evaluate, benchmark\nprintln(parallelizable(generate_counterfactual))\nprintln(parallelizable(evaluate))\nprintln(parallelizable(predict_label))","category":"page"},{"location":"tutorials/parallelization/","page":"Parallelization","title":"Parallelization","text":"true\ntrue\nfalse","category":"page"},{"location":"tutorials/parallelization/","page":"Parallelization","title":"Parallelization","text":"In the following, we will generate multiple counterfactuals and evaluate them in parallel:","category":"page"},{"location":"tutorials/parallelization/","page":"Parallelization","title":"Parallelization","text":"chosen = rand(findall(predict_label(M, counterfactual_data) .== factual), 1000)\nxs = select_factual(counterfactual_data, chosen)","category":"page"},{"location":"tutorials/parallelization/#Multi-threading","page":"Parallelization","title":"Multi-threading","text":"","category":"section"},{"location":"tutorials/parallelization/","page":"Parallelization","title":"Parallelization","text":"We first instantiate a ThreadsParallelizer object:","category":"page"},{"location":"tutorials/parallelization/","page":"Parallelization","title":"Parallelization","text":"parallelizer = ThreadsParallelizer()","category":"page"},{"location":"tutorials/parallelization/","page":"Parallelization","title":"Parallelization","text":"ThreadsParallelizer()","category":"page"},{"location":"tutorials/parallelization/","page":"Parallelization","title":"Parallelization","text":"To generate counterfactuals in parallel, we use the parallelize function:","category":"page"},{"location":"tutorials/parallelization/","page":"Parallelization","title":"Parallelization","text":"ces = @with_parallelizer parallelizer begin\n generate_counterfactual(\n xs,\n target,\n counterfactual_data,\n M,\n generator\n )\nend","category":"page"},{"location":"tutorials/parallelization/","page":"Parallelization","title":"Parallelization","text":"Generating counterfactuals ... 0%| | ETA: 0:01:29 (89.14 ms/it)Generating counterfactuals ... 
100%|███████| Time: 0:00:01 ( 1.59 ms/it)\n\n1000-element Vector{AbstractCounterfactualExplanation}:\n CounterfactualExplanation\nConvergence: ✅ after 7 steps.\n CounterfactualExplanation\nConvergence: ✅ after 7 steps.\n CounterfactualExplanation\nConvergence: ✅ after 8 steps.\n CounterfactualExplanation\nConvergence: ✅ after 6 steps.\n CounterfactualExplanation\nConvergence: ✅ after 7 steps.\n CounterfactualExplanation\nConvergence: ✅ after 8 steps.\n CounterfactualExplanation\nConvergence: ✅ after 8 steps.\n CounterfactualExplanation\nConvergence: ✅ after 8 steps.\n CounterfactualExplanation\nConvergence: ✅ after 7 steps.\n CounterfactualExplanation\nConvergence: ✅ after 7 steps.\n CounterfactualExplanation\nConvergence: ✅ after 7 steps.\n CounterfactualExplanation\nConvergence: ✅ after 8 steps.\n CounterfactualExplanation\nConvergence: ✅ after 6 steps.\n ⋮\n CounterfactualExplanation\nConvergence: ✅ after 9 steps.\n CounterfactualExplanation\nConvergence: ✅ after 7 steps.\n CounterfactualExplanation\nConvergence: ✅ after 6 steps.\n CounterfactualExplanation\nConvergence: ✅ after 7 steps.\n CounterfactualExplanation\nConvergence: ✅ after 7 steps.\n CounterfactualExplanation\nConvergence: ✅ after 8 steps.\n CounterfactualExplanation\nConvergence: ✅ after 7 steps.\n CounterfactualExplanation\nConvergence: ✅ after 8 steps.\n CounterfactualExplanation\nConvergence: ✅ after 7 steps.\n CounterfactualExplanation\nConvergence: ✅ after 7 steps.\n CounterfactualExplanation\nConvergence: ✅ after 8 steps.\n CounterfactualExplanation\nConvergence: ✅ after 7 steps.","category":"page"},{"location":"tutorials/parallelization/","page":"Parallelization","title":"Parallelization","text":"To evaluate counterfactuals in parallel, we again use the parallelize function:","category":"page"},{"location":"tutorials/parallelization/","page":"Parallelization","title":"Parallelization","text":"@with_parallelizer parallelizer evaluate(ces)","category":"page"},{"location":"tutorials/parallelization/","page":"Parallelization","title":"Parallelization","text":"Evaluating counterfactuals ... 0%| | ETA: 0:07:03 ( 0.42 s/it)Evaluating counterfactuals ... 
100%|███████| Time: 0:00:00 ( 0.86 ms/it)\n\n1000-element Vector{Any}:\n Vector[[1.0], Float32[3.2939816], [0.0]]\n Vector[[1.0], Float32[3.019046], [0.0]]\n Vector[[1.0], Float32[3.701171], [0.0]]\n Vector[[1.0], Float32[2.5611918], [0.0]]\n Vector[[1.0], Float32[2.9027307], [0.0]]\n Vector[[1.0], Float32[3.7893882], [0.0]]\n Vector[[1.0], Float32[3.5026522], [0.0]]\n Vector[[1.0], Float32[3.6317568], [0.0]]\n Vector[[1.0], Float32[3.084984], [0.0]]\n Vector[[1.0], Float32[3.2268934], [0.0]]\n Vector[[1.0], Float32[2.834947], [0.0]]\n Vector[[1.0], Float32[3.656587], [0.0]]\n Vector[[1.0], Float32[2.5985842], [0.0]]\n ⋮\n Vector[[1.0], Float32[4.067538], [0.0]]\n Vector[[1.0], Float32[3.02231], [0.0]]\n Vector[[1.0], Float32[2.748292], [0.0]]\n Vector[[1.0], Float32[2.9483426], [0.0]]\n Vector[[1.0], Float32[3.066149], [0.0]]\n Vector[[1.0], Float32[3.6018147], [0.0]]\n Vector[[1.0], Float32[3.0138078], [0.0]]\n Vector[[1.0], Float32[3.5724509], [0.0]]\n Vector[[1.0], Float32[3.117551], [0.0]]\n Vector[[1.0], Float32[2.9670508], [0.0]]\n Vector[[1.0], Float32[3.4107168], [0.0]]\n Vector[[1.0], Float32[3.0252533], [0.0]]","category":"page"},{"location":"tutorials/parallelization/","page":"Parallelization","title":"Parallelization","text":"Benchmarks can also be run with parallelization by specifying the parallelizer argument:","category":"page"},{"location":"tutorials/parallelization/","page":"Parallelization","title":"Parallelization","text":"# Models:\nbmk = benchmark(counterfactual_data; parallelizer = parallelizer)","category":"page"},{"location":"tutorials/parallelization/#MPI","page":"Parallelization","title":"MPI","text":"","category":"section"},{"location":"tutorials/parallelization/","page":"Parallelization","title":"Parallelization","text":"note: Note\nTo use MPI, you need to have MPI installed on your machine. Running the following code straight from a running Julia session will work, but it will be run on a single process. To execute the code on multiple processes, you need to run it from the command line with mpirun or mpiexec. For example, to run a script on 4 processes, you can run the following command from the command line:\n\nmpiexecjl --project -n 4 julia -e 'include(\"docs/src/srcipts/mpi.jl\")'For more information, see MPI.jl. ","category":"page"},{"location":"tutorials/parallelization/","page":"Parallelization","title":"Parallelization","text":"We first instantiate an MPIParallelizer object:","category":"page"},{"location":"tutorials/parallelization/","page":"Parallelization","title":"Parallelization","text":"import MPI\nMPI.Init()\nparallelizer = MPIParallelizer(MPI.COMM_WORLD; threaded=true)","category":"page"},{"location":"tutorials/parallelization/","page":"Parallelization","title":"Parallelization","text":"Precompiling MPIExt\n ✓ TaijaParallel → MPIExt\n 1 dependency successfully precompiled in 3 seconds. 255 already precompiled.\n[ Info: Precompiling MPIExt [48137b38-b316-530b-be8a-261f41e68c23]\n┌ Warning: Module TaijaParallel with build ID ffffffff-ffff-ffff-0001-2d458926c256 is missing from the cache.\n│ This may mean TaijaParallel [bf1c2c22-5e42-4e78-8b6b-92e6c673eeb0] does not support precompilation but is imported by a module that does.\n└ @ Base loading.jl:1948\n[ Info: Skipping precompilation since __precompile__(false). 
Importing MPIExt [48137b38-b316-530b-be8a-261f41e68c23].\n[ Info: Using `MPI.jl` for multi-processing.\n\nRunning on 1 processes.\n\nMPIExt.MPIParallelizer(MPI.Comm(1140850688), 0, 1, nothing, true)","category":"page"},{"location":"tutorials/parallelization/","page":"Parallelization","title":"Parallelization","text":"To generate counterfactuals in parallel, we use the parallelize function:","category":"page"},{"location":"tutorials/parallelization/","page":"Parallelization","title":"Parallelization","text":"ces = @with_parallelizer parallelizer begin\n generate_counterfactual(\n xs,\n target,\n counterfactual_data,\n M,\n generator\n )\nend","category":"page"},{"location":"tutorials/parallelization/","page":"Parallelization","title":"Parallelization","text":"Generating counterfactuals ... 9%|▋ | ETA: 0:00:01 ( 1.15 ms/it)Generating counterfactuals ... 19%|█▍ | ETA: 0:00:01 ( 1.07 ms/it)Generating counterfactuals ... 29%|██ | ETA: 0:00:01 ( 1.10 ms/it)Generating counterfactuals ... 39%|██▊ | ETA: 0:00:01 ( 1.08 ms/it)Generating counterfactuals ... 49%|███▍ | ETA: 0:00:01 ( 1.08 ms/it)Generating counterfactuals ... 59%|████▏ | ETA: 0:00:00 ( 1.08 ms/it)Generating counterfactuals ... 69%|████▊ | ETA: 0:00:00 ( 1.08 ms/it)Generating counterfactuals ... 79%|█████▌ | ETA: 0:00:00 ( 1.07 ms/it)Generating counterfactuals ... 89%|██████▎| ETA: 0:00:00 ( 1.07 ms/it)Generating counterfactuals ... 99%|██████▉| ETA: 0:00:00 ( 1.06 ms/it)Generating counterfactuals ... 100%|███████| Time: 0:00:01 ( 1.06 ms/it)\n\n1000-element Vector{AbstractCounterfactualExplanation}:\n CounterfactualExplanation\nConvergence: ✅ after 7 steps.\n CounterfactualExplanation\nConvergence: ✅ after 7 steps.\n CounterfactualExplanation\nConvergence: ✅ after 8 steps.\n CounterfactualExplanation\nConvergence: ✅ after 6 steps.\n CounterfactualExplanation\nConvergence: ✅ after 7 steps.\n CounterfactualExplanation\nConvergence: ✅ after 8 steps.\n CounterfactualExplanation\nConvergence: ✅ after 7 steps.\n CounterfactualExplanation\nConvergence: ✅ after 8 steps.\n CounterfactualExplanation\nConvergence: ✅ after 7 steps.\n CounterfactualExplanation\nConvergence: ✅ after 7 steps.\n CounterfactualExplanation\nConvergence: ✅ after 6 steps.\n CounterfactualExplanation\nConvergence: ✅ after 8 steps.\n CounterfactualExplanation\nConvergence: ✅ after 6 steps.\n ⋮\n CounterfactualExplanation\nConvergence: ✅ after 9 steps.\n CounterfactualExplanation\nConvergence: ✅ after 7 steps.\n CounterfactualExplanation\nConvergence: ✅ after 6 steps.\n CounterfactualExplanation\nConvergence: ✅ after 7 steps.\n CounterfactualExplanation\nConvergence: ✅ after 7 steps.\n CounterfactualExplanation\nConvergence: ✅ after 8 steps.\n CounterfactualExplanation\nConvergence: ✅ after 7 steps.\n CounterfactualExplanation\nConvergence: ✅ after 8 steps.\n CounterfactualExplanation\nConvergence: ✅ after 7 steps.\n CounterfactualExplanation\nConvergence: ✅ after 7 steps.\n CounterfactualExplanation\nConvergence: ✅ after 7 steps.\n CounterfactualExplanation\nConvergence: ✅ after 7 steps.","category":"page"},{"location":"tutorials/parallelization/","page":"Parallelization","title":"Parallelization","text":"To evaluate counterfactuals in parallel, we again use the parallelize function:","category":"page"},{"location":"tutorials/parallelization/","page":"Parallelization","title":"Parallelization","text":"@with_parallelizer parallelizer 
evaluate(ces)","category":"page"},{"location":"tutorials/parallelization/","page":"Parallelization","title":"Parallelization","text":"1000-element Vector{Any}:\n Vector[[1.0], Float32[3.0941274], [0.0]]\n Vector[[1.0], Float32[3.0894346], [0.0]]\n Vector[[1.0], Float32[3.5737448], [0.0]]\n Vector[[1.0], Float32[2.6201036], [0.0]]\n Vector[[1.0], Float32[2.8519764], [0.0]]\n Vector[[1.0], Float32[3.7762523], [0.0]]\n Vector[[1.0], Float32[3.4162796], [0.0]]\n Vector[[1.0], Float32[3.6095932], [0.0]]\n Vector[[1.0], Float32[3.1347957], [0.0]]\n Vector[[1.0], Float32[3.0313473], [0.0]]\n Vector[[1.0], Float32[2.7612567], [0.0]]\n Vector[[1.0], Float32[3.6191392], [0.0]]\n Vector[[1.0], Float32[2.610616], [0.0]]\n ⋮\n Vector[[1.0], Float32[4.0844703], [0.0]]\n Vector[[1.0], Float32[3.0119], [0.0]]\n Vector[[1.0], Float32[2.4461186], [0.0]]\n Vector[[1.0], Float32[3.071967], [0.0]]\n Vector[[1.0], Float32[3.132917], [0.0]]\n Vector[[1.0], Float32[3.5403214], [0.0]]\n Vector[[1.0], Float32[3.0588162], [0.0]]\n Vector[[1.0], Float32[3.5600657], [0.0]]\n Vector[[1.0], Float32[3.2205954], [0.0]]\n Vector[[1.0], Float32[2.896302], [0.0]]\n Vector[[1.0], Float32[3.2603998], [0.0]]\n Vector[[1.0], Float32[3.1369917], [0.0]]","category":"page"},{"location":"tutorials/parallelization/","page":"Parallelization","title":"Parallelization","text":"tip: Tip\nNote that parallelizable processes can be supplied as input to the macro either as a block or directly as an expression.","category":"page"},{"location":"tutorials/parallelization/","page":"Parallelization","title":"Parallelization","text":"Benchmarks can also be run with parallelization by specifying the parallelizer argument:","category":"page"},{"location":"tutorials/parallelization/","page":"Parallelization","title":"Parallelization","text":"# Models:\nbmk = benchmark(counterfactual_data; parallelizer = parallelizer)","category":"page"},{"location":"tutorials/parallelization/","page":"Parallelization","title":"Parallelization","text":"The following code snippet shows a complete example script that uses MPI for running a benchmark in parallel:","category":"page"},{"location":"tutorials/parallelization/","page":"Parallelization","title":"Parallelization","text":"using CounterfactualExplanations\nusing CounterfactualExplanations.Evaluation: benchmark\nusing CounterfactualExplanations.Models\nimport MPI\nimport TaijaData # provides the data loader used below\n\nMPI.Init()\n\ndata = TaijaData.load_linearly_separable()\ncounterfactual_data = DataPreprocessing.CounterfactualData(data...)\nM = fit_model(counterfactual_data, :Linear)\nfactual = 1\ntarget = 2\nchosen = rand(findall(predict_label(M, counterfactual_data) .== factual), 100)\nxs = select_factual(counterfactual_data, chosen)\ngenerator = GenericGenerator()\n\nparallelizer = MPIParallelizer(MPI.COMM_WORLD)\n\nbmk = benchmark(counterfactual_data; parallelizer=parallelizer)\n\nMPI.Finalize()","category":"page"},{"location":"tutorials/parallelization/","page":"Parallelization","title":"Parallelization","text":"The file can be executed from the command line as follows:","category":"page"},{"location":"tutorials/parallelization/","page":"Parallelization","title":"Parallelization","text":"mpiexecjl --project -n 4 julia -e 'include(\"docs/src/srcipts/mpi.jl\")'","category":"page"},
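{"location":"tutorials/parallelization/","page":"Parallelization","title":"Parallelization","text":"One practical note on the multi-threading backend used earlier: ThreadsParallelizer can only speed things up if Julia itself was started with more than one thread. The lines below are one possible way to check this from a session; the thread count shown in the comment is an arbitrary example, not a package requirement.","category":"page"},{"location":"tutorials/parallelization/","page":"Parallelization","title":"Parallelization","text":"# Start Julia with multiple threads before using ThreadsParallelizer, e.g.:\n# julia --project --threads 4\nusing Base.Threads: nthreads\nnthreads() # should be greater than 1 for multi-threaded generation","category":"page"},{"location":"release-notes/","page":"Release Notes","title":"Release Notes","text":"EditURL = \"https://github.com/juliatrustworthyai/CounterfactualExplanations.jl/blob/master/CHANGELOG.md\"","category":"page"},{"location":"release-notes/#Changelog","page":"Release 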
Notes","title":"Changelog","text":"","category":"section"},{"location":"release-notes/","page":"Release Notes","title":"Release Notes","text":"All notable changes to this project will be documented in this file.","category":"page"},{"location":"release-notes/","page":"Release Notes","title":"Release Notes","text":"The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.","category":"page"},{"location":"release-notes/","page":"Release Notes","title":"Release Notes","text":"Note: We try to adhere to these practices as of version v1.1.1.","category":"page"},{"location":"release-notes/#Version-[1.3.4]-2024-10-22","page":"Release Notes","title":"Version [1.3.4] - 2024-10-22","text":"","category":"section"},{"location":"release-notes/#Changed","page":"Release Notes","title":"Changed","text":"","category":"section"},{"location":"release-notes/","page":"Release Notes","title":"Release Notes","text":"Fixed a bug in the find_potential_neighbours method. ","category":"page"},{"location":"release-notes/#Version-[1.3.3]-2024-09-30","page":"Release Notes","title":"Version [1.3.3] - 2024-09-30","text":"","category":"section"},{"location":"release-notes/#Changed-2","page":"Release Notes","title":"Changed","text":"","category":"section"},{"location":"release-notes/","page":"Release Notes","title":"Release Notes","text":"Fixed a remaining bug in NeuroTreeExt extensions. #475","category":"page"},{"location":"release-notes/#Version-[1.3.2]-2024-09-24","page":"Release Notes","title":"Version [1.3.2] - 2024-09-24","text":"","category":"section"},{"location":"release-notes/#Added","page":"Release Notes","title":"Added","text":"","category":"section"},{"location":"release-notes/","page":"Release Notes","title":"Release Notes","text":"Added support for using a random forest as a surrogate model for the T-CREx generator. #483","category":"page"},{"location":"release-notes/#Changed-3","page":"Release Notes","title":"Changed","text":"","category":"section"},{"location":"release-notes/","page":"Release Notes","title":"Release Notes","text":"Improved the T-CREx documentation further by bringing example even closer to the example in the paper. #483\nInclude citation linking to ICML paper in T-CREx documentation and docstrings. #480","category":"page"},{"location":"release-notes/#Version-[1.3.1]-2024-09-24","page":"Release Notes","title":"Version [1.3.1] - 2024-09-24","text":"","category":"section"},{"location":"release-notes/#Changed-4","page":"Release Notes","title":"Changed","text":"","category":"section"},{"location":"release-notes/","page":"Release Notes","title":"Release Notes","text":"Fixed a remaining bug in NeuroTreeExt extensions. #475","category":"page"},{"location":"release-notes/#Version-[1.3.0]-2024-09-16","page":"Release Notes","title":"Version [1.3.0] - 2024-09-16","text":"","category":"section"},{"location":"release-notes/#Changed-5","page":"Release Notes","title":"Changed","text":"","category":"section"},{"location":"release-notes/","page":"Release Notes","title":"Release Notes","text":"Fixed bug in NeuroTreeExt extensions. #475","category":"page"},{"location":"release-notes/#Added-2","page":"Release Notes","title":"Added","text":"","category":"section"},{"location":"release-notes/","page":"Release Notes","title":"Release Notes","text":"Added basic support for the T-CREx counterfactual generator. #473\nAdded docstrings for package extensions to documentation. 
#475","category":"page"},{"location":"release-notes/#Version-[1.2.0]-2024-09-10","page":"Release Notes","title":"Version [1.2.0] - 2024-09-10","text":"","category":"section"},{"location":"release-notes/#Added-3","page":"Release Notes","title":"Added","text":"","category":"section"},{"location":"release-notes/","page":"Release Notes","title":"Release Notes","text":"Added documentation for generating counterfactuals consistent with the MINT framework. #467\nAdded tests for new evaluation metrics and JEM extension. #471\nAdded support for gradient-based causal algorithm-recourse (MNIT) as described in Karimi et al. (2020). This incorporates an input encoder that is based on a Structural Causal Model #457 \nAdded out-of-the-box support for training joint energy models (JEM). #454\nAdded new evaluation metric to measure faithfulness of counterfactual explanations as in Altmeyer et al. (2024). #454\nA tutorial in the documentation (\"Explanation\" section) explaining the faithfulness metric in detail. #454\nAdded support for an energy constraint as in Altmeyer et al. (2024). This is the first step towards adding functionality for ECCCo. #387 ","category":"page"},{"location":"release-notes/#Changed-6","page":"Release Notes","title":"Changed","text":"","category":"section"},{"location":"release-notes/","page":"Release Notes","title":"Release Notes","text":"The fitresult field of Model now takes a concrete Fitresult type, for which some basic methods have been defined. This mutable struct has a field called other that accepts a dictionary Dict that can be filled with additional objects. #454\nRegenerated pre-trained model artifacts. #454\nUpdated the tutorial on \"Handling Data\". #454","category":"page"},{"location":"release-notes/#Removed","page":"Release Notes","title":"Removed","text":"","category":"section"},{"location":"release-notes/","page":"Release Notes","title":"Release Notes","text":"Removed bug in find_potential_neighbours method. #454","category":"page"},{"location":"release-notes/#Version-[1.1.6]-2024-05-19","page":"Release Notes","title":"Version [1.1.6] - 2024-05-19","text":"","category":"section"},{"location":"release-notes/#Removed-2","page":"Release Notes","title":"Removed","text":"","category":"section"},{"location":"release-notes/","page":"Release Notes","title":"Release Notes","text":"Removed the call to the Iris function in the test suite because of HTTPs issues. #452\nRemoved the mlj_models_catalogue because it served no obvious purpose. In the future, we may instead add meta information to the all_models_catalogue. #444","category":"page"},{"location":"release-notes/#Added-4","page":"Release Notes","title":"Added","text":"","category":"section"},{"location":"release-notes/","page":"Release Notes","title":"Release Notes","text":"New general Model struct that wraps empty concrete types. This adds a more general interface that is still flexible enough by simply using multiple dispatch on the empty concrete types. #444\nA new incompatible(::AbstractGenerator, ::AbstractCounterfactualExplanation) function has been added to avoid running a counterfactual search if the generator is incompatible with any other specification (e.g. the model). #444","category":"page"},{"location":"release-notes/#Changed-7","page":"Release Notes","title":"Changed","text":"","category":"section"},{"location":"release-notes/","page":"Release Notes","title":"Release Notes","text":"No longer exporting many of the deprecated functions. #452\nUpdated pre-trained model artifacts. 
#444\nSome function signatures have been deprecated, e.g. NeuroTreeModel to NeuroTree, LaplaceReduxModel to LaplaceNN. #444\nSupport for DecisionTree.jl models and the FeatureTweakGenerator has been moved to an extension (DecisionTreeExt). #444\nUpdates to NeuroTreeModels extensions to incorporate breaking changes to the package. #444\nNo longer running alloc test on Windows. #441\nSlight change to doctests. #447","category":"page"},{"location":"release-notes/#Version-[v1.1.5](https://github.com/juliatrustworthyai/CounterfactualExplanations.jl/releases/tag/v1.1.5)-2024-04-30","page":"Release Notes","title":"Version v1.1.5 - 2024-04-30","text":"","category":"section"},{"location":"release-notes/#Added-5","page":"Release Notes","title":"Added","text":"","category":"section"},{"location":"release-notes/","page":"Release Notes","title":"Release Notes","text":"Unit tests: adds a simple performance benchmark to test that for a small problem, generating a counterfactual using the generic generator takes at most 4700 allocations. Only run on julia v1.10 and higher. #436","category":"page"},{"location":"release-notes/#Changed-8","page":"Release Notes","title":"Changed","text":"","category":"section"},{"location":"release-notes/","page":"Release Notes","title":"Release Notes","text":"The find_potential_neighbours function is now only triggered if one of the penalties of the generator requires access to samples from the target domain. This improves scalability because calling the function can be computationally costly (forward-pass). #436 \nThe target variable encodings are now handled more efficiently. Previously certain tasks were repeated, which was not necessary. #436","category":"page"},{"location":"release-notes/#Removed-3","page":"Release Notes","title":"Removed","text":"","category":"section"},{"location":"release-notes/","page":"Release Notes","title":"Release Notes","text":"Removed the assertion checking that the model ever predicts the target value. While this assertion is useful, it is not essential. For large enough models and datasets, this forward pass can be very costly. #436\nRemoved redundant distance_from_targets function. #436","category":"page"},{"location":"release-notes/#Version-[v1.1.4](https://github.com/juliatrustworthyai/CounterfactualExplanations.jl/releases/tag/v1.1.4)-2024-04-25","page":"Release Notes","title":"Version v1.1.4 - 2024-04-25","text":"","category":"section"},{"location":"release-notes/#Changed-9","page":"Release Notes","title":"Changed","text":"","category":"section"},{"location":"release-notes/","page":"Release Notes","title":"Release Notes","text":"Refactors the encodings and decodings such that they are now more streamlined. Instead of conditional statements, encodings are now dispatched on the type of a new unifying data.input_encoder field. #432\nRefactors the check for redundancy. This is now based on the convergence type and done right before the counterfactual search begins, if not redundant. #432","category":"page"},{"location":"release-notes/#Added-6","page":"Release Notes","title":"Added","text":"","category":"section"},{"location":"release-notes/","page":"Release Notes","title":"Release Notes","text":"Added additional unit tests. 
#437","category":"page"},{"location":"release-notes/#Version-[v1.1.3](https://github.com/juliatrustworthyai/CounterfactualExplanations.jl/releases/tag/v1.1.3)-2024-04-17","page":"Release Notes","title":"Version v1.1.3 - 2024-04-17","text":"","category":"section"},{"location":"release-notes/#Added-7","page":"Release Notes","title":"Added","text":"","category":"section"},{"location":"release-notes/","page":"Release Notes","title":"Release Notes","text":"Adds a section on Convergence to the documentation, Changelog.jl functionality and a few doc tests. #429","category":"page"},{"location":"release-notes/#Changed-10","page":"Release Notes","title":"Changed","text":"","category":"section"},{"location":"release-notes/","page":"Release Notes","title":"Release Notes","text":"Changes style of taking gradients for the counterfactual search from implicit to explicit. #430\nRemoved all implicit imports. #430","category":"page"},{"location":"release-notes/#Removed-4","page":"Release Notes","title":"Removed","text":"","category":"section"},{"location":"release-notes/","page":"Release Notes","title":"Release Notes","text":"Removed CUDA.jl dependency, because redundant. #430\nRemoved Parameters.jl dependency, because redundant. #430","category":"page"},{"location":"release-notes/#Version-[v1.1.2](https://github.com/juliatrustworthyai/CounterfactualExplanations.jl/releases/tag/v1.1.2)-2024-04-16","page":"Release Notes","title":"Version v1.1.2 - 2024-04-16","text":"","category":"section"},{"location":"release-notes/#Changed-11","page":"Release Notes","title":"Changed","text":"","category":"section"},{"location":"release-notes/","page":"Release Notes","title":"Release Notes","text":"Replaces the GIF in the README and introduction of docs for a static image. ","category":"page"},{"location":"release-notes/#Version-[v1.1.1](https://github.com/juliatrustworthyai/CounterfactualExplanations.jl/releases/tag/v1.1.1)-2024-04-15","page":"Release Notes","title":"Version v1.1.1 - 2024-04-15","text":"","category":"section"},{"location":"release-notes/#Added-8","page":"Release Notes","title":"Added","text":"","category":"section"},{"location":"release-notes/","page":"Release Notes","title":"Release Notes","text":"Added tests for LaplaceRedux extension. Bumped upper compat bound for LaplaceRedux.jl. #428","category":"page"},{"location":"explanation/generators/revise/","page":"REVISE","title":"REVISE","text":"CurrentModule = CounterfactualExplanations ","category":"page"},{"location":"explanation/generators/revise/#REVISEGenerator","page":"REVISE","title":"REVISEGenerator","text":"","category":"section"},{"location":"explanation/generators/revise/","page":"REVISE","title":"REVISE","text":"REVISE is a Latent Space generator introduced by Joshi et al. (2019).","category":"page"},{"location":"explanation/generators/revise/#Description","page":"REVISE","title":"Description","text":"","category":"section"},{"location":"explanation/generators/revise/","page":"REVISE","title":"REVISE","text":"The current consensus in the literature is that Counterfactual Explanations should be realistic: the generated counterfactuals should look like they were generated by the data-generating process (DGP) that governs the problem at hand. With respect to Algorithmic Recourse, it is certainly true that counterfactuals should be realistic in order to be actionable for individuals.[1] To address this need, researchers have come up with various approaches in recent years. 
Among the most popular approaches is Latent Space Search, which was first proposed in Joshi et al. (2019): instead of traversing the feature space directly, this approach relies on a separate generative model that learns a latent space representation of the DGP. Assuming the generative model is well-specified, access to the learned latent embeddings of the data comes with two advantages:","category":"page"},{"location":"explanation/generators/revise/","page":"REVISE","title":"REVISE","text":"Since the learned DGP is encoded in the latent space, the generated counterfactuals will respect the learned representation of the data. In practice, this means that counterfactuals will be realistic.\nThe latent space is typically a compressed (i.e. lower-dimensional) version of the feature space. This makes the counterfactual search less costly.","category":"page"},{"location":"explanation/generators/revise/","page":"REVISE","title":"REVISE","text":"There are also certain disadvantages though:","category":"page"},{"location":"explanation/generators/revise/","page":"REVISE","title":"REVISE","text":"Learning generative models is (typically) an expensive task, which may well outweigh the benefits associated with ultimately traversing a lower-dimensional space.\nIf the generative model is poorly specified, this will affect the quality of the counterfactuals.[2]","category":"page"},{"location":"explanation/generators/revise/","page":"REVISE","title":"REVISE","text":"Nevertheless, traversing latent embeddings is a powerful idea that may be very useful depending on the specific context. This tutorial introduces the concept and explains how it is implemented in this package.","category":"page"},{"location":"explanation/generators/revise/#Usage","page":"REVISE","title":"Usage","text":"","category":"section"},{"location":"explanation/generators/revise/","page":"REVISE","title":"REVISE","text":"The approach can be used in our package as follows:","category":"page"},{"location":"explanation/generators/revise/","page":"REVISE","title":"REVISE","text":"generator = REVISEGenerator()\nce = generate_counterfactual(x, target, counterfactual_data, M, generator)\nplot(ce)","category":"page"},{"location":"explanation/generators/revise/","page":"REVISE","title":"REVISE","text":"(Image: )","category":"page"},{"location":"explanation/generators/revise/#Worked-2D-Examples","page":"REVISE","title":"Worked 2D Examples","text":"","category":"section"},{"location":"explanation/generators/revise/","page":"REVISE","title":"REVISE","text":"Below we load 2D data, train a VAE on it, and plot the original samples against their reconstructions.","category":"page"},{"location":"explanation/generators/revise/","page":"REVISE","title":"REVISE","text":"# output: true\n\ncounterfactual_data = CounterfactualData(load_overlapping()...)\nX = counterfactual_data.X\ny = counterfactual_data.y\ninput_dim = size(X, 1)\nusing CounterfactualExplanations.GenerativeModels: VAE, train!, reconstruct\nvae = VAE(input_dim; nll=Flux.Losses.mse, epochs=100, λ=0.01, latent_dim=2, hidden_dim=32)\nflux_training_params.verbose = true\ntrain!(vae, X)\nX̂ = reconstruct(vae, X)[1]\np0 = scatter(X[1, :], X[2, :], color=:blue, label=\"Original\", xlab=\"x₁\", ylab=\"x₂\")\nscatter!(X̂[1, :], X̂[2, :], color=:orange, label=\"Reconstructed\", xlab=\"x₁\", ylab=\"x₂\")\np1 = scatter(X[1, :], X̂[1, :], color=:purple, label=\"\", xlab=\"x₁\", ylab=\"x̂₁\")\np2 = scatter(X[2, :], X̂[2, :], color=:purple, label=\"\", xlab=\"x₂\", ylab=\"x̂₂\")\nplt2 = plot(p1,p2, layout=(1,2), 
size=(800, 400))\nplot(p0, plt2, layout=(2,1), size=(800, 600))","category":"page"},{"location":"explanation/generators/revise/","page":"REVISE","title":"REVISE","text":"Next, we train a simple MLP for the classification task. Then we determine a target and factual class for our counterfactual search and select a random factual instance to explain.","category":"page"},{"location":"explanation/generators/revise/","page":"REVISE","title":"REVISE","text":"M = fit_model(counterfactual_data, :MLP)\ntarget = 2\nfactual = 1\nchosen = rand(findall(predict_label(M, counterfactual_data) .== factual))\nx = select_factual(counterfactual_data, chosen)","category":"page"},{"location":"explanation/generators/revise/","page":"REVISE","title":"REVISE","text":"Finally, we generate and visualize the generated counterfactual:","category":"page"},{"location":"explanation/generators/revise/","page":"REVISE","title":"REVISE","text":"# Search:\ngenerator = REVISEGenerator()\nce = generate_counterfactual(x, target, counterfactual_data, M, generator)\nplot(ce)","category":"page"},{"location":"explanation/generators/revise/","page":"REVISE","title":"REVISE","text":"(Image: )","category":"page"},{"location":"explanation/generators/revise/#3D-Example","page":"REVISE","title":"3D Example","text":"","category":"section"},{"location":"explanation/generators/revise/","page":"REVISE","title":"REVISE","text":"To illustrate the notion of Latent Space search, let’s look at an example involving 3-dimensional input data, which we can still visualize. The code chunk below loads the data and implements the counterfactual search.","category":"page"},{"location":"explanation/generators/revise/","page":"REVISE","title":"REVISE","text":"# Data and Classifier:\ncounterfactual_data = CounterfactualData(load_blobs(k=3)...)\nX = counterfactual_data.X\nys = counterfactual_data.output_encoder.labels.refs\nM = fit_model(counterfactual_data, :MLP)\n\n# Randomly selected factual:\nx = select_factual(counterfactual_data,rand(1:size(counterfactual_data.X,2)))\ny = predict_label(M, counterfactual_data, x)[1]\ntarget = counterfactual_data.y_levels[counterfactual_data.y_levels .!= y][1]\n\n# Generate recourse:\nce = generate_counterfactual(x, target, counterfactual_data, M, generator)","category":"page"},{"location":"explanation/generators/revise/","page":"REVISE","title":"REVISE","text":"The figure below demonstrates the idea of searching counterfactuals in a lower-dimensional latent space: on the left, we can see the counterfactual search in the 3-dimensional feature space, while on the right we can see the corresponding search in the latent space.","category":"page"},{"location":"explanation/generators/revise/","page":"REVISE","title":"REVISE","text":"(Image: )","category":"page"},{"location":"explanation/generators/revise/#MNIST-data","page":"REVISE","title":"MNIST data","text":"","category":"section"},{"location":"explanation/generators/revise/","page":"REVISE","title":"REVISE","text":"Let’s carry the ideas introduced above over to a more complex example. 
The code below loads MNIST data as well as a pre-trained classifier and generative model for the data.","category":"page"},{"location":"explanation/generators/revise/","page":"REVISE","title":"REVISE","text":"using CounterfactualExplanations.Models: load_mnist_mlp, load_mnist_ensemble, load_mnist_vae\ncounterfactual_data = CounterfactualData(load_mnist()...)\nX, y = CounterfactualExplanations.DataPreprocessing.unpack_data(counterfactual_data)\ninput_dim, n_obs = size(counterfactual_data.X)\nM = load_mnist_mlp()\nvae = load_mnist_vae()","category":"page"},{"location":"explanation/generators/revise/","page":"REVISE","title":"REVISE","text":"The F1-score of our pre-trained image classifier on test data is: 0.94","category":"page"},{"location":"explanation/generators/revise/","page":"REVISE","title":"REVISE","text":"Before continuing, we supply the pre-trained generative model to our data container:","category":"page"},{"location":"explanation/generators/revise/","page":"REVISE","title":"REVISE","text":"counterfactual_data.input_encoder = vae # assign generative model","category":"page"},{"location":"explanation/generators/revise/","page":"REVISE","title":"REVISE","text":"Now let’s define a factual and target label:","category":"page"},{"location":"explanation/generators/revise/","page":"REVISE","title":"REVISE","text":"# Randomly selected factual:\nRandom.seed!(2023)\nfactual_label = 8\nx = reshape(X[:,rand(findall(predict_label(M, counterfactual_data).==factual_label))],input_dim,1)\ntarget = 3\nfactual = predict_label(M, counterfactual_data, x)[1]","category":"page"},{"location":"explanation/generators/revise/","page":"REVISE","title":"REVISE","text":"Using REVISE, we are going to turn a randomly drawn 8 into a 3.","category":"page"},{"location":"explanation/generators/revise/","page":"REVISE","title":"REVISE","text":"The API call is the same as always:","category":"page"},{"location":"explanation/generators/revise/","page":"REVISE","title":"REVISE","text":"γ = 0.95\nconv = \n CounterfactualExplanations.Convergence.DecisionThresholdConvergence(decision_threshold=γ)\n# Define generator:\ngenerator = REVISEGenerator(opt=Flux.Adam(0.1))\n# Generate recourse:\nce = generate_counterfactual(x, target, counterfactual_data, M, generator; convergence=conv)","category":"page"},{"location":"explanation/generators/revise/","page":"REVISE","title":"REVISE","text":"The chart below shows the results:","category":"page"},{"location":"explanation/generators/revise/","page":"REVISE","title":"REVISE","text":"(Image: )","category":"page"},{"location":"explanation/generators/revise/#References","page":"REVISE","title":"References","text":"","category":"section"},{"location":"explanation/generators/revise/","page":"REVISE","title":"REVISE","text":"Joshi, Shalmali, Oluwasanmi Koyejo, Warut Vijitbenjaronk, Been Kim, and Joydeep Ghosh. 2019. “Towards Realistic Individual Recourse and Actionable Explanations in Black-Box Decision Making Systems.” https://arxiv.org/abs/1907.09615.","category":"page"},{"location":"explanation/generators/revise/","page":"REVISE","title":"REVISE","text":"[1] In general, we believe that there may be a trade-off between creating counterfactuals that respect the DGP vs. 
counterfactuals that reflect the behaviour of the black-box model in question, both accurately and completely.","category":"page"},{"location":"explanation/generators/revise/","page":"REVISE","title":"REVISE","text":"[2] We believe that there is another potentially crucial disadvantage of relying on a separate generative model: it reallocates the task of learning realistic explanations for the data from the black-box model to the generative model.","category":"page"},{"location":"explanation/generators/overview/","page":"Overview","title":"Overview","text":"CurrentModule = CounterfactualExplanations ","category":"page"},{"location":"explanation/generators/overview/#generators_explanation","page":"Overview","title":"Counterfactual Generators","text":"","category":"section"},{"location":"explanation/generators/overview/","page":"Overview","title":"Overview","text":"Counterfactual generators form the very core of this package. The generator_catalogue can be used to inspect the available generators:","category":"page"},{"location":"explanation/generators/overview/","page":"Overview","title":"Overview","text":"generator_catalogue","category":"page"},{"location":"explanation/generators/overview/","page":"Overview","title":"Overview","text":"Dict{Symbol, Any} with 11 entries:\n :gravitational => GravitationalGenerator\n :growing_spheres => GrowingSpheresGenerator\n :revise => REVISEGenerator\n :clue => CLUEGenerator\n :probe => ProbeGenerator\n :dice => DiCEGenerator\n :feature_tweak => FeatureTweakGenerator\n :claproar => ClaPROARGenerator\n :wachter => WachterGenerator\n :generic => GenericGenerator\n :greedy => GreedyGenerator","category":"page"},{"location":"explanation/generators/overview/","page":"Overview","title":"Overview","text":"The following sections provide brief descriptions of all of them.","category":"page"},{"location":"explanation/generators/overview/#Gradient-based-Counterfactual-Generators","page":"Overview","title":"Gradient-based Counterfactual Generators","text":"","category":"section"},{"location":"explanation/generators/overview/","page":"Overview","title":"Overview","text":"At the time of writing, all generators are gradient-based: that is, counterfactuals are searched through gradient descent. In Altmeyer et al. (2023) we lay out a general methodological framework that can be applied to all of these generators:","category":"page"},{"location":"explanation/generators/overview/","page":"Overview","title":"Overview","text":"\\begin{aligned}\n\\mathbf{s}^\\prime = \\arg \\min_{\\mathbf{s}^\\prime \\in \\mathcal{S}} \\left\\{ \\text{yloss}(M(f(\\mathbf{s}^\\prime)), y^*) + \\lambda \\, \\text{cost}(f(\\mathbf{s}^\\prime)) \\right\\}\n\\end{aligned} ","category":"page"},{"location":"explanation/generators/overview/","page":"Overview","title":"Overview","text":"“Here \\mathbf{s}^\\prime=\\left\\{s_k^\\prime\\right\\}_K is a K-dimensional array of counterfactual states and f: \\mathcal{S} \\mapsto \\mathcal{X} maps from the counterfactual state space to the feature space.” (Altmeyer et al. 2023)","category":"page"},{"location":"explanation/generators/overview/","page":"Overview","title":"Overview","text":"For most generators, the state space is the feature space (f is the identity function) and the number of counterfactuals K is one. Latent Space generators instead search counterfactuals in some latent space \\mathcal{S}. 
In this case, f corresponds to the decoder part of the generative model, that is, the function that maps back from the latent space to inputs.","category":"page"},{"location":"explanation/generators/overview/#References","page":"Overview","title":"References","text":"","category":"section"},{"location":"explanation/generators/overview/","page":"Overview","title":"Overview","text":"Altmeyer, Patrick, Giovan Angela, Aleksander Buszydlik, Karol Dobiczek, Arie van Deursen, and Cynthia Liem. 2023. “Endogenous Macrodynamics in Algorithmic Recourse.” In First IEEE Conference on Secure and Trustworthy Machine Learning. https://doi.org/10.1109/satml54575.2023.00036.","category":"page"},{"location":"explanation/generators/gravitational/","page":"Gravitational","title":"Gravitational","text":"CurrentModule = CounterfactualExplanations ","category":"page"},{"location":"explanation/generators/gravitational/#GravitationalGenerator","page":"Gravitational","title":"GravitationalGenerator","text":"","category":"section"},{"location":"explanation/generators/gravitational/","page":"Gravitational","title":"Gravitational","text":"The GravitationalGenerator was introduced in Altmeyer et al. (2023). It is so named because it generates counterfactuals that gravitate towards some sensible point in the target domain.","category":"page"},{"location":"explanation/generators/gravitational/#Description","page":"Gravitational","title":"Description","text":"","category":"section"},{"location":"explanation/generators/gravitational/","page":"Gravitational","title":"Gravitational","text":"Altmeyer et al. (2023) extend the general framework as follows,","category":"page"},{"location":"explanation/generators/gravitational/","page":"Gravitational","title":"Gravitational","text":"\\begin{aligned}\n\\mathbf{s}^\\prime = \\arg \\min_{\\mathbf{s}^\\prime \\in \\mathcal{S}} \\text{yloss}(M(f(\\mathbf{s}^\\prime)), y^*) + \\lambda_1 \\text{cost}(f(\\mathbf{s}^\\prime)) + \\lambda_2 \\text{extcost}(f(\\mathbf{s}^\\prime))\n\\end{aligned} ","category":"page"},{"location":"explanation/generators/gravitational/","page":"Gravitational","title":"Gravitational","text":"where \\text{cost}(f(\\mathbf{s}^\\prime)) denotes the proxy for costs faced by the individual. “The newly introduced term \\text{extcost}(f(\\mathbf{s}^\\prime)) is meant to capture and address external costs incurred by the collective of individuals in response to changes in \\mathbf{s}^\\prime.” (Altmeyer et al. 2023)","category":"page"},{"location":"explanation/generators/gravitational/","page":"Gravitational","title":"Gravitational","text":"For the GravitationalGenerator we have,","category":"page"},{"location":"explanation/generators/gravitational/","page":"Gravitational","title":"Gravitational","text":"\\begin{aligned}\n\\text{extcost}(f(\\mathbf{s}^\\prime)) = \\text{dist}(f(\\mathbf{s}^\\prime), \\bar{x}^*)\n\\end{aligned}","category":"page"},{"location":"explanation/generators/gravitational/","page":"Gravitational","title":"Gravitational","text":"where \\bar{x} is some sensible point in the target domain, for example, the subsample average \\bar{x}^*=\\text{mean}(x), \\ x \\in \\mathcal{D}_1.","category":"page"},{"location":"explanation/generators/gravitational/","page":"Gravitational","title":"Gravitational","text":"There is, then, a tradeoff between the distance of counterfactuals from their factual value and the chosen point in the target domain. 
The chart below illustrates how the counterfactual outcome changes as the penalty lambda_2 on the distance to the point in the target domain is increased from left to right (holding the other penalty term constant).","category":"page"},{"location":"explanation/generators/gravitational/","page":"Gravitational","title":"Gravitational","text":"(Image: )","category":"page"},{"location":"explanation/generators/gravitational/#Usage","page":"Gravitational","title":"Usage","text":"","category":"section"},{"location":"explanation/generators/gravitational/","page":"Gravitational","title":"Gravitational","text":"The approach can be used in our package as follows:","category":"page"},{"location":"explanation/generators/gravitational/","page":"Gravitational","title":"Gravitational","text":"generator = GravitationalGenerator()\nce = generate_counterfactual(x, target, counterfactual_data, M, generator)\ndisplay(plot(ce))","category":"page"},{"location":"explanation/generators/gravitational/","page":"Gravitational","title":"Gravitational","text":"(Image: )","category":"page"},{"location":"explanation/generators/gravitational/#Comparison-to-GenericGenerator","page":"Gravitational","title":"Comparison to GenericGenerator","text":"","category":"section"},{"location":"explanation/generators/gravitational/","page":"Gravitational","title":"Gravitational","text":"The figure below compares the outcome for the GenericGenerator and the GravitationalGenerator.","category":"page"},{"location":"explanation/generators/gravitational/","page":"Gravitational","title":"Gravitational","text":"(Image: )","category":"page"},{"location":"explanation/generators/gravitational/#References","page":"Gravitational","title":"References","text":"","category":"section"},{"location":"explanation/generators/gravitational/","page":"Gravitational","title":"Gravitational","text":"Altmeyer, Patrick, Giovan Angela, Aleksander Buszydlik, Karol Dobiczek, Arie van Deursen, and Cynthia Liem. 2023. “Endogenous Macrodynamics in Algorithmic Recourse.” In First IEEE Conference on Secure and Trustworthy Machine Learning. https://doi.org/10.1109/satml54575.2023.00036.","category":"page"}] } diff --git a/dev/tutorials/benchmarking/index.html b/dev/tutorials/benchmarking/index.html index 674342029..80e57dbaf 100644 --- a/dev/tutorials/benchmarking/index.html +++ b/dev/tutorials/benchmarking/index.html @@ -180,4 +180,4 @@ 1 │ moons Generic 1.56555 2 │ moons Greedy 0.819269 3 │ circles Generic 1.83524 - 4 │ circles Greedy 0.498953 + 4 │ circles Greedy 0.498953 diff --git a/dev/tutorials/convergence/index.html b/dev/tutorials/convergence/index.html index 6ceb0d353..237781202 100644 --- a/dev/tutorials/convergence/index.html +++ b/dev/tutorials/convergence/index.html @@ -10,4 +10,4 @@ for (ce, titl) in zip([ce_gen, ce_dec, ce_max], ["Gradient Convergence", "Decision Threshold Convergence", "Max Iterations Convergence"]) push!(plts, plot(ce; title=titl, cbar=false)) end -plot(plts..., layout=(1,3), size=(1200, 380))

References

Altmeyer, Patrick, Mojtaba Farmanbar, Arie van Deursen, and Cynthia CS Liem. 2024. “Faithful Model Explanations Through Energy-Constrained Conformal Counterfactuals.” In Proceedings of the AAAI Conference on Artificial Intelligence, 38:10829–37. 10.

+plot(plts..., layout=(1,3), size=(1200, 380))
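
The counterfactuals ce_gen, ce_dec and ce_max compared above were each generated under a different convergence criterion. A minimal sketch of how the three criteria might be constructed explicitly; DecisionThresholdConvergence is used elsewhere in this documentation, while the GeneratorConditionsConvergence and MaxIterConvergence constructors are assumptions based on the panel titles:

using CounterfactualExplanations.Convergence

# Stop once the generator's own conditions are met, e.g. vanishing gradients (assumed constructor):
conv_gen = GeneratorConditionsConvergence()
# Stop once the predicted probability of the target class exceeds a threshold:
conv_dec = DecisionThresholdConvergence(decision_threshold=0.95)
# Stop after a fixed number of iterations (assumed constructor and keyword):
conv_max = MaxIterConvergence(max_iter=100)

# Any of these can be supplied to the search via the convergence keyword:
ce_dec = generate_counterfactual(x, target, counterfactual_data, M, generator; convergence=conv_dec)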

References

Altmeyer, Patrick, Mojtaba Farmanbar, Arie van Deursen, and Cynthia CS Liem. 2024. “Faithful Model Explanations Through Energy-Constrained Conformal Counterfactuals.” In Proceedings of the AAAI Conference on Artificial Intelligence, 38:10829–37. 10.

diff --git a/dev/tutorials/data_catalogue/index.html b/dev/tutorials/data_catalogue/index.html index 60fa32e79..2b5e866f2 100644 --- a/dev/tutorials/data_catalogue/index.html +++ b/dev/tutorials/data_catalogue/index.html @@ -38,4 +38,4 @@ 10 10 10

We can also use a helper function to split the data into train and test sets:

train_data, test_data = 
-    CounterfactualExplanations.DataPreprocessing.train_test_split(counterfactual_data)
+ CounterfactualExplanations.DataPreprocessing.train_test_split(counterfactual_data) diff --git a/dev/tutorials/data_preprocessing/index.html b/dev/tutorials/data_preprocessing/index.html index 890e70b95..4f8ac5df9 100644 --- a/dev/tutorials/data_preprocessing/index.html +++ b/dev/tutorials/data_preprocessing/index.html @@ -89,4 +89,4 @@ x = select_factual(counterfactual_data, chosen) ce = generate_counterfactual(x, target, counterfactual_data, M, generator)

The resulting counterfactual path is shown in the chart below. Since only the first feature can be perturbed, the sample can only move along the horizontal axis.

plot(ce)
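
The restriction to the first feature stems from the mutability constraints supplied through the data container. A hypothetical recap of such a declaration, reusing the mutability field and symbols that also appear in the domain-constraint example below:

# Allow perturbations of the first feature only:
counterfactual_data.mutability = [:both, :none]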

Domain constraints

In some cases, we may also want to constrain the domain of some feature. Age, for instance, is naturally constrained to a range from 0 up to some upper bound, perhaps corresponding to the average human life expectancy. Below, we impose a lower bound of $0.5$ on both of our features.

counterfactual_data.mutability = [:both, :both]
 counterfactual_data.domain = [(0.5,Inf) for var in counterfactual_data.features_continuous]

This results in the counterfactual path shown below: since the features may not be perturbed below the lower bound of $0.5$, the resulting counterfactual falls just short of the threshold probability $\gamma$.

ce = generate_counterfactual(x, target, counterfactual_data, M, generator)
-plot(ce)
+plot(ce) diff --git a/dev/tutorials/evaluation/index.html b/dev/tutorials/evaluation/index.html index 522ca70dc..beaf4cdc2 100644 --- a/dev/tutorials/evaluation/index.html +++ b/dev/tutorials/evaluation/index.html @@ -42,4 +42,4 @@ [[1.0], Float32[2.9578466], [[0.0, 0.0, 0.0, 0.0, 0.0]]] [[0.8], Float32[2.6089585], [[0.0, 0.0, 0.0, 0.0, 0.0]]] -Vector{Vector}[[[0.8], Float32[3.2487042], [[0.0, 0.0, 0.0, 0.0, 0.0]]], [[0.8], Float32[4.185718], [[0.0, 0.0, 0.0, 0.0, 0.0]]], [[1.0], Float32[4.0083566], [[0.0, 0.0, 0.0, 0.0, 0.0]]], [[1.0], Float32[2.9578466], [[0.0, 0.0, 0.0, 0.0, 0.0]]], [[0.8], Float32[2.6089585], [[0.0, 0.0, 0.0, 0.0, 0.0]]]]

This leads us to our next topic: Performance Benchmarks.

+Vector{Vector}[[[0.8], Float32[3.2487042], [[0.0, 0.0, 0.0, 0.0, 0.0]]], [[0.8], Float32[4.185718], [[0.0, 0.0, 0.0, 0.0, 0.0]]], [[1.0], Float32[4.0083566], [[0.0, 0.0, 0.0, 0.0, 0.0]]], [[1.0], Float32[2.9578466], [[0.0, 0.0, 0.0, 0.0, 0.0]]], [[0.8], Float32[2.6089585], [[0.0, 0.0, 0.0, 0.0, 0.0]]]]
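
Each inner vector above holds one evaluation measure per counterfactual. A minimal sketch of requesting individual measures explicitly, assuming that validity and distance are among the measures exported by the Evaluation module:

using CounterfactualExplanations.Evaluation: evaluate, validity, distance

# Evaluate a single counterfactual explanation on selected measures only:
evaluate(ce; measure=[validity, distance])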

This leads us to our next topic: Performance Benchmarks.

diff --git a/dev/tutorials/generators/index.html b/dev/tutorials/generators/index.html index 1cd53e796..82b41d16e 100644 --- a/dev/tutorials/generators/index.html +++ b/dev/tutorials/generators/index.html @@ -22,4 +22,4 @@ :greedy => GreedyGenerator

To specify the type of generator you want to use, you can simply instantiate it:

# Search:
 generator = GenericGenerator()
 ce = generate_counterfactual(x, target, counterfactual_data, M, generator)
-plot(ce)

We generally make an effort to follow the literature as closely as possible when implementing off-the-shelf generators.

References

Altmeyer, Patrick, Giovan Angela, Aleksander Buszydlik, Karol Dobiczek, Arie van Deursen, and Cynthia Liem. 2023. “Endogenous Macrodynamics in Algorithmic Recourse.” In First IEEE Conference on Secure and Trustworthy Machine Learning. https://doi.org/10.1109/satml54575.2023.00036.

Joshi, Shalmali, Oluwasanmi Koyejo, Warut Vijitbenjaronk, Been Kim, and Joydeep Ghosh. 2019. “Towards Realistic Individual Recourse and Actionable Explanations in Black-Box Decision Making Systems.” https://arxiv.org/abs/1907.09615.

Mothilal, Ramaravind K, Amit Sharma, and Chenhao Tan. 2020. “Explaining Machine Learning Classifiers Through Diverse Counterfactual Explanations.” In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 607–17. https://doi.org/10.1145/3351095.3372850.

+plot(ce)

We generally make an effort to follow the literature as closely as possible when implementing off-the-shelf generators.
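
Gradient-based generators can also be customised at instantiation. A small sketch follows; the opt keyword mirrors the REVISE example elsewhere in these docs, while the penalty strength λ is an assumption here:

using Flux

# Generic generator with a custom optimiser and (assumed) penalty strength:
generator = GenericGenerator(opt=Flux.Adam(0.05), λ=0.1)
ce = generate_counterfactual(x, target, counterfactual_data, M, generator)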

References

Altmeyer, Patrick, Giovan Angela, Aleksander Buszydlik, Karol Dobiczek, Arie van Deursen, and Cynthia Liem. 2023. “Endogenous Macrodynamics in Algorithmic Recourse.” In First IEEE Conference on Secure and Trustworthy Machine Learning. https://doi.org/10.1109/satml54575.2023.00036.

Joshi, Shalmali, Oluwasanmi Koyejo, Warut Vijitbenjaronk, Been Kim, and Joydeep Ghosh. 2019. “Towards Realistic Individual Recourse and Actionable Explanations in Black-Box Decision Making Systems.” https://arxiv.org/abs/1907.09615.

Mothilal, Ramaravind K, Amit Sharma, and Chenhao Tan. 2020. “Explaining Machine Learning Classifiers Through Diverse Counterfactual Explanations.” In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 607–17. https://doi.org/10.1145/3351095.3372850.

diff --git a/dev/tutorials/index.html b/dev/tutorials/index.html index 7f22944c2..757e0e4b2 100644 --- a/dev/tutorials/index.html +++ b/dev/tutorials/index.html @@ -1,2 +1,2 @@ -Overview · CounterfactualExplanations.jl

Tutorials

In this section, you will find a series of tutorials that should help you gain a basic understanding of Counterfactual Explanations and how to apply them in Julia using this package.

Tutorials are lessons that take the reader by the hand through a series of steps to complete a project of some kind. Tutorials are learning-oriented.

Diátaxis

In other words, you come here because you are new to this topic and are looking for a first peek at the methodology and code 🫣.

+Overview · CounterfactualExplanations.jl

Tutorials

In this section, you will find a series of tutorials that should help you gain a basic understanding of Counterfactual Explanations and how to apply them in Julia using this package.

Tutorials are lessons that take the reader by the hand through a series of steps to complete a project of some kind. Tutorials are learning-oriented.

Diátaxis

In other words, you come here because you are new to this topic and are looking for a first peek at the methodology and code 🫣.

diff --git a/dev/tutorials/model_catalogue/index.html b/dev/tutorials/model_catalogue/index.html index 546c3fb5b..2c725a7bf 100644 --- a/dev/tutorials/model_catalogue/index.html +++ b/dev/tutorials/model_catalogue/index.html @@ -22,4 +22,4 @@ n_hidden = 32, dropout = true )

The model_params can be supplied to the familiar API call:

M = fit_model(train_data, :MLP; model_params...)
CounterfactualExplanations.Models.Model(Chain(Dense(784 => 32, relu), Dropout(0.25, active=false), Dense(32 => 10)), :classification_multi, Chain(Dense(784 => 32, relu), Dropout(0.25, active=false), Dense(32 => 10)), MLP())
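
The same call pattern extends to other models in the catalogue. A sketch, assuming that :DeepEnsemble is also registered in the standard model catalogue:

# Fit a deep ensemble instead of a single MLP:
M_ens = fit_model(train_data, :DeepEnsemble)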

The model performance on our test set can be evaluated as follows:

model_evaluation(M, test_data)
1-element Vector{Float64}:
- 0.9185

Finally, let’s restore the default training parameters:

CounterfactualExplanations.reset!(flux_training_params)

References

Lakshminarayanan, Balaji, Alexander Pritzel, and Charles Blundell. 2017. “Simple and Scalable Predictive Uncertainty Estimation Using Deep Ensembles.” Advances in Neural Information Processing Systems 30.

LeCun, Yann. 1998. “The MNIST Database of Handwritten Digits.” http://yann.lecun.com/exdb/mnist/.

+ 0.9185

Finally, let’s restore the default training parameters:

CounterfactualExplanations.reset!(flux_training_params)
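
The reset is only needed because the global flux_training_params container was modified earlier in this tutorial. A hypothetical recap of such a modification; the n_epochs field is an assumption here:

# Train for more epochs than the default:
flux_training_params.n_epochs = 100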

References

Lakshminarayanan, Balaji, Alexander Pritzel, and Charles Blundell. 2017. “Simple and Scalable Predictive Uncertainty Estimation Using Deep Ensembles.” Advances in Neural Information Processing Systems 30.

LeCun, Yann. 1998. “The MNIST Database of Handwritten Digits.” http://yann.lecun.com/exdb/mnist/.

diff --git a/dev/tutorials/models/index.html b/dev/tutorials/models/index.html index c3ff1fd5c..373abf0c9 100644 --- a/dev/tutorials/models/index.html +++ b/dev/tutorials/models/index.html @@ -41,4 +41,4 @@ Epoch 80 avg_loss(data) = 0.011847609f0 Epoch 100 -avg_loss(data) = 0.007242911f0

To prepare the fitted model for use with our package, we need to wrap it inside a container. For plain-vanilla models trained in Flux.jl, the corresponding constructor is called MLP. There is also a separate constructor called DeepEnsemble, which applies to Deep Ensembles. Deep Ensembles are a popular approach to approximate Bayesian Deep Learning and have been shown to generate good predictive uncertainty estimates (Lakshminarayanan, Pritzel, and Blundell 2017).

The appropriate API call to wrap our simple network in a container is as follows:

M = MLP(nn)
CounterfactualExplanations.Models.Model(Chain(Dense(2 => 32, relu), Dropout(0.1, active=false), Dense(32 => 2)), :classification_binary, Chain(Dense(2 => 32, relu), Dropout(0.1, active=false), Dense(32 => 2)), MLP())

The likelihood function of the output variable is automatically inferred from the data. The generic plot() method can be called on the model and data to visualise the results:

plot(M, counterfactual_data)

Our model M is now ready for use with the package.

References

Lakshminarayanan, Balaji, Alexander Pritzel, and Charles Blundell. 2017. “Simple and Scalable Predictive Uncertainty Estimation Using Deep Ensembles.” Advances in Neural Information Processing Systems 30.

+avg_loss(data) = 0.007242911f0

To prepare the fitted model for use with our package, we need to wrap it inside a container. For plain-vanilla models trained in Flux.jl, the corresponding constructor is called MLP. There is also a separate constructor called DeepEnsemble, which applies to Deep Ensembles. Deep Ensembles are a popular approach to approximate Bayesian Deep Learning and have been shown to generate good predictive uncertainty estimates (Lakshminarayanan, Pritzel, and Blundell 2017).

The appropriate API call to wrap our simple network in a container is as follows:

M = MLP(nn)
CounterfactualExplanations.Models.Model(Chain(Dense(2 => 32, relu), Dropout(0.1, active=false), Dense(32 => 2)), :classification_binary, Chain(Dense(2 => 32, relu), Dropout(0.1, active=false), Dense(32 => 2)), MLP())
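
For the Deep Ensembles mentioned above, the wrapping step is analogous. A hypothetical sketch, assuming that the DeepEnsemble constructor accepts a vector of Flux chains:

using Flux

# Wrap an ensemble of five independently initialised networks:
ensemble = [Chain(Dense(2 => 32, relu), Dense(32 => 2)) for _ in 1:5]
M_ens = DeepEnsemble(ensemble)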

The likelihood function of the output variable is automatically inferred from the data. The generic plot() method can be called on the model and data to visualise the results:

plot(M, counterfactual_data)

Our model M is now ready for use with the package.

References

Lakshminarayanan, Balaji, Alexander Pritzel, and Charles Blundell. 2017. “Simple and Scalable Predictive Uncertainty Estimation Using Deep Ensembles.” Advances in Neural Information Processing Systems 30.

diff --git a/dev/tutorials/parallelization/index.html b/dev/tutorials/parallelization/index.html index 459b76281..53c26a57c 100644 --- a/dev/tutorials/parallelization/index.html +++ b/dev/tutorials/parallelization/index.html @@ -219,4 +219,4 @@ bmk = benchmark(counterfactual_data; parallelizer=parallelizer) -MPI.Finalize()

The file can be executed from the command line as follows:

mpiexecjl --project -n 4 julia -e 'include("docs/src/scripts/mpi.jl")'
+MPI.Finalize()

The file can be executed from the command line as follows:

mpiexecjl --project -n 4 julia -e 'include("docs/src/scripts/mpi.jl")'
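
MPI is not the only conceivable backend; for single-machine workloads, a threads-based sketch, assuming the package exports a ThreadsParallelizer:

# Requires starting Julia with multiple threads, e.g. julia --threads 4:
parallelizer = ThreadsParallelizer()
bmk = benchmark(counterfactual_data; parallelizer=parallelizer)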
diff --git a/dev/tutorials/simple_example/index.html b/dev/tutorials/simple_example/index.html index 71bb0d1aa..7c784ba76 100644 --- a/dev/tutorials/simple_example/index.html +++ b/dev/tutorials/simple_example/index.html @@ -10,4 +10,4 @@ x = select_factual(counterfactual_data, chosen)

Finally, we generate and visualise the counterfactual:

# Search:
 generator = WachterGenerator()
 ce = generate_counterfactual(x, target, counterfactual_data, M, generator)
-plot(ce)

+plot(ce)

diff --git a/dev/tutorials/whistle_stop/index.html b/dev/tutorials/whistle_stop/index.html index e501cb7f2..b9dd49738 100644 --- a/dev/tutorials/whistle_stop/index.html +++ b/dev/tutorials/whistle_stop/index.html @@ -26,4 +26,4 @@ ) ces[key] = ce plts = [plts..., plot(ce; title=key, colorbar=false)] -end

+end