v0.5.0 - 2024-10-15
Features
- Improved the introduction in the README.
- Added `calibrated_confusion_matrix` in `CalibratedExplainer` and `WrapCalibratedExplainer`, providing a leave-one-out calibrated confusion matrix using the calibration set. The insights from the confusion matrix are useful when analyzing explanations, to determine the general prediction and error distributions of the model. An example of using the confusion matrix in the analysis is given in the paper Calibrated Explanations for Multi-class.
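  A minimal sketch of how the confusion matrix can be retrieved. The dataset and the three-way split are illustrative, and the exact return format of `calibrated_confusion_matrix` may differ:

  ```python
  from sklearn.datasets import load_wine
  from sklearn.ensemble import RandomForestClassifier
  from sklearn.model_selection import train_test_split
  from calibrated_explanations import WrapCalibratedExplainer

  X, y = load_wine(return_X_y=True)
  # Illustrative three-way split: proper training set, calibration set, test set
  X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.4, random_state=42)
  X_cal, X_test, y_cal, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=42)

  explainer = WrapCalibratedExplainer(RandomForestClassifier())
  explainer.fit(X_train, y_train)
  explainer.calibrate(X_cal, y_cal)

  # Leave-one-out calibrated confusion matrix computed on the calibration set
  print(explainer.calibrated_confusion_matrix())
  ```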
- Embraced the update of `crepes` to version 0.7.1, making it possible to add a seed when fitting. Addresses issue #43.
- Updated terminology and functionality:
  - Introducing the concept of ensured explanations.
    - Changed the name of `CounterfactualExplanation` to `AlternativeExplanation`, as it better reflects the purpose and functionality of the class.
    - Added a collection subclass `AlternativeExplanations`, inheriting from `CalibratedExplanations`, which is used for collections of `AlternativeExplanation`s. Collection methods referring to methods only available in `AlternativeExplanation` are included in the new collection class.
    - Added an `explore_alternatives` method in `CalibratedExplainer` and `WrapCalibratedExplainer`, to be used instead of `explain_counterfactual`, as the name of the latter is ambiguous. `explain_counterfactual` is kept for compatibility reasons but only forwards the call to `explore_alternatives`. All files and notebooks have been updated to only call `explore_alternatives`, and all references to counterfactuals have been changed to alternatives, with obvious exceptions.
    - Added both filtering methods and a ranking metric that can help filter out ensured explanations (see the sketch after this list):
      - The parameters `rnk_metric` and `rnk_weight` have been added to the plotting functions and are applicable to all kinds of plots.
      - Both the `AlternativeExplanation` class (for an individual instance) and the collection subclass `AlternativeExplanations` contain filter functions only applicable to alternative explanations, such as `counter_explanations`, `semi_explanations`, `super_explanations`, and `ensured_explanations`.
        - `counter_explanations` removes all alternatives except those changing the prediction.
        - `semi_explanations` removes all alternatives except those reducing the probability while not changing the prediction.
        - `super_explanations` removes all alternatives except those increasing the probability of the prediction.
        - The concept of potential (uncertain) explanations is introduced. When the uncertainty interval spans probability 0.5, an explanation is considered potential. A potential explanation will normally be counter-potential or semi-potential, but can in some cases also be super-potential. Potential alternatives can be included in or excluded from the above methods using the boolean parameter `include_potentials`.
        - `ensured_explanations` removes all alternatives except those with lower uncertainty (i.e., a smaller uncertainty interval) than the original prediction.
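      A sketch of extracting and filtering alternative explanations, continuing from the snippet above (`explore_alternatives` and the filter methods are named in this release, but their exact signatures may differ):

      ```python
      # Assumes `explainer` and `X_test` from the earlier sketch.
      alternatives = explainer.explore_alternatives(X_test)

      # Keep only alternatives that change the prediction, excluding
      # potential (uncertain) alternatives.
      counters = alternatives.counter_explanations(include_potentials=False)

      # Keep only alternatives with a smaller uncertainty interval than
      # the original prediction.
      ensured = alternatives.ensured_explanations()
      ```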
    - Added a new form of plot for probabilistic predictions, clearly visualizing both the aleatoric and the epistemic uncertainty.
      - A global plot is added, plotting all test instances with probability and uncertainty on the x- and y-axes. The area corresponding to potential (uncertain) predictions is marked. The plot can be invoked using the `plot(X_test)` or `plot(X_test, y_test)` call.
      - A local plot for alternative explanations, with probability and uncertainty on the x- and y-axes, is added. It can be invoked from an `AlternativeExplanation` or an `AlternativeExplanations` using `plot(style='triangular')`. The optimal use is when combined with the `filter_top` parameter (see below) to include all alternatives, as follows: `plot(style='triangular', filter_top=None)`. Both plots are sketched below.
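      A sketch of both plots, continuing from the earlier snippets (the `style` and `filter_top` arguments are named in this release; invoking the global plot on the wrapped explainer is an assumption):

      ```python
      # Global plot: all test instances plotted with probability and
      # uncertainty on the x- and y-axes, potential region marked.
      explainer.plot(X_test, y_test)

      # Local triangular plot of alternative explanations, keeping all
      # rules by disabling the top-rule filter.
      alternatives = explainer.explore_alternatives(X_test)
      alternatives.plot(style='triangular', filter_top=None)
      ```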
    - Added preprint and bibtex to the paper introducing ensured explanations:
      - Löfström, T., Löfström, H., and Hallberg Szabadvary, J. (2024). Ensured: Explanations for Decreasing the Epistemic Uncertainty in Predictions. arXiv preprint arXiv:2410.05479.
      - Bibtex:

        ```bibtex
        @misc{lofstrom2024ce_ensured,
          title = {Ensured: Explanations for Decreasing the Epistemic Uncertainty in Predictions},
          author = {L\"ofstr\"om, Helena and L\"ofstr\"om, Tuwe and Hallberg Szabadvary, Johan},
          year = {2024},
          eprint = {2410.05479},
          archivePrefix = {arXiv},
          primaryClass = {cs.LG}
        }
        ```
  - Introduced fast explanations:
    - Introduced a new type of explanation called `FastExplanation`, which can be extracted using the `explain_fast` method (see the sketch below). It differs from a `FactualExplanation` in that it does not define a rule condition but only provides a feature weight.
    - The new type of explanation uses ideas from ConformaSight, a recently proposed global explanation algorithm based on conformal classification. Acknowledgements have been added.
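    A minimal sketch, continuing from the classification snippets above (`explain_fast` is named in this release; the plotting call is illustrative):

    ```python
    # Fast explanations: per-feature weights without rule conditions
    fast = explainer.explain_fast(X_test)
    fast.plot()
    ```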
  - Introduced a new form of probabilistic regression explanation:
    - Introduced the possibility to get explanations for the probability of the prediction being inside an interval. This is achieved by assigning a tuple with lower and upper bounds as threshold, e.g., `threshold=(low, high)`, to get the probability of the prediction falling inside the interval (low, high] (see the sketch below).
    - To the best of our knowledge, this is the only package that provides this functionality with epistemic uncertainty.
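    A sketch for a regression learner; the dataset, split, and interval bounds are illustrative, and passing `threshold` to `explain_factual` is an assumption based on the existing API:

    ```python
    from sklearn.datasets import load_diabetes
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split
    from calibrated_explanations import WrapCalibratedExplainer

    X, y = load_diabetes(return_X_y=True)
    X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.4, random_state=42)
    X_cal, X_test, y_cal, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=42)

    reg_explainer = WrapCalibratedExplainer(RandomForestRegressor())
    reg_explainer.fit(X_train, y_train)
    reg_explainer.calibrate(X_cal, y_cal)

    # Probability of each prediction falling inside the interval (100, 150],
    # explained together with its epistemic uncertainty.
    factual = reg_explainer.explain_factual(X_test, threshold=(100, 150))
    ```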
  - Introduced the possibility to add new user-defined rule conditions, using the `add_new_rule_condition` method. This is only applicable to numerical features (see the sketch below).
    - Factual explanations will create new conditions covering the instance value. Categorical features already get a condition for the instance value during the invocation of `explain_factual`.
    - Alternative explanations will create new conditions that exclude the instance value. Categorical features already get conditions for all alternative categories during the invocation of `explore_alternatives`.
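    A hedged sketch of adding a condition for a numerical feature (`add_new_rule_condition` is named in this release, but the argument names used here are hypothetical):

    ```python
    factual = explainer.explain_factual(X_test)
    # Hypothetical arguments: request a new condition covering the value 5.0
    # of numerical feature 0.
    factual.add_new_rule_condition(feature=0, value=5.0)
    ```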
  - Parameter naming:
    - The parameter indicating the number of rules to plot is renamed to `filter_top` (previously `n_features_to_show`), making the call including all rules (`filter_top=None`) make a lot more sense.
Fixes
- Added checks to ensure that the learner is not called unless the `WrapCalibratedExplainer` is fitted.
- Added checks to ensure that the explainer is not called unless the `WrapCalibratedExplainer` is calibrated.
- Fixed incorrect use of `np.random.seed`.
What's Changed
- chore(deps): update numpy requirement from <2.1,>=1.20 to >=1.20,<2.2 by @dependabot in #52
- chore(deps): bump codecov/codecov-action from 4.5.0 to 4.6.0 by @dependabot in #53