Change normalisation default, fix bug in normalise_by_negative, adapt citations, absolute imports #166
Conversation
… for normalise_funcs. Fixed a name mismatch of the 'normalise_axes' kwarg in normalisation_functions in the rest of quantus (it was called 'normalize_axes' elsewhere). Could not reproduce a sporadic RuntimeWarning in normalise_by_negative, so added a comprehensive error raise with an action point and debug output instead (this is intended as a temporary solution).
…y_max instead of normalise_by_negative, since normalise_by_negative treats +/- values differently.
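For context on the default change: a minimal sketch of how the two normalisation functions differ (illustrative implementations, not Quantus' exact code). `normalise_by_max` scales everything by the maximum absolute value, while `normalise_by_negative` scales positive values by the max and negative values by the absolute min, so the two signs are treated asymmetrically:

```python
import numpy as np

def normalise_by_max(a: np.ndarray) -> np.ndarray:
    # Scale the whole array by its maximum absolute value.
    return a / np.max(np.abs(a))

def normalise_by_negative(a: np.ndarray) -> np.ndarray:
    # Scale positive values by the max and negative values by |min|,
    # so positive and negative attributions are treated differently.
    a_max, a_min = a.max(), a.min()
    pos = (a >= 0.0) * np.divide(a, a_max, where=a_max != 0)
    neg = (a < 0.0) * -np.divide(a, a_min, where=a_min != 0)
    return pos + neg
```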
from ...quantus.helpers.explanation_func import explain
from ...quantus.helpers.pytorch_model import PyTorchModel
from ...quantus.helpers.tf_model import TensorFlowModel
from tests.fixtures import *
Should we replace the * here with more precise imports of functions?
Yes, did that everywhere it made sense. Specifically, the fixtures I left as *, since all fixtures should be available for tests, I think.
Also, e.g. in test_similarity_func I imported similarity funcs with *, because the tests need access to all similarity functions.
I think at least from quantus.metrics.axiomatic import *
should be replaced with something like this:
from quantus.metrics.axiomatic import Completeness
from quantus.metrics.axiomatic import NonSensitivity
from quantus.metrics.axiomatic import InputInvariance
Regarding the fixture import I don't know how to do this efficiently without the star.
should we maybe do this (similarly in all the other metrics folders):
from quantus.metrics.axiomatic import (Completeness, NonSensitivity, InputInvariance)
changed this. The fixtures remain as * imports though.
quantus/__init__.py
from .helpers import *
from .metrics import *
from .evaluation import *
from quantus.helpers import *
Are you doing the split of /funcs (that we wanted to expose) as we talked about at the last meeting? @leanderweber
No, I have not touched that yet.
done
Codecov Report
@@ Coverage Diff @@
## main #166 +/- ##
==========================================
- Coverage 94.35% 92.89% -1.46%
==========================================
Files 52 54 +2
Lines 2461 2647 +186
==========================================
+ Hits 2322 2459 +137
- Misses 139 188 +49
…antus and tests. Refactored helpers to structure better which functions are internal, and which should be handed through to the user.
…kages, as well as a full installation
This looks nice!
I just think the metric star imports in the test modules should be replaced by explicit imports.
I don't know how to get rid of the star import of the fixtures; maybe leave it as it is for now and try to get rid of it in a future PR.
One last thing:
I would prefer the functions directory path to be quantus/functions
instead of quantus/helpers/functions
Fantastic! So much good stuff. A few things as I see it before we can execute on the merge, see in the separate comments.
```setup
pip install "quantus[tensorflow]"
```
The " must be kept, else the pip command won't work! Please add it to the others as well.
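For context on why the quotation marks matter: in shells like zsh, unquoted square brackets are treated as glob patterns, so the extras syntax has to be quoted (the quoted form works in all common shells):

```setup
# Unquoted, zsh would try to glob-expand the brackets and abort with
# "no matches found" before pip even runs; the quotes pass the extras
# specifier through literally.
pip install "quantus[tensorflow]"
```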
done
@@ -16,21 +16,32 @@ e.g. pixel replacement strategy of a faithfulness test influences the ranking of
[📑 Shortcut to paper!](https://arxiv.org/abs/2202.06861)

This documentation is complementary to Quantus repository's [README.md](https://github.com/understandable-machine-intelligence-lab/Quantus) and provides documentation
Once everything related to the Installation, Getting started etc. is finished, we also need to update the README.md so that they match.
updated README
To enable the use of wrappers around [Captum](https://captum.ai/), you need to have PyTorch already installed and can then run

```setup
These should be captum, tensorflow? Please also add quotation marks!
No, Captum is based on PyTorch, so I think it is correct as-is.
However, with the new installation options, neither torch nor tensorflow will be required to be installed already, so I will just remove that part.
- (a < 0.0) * np.divide(a, a_min, where=a_min != 0),
return_array,
)
# TODO: TEMPORARY SOLUTION: CHANGE WHEN BUG IS IDENTIFIED.
The TODO comment needs clarity / a rewrite before this is pushed to main.
rewrote it.
- (a < 0.0) * np.divide(a, a_min, where=a_min != 0),
return_array,
)
except RuntimeWarning:
🙏
development branch would be nice for this...
@@ -41,109 +87,191 @@ x_batch, y_batch = x_batch.cpu().numpy(), y_batch.cpu().numpy()
# Quick assert.
assert [isinstance(obj, np.ndarray) for obj in [x_batch, y_batch, a_batch_saliency, a_batch_intgrad]]

# You can use any function e.g., quantus.explain (not necessarily captum) to generate your explanations.
# You can use any function (not necessarily captum) to generate your explanations.
```
Nice that you got the image to work!
thanks :D
Quantus implements XAI evaluation metrics from different categories
(faithfulness, localisation, robustness, ...) which all inherit from the base `quantus.Metric` class.
Remove this line
the empty line, right? did that.
(faithfulness, localisation, robustness, ...) which all inherit from the base `quantus.Metric` class.

Metrics are designed as `Callables`. To apply a metric to your setting (e.g., [Max-Sensitivity](https://arxiv.org/abs/1901.09392)),
they first need to be instantiated
Add :
done
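To illustrate the instantiate-then-call design discussed above, here is a generic sketch in plain Python (class, parameter, and return values are illustrative stand-ins, not Quantus' actual API):

```python
class MaxSensitivity:
    """Illustrative stand-in for a Callable metric: evaluation
    hyperparameters are fixed at instantiation, while the inputs
    (model, data, explanations) are supplied per call."""

    def __init__(self, nr_samples: int = 10, lower_bound: float = 0.2):
        self.nr_samples = nr_samples
        self.lower_bound = lower_bound

    def __call__(self, model, x_batch, y_batch, a_batch):
        # A real metric would perturb x_batch and re-explain; here we
        # just return one placeholder score per instance.
        return [0.0 for _ in x_batch]

metric = MaxSensitivity(nr_samples=5)                     # hyperparameters at __init__
scores = metric(None, [1, 2, 3], [0, 1, 0], [None] * 3)   # inputs at __call__
```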
results = evaluate(
    metrics=metrics,
    xai_methods=xai_methods,
### Customizing Metrics
Very nice section!
    similarity_func=quantus.difference
)
```
* Hyperparameters affecting the inputs (data, model, explanations) to each metric are set in the `__call__` method of each metric
Add .
done
…installation-update Installation option refactoring to be more concise
Regarding formatting multiline imports:
Although this is not yet in PEP 8, the recent guideline in projects I'm working on was to use a single line for each import; this way you prevent merge conflicts in the future.
Here's a package that does this automatically, and I love it:
https://pypi.org/project/reorder-python-imports/
The motivation can be found under "why this style?".
So instead of writing
from quantus.metrics.faithfulness import (
    FaithfulnessCorrelation,
    FaithfulnessEstimate,
    Infidelity,
    IROF,
    Monotonicity,
    MonotonicityCorrelation,
    PixelFlipping,
    RegionPerturbation,
    ROAD,
    Selectivity,
    SensitivityN,
    Sufficiency,
)
you would write
from quantus.metrics.faithfulness import FaithfulnessCorrelation
from quantus.metrics.faithfulness import FaithfulnessEstimate
from quantus.metrics.faithfulness import Infidelity
from quantus.metrics.faithfulness import IROF
from quantus.metrics.faithfulness import Monotonicity
from quantus.metrics.faithfulness import MonotonicityCorrelation
from quantus.metrics.faithfulness import PixelFlipping
from quantus.metrics.faithfulness import RegionPerturbation
from quantus.metrics.faithfulness import ROAD
from quantus.metrics.faithfulness import Selectivity
from quantus.metrics.faithfulness import SensitivityN
from quantus.metrics.faithfulness import Sufficiency
This maybe looks overly explicit at the beginning, but we can just add reorder-python-imports to the pre-commit hooks and never have to bother again about import ordering and such.
But it's just a suggestion; I don't care that much, although a pre-commit hook would really be nice.
apart from that, there's nothing to note anymore for me.
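The pre-commit hook mentioned above could be wired up roughly like this (the `rev` shown is illustrative; pin it to the latest released tag):

```yaml
# .pre-commit-config.yaml
repos:
  - repo: https://github.com/asottile/reorder_python_imports
    rev: v3.12.0  # illustrative; pin to the latest release
    hooks:
      - id: reorder-python-imports
```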
Almost ready! Two issues that need to be resolved before merging @leanderweber
The imports should be fixed. There was an issue with relative imports not having been replaced by absolute imports in base_batched.py. Also renamed normalized_axes to normalise_axes in the merged content. Not sure about the API docs not being generated properly; this is how it looks for me when building the docs: Is this not how it is supposed to look?
…fix-normalisation-division Change normalisation default, fix bug in normalise_by_negative, adapt citations, absolute imports
Changes of this PR: