
Change normalisation default, fix bug in normalise_by_negative, adapt citations, absolute imports #166

Merged · 43 commits merged into main on Oct 18, 2022

Conversation

@leanderweber (Collaborator) commented Oct 11, 2022

Changes in this PR:

  • Make some updates to the Read the Docs documentation
  • Update and unify citations in all metrics
  • Update, unify, and sort all imports
  • Change all relative imports to absolute imports
  • Rethink star imports. Specifically, restructure helpers to distinguish internal functions from functions that should be user-accessible.
  • Change the normalisation_func default to normalise_by_max instead of normalise_by_negative, since the latter treats positive and negative values differently
  • Fix several bugs within the normalisation functions and their application
  • Merge the installation-options rework (see PR Installation option refactoring to be more concise #172)
  • Update and clean up the README.md file
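To illustrate the motivation for the new default, here is a minimal sketch (not the actual quantus implementation) of the two behaviours: normalise_by_max scales both signs by one shared factor, while normalise_by_negative rescales positive and negative attributions separately.

```python
import numpy as np

def normalise_by_max(a: np.ndarray) -> np.ndarray:
    # One shared scale factor: both signs are treated symmetrically.
    max_abs = np.max(np.abs(a))
    return a / max_abs if max_abs != 0 else a

def normalise_by_negative(a: np.ndarray) -> np.ndarray:
    # Positives are divided by the max, negatives by the min,
    # so the two signs end up on different scales.
    a_max, a_min = a.max(), a.min()
    return (a > 0.0) * np.divide(a, a_max, where=a_max != 0) \
         - (a < 0.0) * np.divide(a, a_min, where=a_min != 0)

a = np.array([-2.0, 1.0, 4.0])
print(normalise_by_max(a))       # [-0.5   0.25  1.  ]
print(normalise_by_negative(a))  # [-1.    0.25  1.  ]  (asymmetric)
```

Note how the single negative value is stretched to -1 by normalise_by_negative although its magnitude is only half the maximum.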

… for normalise_funcs. Fixed a name mismatch of the 'normalise_axes' kwarg of the normalisation functions in the rest of quantus (it was called 'normalize_axes' elsewhere). Could not reproduce a sporadic RuntimeWarning in normalise_by_negative, so added a comprehensive error raise with an action point and debug output instead (this needs to be a temporary solution).
…y_max instead of normalise_by_negative, since normalise_by_negative treats +/- values differently.
@leanderweber leanderweber changed the title Change normalisation default, add, adapt citations, imports Change normalisation default, fix bug in normalise_by_negative, adapt citations, absolute imports Oct 11, 2022
from ...quantus.helpers.explanation_func import explain
from ...quantus.helpers.pytorch_model import PyTorchModel
from ...quantus.helpers.tf_model import TensorFlowModel
from tests.fixtures import *
Member:

Should we replace the * here with more precise imports of functions?

Collaborator (Author):

Yes, I did that everywhere it made sense. Specifically, I left the fixtures as *, since all fixtures should be available for tests.

Collaborator (Author):

Also, in test_similarity_func for example, I imported the similarity funcs with *, because the tests need access to all similarity functions.

Collaborator:

I think at least `from quantus.metrics.axiomatic import *`
should be replaced with something like this:

from quantus.metrics.axiomatic import Completeness
from quantus.metrics.axiomatic import NonSensitivity
from quantus.metrics.axiomatic import InputInvariance

Regarding the fixture import I don't know how to do this efficiently without the star.

Member (@annahedstroem) commented Oct 13, 2022:

Should we maybe do this instead (similarly in all the other metrics folders):

from quantus.metrics.axiomatic import (Completeness, NonSensitivity, InputInvariance)

Collaborator (Author):

Changed this. The fixtures remain as * imports, though.
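As an aside, pytest's usual way to share fixtures without any import at all is a conftest.py, which pytest discovers automatically. A minimal sketch (the fixture name here is hypothetical, not one from the quantus test suite):

```python
# tests/conftest.py -- fixtures defined here are auto-discovered by
# pytest, so test modules need no `from tests.fixtures import *`.
import numpy as np
import pytest

@pytest.fixture
def dummy_attributions():  # hypothetical fixture name
    return np.zeros((8, 1, 28, 28))

# A test module would then simply declare the fixture as an argument:
def test_shape(dummy_attributions):
    assert dummy_attributions.shape == (8, 1, 28, 28)
```

This keeps star imports out of the test modules while still making every shared fixture available.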

from .helpers import *
from .metrics import *
from .evaluation import *
from quantus.helpers import *
Member:

Are you doing the split of /funcs (that we wanted to expose) as we talked about at the last meeting? @leanderweber

Collaborator (Author):

No, I have not touched that yet.

Collaborator (Author):

Done.

@codecov-commenter commented Oct 13, 2022

Codecov Report

Merging #166 (9b1c1bb) into main (38dab8f) will decrease coverage by 1.45%.
The diff coverage is 94.75%.

❗ Current head 9b1c1bb differs from pull request most recent head 6bca39c. Consider uploading reports for the commit 6bca39c to get more accurate results

@@            Coverage Diff             @@
##             main     #166      +/-   ##
==========================================
- Coverage   94.35%   92.89%   -1.46%     
==========================================
  Files          52       54       +2     
  Lines        2461     2647     +186     
==========================================
+ Hits         2322     2459     +137     
- Misses        139      188      +49     
Impacted Files Coverage Δ
quantus/functions/discretise_func.py 100.00% <ø> (ø)
quantus/functions/loss_func.py 83.33% <ø> (ø)
quantus/functions/norm_func.py 100.00% <ø> (ø)
quantus/helpers/model/model_interface.py 76.92% <ø> (ø)
quantus/helpers/model/models.py 84.03% <ø> (ø)
quantus/functions/normalise_func.py 69.76% <31.57%> (ø)
...s/metrics/localisation/attribution_localisation.py 90.69% <80.00%> (ø)
quantus/metrics/localisation/auc.py 93.10% <87.50%> (ø)
quantus/metrics/localisation/pointing_game.py 90.32% <87.50%> (-0.31%) ⬇️
...us/metrics/localisation/relevance_mass_accuracy.py 93.10% <87.50%> (ø)
... and 48 more


…antus and tests. Refactored helpers to structure better which functions are internal, and which should be handed through to the user.
@leanderweber leanderweber requested a review from dkrako October 13, 2022 13:19
Collaborator (@dkrako) left a comment:

This looks nice!

I just think the metric star imports in the test modules should be replaced by explicit imports. I don't know how to get rid of the star import of the fixtures; maybe leave it as it is for now and try to get rid of it in a future PR.

One last thing: I would prefer the functions directory path to be quantus/functions instead of quantus/helpers/functions.


Outdated review comments on tests/metrics/test_complexity_metrics.py, test_faithfulness_metrics.py, test_localisation_metrics.py, test_randomisation_metrics.py, and test_robustness_metrics.py were marked resolved.
Member (@annahedstroem) left a comment:

Fantastic! So much good stuff. A few things, as I see it, before we can execute the merge; see the separate comments.


```setup
pip install "quantus[tensorflow]"
```
Member:

The " must be kept, else the pip command won't work! Please add it to the others as well.

Collaborator (Author):

Done.
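The quoting matters because some shells, notably zsh, treat unquoted square brackets as glob patterns; a quick illustration of the shell behaviour (not quantus-specific):

```shell
# In zsh, unquoted brackets are glob patterns; if nothing on disk
# matches, zsh aborts with "no matches found" before pip even runs.
# Quoting keeps the brackets literal:
#   pip install "quantus[tensorflow]"
# echo shows the literal string that pip receives when quoted:
spec="quantus[tensorflow]"
echo "$spec"
```

In bash the unquoted form often works by accident (an unmatched glob is passed through literally), which is why the missing quotes are easy to overlook.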

@@ -16,21 +16,32 @@ e.g. pixel replacement strategy of a faithfulness test influences the ranking of
[📑 Shortcut to paper!](https://arxiv.org/abs/2202.06861)


This documentation is complementary to Quantus repository's [README.md](https://github.com/understandable-machine-intelligence-lab/Quantus) and provides documentation
Member:

Once everything related to Installation, Getting Started etc. is finished, we also need to update the README.md so that they match.

Collaborator (Author):

Updated the README.


To enable the use of wrappers around [Captum](https://captum.ai/), you need to have PyTorch already installed and can then run

```setup
Member:

Should these be captum, tensorflow? Please also add the quotation marks!

Collaborator (Author):

No, Captum is based on PyTorch, so I think it is correct as-is.

However, with the new installation options, neither torch nor tensorflow will need to be installed beforehand, so I will just remove that part.

- (a < 0.0) * np.divide(a, a_min, where=a_min != 0),
return_array,
)
# TODO: TEMPORARY SOLUTION: CHANGE WHEN BUG IS IDENTIFIED.
Member:

The TODO comment needs clarity / a rewrite before this is pushed to main.

Collaborator (Author):

Rewrote it.

- (a < 0.0) * np.divide(a, a_min, where=a_min != 0),
return_array,
)
except RuntimeWarning:
Member:

🙏

Collaborator (Author):

A development branch would be nice for this...
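A sketch of the temporary workaround discussed in this thread: since the sporadic RuntimeWarning could not be reproduced, the warning is promoted to an exception and re-raised with diagnostic context. Function and message wording here are illustrative, not the exact quantus code:

```python
import warnings
import numpy as np

def normalise_by_negative_checked(a: np.ndarray) -> np.ndarray:
    a_max, a_min = a.max(), a.min()
    # Promote the sporadic RuntimeWarning to an exception so the
    # failing input can be reported instead of silently propagating.
    with warnings.catch_warnings():
        warnings.simplefilter("error", category=RuntimeWarning)
        try:
            return (a > 0.0) * np.divide(a, a_max, where=a_max != 0) \
                 - (a < 0.0) * np.divide(a, a_min, where=a_min != 0)
        except RuntimeWarning as e:
            raise ValueError(
                f"Normalisation failed with '{e}'. Please report this, "
                f"including: shape={a.shape}, min={a_min}, max={a_max}."
            ) from e
```

Converting the warning to an error this way is what makes the `except RuntimeWarning` branch reachable at all; by default NumPy only prints the warning and continues.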

@@ -41,109 +87,191 @@ x_batch, y_batch = x_batch.cpu().numpy(), y_batch.cpu().numpy()
# Quick assert.
assert all(isinstance(obj, np.ndarray) for obj in [x_batch, y_batch, a_batch_saliency, a_batch_intgrad])

# You can use any function e.g., quantus.explain (not necessarily captum) to generate your explanations.
# You can use any function (not necessarily captum) to generate your explanations.
```
Member:

Nice that you got the image to work!

Collaborator (Author):

thanks :D


Quantus implements XAI evaluation metrics from different categories
(faithfulness, localisation, robustness, ...) which all inherit from the base `quantus.Metric` class.

Member:

Remove this line

Collaborator (Author):

The empty line, right? Did that.

(faithfulness, localisation, robustness, ...) which all inherit from the base `quantus.Metric` class.

Metrics are designed as `Callables`. To apply a metric to your setting (e.g., [Max-Sensitivity](https://arxiv.org/abs/1901.09392)),
they first need to be instantiated
Member:

Add :

Collaborator (Author):

Done.
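The instantiate-then-call pattern the docs describe can be sketched with a toy stand-in class (an illustration of the design only, not the actual quantus.MaxSensitivity implementation):

```python
import numpy as np

class MaxSensitivityLike:
    """Toy metric: hyperparameters in __init__, inputs in __call__."""

    def __init__(self, nr_samples: int = 10, lower_bound: float = 0.2):
        self.nr_samples = nr_samples      # metric hyperparameters
        self.lower_bound = lower_bound

    def __call__(self, a_batch: np.ndarray) -> list:
        # Toy evaluation: maximum deviation of each attribution map
        # under uniform random perturbations of bounded magnitude.
        rng = np.random.default_rng(0)
        scores = []
        for a in a_batch:
            noise = rng.uniform(-self.lower_bound, self.lower_bound,
                                size=(self.nr_samples, *a.shape))
            scores.append(float(np.max(np.abs(noise))))
        return scores

metric = MaxSensitivityLike(nr_samples=5)   # instantiate first ...
scores = metric(np.ones((2, 4)))            # ... then call on inputs
```

Separating the two steps keeps the metric's own hyperparameters apart from the inputs being evaluated, which is exactly the split the review thread below discusses.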

results = evaluate(
metrics=metrics,
xai_methods=xai_methods,
### Customizing Metrics
Member:

Very nice section!

similarity_func=quantus.difference
)
```
* Hyperparameters affecting the inputs (data, model, explanations) to each metric are set in the `__call__` method of each metric
Member:

Add .

Collaborator (Author):

Done.

Collaborator (@dkrako) left a comment:

Regarding formatting multiline imports: although this is not yet in PEP 8, the recent guideline in projects I'm working on has been to use a single line for each import; this way you prevent merge conflicts in the future.

Here's a package that does this automatically, and I love it: https://pypi.org/project/reorder-python-imports/ (the motivation can be found under "why this style?").

So instead of writing

from quantus.metrics.faithfulness import (
    FaithfulnessCorrelation,
    FaithfulnessEstimate,
    Infidelity,
    IROF,
    Monotonicity,
    MonotonicityCorrelation,
    PixelFlipping,
    RegionPerturbation,
    ROAD,
    Selectivity,
    SensitivityN,
    Sufficiency,
)

you would write

from quantus.metrics.faithfulness import FaithfulnessCorrelation
from quantus.metrics.faithfulness import FaithfulnessEstimate
from quantus.metrics.faithfulness import Infidelity
from quantus.metrics.faithfulness import IROF
from quantus.metrics.faithfulness import Monotonicity
from quantus.metrics.faithfulness import MonotonicityCorrelation
from quantus.metrics.faithfulness import PixelFlipping
from quantus.metrics.faithfulness import RegionPerturbation
from quantus.metrics.faithfulness import ROAD
from quantus.metrics.faithfulness import Selectivity
from quantus.metrics.faithfulness import SensitivityN
from quantus.metrics.faithfulness import Sufficiency

This maybe looks overly explicit at first, but we can just add reorder-python-imports to the pre-commit hooks and never have to bother about import ordering again. It's just a suggestion, though; I don't care that much, although a pre-commit hook would really be nice.

Apart from that, there's nothing more to note from me.
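Wiring the suggested tool into pre-commit would look roughly like this (a sketch; the rev shown is a placeholder to be pinned to whatever release is current):

```yaml
# .pre-commit-config.yaml
repos:
  - repo: https://github.com/asottile/reorder_python_imports
    rev: v3.9.0  # placeholder; pin to the current release
    hooks:
      - id: reorder-python-imports
```

With the hook installed, import ordering is normalized on every commit, so the single-line style is enforced without manual effort.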

@annahedstroem (Member) commented Oct 17, 2022

Almost ready! Two issues need to be resolved before merging @leanderweber

  1. circular import: https://github.com/understandable-machine-intelligence-lab/Quantus/actions/runs/3266064142/jobs/5369251649#step:6:163
  2. the API docs no longer generate properly

[screenshot]

@leanderweber (Collaborator, Author) commented:

The imports should be fixed. It was an issue with relative imports not having been replaced by absolute imports in base_batched.py.

Also renamed normalized_axes to normalise_axes in the merged content.

Not sure about the API docs not being generated properly. This is how it looks for me when building the docs:

[screenshot: Bildschirmfoto von 2022-10-18 14-06-44]

Is this not how it is supposed to look?

@annahedstroem annahedstroem merged commit b2d4023 into main Oct 18, 2022
@annahedstroem annahedstroem deleted the fix-normalisation-division branch January 11, 2023 08:23
aaarrti pushed a commit that referenced this pull request Apr 9, 2023
…fix-normalisation-division

Change normalisation default, fix bug in normalise_by_negative, adapt citations, absolute imports