Applying off-the-shelf methods such as Local Interpretable Model-Agnostic Explanations (LIME) [1] with their default configurations is not sufficient to generate stable and understandable explanations in histopathology [3].
This work improves standard LIME by leveraging nuclei annotations, creating a reliable way for pathologists to audit black-box tumor classifiers.
The obtained visualizations show that the deep classifier attends sharply to the neoplastic nuclei in the dataset, an observation in line with clinical decision making. Compared to standard LIME, our explanations are more understandable for domain experts, are more stable across runs, and pass the sanity checks of consistency under changes to the data or the initialization and of sensitivity to the network parameters.
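Stability here refers to the agreement of the explanation across repeated runs of LIME's stochastic sampling. As a minimal, hedged sketch of such a check (assuming a hypothetical `explain_patch(image, seed)` wrapper that returns one LIME weight per region), one can rerun the explainer with different seeds and average the pairwise rank correlations:

```python
# Sketch of a stability check for a stochastic explainer.
# `explain_patch` is a hypothetical wrapper around LIME that returns
# one importance weight per region for a given random seed.
import numpy as np
from scipy.stats import spearmanr

def stability(image, explain_patch, seeds=(0, 1, 2, 3, 4)):
    """Mean pairwise Spearman rank correlation of region weights."""
    runs = [explain_patch(image, seed=s) for s in seeds]
    pairs = [(i, j) for i in range(len(runs)) for j in range(i + 1, len(runs))]
    return float(np.mean([spearmanr(runs[i], runs[j]).correlation
                          for i, j in pairs]))
```

A value close to 1 indicates that the ranking of region importances is reproducible across reruns.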
We propose a methodology to improve the reliability and explainability of LIME for histopathology. Our main observation is that the unsupervised segmentation method used in standard LIME is not optimal for identifying superpixels in pathology images.
We improve this approach by selecting regions of the image that carry semantic meaning, namely either nuclei or portions of the background.
These regions are obtained by exploiting the manual contours of nuclei in PanNuke breast images [4] and by using a Mask R-CNN to segment the unlabelled nuclei in Camelyon [5]. To balance the foreground-to-background ratio, we divide the background tissue into nine small blocks and compare the LIME weights of these blocks against those of the nuclei, as in the sketch below.
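The snippet below is a minimal sketch of this region selection, not the paper's exact pipeline: every annotated nucleus becomes its own LIME region and the background is split into a 3x3 grid of nine blocks, which is passed to LIME through the `segmentation_fn` argument of `lime_image.LimeImageExplainer.explain_instance`. The toy mask, patch, and classifier are placeholders for the PanNuke contours or Mask R-CNN output and the trained tumor classifier.

```python
import numpy as np
from skimage.measure import label
from lime import lime_image

def semantic_superpixels(nuclei_mask, grid=3):
    """Turn a binary nuclei mask into a LIME segmentation:
    one label per nucleus plus grid*grid background blocks."""
    nuclei = label(nuclei_mask)                 # one integer label per nucleus
    n_nuclei = nuclei.max()
    h, w = nuclei_mask.shape
    rows = (np.arange(h) * grid // h)[:, None]  # background block row per pixel
    cols = (np.arange(w) * grid // w)[None, :]  # background block column per pixel
    background = n_nuclei + 1 + rows * grid + cols
    return np.where(nuclei_mask > 0, nuclei, background)

# Toy stand-ins so the snippet runs end to end.
patch = np.random.rand(224, 224, 3)
nuclei_mask = np.zeros((224, 224), dtype=int)
nuclei_mask[40:60, 40:60] = 1                   # toy "nucleus" 1
nuclei_mask[120:150, 100:130] = 1               # toy "nucleus" 2

def predict_fn(images):                         # placeholder for the tumor classifier
    return np.random.rand(len(images), 2)       # (batch, 2 class probabilities)

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    patch, predict_fn, top_labels=1, num_samples=100,
    segmentation_fn=lambda img: semantic_superpixels(nuclei_mask))

# One weight per region; the nine background-block weights can now be
# compared against the nuclei weights.
weights = dict(explanation.local_exp[explanation.top_labels[0]])
```

Because nuclei and background blocks enter LIME as regions of comparable number and size, their weights can be compared directly.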
- [1] Ribeiro, Marco Tulio, Sameer Singh, and Carlos Guestrin. ""Why should I trust you?" Explaining the predictions of any classifier." Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 2016.
- [2] Palatnik de Sousa, Iam, Marley Maria Bernardes Rebuzzi Vellasco, and Eduardo Costa da Silva. "Local interpretable model-agnostic explanations for classification of lymph node metastases." Sensors 19.13 (2019): 2969.
- [3] Graziani, Mara, et al. "Evaluation and Comparison of CNN Visual Explanations for Histopathology." (2020).
- [4] Gamper, Jevgenij, et al. "PanNuke: an open pan-cancer histology dataset for nuclei instance segmentation and classification." European Congress on Digital Pathology. Springer, Cham, 2019.
- [5] Litjens, Geert, et al. "1399 H&E-stained sentinel lymph node sections of breast cancer patients: the CAMELYON dataset." GigaScience 7.6 (2018): giy065.
To get a local copy up and running, follow these simple steps.
This code was developed in Python 3.6 with TensorFlow 2. You will also need some standard packages to replicate the experiments. Follow the instructions in Installation to set up the environment.
- Clone the repo

      git clone https://github.com/maragraziani/MICCAI2021_replicate

- Install the Python packages with pip

      pip install numpy pandas matplotlib h5py seaborn scikit-image
      pip install git+https://github.com/palatos/lime@ColorExperiments
For usage examples, please refer to the Notebooks folder
Distributed under the MIT License. See LICENSE for more information.
Mara Graziani - @mormontre - [email protected]

Iam Palatnik - [email protected]
If you make use of the code, please cite our paper in your work:
@article{graziani2021sharpening,
title = "Sharpening Local Interpretable Model-agnostic Explanations for Histopathology: Improved Understandability and Reliability",
journal = "to be presented at MICCAI2021",
pages = "",
year = "2021",
issn = "",
doi = "",
author = "Mara Graziani and Iam Palatnik De Sousa and Marley M.B.R. Vellasco and Eduardo Costa da Silva and Henning Mueller and Vincent Andrearczyk"
}