Awesome Explainable Graph Reasoning
A collection of research papers and software related to explainability in graph machine learning.
Contents
License
Hi all, I've added a new reference to a paper of mine related to counterfactual explanations for molecule predictions. I hope this is appreciated :)
Link to paper: https://arxiv.org/abs/2104.08060
You might want to double-check that this commit is OK: I added a new sub-heading called "concept-based methods", which is not covered by the survey paper that the rest of the approaches are categorised into.
Two papers on rule-based reasoning:
And one application note on a web application for visualizing predictions and their explanations made using the approaches above:
The work 'Evaluating Attribution for Graph Neural Networks' is particularly useful because of its benchmarking approach: it covers several attribution techniques and GNN architectures.
Hi, I have been impressed by how fast this field is growing. As I continue reading and learning, I will contribute papers to make this list even better.
In particular, @flyingdoog maintains a list of papers (grouped by year) at https://github.com/flyingdoog/awesome-graph-explainability-papers that may be worth reviewing.
Cockpit is a visual and statistical debugger specifically designed for deep learning!
Correlation Explanation Methods: official implementation of linear correlation explanation (linear CorEx) and temporal correlation explanation (T-CorEx).
TensorFlow Model Analysis (TFMA) is a library for evaluating TensorFlow models.
Neural network visualization toolkit for tf.keras
👋🦊 Xplique is a Python toolkit dedicated to explainability, currently based on Tensorflow.
Skater is a unified framework to enable model interpretation for all kinds of models, helping one build an interpretable machine learning system.
Tool for visualizing attention in the Transformer model (BERT, GPT-2, Albert, XLNet, RoBERTa, CTRL, etc.)
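This entry appears to describe the BertViz project; the sketch below is a rough head-view example, assuming BertViz's `head_view` helper and a Hugging Face model configured to return attentions (best run in a notebook). The model name and input sentence are arbitrary examples.

```python
# Rough sketch: render BERT self-attention with BertViz's head_view.
# Intended for a Jupyter notebook; model and sentence are placeholders.
from transformers import BertModel, BertTokenizer
from bertviz import head_view

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased", output_attentions=True)

inputs = tokenizer("Graphs can explain their own predictions", return_tensors="pt")
outputs = model(**inputs)

tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
head_view(outputs.attentions, tokens)  # interactive attention-head view
```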
dtreeviz: Decision Tree Visualization. A Python library for decision tree visualization and model interpretation. Currently supports scikit-learn.
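A minimal sketch of the classic dtreeviz call on a scikit-learn tree (this is the older top-level dtreeviz() API; newer releases moved to a model() wrapper, and Graphviz must be installed for rendering). The dataset and tree settings are just examples.

```python
# Sketch: visualize a fitted scikit-learn decision tree with dtreeviz.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier
from dtreeviz.trees import dtreeviz  # older dtreeviz API

iris = load_iris()
clf = DecisionTreeClassifier(max_depth=3).fit(iris.data, iris.target)

viz = dtreeviz(
    clf,
    iris.data,
    iris.target,
    target_name="species",
    feature_names=iris.feature_names,
    class_names=list(iris.target_names),
)
viz.save("iris_tree.svg")  # or viz.view() to open the rendered tree
```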
⬛ PyCEbox: Python Individual Conditional Expectation Plot Toolbox. A Python implementation of individual conditional expectation plots, inspired by R's ICEbox.
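A rough ICE-curve sketch, assuming PyCEbox's `ice` and `ice_plot` helpers in `pycebox.ice` (the dataset, model, and feature below are arbitrary stand-ins):

```python
# Sketch: individual conditional expectation (ICE) curves with pycebox.
# One curve per row of the dataset, varying a single feature.
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from pycebox.ice import ice, ice_plot  # assumed helper functions

data = load_diabetes()
df = pd.DataFrame(data.data, columns=data.feature_names)
model = RandomForestRegressor(n_estimators=50).fit(df, data.target)

ice_df = ice(df, "bmi", model.predict)  # ICE data for the 'bmi' feature
ice_plot(ice_df)
plt.savefig("ice_bmi.png")
```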
themis-ml is a Python library built on top of pandas and sklearn that implements fairness-aware machine learning algorithms.
Lucid is a collection of infrastructure and tools for research in neural network interpretability. Note that TensorFlow 2 is not currently supported.
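For orientation, the Lucid quickstart (TensorFlow 1.x, usually run in a notebook) looks roughly like this; the InceptionV1 model and the channel objective are the tutorial's example values, not requirements:

```python
# Sketch of Lucid feature visualization: optimize an input image to excite
# one channel of a pretrained InceptionV1 network (requires TensorFlow 1.x).
import lucid.modelzoo.vision_models as models
import lucid.optvis.render as render

model = models.InceptionV1()
model.load_graphdef()

# Visualize channel 476 of the 'mixed4a_pre_relu' layer.
images = render.render_vis(model, "mixed4a_pre_relu:476")
```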
tensorboardX: write TensorBoard events with a simple function call. The current release (v2.1) is tested on anaconda3 with PyTorch 1.5.1.
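A minimal logging sketch (the log directory and tag name below are arbitrary choices):

```python
# Sketch: log scalar metrics with tensorboardX so TensorBoard can plot them.
from tensorboardX import SummaryWriter

writer = SummaryWriter("runs/example")  # arbitrary log directory

for step in range(100):
    fake_loss = 1.0 / (step + 1)  # stand-in for a real training loss
    writer.add_scalar("train/loss", fake_loss, step)

writer.close()
```

Running `tensorboard --logdir runs` then displays the logged curves.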
PDPbox: a Python partial dependence plot toolbox. This repository is inspired by ICEbox; the goal is to visualize the impact of certain features on a model's predictions.
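A short sketch using the older PDPbox 0.2.x interface (newer releases changed the API); the dataset, model, and feature are illustrative:

```python
# Sketch: partial dependence of a fitted model on one feature with pdpbox.
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from pdpbox import pdp

data = load_breast_cancer()
df = pd.DataFrame(data.data, columns=data.feature_names)
model = RandomForestClassifier(n_estimators=50).fit(df, data.target)

# Compute and plot how 'mean radius' shifts the model's predictions.
pdp_vals = pdp.pdp_isolate(
    model=model,
    dataset=df,
    model_features=list(df.columns),
    feature="mean radius",
)
pdp.pdp_plot(pdp_vals, "mean radius")
plt.savefig("pdp_mean_radius.png")
```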
GNNLens2 is an interactive visualization tool for graph neural networks (GNN).
FairML: Auditing Black-Box Predictive Models. FairML is a Python toolbox for auditing machine learning models for bias.
Portal is the fastest way to load and visualize your deep neural networks on images and videos 🔮
ELI5 is a Python package which helps to debug machine learning classifiers and explain their predictions. It provides support for several machine learning frameworks and packages.
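A small sketch of typical ELI5 usage with a scikit-learn estimator (the dataset and model are placeholders; in a notebook, `eli5.show_weights` renders the same information as HTML):

```python
# Sketch: global and local explanations of a linear classifier with ELI5.
import eli5
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
clf = LogisticRegression(max_iter=1000).fit(X, y)

# Global view: per-class feature weights of the fitted model.
print(eli5.format_as_text(eli5.explain_weights(clf)))

# Local view: how each feature contributes to one prediction.
print(eli5.format_as_text(eli5.explain_prediction(clf, X[0])))
```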
ModelChimp is an experiment tracker for deep learning and machine learning experiments.
Automatic neural network visualizations generated in your browser!