Examples of techniques for training interpretable ML models, explaining ML models, and debugging ML models for accuracy, discrimination, and security.
📍 Interactive Studio for Explanatory Model Analysis
💡 Adversarial attacks on explanations and how to defend them
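To make the attack surface concrete, here is a minimal sketch of one such attack on a gradient-based explanation: perturb the input so its saliency map drifts away from the original explanation while the perturbation stays bounded. The model, data, step size, and bound are illustrative placeholders, not code from the repository.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(10, 16), nn.Tanh(), nn.Linear(16, 2))
model.eval()

def saliency(x):
    # Gradient of the top logit w.r.t. the input; create_graph=True keeps
    # the graph so we can later differentiate *through* the explanation.
    out = model(x).max()
    (grad,) = torch.autograd.grad(out, x, create_graph=True)
    return grad

x0 = torch.randn(1, 10)
target_sal = saliency(x0.clone().requires_grad_(True)).detach()

x = x0.clone()
for _ in range(20):
    x = x.detach().requires_grad_(True)
    # Push the current saliency away from the original explanation.
    loss = -(saliency(x) - target_sal).pow(2).sum()
    (g,) = torch.autograd.grad(loss, x)
    with torch.no_grad():
        x = x - 0.01 * g.sign()                 # descend the (negated) distance
        x = x0 + (x - x0).clamp(-0.1, 0.1)      # keep the perturbation small

print((saliency(x.requires_grad_(True)) - target_sal).abs().max().item())
```

A defense would then check how stable the explanation is under exactly this kind of bounded perturbation.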
Boosting the efficiency of AI research
Implements the Tsetlin Machine, Convolutional Tsetlin Machine, Regression Tsetlin Machine, Weighted Tsetlin Machine, and Embedding Tsetlin Machine, with support for continuous features, multi-granularity, clause indexing, and literal budget
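For readers new to the family, a minimal sketch of Tsetlin Machine *inference* (training via Tsetlin automata feedback is omitted): clauses are conjunctions over input literals, and positive and negative clauses vote on the class. The clauses and input below are hand-picked toy values, not the library's API.

```python
import numpy as np

def clause_output(x, included_literals):
    # Literal vector: the input bits followed by their negations.
    literals = np.concatenate([x, 1 - x])
    # A clause is the conjunction (AND) of its included literals.
    return int(np.all(literals[included_literals] == 1))

def classify(x, positive_clauses, negative_clauses):
    # Positive clauses vote for the class, negative clauses against;
    # the sign of the vote sum gives the prediction.
    votes = sum(clause_output(x, c) for c in positive_clauses) \
          - sum(clause_output(x, c) for c in negative_clauses)
    return int(votes >= 0)

x = np.array([1, 0, 1])
positive = [[0, 2], [4]]   # x0 AND x2; NOT x1 (index 4 is the negation of x1)
negative = [[1]]           # x1
print(classify(x, positive, negative))  # -> 1
```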
PyTorch implementation of various neural network interpretability methods
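As an example of the kind of method such a collection covers, here is a minimal vanilla saliency map computed from input gradients; the model and input are toy placeholders rather than code from the repository.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 3))
model.eval()

x = torch.randn(1, 4, requires_grad=True)   # toy input
logits = model(x)
target = logits.argmax(dim=1).item()

# Backpropagate the winning logit to the input; the gradient magnitude
# shows how sensitive that prediction is to each input feature.
logits[0, target].backward()
saliency = x.grad.abs().squeeze(0)
print(saliency)
```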
SurvSHAP(t): Time-dependent explanations of machine learning survival models
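The core idea, sketched roughly below, is to attribute the predicted survival function S(t | x) to the features at each time point. Here a tiny Monte Carlo permutation Shapley estimator stands in for the package's estimator, and `predict_survival` is a hypothetical stand-in model, not the survshap API.

```python
import numpy as np

rng = np.random.default_rng(0)

def shapley_at_time(predict_survival, x, background, t, n_perm=50):
    # Monte Carlo Shapley values for the scalar output S(t | x).
    d = len(x)
    phi = np.zeros(d)
    for _ in range(n_perm):
        order = rng.permutation(d)
        z = background.copy()              # start from a reference input
        prev = predict_survival(z, t)
        for j in order:
            z[j] = x[j]                    # add feature j to the coalition
            cur = predict_survival(z, t)
            phi[j] += cur - prev           # marginal contribution of j
            prev = cur
    return phi / n_perm

# Toy survival model: a larger feature sum means faster decay of S(t | x).
def predict_survival(z, t):
    return np.exp(-t * (0.1 + 0.05 * z.sum()))

x = np.array([2.0, 0.0, 1.0])
background = np.zeros(3)
for t in (1.0, 5.0, 10.0):
    print(t, np.round(shapley_at_time(predict_survival, x, background, t), 4))
```

Repeating the estimate over a grid of time points yields the time-dependent explanation curves the paper's title refers to.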
Mechanistically interpretable neurosymbolic AI (Nature Computational Science, 2024): losslessly compressing neural networks to computer code and discovering new algorithms that generalize out-of-distribution and outperform human-designed algorithms
Interesting resources related to Explainable Artificial Intelligence, Interpretable Machine Learning, Interactive Machine Learning, Human-in-the-Loop, and Visual Analytics.
Code for "Interpretable Adversarial Perturbation in Input Embedding Space for Text" (IJCAI 2018).
Code repository for the paper "A Case-based Interpretable Deep Learning Model for Classification of Mass Lesions in Digital Mammography" (Nature Machine Intelligence, 2021): https://www.nature.com/articles/s42256-021-00423-x
Summarization of static graphs using the Minimum Description Length principle
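As a toy illustration of the MDL idea behind such summarization: choose the grouping of nodes into supernodes that minimizes the cost of encoding the superedges plus the cost of the edge corrections. The cost model below is deliberately simplified (one unit per superedge or correction) and is not the repository's encoding.

```python
# A small undirected graph: a triangle 0-1-2 with a pendant node 3.
edges = {(0, 1), (0, 2), (1, 2), (2, 3)}
nodes = [0, 1, 2, 3]

def partitions(items):
    # Enumerate all ways to group the nodes into supernodes.
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for part in partitions(rest):
        yield [[first]] + part                      # `first` alone
        for i in range(len(part)):                  # or joining a group
            yield part[:i] + [[first] + part[i]] + part[i + 1:]

def description_length(partition):
    # For each supernode pair, either store a superedge and correct the
    # missing node pairs, or store no superedge and correct the present
    # ones -- whichever costs fewer units.
    cost = 0
    for gi in range(len(partition)):
        for gj in range(gi, len(partition)):
            A, B = partition[gi], partition[gj]
            if gi == gj:
                pairs = [(u, v) for u in A for v in A if u < v]
            else:
                pairs = [(u, v) for u in A for v in B]
            present = sum((min(p), max(p)) in edges for p in pairs)
            cost += min(1 + len(pairs) - present, present)
    return cost

best = min(partitions(nodes), key=description_length)
print(best, description_length(best))   # e.g. [[0, 1, 2], [3]] with cost 2
```

Merging the triangle into one supernode pays for itself: one superedge plus one correction beats storing four raw edges.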
This repo contains code for "Invariant Grounding for Video Question Answering".
Techniques & resources for training interpretable ML models, explaining ML models, and debugging ML models.
graph neural networks, information theory, AI for Sciences
Trustworthy Length-of-Stay (LoS) Prediction Based on Multi-modal Data (AIME 2023)
Framework for material structure exploration
Visual explanations of supervised classification models
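For instance, one widely used visual explanation for a supervised classifier is a partial dependence plot; the scikit-learn sketch below uses a toy dataset and is not tied to the repository above.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import PartialDependenceDisplay
import matplotlib.pyplot as plt

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X, y)

# Plot how the predicted probability changes with features 0 and 3,
# averaging out the remaining features.
PartialDependenceDisplay.from_estimator(clf, X, features=[0, 3])
plt.show()
```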
Maximal Linkability metric to evaluate the linkability of (protected) biometric templates. Paper: "Measuring Linkability of Protected Biometric Templates using Maximal Leakage", IEEE TIFS, 2023.
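For context, the standard information-theoretic definition of maximal leakage from X to Y (assumed here; the paper's exact variant may differ) is:

```latex
% Maximal leakage: log of the summed column maxima of the channel P_{Y|X}.
\mathcal{L}(X \to Y) = \log \sum_{y \in \mathcal{Y}} \max_{x :\, P_X(x) > 0} P_{Y|X}(y \mid x)
```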