OpenXAI : Towards a Transparent Evaluation of Model Explanations
Editing machine learning models to reflect human knowledge and values
🏔️ Summit: Scaling Deep Learning Interpretability by Visualizing Activation and Attribution Summarizations
A user interface to interpret machine learning models.
Visually compare fill-in-the-blank LLM prompts to uncover learned biases and associations!
Online exploration of memory-reduction strategies for a deep reinforcement learning (DRL) agent trained to solve a navigation task in ViZDoom
ir_explain: a Python Library of Explainable IR Methods
Visual analytics approach presented in the paper "Visual Analytics Tool for the Interpretation of Hidden States in Recurrent Neural Networks" (VCIBA, 2021).
A web user interface for the OncoText Pathology System (https://github.com/yala/OncoText)
Web-based interpretability tool for LLMs using Hugging Face and Inseq