# fairness-ml

Here are 194 public repositories matching this topic...

Responsible AI Toolbox is a suite of model and data exploration and assessment user interfaces and libraries that enable a better understanding of AI systems. These interfaces and libraries empower developers and stakeholders of AI systems to develop and monitor AI more responsibly and to take better data-driven actions (a usage sketch follows this entry).

  • Updated Aug 9, 2024
  • TypeScript
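
As a rough illustration of how the toolbox is typically driven from Python, the sketch below trains a small scikit-learn classifier, gathers insights with the `responsibleai` package, and launches the interactive dashboard from `raiwidgets`. The dataset, model, and the particular analyses added are assumptions made for illustration, not part of the repository description.

```python
# Minimal sketch: Responsible AI Toolbox on a toy scikit-learn model.
# The dataset, model, and selected analyses are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

from responsibleai import RAIInsights
from raiwidgets import ResponsibleAIDashboard

# Toy tabular dataset with a "target" column, split into train/test frames.
df = load_breast_cancer(as_frame=True).frame
train, test = train_test_split(df, test_size=0.2, random_state=0)

model = RandomForestClassifier(random_state=0)
model.fit(train.drop(columns=["target"]), train["target"])

# Collect insights for the model and data, then choose which analyses to run.
insights = RAIInsights(
    model, train, test, target_column="target", task_type="classification"
)
insights.explainer.add()       # global and local feature importances
insights.error_analysis.add()  # error cohort analysis
insights.compute()

# Opens the Responsible AI dashboard (e.g. inline in a Jupyter notebook).
ResponsibleAIDashboard(insights)
```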

WEFE: The Word Embeddings Fairness Evaluation Framework. WEFE standardizes bias measurement and mitigation in word embedding models. Please feel welcome to open an issue if you have any questions, or a pull request if you want to contribute to the project (a usage sketch follows this entry).

  • Updated Jun 18, 2024
  • Python
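
As a rough sketch of the workflow WEFE standardizes, the example below measures a gender-related association in pretrained GloVe vectors with the WEAT metric, following WEFE's documented `WordEmbeddingModel`, `Query`, and `WEAT` classes. The embedding choice and word sets are illustrative assumptions; the same query can be rerun with WEFE's other metrics to compare measurements.

```python
# Minimal sketch: measuring bias in word embeddings with WEFE's WEAT metric.
# The embedding and word sets below are illustrative assumptions.
import gensim.downloader as api

from wefe.metrics import WEAT
from wefe.query import Query
from wefe.word_embedding_model import WordEmbeddingModel

# Wrap pretrained gensim KeyedVectors so WEFE can query them.
embedding = WordEmbeddingModel(api.load("glove-wiki-gigaword-50"), "glove-50")

# A query pairs target word sets (here, gendered terms) with attribute sets.
query = Query(
    target_sets=[["she", "woman", "girl"], ["he", "man", "boy"]],
    attribute_sets=[["science", "math", "physics"], ["poetry", "art", "dance"]],
    target_sets_names=["Female terms", "Male terms"],
    attribute_sets_names=["Science", "Arts"],
)

# Run the metric; the returned dict contains the query name and the WEAT score.
result = WEAT().run_query(query, embedding)
print(result)
```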

deep-explanation-penalization
