🐢 Open-Source Evaluation & Testing for LLMs and ML models
Credo AI Lens is a comprehensive assessment framework for AI systems. Lens standardizes model and data assessment, and acts as a central gateway to assessments created in the open source community.
Package for evaluating the performance of methods that aim to increase fairness, accountability, and/or transparency
Evaluation & testing framework for computer vision models
Debiasing word embeddings
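Word-embedding debiasing is commonly done by removing the component of each vector that lies along a learned bias direction (the "neutralize" step of hard debiasing). A minimal sketch, assuming a precomputed bias direction; the vectors here are toy data, not a real embedding:

```python
import numpy as np

def neutralize(word_vec, bias_direction):
    """Remove the component of a word vector along a bias direction,
    leaving the result orthogonal to that direction."""
    b = bias_direction / np.linalg.norm(bias_direction)
    return word_vec - np.dot(word_vec, b) * b

# Toy example: after neutralizing, the vector has no projection on the bias axis.
bias = np.array([1.0, 0.0])
v = np.array([0.7, 0.5])
v_debiased = neutralize(v, bias)
print(np.dot(v_debiased, bias))  # ~0.0
```

In the full hard-debias procedure this step is applied only to words that should be neutral, while definitional pairs are equalized separately.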
[AAAI 2018] Implementation of the Ethics Shaping approach proposed in "A low-cost ethics shaping approach for designing reinforcement learning agents"
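Ethics shaping augments the environment's task reward with an additional term scoring how well the agent's behavior matches ethically desirable behavior. A minimal sketch of that additive reward-shaping idea, with a hypothetical `ethics_bonus` scoring function (the paper derives its shaping term from human ethical data; this toy just penalizes one action):

```python
def shaped_reward(env_reward, state, action, ethics_bonus, weight=0.5):
    """Combine the task reward with a weighted ethics-shaping term.
    `ethics_bonus` is a hypothetical function scoring (state, action)."""
    return env_reward + weight * ethics_bonus(state, action)

# Toy usage: penalize a hypothetical 'harm' action.
bonus = lambda s, a: -1.0 if a == "harm" else 0.0
print(shaped_reward(1.0, None, "harm", bonus))  # 0.5
print(shaped_reward(1.0, None, "help", bonus))  # 1.0
```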
A micro AI framework for switching between multiple LLMs, preparing datasets, training models, and deploying them in isolated environments using Docker
Automate ethical AI assessments via GitHub Actions
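Automating such assessments typically means wiring an audit script into a workflow that runs on every pull request. A hypothetical sketch of what that workflow could look like (the `run_fairness_audit.py` script and its `--fail-under` flag are illustrative placeholders, not part of any specific project):

```yaml
# Hypothetical workflow: run a fairness audit on every pull request.
name: ethical-ai-check
on: [pull_request]
jobs:
  audit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - name: Run fairness audit
        run: python scripts/run_fairness_audit.py --fail-under 0.8  # hypothetical script
```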
Code for a fair-representation learning path
Welcome to the "Harnessing the Power of AI" workshop! This GitHub repository serves as a comprehensive resource for participants seeking hands-on learning and in-depth understanding of AI concepts, techniques, and tools.
Model-checking systems for obligations
The first ethical language generation model.
A bias bounty competition for income prediction, using the pointer decision list method to improve group accuracy.
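The pointer decision list idea can be sketched as an ordered list of (group test, predictor) patches placed in front of a base model: a prediction uses the most recently accepted patch whose group contains the input, and a proposed patch is accepted only if it improves accuracy on its group. A minimal sketch under those assumptions, with toy data rather than any real income-prediction task:

```python
class PointerDecisionList:
    """Sketch of a pointer decision list: patches (newest first) in
    front of a fallback base model."""

    def __init__(self, base_predict):
        self.base_predict = base_predict
        self.patches = []  # list of (in_group, predictor), newest first

    def predict(self, x):
        for in_group, predictor in self.patches:
            if in_group(x):
                return predictor(x)
        return self.base_predict(x)

    def accuracy(self, data):
        """Fraction of (x, label) pairs predicted correctly."""
        return sum(self.predict(x) == y for x, y in data) / len(data)

    def propose(self, in_group, predictor, data):
        """Accept the patch only if it raises accuracy on its group."""
        group = [(x, y) for x, y in data if in_group(x)]
        correct_before = sum(self.predict(x) == y for x, y in group)
        correct_after = sum(predictor(x) == y for x, y in group)
        if correct_after > correct_before:
            self.patches.insert(0, (in_group, predictor))
            return True
        return False

# Toy usage: the base model always predicts 0, which is wrong for x >= 5.
data = [(x, 1 if x >= 5 else 0) for x in range(10)]
pdl = PointerDecisionList(lambda x: 0)
accepted = pdl.propose(lambda x: x >= 5, lambda x: 1, data)
print(accepted, pdl.accuracy(data))  # True 1.0
```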