π’ Open-Source Evaluation & Testing for ML & LLM systems
-
Updated
Nov 22, 2024 - Python
π’ Open-Source Evaluation & Testing for ML & LLM systems
The Python Risk Identification Tool for generative AI (PyRIT) is an open source framework built to empower security professionals and engineers to proactively identify risks in generative AI systems.
An offensive security toolset for Microsoft 365 focused on Microsoft Copilot, Copilot Studio and Power Platform
Agentic LLM Vulnerability Scanner / AI red teaming kit
π€π‘οΈπππ Tiny package designed to support red teams and penetration testers in exploiting large language model AI solutions.
LMAP (large language model mapper) is like NMAP for LLM, is an LLM Vulnerability Scanner and Zero-day Vulnerability Fuzzer.
This is my prompts for Lakera's Gandalf challenges
Add a description, image, and links to the ai-red-team topic page so that developers can more easily learn about it.
To associate your repository with the ai-red-team topic, visit your repo's landing page and select "manage topics."