Papers and resources related to the security and privacy of LLMs 🤖
-
Updated
Jun 12, 2024 - Python
Papers and resources related to the security and privacy of LLMs 🤖
LLM App templates for RAG, knowledge mining, and stream analytics. Ready to run with Docker,⚡in sync with your data sources.
🐢 Open-Source Evaluation & Testing for LLMs and ML models
安全手册,企业安全实践、攻防与安全研究知识库
The fastest && easiest LLM security and privacy guardrails for GenAI apps.
[CCS'24] A dataset consists of 15,140 ChatGPT prompts from Reddit, Discord, websites, and open-source datasets (including 1,405 jailbreak prompts).
The Security Toolkit for LLM Interactions
AI-driven Threat modeling-as-a-Code (TaaC-AI)
A secure low code honeypot framework, leveraging AI for System Virtualization.
AiShields is an open-source Artificial Intelligence Data Input and Output Sanitizer
Agentic LLM Vulnerability Scanner
Formalizing and Benchmarking Prompt Injection Attacks and Defenses
User prompt attack detection system
Framework for LLM evaluation, guardrails and security
Ultra-fast, low latency LLM prompt injection/jailbreak detection ⛓️
A benchmark for prompt injection detection systems.
Risks and targets for assessing LLMs & LLM vulnerabilities
This repository contains various attack against Large Language Models.
SecGPT: An execution isolation architecture for LLM-based systems
Add a description, image, and links to the llm-security topic page so that developers can more easily learn about it.
To associate your repository with the llm-security topic, visit your repo's landing page and select "manage topics."