Stars
Bisheng is an open LLM DevOps platform for next-generation AI applications.
Ingest, parse, and optimize any data format ➡️ from documents to multimedia ➡️ for enhanced compatibility with GenAI frameworks
Versioning extension for SQLAlchemy.
Integration of FastAPI framework supported by Pydantic with SQLAlchemy ORM and PostgreSQL on asyncpg driver
Dify is an open-source LLM app development platform. Dify's intuitive interface combines AI workflow, RAG pipeline, agent capabilities, model management, observability features and more, letting yo…
Port of the libssh SSH client & server to an ESP32 Arduino library
A local chatbot fine-tuned on bilibili user comments.
Beimingwu is the first systematic open-source implementation of the learnware dock system, providing a preliminary research platform for learnware studies and enabling effective learnware search an…
tiktoken is a fast BPE tokeniser for use with OpenAI's models.
ASCII Art Prompt Injection is a novel approach to hacking AI assistants using ASCII art. This project leverages the distracting nature of ASCII art to bypass security measures and inject prompts in…
Image Prompt Injection is a Python script that demonstrates how to embed a secret prompt within an image using steganography techniques. This hidden prompt can be later extracted by an AI system fo…
The Prompt Injection Testing Tool is a Python script designed to assess the security of your AI system's prompt handling against a predefined list of user prompts commonly used for injection attack…
Every practical and proposed defense against prompt injection.
Official repo for Customized but Compromised: Assessing Prompt Injection Risks in User-Designed GPTs
ChatGPT Jailbreaks, GPT Assistants Prompt Leaks, GPTs Prompt Injection, LLM Prompt Security, Super Prompts, Prompt Hack, Prompt Security, Ai Prompt Engineering, Adversarial Machine Learning.
This project investigates the security of large language models by performing binary classification of a set of input prompts to discover malicious prompts. Several approaches have been analyzed us…
Make your GenAI Apps Safe & Secure 🚀 Test & harden your system prompt
Generative Agents: Interactive Simulacra of Human Behavior
🦜🔗 Build context-aware reasoning applications
[ICLR'24 spotlight] An open platform for training, serving, and evaluating large language models for tool learning.
The official GitHub page for the survey paper "A Survey on Evaluation of Large Language Models".
The all-in-one Desktop & Docker AI application with full RAG and AI Agent capabilities.
SWE-agent takes a GitHub issue and tries to automatically fix it, using GPT-4, or your LM of choice. It solves 12.47% of bugs in the SWE-bench evaluation set and takes just 1 minute to run.
Benchmarking long-form factuality in large language models. Original code for our paper "Long-form factuality in large language models".
A package to evaluate factuality of long-form generation. Original implementation of our EMNLP 2023 paper "FActScore: Fine-grained Atomic Evaluation of Factual Precision in Long Form Text Generation"
Drop in a screenshot and convert it to clean code (HTML/Tailwind/React/Vue)
My inputs for the LLM Gandalf made by Lakera