Stars
RAGLAB: A Modular and Research-Oriented Unified Framework for Retrieval-Augmented Generation
An automated pipeline for evaluating LLMs for role-playing.
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
Run PyTorch LLMs locally on servers, desktops, and mobile devices
🔍 An LLM-based Multi-agent Framework of Web Search Engine (like Perplexity.ai Pro and SearchGPT)
Dify is an open-source LLM app development platform. Its intuitive interface combines AI workflows, RAG pipelines, agent capabilities, model management, observability features, and more.
A modular graph-based Retrieval-Augmented Generation (RAG) system
A blazing-fast inference solution for text embedding models
A sample app for the Retrieval-Augmented Generation pattern running in Azure, using Azure AI Search for retrieval and Azure OpenAI large language models to power ChatGPT-style and Q&A experiences.
The AI-native database built for LLM applications, providing incredibly fast hybrid search over dense vectors, sparse vectors, tensors (multi-vector), and full text
Official repo for "LongRAG: Enhancing Retrieval-Augmented Generation with Long-context LLMs".
Langchain-Chatchat (formerly Langchain-ChatGLM): RAG and Agent applications built on Langchain with language models such as ChatGLM, Qwen, and Llama, serving local-knowledge-based LLM question answering.
pytrec_eval is an Information Retrieval evaluation tool for Python, based on the popular trec_eval.
Infinity is a high-throughput, low-latency REST API for serving text embedding, reranking, and CLIP models
The Triton TensorRT-LLM Backend
Korvus is a search SDK that unifies the entire RAG pipeline in a single database query. Built on top of Postgres with bindings for Python, JavaScript, Rust and C.
MTEB: Massive Text Embedding Benchmark
A 4-hour coding workshop to understand how LLMs are implemented and used
An implementation of the paper "Searching for Best Practices in Retrieval-Augmented Generation"
Zero-shot Document Ranking with Large Language Models.
[KDD 2024] Improving the Consistency in Cross-Lingual Cross-Modal Retrieval with 1-to-K Contrastive Learning