Stars
Official repository for LightSeq: Sequence Level Parallelism for Distributed Training of Long Context Transformers
The repository provides code for running inference with the SegmentAnything Model (SAM), links for downloading the trained model checkpoints, and example notebooks that show how to use the model.
DSPy: The framework for programming—not prompting—foundation models
Generative Agents: Interactive Simulacra of Human Behavior
A high-throughput and memory-efficient inference and serving engine for LLMs
An Open-Ended Embodied Agent with Large Language Models
A guidance language for controlling large language models.
OpenLLaMA, a permissively licensed open source reproduction of Meta AI’s LLaMA 7B trained on the RedPajama dataset
A GPT-4 AI Tutor Prompt for customizable personalized learning experiences.
Open-sourced codes for MiniGPT-4 and MiniGPT-v2 (https://minigpt-4.github.io, https://minigpt-v2.github.io/)
NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based conversational systems.
High-performance In-browser LLM Inference Engine
OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and retrieve information dynamically to do so.
Instruction Tuning with GPT-4
A new markup-based typesetting system that is powerful and easy to learn.
An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
The ChatGPT Retrieval Plugin lets you easily find personal or work documents by asking questions in natural language.
Examples and instructions on using LLMs (especially ChatGPT) during a PhD
Evals is a framework for evaluating LLMs and LLM systems, and an open-source registry of benchmarks.
Instruct-tune LLaMA on consumer hardware
LLM Chain for answering questions from documents with citations
The simplest, fastest repository for training/finetuning medium-sized GPTs.