Stars
Convenient wrapper for fine-tuning and inference of Large Language Models (LLMs) with several quantization techniques (GPTQ, bitsandbytes)
OpenChat: Advancing Open-source Language Models with Imperfect Data
A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights.
Accessible large language models via k-bit quantization for PyTorch (see the loading sketch after this list)
Concurrently chat with ChatGPT, Bing Chat, Bard, Alpaca, Vicuna, Claude, ChatGLM, MOSS, iFLYTEK Spark (讯飞星火), ERNIE Bot (文心一言), and more, and discover the best answers
A guidance language for controlling large language models.
0cc4m / GPTQ-for-LLaMa
Forked from qwopqwop200/GPTQ-for-LLaMa. 4-bit quantization of LLMs using GPTQ
LLM training code for Databricks foundation models
LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions
Open Academic Research on Improving LLaMA to SOTA LLM
LLMs built upon Evol-Instruct: WizardLM, WizardCoder, WizardMath
A curated list of practical guide resources of LLMs (LLMs Tree, Examples, Papers)
The RedPajama-Data repository contains code for preparing large datasets for training large language models.
JAX implementation of OpenAI's Whisper model for up to 70x speed-up on TPU.
StableLM: Stability AI Language Models
Scrapy, a fast high-level web crawling & scraping framework for Python (a minimal spider sketch follows the list)
A database of movie scripts from several sources
Neural Networks: Zero to Hero
LlamaIndex is a data framework for your LLM applications (see the indexing sketch after this list)
🦜🔗 Build context-aware reasoning applications
A KoboldAI-like memory extension for oobabooga's text-generation-webui
Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and 4-bit GPTQ quantization, LoRA and LLaMA-Adapter fine-tuning, and pre-training. Apache 2.0 licensed.
Instruction Tuning with GPT-4
An English-language shell for any OS, powered by LLMs
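
The bitsandbytes entry above provides k-bit quantization for PyTorch. A minimal sketch of loading a causal LM in 4-bit through the Hugging Face transformers integration; the checkpoint ID and generation prompt below are placeholder assumptions, not taken from any of the listed projects:

# Minimal sketch: 4-bit (NF4) loading via the transformers/bitsandbytes
# integration. The model ID is an assumed example; any causal LM on the
# Hub should load the same way. Requires the accelerate package for
# device_map="auto".
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "huggyllama/llama-7b"  # assumed example checkpoint

# Quantize weights to 4-bit NF4, keep matmul compute in fp16.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # spread layers across available devices
)

inputs = tokenizer("Quantization lets large models fit on", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))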
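
The Scrapy entry is a general-purpose crawling framework. A minimal spider sketch against Scrapy's own tutorial site; the spider name and CSS selectors are illustrative:

# Minimal Scrapy spider. quotes.toscrape.com is Scrapy's public
# practice site; the selectors match its markup.
import scrapy

class QuotesSpider(scrapy.Spider):
    name = "quotes"
    start_urls = ["https://quotes.toscrape.com/"]

    def parse(self, response):
        # Yield one item per quote block on the page.
        for quote in response.css("div.quote"):
            yield {
                "text": quote.css("span.text::text").get(),
                "author": quote.css("small.author::text").get(),
            }

# Run with: scrapy runspider quotes_spider.py -o quotes.json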
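
For the LlamaIndex entry, a sketch of the basic ingest-index-query loop using the 2023-era top-level API; the "data" directory and the query string are assumptions, and the default setup expects an OpenAI API key in the environment:

# Minimal LlamaIndex sketch: load local documents, build a vector index,
# and query it. Assumes a ./data directory with text files and an
# OPENAI_API_KEY set for the default LLM and embeddings.
from llama_index import SimpleDirectoryReader, VectorStoreIndex

documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()
print(query_engine.query("What does the corpus say about quantization?"))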