Stars
Open-Sora: Democratizing Efficient Video Production for All
Neeko: Leveraging Dynamic LoRA for Efficient Multi-Character Role-Playing Agent
maoxxiao / LLaMA-Factory
Forked from hiyouga/LLaMA-Factory. Easy-to-use LLM fine-tuning framework (LLaMA, BLOOM, Mistral, Baichuan, Qwen, ChatGLM)
Efficiently Fine-Tune 100+ LLMs in WebUI (ACL 2024)
Generative Agents: Interactive Simulacra of Human Behavior
A large-scale 7B pretrained language model developed by BaiChuan-Inc.
A 13B large language model developed by Baichuan Intelligent Technology
Langchain-Chatchat (formerly Langchain-ChatGLM): RAG and Agent applications based on Langchain and language models such as ChatGLM, Qwen, and Llama
An optimized deep prompt tuning strategy comparable to fine-tuning across scales and tasks
ChatGLM-6B: An Open Bilingual Dialogue Language Model
A collection of papers on retrieval-based (augmented) language models.
Aligning pretrained language models with instruction data generated by themselves.
RWKV is an RNN with transformer-level LLM performance. It can be trained directly like a GPT (parallelizable), combining the best of RNNs and transformers: great performance, fast inference,…
Code and documentation to train Stanford's Alpaca models, and generate the data.
ChatRWKV is like ChatGPT but powered by RWKV (100% RNN) language model, and open source.
Evals is a framework for evaluating LLMs and LLM systems, and an open-source registry of benchmarks.
CLUE: Chinese Language Understanding Evaluation Benchmark: datasets, baselines, pre-trained models, corpus, and leaderboard
Chinese version of CLIP which achieves Chinese cross-modal retrieval and representation generation.
EasyNLP: A Comprehensive and Easy-to-use NLP Toolkit
Official repository of OFA (ICML 2022). Paper: OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence Learning Framework
Examples and guides for using the OpenAI API
ALIbaba's Collection of Encoder-decoders from MinD (Machine IntelligeNce of Damo) Lab
Source code and dataset for ACL 2019 paper "ERNIE: Enhanced Language Representation with Informative Entities"