Stars
A Chinese guide to prompting ChatGPT, with usage guides for various scenarios; learn how to get it to follow your instructions.
A repository for pretraining from scratch plus SFT of a small-parameter Chinese LLaMA 2; a single 24 GB GPU is enough to obtain a chat-llama2 model with basic Chinese question-answering ability.
Building a MiniLLM from 0 to 1 (pretrain + SFT + DPO, work in progress)
Retrieval and Retrieval-augmented LLMs
LlamaIndex is a data framework for your LLM applications
Unified Efficient Fine-Tuning of 100+ LLMs (ACL 2024)
Firefly: a training toolkit for large language models, supporting Qwen2.5, Qwen2, Yi1.5, Phi-3, Llama3, Gemma, MiniCPM, Yi, Deepseek, Orion, Xverse, Mixtral-8x7B, Zephyr, Mistral, Baichuan2, Llama2, Llama, Qwen, Baichuan, ChatGLM2, InternLM, Ziya2, Vicuna, Bloom, and other models
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning (see the usage sketch after this list).
🦜🔗 Build context-aware reasoning applications
A high-throughput and memory-efficient inference and serving engine for LLMs
RAGFlow is an open-source RAG (Retrieval-Augmented Generation) engine based on deep document understanding.
Unified efficient fine-tuning of RAG retrieval models, including embedding models, ColBERT, and cross-encoders
A one-stop data processing system to make data higher-quality, juicier, and more digestible for (multimodal) LLMs! 🍎 🍋 🌽 ➡️ 🍸 🍹 🍷
ChatGPT's explosive popularity marked a key step toward AGI; this project collects open-source alternatives to ChatGPT, including text-only and multimodal large models, for everyone's convenience.
Question and Answer based on Anything.
The code and data used for EACL2023 Paper: "Large Language Models are few(1)-shot Table Reasoners"
Chinese text classification with BERT and ERNIE
OpenCompass is an LLM evaluation platform supporting a wide range of models (Llama3, Mistral, InternLM2, GPT-4, LLaMA2, Qwen, GLM, Claude, etc.) over 100+ datasets.
You can use this foolproof tool to download and install all of PADDLE automatically.
Prefix-Tuning: Optimizing Continuous Prompts for Generation
ChatGLM3 series: Open Bilingual Chat LLMs | 开源双语对话语言模型
Code and documents of LongLoRA and LongAlpaca (ICLR 2024 Oral)
We want to create a repo to illustrate usage of transformers in Chinese.
Official release of InternLM2.5 base and chat models, with 1M-token context support
Code and documentation to train Stanford's Alpaca models, and generate the data.
The official GitHub page for the survey paper "A Survey of Large Language Models".
OpenAI Baselines: high-quality implementations of reinforcement learning algorithms
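Several of the starred projects above are libraries with Python APIs. As a concrete illustration of the PEFT entry, here is a minimal sketch of attaching LoRA adapters to a Hugging Face causal language model; the checkpoint name and hyperparameters are illustrative assumptions, not taken from any of the listed repos.

```python
# Minimal LoRA sketch with 🤗 PEFT; model name and hyperparameters are assumptions.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model, TaskType

# Load an example base checkpoint (any causal LM from the Hub would work).
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-0.5B")

lora_cfg = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                                   # low-rank dimension of the adapters
    lora_alpha=16,                         # scaling factor applied to the adapter output
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt
)

# Wrap the base model; only the small LoRA matrices remain trainable.
model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()
```

The wrapped model can then be passed to a standard Hugging Face Trainer or a custom training loop; only the adapter weights need to be saved and shipped.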