Stars
Making large AI models cheaper, faster and more accessible
An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
Code and documentation to train Stanford's Alpaca models, and generate the data.
A WebUI for Efficient Fine-Tuning of 100+ LLMs (ACL 2024)
A high-throughput and memory-efficient inference and serving engine for LLMs
Transfer learning / domain adaptation / domain generalization / multi-task learning, etc. Papers, code, datasets, applications, and tutorials.
Llama Chinese community. Online Llama3 trial and fine-tuned models are now available, with the latest Llama3 learning resources compiled in real time; all code has been updated for Llama3. Building the best Chinese Llama large model, fully open source and commercially usable.
The official repo of Qwen (通义千问) chat & pretrained large language model proposed by Alibaba Cloud.
AI Native Data App Development framework with AWEL(Agentic Workflow Expression Language) and Agents
Fast and memory-efficient exact attention
An open-source tool-augmented conversational language model from Fudan University
The official GitHub page for the survey paper "A Survey of Large Language Models".
LLMs built upon Evol Instruct: WizardLM, WizardCoder, WizardMath
Train transformer language models with reinforcement learning.
Large Language Model Text Generation Inference
Home of StarCoder: fine-tuning & inference!
Chinese LLaMA-2 & Alpaca-2 LLMs (phase-two project), with 64K long-context models
Open Source Neural Machine Translation and (Large) Language Models in PyTorch
DeepSeek Coder: Let the Code Write Itself
Retrieval and Retrieval-augmented LLMs
Firefly: a training tool for large language models, supporting Qwen2, Yi1.5, Phi-3, Llama3, Gemma, MiniCPM, Yi, Deepseek, Orion, Xverse, Mixtral-8x7B, Zephyr, Mistral, Baichuan2, Llama2, Llama, Qwen, Baichuan, ChatGLM2, InternLM, Ziya2, Vicuna, Bloom, and other large models
CodeGen is a family of open-source models for program synthesis. Trained on TPU-v4. Competitive with OpenAI Codex.
[NeurIPS 2023] Tree of Thoughts: Deliberate Problem Solving with Large Language Models
A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF)
Aligning pretrained language models with instruction data generated by themselves.