Stars
Dify is an open-source LLM app development platform. Dify's intuitive interface combines AI workflow, RAG pipeline, agent capabilities, model management, observability features and more, letting yo…
YAYI information extraction large model: instruction-tuned on over a million manually constructed, high-quality information extraction samples; developed by the 中科闻歌 (Wenge) algorithm team. (Repo for YAYI Unified Information Extraction Model)
Accessible large language models via k-bit quantization for PyTorch.
[ACL 2024] An Easy-to-use Instruction Processing Framework for LLMs.
🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
Phase 2 of the Chinese LLaMA-2 & Alpaca-2 large model project, plus 64K ultra-long-context models (Chinese LLaMA-2 & Alpaca-2 LLMs with 64K long context models)
Unified Efficient Fine-Tuning of 100+ LLMs (ACL 2024)
BELLE: Be Everyone's Large Language Model Engine (an open-source Chinese conversational LLM)
Llama Chinese community: an online Llama3 demo and fine-tuned models are now available, with the latest Llama3 learning resources collected in real time; all code has been updated for Llama3. Building the best Chinese Llama model, fully open source and commercially usable.
An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm.
The official GitHub page for the survey paper "A Survey of Large Language Models".
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
Chinese LLaMA & Alpaca large language models, with local CPU/GPU training and deployment (Chinese LLaMA & Alpaca LLMs)
Ongoing research training transformer language models at scale, including: BERT & GPT-2
A framework for few-shot evaluation of autoregressive language models.
Toolkit for creating, sharing and using natural language prompts.
Human ChatGPT Comparison Corpus (HC3), Detectors, and more! 🔥
Open source annotation tool for machine learning practitioners.
A PyTorch extension: tools for easy mixed precision and distributed training in PyTorch
ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
Pretrain and fine-tune ANY AI model of ANY size on multiple GPUs or TPUs with zero code changes.
Semi-supervised Learning for Sentiment Analysis