Starred repositories
Research on Tabular Deep Learning: Papers & Packages
Ray is a unified framework for scaling AI and Python applications. Ray consists of a core distributed runtime and a set of AI Libraries for accelerating ML workloads.
Hunyuan-DiT: A Powerful Multi-Resolution Diffusion Transformer with Fine-Grained Chinese Understanding
LMDeploy is a toolkit for compressing, deploying, and serving LLMs.
Jupyter notebooks for the Natural Language Processing with Transformers book
A WebUI for Efficient Fine-Tuning of 100+ LLMs (ACL 2024)
The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens.
LightLLM is a Python-based LLM (Large Language Model) inference and serving framework, notable for its lightweight design, easy scalability, and high-speed performance.
Fine-tuning ChatGLM-6B with PEFT | Efficient ChatGLM fine-tuning based on PEFT
THUDM / FasterTransformer
Forked from NVIDIA/FasterTransformer. Transformer-related optimization, including BERT and GPT.
An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
[ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters
Chinese LLaMA & Alpaca large language models, with local CPU/GPU training and deployment (Chinese LLaMA & Alpaca LLMs)
Instruct-tune LLaMA on consumer hardware
An open-source framework for training large multimodal models.
🦜🔗 Build context-aware reasoning applications
LlamaIndex is a data framework for your LLM applications
Aligning pretrained language models with instruction data generated by themselves.
Toolkit for creating, sharing and using natural language prompts.
ChatGLM-6B: An Open Bilingual Dialogue Language Model
Code and documentation to train Stanford's Alpaca models, and generate the data.
Making large AI models cheaper, faster and more accessible