Stars
llama3 implementation one matrix multiplication at a time
Towards LLM-RecSys Alignment with Textual ID Learning
Code for SIGIR 2024 paper: M3oE: Multi-Domain Multi-Task Mixture-of-Experts Recommendation Framework
The official gpt4free repository | a collection of powerful language models
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
A high-throughput and memory-efficient inference and serving engine for LLMs
Tools for merging pretrained large language models.
🛠 "Watt Toolkit" is an open-source, cross-platform, multi-purpose Steam toolbox.
Course to get into Large Language Models (LLMs) with roadmaps and Colab notebooks.
Large Language Model for Generative Recommendation
Chinese Mixtral mixture-of-experts large language models (Chinese Mixtral MoE LLMs)
Chinese LLaMA & Alpaca large language models with local CPU/GPU training and deployment (Chinese LLaMA & Alpaca LLMs)
All code necessary for reproducing the "Improving Sequential Recommendation with LLMs" and "Leveraging Large Language Models for Sequential Recommendation" papers.
Is ChatGPT Good at Search? LLMs as Re-Ranking Agent [EMNLP 2023 Outstanding Paper Award]
Code for the Paper "Zero-Shot Next-Item Recommendation using Large Pretrained Language Models"
A repository for pretraining from scratch and SFT-tuning a small-parameter Chinese LLaMA2; a single 24 GB GPU suffices to produce a chat-llama2 with basic Chinese question-answering ability.
Instruct-tune LLaMA on consumer hardware
A large-scale 7B pretrained language model developed by BaiChuan-Inc.
The simplest, fastest repository for training/finetuning medium-sized GPTs.
Code and documentation to train Stanford's Alpaca models, and generate the data.
An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.