Starred repositories
Write scalable load tests in plain Python 🚗💨
Understand human behavior to align with true needs
Curated list of papers and resources focused on 3D Gaussian Splatting, intended to keep pace with the anticipated surge of research in the coming months.
PyTorch code for Vision Transformers training with the Self-Supervised learning method DINO
Support for generating Swagger REST API documentation for Akka-Http based services.
[PT download | M-Team | PT helper | m-team | team | mteam] Adds a download button to the torrent detail page; clicking it lets you choose from [title | torrent name | subtitle] and adds the torrent to qBittorrent or Transmission, with support for renaming files and specifying the download location. Compatible with NexusPHP sites.
A simple Rust library for OpenAI API, free from complex async operations and redundant dependencies.
OpenAI API client library for Rust (unofficial)
Large Language Model Text Generation Inference
A generative speech model for daily dialogue.
Grafana datasource to load JSON data over your arbitrary HTTP backend
🤗 LeRobot: End-to-end Learning for Real-World Robotics in Pytorch
Build highly concurrent, distributed, and resilient message-driven applications on the JVM
CodeQwen1.5 is the code version of Qwen, the large language model series developed by the Qwen team at Alibaba Cloud.
Tutorial for Porting PyTorch Transformer Models to Candle (Rust)
Deep learning in Rust, with shape checked tensors and neural networks
Making large AI models cheaper, faster and more accessible
Llama Chinese community: online Llama3 demos and fine-tuned models are now available, and the latest Llama3 learning resources are aggregated in real time; all code has been updated for Llama3. Building the best Chinese Llama large model, fully open source and commercially usable.
Chinese LLM capability leaderboard: currently covers 106 large models, spanning commercial models such as ChatGPT, GPT-4o, Baidu ERNIE Bot, Alibaba Tongyi Qianwen, iFLYTEK Spark, SenseTime SenseChat, and MiniMax, as well as open-source models such as Baichuan, qwen2, glm4, yi, InternLM2, and llama3, with multi-dimensional capability evaluation. It provides not only a capability-score leaderboard but also the raw outputs of every model!
Code for the paper "Language Models are Unsupervised Multitask Learners"
Minimal, clean code for the Byte Pair Encoding (BPE) algorithm commonly used in LLM tokenization.