South China Normal University (SCNU)
Stars
Isaac-JL-Chen / rouge_chinese
Forked from pltrdy/rouge
Python ROUGE Score Implementation for Chinese Language Task (official rouge score)
A modern Python package and dependency manager supporting the latest PEP standards
LMDeploy is a toolkit for compressing, deploying, and serving LLMs.
A Fundamental End-to-End Speech Recognition Toolkit and Open Source SOTA Pretrained Models, Supporting Speech Recognition, Voice Activity Detection, Text Post-processing etc.
Convert Compute And Books Into Instruct-Tuning Datasets (or classifiers)!
Enforce the output format (JSON Schema, Regex etc) of a language model
DeepSeek-Coder-V2: Breaking the Barrier of Closed-Source Models in Code Intelligence
DeepSeek-V2: A Strong, Economical, and Efficient Mixture-of-Experts Language Model
CodeQwen1.5 is the code version of Qwen, the large language model series developed by Qwen team, Alibaba Cloud.
RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable), so it combines the best of RNNs and transformers: great performance, fast inference,…
NVIDIA Linux open GPU kernel modules with P2P support
An Extensible Toolkit for Finetuning and Inference of Large Foundation Models. Large Models for All.
Finetune Llama 3, Mistral, Phi & Gemma LLMs 2-5x faster with 80% less memory
Automatically split your PyTorch models on multiple GPUs for training & inference
A high-throughput and memory-efficient inference and serving engine for LLMs
Large Language Model Text Generation Inference
🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
Orion-14B is a family of models including a 14B foundation LLM and a series of models: a chat model, a long context model, a quantized model, a RAG fine-tuned model, and an Agent fine-tuned model. …
ms-swift: Use PEFT or Full-parameter to finetune 300+ LLMs or 50+ MLLMs. (Qwen2, GLM4v, Internlm2.5, Yi, Llama3, Llava-Video, Internvl2, MiniCPM-V, Deepseek, Baichuan2, Gemma2, Phi3-Vision, ...)
A WebUI for Efficient Fine-Tuning of 100+ LLMs (ACL 2024)
[ACL 2023] One Embedder, Any Task: Instruction-Finetuned Text Embeddings
A guidance language for controlling large language models.