Stars
Official PyTorch implementation of Traversal of Layers (TroL), which introduces a new layer-traversal propagation operation to achieve strong vision-language performance. (Under Review)
Accelerating the development of large multimodal models (LMMs) with lmms-eval
Open-source evaluation toolkit for large vision-language models (LVLMs), supporting GPT-4V, Gemini, QwenVLPlus, 50+ HF models, and 20+ benchmarks
Code for Adam-mini: Use Fewer Learning Rates To Gain More https://arxiv.org/abs/2406.16793
20+ high-performance LLMs with recipes to pretrain, finetune and deploy at scale.
Tools for understanding how transformer predictions are built layer-by-layer
A framework for few-shot evaluation of language models.
Evaluation framework for your Retrieval Augmented Generation (RAG) pipelines
Bayesian low-rank adaptation for large language models
Modeling, training, eval, and inference code for OLMo
🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
OpenCompass is an LLM evaluation platform, supporting a wide range of models (Llama3, Mistral, InternLM2, GPT-4, LLaMA2, Qwen, GLM, Claude, etc.) over 100+ datasets.
A high-throughput and memory-efficient inference and serving engine for LLMs
PyTorch library for model calibration metrics and visualizations, as well as recalibration methods. In progress!
A native-PyTorch library for LLM fine-tuning.
The official repo of Qwen (通义千问), the chat and pretrained large language model proposed by Alibaba Cloud.
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.