PyTorch native quantization and sparsity for training and inference
Generalist and lightweight model for Named Entity Recognition (extracts any entity type from text) @ NAACL 2024
Rust-native, ready-to-use NLP pipelines and transformer-based models (BERT, DistilBERT, GPT-2, ...)
An Easy-to-use, Scalable and High-performance RLHF Framework (70B+ PPO Full Tuning & Iterative DPO & LoRA & Mixtral)
A library for efficient similarity search and clustering of dense vectors.
An efficient, flexible, and full-featured toolkit for fine-tuning LLMs (InternLM2, Llama3, Phi3, Qwen, Mistral, ...)
A Unified Library for Parameter-Efficient and Modular Transfer Learning
Distribute and run LLMs with a single file.
Distilabel is a framework for synthetic data generation and AI feedback, built for engineers who need fast, reliable, and scalable pipelines grounded in verified research papers.
A Native-PyTorch Library for LLM Fine-tuning
Implementing a ChatGPT-like LLM in PyTorch from scratch, step by step
Kickstart your MLOps initiative with a flexible, robust, and productive Python package.
Gorilla: Training and Evaluating LLMs for Function Calls (Tool Calls)
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
AutoGPT is the vision of accessible AI for everyone, to use and to build on. Our mission is to provide the tools so that you can focus on what matters.
Python client for Qdrant vector search engine
Robust recipes to align language models with human and AI preferences
the AI-native open-source embedding database
Libraries for applying sparsification recipes to neural networks with a few lines of code, enabling faster and smaller models
Pretrain, finetune and serve LLMs on Intel platforms with Ray
Train transformer language models with reinforcement learning.
AirLLM: 70B-model inference on a single 4GB GPU
Tools for merging pretrained large language models.
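Several entries above (e.g. the faiss and Qdrant items) revolve around similarity search over dense vectors. As a minimal sketch of the operation these libraries accelerate, here is a brute-force L2 nearest-neighbor search in plain Python; the vectors and query are made up for the example, and real libraries replace this linear scan with optimized indexes:

```python
import math

def l2_sq(a, b):
    # Squared Euclidean (L2) distance between two vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b))

def search(db, query, k):
    # Return the indices of the k database vectors closest to the query,
    # ordered from nearest to farthest (brute-force linear scan).
    ranked = sorted(range(len(db)), key=lambda i: l2_sq(db[i], query))
    return ranked[:k]

# Toy 2-D "embedding" database.
db = [[0.0, 0.0], [1.0, 1.0], [0.1, 0.0], [5.0, 5.0]]
print(search(db, [0.0, 0.1], k=2))  # indices of the two nearest vectors
```

Libraries such as faiss perform the same query in sublinear time using quantization and graph- or cluster-based indexes, which is what makes them practical at millions or billions of vectors.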