Stars
The code used in the paper "DogeRM: Equipping Reward Models with Domain Knowledge through Model Merging"
Aura is like Siri, but in your browser. An AI voice assistant optimized for low-latency responses.
Unified Efficient Fine-Tuning of 100+ LLMs (ACL 2024)
A Heterogeneous Benchmark for Information Retrieval. Easy to use; evaluate your models across 15+ diverse IR datasets.
RewardBench: the first evaluation tool for reward models.
Ongoing research on training transformer models at scale
A framework for few-shot evaluation of language models.
A framework for the evaluation of autoregressive code generation language models.
🦜🔗 Build context-aware reasoning applications
Scripts for fine-tuning Meta Llama with composable FSDP & PEFT methods to cover single/multi-node GPUs. Supports default & custom datasets for applications such as summarization and Q&A. Supporting…
Audiocraft is a library for audio processing and generation with deep learning. It features the state-of-the-art EnCodec audio compressor / tokenizer, along with MusicGen, a simple and controllable…
Foundational Models for State-of-the-Art Speech and Text Translation
Instruct-tune LLaMA on consumer hardware
Code and documentation to train Stanford's Alpaca models, and generate the data.
GPT4All: Run Local LLMs on Any Device. Open-source and available for commercial use.
An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
Tensors and Dynamic neural networks in Python with strong GPU acceleration
Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities
Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
Collaborative Collection of C++ Best Practices. This online resource is part of Jason Turner's C++ Best Practices resources. See README.md for more information.
Self-Supervised Speech Pre-training and Representation Learning Toolkit