Stars
Maelstrom is a fast Rust, Go, and Python test runner that runs every test in its own container. Tests are either run locally or distributed to a clustered job runner.
Simplex Random Feature attention, in PyTorch
Fast lexical search library implementing BM25 in Python using Numpy and Scipy
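Since the entry above names BM25, here is a minimal from-scratch sketch of the BM25 scoring formula using NumPy. The function name and tokenized-list interface are hypothetical illustrations, not this library's actual API.

```python
import math
import numpy as np

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Score each document against a tokenized query with BM25.

    query: list of tokens; docs: list of token lists.
    Hypothetical helper for illustration, not the library's API.
    """
    N = len(docs)
    doc_lens = np.array([len(d) for d in docs], dtype=float)
    avgdl = doc_lens.mean()
    scores = np.zeros(N)
    for term in query:
        # term frequency of this query term in each document
        tf = np.array([d.count(term) for d in docs], dtype=float)
        df = float((tf > 0).sum())  # number of documents containing the term
        idf = math.log((N - df + 0.5) / (df + 0.5) + 1.0)
        # length-normalized saturation of term frequency
        denom = tf + k1 * (1.0 - b + b * doc_lens / avgdl)
        scores += idf * tf * (k1 + 1.0) / denom
    return scores

docs = [["fast", "lexical", "search"],
        ["neural", "search", "index"],
        ["fast", "fast", "numpy"]]
scores = bm25_scores(["fast", "search"], docs)
print(scores)  # the first document matches both query terms and scores highest
```

The `k1` and `b` defaults (1.5 and 0.75) are the commonly cited starting values: `k1` controls term-frequency saturation and `b` controls document-length normalization.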
Repository for Meta Chameleon, a mixed-modal early-fusion foundation model from FAIR.
An event broker with a focus on low operational cost
A novel human-interaction method for real-time speech extraction on headphones.
A flexible and efficient codebase for training visually-conditioned language models (VLMs)
My attempts at applying the SoundStream design to learned tokenization of text, then applying hierarchical attention to text generation
HazyResearch / nanoGPT-TK
Forked from karpathy/nanoGPT. The simplest, fastest repository for training/finetuning medium-sized GPTs. Now, with kittens!
This is the code for SpeechTokenizer, presented in "SpeechTokenizer: Unified Speech Tokenizer for Speech Language Models". Samples are presented on
SGLang is yet another fast serving framework for large language models and vision language models.
DeepSeek-V2: A Strong, Economical, and Efficient Mixture-of-Experts Language Model
A native PyTorch Library for large model training
🐚 Hermit manages isolated, self-bootstrapping sets of tools in software projects.
Tensors and Dynamic neural networks in Python with strong GPU acceleration
A Native-PyTorch Library for LLM Fine-tuning
Multi-Scale Neural Audio Codec (SNAC) compresses audio into discrete codes at a low bitrate
Schedule-Free Optimization in PyTorch
A ggml (C++) re-implementation of tortoise-tts
Make PyTorch models up to 40% faster! Thunder is a source-to-source compiler for PyTorch. It enables using different hardware executors at once, across one or thousands of GPUs.
MinHash, LSH, LSH Forest, Weighted MinHash, HyperLogLog, HyperLogLog++, LSH Ensemble and HNSW
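The entry above lists MinHash among the sketching structures; a from-scratch sketch of the idea follows (it estimates Jaccard similarity as the fraction of matching per-permutation minima). This is an illustration of the technique, not the listed library's API; the class name and hash scheme are assumptions.

```python
import random

class MinHash:
    """Minimal MinHash sketch for Jaccard similarity estimation.

    A from-scratch illustration of the technique, not the library's API.
    """
    def __init__(self, num_perm=128, seed=1):
        rng = random.Random(seed)
        # one (a, b) pair per simulated permutation: h(x) = (a*x + b) % p
        self._p = (1 << 61) - 1  # a large Mersenne prime
        self._params = [(rng.randrange(1, self._p), rng.randrange(self._p))
                        for _ in range(num_perm)]
        self.values = [self._p] * num_perm  # running minimum per permutation

    def update(self, token: str):
        x = hash(token) & 0xFFFFFFFF
        for i, (a, b) in enumerate(self._params):
            hv = (a * x + b) % self._p
            if hv < self.values[i]:
                self.values[i] = hv

    def jaccard(self, other) -> float:
        # fraction of matching minima is an unbiased Jaccard estimate
        eq = sum(u == v for u, v in zip(self.values, other.values))
        return eq / len(self.values)

m1, m2 = MinHash(), MinHash()
for t in "a b c d e f g h".split():
    m1.update(t)
for t in "a b c d e f x y".split():
    m2.update(t)
print(m1.jaccard(m2))  # estimate near the true Jaccard of 6/10 = 0.6
```

More permutations tighten the estimate (standard error is roughly `sqrt(j*(1-j)/num_perm)`), which is the usual accuracy/space trade-off for these sketches.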
[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.