Stars
Learn how to design large-scale systems. Prep for the system design interview. Includes Anki flashcards.
All Algorithms implemented in Python
AutoGPT is the vision of accessible AI for everyone, to use and to build on. Our mission is to provide the tools, so that you can focus on what matters.
Robust Speech Recognition via Large-Scale Weak Supervision
Interact with your documents using the power of GPT, 100% privately, no data leaks
A collection of learning resources for curious software engineers
High-Resolution Image Synthesis with Latent Diffusion Models
The simplest, fastest repository for training/finetuning medium-sized GPTs.
🐸💬 - a deep learning toolkit for Text-to-Speech, battle-tested in research and production
The largest collection of PyTorch image encoders / backbones. Including train, eval, inference, export scripts, and pretrained weights -- ResNet, ResNeXT, EfficientNet, NFNet, Vision Transformer (V…
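A minimal sketch of how the timm API is typically used; the model name below is just an example:

```python
# Load a pretrained backbone from timm and run a forward pass on a dummy image.
import timm
import torch

model = timm.create_model("resnet50", pretrained=True)  # example model name
model.eval()

x = torch.randn(1, 3, 224, 224)  # dummy image batch
with torch.no_grad():
    logits = model(x)
print(logits.shape)  # torch.Size([1, 1000]) for an ImageNet-1k head
```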
Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more
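A small sketch of what "composable transformations" means in practice, combining jax.grad, jax.vmap, and jax.jit on a toy function:

```python
# Differentiate, vectorize, and JIT-compile a simple NumPy-style function with JAX.
import jax
import jax.numpy as jnp

def loss(w, x):
    return jnp.sum((x @ w) ** 2)

grad_loss = jax.grad(loss)                        # differentiate with respect to w
batched = jax.vmap(grad_loss, in_axes=(None, 0))  # vectorize over a batch of x
fast = jax.jit(batched)                           # JIT-compile for CPU/GPU/TPU

w = jnp.ones((3,))
xs = jnp.ones((8, 3))
print(fast(w, xs).shape)  # (8, 3)
```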
PyTorch Tutorial for Deep Learning Researchers
A high-throughput and memory-efficient inference and serving engine for LLMs
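A hedged offline-inference sketch with vLLM; the model id and sampling settings are placeholders:

```python
# Generate text offline with vLLM's high-throughput engine.
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")  # any Hugging Face-compatible causal LM id
params = SamplingParams(temperature=0.8, max_tokens=64)

outputs = llm.generate(["The capital of France is"], params)
for out in outputs:
    print(out.outputs[0].text)
```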
Pretrain, finetune ANY AI model of ANY size on multiple GPUs, TPUs with zero code changes.
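A rough sketch of the pattern this refers to, assuming Lightning 2.x: wrap the model in a LightningModule and let the Trainer pick devices and precision:

```python
# Minimal LightningModule + Trainer example on synthetic data.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
import lightning as L

class TinyRegressor(L.LightningModule):
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(10, 1)

    def training_step(self, batch, batch_idx):
        x, y = batch
        return nn.functional.mse_loss(self.net(x), y)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)

data = DataLoader(TensorDataset(torch.randn(64, 10), torch.randn(64, 1)), batch_size=16)
trainer = L.Trainer(max_epochs=1, accelerator="auto", devices="auto")
trainer.fit(TinyRegressor(), data)
```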
🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch and FLAX.
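A minimal text-to-image sketch with Diffusers; the checkpoint id and the CUDA device are assumptions:

```python
# Load a pretrained Stable Diffusion pipeline and generate one image from a prompt.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # example checkpoint id
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # assumes a CUDA GPU is available

image = pipe("a photograph of an astronaut riding a horse").images[0]
image.save("astronaut.png")
```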
The lean application framework for Python. Build sophisticated user interfaces with a simple Python API. Run your apps in the terminal and a web browser.
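A tiny Textual app sketch illustrating the simple Python API the description mentions:

```python
# A single-widget Textual app that runs in the terminal.
from textual.app import App, ComposeResult
from textual.widgets import Static

class HelloApp(App):
    def compose(self) -> ComposeResult:
        yield Static("Hello from Textual")

if __name__ == "__main__":
    HelloApp().run()
```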
We write your reusable computer vision tools.
Image-to-Image Translation in PyTorch
The official Python library for the OpenAI API
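A minimal usage sketch of the client; the model name is a placeholder and an OPENAI_API_KEY environment variable is assumed to be set:

```python
# Send a single chat completion request with the official OpenAI Python client.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)
```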
A set of examples around pytorch in Vision, Text, Reinforcement Learning, etc.
Audiocraft is a library for audio processing and generation with deep learning. It features the state-of-the-art EnCodec audio compressor / tokenizer, along with MusicGen, a simple and controllable…
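A hedged MusicGen sketch, assuming the audiocraft package and the facebook/musicgen-small checkpoint:

```python
# Generate a short music clip from a text description with MusicGen.
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

model = MusicGen.get_pretrained("facebook/musicgen-small")  # example checkpoint
model.set_generation_params(duration=8)  # seconds of audio to generate

wav = model.generate(["upbeat acoustic guitar"])  # batch of waveforms
audio_write("musicgen_sample", wav[0].cpu(), model.sample_rate, strategy="loudness")
```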
Implementation of Vision Transformer, a simple way to achieve SOTA in vision classification with only a single transformer encoder, in Pytorch
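A short sketch following the repo's constructor-style API; the hyperparameters below are illustrative only:

```python
# Build a small Vision Transformer with vit-pytorch and classify a dummy image.
import torch
from vit_pytorch import ViT

v = ViT(
    image_size=256,
    patch_size=32,
    num_classes=1000,
    dim=1024,
    depth=6,
    heads=16,
    mlp_dim=2048,
    dropout=0.1,
    emb_dropout=0.1,
)

img = torch.randn(1, 3, 256, 256)
preds = v(img)  # (1, 1000) class logits
```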
A minimal PyTorch re-implementation of the OpenAI GPT (Generative Pretrained Transformer) training
[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
Universal LLM Deployment Engine with ML Compilation
DSPy: The framework for programming, not prompting, foundation models