- New York, NY
- ekbuchanan.com
- @ekellbuch
Starred repositories
- A MAD laboratory to improve AI architecture designs 🧪
- An Extensible Toolkit for Finetuning and Inference of Large Foundation Models. Large Models for All.
- Robust Speech Recognition via Large-Scale Weak Supervision
- JAX implementation of OpenAI's Whisper model for up to 70x speed-up on TPU.
- Hackers' Guide to Language Models
- Multimodal language model benchmark, featuring challenging examples
- Official codebase used to develop Vision Transformer, SigLIP, MLP-Mixer, LiT and more.
- The simplest, fastest repository for training/finetuning medium-sized GPTs.
- Open-Sora: Democratizing Efficient Video Production for All
- The official implementation of Autoregressive Image Generation using Residual Quantization (CVPR '22)
- Official codebase for Decision Transformer: Reinforcement Learning via Sequence Modeling.
- Course to get into Large Language Models (LLMs) with roadmaps and Colab notebooks.
- A living collection of deep learning problems
- SAM with text prompt
- The repository provides code for running inference with the Segment Anything Model (SAM), links for downloading the trained model checkpoints, and example notebooks that show how to use the model.
- A resource repository for 3D machine learning
- A playbook for systematically maximizing the performance of deep learning models.
- An open-source project dedicated to tracking and segmenting any objects in videos, either automatically or interactively. The primary algorithms utilized include the Segment Anything Model (SAM) fo…
- Code for the paper "ViperGPT: Visual Inference via Python Execution for Reasoning"
- Evaluating ensemble performance in long-tailed datasets (NeurIPS 2023 Heavy Tails Workshop)
- DSPy: The framework for programming, rather than prompting, foundation models
- 20+ high-performance LLMs with recipes to pretrain, finetune and deploy at scale.
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
- Canonical Factors for Hybrid Neural Fields @ ICCV 2023
- Code for CRATE (Coding RAte reduction TransformEr).
- Implementing a ChatGPT-like LLM in PyTorch from scratch, step by step