Stars
Website for teaching/writing/making tutorials for machine learning.
Train high-quality text-to-image diffusion models in a data & compute efficient manner
Implementing a ChatGPT-like LLM in PyTorch from scratch, step by step
Grounded SAM 2: Ground and Track Anything in Videos with Grounding DINO and SAM 2
A library that includes Keras 3 preprocessing and augmentation layers, providing support for various data types such as images, labels, bounding boxes, segmentation masks, and more.
Utilities intended for use with Llama models.
The repository provides code for running inference with the Meta Segment Anything Model 2 (SAM 2), links for downloading the trained model checkpoints, and example notebooks that show how to use th…
The fastest way to create an HTML app
Generative Models by Stability AI
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"
Recipes for shrinking, optimizing, customizing cutting edge vision models.
Quick exploration into fine-tuning Florence-2
Fast lexical search library implementing BM25 in Python using NumPy and SciPy
Train transformer language models with reinforcement learning.
GitHub Actions for GitHub Pages 🚀 Deploy static files and publish your site easily. Static-Site-Generators-friendly.
Train huggingface models on top of Prodigy annotations
Radically lightweight command-line interfaces
DenseFusion-1M: Merging Vision Experts for Comprehensive Multimodal Perception
This repository is an official implementation of the paper "LW-DETR: A Transformer Replacement to YOLO for Real-Time Detection".
Supporting PyTorch models with the Google AI Edge TFLite runtime.
MobileLLM: Optimizing Sub-billion Parameter Language Models for On-Device Use Cases. In ICML 2024.
MINT-1T: A one trillion token multimodal interleaved dataset.
Code release for "Segment Anything without Supervision"
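Several of the stars above (loralib, 🤗 PEFT, TRL) revolve around LoRA-style parameter-efficient fine-tuning. A minimal NumPy sketch of the core idea from the "LoRA: Low-Rank Adaptation of Large Language Models" paper — not the loralib or PEFT implementation, just the math: a frozen weight `W` plus a trainable low-rank update scaled by `alpha / r`. All variable names here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 64, 64, 4, 8  # r << d: only r*(d_in + d_out) trainable params

W = rng.normal(size=(d_out, d_in))           # frozen pretrained weight
A = rng.normal(scale=0.01, size=(r, d_in))   # trainable low-rank factor
B = np.zeros((d_out, r))                     # trainable; zero-init so the
                                             # update starts as a no-op

def lora_forward(x):
    """y = W x + (alpha / r) * B A x  — base output plus low-rank correction."""
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=(d_in,))
# With B initialised to zero, the adapted model matches the frozen model exactly.
assert np.allclose(lora_forward(x), W @ x)
# The trainable update is far smaller than the frozen weight it adapts.
assert A.size + B.size < W.size
```

Libraries like loralib and PEFT wrap this pattern into drop-in layer replacements and handle merging `B @ A` back into `W` after training so inference costs nothing extra.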