Monash University
Clayton, Victoria, 3168
Starred repositories
🧑‍🏫 60+ Implementations/tutorials of deep learning papers with side-by-side notes 📝; including transformers (original, xl, switch, feedback, vit, ...), optimizers (adam, adabelief, sophia, ...), ga…
YOLOv5 🚀 in PyTorch > ONNX > CoreML > TFLite
Pretrain, finetune ANY AI model of ANY size on multiple GPUs, TPUs with zero code changes.
OCR software, open source, free, and offline. Supports screenshot and batch image import, PDF document recognition, watermark and header/footer exclusion, and QR code scanning/generation. Includes built-in language packs for multiple languages.
You like pytorch? You like micrograd? You love tinygrad! ❤️
Interactive deep learning book with multi-framework code, math, and discussions. Adopted at 500 universities from 70 countries including Stanford, MIT, Harvard, and Cambridge.
Implementation of Vision Transformer, a simple way to achieve SOTA in vision classification with only a single transformer encoder, in PyTorch
[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
A minimal PyTorch re-implementation of the OpenAI GPT (Generative Pretrained Transformer) training
Universal LLM Deployment Engine with ML Compilation
Image augmentation for machine learning experiments.
Open deep learning compiler stack for cpu, gpu and specialized accelerators
[ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters
OpenMMLab Pre-training Toolbox and Benchmark
Summary of related papers on visual attention. Related code, based on Jittor, will be released gradually.
Repository for the main Dockerfile with the OpenWorm software stack and project-wide issues
EVA Series: Visual Representation Fantasies from BAAI
[ECCV2024] API code for T-Rex2: Towards Generic Object Detection via Text-Visual Prompt Synergy
PyTorch pre-trained model for real-time interest point detection, description, and sparse tracking (https://arxiv.org/abs/1712.07629)
OpenMMLab Rotated Object Detection Toolbox and Benchmark
Project Page for "LISA: Reasoning Segmentation via Large Language Model"
Reference implementations of MLPerf™ training benchmarks
Olive: Simplify ML Model Finetuning, Conversion, Quantization, and Optimization for CPUs, GPUs and NPUs.
The implementation of "Prismer: A Vision-Language Model with Multi-Task Experts".
A toy implementation of monocular SLAM written while livestreaming