Stars
Survey Paper List - Efficient LLM and Foundation Models
TensorRT Model Optimizer is a unified library of state-of-the-art model optimization techniques such as quantization, pruning, distillation, etc. It compresses deep learning models for downstream d…
Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"
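The core idea behind LoRA is small enough to sketch: a frozen pretrained weight W gets a trainable low-rank update scaled by alpha/r. A minimal NumPy illustration (not the loralib API; names and shapes here are illustrative assumptions):

```python
import numpy as np

# Minimal sketch of the LoRA idea: W_eff = W + (alpha / r) * B @ A,
# where only the small factors A and B would be trained during fine-tuning.
rng = np.random.default_rng(0)

d_out, d_in, r, alpha = 8, 16, 4, 8
W = rng.standard_normal((d_out, d_in))     # frozen pretrained weight
A = 0.01 * rng.standard_normal((r, d_in))  # trainable, small random init
B = np.zeros((d_out, r))                   # trainable, zero init

def lora_forward(x):
    # x: (batch, d_in). With B = 0 this matches the frozen layer exactly,
    # so fine-tuning starts from the pretrained behavior.
    return x @ (W + (alpha / r) * (B @ A)).T

x = rng.standard_normal((2, d_in))
y = lora_forward(x)
```

Because B starts at zero, the adapted layer initially reproduces the frozen one; the update B @ A has rank at most r, so it adds only r * (d_in + d_out) trainable parameters instead of d_in * d_out.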
A framework for few-shot evaluation of language models.
[CVPR 2024 Highlight] DistriFusion: Distributed Parallel Inference for High-Resolution Diffusion Models
[MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration
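The "activation-aware" part of AWQ can be sketched in a few lines: rescale salient input channels (those with large activation magnitude) before round-to-nearest quantization, then fold the inverse scale back in. This is a toy NumPy sketch of that idea only; the scale search, grouping, and INT4 kernels in the actual repo are far more involved:

```python
import numpy as np

# Toy activation-aware scaling before symmetric INT4 quantization.
rng = np.random.default_rng(1)
W = rng.standard_normal((8, 16))                       # weight, shape (out, in)
ch_mag = np.where(np.arange(16) < 2, 4.0, 0.5)         # two "salient" channels
X = rng.standard_normal((32, 16)) * ch_mag             # calibration activations

def quantize_int4(w):
    # Symmetric per-tensor round-to-nearest onto the 4-bit grid [-8, 7].
    scale = np.abs(w).max() / 7.0
    return np.clip(np.round(w / scale), -8, 7) * scale

# Per-input-channel importance from activation statistics; scale weights up
# on salient channels before quantizing, divide the scale back out after.
s = np.maximum(np.sqrt(np.abs(X).mean(axis=0)), 1e-5)
Wq_plain = quantize_int4(W)
Wq_awq = quantize_int4(W * s) / s                      # dequantized effective weight

err_plain = np.abs(X @ W.T - X @ Wq_plain.T).mean()
err_awq = np.abs(X @ W.T - X @ Wq_awq.T).mean()
```

Whether a given scaling helps depends on the activation statistics; AWQ searches for the scales on calibration data rather than using a fixed heuristic like the sqrt-of-mean above.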
A curriculum for learning about foundation models, from scratch to the frontier
LAVIS - A One-stop Library for Language-Vision Intelligence
PyTorch code for BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation
Look-into-Object: Self-supervised Structure Modeling for Object Recognition (CVPR 2020)
Transformer-related optimization, including BERT and GPT

A comprehensive paper list on Vision Transformers/attention, including papers, code, and related websites
OpenSTL: A Comprehensive Benchmark of Spatio-Temporal Predictive Learning
A minimal PyTorch re-implementation of the OpenAI GPT (Generative Pretrained Transformer) training
🦦 Otter, a multi-modal model based on OpenFlamingo (open-sourced version of DeepMind's Flamingo), trained on MIMIC-IT and showcasing improved instruction-following and in-context learning ability.
Spring 2023 NYCU (prev. NCTU) Integrated Circuit Design Laboratory (ICLab)
This repository implements the variational graph auto-encoder (VGAE) by Thomas Kipf.
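The VGAE forward pass is compact enough to sketch: a GCN-style encoder maps node features to per-node mu and logvar, a latent Z is sampled via the reparameterization trick, and an inner-product decoder reconstructs the adjacency as sigmoid(Z Zᵀ). A NumPy sketch under those assumptions (not Kipf's code; single linear GCN layer, toy path graph):

```python
import numpy as np

# Toy VGAE forward pass: GCN encoder -> (mu, logvar) -> sample Z -> sigmoid(Z Z^T).
rng = np.random.default_rng(0)

n, f, d = 5, 3, 2                                   # nodes, features, latent dim
A = np.eye(n) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
X = rng.standard_normal((n, f))                     # node features

deg = A.sum(axis=1)
A_hat = A / np.sqrt(np.outer(deg, deg))             # symmetric normalization

W_mu = 0.1 * rng.standard_normal((f, d))            # small weights keep the
W_logvar = 0.1 * rng.standard_normal((f, d))        # toy values well-behaved
mu = A_hat @ X @ W_mu                               # one-layer GCN encoder
logvar = A_hat @ X @ W_logvar

z = mu + np.exp(0.5 * logvar) * rng.standard_normal((n, d))  # reparameterization
A_rec = 1.0 / (1.0 + np.exp(-(z @ z.T)))            # inner-product decoder
```

Training would maximize the ELBO: binary cross-entropy between A_rec and A, minus the KL divergence of N(mu, exp(logvar)) from the standard normal prior.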
Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities
RISC-V CPU with 5-stage pipeline, implemented in Verilog HDL.