Stars
Implementation of Muse: Text-to-Image Generation via Masked Generative Transformers, in PyTorch
Dataset Quantization with Active Learning based Adaptive Sampling [ECCV 2024]
📰 Must-read papers and blogs on LLM based Long Context Modeling 🔥
A curated list for Efficient Large Language Models
A curated list of awesome papers on dataset distillation and related applications.
[CVPR2024] Efficient Dataset Distillation via Minimax Diffusion
List of papers related to neural network quantization in recent AI conferences and journals.
Reorder-based post-training quantization for large language models
PB-LLM: Partially Binarized Large Language Models
Official PyTorch implementation of Which Tokens to Use? Investigating Token Reduction in Vision Transformers presented at ICCV 2023 NIVT workshop
MiniSora: A community that aims to explore the implementation path and future development direction of Sora.
QuEST: Efficient Finetuning for Low-bit Diffusion Models
[ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models
[MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration
[ICCV 2019] TSM: Temporal Shift Module for Efficient Video Understanding
A novel approach for parameter generation, named neural network parameter diffusion (p-diff), which employs a standard latent diffusion model to synthesize a new set of network parameters.
Implementation of Post-training Quantization on Diffusion Models (CVPR 2023)
[CVPR2024] DisCo: Referring Human Dance Generation in Real World
[CVPR 2024] MagicAnimate: Temporally Consistent Human Image Animation using Diffusion Model
Official PyTorch Implementation of "Scalable Diffusion Models with Transformers"