Stars
🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
Test various transformer variants (xformers) under tightly controlled variables to explore the limits of transformers.
Official code repository of the paper Linear Transformers Are Secretly Fast Weight Programmers.
Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention
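The two entries above describe the same mechanism from two angles: replace the softmax in attention with a kernel feature map φ, and causal attention becomes a recurrence over an additive "fast weight" matrix. Below is a minimal PyTorch sketch, assuming φ(x) = elu(x) + 1 as in the "Transformers are RNNs" paper; the fast-weight-programmer paper replaces the plain additive update with a delta rule, and Performer, further down this list, swaps in a random-feature φ.

```python
import torch
import torch.nn.functional as F

def linear_attention_recurrent(q, k, v):
    """Causal linear attention as an RNN over a fast-weight state.

    q, k, v: (seq_len, dim). phi(x) = elu(x) + 1 keeps features positive.
    S accumulates outer products phi(k) v^T; z accumulates phi(k).
    """
    phi = lambda x: F.elu(x) + 1.0
    q, k = phi(q), phi(k)
    S = torch.zeros(q.shape[-1], v.shape[-1])   # fast-weight matrix
    z = torch.zeros(q.shape[-1])                # normalizer accumulator
    outs = []
    for t in range(q.shape[0]):
        S = S + torch.outer(k[t], v[t])              # "write" to fast weights
        z = z + k[t]
        outs.append((q[t] @ S) / (q[t] @ z + 1e-6))  # "read" with the query
    return torch.stack(outs)
```

The fast-weight-programmer reading is that each step writes the outer product k vᵀ into S and the query reads it back, which is why inference is O(1) in sequence length per step.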
RWKV is an RNN with transformer-level LLM performance. It can be trained directly like a GPT (parallelizable), combining the best of RNNs and transformers: great performance, fast inference,…
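The "RNN with transformer-level performance" claim rests on RWKV's time-mixing (WKV) operator having both a parallel form for training and a recurrent form for inference. A numerically naive sketch of the recurrent form for a single channel, following the RWKV-4 formulation (w is the per-channel decay, u the bonus for the current token; real implementations carry a running maximum for stability):

```python
import torch

def wkv_recurrent(w, u, k, v):
    """Simplified recurrent WKV for one channel (no stability tricks).

    w, u: scalar tensors (decay rate > 0 and current-token bonus).
    k, v: (seq_len,) key and value streams for this channel.
    """
    decay = torch.exp(-w)
    a = torch.zeros(())  # decayed sum of exp(k_i) * v_i
    b = torch.zeros(())  # decayed sum of exp(k_i)
    outs = []
    for t in range(k.shape[0]):
        bonus = torch.exp(u + k[t])
        outs.append((a + bonus * v[t]) / (b + bonus))
        a = decay * a + torch.exp(k[t]) * v[t]
        b = decay * b + torch.exp(k[t])
    return torch.stack(outs)
```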
Repository containing code for blockwise SSL training
Code release of paper Debiased Self-Training for Semi-Supervised Learning (NeurIPS 2022 Oral)
README and scripts for the Cityscapes Dataset
Temporally Distributed Networks for Fast Video Semantic Segmentation
Download the Cityscapes dataset using this script
OpenMMLab Semantic Segmentation Toolbox and Benchmark.
My continuously updated Machine Learning, Probabilistic Models and Deep Learning notes and demos (2000+ slides), with video links
This is the official repository for our recent work: PIDNet
An implementation of Performer, a linear attention-based transformer, in PyTorch
[NeurIPS 2022 Spotlight] GMMSeg: Gaussian Mixture based Generative Semantic Segmentation Models
Zhejiang University Graduation Thesis LaTeX Template
Implementation of the Transformer variant proposed in "Transformer Quality in Linear Time"
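"Transformer Quality in Linear Time" builds its linear-time model (FLASH) out of a gated attention unit (GAU): a single weak attention head with relu² scoring, combined with a multiplicative gate. A minimal sketch of the quadratic GAU only, with the paper's relative position bias and linear-time chunking omitted and constants chosen illustratively:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedAttentionUnit(nn.Module):
    """Minimal single-head gated attention unit in the spirit of FLASH.

    dim: model width, e: expanded width, s: small shared q/k width.
    """
    def __init__(self, dim, e=None, s=128):
        super().__init__()
        e = e or 2 * dim
        self.s = s
        self.to_uv = nn.Linear(dim, 2 * e)
        self.to_z = nn.Linear(dim, s)
        # cheap per-dim scale/offset turn the shared z into q and k
        self.gamma = nn.Parameter(torch.ones(2, s))
        self.beta = nn.Parameter(torch.zeros(2, s))
        self.to_out = nn.Linear(e, dim)

    def forward(self, x):                       # x: (batch, n, dim)
        n = x.shape[1]
        u, v = F.silu(self.to_uv(x)).chunk(2, dim=-1)
        z = F.silu(self.to_z(x))
        q = z * self.gamma[0] + self.beta[0]
        k = z * self.gamma[1] + self.beta[1]
        attn = F.relu(q @ k.transpose(-2, -1) / self.s ** 0.5) ** 2 / n
        return self.to_out(u * (attn @ v))      # gated output
```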
Official PyTorch implementation of 'Visual Recognition with Deep Nearest Centroids' (ICLR 2023 Spotlight)
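Nearest-centroid classification replaces the learned linear classifier head with distances to per-class centroids in feature space. A minimal sketch of the scoring step; the paper additionally builds and updates the centroids from training-set sub-centroids, which is omitted here:

```python
import torch

def nearest_centroid_logits(features, centroids):
    """Score classes by negative squared distance to their centroids.

    features: (batch, d) embeddings; centroids: (num_classes, d),
    here simply given rather than maintained during training.
    """
    return -torch.cdist(features, centroids) ** 2

# classify 4 example embeddings against 10 class centroids
pred = nearest_centroid_logits(torch.randn(4, 64), torch.randn(10, 64)).argmax(-1)
```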
Official repository for the paper "Going Beyond Linear Transformers with Recurrent Fast Weight Programmers" (NeurIPS 2021)
Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities
Efficient 3D Backbone Network for Temporal Modeling
A collection of resources on the explainability/interpretability of deep neural networks.
Code for the paper "Query-Key Normalization for Transformers"
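Query-key normalization L2-normalizes queries and keys along the head dimension, so the attention logits are bounded cosine similarities, and replaces the usual 1/√d temperature with a learnable scale. A minimal sketch:

```python
import torch
import torch.nn.functional as F

def qk_norm_attention(q, k, v, g):
    """Attention with query-key normalization (QKNorm).

    q, k, v: (..., seq, head_dim). Queries and keys are L2-normalized
    along the head dimension, so logits lie in [-g, g].
    """
    q = F.normalize(q, dim=-1)
    k = F.normalize(k, dim=-1)
    attn = torch.softmax(g * (q @ k.transpose(-2, -1)), dim=-1)
    return attn @ v

# g is typically learnable, e.g. torch.nn.Parameter(torch.tensor(10.0))
```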