Code for "Language Model Knowledge Distillation for Efficient Question Answering in Spanish" (ICLR 2024 Tiny Papers)
Denoising Diffusion Step-aware Models (ICLR 2024)
[ECCV 2020 Oral] MutualNet: Adaptive ConvNet via Mutual Learning from Network Width and Resolution
This repository contains the code for the paper "TaylorShift: Shifting the Complexity of Self-Attention from Squared to Linear (and Back) using Taylor-Softmax"
[Awesome] Efficient-Deep-Learning
My learning record for "TinyML and Efficient Deep Learning Computing"
Official Website for the Workshop on Advancing Neural Networks Training: Computational Efficiency, Scalability, and Resource Optimization (WANT@ICML 2024, WANT@NeurIPS 2023)
Code for reproducing the results in the NNCodec ICML Workshop paper, plus a demo prepared for the Neural Compression Workshop (NCW).
[ICML 2024] CrossGET: Cross-Guided Ensemble of Tokens for Accelerating Vision-Language Transformers.
Recent Advances on Efficient Vision Transformers
[ICLR 2022] "Peek-a-Boo: What (More) is Disguised in a Randomly Weighted Neural Network, and How to Find It Efficiently", by Xiaohan Chen, Jason Zhang and Zhangyang Wang.
[Preprint] Why is the State of Neural Network Pruning so Confusing? On the Fairness, Comparison Setup, and Trainability in Network Pruning
Official PyTorch training code for "Accelerating Deep Neural Networks via Semi-Structured Activation Sparsity" (ICCV 2023 RCV)
A generic code base for neural network pruning, especially for pruning at initialization.
[ECCV 2024 Workshop] Famba-V: Fast Vision Mamba with Cross-Layer Token Fusion
📚 Collection of awesome diffusion acceleration resources.
[ICML'24 Oral] APT: Adaptive Pruning and Tuning Pretrained Language Models for Efficient Training and Inference
Code repository for the paper "Exploiting Redundancy: Separable Group Convolutional Networks on Lie Groups" (https://proceedings.mlr.press/v162/knigge22a.html)
Official PyTorch implementation of "Meta-prediction Model for Distillation-Aware NAS on Unseen Datasets" (ICLR 2023 notable top 25%)
[ICLR'23] Trainability Preserving Neural Pruning (PyTorch)