This repository contains the source code of our paper, ESPNetv2.
Compared to state-of-the-art efficient networks, ESPNetv2 delivers competitive accuracy while being much more power efficient. Sample results are shown in the figures below. For more details, please read our paper.
FLOPs vs. accuracy on the ImageNet dataset | Power consumption on the TX2 device
If you find our project useful in your research, please consider citing:
@article{mehta2018espnetv2,
title={ESPNetv2: A Light-weight, Power Efficient, and General Purpose Convolutional Neural Network},
author={Sachin Mehta and Mohammad Rastegari and Linda Shapiro and Hannaneh Hajishirzi},
journal={arXiv preprint arXiv:1811.11431},
year={2018}
}
@inproceedings{mehta2018espnet,
title={ESPNet: Efficient Spatial Pyramid of Dilated Convolutions for Semantic Segmentation},
author={Sachin Mehta and Mohammad Rastegari and Anat Caspi and Linda Shapiro and Hannaneh Hajishirzi},
booktitle={ECCV},
year={2018}
}
This repository contains source code and pre-trained models for the following:
- Object classification: We provide source code along with pre-trained models at different network complexities for the ImageNet dataset. Click here for more details.
- Semantic segmentation: We provide source code along with pre-trained models on the Cityscapes dataset. Check here for more details.
To run this repository, you should have the following software installed:
- PyTorch - We tested with v0.4.1
- OpenCV - We tested with version 3.4.3
- Python3 - Our code is written in Python 3. We recommend using Anaconda.
Assuming that you have installed Anaconda successfully, you can follow the instructions below to install the packages:
conda install pytorch torchvision -c pytorch
Once installed, run the following commands in your terminal to verify the version:
import torch
torch.__version__
This should print something like 0.4.1.post2.
If your version is different, then follow PyTorch website here for more details.
conda install pip
pip install --upgrade pip
pip install opencv-python
Once installed, run the following commands in your terminal to verify the version:
import cv2
cv2.__version__
This should print something like 3.4.3.
The EESP unit, the core building block of the ESPNetv2 architecture, uses a for loop to process the input at different dilation rates. You can parallelize this loop using CUDA Streams in PyTorch, which improves inference speed. A snippet to parallelize a for loop in PyTorch is shown below:
# Sequential version: iterations run one after another
import torch

a = torch.randn(1, 3, 10, 10)
output = []
for i in range(4):
    output.append(a)  # placeholder for one branch's computation
merged = torch.cat(output, 1)
# Parallel version: each iteration is issued on its own CUDA stream
num_branches = 4
streams = [(idx, torch.cuda.Stream()) for idx in range(num_branches)]
a = torch.randn(1, 3, 10, 10)
output = []
for idx, s in streams:
    with torch.cuda.stream(s):
        output.append(a)  # placeholder for one branch's computation
torch.cuda.synchronize()  # wait for all streams before using the results
merged = torch.cat(output, 1)
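Applied to an EESP-style block, the same pattern runs each dilated depthwise convolution branch on its own stream. The channel sizes and dilation rates below are illustrative placeholders, not the actual ESPNetv2 configuration:

```python
import torch
import torch.nn as nn

# Illustrative EESP-style branches: depthwise convolutions at growing
# dilation rates. Sizes here are examples, not the paper's configuration.
num_branches = 4
channels = 8
branches = nn.ModuleList([
    nn.Conv2d(channels, channels, kernel_size=3, padding=d, dilation=d,
              groups=channels)  # depthwise: one filter per channel
    for d in (1, 2, 4, 8)
])

x = torch.randn(1, channels, 16, 16)

if torch.cuda.is_available():
    x = x.cuda()
    branches = branches.cuda()
    streams = [torch.cuda.Stream() for _ in range(num_branches)]
    outputs = [None] * num_branches
    for idx, (branch, s) in enumerate(zip(branches, streams)):
        with torch.cuda.stream(s):
            outputs[idx] = branch(x)
    torch.cuda.synchronize()  # wait for all branches before concatenating
else:
    outputs = [branch(x) for branch in branches]  # sequential CPU fallback

merged = torch.cat(outputs, dim=1)  # (1, channels * num_branches, 16, 16)
```

Note that `padding=d` with a 3x3 kernel and `dilation=d` keeps the spatial size unchanged, so the branch outputs can be concatenated along the channel dimension.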
Note:
- We used the above strategy to measure inference-related statistics, including power consumption and run time, on a single GPU.
- We have not tested it (for training or inference) across multiple GPUs. If you want to use Streams and run into issues, please use the PyTorch forums to resolve your queries.
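For reference, single-GPU run time can be measured with CUDA events around the stream-parallel loop. This is a generic timing sketch, not the exact script used for the paper's measurements; `time_gpu` and its arguments are our own illustrative names:

```python
import torch

def time_gpu(fn, warmup=5, iters=20):
    """Measure the mean latency (ms) of a CUDA callable using CUDA events."""
    for _ in range(warmup):       # warm-up iterations to amortize one-time costs
        fn()
    torch.cuda.synchronize()
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    start.record()
    for _ in range(iters):
        fn()
    end.record()
    torch.cuda.synchronize()      # make sure all queued work has finished
    return start.elapsed_time(end) / iters  # milliseconds per call

if torch.cuda.is_available():
    a = torch.randn(1, 3, 224, 224, device='cuda')
    print('mean latency: %.3f ms' % time_gpu(lambda: torch.relu(a)))
```

CUDA events are preferred over wall-clock timing here because GPU kernels launch asynchronously; timing on the host without synchronization would only measure the launch overhead.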