TorchScript on NVIDIA Jetson Nano

This is a demo application that shows how to run a network using TorchScript modules exported from other platforms, together with the LibTorch C++ API.

The application has been tested with the NVIDIA Jetson Nano SDK. Performance is modest: running a MobileNetV2-based SSD network with a 300x300 input takes about 4 seconds per image.

This project is meant to be used together with pytorch-ssd, which can export the TorchScript module.
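
Loading such an exported module from C++ takes only a few lines. A minimal sketch, assuming a recent LibTorch where torch::jit::load returns a Module by value (the file name is taken from the example below; the input here is a dummy tensor):

#include <torch/script.h>

int main() {
    // Load the TorchScript module exported by pytorch-ssd.
    torch::jit::script::Module module = torch::jit::load("mb2-ssd-lite-mp-0_686.pt");
    module.eval();

    // The SSD network expects a 300x300 image in NCHW layout.
    std::vector<torch::jit::IValue> inputs{torch::rand({1, 3, 300, 300})};
    torch::jit::IValue output = module.forward(inputs);
    return 0;
}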

Prerequisites

LibTorch (the PyTorch C++ distribution; its install path is passed to CMake below)
CMake

Build

mkdir build && cd build
cmake -DCMAKE_PREFIX_PATH=<your libtorch path> ..
make
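
If you need to recreate the build files, a minimal CMakeLists.txt for a LibTorch project looks roughly like the following (the project and source file names are assumptions, not taken from this repository):

cmake_minimum_required(VERSION 3.10)
project(ts_ssd)

# find_package(Torch) is shipped with the LibTorch distribution;
# CMAKE_PREFIX_PATH must point at the unpacked libtorch directory.
find_package(Torch REQUIRED)

add_executable(ts_ssd main.cpp)
target_link_libraries(ts_ssd "${TORCH_LIBRARIES}")
set_property(TARGET ts_ssd PROPERTY CXX_STANDARD 14)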

Run inference

./ts_ssd -s mb2-ssd-lite-mp-0_686.pt -l voc-model-labels.txt -p 0.5 -i messi.jpg
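
Internally, the demo has to convert the decoded image into a 300x300 float tensor before calling forward(). A sketch of that conversion, assuming OpenCV is used for image I/O (this README does not confirm the library, and the scaling constant here is a placeholder that must match the training pipeline):

#include <opencv2/opencv.hpp>
#include <torch/script.h>

torch::Tensor to_input_tensor(const cv::Mat& bgr) {
    cv::Mat resized, rgb, f32;
    cv::resize(bgr, resized, cv::Size(300, 300));
    cv::cvtColor(resized, rgb, cv::COLOR_BGR2RGB);
    rgb.convertTo(f32, CV_32FC3, 1.0 / 255.0);  // placeholder normalization
    // HWC float image -> NCHW tensor; clone() detaches from the Mat's memory.
    torch::Tensor t = torch::from_blob(f32.data, {1, 300, 300, 3}, torch::kFloat).clone();
    return t.permute({0, 3, 1, 2}).contiguous();
}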

You should see messages like the following:

--torch-script specified with value = mb2-ssd-lite-mp-0_686.pt
--labels specified with value = voc-model-labels.txt
--probability-threshold specified with value = 0.5
--input-file specified with value = messi.jpg
cuDNN: Yes
CUDA: Yes
Loaded TorchScript mb2-ssd-lite-mp-0_686.pt
Start inferencing ...
Original image size [width, height] = [1296, 729]
Inference done in 4305 ms
Tensor  shape: {3000 21}
Tensor  shape: {3000 4}
Class index [15]: person
Tensor  shape: {2 5}
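
The {3000 21} and {3000 4} tensors are per-prior class scores and box coordinates. A sketch of how they might be filtered against the probability threshold, assuming the module returns a (scores, boxes) tuple as the shapes in the log suggest:

#include <torch/script.h>
#include <iostream>

int main() {
    torch::jit::script::Module module = torch::jit::load("mb2-ssd-lite-mp-0_686.pt");
    std::vector<torch::jit::IValue> inputs{torch::rand({1, 3, 300, 300})};

    auto outputs = module.forward(inputs).toTuple();
    torch::Tensor scores = outputs->elements()[0].toTensor().squeeze(0); // {3000, 21}
    torch::Tensor boxes  = outputs->elements()[1].toTensor().squeeze(0); // {3000, 4}

    const float threshold = 0.5f;
    for (int64_t cls = 1; cls < scores.size(1); ++cls) {  // class 0 is background
        torch::Tensor mask = scores.select(1, cls) > threshold;
        torch::Tensor kept = boxes.index({mask});
        if (kept.size(0) > 0)
            std::cout << "class " << cls << ": " << kept.size(0) << " candidate boxes\n";
        // Per-class non-maximum suppression would follow here to produce
        // the final {num_detections, 5} tensor seen in the log.
    }
    return 0;
}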

Example output

[Image: example detection output of the MobileNetV2 SSD]
