[Refactor] Refactor demo readme. (#365)
* [Refactor] Refactor demo

* add openvino benchmark

* update readme

* add ncnn benchmark
RangiLyu committed Dec 24, 2021
1 parent 5967031 commit f2bb550
Showing 5 changed files with 96 additions and 71 deletions.
README.md: 6 changes (0 additions, 6 deletions)

@@ -243,12 +243,6 @@ NanoDet provide C++ and Android demo based on ncnn library.
python tools/export_onnx.py --cfg_path ${CONFIG_PATH} --model_path ${PYTORCH_MODEL_PATH}
```

-Then using [onnx-simplifier](https://github.com/daquexian/onnx-simplifier) to simplify onnx structure.
-
-```shell script
-python -m onnxsim ${INPUT_ONNX_MODEL} ${OUTPUT_ONNX_MODEL}
-```

Run **onnx2ncnn** in ncnn tools to generate ncnn .param and .bin file.

After that, using **ncnnoptimize** to optimize ncnn model.
demo_mnn/README.md: 52 changes (18 additions, 34 deletions)

@@ -23,26 +23,20 @@ Please follow the [official document](https://www.yuque.com/mnn/en/build_linux)
1. Export ONNX model

```shell
-python ./tools/export_onnx.py
python tools/export_onnx.py --cfg_path ${CONFIG_PATH} --model_path ${PYTORCH_MODEL_PATH}
```
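
   For example, with the config and checkpoint paths filled in (the paths below are placeholders):

   ```shell
   python tools/export_onnx.py --cfg_path config/nanodet-m.yml --model_path nanodet_m.ckpt
   ```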

-2. Use *onnx-simplifier* to simplify it
2. Convert to MNN

``` shell
-python -m onnxsim ./output.onnx sim.onnx
python -m MNN.tools.mnnconvert -f ONNX --modelFile sim.onnx --MNNModel nanodet.mnn
```

-3. Convert to MNN
-
-``` shell
-python -m MNN.tools.mnnconvert -f ONNX --modelFile sim.onnx --MNNModel nanodet-320.mnn
-```
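
To sanity-check the file converted in step 2, a minimal load test with the MNN Python package can be used (a sketch; it assumes `nanodet.mnn` sits in the current directory):

```python
import MNN

# Creating an interpreter fails loudly if the .mnn file is malformed.
interpreter = MNN.Interpreter("nanodet.mnn")
session = interpreter.createSession()
# Print the input tensor shape as a quick sanity check.
print(interpreter.getSessionInput(session).getShape())
```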

-It should be note that the input size does not have to be 320, it can be any integer multiple of strides,
It should be noted that the input size does not have to be fixed; it can be any integer multiple of the strides,
since NanoDet is anchor-free. We can adapt the shape of `dummy_input` in *./tools/export_onnx.py* to get ONNX and MNN models
with different input sizes.
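
For illustration, a minimal sketch of that idea (this is not the repository's actual script; the stand-in model and output name are placeholders):

```python
import torch

# The exported ONNX input resolution is fixed by the dummy tensor's shape,
# so choosing another multiple of the largest stride changes the input size.
model = torch.nn.Conv2d(3, 8, 3, stride=2, padding=1)  # stand-in for NanoDet
dummy_input = torch.randn(1, 3, 320, 320)  # e.g. 320x320 instead of 416x416
torch.onnx.export(model, dummy_input, "model-320.onnx", opset_version=11)
```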

-Here are converted model [Baidu Disk](https://pan.baidu.com/s/1DE4_yo0xez6Wd95xv7NnDQ)(extra code: *5mfa*),
The converted model is available on
[Google Drive](https://drive.google.com/drive/folders/1dEdAXkof_lCusYBNrgbGzdLFZbDPMiFn?usp=sharing).

## Build
@@ -70,29 +64,7 @@ Note that a flag at `main.cpp` is used to control whether to show the detection
### Python
`demo_mnn.py` provides an inference class `NanoDetMNN` that combines preprocessing, postprocessing, and visualization.
It can also be used from the command line:
-```shell
-demo_mnn.py [-h] [--model_path MODEL_PATH] [--cfg_path CFG_PATH]
-            [--img_fold IMG_FOLD] [--result_fold RESULT_FOLD]
-            [--input_shape INPUT_SHAPE INPUT_SHAPE]
-            [--backend {MNN,ONNX,torch}]
-```
-
-For example:
-
-``` shell
-# run MNN 320 model
-python ./demo_mnn.py --model_path ../model/nanodet-320.mnn --img_fold ../imgs --result_fold ../results
-# run MNN 160 model
-python ./demo_mnn.py --model_path ../model/nanodet-160.mnn --input_shape 160 160 --backend MNN
-# run onnx model
-python ./demo_mnn.py --model_path ../model/sim.onnx --backend ONNX
-# run Pytorch model
-python ./demo_mnn.py --model_path ../model/nanodet_m.pth ../../config/nanodet-m.yml --backend torch
-```
-
-The multi-backend python demo is still working in progress.
### C++
The C++ inference interface is the same as the ncnn demo; to detect images in a folder, run:
@@ -107,6 +79,18 @@ For speed benchmark
./nanodet-mnn "3" "0"
```

## Custom model

If you want to use a custom model, please make sure the hyperparameters
in `nanodet_mnn.h` match your training config file.

```cpp
int input_size[2] = {416, 416}; // input height and width
int num_class = 80; // number of classes. 80 for COCO
int reg_max = 7; // `reg_max` set in the training config. Default: 7.
std::vector<int> strides = { 8, 16, 32, 64 }; // strides of the multi-level feature.
```
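
For instance, a hypothetical two-class model trained at 320x320 with the default strides would use values like these (illustrative only):

```cpp
int input_size[2] = {320, 320}; // must match the exported model's input size
int num_class = 2;              // e.g. a custom two-class detector
int reg_max = 7;                // keep in sync with the training config
std::vector<int> strides = { 8, 16, 32, 64 };
```
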
## Reference
[Ultra-Light-Fast-Generic-Face-Detector-1MB](https://github.com/Linzaer/Ultra-Light-Fast-Generic-Face-Detector-1MB/tree/master/MNN)
demo_ncnn/README.md: 53 changes (44 additions, 9 deletions)

@@ -12,7 +12,7 @@ Download and Install Visual Studio from https://visualstudio.microsoft.com/vs/co
### Step2.
Download and install OpenCV from https://github.com/opencv/opencv/releases

-### Step3(Optional).
### Step3 (Optional).
Download and install Vulkan SDK from https://vulkan.lunarg.com/sdk/home

### Step4.
@@ -30,7 +30,6 @@ Add `ncnn_DIR` = `YOUR_NCNN_PATH/build/install/lib/cmake/ncnn` to system environ
Build project: Open x64 Native Tools Command Prompt for VS 2019 or 2017

``` cmd
-cd <this-folder>
mkdir -p build
cd build
cmake ..
@@ -65,7 +64,6 @@ export ncnn_DIR=YOUR_NCNN_PATH/build/install/lib/cmake/ncnn
Build project

``` shell script
-cd <this-folder>
mkdir build
cd build
cmake ..
@@ -75,32 +73,32 @@ make
# Run demo

Download NanoDet ncnn model.
-* [NanoDet ncnn model download link](https://github.com/RangiLyu/nanodet/releases/download/v0.3.0/nanodet_m_ncnn_model.zip)
* [NanoDet-Plus ncnn model download link](https://drive.google.com/file/d/1cuVBJiFKwyq1-l3AwHoP2boTesUQP-6K/view?usp=sharing)

-Copy nanodet_m.param and nanodet_m.bin to demo program folder.
Unzip the file, rename the files to `nanodet.param` and `nanodet.bin`, and copy them to the demo program folder (`demo_ncnn/build`).
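
For example (the archive contents and file names below are illustrative; use whatever the download actually contains):

```shell script
unzip nanodet_plus_m_ncnn_model.zip
mv nanodet-plus-m.param nanodet.param
mv nanodet-plus-m.bin nanodet.bin
cp nanodet.param nanodet.bin demo_ncnn/build/
```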

## Webcam

```shell script
-nanodet_demo 0 0
./nanodet_demo 0 0
```

## Inference images

```shell script
-nanodet_demo 1 IMAGE_FOLDER/*.jpg
./nanodet_demo 1 ${IMAGE_FOLDER}/*.jpg
```

## Inference video

```shell script
-nanodet_demo 2 VIDEO_PATH
./nanodet_demo 2 ${VIDEO_PATH}
```

## Benchmark

```shell script
-nanodet_demo 3 0
./nanodet_demo 3 0
```
![bench_mark](benchmark.jpg)
****
@@ -114,3 +112,40 @@ Linux:
```shell script
export OMP_THREAD_LIMIT=4
```

Model |Resolution|COCO mAP | CPU Latency (i7-8700) | ARM CPU Latency (4*A76) | Vulkan GPU Latency (GTX1060) |
:------------------:|:--------:|:--------:|:---------------------:|:-----------------------:|:---------------------:|
NanoDet-Plus-m | 320*320 | 27.0 | 10.32ms / 96.9FPS | | 3.40ms / 294.1FPS |
NanoDet-Plus-m | 416*416 | 30.4 | 17.98ms / 55.6FPS | | 4.27ms / 234.2FPS |
NanoDet-Plus-m-1.5x | 320*320 | 29.9 | 12.87ms / 77.7FPS | | 3.78ms / 264.6FPS |
NanoDet-Plus-m-1.5x | 416*416 | 34.1 | 22.53ms / 44.4FPS | | 4.79ms / 208.8FPS |

# Custom model

## Export to ONNX

```shell script
python tools/export_onnx.py --cfg_path ${CONFIG_PATH} --model_path ${PYTORCH_MODEL_PATH}
```

## Convert to ncnn

Run **onnx2ncnn** from the ncnn tools to generate the ncnn `.param` and `.bin` files.

After that, use **ncnnoptimize** to optimize the ncnn model.
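
A typical sequence looks like this (file names are placeholders; both tools live in your ncnn build's `tools` directory, and the trailing `0` keeps the optimized model in fp32):

```shell script
./onnx2ncnn nanodet.onnx nanodet.param nanodet.bin
./ncnnoptimize nanodet.param nanodet.bin nanodet-opt.param nanodet-opt.bin 0
```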

If you have questions about converting the ncnn model, refer to the ncnn wiki: https://github.com/Tencent/ncnn/wiki

You can also convert the model with the online tool https://convertmodel.com/ .

## Modify hyperparameters

If you want to use a custom model, please make sure the hyperparameters
in `nanodet.h` match your training config file.

```cpp
int input_size[2] = {416, 416}; // input height and width
int num_class = 80; // number of classes. 80 for COCO
int reg_max = 7; // `reg_max` set in the training config. Default: 7.
std::vector<int> strides = { 8, 16, 32, 64 }; // strides of the multi-level feature.
```
demo_ncnn/main.cpp: 13 changes (4 additions, 9 deletions)

@@ -306,14 +306,9 @@ int benchmark(NanoDet& detector)
{
double start = ncnn::get_current_time();
ncnn::Extractor ex = detector.Net->create_extractor();
-ex.input("input.1", input);
-for (const auto& head_info : detector.heads_info)
-{
-    ncnn::Mat dis_pred;
-    ncnn::Mat cls_pred;
-    ex.extract(head_info.dis_layer.c_str(), dis_pred);
-    ex.extract(head_info.cls_layer.c_str(), cls_pred);
-}
ex.input("data", input);
ncnn::Mat preds;
ex.extract("output", preds);
double end = ncnn::get_current_time();

double time = end - start;
@@ -337,7 +332,7 @@ int main(int argc, char** argv)
fprintf(stderr, "usage: %s [mode] [path]. \n For webcam mode=0, path is cam id; \n For image demo, mode=1, path=xxx/xxx/*.jpg; \n For video, mode=2; \n For benchmark, mode=3 path=0.\n", argv[0]);
return -1;
}
-NanoDet detector = NanoDet("./nanodet_m.param", "./nanodet_m.bin", true);
NanoDet detector = NanoDet("./nanodet.param", "./nanodet.bin", true);
int mode = atoi(argv[1]);
switch (mode)
{
demo_openvino/README.md: 43 changes (30 additions, 13 deletions)

@@ -64,13 +64,7 @@ source /opt/intel/openvino_2021/bin/setupvars.sh
python ./tools/export_onnx.py --cfg_path ${CONFIG_PATH} --model_path ${PYTORCH_MODEL_PATH}
```

-2. Use *onnx-simplifier* to simplify it
-
-``` shell
-python -m onnxsim ${INPUT_ONNX_MODEL} ${OUTPUT_ONNX_MODEL}
-```

-3. Convert to OpenVINO
2. Convert to OpenVINO

``` shell
cd <INSTALL_DIR>/openvino_2021/deployment_tools/model_optimizer
@@ -84,7 +78,7 @@ source /opt/intel/openvino_2021/bin/setupvars.sh

Then convert the model. Note: `mean_values` and `scale_values` must match the training settings in your YAML config file.
```shell
-python3 mo_onnx.py --input_model <ONNX_MODEL> --mean_values [103.53,116.28,123.675] --scale_values [57.375,57.12,58.395]
python3 mo.py --input_model ${ONNX_MODEL} --mean_values [103.53,116.28,123.675] --scale_values [57.375,57.12,58.395] --output output --data_type FP32 --output_dir ${OUTPUT_DIR}
```

## Build
@@ -111,28 +105,51 @@ make

## Run demo

-First, move nanodet openvino model files to the demo's folder. Then run these commands:
You can convert the model to OpenVINO yourself or use the [converted model](https://drive.google.com/file/d/1dAwIA2pMkSetPEcvB0dvmLaOAK-9h-Lm/view?usp=sharing).

First, move the NanoDet OpenVINO model files to the `build` folder and rename them to `nanodet.xml`, `nanodet.mapping`, and `nanodet.bin`.

Then run these commands:

### Webcam

```shell
-nanodet_demo 0 0
./nanodet_demo 0 0
```

### Inference images

```shell
-nanodet_demo 1 IMAGE_FOLDER/*.jpg
./nanodet_demo 1 ${IMAGE_FOLDER}/*.jpg
```

### Inference video

```shell
-nanodet_demo 2 VIDEO_PATH
./nanodet_demo 2 ${VIDEO_PATH}
```

### Benchmark

```shell
-nanodet_demo 3 0
./nanodet_demo 3 0
```

Model |Resolution|COCO mAP | CPU Latency (i7-8700) |
:------------------:|:--------:|:--------:|:---------------------:|
NanoDet-Plus-m | 320*320 | 27.0 | 5.25ms / 190FPS |
NanoDet-Plus-m | 416*416 | 30.4 | 8.32ms / 120FPS |
NanoDet-Plus-m-1.5x | 320*320 | 29.9 | 7.21ms / 139FPS |
NanoDet-Plus-m-1.5x | 416*416 | 34.1 | 11.50ms / 87FPS |

## Custom model

If you want to use a custom model, please make sure the hyperparameters
in `nanodet_openvino.h` match your training config file.

```cpp
int input_size[2] = {416, 416}; // input height and width
int num_class = 80; // number of classes. 80 for COCO
int reg_max = 7; // `reg_max` set in the training config. Default: 7.
std::vector<int> strides = { 8, 16, 32, 64 }; // strides of the multi-level feature.
```
