fix doc of slim
LDOUBLEV committed May 21, 2021
1 parent 5e35b62 commit 7cb0b54
Showing 4 changed files with 13 additions and 13 deletions.
4 changes: 2 additions & 2 deletions deploy/slim/prune/README.md
@@ -24,12 +24,12 @@
```bash
git clone https://github.com/PaddlePaddle/PaddleSlim.git
git checkout develop
- cd Paddleslim
+ cd PaddleSlim
python3 setup.py install
```
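A quick way to confirm the install succeeded is importing the package, as in this minimal sketch (it assumes the setup script registers the package under the name `paddleslim`):

```bash
# check that PaddleSlim is importable; prints the version if the attribute exists
python3 -c "import paddleslim; print(getattr(paddleslim, '__version__', 'installed'))"
```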

### 2. Obtain the pretrained model
- Model pruning requires loading a trained model. PaddleOCR also provides a series of (models)[../../../doc/doc_ch/models_list.md]; developers can pick a model from the list as needed or use their own.
+ Model pruning requires loading a trained model. PaddleOCR also provides a series of [models](../../../doc/doc_ch/models_list.md); developers can pick a model from the list as needed or use their own.

### 3. Sensitivity analysis training

4 changes: 2 additions & 2 deletions deploy/slim/prune/README_en.md
@@ -23,14 +23,14 @@ Five steps for OCR model pruning:
```bash
git clone https://github.com/PaddlePaddle/PaddleSlim.git
git checkout develop
- cd Paddleslim
+ cd PaddleSlim
python3 setup.py install
```


### 2. Download Pretrained Model
Model pruning needs to load pre-trained models.
- PaddleOCR also provides a series of (models)[../../../doc/doc_en/models_list_en.md]. Developers can choose one of them or use their own model according to their needs.
+ PaddleOCR also provides a series of [models](../../../doc/doc_en/models_list_en.md). Developers can choose one of them or use their own model according to their needs.
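For instance, one of the provided detection models can be fetched and unpacked as follows (this reuses the download URL that appears in the quantization guide below; any model from the list works the same way):

```bash
# download and unpack a provided pre-trained detection model
wget https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_det_train.tar
tar -xf ch_ppocr_mobile_v2.0_det_train.tar
```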


### 3. Pruning sensitivity analysis
10 changes: 5 additions & 5 deletions deploy/slim/quantization/README.md
@@ -23,7 +23,7 @@

```bash
git clone https://github.com/PaddlePaddle/PaddleSlim.git
- cd Paddleslim
+ cd PaddleSlim
python setup.py install
```

@@ -37,12 +37,12 @@ PaddleOCR provides a series of trained [models](../../../doc/doc_ch/models_list.

The quantization training code is located in slim/quantization/quant.py. For example, to train a detection model, the training command is as follows:
```bash
- python deploy/slim/quantization/quant.py -c configs/det/det_mv3_db.yml -o Global.pretrained_model='your trained model' Global.save_model_dir=./output/quant_model
+ python deploy/slim/quantization/quant.py -c configs/det/ch_ppocr_v2.0/ch_det_mv3_db_v2.0.yml -o Global.pretrained_model='your trained model' Global.save_model_dir=./output/quant_model

# for example, download the provided trained detection model
wget https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_det_train.tar
tar -xf ch_ppocr_mobile_v2.0_det_train.tar
- python deploy/slim/quantization/quant.py -c configs/det/det_mv3_db.yml -o Global.pretrained_model=./ch_ppocr_mobile_v2.0_det_train/best_accuracy Global.save_inference_dir=./output/quant_inference_model
+ python deploy/slim/quantization/quant.py -c configs/det/ch_ppocr_v2.0/ch_det_mv3_db_v2.0.yml -o Global.pretrained_model=./ch_ppocr_mobile_v2.0_det_train/best_accuracy Global.save_model_dir=./output/quant_inference_model

```
To quantize a recognition model instead, simply change the config file and the loaded model parameters, as sketched below.
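The recognition variant only swaps the config and the weights; the config path and output directory here are illustrative, so substitute the recognition config you actually train with:

```bash
# hypothetical example: quantization training for a recognition model
# (replace the config file and pretrained weights with your own)
python deploy/slim/quantization/quant.py -c configs/rec/ch_ppocr_v2.0/rec_chinese_lite_train_v2.0.yml -o Global.pretrained_model='your trained rec model' Global.save_model_dir=./output/quant_rec_model
```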
@@ -52,10 +52,10 @@ python deploy/slim/quantization/quant.py -c configs/det/det_mv3_db.yml -o Global
After obtaining the model saved by quantization training, we can export it as an inference_model for predictive deployment:

```bash
- python deploy/slim/quantization/export_model.py -c configs/det/det_mv3_db.yml -o Global.checkpoints=output/quant_model/best_accuracy Global.save_model_dir=./output/quant_inference_model
+ python deploy/slim/quantization/export_model.py -c configs/det/ch_ppocr_v2.0/ch_det_mv3_db_v2.0.yml -o Global.checkpoints=output/quant_model/best_accuracy Global.save_inference_dir=./output/quant_inference_model
```

### 5. Deploy the quantized model

- The quantized model exported by the above steps still stores its parameters in FP32 precision, so the model size does not change after quantization, but the parameter values are restricted to the int8 range; the exported model can be converted with PaddleLite's opt model conversion tool.
+ The quantized model exported by the above steps still stores its parameters in FP32 precision, but the parameter values are restricted to the int8 range; the exported model can be converted with PaddleLite's opt model conversion tool.
For quantized model deployment, refer to [mobile model deployment](../../lite/readme.md).
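As a rough sketch of that conversion step (it assumes Paddle Lite and its `paddle_lite_opt` tool are installed, and that the export step above produced `inference.pdmodel`/`inference.pdiparams`; file names can differ by version):

```bash
# convert the exported quantized model for mobile deployment with Paddle Lite's opt tool
paddle_lite_opt \
    --model_file=./output/quant_inference_model/inference.pdmodel \
    --param_file=./output/quant_inference_model/inference.pdiparams \
    --optimize_out=./output/quant_lite_model \
    --valid_targets=arm
```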
8 changes: 4 additions & 4 deletions deploy/slim/quantization/README_en.md
@@ -26,7 +26,7 @@ After training, if you want to further compress the model size and accelerate th

```bash
git clone https://github.com/PaddlePaddle/PaddleSlim.git
- cd Paddleslim
+ cd PaddleSlim
python setup.py install
```

@@ -43,12 +43,12 @@ After the quantization strategy is defined, the model can be quantized.

The code for quantization training is located in `slim/quantization/quant.py`. For example, to train a detection model, the training instructions are as follows:
```bash
- python deploy/slim/quantization/quant.py -c configs/det/det_mv3_db.yml -o Global.pretrained_model='your trained model' Global.save_model_dir=./output/quant_model
+ python deploy/slim/quantization/quant.py -c configs/det/ch_ppocr_v2.0/ch_det_mv3_db_v2.0.yml -o Global.pretrained_model='your trained model' Global.save_model_dir=./output/quant_model

# download provided model
wget https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_det_train.tar
tar -xf ch_ppocr_mobile_v2.0_det_train.tar
- python deploy/slim/quantization/quant.py -c configs/det/det_mv3_db.yml -o Global.pretrained_model=./ch_ppocr_mobile_v2.0_det_train/best_accuracy Global.save_model_dir=./output/quant_model
+ python deploy/slim/quantization/quant.py -c configs/det/ch_ppocr_v2.0/ch_det_mv3_db_v2.0.yml -o Global.pretrained_model=./ch_ppocr_mobile_v2.0_det_train/best_accuracy Global.save_model_dir=./output/quant_model
```


@@ -57,7 +57,7 @@ python deploy/slim/quantization/quant.py -c configs/det/det_mv3_db.yml -o Global
After getting the model saved by quantization training, we can export it as an inference_model for predictive deployment:

```bash
- python deploy/slim/quantization/export_model.py -c configs/det/det_mv3_db.yml -o Global.checkpoints=output/quant_model/best_accuracy Global.save_inference_dir=./output/quant_inference_model
+ python deploy/slim/quantization/export_model.py -c configs/det/ch_ppocr_v2.0/ch_det_mv3_db_v2.0.yml -o Global.checkpoints=output/quant_model/best_accuracy Global.save_inference_dir=./output/quant_inference_model
```
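If the export succeeds, the target directory should hold the serialized program and weights; a quick listing confirms it (the `inference.pdmodel`/`inference.pdiparams` names are assumed from Paddle 2.x export conventions and may vary by version):

```bash
# inspect the exported inference model directory
ls ./output/quant_inference_model
```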

### 5. Deploy
