Commit

Import from internal hub.
megvii-mge committed Jul 9, 2020
1 parent 8e966d9 commit 73b3450
Showing 5 changed files with 35 additions and 24 deletions.
24 changes: 12 additions & 12 deletions models/megengine_nlp_bert.md
Original file line number Diff line number Diff line change
@@ -11,16 +11,16 @@ github-link: https://github.com/megengine/models

```python
import megengine.hub as hub
model = hub.load("megengine/models", "wwm_cased_L-24_H-1024_A-16", pretrained=True)
model = megengine.hub.load("megengine/models", "wwm_cased_L-24_H-1024_A-16", pretrained=True)
# or any of these variants
# model = hub.load("megengine/models", "wwm_cased_L-24_H-1024_A-16", pretrained=True)
# model = hub.load("megengine/models", "wwm_uncased_L-24_H-1024_A-16", pretrained=True)
# model = hub.load("megengine/models", "cased_L-12_H-768_A-12", pretrained=True)
# model = hub.load("megengine/models", "cased_L-24_H-1024_A-16", pretrained=True)
# model = hub.load("megengine/models", "uncased_L-12_H-768_A-12", pretrained=True)
# model = hub.load("megengine/models", "uncased_L-24_H-1024_A-16", pretrained=True)
# model = hub.load("megengine/models", "chinese_L-12_H-768_A-12", pretrained=True)
# model = hub.load("megengine/models", "multi_cased_L-12_H-768_A-12", pretrained=True)
# model = megengine.hub.load("megengine/models", "wwm_cased_L-24_H-1024_A-16", pretrained=True)
# model = megengine.hub.load("megengine/models", "wwm_uncased_L-24_H-1024_A-16", pretrained=True)
# model = megengine.hub.load("megengine/models", "cased_L-12_H-768_A-12", pretrained=True)
# model = megengine.hub.load("megengine/models", "cased_L-24_H-1024_A-16", pretrained=True)
# model = megengine.hub.load("megengine/models", "uncased_L-12_H-768_A-12", pretrained=True)
# model = megengine.hub.load("megengine/models", "uncased_L-24_H-1024_A-16", pretrained=True)
# model = megengine.hub.load("megengine/models", "chinese_L-12_H-768_A-12", pretrained=True)
# model = megengine.hub.load("megengine/models", "multi_cased_L-12_H-768_A-12", pretrained=True)
```
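The variant names above encode the architecture: `L` is the number of transformer layers, `H` the hidden size, and `A` the number of attention heads, while the prefix marks whole-word masking (`wwm`), casing, or the vocabulary (`chinese`, `multi`). As an illustrative helper (not part of the MegEngine API), the suffix can be parsed like this:

```python
import re

def parse_bert_variant(name):
    # Parse a variant name such as "wwm_cased_L-24_H-1024_A-16".
    # L = transformer layers, H = hidden size, A = attention heads.
    m = re.search(r"L-(\d+)_H-(\d+)_A-(\d+)$", name)
    if m is None:
        raise ValueError("unrecognized variant name: %s" % name)
    layers, hidden, heads = map(int, m.groups())
    return {
        "layers": layers,
        "hidden": hidden,
        "heads": heads,
        "cased": "uncased" not in name,
        "whole_word_masking": name.startswith("wwm_"),
    }

print(parse_bert_variant("wwm_cased_L-24_H-1024_A-16"))
```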

<!-- section: zh_CN -->
@@ -147,7 +147,7 @@ We provide the following pre-trained models for users to finetune in different t
* `chinese_L-12_H-768_A-12`
* `multi_cased_L-12_H-768_A-12`

The weight of the model comes from Google's pre-trained models, and its meaning is also consistent with it. Users can use `megengine.hub` to easily use the pre-trained bert model, and download the corresponding` vocab.txt` and `bert_config.json`. We also provide a convenient script in [models] (https://github.com/megengine/models/official/nlp/bert), which can directly obtain the corresponding dictionary, configuration, and pre-trained model by task name. .
The model weights come from Google's pre-trained models and carry the same meaning. Users can easily load a pre-trained BERT model via `megengine.hub` and download the corresponding `vocab.txt` and `bert_config.json`. We also provide a convenient script in [models](https://github.com/megengine/models/official/nlp/bert), which fetches the matching dictionary, configuration, and pre-trained model by task name.

```python
import megengine.hub as hub
Expand Down Expand Up @@ -229,11 +229,11 @@ bert, config, vocab_file = create_hub_bert('uncased_L-12_H-768_A-12', pretrained
model = BertForSequenceClassification(config, num_labels=2, bert=bert)
```

All pre-trained models expect the data to be pre-processed correctly. The requirements are consistent with the Google's bert. For details, please refer to original [bert] (https://github.com/google-research/bert), or refer to our example [models] ( https://github.com/megengine/models/official/nlp/bert).
All pre-trained models expect correctly pre-processed input data. The requirements are consistent with Google's BERT; for details, please refer to the original [bert](https://github.com/google-research/bert) repository, or to our example [models](https://github.com/megengine/models/official/nlp/bert).
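As a rough sketch of what that pre-processing produces (using a hypothetical toy vocabulary here; real inputs use WordPiece tokenization against the downloaded `vocab.txt`), a sentence pair is packed into `input_ids`, `input_mask`, and `segment_ids`:

```python
def pack_pair(tokens_a, tokens_b, vocab, max_seq_length=16):
    # [CLS] tokens_a [SEP] tokens_b [SEP], then zero-pad to max_seq_length.
    tokens = ["[CLS]"] + tokens_a + ["[SEP]"] + tokens_b + ["[SEP]"]
    segment_ids = [0] * (len(tokens_a) + 2) + [1] * (len(tokens_b) + 1)
    input_ids = [vocab[t] for t in tokens]
    input_mask = [1] * len(input_ids)  # 1 = real token, 0 = padding
    padding = [0] * (max_seq_length - len(input_ids))
    return input_ids + padding, input_mask + padding, segment_ids + padding

# Toy vocabulary for illustration only; real ids come from vocab.txt.
vocab = {"[CLS]": 101, "[SEP]": 102, "hello": 1, "world": 2, "hi": 3}
ids, mask, segs = pack_pair(["hello"], ["hi", "world"], vocab)
```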


### Model Description
We provide example code in [models] (https://github.com/megengine/models/official/nlp/bert).
We provide example code in [models](https://github.com/megengine/models/official/nlp/bert).
This example code fine-tunes the pre-trained `uncased_L-12_H-768_A-12` model on the Microsoft Research Paraphrase (MRPC) dataset.

Our test, run with the original implementation's hyper-parameters, gave evaluation results between 84% and 88%.
8 changes: 4 additions & 4 deletions models/megengine_vision_deeplabv3plus.md
@@ -1,9 +1,9 @@
---
template: hub1
title: deeplabv3plus
title: DeepLabV3plus
summary:
en_US: Deeplabv3plus pre-trained on VOC
zh_CN: Deeplabv3plus (VOC预训练权重)
en_US: DeepLabV3plus pre-trained on VOC
zh_CN: DeepLabV3plus (VOC预训练权重)
author: MegEngine Team
tags: [vision]
github-link: https://github.com/megengine/models
@@ -61,7 +61,7 @@ pred = cv2.resize(pred.astype("uint8"), (oriw, orih), interpolation=cv2.INTER_LI

### 模型描述

目前我们提供了deeplabv3plus的预训练模型, 在voc验证集的表现如下:
目前我们提供了 deeplabv3plus 的预训练模型, 在 VOC 验证集的表现如下:

Methods | Backbone | TrainSet | EvalSet | mIoU_single | mIoU_multi |
:--: |:--: |:--: |:--: |:--: |:--: |
7 changes: 5 additions & 2 deletions models/megengine_vision_resnet.md
@@ -16,6 +16,7 @@ model = megengine.hub.load('megengine/models', 'resnet18', pretrained=True)
# model = megengine.hub.load('megengine/models', 'resnet34', pretrained=True)
# model = megengine.hub.load('megengine/models', 'resnet50', pretrained=True)
# model = megengine.hub.load('megengine/models', 'resnet101', pretrained=True)
# model = megengine.hub.load('megengine/models', 'resnet152', pretrained=True)
# model = megengine.hub.load('megengine/models', 'resnext50_32x4d', pretrained=True)
model.eval()
```
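A typical single-crop evaluation pipeline resizes the image, center-crops to 224x224, and normalizes. The sketch below is an assumption for illustration (the exact steps and the mean/std values live in the models repository):

```python
import numpy as np

def center_crop(img, size=224):
    # img: H x W x C array; crop the central size x size patch.
    h, w = img.shape[:2]
    top, left = (h - size) // 2, (w - size) // 2
    return img[top:top + size, left:left + size]

def normalize(img, mean, std):
    # Subtract per-channel mean and divide by per-channel std.
    return (img.astype("float32") - mean) / std

img = np.zeros((256, 320, 3), dtype="uint8")  # placeholder image
crop = center_crop(img)
x = normalize(crop,
              mean=np.array([103.53, 116.28, 123.675]),   # assumed BGR stats
              std=np.array([57.375, 57.12, 58.395]))
```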
@@ -54,14 +55,15 @@ print(probs)

### 模型描述

目前我们提供了以下几个预训练模型,分别是`resnet18`, `resnet34`, `resnet50`, `resnet101``resnext50_32x4d`,它们在ImageNet验证集上的单crop性能如下表:
目前我们提供了以下几个预训练模型,分别是`resnet18`, `resnet34`, `resnet50`, `resnet101`, `resnet152`, `resnext50_32x4d`,它们在ImageNet验证集上的单crop性能如下表:

| 模型 | Top1 acc | Top5 acc |
| --- | --- | --- |
| ResNet18 | 70.312 | 89.430 |
| ResNet34 | 73.960 | 91.630 |
| ResNet50 | 76.254 | 93.056 |
| ResNet101| 77.944 | 93.844 |
| ResNet152| 78.582 | 94.130 |
| ResNeXt50 32x4d | 77.592 | 93.644 |

### 参考文献
@@ -105,14 +107,15 @@ print(probs)

### Model Description

Currently we provide these pretrained models: `resnet18`, `resnet34`, `resnet50`, `resnet101`, `resnext50_32x4d`. Their 1-crop accuracy on ImageNet validation dataset can be found in following table.
Currently we provide these pretrained models: `resnet18`, `resnet34`, `resnet50`, `resnet101`, `resnet152`, `resnext50_32x4d`. Their 1-crop accuracy on ImageNet validation dataset can be found in following table.

| model | Top1 acc | Top5 acc |
| --- | --- | --- |
| ResNet18 | 70.312 | 89.430 |
| ResNet34 | 73.960 | 91.630 |
| ResNet50 | 76.254 | 93.056 |
| ResNet101| 77.944 | 93.844 |
| ResNet152| 78.582 | 94.130 |
| ResNeXt50 32x4d | 77.592 | 93.644 |

### References
5 changes: 2 additions & 3 deletions models/megengine_vision_retinanet.md
@@ -1,6 +1,6 @@
---
template: hub1
title: retinanet
title: RetinaNet
summary:
en_US: RetinaNet pre-trained on COCO
zh_CN: RetinaNet (COCO预训练权重)
@@ -13,9 +13,8 @@ github-link: https://github.com/megengine/models
from megengine import hub
model = hub.load(
"megengine/models",
"retinanet_res50_coco_1x_800size",
"retinanet_res50_1x_800size",
pretrained=True,
use_cache=False,
)
model.eval()

15 changes: 12 additions & 3 deletions models/megengine_vision_shufflenet_v2.md
@@ -1,6 +1,6 @@
---
template: hub1
title: ResNet
title: ShuffleNet V2
summary:
en_US: "ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design"
zh_CN: ShuffleNet V2(ImageNet 预训练权重)
@@ -12,6 +12,9 @@ github-link: https://github.com/megengine/models
```python
import megengine.hub
model = megengine.hub.load('megengine/models', 'shufflenet_v2_x1_0', pretrained=True)
# model = megengine.hub.load('megengine/models', 'shufflenet_v2_x0_5', pretrained=True)
# model = megengine.hub.load('megengine/models', 'shufflenet_v2_x1_5', pretrained=True)
# model = megengine.hub.load('megengine/models', 'shufflenet_v2_x2_0', pretrained=True)
model.eval()
```
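After `model.eval()`, the model's logits are typically turned into class probabilities with a softmax before reading off the top predictions. A minimal NumPy sketch of that step (the repository's example code computes `probs` with MegEngine ops, so this is illustrative only):

```python
import numpy as np

def softmax_topk(logits, k=5):
    # Numerically stable softmax followed by top-k selection.
    e = np.exp(logits - logits.max())
    probs = e / e.sum()
    top = np.argsort(probs)[::-1][:k]
    return top, probs[top]

top, p = softmax_topk(np.array([1.0, 3.0, 2.0, 0.5]), k=2)
```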
<!-- section: zh_CN -->
@@ -53,7 +56,10 @@ print(probs)

| 模型 | top1 acc | top5 acc |
| --- | --- | --- |
| shufflenet_v2_x1_0 | 69.369 | 88.793 |
| ShuffleNetV2 x0.5 | 60.696 | 82.190 |
| ShuffleNetV2 x1.0 | 69.372 | 88.764 |
| ShuffleNetV2 x1.5 | 72.806 | 90.792 |
| ShuffleNetV2 x2.0 | 75.074 | 92.278 |

### 参考文献

@@ -99,7 +105,10 @@ Currently we provide several pretrained models(see the table below), Their 1-cro

| model | top1 acc | top5 acc |
| --- | --- | --- |
| shufflenet_v2_x1_0 | 69.369 | 88.793 |
| ShuffleNetV2 x0.5 | 60.696 | 82.190 |
| ShuffleNetV2 x1.0 | 69.372 | 88.764 |
| ShuffleNetV2 x1.5 | 72.806 | 90.792 |
| ShuffleNetV2 x2.0 | 75.074 | 92.278 |

### References

