
Exclude concat layer for gpu quantization #14060

Merged
merged 3 commits into apache:master on Feb 7, 2019

Conversation

@jitMatrix (Contributor) commented on Feb 3, 2019

Description

Exclude the concat layer from GPU quantization, since #13297 enabled the quantized_concat op for CPU only.
Below is the error log before this fix:

python imagenet_inference.py --symbol-file=./model/imagenet1k-inception-bn-quantized-5batches-naive-symbol.json --param-file=./model/imagenet1k-inception-bn-quantized-0000.params --rgb-mean=123.68,116.779,103.939 --num-skipped-batches=50 --num-inference-batches=500 --dataset=./data/val_256_q90.rec
INFO:logger:batch size = 32 for inference
INFO:logger:rgb_mean = 123.68,116.779,103.939
INFO:logger:rgb_std = 1,1,1
INFO:logger:label_name = softmax_label
INFO:logger:Input data shape = (3, 224, 224)
INFO:logger:Dataset for inference: ./data/val_256_q90.rec
[10:15:59] src/io/iter_image_recordio_2.cc:172: ImageRecordIOParser2: ./data/val_256_q90.rec, use 39 threads for decoding..
INFO:logger:Skipping the first 50 batches
INFO:logger:Running model ./model/imagenet1k-inception-bn-quantized-5batches-naive-symbol.json for inference
[10:16:04] src/executor/attach_op_execs_pass.cc:351: Neither FCompute nor FComputeEx registered _contrib_quantized_concat
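For context, here is a minimal sketch of the idea behind this fix: when quantizing for a GPU context, keep Concat nodes in FP32 by passing their names through `excluded_sym_names` of `mxnet.contrib.quantization.quantize_model`. The checkpoint prefix is a placeholder, and this is an illustration of the workaround rather than the actual PR diff.

```python
import json

import mxnet as mx
from mxnet.contrib.quantization import quantize_model

# Placeholder checkpoint prefix; substitute the actual FP32 model files.
sym, arg_params, aux_params = mx.model.load_checkpoint('model/imagenet1k-inception-bn', 0)

ctx = mx.gpu(0)
excluded_sym_names = []
if ctx.device_type == 'gpu':
    # _contrib_quantized_concat only has a CPU (MKL-DNN) kernel, so keep
    # every Concat node in FP32 when targeting GPU.
    nodes = json.loads(sym.tojson())['nodes']
    excluded_sym_names += [n['name'] for n in nodes if n['op'] == 'Concat']

qsym, qarg_params, qaux_params = quantize_model(
    sym=sym, arg_params=arg_params, aux_params=aux_params,
    ctx=ctx, excluded_sym_names=excluded_sym_names,
    calib_mode='none', quantized_dtype='int8')
```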

Checklist

Essentials

Please feel free to remove inapplicable items for your PR.

  • The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to the relevant JIRA issue created (except PRs with tiny changes)
  • Changes are complete (i.e. I finished coding on this PR)
  • All changes have test coverage:
  • Unit tests are added for small changes to verify correctness (e.g. adding a new operator)
  • Nightly tests are added for complicated/long-running ones (e.g. changing distributed kvstore)
  • Build tests will be added for build configuration changes (e.g. adding a new build option with NCCL)
  • Code is well-documented:
  • For user-facing API changes, API doc string has been updated.
  • For new C++ functions in header files, their functionalities and arguments are documented.
  • For new examples, README.md is added to explain what the example does, the source of the dataset, expected performance on the test set, and a reference to the original paper if applicable
  • Check the API doc at http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
  • To my best knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change

Changes

  • Feature1, tests, (and when applicable, API doc)
  • Feature2, tests, (and when applicable, API doc)

Comments

  • If this change is a backward incompatible change, why must this change be made.
  • Interesting edge cases to note here

@jitMatrix jitMatrix requested a review from szha as a code owner February 3, 2019 03:27
@szha szha requested a review from TaoLv February 3, 2019 20:27
@vandanavk (Contributor) commented

@mxnet-label-bot add [pr-awaiting-review, Quantization]

@marcoabreu marcoabreu added the pr-awaiting-review (PR is waiting for code review) and Quantization (Issues/Feature Requests related to Quantization) labels on Feb 4, 2019
@pengzhao-intel (Contributor) left a comment

LGTM, just wait for the CI pass.

@rajeshii could you help rebase the code and pass the CI?

@jitMatrix (Contributor, Author) commented

@reminisce @pengzhao-intel CI passes now :)

@pengzhao-intel (Contributor) commented
@reminisce could you help to take a look again and merge this PR?
Thanks.

@reminisce reminisce merged commit 26ca37c into apache:master Feb 7, 2019
stephenrawls pushed a commit to stephenrawls/incubator-mxnet that referenced this pull request Feb 16, 2019
* exclude concat for gpu quantization

* remove quantized_concat test in non-subgraph flow
vdantu pushed a commit to vdantu/incubator-mxnet that referenced this pull request Mar 31, 2019
* exclude concat for gpu quantization

* remove quantized_concat test in non-subgraph flow
haohuanw pushed a commit to haohuanw/incubator-mxnet that referenced this pull request Jun 23, 2019
* exclude concat for gpu quantization

* remove quantized_concat test in non-subgraph flow