This repository has been archived by the owner on Nov 17, 2023. It is now read-only.

[MKLDNN]Add quantized relu #14604

Merged (13 commits) on Apr 18, 2019

Conversation

@huangzhiyuan (Contributor) commented Apr 3, 2019

Description

This PR adds the quantized relu op and its MKLDNN implementation.
@pengzhao-intel @ZhennanQin @TaoLv
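For readers unfamiliar with the op: a quantized ReLU over int8/uint8 data is a simple elementwise clamp at zero, applied directly to the quantized integers. The sketch below is a plain-Python illustration of that semantics (a hypothetical helper, not the MKLDNN kernel this PR adds):

```python
def quantized_relu(data, dtype):
    """Illustrative quantized ReLU: clamp negative values to zero.

    For uint8 input the op is an identity (all values are already
    non-negative); for int8 input, negatives become zero. Plain-Python
    sketch only, not the MKLDNN implementation from this PR.
    """
    if dtype not in ("uint8", "int8"):
        raise ValueError("quantized relu only supports uint8 and int8 input")
    return [max(0, v) for v in data]
```

Because the quantization scale is positive, clamping the integers is equivalent to clamping the underlying float values, which is why no requantization step is needed.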

Checklist

Essentials

Please feel free to remove inapplicable items for your PR.

  • The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to the relevant JIRA issue created (except PRs with tiny changes)
  • Changes are complete (i.e. I finished coding on this PR)
  • All changes have test coverage:
  • Unit tests are added for small changes to verify correctness (e.g. adding a new operator)
  • Nightly tests are added for complicated/long-running ones (e.g. changing distributed kvstore)
  • Build tests will be added for build configuration changes (e.g. adding a new build option with NCCL)
  • Code is well-documented:
  • For user-facing API changes, API doc string has been updated.
  • For new C++ functions in header files, their functionalities and arguments are documented.
  • For new examples, README.md is added to explain what the example does, the source of the dataset, expected performance on the test set, and a reference to the original paper if applicable
  • Check the API doc at http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
  • To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change

Changes

  • Feature1, tests, (and when applicable, API doc)
  • Feature2, tests, (and when applicable, API doc)

Comments

  • If this change is a backward incompatible change, why must this change be made.
  • Interesting edge cases to note here

@piyushghai (Contributor)

Thanks for your contributions @huangzhiyuan.
Can you please look into the CI failures?

@piyushghai (Contributor)

@mxnet-label-bot Add [MKLDNN, pr-awaiting-review]

@marcoabreu added the MKLDNN and pr-awaiting-review labels Apr 3, 2019
Review on src/operator/nn/mkldnn/mkldnn_act-inl.h (resolved):

mkldnn::algorithm GetMKLDNNActAlgo(const ActivationParam& param);
mkldnn::eltwise_forward::primitive_desc GetActFwdDescImpl(
const ActivationParam& param, bool is_train,
Reviewer (Contributor): unify the format of "&"

@pengzhao-intel (Contributor)

cc @anirudh2290

*/
/*!
* Copyright (c) 2019 by Contributors
* \file mkldnn_act-inl.h
Reviewer (Member): Fix file name.

Author: Done

const std::vector<NDArray>& out_data) {
CHECK(in_data[0].dtype() == mshadow::kUint8 ||
in_data[0].dtype() == mshadow::kInt8)
<< "mkldnn_quantized_activation op only supports uint8 and int8 as input "
Reviewer (Member): mkldnn_quantized_activation is not a valid operator name.

Author: Done

if (param.act_type == activation::kReLU) {
TYPE_ASSIGN_CHECK(*out_type, 0, mshadow::kInt8);
} else {
LOG(FATAL) << "QuantizedActivationOp only supports act_type=relu for now";
Reviewer (Member): QuantizedActivationOp is not a valid operator name.

Author: Done
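The C++ snippet above encodes the operator's type-inference rule: only relu is accepted, and the output type is assigned int8. A minimal Python sketch of that rule (a hypothetical helper for illustration, not an actual MXNet function):

```python
def quantized_act_infer_type(act_type):
    # Mirrors the C++ snippet above: relu yields an int8 output type;
    # any other act_type is rejected (LOG(FATAL) aborts in the C++ version).
    if act_type != "relu":
        raise RuntimeError(
            "_contrib_quantized_act only supports act_type=relu for now")
    return "int8"
```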

@@ -414,6 +414,57 @@ def check_quantized_flatten(shape, qdtype):
check_quantized_flatten((10, 15, 18), qdtype)
check_quantized_flatten((3, 4, 23, 23), qdtype)

@with_seed()
def test_quantized_act():
Reviewer (Member): Do we have cases for a non-relu activation type?

Author: Negative test cases are not necessary there; the op will prompt that _contrib_quantized_act only supports act_type=relu for now.

Reviewer (Member): Fine. I'm personally okay with having no non-relu cases in the UT. But do you mind running one on your machine and pasting the error log here? I just want to make sure that the error happens at an appropriate place and the message is accurate.

Author: Sure, the error log looks like the one below when changing the activation type from relu to a non-relu type (e.g. sigmoid):
[16:03:13] src/operator/quantization/quantized_activation.cc:54: _contrib_quantized_act only supports act_type=relu for now
Aborted
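The reason relu (unlike sigmoid) is easy to support in the quantized graph is that ReLU commutes with scale-only quantization, so the op can run directly on int8 data. A toy check of that property (illustrative Python, clipping to [-128, 127] omitted for brevity):

```python
def relu(xs):
    # Float ReLU: clamp negatives to zero.
    return [max(0.0, x) for x in xs]

def quantize(xs, scale):
    # Scale-only int8 quantization; range clipping omitted for brevity.
    return [int(round(x / scale)) for x in xs]

xs = [-1.5, -0.25, 0.0, 0.75, 1.25]
scale = 0.01
lhs = quantize(relu(xs), scale)                  # float relu, then quantize
rhs = [max(0, q) for q in quantize(xs, scale)]   # quantize, then integer relu
# Because scale > 0, both orders give the same quantized result.
```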

@pengzhao-intel added the Quantization label Apr 11, 2019
@pengzhao-intel (Contributor)

Please retrigger the CI.

@pengzhao-intel (Contributor)

@TaoLv @ZhennanQin @ciyongch please help review the PR again.

src/operator/nn/mkldnn/mkldnn_act.cc (outdated)
src/operator/nn/mkldnn/mkldnn_act.cc (outdated)
src/operator/nn/mkldnn/mkldnn_act.cc (outdated)
src/operator/quantization/quantize_graph_pass.cc (outdated)
src/operator/quantization/quantized_activation.cc (outdated)
src/operator/quantization/quantized_activation.cc (outdated)
@TaoLv (Member)

Thank you for addressing my comments. Approved now.

@pengzhao-intel (Contributor)

Thanks for your contribution. LGTM

@pengzhao-intel (Contributor)

@huangzhiyuan please rebase the code since I have merged the quantize v2 changes.
Let's see if there is any impact on your change.

I will merge the PR if everything is fine.

@pengzhao-intel (Contributor)

Merging now, thanks for your contribution :)

@pengzhao-intel pengzhao-intel merged commit 0da4b67 into apache:master Apr 18, 2019
@huangzhiyuan huangzhiyuan deleted the quantized-relu branch April 19, 2019 02:51
kedarbellare pushed a commit to kedarbellare/incubator-mxnet that referenced this pull request Apr 20, 2019
* add quantized relu

* fix testcase

* add author and skip quantized-relu for gpu

* fix comments

* retrigger ci

* retrigger ci

* comment fix

* retrigger ci

* retrigger ci
haohuanw pushed a commit to haohuanw/incubator-mxnet that referenced this pull request Jun 23, 2019
Labels: MKLDNN, pr-awaiting-review, Quantization

6 participants