This repository has been archived by the owner on Nov 17, 2023. It is now read-only.

Mixed precision binary op backward (use in) for numpy #16791

Merged
merged 2 commits into apache:master on Nov 20, 2019

Conversation

haojin2
Contributor

@haojin2 haojin2 commented Nov 12, 2019

Description

As the title states.
Implemented by casting the lhs or rhs values to a common type and then falling back to the existing same-type implementations.
Not implemented for the case where both inputs are integers, since backward is not meaningful there.
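
A minimal sketch of the cast-then-fallback idea, written in plain NumPy with illustrative names (the actual implementation lives in MXNet's C++ backend; `multiply_backward_use_in` is a hypothetical helper, not the PR's code):

```python
import numpy as np

def multiply_backward_use_in(ograd, lhs, rhs):
    """Hypothetical sketch: backward of out = lhs * rhs with mixed dtypes."""
    if np.issubdtype(lhs.dtype, np.integer) and np.issubdtype(rhs.dtype, np.integer):
        # Matches the PR: no backward when both inputs are integers.
        raise TypeError("backward is not meaningful for two integer inputs")
    # Cast both sides to a common type, then reuse the same-type gradient
    # formulas: d(out)/d(lhs) = ograd * rhs, d(out)/d(rhs) = ograd * lhs.
    common = np.promote_types(lhs.dtype, rhs.dtype)
    lhs_c = lhs.astype(common)
    rhs_c = rhs.astype(common)
    lhs_grad = (ograd.astype(common) * rhs_c).astype(lhs.dtype)
    rhs_grad = (ograd.astype(common) * lhs_c).astype(rhs.dtype)
    return lhs_grad, rhs_grad
```

Note that each gradient is cast back to its input's own dtype, so callers see gradients that match their arrays.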

Checklist

Essentials

Please feel free to remove inapplicable items for your PR.

  • The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to the relevant JIRA issue created (except PRs with tiny changes)
  • Changes are complete (i.e. I finished coding on this PR)
  • All changes have test coverage:
  • Unit tests are added for small changes to verify correctness (e.g. adding a new operator)
  • Nightly tests are added for complicated/long-running ones (e.g. changing distributed kvstore)
  • Build tests will be added for build configuration changes (e.g. adding a new build option with NCCL)
  • Code is well-documented:
  • For user-facing API changes, API doc string has been updated.
  • For new C++ functions in header files, their functionalities and arguments are documented.
  • For new examples, README.md is added to explain what the example does, the source of the dataset, expected performance on the test set, and a reference to the original paper if applicable
  • Check the API doc at https://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
  • To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change

Changes

  • BackwardUseIn for the multiply op
  • Unit test coverage (a simplified sketch follows this list)
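
A simplified check in the spirit of the added tests, assuming the post-merge mxnet.numpy API (the array values and dtypes here are illustrative, not taken from the PR's test file):

```python
import mxnet as mx
from mxnet import autograd, npx
npx.set_np()  # enable NumPy-compatible semantics

lhs = mx.np.array([1.0, 2.0, 3.0], dtype='float32')
rhs = mx.np.array([4.0, 5.0, 6.0], dtype='float16')
lhs.attach_grad()
rhs.attach_grad()

with autograd.record():
    out = lhs * rhs  # mixed-precision multiply; this PR enables its backward
out.backward()

# Each gradient comes back in its input's own dtype.
assert lhs.grad.dtype == lhs.dtype
assert rhs.grad.dtype == rhs.dtype
```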

Comments

Limited support for the sake of d2l; benchmark results are yet to come.
More to come for UseNone in the future.

@haojin2 haojin2 added the Numpy label Nov 12, 2019
@haojin2 haojin2 self-assigned this Nov 12, 2019
@haojin2 haojin2 added this to In progress in numpy via automation Nov 12, 2019
@haojin2 haojin2 added the R1.6.0 label Nov 12, 2019
numpy automation moved this from In progress to Reviewer approved Nov 14, 2019
@reminisce reminisce merged commit 7c9cb6b into apache:master Nov 20, 2019
numpy automation moved this from Reviewer approved to Done Nov 20, 2019
ptrendx pushed a commit to ptrendx/mxnet that referenced this pull request Nov 20, 2019
* mixed precison binary op backward

* reduce unix cpu runtime
ptrendx added a commit to ptrendx/mxnet that referenced this pull request Nov 21, 2019
ptrendx added a commit that referenced this pull request Nov 22, 2019
* Add unoptimized symbol to executor for sharing (#16798)

* Add unoptimized symbol to executor for sharing

* Copy the symbol in Reshape

* Added test for multiple reshapes

* Mixed precison binary op backward (use in) for numpy (#16791)

* mixed precison binary op backward

* reduce unix cpu runtime

* USE_NVRTC -> ENABLE_CUDA_RTC to fix maven build.  Add compile-guard to fusion. (#16838)

* Rename USE_NVRTC -> ENABLE_CUDA_RTC to fix maven build.  Compile-guard fusion framework.

* Fix fusion-not-supported warning.

* Fix compile guards

* Fix cmake build so -DMXNET_ENABLE_CUDA_RTC=1 is passed to nvcc

* Minimize side-effects of prev change

* Fix InferAttr/InferShapeAttr not calling inference for all nodes in a graph (#16836)

* Fix the attribute inference omitting nodes

* Add test

* Cleaning

* Fix lint

* Fix TransposeShape

* Fix WhileLoopType

* Changing a/b test for fusion to a/(b+1) to increase numerical stability

* Revert "Mixed precison binary op backward (use in) for numpy (#16791)"

This reverts commit 8b58b78.
ptrendx pushed a commit to ptrendx/mxnet that referenced this pull request Nov 25, 2019
* mixed precison binary op backward

* reduce unix cpu runtime
ptrendx added a commit that referenced this pull request Nov 26, 2019
* refactor and reduce float types for some functions, also add bitwise_xor (#16827)

* Mixed precison binary op backward (use in) for numpy (#16791)

* mixed precison binary op backward

* reduce unix cpu runtime

* Add evaluation_loss to the estimator base class. (#16888)

* Add evaluation_loss to the estimator base class.

* Update the base estimator class to support the separate evaluation loss.

* Add evaluation loss to the base estimator class.

* Add unittest for evaluation loss in the test_evaluation function

* Update estimator.py

* Update estimator.py