This repository has been archived by the owner on Nov 17, 2023. It is now read-only.

[v1.7.x] cherry pick #17741 to v1.7.x #18113

Merged
1 commit merged on Apr 21, 2020

Conversation

ptrendx
Member

@ptrendx ptrendx commented Apr 20, 2020

  • Vectorized loads for binary elemwise kernel

  • More generalization

  • Add backwardusenone

  • Remove the unused _backward_add op

  • Add vectorized backwardusein

  • Extending vectorization to more binary ops, binary ops with scalar and
    unary ops

  • Handling ElementwiseSum

  • Get rid of half2 in mshadow

  • Remove backward_elemwiseaddex

  • Revert "Remove the unused _backward_add op"

This reverts commit f86da86.

  • Revert "Remove backward_elemwiseaddex"

This reverts commit 7729114.

  • Add back the backward_add since C++ test relies on it

  • Test bcast implementations

  • First version of vectorized bcast

  • Adding single side vectorized bcast kernel

  • Removing debug prints

  • Actually run the single side kernel

  • Move the default implementation of bcast to the vectorized one

  • Limit the new implementation to GPU only

  • Enabling vectorization when broadcast does not actually do broadcast

  • Cleaning

  • Cleaning part 2

  • Fix for numpy ops using stuff from broadcast

  • Fix

  • Fix lint

  • Try to debug pinv numpy test

  • Fix

  • Fix the vectorized broadcast implementation for misaligned input
    pointers

  • Added tests

  • Added docs to cuda_vectorization.cuh

  • Another fix for broadcast and fix INT64 compilation

  • Optimize for aligned=true

  • 1 more addition to test

  • Reverting the change to Numpy op test

  • Trying mcmodel=medium to fix the failure in CMake static build

  • Revert "Trying mcmodel=medium to fix the failure in CMake static build"

This reverts commit 1af684c.

  • Limiting the PR to just elementwise ops
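
The central technique in the commits above, vectorized loads for a binary elementwise kernel with a scalar fallback for misaligned pointers, can be sketched as follows. This is an illustrative sketch only, not MXNet's actual implementation: the kernel and helper names are hypothetical, and the real machinery lives in `cuda_vectorization.cuh` in the PR.

```cuda
#include <cuda_runtime.h>
#include <cstdint>

// Hypothetical sketch of a vectorized binary elementwise add.
// Each thread loads float4 (128-bit) chunks instead of single floats,
// improving effective memory throughput on aligned data.
__global__ void vectorized_add(const float* __restrict__ a,
                               const float* __restrict__ b,
                               float* __restrict__ out, int n) {
  const int vec_n = n / 4;  // number of full float4 chunks
  const int stride = gridDim.x * blockDim.x;
  const int tid = blockIdx.x * blockDim.x + threadIdx.x;

  const float4* a4 = reinterpret_cast<const float4*>(a);
  const float4* b4 = reinterpret_cast<const float4*>(b);
  float4* o4 = reinterpret_cast<float4*>(out);

  for (int i = tid; i < vec_n; i += stride) {
    float4 x = a4[i];
    float4 y = b4[i];
    float4 r;
    r.x = x.x + y.x; r.y = x.y + y.y;
    r.z = x.z + y.z; r.w = x.w + y.w;
    o4[i] = r;
  }
  // Scalar tail for the remaining n % 4 elements.
  for (int i = 4 * vec_n + tid; i < n; i += stride) {
    out[i] = a[i] + b[i];
  }
}

// Host-side dispatch: float4 loads require 16-byte alignment, so a real
// launcher must fall back to a scalar kernel when any pointer is
// misaligned (the "misaligned input pointers" fix in the commit list).
inline bool aligned16(const void* p) {
  return reinterpret_cast<uintptr_t>(p) % 16 == 0;
}
```

Note the design constraint that motivated one of the later fixes: the alignment check has to cover all input and output pointers, since sliced or offset tensors can hand the kernel pointers that are not 16-byte aligned even when the underlying allocation is.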

@mxnet-bot

Hey @ptrendx , Thanks for submitting the PR
All tests are already queued to run once. If tests fail, you can trigger one or more tests again with the following commands:

  • To trigger all jobs: @mxnet-bot run ci [all]
  • To trigger specific jobs: @mxnet-bot run ci [job1, job2]

CI supported jobs: [clang, centos-gpu, website, edge, windows-gpu, centos-cpu, windows-cpu, unix-cpu, sanity, miscellaneous, unix-gpu]


Note:
Only the following 3 categories can trigger CI: PR Author, MXNet Committer, Jenkins Admin.
All CI tests must pass before the PR can be merged.

@ptrendx
Member Author

ptrendx commented Apr 20, 2020

@mxnet-bot run ci [windows-gpu]

@mxnet-bot

Jenkins CI successfully triggered : [windows-gpu]

@ptrendx ptrendx merged commit 7c63f56 into apache:v1.7.x Apr 21, 2020
@ciyongch
Contributor

@ptrendx thanks for backporting the PR to the v1.7.x branch. I suppose the original PR is #18095 in v1.x and #17767 in master, right?

BTW, do you have any other pending PRs that you'd like to be included in the 1.7.0 release?

@ptrendx
Member Author

ptrendx commented Apr 21, 2020

I don't have anything else; I know @samskalicky wanted to include his custom graph pass PR.

@ptrendx
Member Author

ptrendx commented Apr 21, 2020

Just noticed that I made a mistake in the PR name about which PR I'm backporting, my bad :-P

@ciyongch
Contributor

Thanks @ptrendx, just want to make sure we've got all we need in 1.7.0 :)
If the custom graph pass PR refers to #18069, then it's already merged in both v1.x and v1.7.x.

@samskalicky
Contributor

If the custom graph pass PR refers to #18069, then it's already merged in both v1.x and v1.7.x.

Almost done with #17885; as soon as CI passes and I get one more review, I'll backport to 1.x and 1.7.x.

@ChaiBapchya
Contributor

@ptrendx can we please rename the PR since it's incorrect? What should the correct name be?


5 participants