This repository has been archived by the owner on Nov 17, 2023. It is now read-only.

[MXNET-1410]Adding Large Tensor Support for tensor transpose #15059

Merged — 1 commit merged into apache:master from lts_transpose on Jun 2, 2019

Conversation

access2rohit
Contributor

@access2rohit access2rohit commented May 24, 2019

Description

Adds tests for tensor transpose
Fixes the flip operator (ReverseIndex now returns index_t instead of int)
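A minimal sketch of the kind of test this PR adds. The `LARGE_X`/`SMALL_Y` names follow MXNet's nightly large-tensor tests, but the values here are scaled down so the sketch runs quickly, and NumPy stands in for `mxnet.ndarray`; the fixture layout (row i holds the value i + 1) is an assumption inferred from the assertions quoted later in this review:

```python
import numpy as np

# Scaled-down stand-ins for the nightly test's LARGE_X / SMALL_Y constants;
# the real large-tensor test uses shapes past the 32-bit index limit.
LARGE_X, SMALL_Y = 1000, 50

# Hypothetical fixture: row i holds the value i + 1, so the last row is LARGE_X.
b = np.tile(np.arange(1, LARGE_X + 1).reshape(LARGE_X, 1), (1, SMALL_Y))

t = b.T
assert t.shape == (SMALL_Y, LARGE_X)
# The last column of the transpose is b's last row: all values equal LARGE_X.
assert np.sum(t[:, -1] == LARGE_X) == b.shape[1]
```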

Checklist

Essentials

Please feel free to remove inapplicable items for your PR.

  • The PR title starts with [MXNET-1410], where MXNET-1410 refers to the relevant JIRA issue created (except PRs with tiny changes)
  • Changes are complete (i.e. I finished coding on this PR)
  • All changes have test coverage:
  • Unit tests are added for small changes to verify correctness (e.g. adding a new operator)
  • Nightly tests are added for complicated/long-running ones (e.g. changing distributed kvstore)
  • Build tests will be added for build configuration changes (e.g. adding a new build option with NCCL)
  • Code is well-documented:
  • For user-facing API changes, API doc string has been updated.
  • For new C++ functions in header files, their functionalities and arguments are documented.
  • For new examples, README.md is added to explain what the example does, the source of the dataset, expected performance on the test set, and a reference to the original paper if applicable
  • Check the API doc at http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
  • To my best knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change

@access2rohit
Contributor Author

access2rohit commented May 24, 2019

@mxnet-label-bot add [pr-work-in-progress]

@marcoabreu marcoabreu added pr-awaiting-review PR is waiting for code review pr-work-in-progress PR is still work in progress labels May 24, 2019
@access2rohit access2rohit changed the title [MXNET-1410]Adding Large Tensor Support for tensor transpose [WIP][MXNET-1410]Adding Large Tensor Support for tensor transpose May 24, 2019
@access2rohit access2rohit force-pushed the lts_transpose branch 2 times, most recently from 1976567 to cc20e34 on May 24, 2019 10:24
@access2rohit access2rohit changed the title [WIP][MXNET-1410]Adding Large Tensor Support for tensor transpose [MXNET-1410]Adding Large Tensor Support for tensor transpose May 24, 2019
Contributor

@samskalicky samskalicky left a comment

LGTM

```diff
@@ -1950,7 +1950,7 @@ struct ReverseParam : public dmlc::Parameter<ReverseParam> {
 #define REVERSE_MAX_DIM 10U
 
 struct reverse {
-  MSHADOW_XINLINE static int ReverseIndex(index_t idx,
+  MSHADOW_XINLINE static index_t ReverseIndex(index_t idx,
                                           index_t nreversedim,
```
Contributor

nit: indent

Contributor Author

My bad, will fix it.
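The one-character change above (`int` → `index_t`) is the heart of the fix: a flat index into a tensor with more than 2^31 − 1 elements no longer fits in a signed 32-bit integer. A pure-Python sketch of the wraparound a 32-bit return value would produce (no MXNet required; the constants are illustrative):

```python
INT32_MAX = 2**31 - 1          # 2_147_483_647
large_idx = 3_000_000_000      # a valid flat index into a large tensor

# Two's-complement wraparound of a 32-bit signed integer.
wrapped = (large_idx + 2**31) % 2**32 - 2**31

assert large_idx > INT32_MAX
assert wrapped == -1_294_967_296   # negative, i.e. an invalid index
```

Returning `index_t` (a 64-bit type in MXNet's large-tensor builds) avoids this wraparound.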

```python
t = b.T
assert t.shape == (SMALL_Y, LARGE_X)
assert np.sum(t[:, -1].asnumpy() == LARGE_X) == b.shape[1]
t = nd.swapaxes(b, dim1=0, dim2=1)
```
Contributor

Test transpose instead?

Contributor Author

These are 3 different transpose operations; all three are tested.

Contributor

I thought transpose is an operator name? If you are testing swapaxes, maybe change the test name as well?

Contributor Author

flip(), swapaxes() and b.T are all transpose operations.
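The three variants under discussion can be sketched side by side. This is a NumPy stand-in for the MXNet `nd` calls in the test (a small toy shape, not the PR's large-tensor constants); note that `flip` reverses order along an axis rather than permuting axes, which is what the reviewer is questioning:

```python
import numpy as np

b = np.tile(np.arange(1, 11).reshape(10, 1), (1, 4))  # small 10x4 stand-in

t1 = b.T                   # the .T property
t2 = np.swapaxes(b, 0, 1)  # equivalent to nd.swapaxes(b, dim1=0, dim2=1)
t3 = np.flip(b, axis=0)    # reverses rows; shape is unchanged

assert t1.shape == t2.shape == (4, 10)
assert np.array_equal(t1, t2)      # .T and swapaxes agree on 2D input
assert t3.shape == b.shape
assert (t3[0] == 10).all()         # first row after flip is b's last row
```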

```python
t = nd.swapaxes(b, dim1=0, dim2=1)
assert t.shape == (SMALL_Y, LARGE_X)
assert np.sum(t[:, -1].asnumpy() == LARGE_X) == b.shape[1]
t = nd.flip(b, axis=0)
```
Contributor

Test this operator separately?

Contributor Author

Same as above.

Contributor

Try testing each operator individually to avoid collateral failures.

@access2rohit access2rohit force-pushed the lts_transpose branch 3 times, most recently from d5443f0 to e678461 on May 29, 2019 18:16
```diff
@@ -279,6 +279,33 @@ def test_diag():
     assert_almost_equal(r.asnumpy(), np.diag(a_np, k=k))
 
 
+def create_large_tensor():
```
Contributor

Can we make this method more generic, so the user can pass in values to create tensors with different dimensions? Also, if it is limited to a 2D tensor, maybe you want to rename the method too.

Contributor Author

This is currently generic for 2D tensors. It can be used for other tests; I will update that in a separate PR. Apart from this, do you think it's good to go?
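A parameterized helper along the lines the reviewer asks for might look like this. The name `create_2d_tensor` and its signature are hypothetical (the PR's actual helper is `create_large_tensor`, whose body is not shown here); the row-value pattern matches the assertions quoted above:

```python
import numpy as np

def create_2d_tensor(rows, columns, dtype=np.int64):
    """Hypothetical generic helper: build a rows x columns tensor
    whose row i holds the value i + 1."""
    return np.tile(np.arange(1, rows + 1, dtype=dtype).reshape(rows, 1),
                   (1, columns))

b = create_2d_tensor(8, 3)
assert b.shape == (8, 3)
assert (b[-1] == 8).all()   # last row carries the row count
```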

Contributor

@apeforest apeforest left a comment

LGTM

@apeforest apeforest merged commit 99e69e6 into apache:master Jun 2, 2019
@roywei
Member

roywei commented Jun 3, 2019
