
grouping large array tests based on type and updating nightly CI function #17305

Merged
merged 1 commit into apache:master on Jan 16, 2020

Conversation

access2rohit
Contributor

@access2rohit access2rohit commented Jan 14, 2020

Description

Grouped Large Array tests for large tensor support into 3 groups:

  • tensor (creation and manipulation)
  • nn (neural network)
  • basic (arithmetic and logical)

Each group is called separately in the nightly run so memory is freed after every group finishes; a rough sketch of that pattern follows below.
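
A minimal sketch of how the nightly run could invoke each group in its own process so memory is reclaimed between groups (the file path, group names, and nosetests invocation here are assumptions for illustration, not the exact CI change in this PR):

import subprocess

# Hypothetical group entry points; the names follow the renaming
# discussed later in this thread (test_tensor, test_nn, test_basic).
TEST_GROUPS = [
    "tests/nightly/test_large_array.py:test_tensor",
    "tests/nightly/test_large_array.py:test_nn",
    "tests/nightly/test_large_array.py:test_basic",
]

for group in TEST_GROUPS:
    # A fresh interpreter per group guarantees the OS reclaims all
    # NDArray memory before the next group starts.
    subprocess.run(["nosetests", "--verbose", group], check=True)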

Checklist

Essentials

Please feel free to remove inapplicable items for your PR.

  • Changes are complete (i.e. I finished coding on this PR)
  • All changes have test coverage:
  • Code is well-documented:
  • To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change

Testing

Currently running; I will update once the tests finish.

@access2rohit
Contributor Author

access2rohit commented Jan 14, 2020

@mxnet-label-bot add [pr-awaiting-review]

@lanking520 lanking520 added the pr-awaiting-review (PR is waiting for code review) label on Jan 14, 2020
@access2rohit
Contributor Author

@apeforest @ChaiBapchya PR is ready for review

Contributor

@apeforest apeforest left a comment


LGTM. How about we regroup/rename the groups to test_tensor (including array_creation and array_manipulation), test_nn (test_dnn), test_basic (test_arithmetic), so they follow the same convention as in the current src/operator folder?

@access2rohit access2rohit force-pushed the array_grouping branch 2 times, most recently from 7288560 to f7c3ab8 on January 15, 2020 at 00:36
@access2rohit
Contributor Author

LGTM. How about we regroup/rename the groups to test_tensor (including array_creation and array_manipulation), test_nn (test_dnn), test_basic (test_arithmetic), so they follow the same convention as in the current src/operator folder?

Done
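
For readers following the thread, a rough sketch of the regrouped entry points after the rename (the check_* helpers here are illustrative stubs, not verbatim from the diff):

# Illustrative stubs only; the real helpers live in the nightly test file.
def check_array_creation(): ...
def check_array_manipulation(): ...
def check_dnn(): ...
def check_arithmetic(): ...

def test_tensor():  # array creation and manipulation
    check_array_creation()
    check_array_manipulation()

def test_nn():  # neural network operators
    check_dnn()

def test_basic():  # arithmetic and logical operators
    check_arithmetic()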


check_gluon_embedding()
check_fully_connected()
check_dense(ctx=mx.cpu(0))
Contributor


Why hard-code ctx to cpu? Is it too big for GPU?

Contributor Author


My bad, copy-paste error. https://github.com/apache/incubator-mxnet/pull/17305/files#diff-9ee911616af04047075035c95cf542fbR60 already has the default parameter as cpu ctx.

We are not testing on GPU since a tensor of more than 2^32 elements of 4-byte (32-bit) values cannot fit in typical GPU memory:

(2^32 * 4) / 2^30 = 16 GB

unless it's a V100 with 32 GB of global memory.
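
A hedged reconstruction of the default-context pattern referenced above (the body of check_dense is an assumption for illustration, not the actual PR code):

import mxnet as mx

def check_dense(ctx=mx.cpu(0)):
    # Default is CPU: 2**32 float32 elements occupy
    # (2**32 * 4) / 2**30 = 16 GB, more than most GPUs hold.
    data = mx.nd.ones(shape=(2**32 + 1, 1), ctx=ctx)
    dense = mx.gluon.nn.Dense(units=2)
    dense.initialize(ctx=ctx)
    out = dense(data)
    assert out.shape == (2**32 + 1, 2)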

Contributor

@apeforest apeforest left a comment


LGTM

@apeforest apeforest merged commit 7b349dd into apache:master Jan 16, 2020
@ChaiBapchya
Contributor

@access2rohit To prevent any issues down the road, can you please post the results of the tests? Thanks.
@apeforest The PR shouldn't have been merged without the test results.

@apeforest
Contributor

@ChaiBapchya Good call. @access2rohit Please post your local results. Thanks.
