This repository has been archived by the owner on Nov 17, 2023. It is now read-only.

[MXNET-729] Scala Examples memory leak fix #12232

Merged
merged 13 commits into from
Aug 23, 2018

Conversation

@lanking520 lanking520 (Member) commented Aug 17, 2018

Description

Currently the Scala integration test runs are strongly affected by CUDA running out of memory. To address that issue, this PR gives the NDArrayCollector created by @yzhliu a test run.
@andrewfayres @nswamy

Related CI failures:
https://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/incubator-mxnet/detail/PR-11753/9/pipeline

https://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/incubator-mxnet/detail/PR-11753/10/pipeline
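For context, the pattern applied throughout this PR is a single NDArrayCollector.auto().withScope block around each example run, so every NDArray allocated while the block executes is disposed when the block exits instead of waiting for the JVM GC. A minimal sketch of that pattern (runExample is a hypothetical stand-in for calls such as GanMnist.runTraining or LstmBucketing.runTraining, not an actual API):

import org.apache.mxnet.{Context, NDArrayCollector}

object CollectorScopeSketch {
  // Hypothetical stand-in for one of the example entry points
  // (e.g. GanMnist.runTraining or LstmBucketing.runTraining).
  def runExample(ctx: Context): Float = 0.95f

  def main(args: Array[String]): Unit = {
    val ctx = Context.cpu() // Context.gpu() on the GPU CI runners

    // One enclosing scope around the whole run: NDArrays created inside
    // the block are tracked by the collector and disposed when it exits.
    val accuracy: Float = NDArrayCollector.auto().withScope {
      runExample(ctx)
    }
    println(s"accuracy = $accuracy")
  }
}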

Checklist

Essentials

Please feel free to remove inapplicable items for your PR.

  • The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to the relevant JIRA issue created (except PRs with tiny changes)
  • Changes are complete (i.e. I finished coding on this PR)
  • All changes have test coverage:
  • Unit tests are added for small changes to verify correctness (e.g. adding a new operator)
  • Nightly tests are added for complicated/long-running ones (e.g. changing distributed kvstore)
  • Build tests will be added for build configuration changes (e.g. adding a new build option with NCCL)
  • Code is well-documented:
  • For user-facing API changes, API doc string has been updated.
  • For new C++ functions in header files, their functionalities and arguments are documented.
  • For new examples, README.md is added to explain what the example does, the source of the dataset, expected performance on the test set, and a reference to the original paper if applicable
  • Check the API doc at http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
  • To my best knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change

@lanking520 lanking520 changed the title [MXNET-729][WIP] Scala Examples memory leak fix [MXNET-729] Scala Examples memory leak fix Aug 20, 2018
@@ -50,26 +49,32 @@ class ExampleRNNSuite extends FunSuite with BeforeAndAfterAll {
       System.getenv("SCALA_TEST_ON_GPU").toInt == 1) {
       ctx = Context.gpu()
     }
-    LstmBucketing.runTraining(tempDirPath + "/RNN/sherlockholmes.train.txt",
+    NDArrayCollector.auto().withScope {
+      LstmBucketing.runTraining(tempDirPath + "/RNN/sherlockholmes.train.txt",
Member
It is better to have the collector inside runTraining, like what you do in GanMnist.scala. Otherwise, when the number of loop iterations increases, too many NDArrays will be stored temporarily in the collector here.
But if you just mean to get rid of the memory leak in CI, then this is fine.
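
Roughly, the suggestion is to move the scope inside the example's epoch loop rather than around the whole call, so the collector is drained after every epoch instead of accumulating NDArrays for the entire run. A sketch under that assumption (trainOneEpoch and the loop are illustrative placeholders, not the actual GanMnist or LstmBucketing code):

import org.apache.mxnet.NDArrayCollector

object PerEpochScopeSketch {
  // Illustrative placeholder for the per-epoch work of an example.
  def trainOneEpoch(epoch: Int): Float = 0f

  def runTraining(numEpochs: Int): Float = {
    var lastAcc = 0f
    for (epoch <- 0 until numEpochs) {
      // Per-epoch scope: NDArrays allocated in this epoch are disposed as
      // soon as it finishes, so the collector never holds the whole run.
      lastAcc = NDArrayCollector.auto().withScope {
        trainOneEpoch(epoch)
      }
    }
    lastAcc
  }
}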

Member Author

As you said, this piece of code is only meant to mitigate the memory leak issues in the CI.

Member

@lanking520 Can you please make the change that Yizhi is asking for? Let's do it right, because this will become a pattern in other parts of the code.

Member Author

@nswamy It depends on the purpose of the change:

  1. To improve CI, wrapping from the outside is the right action.
  2. To improve the model itself, the change needs to be made inside.

@@ -44,7 +44,9 @@ class GanExampleSuite extends FunSuite with BeforeAndAfterAll{

val context = Context.gpu()

-      val output = GanMnist.runTraining(modelDirPath, context, modelDirPath, 5)
+      val output = NDArrayCollector.auto().withScope {
Member

I don't think you need NDArrayCollectors here. It should be sufficient to just have one inside the training loop (for each epoch).

@lanking520 lanking520 (Member Author) commented Aug 22, 2018

I only added one big enclosing {} scope in every example; I did not change anything inside.

@nswamy nswamy merged commit 2f177d8 into apache:master Aug 23, 2018
XinYao1994 pushed a commit to XinYao1994/incubator-mxnet that referenced this pull request Aug 29, 2018
[MXNET-729] Scala Examples memory leak fix (apache#12232)

* initial fix for RNN

* add CI test

* ignore the test due to memory leaks

* release the GAN beast

* enable rnn

* add collector and dispose

* revert the hacky thing after rebase

* rename with inference

* add collector in some examples

* add experimental tag and comments

* change the scope of the NDArrayCollector

* apply final changes...

* fix scalastyle
anirudh2290 pushed a commit to anirudh2290/mxnet that referenced this pull request Sep 19, 2018
[MXNET-729] Scala Examples memory leak fix (apache#12232)
@lanking520 lanking520 deleted the example-memory branch September 19, 2018 23:00
3 participants