add export to limit GPU-memory-usage #1553

Merged: 1 commit merged into PaddlePaddle:dygraph on Dec 22, 2020
Conversation

LDOUBLEV (Collaborator)

No description provided.

@LDOUBLEV mentioned this pull request on Dec 22, 2020
@dyning merged commit 0a6e86e into PaddlePaddle:dygraph on Dec 22, 2020
@@ -123,6 +123,7 @@ def create_predictor(args, mode, logger):
         # cache 10 different shapes for mkldnn to avoid memory leak
         config.set_mkldnn_cache_capacity(10)
         config.enable_mkldnn()
+        args.rec_batch_num = 1
Contributor: Why is rec_batch_num set to 1 here instead of the 6 defined above? Could you explain?
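For context on how an override like this takes effect, here is a minimal sketch (not the actual PaddleOCR recognizer code, and `iter_batches` is a hypothetical helper): the recognizer steps through the image list in chunks of `rec_batch_num`, and, as far as I understand, images within a batch are padded to the widest image in that batch, so forcing the batch size to 1 on the MKL-DNN path keeps each inference call to a single image and limits how many distinct input shapes the MKL-DNN cache has to hold.

```python
# Illustrative sketch only; the real batching logic lives in PaddleOCR's
# text recognizer, which consumes args.rec_batch_num as its batch step.
def iter_batches(img_list, rec_batch_num):
    """Yield consecutive slices of at most rec_batch_num images."""
    for beg in range(0, len(img_list), rec_batch_num):
        yield img_list[beg:beg + rec_batch_num]

# With rec_batch_num forced to 1, every batch holds a single image, so no
# image is padded up to the widest member of a group; fewer distinct input
# shapes reach the predictor and the mkldnn cache (capacity 10 above) is
# less likely to grow or thrash.
for batch in iter_batches(["img_a", "img_b", "img_c"], rec_batch_num=1):
    print(batch)
```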

@@ -33,7 +33,7 @@ def str2bool(v):
     parser.add_argument("--ir_optim", type=str2bool, default=True)
     parser.add_argument("--use_tensorrt", type=str2bool, default=False)
     parser.add_argument("--use_fp16", type=str2bool, default=False)
-    parser.add_argument("--gpu_mem", type=int, default=8000)
+    parser.add_argument("--gpu_mem", type=int, default=500)
Contributor: Wouldn't it be better to set gpu_mem to 512 here?
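For reference, a minimal sketch of how a `--gpu_mem` flag like this is typically wired into Paddle Inference (the model/params paths below are placeholders, and the surrounding `create_predictor` code is abbreviated): the value is passed to `Config.enable_use_gpu` as the initial memory pool size in MB. Since the pool grows on demand and the value does not need to be a power of two, 500 vs 512 should make no practical difference; the point of the change is to avoid reserving ~8 GB up front with the old default of 8000.

```python
from paddle.inference import Config, create_predictor

gpu_mem = 500   # MB; matches the new default above
gpu_id = 0

# Placeholder model files for illustration only.
config = Config("inference.pdmodel", "inference.pdiparams")

# enable_use_gpu(memory_pool_init_size_mb, device_id): sets the *initial*
# GPU memory pool size; Paddle allocates more as needed, so a small default
# only reduces the up-front reservation, not the model's eventual usage.
config.enable_use_gpu(gpu_mem, gpu_id)

predictor = create_predictor(config)
```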

3 participants