
gpu-manager is always scheduled to master node #409

Closed
ryanqin01 opened this issue May 29, 2020 · 2 comments
@ryanqin01

What happened:
I used two GPU machines to deploy a Kubernetes cluster with TKE, one as the master node and one as a worker node. Installing the gpu-manager addon always fails because the gpu-quota-admission pod is always scheduled to the master node. I have to manually edit gpu-manager.yaml and force the gpu-quota-admission pod onto the worker node; a sketch of that kind of change is shown below.
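
A minimal sketch of the kind of edit involved, assuming a plain nodeSelector on the Deployment's pod template (the image tag and the worker label are illustrative assumptions, not values from the shipped manifest):

```yaml
# Sketch only: force gpu-quota-admission onto a worker node with a nodeSelector.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gpu-quota-admission
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: gpu-quota-admission
  template:
    metadata:
      labels:
        app: gpu-quota-admission
    spec:
      nodeSelector:
        node-role.kubernetes.io/worker: ""   # assumed label; use any label present on the worker node
      containers:
        - name: gpu-quota-admission
          image: tkestack/gpu-quota-admission:latest   # placeholder tag
```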

What you expected to happen:

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

  • TKE version: 1.2.4
  • Global or business cluster:
  • Kubernetes version (use kubectl version):
  • Install addons:
  • Others:
@ryanqin01 ryanqin01 added the kind/bug Categorizes issue or PR as related to a bug. label May 29, 2020
@QianChenglong
Contributor

Seems like gpu-quota-admission needs to copy its config to /etc/kubernetes.

[screenshot]

@mYmNeo Maybe you can explain why?
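
If that copy is done via a hostPath mount of /etc/kubernetes (an assumption based on the screenshot above, not a confirmed reading of the shipped manifest), it would explain the pull toward the master node, since that is where the cluster config normally lives. A sketch of that kind of mount:

```yaml
# Sketch only (assumption): a hostPath mount like this means the pod only
# finds usable config on a node where /etc/kubernetes is populated,
# which in a kubeadm-style cluster is typically the master.
apiVersion: v1
kind: Pod
metadata:
  name: gpu-quota-admission-sketch
  namespace: kube-system
spec:
  containers:
    - name: gpu-quota-admission
      image: tkestack/gpu-quota-admission:latest   # placeholder tag
      volumeMounts:
        - name: kubernetes-config
          mountPath: /etc/kubernetes
  volumes:
    - name: kubernetes-config
      hostPath:
        path: /etc/kubernetes
```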

@mYmNeo
Contributor

mYmNeo commented Jun 1, 2020
