provide support for backend traffic forwarding #333
Force-pushed from b902b2c to 32eea41
If there are implications for docs that need to be covered, can you either submit a PR in kube-vip/website or raise an issue with the details, please?
There are a few implications for docs, which I updated in this PR, but sure, I'll move those to kube-vip/website once this is ready (it's still a WIP).
Force-pushed from cada0d1 to bb5088a
Force-pushed from b613e97 to f239ab4
Force-pushed from 618a74a to 0bac767
Force-pushed from a6dfe28 to 2a04691
Force-pushed from 2a04691 to 74a8619
This all looks great to me; I have but one concern, and I might be completely wrong here 😄. Originally kube-vip came in a … cc @neolit123 @yastij @yaocw2020 @detiber (for comment, if they have any)
K8s-distributed images have moved in the direction of being distroless: preferably there should be no shell or other exploitable bits in the image, only the required tools.
Alpine is considered insecure by some providers.
Hello @thebsdbox, thank you for the review! I've just added a commit. Sometimes CSI init can fail at node-add time (related: https://bugzilla.redhat.com/show_bug.cgi?id=1817382), which causes the node status to be NotReady (so the node is removed from the IPVS LB pool and the k8s API is no longer reachable). I had to separate the add and update logic of the node watcher so that a new node is always added to the pool (regardless of its status) on its initial addition to the cluster. About the base image: since this needs the iptables binary there is not much of an alternative AFAIK; at least, if there is eventually a security issue, the same must be true for flannel (https://github.com/flannel-io/flannel/blob/master/Dockerfile.amd64) and kube-proxy (https://github.com/kubernetes/kubernetes/blob/master/build/common.sh#L93) as well: we could eventually ask them how they cope with that.
Hello @neolit123, flannel relies on Alpine and kube-proxy on Debian. We could move to Debian like kube-proxy if preferable.
I think the Debian image for kube-proxy only contains iptables and other dependencies but no shell or package managers.
We can go with the same base image as kube-proxy; I don't mind, as long as we have iptables it's fine.
FTR, another option is to move to nftables (https://github.com/google/nftables), which has a netlink interface, so we can stay with …
Due to security concerns …
You need to …
This has all been re-implemented with health checks in …
This brings decoupling between the vip's address/port and the backend's (API server) address/port.
Coupling is set up at start using either iptables or IPVS (depending on whether the load balancer is enabled) if the backend address isn't `0.0.0.0` (which is the default value and is backward compatible). It also adds/removes backends as they become ready/not-ready (in addition to the already-implemented in/out).

`lbBackEndPort` looks unused and conflicts/confuses with the new `backendPort` that is used to define the API server port. Maybe better to remove it?

Deprecates `lbPort`, since `port` is used for the load balancer port (and `backendPort` is used for the real (API) servers' port).