Add IPv6 Support #1756
Moving to unassigned and adding "help needed", to indicate to contributors that nobody is working on it yet.
What's needed to get this implemented? We would love to see IPv6 at Scalefastr, as it works really well for our use case. IPv6 is awesome when you have a whole /64 and plenty of IPs to work with on the host machine.
First is testing and finding what doesn't work - there are likely a few places where IPv4 is assumed.
There are two main uses:
- Istio ingress and the egress rules are likely the most important, to allow communication with external IPv6 clients and servers.
- Service-to-service in-mesh communication. AFAIK it's still quite common to use private IPv4; this is also important to move to v6.
Very interesting work.
@pmichali I commented on one of the issues, but wanted to pull it up to the higher level. It is extremely desirable not to make the operator explicitly configure IPv4 vs IPv6. Naively, I'm assuming this is possible, since considerable work went toward making them play nicely in the Linux kernel. So, we should ... etc. There may be some "gotchas" I don't know about that really require some operator config, but the goal should be zero Istio config changes to make this work.
I added the 9 issues to the epic (the epic only shows if you have the ZenHub addon).
@spikecurtis I agree with the goals there. We need to verify that IPv6 addresses will work in IPv4 mode, especially when IPv6 is disabled on the host. For the configurability, I was hoping that it could be done implicitly, possibly by determining the IP mode. I wouldn't want to see knobs, either.
Not all IPv6 setups are configured to also bind to v4 when using v6; for the most complete coverage you often have to listen twice.
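The "listen twice" point comes down to the `IPV6_V6ONLY` socket option: a v6 socket only accepts IPv4 clients (as v4-mapped addresses) if that option is cleared and the OS allows it. A minimal sketch in Python, as an illustration of the mechanism rather than anything Istio-specific (the helper name is made up):

```python
import socket

def make_dual_stack_listener(port):
    """Bind one socket that accepts both IPv6 and (mapped) IPv4 clients.

    Clearing IPV6_V6ONLY asks the kernel to deliver IPv4 connections as
    v4-mapped addresses (::ffff:a.b.c.d) on the same socket. Not every
    stack allows this (e.g. it can be forced off system-wide), which is
    why servers needing complete coverage often listen on v4 and v6
    separately instead.
    """
    s = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
    s.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
    s.bind(("::", port))  # "::" is the IPv6 wildcard address
    s.listen(8)
    return s
```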
I have tried to run Istio in an IPv6-only Kubernetes cluster recently. With some incompatible hard-coded modifications, istio/envoy really works. To implicitly adapt to both IPv4 and IPv6, there first needs to be a way to verify whether the host supports IPv6. However, even if we can detect that the host supports IPv6, it does not mean any services actually run on IPv6. On a system that enables IPv6 support by default but only works on IPv4, we should not automatically configure Istio to run in dual-stack mode. So explicitly configuring IPv4 vs IPv6 may be necessary? @pmichali @spikecurtis
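On the detection point above: a best-effort probe for host IPv6 support can be written in a few lines. This is a sketch of the general idea, not Istio code (the function name is made up):

```python
import socket

def host_has_ipv6():
    """Best-effort check that the running host can actually use IPv6.

    socket.has_ipv6 only says the interpreter was *compiled* with IPv6
    support; binding to the loopback address additionally checks that
    the running kernel has the IPv6 stack enabled. As the comment above
    notes, even a True result doesn't mean services are served over v6.
    """
    if not socket.has_ipv6:
        return False
    try:
        with socket.socket(socket.AF_INET6, socket.SOCK_STREAM) as s:
            s.bind(("::1", 0))  # fails if the kernel has IPv6 disabled
        return True
    except OSError:
        return False
```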
@zmlcc I don't see the harm in Istio running dual stack, where available. Can you elaborate? |
@spikecurtis To run in dual stack, Istio may require extra network capabilities or add extra traffic rules (e.g. #5273), which are not necessary or expected by the operator. For example, consider a dual-stack cluster where the business traffic runs over IPv6 and the management traffic over IPv4. Using Istio to manage microservices on IPv6 is good, but the operator does not want Istio to affect anything about the IPv4 network (especially adding/deleting iptables rules). Meanwhile, since IPv4 and IPv6 services cannot communicate directly, I think a single service mesh will rarely manage dual-stack traffic as a whole (if needed, it can be treated as an IPv4 part and an IPv6 part separately). So in most instances, running Istio in single-stack mode is enough and suitable. I think we should leave the choice of running Istio in IPv4-only / IPv6-only / dual-stack mode to the final operator. Even if Istio could run dual stack by default, we should also keep the ability to manually disable IPv4/IPv6 functionality if it is not needed at all.
As a reminder, in 0.8 IPv6 is explicitly disabled in sidecar capture - we had a security bug with capturing all ports, and didn't have time to add and test the IPv6 fix. It is obviously a priority to get this fixed ASAP and before 1.0.
We also need quite a bit of work to have automated tests for IPv6 endpoints in EDS - we can do a bit using ServiceEntry, but currently the test infra doesn't run IPv6.
IMO the most important issue is IPv6 support in the gateway, and making sure Mixer handles it. This takes care of having external clients on IPv6 (mobile, etc.) while the services and cluster are IPv4. Also having ServiceEntry with IPv6 addresses for external nodes (mesh expansion, external services, etc.).
Having the entire mesh on IPv6 is nice to have - but I think it is less urgent than talking with external IPv6 clients and servers.
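For illustration, a ServiceEntry pointing at an external IPv6 endpoint might look like the sketch below. The host, name, and address are invented (the address is from the IPv6 documentation range); check the networking.istio.io API reference for the fields supported by your Istio version:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: external-ipv6-svc        # hypothetical name
spec:
  hosts:
    - api.example.com            # hypothetical external host
  location: MESH_EXTERNAL        # endpoint lives outside the mesh
  ports:
    - number: 443
      name: https
      protocol: TLS
  resolution: STATIC             # use the listed endpoint addresses as-is
  endpoints:
    - address: "2001:db8::10"    # documentation-range IPv6 address
```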
Hello, is there any place I can find the overall status of IPv6 support? For example, is IPv6 supported in the gateway? What is the status of IPv6 for the entire mesh? Thanks.
Hi, +1 for Zy's question. I think it would be great to have a view of the different aspects needed for IPv6 support in the mesh and in gateways, and the status and priority of each. I also think gateways are top priority, but at some point it should be possible to get fully IPv6-based meshes - I'm thinking of existing applications that are introducing Istio and don't necessarily want to change their L3 protocol because of that. My two cents. Thanks in advance! Stefano
@sbezverk are there any tests that exercise IPv6, ideally with IPv4 disabled, so we can be sure that IPv6 continues to work and doesn't regress?
Does the Istio CNI plugin support IPv6? According to the README.md from CNI, only IPv4 is supported. Thanks.
@howardjohn Are there any other outstanding tasks for IPv6 support? |
@knrc What is missing is testing, which means I can't really answer that confidently 🙂. However, in theory it's supposed to work now, but I am not sure whether that is true today or was ever actually true in the past.
@knrc I still have a k8s/istio pure-IPv6 cluster up and running, but I agree with @howardjohn that testing is missing. Since there was not too much interest in IPv6, attempts to build k8s
@howardjohn @sbezverk Great, I think this is something we will be able to help out with in that case. I'm not sure about creating a k8s IPv6
We have IPv6 testing jobs running in kind; it should not be difficult to run Istio on top. Is this something we can set up? I can help here.
@aojea The problem is not kind, but the underlying node. This is my understanding, correct me if I am wrong:
@aojea Please share steps to bring up a multinode (preferably 3-node) IPv6 k8s cluster in kind.
https://kind.sigs.k8s.io/docs/user/quick-start/#ipv6-clusters |
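Following the kind docs linked above, a multinode IPv6 cluster boils down to setting `ipFamily: ipv6` in the cluster config. A minimal sketch for a 3-node cluster (verify the fields against your kind version):

```yaml
# kind cluster config: one control-plane node and two workers on an
# IPv6-only pod/service network.
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  ipFamily: ipv6
nodes:
  - role: control-plane
  - role: worker
  - role: worker
```

Saved as e.g. `ipv6-cluster.yaml`, this would be used with `kind create cluster --config ipv6-cluster.yaml`.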
Just to clarify: is it possible to receive IPv6 ingress requests (e.g. using the Istio ingress gateway with NodePort on an IPv6-enabled server) if the cluster is in IPv4 mode? In our evaluation cluster all incoming IPv6 requests keep hanging, even though the port is open for IPv6 (as stated by netstat); IPv4 requests are successfully routed as expected.
Steps to get things testing on IPv6:
1. Set up this in our docker environment: https://github.com/kubernetes/test-infra/blob/d96ae1300d79fc134d7ad2b28663f94bbbbb7e3d/images/krte/wrapper.sh#L55-L65. This will most likely require changes to https://github.com/istio/tools/blob/master/docker/build-tools/prow-entrypoint.sh.
2. Set the IPv6 kind config: https://kind.sigs.k8s.io/docs/user/quick-start/#ipv6-clusters.
3. Add a config into https://github.com/istio/istio/tree/master/prow/config and make some minor tweaks to istio/prow/integ-suite-kind.sh.
4. Add a new job to https://github.com/istio/test-infra/blob/master/prow/config/jobs/istio.yaml. I suggest just adding one like the existing ones. After that, things should be running smoothly, if it all goes according to plan?
/assign |
I think this is considered done. We will track improving testing in #23473 |
Is there a timeline for IPv6 in Istio getting to beta status (it's currently marked as alpha here)? Thanks!
@danehans commented on Mon Oct 30 2017
Is this a BUG or FEATURE REQUEST?: FEATURE REQUEST
Did you review existing epics or issues to identify if this is already being worked on?: Yes
Bug: N
Feature Request: Y
Describe the feature:
The Kubernetes community is in the process of adding IPv6 support and kubernetes/kubernetes#1443 is being used to track the effort. IPv6 alpha support is expected in the v1.9 Kubernetes release. As a user, I would like to manage my micro services using Istio running on Kubernetes IPv6 with the same or better capabilities than running on IPv4.