Any known problems with GCP Compute Engine #384
Hi @Jonathan-Eid, is this a GKE cluster? And are you running Kilo in add-on mode, or as the only CNI for the cluster? If it's add-on mode, then this is probably a limitation of the compatibility with the other CNI, and you'll need to run Kilo in full-mesh mode to get full connectivity.
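For reference, a minimal sketch of switching Kilo to full-mesh mode, assuming Kilo runs as a DaemonSet named `kilo` in `kube-system` (your namespace and object names may differ); it appends the `kg` flag `--mesh-granularity=full` to the container args:

```shell
# Sketch: add --mesh-granularity=full to the Kilo DaemonSet's container args.
# Assumes the DaemonSet is called "kilo" in kube-system and kg is container 0.
kubectl -n kube-system patch ds kilo --type=json -p '[
  {"op": "add",
   "path": "/spec/template/spec/containers/0/args/-",
   "value": "--mesh-granularity=full"}
]'
```

With full mesh granularity every node gets its own WireGuard link, rather than one link per location.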
We're not running on GKE; we set up vanilla Kubernetes on fresh GCP instances with Kilo as the main CNI, not in add-on mode. Please let me know if there's any more information I should send.
Here I'm calling tracepath from the master to two different pod IPs.
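For anyone reproducing this, a sketch of the diagnostic above; the pod IP shown is a placeholder, so substitute one from your own cluster:

```shell
# Find a pod IP to test against (the 10.42.x.x address below is a placeholder).
kubectl get pods -A -o wide

# Trace the route from the master to that pod IP without reverse DNS lookups;
# a trace that stalls after the node's own interface suggests traffic is being
# dropped before it reaches the peer, e.g. by a firewall.
tracepath -n 10.42.1.5
```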
Seems like a GCP firewall issue, actually: things started working when I opened up all ports and IP sources. Do you have recommendations on which ports and IP sources I need to open on the masters and workers? I opened up their external and private subnet IPs to each other on the kubelet API port, the Kilo port, and the kube-apiserver port; I wasn't sure what I was missing, honestly.
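A sketch of the GCP firewall rules covering the ports mentioned above, assuming the defaults (WireGuard on UDP 51820, which is Kilo's default, kube-apiserver on TCP 6443, kubelet on TCP 10250); the network name `k8s-net` and the source range `10.128.0.0/20` are placeholders for your VPC and node subnet, and if nodes reach each other over external IPs those addresses need to be in the source ranges too:

```shell
# Allow WireGuard (Kilo's default UDP port) between nodes.
gcloud compute firewall-rules create k8s-wireguard \
  --network k8s-net --allow udp:51820 --source-ranges 10.128.0.0/20

# Allow kube-apiserver (6443) and kubelet API (10250) between nodes.
gcloud compute firewall-rules create k8s-control-plane \
  --network k8s-net --allow tcp:6443,tcp:10250 --source-ranges 10.128.0.0/20
```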
I allowed the
Glad you got it working 💫
I cannot get master-to-worker communication when both nodes are in GCP and set to the same location: fetching logs from worker nodes times out, and the pods can't communicate with the kube-apiserver. I also tried putting the worker nodes in a different subnet and giving them unique locations. When I do the latter, I can connect to the leader of that new subnet, but the same problems arise with any followers. What's going on? External AWS nodes connected to our GCP master work fine.
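When debugging a setup like the one described above, a couple of checks on an affected node can show whether the WireGuard mesh itself is healthy (both commands assume WireGuard tooling is installed and Kilo runs as a DaemonSet named `kilo` in `kube-system`, which may differ in your cluster):

```shell
# Show WireGuard peers and last-handshake times on this node; a peer with no
# recent handshake usually means its endpoint is unreachable (e.g. firewalled).
sudo wg show

# Inspect Kilo's logs for the topology it computed and any peer errors.
kubectl -n kube-system logs ds/kilo
```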