IPv6 cluster bootstrap failure #219
KUBERNETES_SERVICE_HOST is set by Kubernetes itself, to the address of the kubernetes service; see the upstream docs on service discovery environment variables for more information. Are you trying to use the helm controller to deploy cilium on k3s? Can you provide the actual content of the HelmChart resource that you are deploying?
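(For context, Kubernetes injects these variables into every pod; a hedged illustration using values from a default k3s service CIDR, not from this particular cluster:)

```
KUBERNETES_SERVICE_HOST=10.43.0.1   # ClusterIP of the kubernetes service (example value)
KUBERNETES_SERVICE_PORT=443
```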
Thank you @brandond.
I don't see the helm controller picking the correct address, then. KUBERNETES_SERVICE_HOST is somehow auto-filled, it's always 127.0.0.1, and that's why I was wondering if this value is hard-coded. Yes, I am trying to use the helm controller to deploy cilium. The chart is here.
The `kubectl get -o yaml` output of the pod that is stuck in CrashLoopBackOff:
For bootstrap charts, see helm-controller/pkg/controllers/chart/chart.go, lines 556 to 557 (at f9103f6).
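(The referenced snippet is not reproduced here; roughly, for bootstrap charts the install job is pointed straight at the local apiserver rather than at the in-cluster service, approximately equivalent to injecting something like the following into the job pod — the values shown are assumptions, not the actual source:)

```yaml
env:
  - name: KUBERNETES_SERVICE_HOST
    value: "127.0.0.1"   # IPv4 loopback, the value at the center of this issue
  - name: KUBERNETES_SERVICE_PORT
    value: "6443"        # assumed local apiserver port on k3s/rke2
```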
I know that this works on ipv6-only nodes, since the apiserver binds to a dual-stack wildcard address that is accessible via the IPv4 loopback even when the node does not have a valid IPv4 interface address. Does your node for some reason not have an IPv4 loopback configured, or have you done something else to modify the apiserver configuration? Are you using the helm controller as part of k3s, rke2, or on some other cluster type? The helm controller is almost exclusively used with k3s and rke2; if you are running it standalone on some other cluster, the assumptions it makes about apiserver availability when installing bootstrap charts may not be valid.
No, I was asking for the yaml of the HelmChart resource that you are using to deploy cilium.

```yaml
- name: BOOTSTRAP
  value: "true"
```
I do have both IPv4 and IPv6 addresses on the node. As mentioned earlier, dual-stack works with the {ipv4},{ipv6} combination. My master node status is below. There is also a smaller problem I am facing with the agent; I don't think it is related to this issue, but I would love to hear some suggestions:
the listener on the agent is IPv4-only instead of IPv4/IPv6 :( I believe it should be the same as on the server.
My agent config.yaml (cat /etc/rancher/k3s/config.yaml) is below.
@brandond thank you so much for all the inputs. I think I figured out what the issue was with the agent: the IPv6 address in the server URL was missing []. After adding them, it's working.
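(For anyone hitting the same symptom, the bracket fix looks roughly like this in the agent's /etc/rancher/k3s/config.yaml; the address and token are placeholders:)

```yaml
# IPv6 literals in URLs must be wrapped in brackets.
# Wrong:  server: https://2001:db8::10:6443
# Right:
server: https://[2001:db8::10]:6443
token: <node-token>
```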
Can someone please explain how the KUBERNETES_SERVICE_HOST value works? When I try to bootstrap an IPv6 cluster, I get the error below.
I believe it should be trying to listen on ::1.
It seems like KUBERNETES_SERVICE_HOST is hard-coded here? Is there a way I can tell the helm controller to use ::1 instead of 127.0.0.1?
BTW, it works when I try a dual-stack cluster with the {ipv4},{ipv6} combination, but not the other way around.
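(For reference, the ordering being described corresponds to the k3s server's cluster-cidr/service-cidr settings; a hedged sketch with placeholder prefixes:)

```yaml
# Dual-stack with IPv4 listed first (the combination reported to work):
cluster-cidr: 10.42.0.0/16,2001:cafe:42::/56
service-cidr: 10.43.0.0/16,2001:cafe:43::/112

# IPv6 listed first, the ordering that fails to bootstrap here:
# cluster-cidr: 2001:cafe:42::/56,10.42.0.0/16
# service-cidr: 2001:cafe:43::/112,10.43.0.0/16
```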