dual stack configuration #3175
Comments
I think the larger issue in your case is that you've reduced the CIDR blocks below the default sizes, which also requires reducing the node CIDR masks. Unfortunately, if you reduce the CIDR sizes, RKE2 cannot automatically calculate the proper masks for you. We should probably just link to the upstream docs with a note about configuring appropriate sizes for the node CIDR masks.
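As a minimal sketch of what "configuring appropriate sizes" could look like with the reduced CIDRs from this issue (the mask values here are illustrative assumptions, not defaults documented by RKE2), the masks can be passed explicitly so the controller-manager does not fall back to sizes that no longer fit:

```yaml
# Illustrative config.yaml fragment; the node CIDR masks must be
# more specific than the corresponding cluster-cidr prefixes
# (e.g. /120 fits inside fc00::/104).
cluster-cidr: 10.42.0.0/16,fc00::/104
service-cidr: 10.43.0.0/16,fc01::/120
kube-controller-manager-arg:
  - node-cidr-mask-size-ipv4=24
  - node-cidr-mask-size-ipv6=120
```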
This is the expected behavior: https://kubernetes.io/docs/concepts/services-networking/dual-stack/#services
Unless explicitly configured in the Service spec as dual stack, Services will only come up with a single address family even if the pods behind it have dual-stack addresses. So, other than adding a link to the documentation, I'm not convinced we need to change anything in RKE2.
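For concreteness, a Service explicitly requesting dual stack per the upstream docs linked above would look roughly like this (the names here are hypothetical placeholders):

```yaml
# Minimal sketch of a dual-stack Service spec.
apiVersion: v1
kind: Service
metadata:
  name: my-service        # hypothetical name
spec:
  ipFamilyPolicy: RequireDualStack   # or PreferDualStack
  ipFamilies:
    - IPv4
    - IPv6
  selector:
    app: my-app           # hypothetical selector
  ports:
    - port: 80
```

Without `ipFamilyPolicy` set, the Service defaults to SingleStack and only gets an address from the first configured family.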
For testing purposes, we should ensure that the service-cluster-ip-range arg is set on the controller-manager and that dual stack continues to work as expected, regardless of CNI.
Validated on v1.25.2-rc1+rke2r1, v1.24.6-rc1+rke2r1, v1.23.12-rc1+rke2r1, and v1.22.15-rc1+rke2r1.
Environment Details
Infrastructure:
Node(s) CPU architecture, OS, and Version:
Cluster Configuration: 1 server
Config.yaml:
Additional files: N/A
Testing Steps
Validation Results:
As the screenshot shows, the RKE2 documentation describes how to set up a dual-stack cluster. But when you configure your cluster this way, the controller-manager has issues and `kubectl get endpoints -A` only shows IPv4 endpoints.
The official Kubernetes documentation shows how to configure the controller-manager if you need dual stack:

It turns out rke2 does not add the right flags to the controller-manager:
kube-controller-manager-arg:
The node-cidr-mask-size-ipv4 and node-cidr-mask-size-ipv6 args are only needed if the configuration differs from the default values. In my case, it does.
To get dual stack working I added the following to my config.yaml:
########
cni: cilium
cluster-cidr: 10.42.0.0/16,fc00::/104
service-cidr: 10.43.0.0/16,fc01::/120
node-ip: 192.168.40.30,REAL-IPv6-ADDRESS
disable-kube-proxy: true
kube-controller-manager-arg:
########
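The kube-controller-manager-arg list above lost its entries in formatting. For reference, a sketch of what it might contain is below; the exact argument values are assumptions derived from the service-cidr and cluster-cidr values in this config, not a verbatim copy of the original:

```yaml
# Assumed expansion of the config above (argument values are illustrative).
cni: cilium
cluster-cidr: 10.42.0.0/16,fc00::/104
service-cidr: 10.43.0.0/16,fc01::/120
node-ip: 192.168.40.30,REAL-IPv6-ADDRESS
disable-kube-proxy: true
kube-controller-manager-arg:
  - service-cluster-ip-range=10.43.0.0/16,fc01::/120
  - node-cidr-mask-size-ipv4=24
  - node-cidr-mask-size-ipv6=120
```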
Maybe this can be added to the documentation on rke2.io, or rke2 could be changed to recognize dual stack and add the kube-controller-manager flags automatically.