dual stack configuration #3175

Closed
ghakfoort opened this issue Jul 21, 2022 · 4 comments

ghakfoort commented Jul 21, 2022

[Screenshot: RKE2 dual-stack documentation page]
As the screenshot shows, the RKE2 documentation describes how to set up a dual-stack cluster. But when you configure your cluster this way, the controller-manager has issues and "kubectl get endpoints -A" only shows IPv4 endpoints.

The official Kubernetes documentation shows how to configure the controller-manager if you need dual stack:
[Screenshot 2022-07-21 at 15:41:34: Kubernetes docs on controller-manager dual-stack flags]
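
For reference, the controller-manager flags that page calls out for dual stack are, in summary (paraphrased here rather than reproduced from the screenshot):

--cluster-cidr=<IPv4 CIDR>,<IPv6 CIDR>
--service-cluster-ip-range=<IPv4 CIDR>,<IPv6 CIDR>
--node-cidr-mask-size-ipv4 / --node-cidr-mask-size-ipv6 (defaults /24 and /64)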

It turns out RKE2 does not add the right flags to the controller-manager:

kube-controller-manager-arg:
  - "service-cluster-ip-range=10.43.0.0/16,fc01::/120"
  - "--node-cidr-mask-size-ipv4=24"
  - "--node-cidr-mask-size-ipv6=120"

The node-cidr-mask-size-ipv4 and node-cidr-mask-size-ipv6 flags are only needed if the configuration differs from the default values. In my case, it does.

To get dual stack working I added the following to my config.yaml:

########
cni: cilium
cluster-cidr: 10.42.0.0/16,fc00::/104
service-cidr: 10.43.0.0/16,fc01::/120
node-ip: 192.168.40.30,REAL-IPv6-ADDRESS
disable-kube-proxy: true

kube-controller-manager-arg:

  • "service-cluster-ip-range=10.43.0.0/16,fc01::/120"
  • "--node-cidr-mask-size-ipv4=24"
  • "--node-cidr-mask-size-ipv6=120"
    ########
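
To spell out why the mask overrides are needed with these CIDRs, here is the sizing arithmetic (a sketch based on the values above, not extra required configuration):

# cluster-cidr fc00::/104 with node-cidr-mask-size-ipv6=120:
#   2^(120-104) = 65536 possible node subnets, each with 2^(128-120) = 256 pod IPs.
#   The kube-controller-manager default mask of /64 cannot fit inside a /104,
#   so the IPv6 mask has to be set explicitly.
# cluster-cidr 10.42.0.0/16 with node-cidr-mask-size-ipv4=24:
#   2^(24-16) = 256 node subnets, each with 2^(32-24) = 256 pod IPs.
#   /24 is already the default, so this flag only makes the choice explicit.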

Maybe this can be added to the documentation on rke2.io, or RKE2 can be changed to recognize dual-stack configuration and add the kube-controller-manager flags automatically.

brandond (Member) commented Jul 21, 2022

I think adding service-cluster-ip-range probably makes sense, although I am curious how we've gotten away without setting it so far; the cluster passes all tests for dual-stack and IPv6-only operation as long as the CNI supports it. I suspect it is not actually required.

I think the larger issue in your case is that you've reduced the CIDR blocks below the default sizes, which requires also reducing the node CIDR masks. Unfortunately, if you reduce the CIDR sizes, RKE2 cannot automatically calculate the proper masks for you. We should probably just link to the upstream docs with a note about configuring appropriate sizes for the node CIDR masks.
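
For contrast, a dual-stack config that keeps the CIDRs large enough for the default node masks needs no kube-controller-manager-arg overrides at all; a sketch using the same CIDR sizes as the validation further down:

cluster-cidr: 10.42.0.0/16,2001:cafe:42::/56
service-cidr: 10.43.0.0/16,2001:cafe:42:1::/112
# A /56 pod CIDR leaves room for the default /64 per-node IPv6 mask,
# so no node-cidr-mask-size flags are required.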

brandond (Member) commented Jul 21, 2022

"kubectl get endpoints -A" only shows ipv4 endpoints.

This is the expected behavior: https://kubernetes.io/docs/concepts/services-networking/dual-stack/#services

The address family of a Service defaults to the address family of the first service cluster IP range (configured via the --service-cluster-ip-range flag to the kube-apiserver).
When you define a Service you can optionally configure it as dual stack. To specify the behavior you want, you set the .spec.ipFamilyPolicy field.

Unless explicitly configured in the Service spec as dual stack, Services will only come up with a single address family even if the pods behind it have dual-stack addresses.
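
For example, a minimal Service manifest that explicitly requests dual stack would look like the sketch below (the name, selector, and port are illustrative, not taken from this issue):

apiVersion: v1
kind: Service
metadata:
  name: my-dual-stack-svc           # hypothetical name
spec:
  ipFamilyPolicy: PreferDualStack   # or RequireDualStack
  ipFamilies:
    - IPv4
    - IPv6
  selector:
    app: my-app                     # hypothetical selector
  ports:
    - port: 8080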

So, other than adding a link to the documentation, I'm not convinced we need to change anything in RKE2.

rancher-max added this to the v1.25.2+rke2r1 milestone Sep 22, 2022
rancher-max (Member) commented Sep 22, 2022

For testing purposes, we should ensure that the service-cluster-ip-range arg is set on the controller-manager and that dual stack continues to work as expected, regardless of CNI.
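
A quick way to confirm the arg is present (a sketch; substitute the actual node name):

$ kubectl describe -n kube-system pod/kube-controller-manager-<NODE NAME> | grep service-cluster-ip-range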

rancher-max self-assigned this Sep 23, 2022
rancher-max (Member) commented

Validated on v1.25.2-rc1+rke2r1, v1.24.6-rc1+rke2r1, v1.23.12-rc1+rke2r1, and v1.22.15-rc1+rke2r1

Environment Details

Infrastructure

  • Cloud (AWS)
  • Hosted

Node(s) CPU architecture, OS, and Version:

$ uname -a && cat /etc/os-release
Linux ip-192-168-24-18 5.15.0-1019-aws #23-Ubuntu SMP Wed Aug 17 18:33:13 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 22.04.1 LTS"
NAME="Ubuntu"
VERSION_ID="22.04"
VERSION="22.04.1 LTS (Jammy Jellyfish)"
VERSION_CODENAME=jammy
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=jammy

Cluster Configuration:

1 server

Config.yaml:

write-kubeconfig-mode: 644
token: "secret"
cluster-cidr: 10.42.0.0/16,2001:cafe:42:0::/56
service-cidr: 10.43.0.0/16,2001:cafe:42:1::/112
cni: cilium
node-ip: <redacted private ipv4>,<redacted ipv6>
node-external-ip: <redacted public ipv4>,<redacted ipv6>

Additional files

N/A

Testing Steps

  1. Copy config.yaml
$ sudo mkdir -p /etc/rancher/rke2 && sudo cp config.yaml /etc/rancher/rke2
  2. Install and start RKE2
$ curl -sfL https://get.rke2.io | sudo INSTALL_RKE2_VERSION=$VERSION sh -
$ sudo systemctl enable rke2-server.service --now
  3. Check arg is present on kube-controller-manager: kubectl describe -n kube-system pod/kube-controller-manager-<NODE NAME>
  4. Smoketest dualstack functionality:
$ kubectl apply -f https://gist.githubusercontent.com/aojea/90768935ab71cb31950b6a13078a7e92/raw/99ceac308f2b2658c7313198a39fbe24b155ae68/dual-stack.yaml
$ kubectl get all -o wide
$ kubectl describe service
# Curl the different IP/Port combinations from the service output, for example:
$ curl [2001:cafe:42:1::a495]:8080
$ curl 10.43.26.134:8080
$ curl 10.43.227.29:8081
# All should have the same output:
<html><body><h1>It works!</h1></body></html>
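
As an additional check (not part of the original validation steps), the allocated address families and cluster IPs can be listed directly:

$ kubectl get svc -o custom-columns=NAME:.metadata.name,FAMILIES:.spec.ipFamilies,IPS:.spec.clusterIPs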

Validation Results:

  • Arg is now present on kube-controller-manager and matches the value entered for the service cidr in config.yaml
$ k describe -n kube-system pod/kube-controller-manager-ip-192-168-24-18
...
Containers:
  kube-controller-manager:
    ...
    Command:
      kube-controller-manager
    Args:
      --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins
      --terminated-pod-gc-threshold=1000
      --permit-port-sharing=true
      --allocate-node-cidrs=true
      --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig
      --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig
      --bind-address=127.0.0.1
      --cluster-cidr=10.42.0.0/16,2001:cafe:42::/56
      --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt
      --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key
      --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt
      --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key
      --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt
      --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key
      --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt
      --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key
      --configure-cloud-routes=false
      --controllers=*,-service,-route,-cloud-node-lifecycle
      --feature-gates=JobTrackingWithFinalizers=true
      --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig
      --profiling=false
      --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt
      --secure-port=10257
      --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key
      --service-cluster-ip-range=10.43.0.0/16,2001:cafe:42:1::/112
      --use-service-account-credentials=true
  • dualstack smoketest is successful
