
k8s IPv6 dual stack: Observed a panic: Address is not an IPv4 address #1629

Closed
bingoct opened this issue Aug 5, 2022 · 11 comments
@bingoct commented Aug 5, 2022

I set up Kubernetes 1.23.9 with kubeadm.

I put the IPv6 CIDR before the IPv4 CIDR in the kubeadm config, and set the kube-apiserver advertise address to the host's IPv6 address.

controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    bind-address: "::"
    service-cluster-ip-range: "fd00::1234:5678:1:0/112,10.96.0.0/12"
    cluster-cidr: "fd00::1234:5678:100:0/104,10.244.0.0/16"
    node-cidr-mask-size-ipv6: "120"
    node-cidr-mask-size-ipv4: "24"
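
For reference, the controllerManager snippet above sits inside a kubeadm ClusterConfiguration. A minimal sketch of the full init config for this IPv6-first setup might look like the following (the advertise address is a placeholder; field names are from the kubeadm v1beta3 API, and the CIDRs are the ones from this issue):

```yaml
# Hypothetical kubeadm config sketch (kubeadm.k8s.io/v1beta3).
# advertiseAddress below is a placeholder IPv6 host address.
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: "fd00::1"   # assumption: the host's IPv6 address
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.23.9
networking:
  serviceSubnet: "fd00::1234:5678:1:0/112,10.96.0.0/12"
  podSubnet: "fd00::1234:5678:100:0/104,10.244.0.0/16"
controllerManager:
  extraArgs:
    node-cidr-mask-size-ipv6: "120"
    node-cidr-mask-size-ipv4: "24"
```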

kubectl logs -n kube-flannel kube-flannel-ds-z9ggx -f
E0805 05:20:57.156568 1 runtime.go:76] Observed a panic: Address is not an IPv4 address

Expected Behavior

The kube-flannel pods should run.

Current Behavior

In Kubernetes, the pod crash-loops:
kube-flannel kube-flannel-ds-z9ggx ● 0/1 5 CrashLoopBackOff

On the host, the flannel interface fails to come up:
flannel.1: <BROADCAST,MULTICAST> mtu 1414 qdisc noop state DOWN group default
link/ether f2:57:d9:14:3d:d2 brd ff:ff:ff:ff:ff:ff

Possible Solution

Steps to Reproduce (for bugs)

  1. flannel config
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "SubnetLen": 24,
      "IPv6Network": "fd00::1234:5678:100:0/104",
      "IPv6SubnetLen": 120,
      "Backend": {
        "Type": "vxlan"
      }
    }

Context

If I configure flannel for IPv6 single stack only, it works.

  net-conf.json: |
    {
      "EnableIPv4": false,
      "EnableIPv6": true,
      "IPv6Network": "fd00::1234:5678:100:0/104",
      "IPv6SubnetLen": 120,
      "Backend": {
        "Type": "vxlan"
      }
    }


Your Environment

  • Flannel version: mirrored-flannelcni-flannel:v0.19.0
  • Backend used (e.g. vxlan or udp): vxlan
  • Etcd version:
  • Kubernetes version (if used): 1.23.9
  • Operating System and version: Centos 7
  • Link to your project (optional):
@rbrtbnfgl (Contributor) commented Aug 10, 2022

Could you try ordering the service and pod CIDRs as <IPv4,IPv6>? Do you get the same error?

@wsldankers commented

If I do that (and leave the apiserver-advertise-address set to IPv6) kubeadm is unable to init the cluster.

If I do that and do not override apiserver-advertise-address (so it defaults to IPv4) flannel starts up normally.

kube-flannel.txt

  • Flannel version: mirrored-flannelcni-flannel:v0.19.2
  • Backend used (e.g. vxlan or udp): vxlan
  • Etcd version: 3.5.4-0
  • Kubernetes version (if used): 1.25.1-00
  • Operating System and version: Debian GNU/Linux 11.5 “bullseye”

@rbrtbnfgl (Contributor) commented

Do you want an IPv6-only setup or a dual-stack environment? For dual stack, I think you have to specify both IPs as apiserver-advertise-address.

@rbrtbnfgl (Contributor) commented

It seems to me that this is not flannel related: kubeadm can't contact the kubelet. The kubelet apparently starts IPv4-only by default, so you have to modify its config to make it work with IPv6.

@wsldankers commented Oct 22, 2022

I'm trying to create a dual-stack environment.

Trying to specify both IPs for apiserver-advertise-address results in:

couldn't use "fec0::52:5054:ff:fe64:501f,10.1.82.32" as "apiserver-advertise-address", must be ipv4 or ipv6 address

I started the kubelet with --node-ip=fec0::52:5054:ff:fe64:501f,10.1.82.32 to make it listen on IPv6 and IPv4. Flannel still fails; the error message is still the same.
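
(For a kubeadm-managed node, that same flag can also be set declaratively rather than on the kubelet command line; a hedged sketch, assuming the v1beta3 API and reusing the addresses from this comment:)

```yaml
# Sketch: passing --node-ip to the kubelet via kubeadm (v1beta3).
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
nodeRegistration:
  kubeletExtraArgs:
    node-ip: "fec0::52:5054:ff:fe64:501f,10.1.82.32"
```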

@wsldankers commented

I think this might fix it, but I don't currently have the infrastructure to test it:

diff --git a/subnet/kube/kube.go b/subnet/kube/kube.go
index f7c78c86d2d6f3c29a996cf08fc0c059e0429870..dbad1d252e51a5209a29329c131d454cd9da3175 100644
--- a/subnet/kube/kube.go
+++ b/subnet/kube/kube.go
@@ -430,9 +430,17 @@ func (ksm *kubeSubnetManager) nodeToLease(n v1.Node) (l subnet.Lease, err error)
        }
        l.Attrs.BackendData = json.RawMessage(n.Annotations[ksm.annotations.BackendData])
 
-       _, cidr, err := net.ParseCIDR(n.Spec.PodCIDR)
-       if err != nil {
-           return l, err
+       cidr := new(net.IPNet)
+       log.Infof("Creating the node lease for IPv4. This is the n.Spec.PodCIDRs: %v", n.Spec.PodCIDRs)
+       for _, podCidr := range n.Spec.PodCIDRs {
+           _, parseCidr, err := net.ParseCIDR(podCidr)
+           if err != nil {
+               return l, err
+           }
+           if len(parseCidr.IP) == net.IPv4len {
+               cidr = parseCidr
+               break
+           }
        }
        l.Subnet = ip.FromIPNet(cidr)
        l.EnableIPv4 = ksm.enableIPv4

Basically, in the dual-stack case nodeToLease needs to skip IPv6 CIDRs when looking for IPv4 ranges and vice versa. The IPv6 version (further down in that function) had that covered, but the IPv4 version did not.

@rbrtbnfgl (Contributor) commented

I can investigate it a bit next week.

@rbrtbnfgl (Contributor) commented

I checked the fix and it should resolve your error. Another fix is also needed; I'll prepare a PR for both.

@wsldankers commented

Thanks! Much appreciated!

@wsldankers commented

Successfully deployed a v6-first dual-stack cluster today, using kubeadm 1.25.4 and flannel 0.20.2! @bingoct may want to test it for themselves, but I'm a happy camper. 😁

@rbrtbnfgl many thanks, again!

@rbrtbnfgl (Contributor) commented

Thanks for the feedback. I'll close the issue; if there are any other errors, we'll reopen it.
