
Kubeadm incorrectly calculating the node CIDR when the given podSubnet is smaller than /24 #2327

Closed
sidharthsurana opened this issue Oct 16, 2020 · 5 comments
Labels: help wanted, kind/bug, priority/backlog
Milestone: v1.20

Comments

@sidharthsurana

Is this a BUG REPORT or FEATURE REQUEST?

BUG REPORT

Versions

kubeadm version (use kubeadm version):
v1.18 and possibly older versions as well
Environment:

  • Kubernetes version (use kubectl version): v1.18
  • Cloud provider or hardware configuration:
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Others:

What happened?

When configuring a pod subnet, a subnet smaller than /24 will cause the kube-controller-manager to enter a CrashLoop due to this error:

F1015 00:49:48.857664       1 node_ipam_controller.go:118] Controller: Invalid --cluster-cidr, mask size of cluster CIDR must be less than --node-cidr-mask-size
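For illustration only (this is a standalone sketch, not the controller-manager source): the constraint behind that fatal error is that the cluster CIDR prefix must not be longer than --node-cidr-mask-size, otherwise not even a single node CIDR can be carved out of the cluster CIDR. With the default IPv4 node mask of /24, a /25 pod subnet trips the check:

package main

import (
	"fmt"
	"net"
)

func main() {
	// A pod subnet smaller than /24, as in the reproduction below.
	_, clusterCIDR, err := net.ParseCIDR("192.0.2.0/25")
	if err != nil {
		panic(err)
	}
	clusterMaskSize, _ := clusterCIDR.Mask.Size() // 25

	// kubeadm hard-codes the IPv4 node CIDR mask to 24 (see calcNodeCidrSize below).
	nodeCIDRMaskSize := 24

	// The node IPAM controller rejects a cluster CIDR whose prefix is longer
	// than the node CIDR mask: there is no room for even one node CIDR.
	if clusterMaskSize > nodeCIDRMaskSize {
		fmt.Printf("invalid: a /%d cluster CIDR cannot hold a /%d node CIDR\n",
			clusterMaskSize, nodeCIDRMaskSize)
	}
}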

This is happening because the logic at https://github.com/kubernetes/kubernetes/blob/9af86e8db8e965d2aec5b8d1762fc7cbab323daa/cmd/kubeadm/app/phases/controlplane/manifests.go#L294-L317, reproduced below,

func calcNodeCidrSize(podSubnet string) (string, bool) {
	maskSize := "24"
	isIPv6 := false
	if ip, podCidr, err := net.ParseCIDR(podSubnet); err == nil {
		if utilsnet.IsIPv6(ip) {
			var nodeCidrSize int
			isIPv6 = true
			podNetSize, totalBits := podCidr.Mask.Size()
			switch {
			case podNetSize == 112:
				// Special case, allows 256 nodes, 256 pods/node
				nodeCidrSize = 120
			case podNetSize < 112:
				// Use multiple of 8 for node CIDR, with 512 to 64K nodes
				nodeCidrSize = totalBits - ((totalBits-podNetSize-1)/8-1)*8
			default:
				// Not enough bits, will fail later, when validate
				nodeCidrSize = podNetSize
			}
			maskSize = strconv.Itoa(nodeCidrSize)
		}
	}
	return maskSize, isIPv6
}

does no real calculation of maskSize in the IPv4 case: it always returns the hard-coded "24", regardless of the podSubnet passed. So any podSubnet smaller than /24 results in the error above.

What you expected to happen?

Kubeadm should set the --node-cidr-mask-size parameter on the controller-manager to match the podSubnet when it is smaller than /24, and keep returning /24, as it does today, for a podSubnet of /24 or larger.
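One possible shape of that expectation, as a sketch only (the name and exact policy here are illustrative, not kubeadm's eventual fix): keep the default /24 when the pod subnet is /24 or larger, and fall back to the pod subnet's own prefix length when it is smaller.

package main

import (
	"fmt"
	"net"
	"strconv"
)

// calcIPv4NodeCidrSize is a hypothetical helper sketching the behaviour
// described above: return "24" unless the pod subnet is smaller than /24,
// in which case reuse the pod subnet's own prefix length so the cluster
// CIDR can still hold at least one node CIDR.
func calcIPv4NodeCidrSize(podSubnet string) string {
	maskSize := 24
	if _, podCidr, err := net.ParseCIDR(podSubnet); err == nil {
		if podNetSize, _ := podCidr.Mask.Size(); podNetSize > maskSize {
			maskSize = podNetSize
		}
	}
	return strconv.Itoa(maskSize)
}

func main() {
	fmt.Println(calcIPv4NodeCidrSize("10.96.0.0/12")) // "24", unchanged behaviour
	fmt.Println(calcIPv4NodeCidrSize("192.0.2.0/25")) // "25", matches the pod subnet
}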

How to reproduce it (as minimally and precisely as possible)?

Simply set the podSubnet to something smaller than /24:

kind: ClusterConfiguration
networking:
  podSubnet: 192.0.2.0/25
@sidharthsurana
Author

kubernetes-sigs/kind#1256 highlights the same issue when the cluster is created via kind

@neolit123
Member

neolit123 commented Oct 19, 2020

@sidharthsurana thanks for logging the ticket. yes, this is a known problem as noted in the kind discussion.

would you be able to send a PR fix for it?

/kind bug
/priority backlog

@k8s-ci-robot k8s-ci-robot added kind/bug Categorizes issue or PR as related to a bug. priority/backlog Higher priority than priority/awaiting-more-evidence. labels Oct 19, 2020
@neolit123 neolit123 added the help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. label Oct 19, 2020
@neolit123 neolit123 added this to the v1.20 milestone Oct 19, 2020
@neolit123
Member

but something to note is that the KCM now has multiple flags for this:
https://kubernetes.io/docs/reference/command-line-tools-reference/kube-controller-manager/

--node-cidr-mask-size int32
    Mask size for node cidr in cluster. Default is 24 for IPv4 and 64 for IPv6.
--node-cidr-mask-size-ipv4 int32
    Mask size for IPv4 node cidr in dual-stack cluster. Default is 24.
--node-cidr-mask-size-ipv6 int32
    Mask size for IPv6 node cidr in dual-stack cluster. Default is 64.

it's not clear whether we should move kubeadm to the explicit vX flags at this point?
cc @Arvinderpal
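(As an editorial aside, not something proposed in this thread: until that is decided, an explicit value can be pinned from the ClusterConfiguration via controllerManager.extraArgs, which kubeadm applies on top of its computed defaults. A workaround sketch, assuming the extraArgs value takes precedence over the hard-coded default:

apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
networking:
  podSubnet: 192.0.2.0/25
controllerManager:
  extraArgs:
    node-cidr-mask-size: "25"
)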

@neolit123
Member

neolit123 commented Oct 19, 2020

actually we are already tracking this here:
#1612

see this in the OP:

don't hardcode the ipv4 mask size to 24

@neolit123
Member

this was fixed in kubernetes/kubernetes@8b52995#diff-d870edeb820f49abf2f52cd357a8a5396c7cd1a36e024d339146e512186c7297

The fix is in 1.20 and newer and cannot be backported.
