Kubeadm incorrectly calculates the node CIDR in cases where the given podSubnet is smaller than /24 #2327
Labels: help wanted, kind/bug, priority/backlog
Is this a BUG REPORT or FEATURE REQUEST?
BUG REPORT
Versions
kubeadm version (use kubeadm version): v1.18, and possibly older versions as well
Environment:
kubectl version: v1.18
uname -a:
What happened?
When configuring a pod subnet, a subnet smaller than /24 will cause the kube-controller-manager to enter a CrashLoop.
This is happening because the logic in https://github.com/kubernetes/kubernetes/blob/9af86e8db8e965d2aec5b8d1762fc7cbab323daa/cmd/kubeadm/app/phases/controlplane/manifests.go#L294-L317 does not do any real calculation of maskSize in the IPv4 case based on the podSubnet that was passed. Thus, for any podSubnet smaller than /24, this results in the CrashLoop described above.
What you expected to happen?
Kubeadm should set the --node-cidr-mask-size parameter in the controller-manager pod so that it matches the podSubnet prefix when the podSubnet is smaller than /24, and keep returning /24, as it does today, for podSubnets larger than or equal to /24.
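A minimal sketch of what such a calculation could look like for IPv4 (this is not the actual kubeadm code; the function name and the /24 default are assumptions based on the behaviour described above):

```go
package main

import (
	"fmt"
	"net"
)

// nodeCIDRMaskSizeIPv4 returns a value for --node-cidr-mask-size derived
// from the configured podSubnet. It defaults to /24, but when the podSubnet
// itself is smaller than /24 (e.g. /25 or /26) it falls back to the
// podSubnet's own prefix length so the controller-manager can still
// allocate a node CIDR inside the pod subnet.
func nodeCIDRMaskSizeIPv4(podSubnet string) (int, error) {
	_, cidr, err := net.ParseCIDR(podSubnet)
	if err != nil {
		return 0, fmt.Errorf("invalid podSubnet %q: %v", podSubnet, err)
	}
	prefix, _ := cidr.Mask.Size()
	const defaultMaskSize = 24
	if prefix > defaultMaskSize {
		// podSubnet is smaller than /24: use its own prefix length.
		return prefix, nil
	}
	return defaultMaskSize, nil
}

func main() {
	for _, subnet := range []string{"10.244.0.0/16", "10.100.0.0/26"} {
		size, err := nodeCIDRMaskSizeIPv4(subnet)
		if err != nil {
			fmt.Println(err)
			continue
		}
		fmt.Printf("podSubnet %s -> --node-cidr-mask-size=%d\n", subnet, size)
	}
}
```

With a /26 podSubnet this would yield --node-cidr-mask-size=26 (a single node CIDR covering the whole pod subnet) instead of the /24 value that currently makes the controller-manager crash.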
How to reproduce it (as minimally and precisely as possible)?
Simply set the podSubnet to something smaller than /24 (see the example configuration below).
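For example, a ClusterConfiguration along these lines reproduces the problem (10.100.0.0/26 is just an illustrative value; any IPv4 podSubnet smaller than /24 should do):

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
networking:
  # any IPv4 podSubnet smaller than /24 triggers the CrashLoop
  podSubnet: 10.100.0.0/26
```

Passing this file to kubeadm init --config leaves kube-controller-manager crash-looping, since kubeadm still sets --node-cidr-mask-size to 24 for IPv4.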