Hi,
I have finally figured out what was happening: clusterSubnet in my json template was set to X.X.X.X/23.
From my point of view that was enough: a /23 gives 512 addresses, i.e. 507 usable once Azure reserves its 5 per subnet.
Apparently, the component responsible for IP allocation (which one?) tries to allocate a /24 pod range for each node, and a /23 mask can only yield 2 x /24. I have seen that in the node spec.PodCIDR in the dump file, only two nodes had this field set (a quicker check than the dump is shown after the snippet):
"Spec": {
"PodCIDR": "10.5.5.0/24",
This error no longer occurs when I set the clusterSubnet to X.X.X.X/21.
So the question now is: how do we force a change of the PodCIDR size so we can have something smaller than a /24? (One possible lever is sketched just below.)
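For what it's worth, upstream Kubernetes sizes the per-node pod range via the kube-controller-manager flag --node-cidr-mask-size (default 24). I have not verified whether acs-engine v0.7.0 exposes it, and the manifest path below is an assumption, so treat this as a sketch of where the knob lives rather than a supported configuration:

```
# kube-controller-manager flags; the manifest path is an assumption:
#   /etc/kubernetes/manifests/kube-controller-manager.yaml
--allocate-node-cidrs=true
--cluster-cidr=10.5.4.0/23
--node-cidr-mask-size=26   # 2^(26-23) = 8 per-node ranges instead of two /24s
```

Note the trade-off: a /26 per node caps each node at roughly 62 pod IPs in exchange for more allocatable node ranges.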
I hope this helps, and thanks in advance for your help on my last question.
Is this a request for help?:
YES
Is this an ISSUE or FEATURE REQUEST? (choose one):
ISSUE
What version of acs-engine?:
0.7.0
Orchestrator and version (e.g. Kubernetes, DC/OS, Swarm)
Kubernetes 1.7.5
What happened:
Hi,
(The cluster-info dump is in here: https://drive.google.com/file/d/0BxaknNvZVd06dXpUSDViYkhTMjg/view?usp=sharing).
In order to work around these known issues: #1159 (in v0.8.0) and #1453 (in v0.7.0), I've replaced these lines (tag v0.7.0):

acs-engine/pkg/acsengine/const.go, line 73 at b22c4e5
acs-engine/pkg/acsengine/const.go, line 76 at b22c4e5

with these:
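```go
DefaultKubernetesDNSServiceIP = "10.5.8.10"
// DefaultKubernetesServiceCIDR specifies the IP subnet that kubernetes will
// create Service IPs within.
DefaultKubernetesServiceCIDR = "10.5.8.0/23"
```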
Then I recompiled acs-engine and created a k8s cluster with the cluster definition below; the deployment succeeded.
But the cluster-info dump shows a repetitive error, and furthermore the kube-dns, heapster, and kube-proxy statuses are in error:

```
Line 34750: E1009 18:25:41.128853 1 controller_utils.go:351] Error while processing Node Add/Delete: failed to allocate cidr: CIDR allocation failed; there are no remaining CIDRs left to allocate in the accepted range
```
This error is thrown here, I guess: https://github.com/giantswarm/kubernetes-dashboard/blob/2da85548513368ab111881a1968cb74fee09206e/Godeps/_workspace/src/k8s.io/kubernetes/pkg/controller/node/cidr_allocator.go#L48
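To make the failure mode concrete, here is a minimal Go sketch (mine, not code from the allocator) of the arithmetic the range allocator effectively performs; the CIDR values are assumed from the dump:

```go
package main

import (
	"fmt"
	"net"
)

// countNodeCIDRs returns how many fixed-size per-node pod ranges fit inside
// the cluster CIDR. This mirrors the arithmetic behind "no remaining CIDRs
// left to allocate": a /23 cluster subnet split into /24 node ranges yields
// exactly two, so every node after the second fails to get a PodCIDR.
func countNodeCIDRs(clusterCIDR string, nodeMaskSize int) (int, error) {
	_, ipnet, err := net.ParseCIDR(clusterCIDR)
	if err != nil {
		return 0, err
	}
	clusterMaskSize, _ := ipnet.Mask.Size()
	if nodeMaskSize < clusterMaskSize {
		return 0, fmt.Errorf("node mask /%d does not fit in %s", nodeMaskSize, clusterCIDR)
	}
	return 1 << uint(nodeMaskSize-clusterMaskSize), nil
}

func main() {
	two, _ := countNodeCIDRs("10.5.4.0/23", 24)
	fmt.Println(two) // 2 -- only two nodes ever get a spec.PodCIDR
	eight, _ := countNodeCIDRs("10.5.0.0/21", 24)
	fmt.Println(eight) // 8 -- why a /21 clusterSubnet fixes it
}
```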
Please note that adding the route table to the subnets as suggested in https://github.com/Azure/acs-engine/blob/master/docs/kubernetes/features.md#custom-vnet does not change anything.
Here are the devices connected to the VNET:
And here are the available services:

```
kubectl get svc --all-namespaces -o wide
```
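A related check, assuming kubectl access: the failing system components and the nodes they landed on can be listed with

```
kubectl get pods -n kube-system -o wide
```

which helps correlate the broken pods with nodes that never received a spec.PodCIDR.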
What you expected to happen:
An operational k8s cluster
How to reproduce it (as minimally and precisely as possible):
See above