
acs-engine needs at least 9 bits for custom VNET #1836

Closed
nakah opened this issue Nov 28, 2017 · 2 comments · Fixed by #1863

nakah commented Nov 28, 2017

Is this a request for help?:
No

Is this an ISSUE or FEATURE REQUEST? (choose one):
ISSUE

What version of acs-engine?:
0.9.3

Orchestrator and version (e.g. Kubernetes, DC/OS, Swarm):
Kubernetes

What happened:
I tried to deploy a Kubernetes cluster on a custom VNET within a /24 subnet. However, during validation I received the following error: "must reserve at least 9 bits for node", which is here in the code.
The network team I'm working with can't create a /23 subnet for each of our environments (that's too many reserved IPs), as Azure VNETs have a 4000-IP limit.

What you expected to happen:
The cluster should have been created without any issue

How to reproduce it (as minimally and precisely as possible):
Deploy a K8S cluster on a custom VNET with a /24 subnet

Anything else we need to know:
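For reference, the arithmetic behind the error: a /24 leaves only 32 - 24 = 8 host bits, one short of the 9 the validator requires. Below is a minimal Go sketch of that kind of check (not the actual acs-engine validation code; the requiredNodeBits constant and validateSubnetSize function are illustrative names):

```go
package main

import (
	"fmt"
	"net"
)

// requiredNodeBits mirrors the 9-bit minimum from the error message;
// the real constant in acs-engine may be named differently.
const requiredNodeBits = 9

// validateSubnetSize returns an error if the subnet CIDR does not leave
// at least requiredNodeBits of host address space for nodes.
func validateSubnetSize(cidr string) error {
	_, ipnet, err := net.ParseCIDR(cidr)
	if err != nil {
		return err
	}
	prefixLen, totalBits := ipnet.Mask.Size() // e.g. 24 and 32 for "10.0.0.0/24"
	hostBits := totalBits - prefixLen
	if hostBits < requiredNodeBits {
		return fmt.Errorf("must reserve at least %d bits for node (got %d from %s)",
			requiredNodeBits, hostBits, cidr)
	}
	return nil
}

func main() {
	fmt.Println(validateSubnetSize("10.0.0.0/24")) // fails: only 8 host bits (~254 usable IPs)
	fmt.Println(validateSubnetSize("10.0.0.0/23")) // passes: 9 host bits (~510 usable IPs)
}
```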

jackfrancis (Member) commented

I would assume this is because restricting a cluster to ~250 IP addresses is an edge case we can't currently manage, and one we don't want customers to get themselves into: a cluster that needs to scale but has no more IPs to allocate to nodes.

@JackQuincy @anhowe was that the thinking behind requiring a network with 2^9-ish allocatable IPs for a cluster? Are we willing to revisit this?

nakah (Author) commented Nov 28, 2017

Just to clarify: I'm not using any network policy, so VNET IPs are used for nodes only. Thus I don't think that a cluster with fewer than 250 nodes is an edge case.
