EKS preallocates the maximum number of IPs that a given instance type supports. This pre-warming allows for fast pod spin-up, but it leads to over-allocation of IPs and, in some cases, IP exhaustion of a given VPC / subnet.
In extreme cases, where a node only ever hosts a single pod, upwards of 50 IPs can sit wasted on each host.
Suggestion: make the number of pre-warmed IPs a configurable value.
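To make the arithmetic behind that waste concrete, here is a rough sketch. The CNI can claim every secondary IP slot on every ENI the instance type supports; the ENI and per-ENI IP limits below are illustrative values for an m5.large-class instance, and real limits vary by instance type.

```python
# Sketch of the over-allocation described above. The limits passed in
# are assumptions for illustration, not authoritative instance data.

def preallocated_ips(num_enis: int, ips_per_eni: int) -> int:
    """Secondary IPs the CNI can claim on one node: every address slot
    on every ENI except each ENI's own primary address."""
    return num_enis * (ips_per_eni - 1)

def wasted_ips(num_enis: int, ips_per_eni: int, running_pods: int) -> int:
    """Pre-warmed IPs held by the node but not backing any pod."""
    return max(preallocated_ips(num_enis, ips_per_eni) - running_pods, 0)

# A node sized roughly like an m5.large (3 ENIs x 10 IPs each)
# hosting a single pod:
print(wasted_ips(3, 10, 1))  # 26 addresses sit idle on that host
```

Larger instance types support many more ENIs and IPs per ENI, which is how the per-host waste climbs toward the 50+ figure mentioned above.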
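One shape such a knob could take is an environment variable on the CNI plugin's DaemonSet. The fragment below is a hypothetical sketch; the variable name `WARM_IP_TARGET` is illustrative here, though the aws-vpc-cni-k8s project did later add a setting along these lines.

```yaml
# Hypothetical: cap the warm pool at 5 pre-allocated IPs per node
# instead of pre-warming every IP the instance type supports.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: aws-node
  namespace: kube-system
spec:
  template:
    spec:
      containers:
        - name: aws-node
          env:
            - name: WARM_IP_TARGET   # illustrative name
              value: "5"
```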
#119 proposes a configuration option to use a different subnet for pods than the worker node's subnet. This can also help prevent IP exhaustion of the node's subnet.
@liwenwu-amazon We have been given a relatively small VPC CIDR block to use; because we have a lot of peered accounts, the address space is divided accordingly. We have found that our addresses are all being consumed despite relatively few pods being deployed. If #119 only allows the use of a different subnet within the same VPC, that wouldn't provide any relief from the shortage of private IPs available to us.