aws-cni on vanilla K8s #508
Comments
Hi @tvalasek, you are right that we have not yet optimized the plugin much for use outside of EKS, so there is more work to be done here. I think it sounds like a good idea to make the CNI more configurable in order to work better on the masters. Do you have any more concrete suggestions for what configurations you would need?
How I see it, it's something similar to what has already been done: #68. I'm not sure whether that is part of the aforementioned PR, but for our use case I would like to have something along those lines. Secondly, these config options would either work globally on all members of a cluster (like they do now) or only on nodes with specific labels at the K8s level. Does that make sense?
We do have the … option. In this case, I guess it would be better to have a way to tag master nodes, or for the CNI to be aware of common taints like …
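For context, here is a minimal sketch of how the plugin's warm-pool behaviour is tuned today, via environment variables on the aws-node container. The specific option referenced in the comment above is cut off in this excerpt; WARM_ENI_TARGET and WARM_IP_TARGET are the warm-pool settings the plugin exposes, and the values below are only illustrative. Note that they apply to every node the DaemonSet runs on, which is exactly the limitation being discussed.

```yaml
# Illustrative snippet from the aws-node container spec in the CNI DaemonSet.
# These settings are cluster-wide, not per-node.
env:
  - name: WARM_ENI_TARGET     # keep this many spare ENIs attached per node
    value: "1"
  - name: WARM_IP_TARGET      # or: keep this many spare secondary IPs instead
    value: "5"
```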
@mogren is creating separate DaemonSets an option? One for control plane nodes with the right tolerations and selectors, and one for non-control-plane nodes?
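To illustrate that idea, here is a minimal sketch of the scheduling-related fields a control-plane-only copy of the DaemonSet might carry. The label and taint keys assume a kubeadm-style cluster, and everything else (image, volumes, service account) would be copied unchanged from the upstream aws-k8s-cni manifest. The worker-only copy would instead exclude masters, for example with the node affinity shown later in this thread.

```yaml
# Sketch: a second, control-plane-only copy of the aws-node DaemonSet.
# Only the scheduling-related fields are shown; the keys below assume
# kubeadm's conventional master label and taint.
spec:
  template:
    spec:
      # Schedule only onto nodes carrying the master role label.
      nodeSelector:
        node-role.kubernetes.io/master: ""
      # Tolerate the master NoSchedule taint so the pod can land there.
      tolerations:
        - key: node-role.kubernetes.io/master
          operator: Exists
          effect: NoSchedule
```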
@tvalasek Hi Tomas, we're actually wondering what the specific feature request is for this. We're hoping you can elaborate. Are you asking for the CNI plugin to behave in a different way if it knows it's running on a master node (via inspection of, say, a node-role annotation)? Or are you asking for a way to prevent the CNI plugin (via a DaemonSet taint/toleration) from running on master nodes?
@jaypipes Hi Jay.
Yes, that's the correct one.
P.S.: If we prevented it from running on master nodes, we would not be able to schedule any pods on masters, because aws-cni is responsible for assigning IP addresses to pods.
Apologies for the long delay in getting back to you @tvalasek! This unfortunately dropped off my email radar :( The solution you are looking for is to modify the YAML manifest for the aws-k8s-cni DaemonSet you are using to deploy the CNI plugin to include an affinity rule:

```yaml
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/role
              operator: NotIn
              values:
                - master
```

Depending on how you are installing Kubernetes, the "key" above may be different (it's the label that is applied to a node). The example above uses the label that kops applies, for example.
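As a follow-up for the kubeadm-provisioned cluster described in this issue, the key would likely be kubeadm's conventional master label rather than the kops one; this variant is an assumption to verify against `kubectl get nodes --show-labels`. Keep in mind the earlier caveat in the thread: excluding the CNI from masters means no pods can be scheduled there at all.

```yaml
# Hypothetical kubeadm variant of the affinity rule above: skip nodes that
# carry the node-role.kubernetes.io/master label (verify the label first).
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: node-role.kubernetes.io/master
              operator: DoesNotExist
```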
We build K8s clusters in AWS using CF/kubeadm from upstream vanilla K8s. For networking we use the aws-cni plugin. Our out-of-the-box setup is 3x masters (with etcd running inside) and 3x worker nodes.
The aws-cni plugin runs as a DaemonSet, thus on all 6 members of the cluster (masters + workers).
Now, the behaviour I'm seeing is that the aws-cni plugin does not differentiate masters from workers.
The result of that (looking at the cni-metrics-helper stats) is that aws-cni creates new ENIs and a secondary IP address pool (warm pool) on demand on master nodes as well, which by default have pod scheduling disabled (for obvious reasons). That leaves us with a huge amount of unused warm-pool IPs on master nodes that can never be allocated.
I believe aws-cni was primarily built for EKS (where the control plane / masters and etcd are hidden from the EKS admin), but I wonder whether aws-cni has a feature to distinguish masters from workers (and thus handle warm-pool scheduling differently) for those of us who decided not to use EKS.
E.g. labeling master nodes and annotating the aws-cni DaemonSet to act differently (like not creating new ENIs) on labeled nodes; see the sketch below for what that could look like.
Thanks
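Purely as an illustration of the feature request above, and not an existing aws-cni option, this is roughly what an opt-out marker on a master node could look like; the label name k8s.amazonaws.com/skip-warm-pool is made up for this sketch.

```yaml
# Hypothetical node labelling (NOT a feature the plugin supports today):
# the CNI could check a label like this before growing the warm pool.
apiVersion: v1
kind: Node
metadata:
  name: master-1                              # example node name
  labels:
    node-role.kubernetes.io/master: ""        # applied by kubeadm
    k8s.amazonaws.com/skip-warm-pool: "true"  # made-up opt-out label
```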