# Transition to EKS managed addons (Enables NetworkPolicy enforcement via aws CNI) #4660
I think this work makes #4661 less relevant, but I think doing #4661 first can help reduce differences between clusters, and that in turn helps this issue get worked on without hiccups. I propose we do #4661 before this for that reason, then get this done; after that, #4652 could potentially be resolved by applying a simple eksctl config addition to each of our EKS clusters:

```diff
 addons:
   - name: vpc-cni
+    configurationValues: |-
+      enableNetworkPolicy: "true"
```
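For context on what this unlocks: without a NetworkPolicy-capable CNI, Kubernetes `NetworkPolicy` resources are silently ignored. Here is a minimal, hypothetical policy (name and namespace are placeholders) that would actually take effect once enforcement is enabled:

```yaml
# Hypothetical default-deny example; enforced only once the CNI supports it.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress   # placeholder
  namespace: some-namespace    # placeholder
spec:
  podSelector: {}              # selects all pods in the namespace
  policyTypes:
    - Ingress                  # no ingress rules listed, so all ingress is denied
```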
I've not assigned an allocation to this. It could be internal engineering if it's considered routine maintenance effort, or driven by product dev if seen as a path towards NetworkPolicy enforcement.
Given various other priorities and commitments, let's roll this into the next round of EKS upgrades (when they happen in a few months).
I've put this on the internal engineering roadmap and prioritized it accordingly.
Looking at the linked documentation, maybe this can be done by simply declaring the addons explicitly and then running an update.
There is a banner on https://eksctl.io/ linking to "Cluster creation flexibility for networking add-ons":
I think we should act on this, and transition from installing `aws-cni`, `kube-proxy`, and `coredns` as `eksctl`-managed (but EKS self-managed) addons with dedicated upgrade commands etc., towards installing them as EKS managed addons enabled by `eksctl` config.

## Motivation
This is the new approach, and the old approach is being phased out, I think. The new approach is what we find AWS docs about, and only through the new approach can we, for example, understand how to enable network policy enforcement for the `aws-cni` plugin.

For example, consider this part from an eksctl config example in the AWS docs, referencing the new approach where `vpc-cni`, `coredns`, and `kube-proxy` are explicitly listed as EKS managed addons. This accepts `configurationValues` with `enableNetworkPolicy`, but the equivalent old `eksctl` self-installed `vpc-cni` addon doesn't allow itself to be configured like this, making us unable to enable network policy enforcement. A sketch of such a config follows below.
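As a minimal sketch of the kind of config I mean (untested; the cluster name and region are placeholders, and the schema should be double-checked against the eksctl docs):

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster   # placeholder
  region: us-west-2  # placeholder
addons:
  - name: vpc-cni
    # passed through to the EKS managed addon's configuration
    configurationValues: |-
      enableNetworkPolicy: "true"
  - name: coredns
  - name: kube-proxy
```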
## Preliminary steps

EDIT: re-creation of addons could be one strategy to try, but also see #4660 (comment).
- [ ] Ensure you have the latest `eksctl` version
- [ ] Trial and verify re-creation of addons:
This should be seen as disruptive maintenance, so don't do it in an AWS cluster that has active users now, or that you believe soon will. It's not clear how to revert a change like this, because there is no good documentation on doing this transition.
Documentation about re-creating addons is available here, and here is a key section screenshotted:

Note the `resolveConflicts: overwrite` config, it's very relevant for us! A sketch of how this could look follows below.
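As a sketch (untested; it assumes the addons get declared in each cluster's eksctl config, and the field names should be verified against the eksctl docs):

```yaml
# Declaring resolveConflicts: overwrite should let the EKS managed addon
# take over (overwrite) the existing self-managed install on creation.
addons:
  - name: coredns
    resolveConflicts: overwrite
  - name: kube-proxy
    resolveConflicts: overwrite
  - name: vpc-cni
    resolveConflicts: overwrite
```

Creating the managed addons from that config should then be something like `eksctl create addon -f <cluster-config>.yaml`.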
Initial advice:

- The `vpc-cni` addon's pods are the `aws-node` pods.

## Addon specific steps
- [ ] Re-create the `coredns` addon
- [ ] Verify the `coredns` addon seems OK, and try to resolve it otherwise
- [ ] Re-create the `kube-proxy` addon
- [ ] Verify the `kube-proxy` addon seems OK, and try to resolve it otherwise
- [ ] Re-create the `vpc-cni` addon
- [ ] Verify the `vpc-cni` addon seems OK, and try to resolve it otherwise
- [ ] Update k8s upgrade docs about addon updating.
I think this means to reduce four commands in the docs into just the last command, as that updates all addons listed in the config.
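For illustration, something along these lines (the exact commands in our docs may differ, and whether `eksctl update addon` accepts a config file this way should be verified against current eksctl docs):

```sh
# Legacy approach: one dedicated command per eksctl-installed addon, e.g.:
eksctl utils update-aws-node   --cluster "$CLUSTER_NAME" --approve
eksctl utils update-kube-proxy --cluster "$CLUSTER_NAME" --approve
eksctl utils update-coredns    --cluster "$CLUSTER_NAME" --approve

# New approach: a single command updating all addons listed in the config:
eksctl update addon -f "$CLUSTER_NAME.eksctl.yaml"
```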
- [ ] Update the terraform template for new clusters to include an `addons` listing of `vpc-cni`, `coredns`, and `kube-proxy`.

I think this makes sense, but I'm not sure.
- [ ] Apply the trialed transition to all other EKS clusters. EKS clusters can be listed by `deployer config get-clusters --provider=aws`.

## Definition of done

- [ ] An `addons` listing of `vpc-cni`, `coredns`, `kube-proxy` is in place across our EKS clusters.