
Support checking the kubectl.kubernetes.io/last-applied-configuration annotation #92

Closed
pwhitehead00 opened this issue Jun 23, 2020 · 6 comments
Labels
enhancement Adding additional functionality or improvements

Comments

@pwhitehead00

Some k8s manifests are applied by the cloud provider and will be missed by detect-files and detect-helm. A good example is how EKS handles CoreDNS and kube-proxy; they seem to be applied via kubectl apply. It would be awesome if pluto could check for this use case.
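For reference, a rough manual equivalent of what this would automate (a sketch, Deployments only; extend to other kinds as needed, and note the jsonpath escaping of the annotation key):

```
# Print each Deployment's last-applied-configuration annotation so the
# apiVersion it was last applied with can be inspected.
kubectl get deployments --all-namespaces \
  -o jsonpath='{range .items[*]}{.metadata.namespace}{"/"}{.metadata.name}{"\t"}{.metadata.annotations.kubectl\.kubernetes\.io/last-applied-configuration}{"\n"}{end}'
```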

sudermanjr added the enhancement label Jun 23, 2020
@sudermanjr
Member

@IronHalo Thanks for the feedback.

We have discussed this possibility internally in the past, and have decided that we would rather not implement this.

Since it would be limited in scope (only kubectl apply) and would require scanning every single resource in the cluster, it would be high cost for little benefit.

I would really hope that EKS and GKE don't allow upgrading to a k8s version that would render their deployments obsolete without first updating their deployments. This seems to be in the hands of the cloud provider, and not something the typical operator would need or want to monitor.

@pwhitehead00
Author

Agreed, I'd expect AWS to handle this. A second use case could be developers that have applied things outside of a pipeline. This should never happen, but depending on how locked down an environment is, it may. Spinnaker also applies manifests generated by hal via kubectl apply instead of helm or dumping raw manifests.

My goal is to use a single tool to check a cluster for API removals ahead of an upgrade.
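For context, the checks I'm already running look roughly like this (subcommand names as discussed above; exact flags may vary by pluto version, see pluto --help):

```
# detect-files scans manifests on disk, detect-helm scans releases in the
# cluster; neither sees resources that were only ever kubectl apply'd.
pluto detect-files -d ./manifests
pluto detect-helm
```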

@pkoraca
Contributor

pkoraca commented Jun 23, 2020

I would really hope that EKS and GKE don't allow upgrading to a k8s version that would render their deployments obsolete without first updating their deployments.

At least in the case of EKS, they don't manage kube-proxy, aws-node and coredns after the cluster is created. Users must convert those resources to a new apiVersion before upgrading to 1.16 (which also requires the user to upgrade the images for those components). But soon it should be possible to manage those add-ons.

Additionally, we used kubectl patch to apply resource requests to those add-ons, and the last-applied-configuration annotation is now gone, which means this will not be detected by a tool (e.g. kubent) or a custom script that looks at last-applied-configuration.
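A quick way to see whether the annotation is still there (a sketch; the kinds/names are the default EKS add-ons in kube-system, and an empty value means the annotation is gone):

```
for r in daemonset/kube-proxy daemonset/aws-node deployment/coredns; do
  printf '%s: ' "$r"
  kubectl -n kube-system get "$r" \
    -o jsonpath='{.metadata.annotations.kubectl\.kubernetes\.io/last-applied-configuration}'
  echo
done
```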

@pwhitehead00
Author

The patch is a good point, we're planning to do the same when we upgrade to 1.16. Closing.

@sudermanjr
Member

At least in the case of EKS, they don't manage kube-proxy, aws-node and coredns after the cluster is created. Users must convert resources to a new apiVersion before doing the upgrade to 1.16

@pkoraca that is really good information to have and also extremely disappointing

@pkoraca
Contributor

pkoraca commented Jun 24, 2020

Sorry, I just found out it's actually not kubectl patch that removes last-applied-configuration. Patching seems to work fine. What we also used is eksctl utils to upgrade those 3 add-ons to the latest recommended image, and that's when the annotation is lost, which makes it impossible to detect the correct apiVersion.
To ensure everything is apps/v1, we will probably do the following: kubectl get for each add-on, then kubectl convert and kubectl apply.
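Roughly, per add-on, something like this (a sketch using coredns as the example; kubectl convert was still shipped alongside kubectl at the time):

```
# Dump the live object, convert it to apps/v1, and re-apply it so the
# last-applied-configuration annotation reflects the new apiVersion.
kubectl -n kube-system get deployment coredns -o yaml > coredns.yaml
kubectl convert -f coredns.yaml --output-version apps/v1 | kubectl apply -f -
```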
