Mega Issue: Deprovisioning Controls #1738
Comments
Heavy ➕ for #1716 - Karpenter should watch for AMI version updates and roll nodes to the new version. Use case: rotating (refreshing) running instances when a new security or other important update is released to the AMI used by the project/organization.
+1
What does an AMI or OS refresh look like in Karpenter, since it doesn't use ASG node groups?
Any update on this? Please.
Would it be possible to do something like a subset of #1841, which doesn't necessarily have to re-provision nodes? For example, updating instance tags shouldn't require provisioning new nodes.
Another example of #1716: I upgraded the EKS control plane and would like the nodes to upgrade accordingly.
Hey all. Check out #2569 for the design doc for node upgrades.
@ellistarn Hi, could you share an ETA for the NodeDrift implementation?
I am saying I want drift to be remediated by Karpenter.
@ellistarn Regarding "ability to control when and when not to expire nodes":
Just to understand: will the proposed way to address this be Node Disruption Budgets, with Kubernetes CronJobs updating them to achieve "maintenance windows"?
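For illustration, newer Karpenter releases expose NodePool disruption budgets that can express a maintenance window directly, without CronJobs patching objects. A minimal sketch, assuming the v1beta1 NodePool API; field names and placement (for example `expireAfter`) differ across versions, and the schedule, limits, and `nodeClassRef` name here are made up:

```yaml
# Sketch of a maintenance window via NodePool disruption budgets.
# Assumes a Karpenter release with the v1beta1 NodePool API.
apiVersion: karpenter.sh/v1beta1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      nodeClassRef:
        name: default                # hypothetical EC2NodeClass name
      requirements:
        - key: kubernetes.io/arch
          operator: In
          values: ["amd64"]
  disruption:
    consolidationPolicy: WhenUnderutilized
    expireAfter: 720h                # illustrative: rotate nodes roughly every 30 days
    budgets:
      # Block voluntary disruption during business hours (UTC)...
      - nodes: "0"
        schedule: "0 9 * * mon-fri"
        duration: 8h
      # ...and otherwise allow at most 10% of nodes to be disrupted at once.
      - nodes: "10%"
```

The scheduled `nodes: "0"` budget acts as the window in which Karpenter may not voluntarily disrupt nodes, while the `10%` budget rate-limits disruption the rest of the time.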
Changed the title and updated this issue to more accurately reflect the current state of affairs. Added a design doc on how we're thinking Drift should work for the rest of the known fields here: kubernetes-sigs/karpenter#366
Hi, I’ve searched online and on GitHub, but can’t find the documentation covering node patching/updates. It’s related to #1716, which I see has already been ticked off. Can you please point me to the documentation?
@hendryanw you can drive node patches/updates by deprovisioning. As of v0.29.0, Karpenter automatically upgrades nodes through Drift for AMIs, Security Groups, and Subnets.
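To make that concrete, drift is driven by what the node template or node class selects: when the AMI selection resolves to a different AMI than the one a node was launched with, Karpenter treats the node as drifted and replaces it. A minimal sketch using the later EC2NodeClass API (v0.29.0 itself exposed this through AWSNodeTemplate's `amiSelector`); the role name and tags below are hypothetical:

```yaml
# Sketch of AMI-driven drift. When the amiSelectorTerms below resolve to a
# newer AMI (e.g. a newly published EKS-optimized AMI, or a custom AMI your
# pipeline retags), Karpenter marks existing nodes as drifted and replaces
# them. Older releases express the same idea via AWSNodeTemplate.amiSelector.
apiVersion: karpenter.k8s.aws/v1beta1
kind: EC2NodeClass
metadata:
  name: default
spec:
  amiFamily: AL2
  role: "KarpenterNodeRole-my-cluster"       # hypothetical IAM role name
  subnetSelectorTerms:
    - tags:
        karpenter.sh/discovery: my-cluster   # hypothetical discovery tag
  securityGroupSelectorTerms:
    - tags:
        karpenter.sh/discovery: my-cluster
  amiSelectorTerms:
    - tags:
        my-org/approved: "true"              # hypothetical AMI tag
```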
Can we add kubernetes-sigs/karpenter#735 to this list?
Just released v0.30.0-rc.0, which contains the full set of drift expansion. Check out the full release notes in karpenter and karpenter-core.
Hey all, I've linked an RFC here that has some of the API decisions we're thinking about for some of the linked items in this mega issue.
Please take a read and give a review if you can!
Considering this is an extremely old issue that tracks a multitude of issues, I'm closing it out as completed, since the issues we've been planning to fix have since been merged. The remaining open issues should continue to be tracked individually. Thanks!
Tell us about your request
Karpenter provisions nodes and deprovisions nodes as described in the docs.
Some users are asking for more control over when Karpenter should disrupt nodes, and for ways to rate-limit these disruptions. Opening this issue to aggregate the existing issues.
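For reference, a minimal sketch of the expiration and emptiness TTLs these requests build on, using the v1alpha5 Provisioner API of that era; the values and the `providerRef` name are illustrative:

```yaml
# Sketch of the existing TTL-based disruption controls on the v1alpha5
# Provisioner; the linked issues ask for finer-grained control (jitter,
# per-node TTLs, maintenance windows) on top of these.
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: default
spec:
  providerRef:
    name: default                  # hypothetical AWSNodeTemplate name
  ttlSecondsUntilExpired: 2592000  # expire (and replace) nodes after ~30 days
  ttlSecondsAfterEmpty: 30         # deprovision empty nodes after 30 seconds
```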
Tell us about the problem you're trying to solve. What are you trying to do, and why is it hard?
**Karpenter disruption conditions:**
- `jitter` for `TTLSecondsUntilExpired`

**Control over how Karpenter should disrupt nodes:**
- `spec.disruption.consolidateAfter` (kubernetes-sigs/karpenter#735 - Per-node TTL for Consolidation)
- `MinAge` for NodeClaims (kubernetes-sigs/karpenter#1030 - MinAge for NodeClaims)

**Control over Karpenter's Eviction Policy:**
- `do-not-disrupt` pod annotation (kubernetes-sigs/karpenter#752 - Allow specifying a non-permanent time frame `do-not-evict` for a pod, for un-programmatic removal use-cases); see the pod sketch under Additional context below.
- `allow-evict` annotation for pods for the emptiness condition.

Additional context
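To illustrate the eviction-policy items above, a pod can opt out of voluntary disruption with an annotation. A minimal sketch; the annotation key is `karpenter.sh/do-not-evict` in older releases and `karpenter.sh/do-not-disrupt` in newer ones, and the image and command are placeholders:

```yaml
# Sketch of a pod opting out of voluntary disruption by Karpenter.
# Older releases use the karpenter.sh/do-not-evict annotation; newer ones
# use karpenter.sh/do-not-disrupt. Image and command are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: batch-job
  annotations:
    karpenter.sh/do-not-disrupt: "true"
spec:
  containers:
    - name: worker
      image: public.ecr.aws/docker/library/busybox:1.36
      command: ["sh", "-c", "sleep 3600"]
```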
Attachments
If you think you might have additional information that you'd like to include via an attachment, please do - we'll take a look. (Remember to remove any personally-identifiable information.)
Community Note