Automatic Update of Nodes #1716
Comments
This is a duplicate of #1018
I disagree that this is a duplicate. My proposal is about Karpenter periodically checking for a new AMI. The linked issue (and the issues linked from it) talks about reconciling nodes when the spec of the Provisioner that spawned them is updated.
I think this issue is part of what is being discussed here: #1457
Hey @DWSR, you’re right that these issues are different, but they are all related. #1457 asks for a feature to reconcile nodes that are out of spec of the provisioner. This issue seems to add a case similar to #1457, consuming another condition for when to roll nodes, this one being AMI changes. #1457 asks for a signal native to Kubernetes, while this issue asks for a signal native to AWS. I think this issue, #1457, and #1018 should all be aggregated into one issue (or at least linked to one) that discusses more signals for when to roll nodes, and controls for doing so. That would let us think through the cases where Karpenter would attempt to automatically reconcile "out-of-spec" capacity, and how Karpenter could control or surface controls for this.
I think aggregating all of the issues together makes a lot of sense. I just disagree that this is covered by the existing issues. @bwagner5 The difference between #1457 and this request is that #1457 specifically mentions nodes that are "out of spec". In the scenario I am describing, the nodes are still technically "within spec" because they are using the correct AMI family, etc.
+1
Closing in favor of #1738 |
Tell us about your request
As a cluster operator, when a new AMI version is released for my cluster, Karpenter should slowly and gracefully update all nodes to the latest version without waiting for the node TTL.
Tell us about the problem you're trying to solve. What are you trying to do, and why is it hard?
I'd like to ensure that OS updates are applied to our clusters as soon as possible without causing undue node churn by setting an aggressive TTL (aggressive node churn also has other drawbacks currently, since there is no max-in-flight control). This would avoid the toil of ensuring that these patches are applied.
Are you currently working around this issue?
Yes, by either manually cycling nodes or waiting for the node TTL to expire.
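As a sketch of the manual-cycling workaround, assuming the cluster uses the EKS Optimized Amazon Linux 2 AMI: the latest AMI ID is published as a public SSM parameter, so it can be compared against what nodes are running, and stale nodes drained by hand. The Kubernetes version and node name below are placeholders.

```shell
# Look up the latest EKS Optimized AL2 AMI ID (the "1.21" version segment is an example).
aws ssm get-parameter \
  --name /aws/service/eks/optimized-ami/1.21/amazon-linux-2/recommended/image_id \
  --query 'Parameter.Value' --output text

# If a node is behind, cycle it by hand (node name is a placeholder).
kubectl cordon ip-10-0-0-1.ec2.internal
kubectl drain ip-10-0-0-1.ec2.internal --ignore-daemonsets --delete-emptydir-data
# ...then terminate the instance so Karpenter replaces it with a node on the new AMI.
```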
Additional context
There is currently no mechanism to subscribe to updates to EKS Optimized AMIs: aws/containers-roadmap#734
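Since there is no push mechanism, any automation has to poll. A minimal sketch of the detect-and-roll logic this request implies, with the AMI lookup injected as a function (in practice it would query the public SSM parameter above; the function and node-map shapes here are hypothetical, not Karpenter's API):

```python
# Hypothetical sketch: poll for an AMI change and pick the nodes to roll.
# `fetch_latest_ami` stands in for an AWS SSM lookup; it is injected so
# the reconcile logic itself stays testable without AWS access.

def detect_ami_change(fetch_latest_ami, current_ami):
    """Return the new AMI ID if it differs from the current one, else None."""
    latest = fetch_latest_ami()
    return latest if latest != current_ami else None


def nodes_to_roll(nodes, latest_ami):
    """Given a mapping of node name -> AMI ID, return the nodes lagging
    behind the latest AMI; these are candidates for graceful cycling."""
    return [name for name, ami in nodes.items() if ami != latest_ami]
```

A periodic loop would call `detect_ami_change`, and on a change, slowly drain and replace each node returned by `nodes_to_roll` (respecting some max-in-flight limit) rather than waiting for the node TTL.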
Attachments
N/A