Scale node storage based on pod ephemeral-storage requests #2394
Comments
Is there some progress with it?
Those docs are now here, but there hasn't been any update.
No current, active progress. But it's something that's considered "v1" scope, which means we're planning to work on this as part of the v1 release for Karpenter. It's definitely on our list of priorities, but the maintainer team has been somewhat time-constrained lately, working on other feature work and stability improvements.
+1
This would reduce some alerting for NodePools sharing the same StorageClass with static storage values. Instances from 2xl to 16xl might need somewhat different amounts of storage due to multiple factors, including Docker image pulls and volumes. Any chance this could get a bump in priority?
Are there any updates on this issue? We currently need to specify a constraint for using NVMe instance types and prepare the RAID0 array:

  - key: karpenter.k8s.aws/instance-local-nvme
    operator: Gt
    values: ["100"]

Otherwise we face issues where some pods don't have enough ephemeral disk space, which leads to evicted pods on the node.
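For reference, a minimal sketch of how that constraint might look in a full spec, assuming the v1alpha5 Provisioner API; the karpenter.k8s.aws/instance-local-nvme key comes from the comment above, while the Provisioner name and threshold are placeholders (newer NodePool APIs express the same idea with slightly different fields):

```yaml
# Hypothetical sketch: restrict this Provisioner to instance types with more than
# 100 GiB of local NVMe storage, so pods with large ephemeral-storage requests fit.
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: nvme-only
spec:
  requirements:
    - key: karpenter.k8s.aws/instance-local-nvme   # instance-local NVMe capacity (GiB)
      operator: Gt
      values: ["100"]
```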
Any progress on this?
Doesn't seem to work for me at all; all nodes get a hardcoded 20 GB of ephemeral storage no matter what.
+1
Tell us about your request
What do you want us to build?
Enable Karpenter to dynamically scale the size of the block device attached to a node at provision time. The size of the block device would be based on the sum of the ephemeral-storage requests of the pods being bin-packed onto the node, plus some overhead.
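To illustrate the input this sizing would be driven by, here is a standard Kubernetes pod spec declaring an ephemeral-storage request; the pod name, image, and sizes are made up for the example. The requests, summed across the pods scheduled to a node plus overhead, are what the block device would be sized from:

```yaml
# Hypothetical example pod: its ephemeral-storage request is what Karpenter
# would sum across all pods bin-packed onto a node to size the block device.
apiVersion: v1
kind: Pod
metadata:
  name: scratch-heavy
spec:
  containers:
    - name: app
      image: public.ecr.aws/docker/library/busybox:latest
      resources:
        requests:
          ephemeral-storage: 10Gi   # counted toward the node's required storage
        limits:
          ephemeral-storage: 20Gi
```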
Tell us about the problem you're trying to solve. What are you trying to do, and why is it hard?
Currently, a node's storage capacity can be defined via Block Device Mappings in the Karpenter provisioner. This works well, but it forces customers to define a static value for all instances launched through a given provisioner. Customers would like the ability to dynamically scale node storage based on the pod workload or on the instance type.
Are you currently working around this issue?
This can be worked around by defining Block Device Mappings in the Karpenter Provisioner, as shown in the sketch below. These values are static for a given provisioner, however, and cannot be dynamically scaled up or down.
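A minimal sketch of that static workaround, assuming the v1alpha1 AWSNodeTemplate API; the template name, device name, and volume size are placeholders (newer releases expose the same blockDeviceMappings field on EC2NodeClass):

```yaml
# Hypothetical static workaround: every node launched via this template gets the
# same fixed root volume size, regardless of the pods' ephemeral-storage requests.
apiVersion: karpenter.k8s.aws/v1alpha1
kind: AWSNodeTemplate
metadata:
  name: static-storage
spec:
  blockDeviceMappings:
    - deviceName: /dev/xvda        # root device on most Amazon Linux AMIs
      ebs:
        volumeSize: 100Gi          # static value; does not scale with workload
        volumeType: gp3
```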
Related issues:
#2512
#2298
#1995
#1467
#3077
#3111
Additional context
Anything else we should know?
Attachments
If you think you might have additional information that you'd like to include via an attachment, please do - we'll take a look. (Remember to remove any personally-identifiable information.)