How to set Node Allocatable memory in kind? #1524
I think this and #877 are the same question. Feel free to close if you agree.
#877 is looking to actually change the node size, I think. Overriding kubelet flags can be done with a kubeadmConfigPatch, which needs more documentation... it's on our radar.
Supporting this first-class might be a better answer than #877's current WIP approach.
Yeah, that's definitely the missing part in that PR: it's not just isolating the nodes, it's "converting" them into VMs, and for that we need kubelet to only "see" the allocated resources... hehe, I didn't get that before 😅 Let me play with this https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/#example-scenario and see how it goes.
/assign
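As a rough sketch, the reservations from that linked example scenario could be expressed as a kubelet configuration like the one below (field names from KubeletConfiguration v1beta1; the values are illustrative and taken from the docs page, not anything kind-specific):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Reserve resources for kube daemons and OS daemons; what remains
# (capacity - reservations - eviction threshold) becomes Node Allocatable.
kubeReserved:
  cpu: "1"
  memory: 2Gi
  ephemeral-storage: 1Gi
systemReserved:
  cpu: 500m
  memory: 1Gi
  ephemeral-storage: 1Gi
evictionHard:
  memory.available: 500Mi
  nodefs.available: "10%"
```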
How would this look with kubeadmConfigPatches?
Answering my own question:

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        system-reserved: memory=4Gi
```
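To verify the effect, you can create the cluster with that file (for example `kind create cluster --config kind-config.yaml`, where `kind-config.yaml` is whatever you saved the snippet as) and compare the Capacity and Allocatable sections of `kubectl describe node`.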
Thanks @arianvp. I did this and copied the block to my worker node as well. It reduced the allocatable memory for the control-plane node, but not for the worker node.

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        system-reserved: memory=8Gi
- role: worker
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        system-reserved: memory=8Gi
```
Figured it out. For worker nodes you need JoinConfiguration (not InitConfiguration).
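For completeness, a sketch of what the working config would then look like, assuming the same 8Gi reservation as above (adjust the value to your machine):

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        system-reserved: memory=8Gi
- role: worker
  kubeadmConfigPatches:
  - |
    # Workers join the cluster, so their kubelet flags come from JoinConfiguration.
    kind: JoinConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        system-reserved: memory=8Gi
```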
Correct me if I'm wrong, but doesn't this approach limit the resources that can be used by the system (OS daemons) inside the container, rather than the resources that can be allocated by pods?
You are correct in your understanding. The documentation below should help clarify what the arguments do: https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/#node-allocatable
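For anyone landing here later: per that page, Allocatable is roughly Capacity minus kube-reserved, minus system-reserved, minus the hard eviction threshold. By default system-reserved is not enforced against the daemons themselves (that requires enforce-node-allocatable and a system-reserved cgroup); its practical effect in this thread is to shrink Allocatable, which is what limits how much pods can request. For example, on a node with 16Gi capacity, system-reserved=memory=8Gi plus the default 100Mi hard eviction threshold leaves roughly 7.9Gi allocatable.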
When I create a kind cluster, it sets the Node Allocatable memory to the maximum memory that my laptop has. However, I want to limit it to something lower because I'm usually running other things on my laptop as well (like a browser).
Usually one would do this by specifying kubelet flags, as I read here: https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/
I tried looking in the docs but found nothing about how to override kubelet flags, so this is probably not the right way.
The reason is that I'm trying to debug a deployment locally, but because kind thinks there is more RAM than it actually has, it over-allocates, and my computer runs out of memory and locks up before I can debug. I want to turn that over-allocation into a scheduling error by giving kind less memory to work with.
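Once Allocatable is lowered as described in the comments above, a request that no longer fits is rejected at scheduling time instead of exhausting the host. A hypothetical example (the pod name, image, and the 12Gi request are made up for illustration):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: memory-hog   # hypothetical name
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "infinity"]
    resources:
      requests:
        memory: 12Gi   # larger than the node's Allocatable after the reservation
```

A pod like this stays Pending with a FailedScheduling event (insufficient memory) rather than locking up the machine.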