Karpenter Node NotReady when provided with extra kubelet args #5043
Comments
After changing amiFamily from AL2 to Custom, it seems there are no NotReady nodes anymore. So my question: what is the behavior when providing kubelet config via user data? Is the user data executed twice, and could that have caused this bug?
This seems like a duplicate of the node repair issue. Since most of the nodes (398/400) became Ready, it looks like a transient error was the problem in this case.
It is Karpenter's responsibility to do the node repair, but I am just wondering why this happens. Is it due to the user data running twice?
I suspect it's not due to userData, as most of the nodes are Ready.
Closing as a duplicate of kubernetes-sigs/karpenter#750
@hitsub2 Just wondering, were you following a guide or something else for working with these flags?
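On the userData question above: as far as I understand the AWS provider, with amiFamily: AL2 Karpenter merges user-supplied userData into a MIME multipart document ahead of its own generated bootstrap section, so a snippet that itself calls /etc/eks/bootstrap.sh can end up bootstrapping the node twice; with amiFamily: Custom, the userData is passed through unmodified and must perform the full bootstrap exactly once. A minimal sketch of the Custom approach, assuming the karpenter.k8s.aws/v1beta1 API; the cluster name, AMI ID, and discovery tags are placeholders:

```yaml
apiVersion: karpenter.k8s.aws/v1beta1
kind: EC2NodeClass
metadata:
  name: custom-kubelet-args
spec:
  amiFamily: Custom                    # userData is passed through as-is
  amiSelectorTerms:
    - id: ami-0123456789abcdef0        # placeholder: EKS-optimized AL2 AMI
  subnetSelectorTerms:
    - tags:
        karpenter.sh/discovery: my-cluster    # placeholder tag
  securityGroupSelectorTerms:
    - tags:
        karpenter.sh/discovery: my-cluster    # placeholder tag
  userData: |
    MIME-Version: 1.0
    Content-Type: multipart/mixed; boundary="BOUNDARY"

    --BOUNDARY
    Content-Type: text/x-shellscript; charset="us-ascii"

    #!/bin/bash
    # Bootstrap exactly once, appending the extra kubelet flags here.
    /etc/eks/bootstrap.sh my-cluster \
      --kubelet-extra-args '--cpu-manager-policy=static'
    --BOUNDARY--
```

With AL2, by contrast, user-supplied userData runs before Karpenter's own generated bootstrap section, so calling bootstrap.sh yourself there would run it a second time.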
Description
Observed Behavior:
When provided with the following kubelet args, some nodes (2 out of 400) are NotReady, and Karpenter cannot disrupt them, leaving them in that state forever.
Extra kubelet config:
--cpu-manager-policy=static --enforce-node-allocatable=pods,kube-reserved,system-reserved --system-reserved-cgroup=/system.slice --kube-reserved-cgroup=/system.slice
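One observation about these specific flags (my reading of the kubelet docs, not a confirmed root cause for this issue): when --enforce-node-allocatable includes kube-reserved or system-reserved, the corresponding --kube-reserved-cgroup / --system-reserved-cgroup must point at cgroups that already exist; the kubelet does not create them and fails to start otherwise, which would leave a node NotReady. A hedged pre-flight snippet that could run in userData before the kubelet starts; the paths assume cgroup v1 with the systemd cgroup driver:

```bash
#!/bin/bash
# Assumption: cgroup v1 mounted under /sys/fs/cgroup, systemd cgroup driver.
# Pre-create the per-controller hierarchies for /system.slice so the kubelet
# does not fail on --system-reserved-cgroup / --kube-reserved-cgroup.
for controller in cpu cpuacct memory; do
  mkdir -p "/sys/fs/cgroup/${controller}/system.slice"
done
```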
ec2 nodeclass.yaml
kubelet error log
Expected Behavior:
All the nodes should be Ready; if NotReady nodes come up, Karpenter should recycle or disrupt them.
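Until node repair is available, one stopgap is to find nodes stuck NotReady and delete the Node objects, so that Karpenter's termination finalizer cleans up the backing instances and replacement capacity is provisioned. A rough sketch using plain kubectl and jq (nothing Karpenter-specific is assumed):

```bash
# List nodes whose Ready condition is not "True", then delete them;
# Karpenter's termination finalizer terminates the backing instances.
kubectl get nodes -o json \
  | jq -r '.items[]
           | select(.status.conditions[] | select(.type == "Ready" and .status != "True"))
           | .metadata.name' \
  | xargs -r kubectl delete node
```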
Reproduction Steps (Please include YAML):
Versions:
Kubernetes version (kubectl version): 1.25