Pod with Volume couldn't be scheduled on any node #1887
Comments
I was just coming in here to open a similar ticket. We're running the same version of EKS and testing the same version of Karpenter. We noticed that Karpenter didn't seem to understand the max volume limit we set on our EBS CSI driver.

Can you check your nodes and see whether they already have too many EBS volumes attached?
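A rough sketch of how one might check this, assuming the AWS EBS CSI driver is registered as `ebs.csi.aws.com` (the node name below is a placeholder): compare the attach limit the driver reports against the volumes already attached to each node.

```
# Per-node attach limit reported by the EBS CSI driver (assumes driver name ebs.csi.aws.com)
kubectl get csinode <node-name> \
  -o jsonpath='{.spec.drivers[?(@.name=="ebs.csi.aws.com")].allocatable.count}'

# Volumes currently attached, listed by node
kubectl get volumeattachments \
  -o custom-columns=NODE:.spec.nodeName,PV:.spec.source.persistentVolumeName
```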
@diranged If you look at the output of
I think this is a different issue; would you mind creating a new one? The error in this issue doesn't indicate a max-volumes-per-node problem.
So it turns out the pod was not scheduled because of unsatisfiable (and bogus) topologySpreadConstraints. However, Karpenter should have been able to detect this and log something about it.
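For illustration only, since the actual manifest from this issue isn't shown: one way a spread constraint becomes unsatisfiable is when no node carries the referenced `topologyKey` label while `whenUnsatisfiable: DoNotSchedule` is set, in which case the pod stays Pending. All names below are made up.

```
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: spread-demo
  labels:
    app: spread-demo
spec:
  topologySpreadConstraints:
    - maxSkew: 1
      # No node carries this label, so with DoNotSchedule the pod can never be placed.
      topologyKey: example.com/nonexistent-zone
      whenUnsatisfiable: DoNotSchedule
      labelSelector:
        matchLabels:
          app: spread-demo
  containers:
    - name: pause
      image: registry.k8s.io/pause:3.9
EOF
```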
Could you give me a little more info on this? Did you have an incorrect value for one of the constraint fields? FWIW, we'd want Karpenter to conform to what the kube-scheduler would do rather than add special logic on top of it, so I'm curious what the issue was.
Labeled for closure due to inactivity in 10 days.
Version
Karpenter: v0.10.1
Kubernetes: v1.22.6-eks-7d68063
Expected Behavior
Actual Behavior
I tried to upgrade a Redis cluster deployed with Bitnami's chart, and Karpenter failed to schedule the new redis-master pod (which is managed by a StatefulSet and has a PVC-backed volume).
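One thing worth inspecting in this situation, assuming the volume is an already-bound EBS-backed PV: the PV carries a node-affinity rule pinning the pod to a single availability zone, which any node Karpenter provisions has to satisfy. A quick way to see that requirement (the PV name is a placeholder):

```
# Show which topology (e.g. availability zone) the bound volume requires
kubectl get pv <pv-name> -o jsonpath='{.spec.nodeAffinity.required.nodeSelectorTerms}'
```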
The pod remained in the Pending state:
Here are Karpenter's logs at debug level:
I managed to solve the problem by draining an existing node where I knew the pod could be scheduled and then uncordoning it right away:
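The exact commands weren't captured here; the workaround described above amounts to something like the following (the node name is a placeholder):

```
# Evict the other pods from a node known to fit the pending pod...
kubectl drain ip-10-0-0-1.ec2.internal --ignore-daemonsets --delete-emptydir-data
# ...then immediately make the node schedulable again so the pending pod can land on it
kubectl uncordon ip-10-0-0-1.ec2.internal
```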
Steps to Reproduce the Problem
Resource Specs and Logs