Failed to provision new node #1180
Comments
From the error logs, it looks like Karpenter is not able to fit all of the daemon set pods running in the cluster onto the allowed instance types.
All DS pods fit on the node when it is launched by other means.
I have the same issue. At the moment my cluster has 2 nodes, one
My Provisioner object:

```yaml
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: default
spec:
  ttlSecondsAfterEmpty: 30
  requirements:
    - key: "topology.kubernetes.io/zone"
      operator: In
      values: ["eu-west-1a"]
    - key: "karpenter.sh/capacity-type"
      operator: In
      values: ["spot", "on-demand"]
  limits:
    resources:
      cpu: 50
  provider:
    tags:
      cluster: eks-ci-1
    instanceProfile: KarpenterNodeInstanceProfile-eks-ci-1
```
This is a known issue. Currently, daemon sets are not correctly considered when provisioning nodes. A fix is coming.
I see.
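To illustrate what "not correctly considered" means here: when Karpenter picks an instance type for pending pods, the daemon set pods that will also land on the new node consume part of its allocatable capacity, so their requests have to be reserved before checking whether the pending pods fit. The sketch below is not Karpenter's actual code, just a minimal model of that accounting with made-up resource numbers:

```go
package main

import "fmt"

// Resources holds requests in CPU millicores and memory MiB, for simplicity.
type Resources struct {
	CPU, Memory int64
}

// fits reports whether the pending pods fit on an instance type after
// reserving room for every daemon set pod that will run on the node.
// Passing nil daemon sets models the buggy behavior of ignoring them.
func fits(allocatable Resources, daemonSets, pending []Resources) bool {
	var used Resources
	for _, r := range append(append([]Resources{}, daemonSets...), pending...) {
		used.CPU += r.CPU
		used.Memory += r.Memory
	}
	return used.CPU <= allocatable.CPU && used.Memory <= allocatable.Memory
}

func main() {
	alloc := Resources{CPU: 2000, Memory: 4096} // e.g. a 2-vCPU instance
	ds := []Resources{{CPU: 300, Memory: 512}, {CPU: 300, Memory: 256}}
	pod := []Resources{{CPU: 1500, Memory: 2048}}

	// Ignoring daemon sets, the pod appears to fit; accounting for them
	// shows this instance type is actually too small.
	fmt.Println(fits(alloc, nil, pod))
	fmt.Println(fits(alloc, ds, pod))
}
```

When the daemon set overhead is skipped, the scheduler on the real node still enforces it, so the launched node can end up unable to run the pending pods it was provisioned for.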
Closing in favor of #1084. We'll get you sorted out ASAP @alfianabdi |
Version
Karpenter: v0.5.3
Kubernetes: v1.21.2-eks-0389ca3
Expected Behavior
A new node is provisioned when there are pending pods.
Actual Behavior
A new node is not provisioned. The controller threw these errors:
Steps to Reproduce the Problem
Create the provisioner and deployment as in the resource specs below.
Resource Specs and Logs
Provisioner
Deployment
Logs