
I am getting this error when deploying Kafka => Ignoring pod, pod anti-affinity is not supported. But some Kafka pods deployed. #1507

Closed
burakhalefoglu opened this issue Mar 14, 2022 · 3 comments
Assignees
Labels
question Further information is requested

Comments


burakhalefoglu commented Mar 14, 2022

my provisioner:

apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: default
spec:
  limits:
    resources:
      cpu: "1000"
      memory: 1000Gi
  requirements:
    - key: "node.kubernetes.io/instance-type"
      operator: In
      values: ["r5n.large","r5n.xlarge","r5n.2xlarge","r5n.4xlarge","t3.large","t3.xlarge","t3.2xlarge","t3.4xlarge","m5n.large","m5n.xlarge","m5n.2xlarge","m5n.4xlarge","c5n.xlarge","c5n.2xlarge","c5n.4xlarge"]
    - key: "topology.kubernetes.io/zone"
      operator: In
      values: ["us-east-2a", "us-east-2b", "us-east-2c"]
    - key: "karpenter.sh/capacity-type" # Defaults to on-demand
      operator: In
      values: ["spot"] # ["spot", "on-demand"]
  provider:
    subnetSelector:
      karpenter.sh/discovery: xxx
    securityGroupSelector:
      karpenter.sh/discovery: xxx
  ttlSecondsAfterEmpty: 30
  ttlSecondsUntilExpired: 2592000


my cluster config ->

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: xxx
  region: us-east-2
  version: '1.21'
  tags:
    karpenter.sh/discovery: xxx
managedNodeGroups:
  - name: xxx
    desiredCapacity: 2
    labels:
      dedicated: spot
    volumeSize: 20
    amiFamily: AmazonLinux2
    minSize: 1
    maxSize: 20
    spot: true
    privateNetworking: true
    volumeEncrypted: true
    availabilityZones: ["us-east-2a","us-east-2b","us-east-2c"]
    instanceTypes: ["c5n.xlarge","c5n.2xlarge","c5n.4xlarge","r5n.xlarge","r5n.2xlarge","r5n.4xlarge","m5n.xlarge","m5n.2xlarge","m5n.4xlarge"]
    iam:
      withAddonPolicies:
        autoScaler: true
    ssh:
      allow: false

kafka pods:
kafka kafka-zookeeper-0 0/1 Pending 0 0s
kafka kafka-0 0/1 Pending 0 0s
kafka kafka-1 0/1 Pending 0 0s
kafka kafka-2 0/1 Pending 0 0s
kafka kafka-2 0/1 Pending 0 0s
kafka kafka-3 0/1 Pending 0 0s
kafka kafka-3 0/1 Pending 0 0s
kafka kafka-4 0/1 Pending 0 0s
kafka kafka-4 0/1 Pending 0 0s
kafka kafka-zookeeper-0 0/1 Pending 0 6s
kafka kafka-0 0/1 Pending 0 6s
kafka kafka-zookeeper-0 0/1 ContainerCreating 0 6s
kafka kafka-0 0/1 ContainerCreating 0 6s
kafka kafka-1 0/1 Pending 0 6s
kafka kafka-1 0/1 ContainerCreating 0 6s
logging elasticsearch-master-2 0/1 Pending 0 20m
kafka kafka-1 0/1 Running 0 26s
kafka kafka-1 0/1 Error 0 36s
kafka kafka-1 0/1 Running 1 37s
kafka kafka-zookeeper-0 0/1 Running 0 37s
kafka kafka-0 0/1 Running 0 38s
kafka kafka-zookeeper-0 1/1 Running 0 46s
kafka kafka-1 0/1 Error 1 47s
kafka kafka-0 0/1 Error 0 47s
kafka kafka-0 0/1 Running 1 48s
kafka kafka-0 1/1 Running 1 56s
kafka kafka-1 0/1 CrashLoopBackOff 1 56s
kafka kafka-1 0/1 Running 2 57s
kafka kafka-1 1/1 Running 2 66s

kafka-0 and kafka-1 are deployed, but kafka-2, kafka-3, and kafka-4 stay Pending.

I have 2 nodes now:
ip-xx-xx-xx-xx.us-east-2.compute.internal Ready 36m
ip-xx-xx-xx-xx.us-east-2.compute.internal Ready 36m

Version

Karpenter: v0.6.5

Kubernetes: v1.21.x

Expected Behavior

Actual Behavior

Steps to Reproduce the Problem

Resource Specs and Logs

@burakhalefoglu burakhalefoglu added the bug Something isn't working label Mar 14, 2022
@ellistarn ellistarn added question Further information is requested and removed bug Something isn't working labels Mar 14, 2022
@dewjam dewjam self-assigned this Mar 14, 2022
Contributor

dewjam commented Mar 14, 2022

Hey @burakhalefoglu ,
One thing to note about Karpenter: it will only add new nodes when pods are unable to be scheduled (i.e., are stuck in the Pending state). So if there is sufficient capacity in the cluster that meets a pod's requirements, kube-scheduler will place pods on those existing nodes first, before Karpenter takes any action.

In your case, I would imagine that kafka-0 and kafka-1 are being scheduled on the existing managed nodes, but Karpenter is ignoring kafka-2 through kafka-4 because pod anti-affinity is not currently supported (#942).
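For context, Kafka charts commonly set a required pod anti-affinity rule on the broker StatefulSet so that no two brokers land on the same node. The snippet below is an illustrative sketch (the label key/values are assumptions, not taken from this issue); it is this kind of term in the pod spec that Karpenter v0.6.x could not evaluate, causing it to skip the pod:

```yaml
# Hypothetical excerpt from a Kafka StatefulSet pod template.
# Labels are illustrative; check your chart's rendered manifests.
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app.kubernetes.io/name: kafka
        topologyKey: kubernetes.io/hostname  # at most one broker per node
```

With only two existing nodes and this rule in place, two brokers schedule and the rest stay Pending, since Karpenter ignores pods carrying anti-affinity terms.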

Can you run this command to confirm?

kubectl get pods -o wide -n <kafka_namespace>

Also, this may help us understand why pods are still pending.

kubectl describe pod <kafka_pod> -n <kafka_namespace>
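If the describe output is long, the scheduler's reason is usually in the Events section. As a sketch (assuming the namespace is kafka), you can also pull the scheduling failures directly:

```shell
# List FailedScheduling events in the kafka namespace (namespace assumed).
kubectl get events -n kafka --field-selector reason=FailedScheduling

# Show which node each pod landed on (Pending pods show <none>).
kubectl get pods -n kafka -o wide
```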

Contributor

dewjam commented Mar 15, 2022

Hey @burakhalefoglu . Any updates on this issue? Anything else we can help with?

Contributor

dewjam commented Mar 21, 2022

Hey @burakhalefoglu . I haven't heard back on this issue for a while now. I'm going to go ahead and close it. Feel free to reopen if you still have questions.

Thanks!

@dewjam dewjam closed this as completed Mar 21, 2022