
Support PodAntiAffinity #1010

Closed
Noksa opened this issue Dec 16, 2021 · 3 comments
Labels
feature New feature or request

Comments

@Noksa

Noksa commented Dec 16, 2021

Tell us about your request

As you know, Kubernetes has a great podAntiAffinity mechanism that lets us place replicas of the same pod on different nodes.
For example:

podAntiAffinity:
  requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchExpressions:
          - key: "kazooService"
            operator: In
            values:
              - "kazoo-sbc"
      topologyKey: "kubernetes.io/hostname"

At the same time, Karpenter supports neither podAffinity nor podAntiAffinity.

We can use topologySpreadConstraints instead, but its minimum maxSkew value is 1.
This means that two pods of the same workload can still be placed on the same node.
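
A minimal sketch of the constraint I'm referring to, reusing the kazooService label from the example above (field values are illustrative):

topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: "kubernetes.io/hostname"
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        kazooService: "kazoo-sbc"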

So how can we tell Karpenter to create a new node for each such pod instead of placing them on an existing node that still has capacity?

I know that I can use

resources:
  requests:

to make a pod claim all of a node's resources, but if I do this, all other pods (from other Deployments/StatefulSets) won't be able to be placed on that node even if they satisfy its scheduling requirements.
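
For illustration, a hypothetical fragment of such a pod spec; the numbers are made up and would need to match the instance type:

resources:
  requests:
    # illustrative values sized to fill e.g. a 4-vCPU / 16Gi node
    cpu: "3500m"
    memory: "14Gi"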

Can you please tell me if there is a workaround?

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment
@Noksa Noksa added the feature New feature or request label Dec 16, 2021
@ellistarn ellistarn changed the title Is there a way to use the same as PodAntiAffinity behaviour ? Support PodAntiAffinity Dec 18, 2021
@ellistarn
Contributor

Closing in favor of #942. Can you post your question there?

@ellistarn
Contributor

ellistarn commented Dec 18, 2021

We can use topologySpreadConstraints instead, but its minimum maxSkew value is 1.
This means that two pods of the same workload can still be placed on the same node.

If you use hostname topology with max skew of one, you will only get one pod per node.

edit: Thinking further, in a single-node cluster the kube-scheduler will place all pods on that node if there is room, since provisioning isn't part of its decision making.

I think this convinces me that we need anti-affinity support, and that my previous statements about topology being an adequate replacement do not hold in all use cases.

@Noksa
Author

Noksa commented Jan 12, 2022

@ellistarn
Actually, topologySpreadConstraints doesn't work properly when we want to place one pod per node, because of this:
kubernetes/kubernetes#94627 (comment) (but in my case maxSkew is 1).
I mean that a single node always satisfies maxSkew.

So let's say I have only one node with the label db-only=true. ALL pods that tolerate this label and have a nodeSelector or nodeAffinity matching it will be placed on that single node.
And new nodes won't be provisioned.
topologySpreadConstraints only starts to work properly once there are >=2 nodes included in the constraint's calculation.
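
A rough sketch of the combination I mean; the db-only label/taint comes from my setup, while the app label is hypothetical:

nodeSelector:
  db-only: "true"
tolerations:
  - key: "db-only"
    operator: "Equal"
    value: "true"
    effect: "NoSchedule"
topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: "kubernetes.io/hostname"
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        app: "my-db"   # hypothetical label identifying these pods

With only one db-only=true node, that single node satisfies maxSkew, so everything is scheduled onto it and no new node is provisioned.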

That's why I asked for PodAntiAffinity support.

Currently it is impossible to place one pod per node with Karpenter when we use nodeSelector or nodeAffinity: Karpenter will create only one node, and then the kube-scheduler will place all pods on that node as long as it has capacity.

So please consider supporting PodAntiAffinity. Thanks!

@Noksa Noksa mentioned this issue Jan 12, 2022