podAffinity #985
Comments
+1 to podAffinity support. We're investigating whether we can use Karpenter with Agones, and found that Agones uses podAffinity for pod scheduling to pack as many pods as possible onto the smallest set of nodes (doc).
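For context, a minimal sketch of the kind of packing affinity being described: pods carry a label and prefer nodes that already run pods with that label. The `agones.dev/role: gameserver` selector and the weight are assumptions based on the linked doc, not copied from the Agones source:

```yaml
affinity:
  podAffinity:
    # Soft affinity: the scheduler prefers, but does not require,
    # co-locating this pod with existing game server pods.
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100
      podAffinityTerm:
        labelSelector:
          matchLabels:
            agones.dev/role: gameserver  # assumed label, for illustration
        # Pack onto the same node rather than the same zone.
        topologyKey: kubernetes.io/hostname
```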
For hostname affinity, what if the pods aren't created at the same time? If Karpenter only sees the first pod, it will launch a node for it. If the next pod comes in a minute later, it's too late for that pod to be included in the scheduling algorithm. This problem doesn't exist for anti-affinity, because scheduling the pods apart achieves the same objective. FWIW, I think this is a similar problem for the cluster autoscaler. I'd love to make progress on podAffinity, but I don't have a viable path forward. How are you envisioning it? I could potentially see soft pod affinity, where we attempt to co-schedule the pods if we happen to see them together.
Hi, thanks for the suggestion. As the doc says, Agones itself admits it isn't perfect, but they seem to find it works well in most cases. BTW, I'll also try the cluster autoscaler to see how Agones works with it, thanks!
Reading through Agones, preferred pod affinity would be doable.
Sounds great! Dedicated game servers tend to need very rapid autoscaling, so Karpenter with Agones would be fantastic, I believe 👍
Further, as I study the podAffinity page, I think it would also be doable to implement zonal podAffinity (e.g. schedule all pods into a single zone).
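A sketch of what zonal podAffinity looks like in a pod spec, assuming the illustrative label `app: game` below; with a zone-level topologyKey, every matching pod is pulled into whichever zone the first one landed in:

```yaml
affinity:
  podAffinity:
    # Hard affinity: new pods must land in a zone that already
    # runs a pod matching the selector.
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app: game  # illustrative label, not from the thread
      # Zone-scoped rather than node-scoped co-location.
      topologyKey: topology.kubernetes.io/zone
```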
Just another vote for podAffinity, specifically preferred podAffinity.
implement pod affinity & anti-affinity
- implement pod affinity/anti-affinity
- rework topology spread support

Fixes #942 and #985
Co-authored-by: Ellis Tarn <[email protected]>
If you've got a fairly recent version of Karpenter already running, you can drop in this snapshot controller image to test out pod affinity & pod anti-affinity. This is just a snapshot, not meant for production, etc.
Tell us about your request
What do you want us to build?
I know you'll love this: support podAffinity.
Tell us about the problem you're trying to solve. What are you trying to do, and why is it hard?
This is a sibling issue to #942, but specifically about podAffinity. Cilium's Hubble has a component called hubble-relay that needs to run on the same node as a Cilium agent.
The Cilium install manifests have this podAffinity as non-optional: https://github.com/cilium/cilium/blob/master/install/kubernetes/cilium/templates/hubble-relay/deployment.yaml#L34-L43
This is a fairly essential component for anyone using Cilium.
On many clusters, one can assume a Cilium agent runs on every node, but there are clusters using Cilium alongside Fargate nodes or Windows nodes, where the agent doesn't run.
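For reference, the affinity in the linked manifest has roughly this shape (a paraphrase rather than a verbatim copy; the `k8s-app: cilium` selector is assumed to be how the agent pods are labeled there):

```yaml
affinity:
  podAffinity:
    # Hard requirement: hubble-relay may only be scheduled onto a
    # node that already runs a cilium agent pod.
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          k8s-app: cilium
      topologyKey: kubernetes.io/hostname
```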
Are you currently working around this issue?
In our case, we can patch out that podAffinity.
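A minimal sketch of such a patch, assuming hubble-relay runs as a Deployment in the kube-system namespace (both assumptions; adjust to your install). Under kubectl's default strategic-merge semantics, setting a field to null deletes it:

```yaml
# Hypothetical file name: remove-affinity-patch.yaml
# Apply with:
#   kubectl patch deployment hubble-relay -n kube-system \
#     --patch-file remove-affinity-patch.yaml
spec:
  template:
    spec:
      # null removes the field, dropping the hard podAffinity entirely.
      affinity: null
```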