Make setting hostPort on daemonset configurable in the helm chart. #3613
Comments
Hi @jo-carter thanks for reporting! Be sure to check out the docs while you wait for a human to take a look at this 🙂 Cheers!
Hi @jo-carter thanks for this suggestion. I'll bring this to the attention of the team to see how we will proceed. Will update you when I can!
Hi @jo-carter can you provide us with more details of how you are deploying simultaneous Ingress Controllers? Are you deploying multiple replicas in a single node, or are you deploying across multiple nodes? Thanks!
Hi @shaun-nx we are deploying multiple ingress controllers, each with multiple replicas, across the same worker nodes (hence we get a hostPort clash). The purpose of the multiple Ingress Controllers is segregation of workloads. We are deploying using the Helm Ingress Operator on OpenShift 4.10, deploying NIC to different namespaces. That project sources its charts from this repo, hence I've attached the issue here. Sure, let me know if you want the IO manifests, or I'm happy to provide one with NIC's chart alone if needed - the result will be the same: you cannot bind to the same hostPort twice.
@jo-carter any specific reason why you're using a daemonset?
Hi @lucacome it's a single replica per node per ingress controller (for use with nodePorts) - it's only multiple pods per node due to the multiple ingress controller daemonsets that are scheduled there. This is deployed as a daemonset - per the docs, this is for workload segregation: each ingress controller handles its own subset of the total applications. This is also an issue if any other daemonset wishes to use the 443 or 80 hostPorts too - outside of my use case, which is deploying multiple ingress controllers.
Hi, I have the exact same need for the exact same reason. We are currently debating whether to patch the DaemonSet after each helm upgrade via CI - but making the single non-configurable part of the original template customizable would be the much better solution. Thank you.
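The CI workaround mentioned above could look something like the following JSON patch, applied after every `helm upgrade`. This is only a sketch: the container and port indices are assumptions about the rendered pod spec, and the release name is hypothetical.

```yaml
# remove-hostport.yaml - hypothetical JSON patch run from CI after `helm upgrade`.
# Indices assume the controller container is first and exposes http/https
# as its first two ports; verify against your rendered manifest.
- op: remove
  path: /spec/template/spec/containers/0/ports/0/hostPort
- op: remove
  path: /spec/template/spec/containers/0/ports/1/hostPort
```

Applied with, for example, `kubectl patch daemonset my-release-nginx-ingress --type=json --patch-file remove-hostport.yaml`. It works, but it re-runs on every upgrade and fights the chart, which is why a values-file switch is the better fix.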
One thing that we would like to better understand is why, in this situation, you are using a daemonset (not that we might not address the issue).
Hi @brianehlert I fail to see how using a deployment would help here. A deployment does not guarantee the one-pod-per-node scheduling we rely on. The alternative is to hack together a fake version of a daemonset.
There is nothing I can add. We also followed the approach of using a deployment, just because it seemed to be the obvious way judging by the chart design, but we gave up trying to replicate daemonset behaviour, because there were always cases with unintended effects or needing manual override. We have a set of n nodes which are configured (network segment, dimensions, firewall integration) to run ICs only, each on different ports. Deploying n additional nodes for each stack of ICs would be huge overhead, just as mimicking daemonset functions would be. It would even be nice to disable the hostPort altogether, so that the service with its externalIPs and the already implemented options for custom ports could kick in properly. Thank you.
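The "service with externalIPs and custom ports" route described above could be sketched like this, assuming hostPort were removed from the pod spec. All names, labels, and addresses here are illustrative assumptions, not the chart's actual output:

```yaml
# Hypothetical Service exposing one IC stack without any hostPort,
# using externalIPs and custom ports so multiple stacks can coexist
# on the same nodes (names/labels/IPs are assumptions).
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-stack-a
spec:
  selector:
    app: nginx-ingress-stack-a
  externalIPs:
  - 192.0.2.10          # example address (RFC 5737 documentation range)
  ports:
  - name: http
    port: 8080          # custom per-stack port; avoids clashing with other ICs
    targetPort: 80
  - name: https
    port: 8443
    targetPort: 443
```

A second stack would use the same pattern with different `port` values, which is exactly what the hard-coded hostPort currently prevents.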
We have customers that run a deployment for scaling flexibility but use anti-affinity to force a spread across nodes. Still valid, and it supports the dynamic scaling need. Now, I won't forget that this depends on a dynamic disaggregation (aka a load balancer) in front of your cluster(s), and that is not always available or feasible. Honestly, we have customers doing all kinds of things - because that is what suits their needs. And I always want to understand the reasons why, so that we can solve the problems in the better way for the most folks. I would be happy for anyone who wants to pick this up and take a stab at it.
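The deployment-with-anti-affinity pattern described above can be sketched as follows; the names and labels are assumptions for illustration, not taken from the chart:

```yaml
# Sketch of a Deployment spread one-pod-per-node via required
# pod anti-affinity (names/labels are hypothetical).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress
    spec:
      affinity:
        podAntiAffinity:
          # "required" blocks two IC pods from landing on the same node;
          # replicas can still be scaled up to the node count.
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: nginx-ingress
            topologyKey: kubernetes.io/hostname
      containers:
      - name: nginx-ingress
        image: nginx/nginx-ingress:latest   # placeholder image tag
```

Unlike a daemonset, this does not automatically track nodes joining or leaving, which is the "fake daemonset" behaviour the earlier comments describe having to replicate by hand.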
Thanks @brianehlert Just a side note - you can indeed use taints and tolerations, and node selectors, with daemonsets. You'll notice one is implicitly in force for regular daemonsets to ensure that they do not schedule on control plane (master) nodes. I'd say a daemonset should be the de facto standard for NIC - with NGINX there is little reason to schedule more than one pod per node (per IC), given its ability to scale vertically via the worker-process model.
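The point above - that daemonsets honour node selectors and tolerations like any other pod spec - can be sketched by pinning an IC daemonset to dedicated ingress nodes. The label and taint keys here are assumptions:

```yaml
# Sketch: a daemonset restricted to dedicated ingress nodes via a
# nodeSelector plus a toleration for the matching taint
# (label/taint keys and names are hypothetical).
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ingress-edge
spec:
  selector:
    matchLabels:
      app: nginx-ingress-edge
  template:
    metadata:
      labels:
        app: nginx-ingress-edge
    spec:
      nodeSelector:
        node-role.example.com/ingress: "true"   # assumed node label
      tolerations:
      - key: node-role.example.com/ingress      # assumed node taint
        operator: Exists
        effect: NoSchedule
      containers:
      - name: nginx-ingress
        image: nginx/nginx-ingress:latest       # placeholder image tag
```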
Problem
Multiple NIC deployments cannot run simultaneously as daemonsets with nodePorts using the current helm chart, because hostPort is hard-coded into the template pod spec for ports 80 and 443.
daemon-set-controller-template
Describe the solution you'd like
Make setting hostPort optional in the helm chart/helm values file.
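One way the chart could expose this is a values-file switch that the daemonset template checks before emitting hostPort. The key names below are hypothetical, not the chart's actual API:

```yaml
# values.yaml (hypothetical keys - not the chart's real schema)
controller:
  hostPort:
    enable: false     # when false, the daemonset pod spec omits hostPort
    http: 80
    https: 443

# templates/controller-daemonset.yaml (sketch of the conditional)
#     ports:
#     - name: http
#       containerPort: 80
#       {{- if .Values.controller.hostPort.enable }}
#       hostPort: {{ .Values.controller.hostPort.http }}
#       {{- end }}
#     - name: https
#       containerPort: 443
#       {{- if .Values.controller.hostPort.enable }}
#       hostPort: {{ .Values.controller.hostPort.https }}
#       {{- end }}
```

With `enable: false` (or per-IC port values), multiple IC daemonsets could share the same worker nodes without a hostPort bind clash.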