Expected Behavior
I expect that the logic mapping cluster node count to typha replicas isn't hardcoded here:
operator/pkg/common/autoscale.go
Lines 17 to 56 in ff57548
Practically, it could be made configurable with a nodesToReplicas ladder, like GKE's managed Calico deployment does via a ConfigMap.
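For illustration, here is a minimal Go sketch of what such a ladder lookup could look like. The rung values and the replicasFor helper are hypothetical, loosely modeled on the nodesToReplicas ladder format used by the cluster-proportional-autoscaler; they are not the operator's actual defaults.

```go
// Hypothetical sketch of a "ladder" mapping node count to typha replicas,
// similar in spirit to the nodesToReplicas ladder that GKE's managed Calico
// deployment configures via a ConfigMap. Rung values are illustrative only.
package main

import "fmt"

// rung pairs a node-count threshold with the replica count to use once the
// cluster has at least that many nodes.
type rung struct {
	nodes    int
	replicas int
}

// defaultLadder is an example default; a real implementation would let the
// admin override it (for instance via the Installation resource).
var defaultLadder = []rung{
	{1, 1},
	{3, 2},
	{100, 3},
	{500, 5},
}

// replicasFor walks the ladder (sorted by node count) and returns the replica
// count of the highest rung whose threshold does not exceed nodeCount.
func replicasFor(ladder []rung, nodeCount int) int {
	replicas := 1
	for _, r := range ladder {
		if nodeCount >= r.nodes {
			replicas = r.replicas
		}
	}
	return replicas
}

func main() {
	for _, n := range []int{1, 2, 4, 150} {
		fmt.Printf("%d node(s) -> %d typha replica(s)\n", n, replicasFor(defaultLadder, n))
	}
}
```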
Current Behavior
The autoscaling profile is fixed and can't be influenced.
Possible Solution
Provide a nodesToReplicas configuration for the typha autoscaler, nested somewhere under the Installation resource, with a default value that mimics the current implementation.
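As a rough illustration only, the override could be shaped as a small API type nested under the Installation spec. The TyphaAutoscaling and NodesToReplicasRung names below are invented for this sketch and are not existing operator fields; leaving the list unset would fall back to the built-in ladder.

```go
// Hypothetical API shape, not an existing operator type: a typha autoscaling
// section carrying an overridable nodesToReplicas ladder whose default would
// mimic today's hardcoded values.
package v1

// NodesToReplicasRung is one step of the ladder: at or above Nodes cluster
// nodes, run Replicas typha pods.
type NodesToReplicasRung struct {
	Nodes    int32 `json:"nodes"`
	Replicas int32 `json:"replicas"`
}

// TyphaAutoscaling holds the configurable ladder for the typha autoscaler.
type TyphaAutoscaling struct {
	// NodesToReplicas overrides the built-in ladder when set.
	// +optional
	NodesToReplicas []NodesToReplicasRung `json:"nodesToReplicas,omitempty"`
}
```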
Context
I'd like this feature to avoid forcing additional nodes onto a small cluster just to house pods that can't schedule next to each other, since that incurs cloud cost, hogs available compute, and wastes energy.
In a small Kubernetes cluster with, for example, just four nodes, where three are reserved for other things and only one is available to run calico-typha, 2/3 calico-typha pods will fail to schedule (# node(s) didn't have free ports for the requested pod ports). When this happens, a cluster-autoscaler could end up creating additional nodes even though the admin determines that one or two calico-typha pods would have sufficed.
Your Environment
AKS 1.28.3 using tigera/operator:v1.28.13
We have "core nodes" and "user nodes": typically just one or possibly two core nodes where workloads like calico-typha should run, plus a few additional "user nodes" where those workloads are forbidden to run via taints.