AKS / UToronto: calico-typha scaled to three replicas, forcing three core nodes #3592
Findings

A way forward

There is no great solution here: an upstream feature request plus mitigation is all that is reasonable at the moment.
This is similar to #2490, where the same problem is considered for GKE clusters. In the AKS clusters, though, calico is deployed in the "calico-system" namespace via the "tigera-operator", which itself runs in a namespace of the same name.
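For reference, the replica count and the anti-affinity rule that spreads the pods across nodes can be inspected directly on the operator-managed Deployment. This is a minimal sketch, assuming the default object names rendered by the tigera operator (a `calico-typha` Deployment in the `calico-system` namespace):

```sh
# Current replica count of the typha Deployment (expected: 3 here)
kubectl get deployment calico-typha -n calico-system \
  -o jsonpath='{.spec.replicas}'

# The podAntiAffinity block (if set) that prevents two typha pods
# from co-locating on the same node
kubectl get deployment calico-typha -n calico-system \
  -o jsonpath='{.spec.template.spec.affinity.podAntiAffinity}'
```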
I've not yet figured out how to re-configure it to not require three replicas, but it would be good to reduce that number, or to allow at least two of the three pods to co-locate on a node, so we aren't forced to run three core nodes.
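The naive approach would be to scale the Deployment down by hand. This is a sketch only, under the assumption (consistent with the findings above) that the tigera operator reconciles the Deployment and will scale it back up shortly afterwards:

```sh
# Attempt to drop typha to two replicas so only two core nodes are
# needed. NOTE: the tigera operator owns this Deployment, so a manual
# edit like this is expected to be reverted on its next reconcile.
kubectl scale deployment calico-typha -n calico-system --replicas=2
```

Editing the podAntiAffinity rule directly on the Deployment presumably fails the same way, since the operator treats its rendered manifest as the source of truth. That is why a durable fix seems to require either an upstream feature request against the operator or a mitigation on our side.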