This repository was archived by the owner on Jan 11, 2023. It is now read-only.
Is this an ISSUE or FEATURE REQUEST? (choose one):
---Issue
What version of acs-engine?:
---0.12.5
Orchestrator and version (e.g. Kubernetes, DC/OS, Swarm)
Kubernetes
What happened: When deploying a template that contains multiple masters, the calico CNI config doesn't get written to all master nodes. I ran through it three times, and each time only one master picked up the calico CNI config. Copying the calico config under net.d/ from the healthy master fixed the issue.
What you expected to happen: Healthy cluster with all CNI configs present
How to reproduce it (as minimally and precisely as possible):
Run a config with multiple masters, like the one below:
{
  "apiVersion": "vlabs",
  "properties": {
    "orchestratorProfile": {
      "orchestratorType": "Kubernetes",
      "orchestratorRelease": "1.9",
      "kubernetesConfig": {
        "networkPolicy": "calico",
        "enableDataEncryptionAtRest": true
      }
    },
    "masterProfile": {
      "count": 3,
      "dnsPrefix": "xx-xxxxxxxx",
      "vmSize": "Standard_DS2_v2_Promo"
    },
Anything else we need to know:
I just deployed a new cluster using my PR #2521 with 3 master nodes on 1.9, and all three master nodes have identical files under the /etc/cni/net.d directory, specifically 10-calico.conflist, which I believe is what you're referring to. Please feel free to check out the PR and let me know if you still see the same issue; otherwise I will assume the PR fixes this problem.
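One quick way to confirm whether all masters picked up the config is to pull /etc/cni/net.d/10-calico.conflist from each master (e.g. with scp) and compare checksums of the copies. A minimal sketch — the configs_match helper and the local filenames are illustrative, not part of acs-engine:

```python
import hashlib
from pathlib import Path

def sha256_of(path):
    """Return the hex SHA-256 digest of a file's contents."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def configs_match(paths):
    """True if every file in `paths` has byte-identical contents."""
    digests = {sha256_of(p) for p in paths}
    return len(digests) == 1

# Example: after copying 10-calico.conflist from each master into
# local files (hypothetical names), compare them:
#   configs_match(["master0.conflist", "master1.conflist", "master2.conflist"])
```

If configs_match returns False, at least one master is missing or carrying a different CNI config.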