Windows Networking Issues with ExpressRoute #1713
Also confirming that switching the container network to NAT restores internet access and gets traffic across the ExpressRoute. Obviously not an ideal solution, as this horribly breaks inter-pod communication. I can't see anything obviously misconfigured with the network itself or the adapters themselves.
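For reference, the kind of checks this was based on looks roughly like the following, a minimal sketch assuming RDP/PowerShell access to the Windows node (adapter and network names vary per deployment):

```powershell
# Run on the Windows agent node (not inside a pod); names shown are examples only.
Get-NetAdapter                            # physical NIC plus the vEthernet adapters created for containers
Get-NetIPConfiguration                    # per-adapter IP, gateway and DNS settings
Get-NetRoute -AddressFamily IPv4 | Sort-Object DestinationPrefix   # host routing table
docker network ls                         # which container networks exist (nat, transparent, l2bridge)
```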
@JiangtianLi Is this a known behavioral limitation of Windows clusters w/ VNET?
@jackfrancis This is possibly a limitation for RS1 Windows container networking. @LiamLeane What is your apimodel and custom vnet setup config? @madhanrm Is this a known issue to you?
API model is at the end of the OP. ARM for the network is below. Fairly simple setup: a single subnet 10.59.236.0/23 with an ExpressRoute peering. The only real curiosity with our setup, beyond the ExpressRoute, is keeping the kubernetes networks outside of 10.0.0.0/8 as it's used internally. I confirmed this problem does not occur from a peered vnet, and tried it in a different subscription with a peered network to confirm it wasn't subscription specific. Initially I wasn't sure if it was something up with routing or masquerading on the Windows node, but adding routes on the Windows node to send internal traffic (the other end of the ExpressRoute) to the vnet gateway didn't resolve the issue. Route table included at the end of this comment.
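For context, the route workaround attempted above amounts to something along these lines. This is a sketch only; the on-prem prefix, adapter name and next hop are placeholders, not values from the original report:

```powershell
# Sketch of the manual route tried on the Windows node; all values are placeholders.
# "10.71.0.0/16" stands in for the on-prem range reached over the ExpressRoute.
$nic = Get-NetAdapter -Name "Ethernet"          # primary NIC of the node; the name may differ
New-NetRoute -DestinationPrefix "10.71.0.0/16" `
             -InterfaceIndex $nic.ifIndex `
             -NextHop "10.59.236.1"             # first usable address of the subnet (Azure's default gateway)
```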
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contribution. Note that acs-engine is deprecated; see https://github.com/Azure/aks-engine instead.
Is this a request for help?:
Yes
Is this an ISSUE or FEATURE REQUEST? (choose one):
Issue
What version of acs-engine?:
0.8.0 & build from master (both have the same issue)
Orchestrator and version (e.g. Kubernetes, DC/OS, Swarm)
Kubernetes 1.6, 1.7 & 1.8 (all have the same issue)
What happened:
Windows-based pods can reach other pods and services but cannot reach the internet or internal (10.x) subnets which are peered in. Pods can reach other hosts on the same subnet (e.g. the static IP of the master).
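As an illustration, the symptom can be checked from inside a Windows pod along these lines (pod name and target addresses are examples; 10.71.44.72 is the on-prem test address used later in this report):

```powershell
# Run from inside the Windows pod, e.g. via: kubectl exec -it <windows-pod> -- powershell
Test-NetConnection 10.59.236.5               # example host on the same vnet subnet: reachable
Test-NetConnection 10.71.44.72               # on-prem host over the ExpressRoute: unreachable from the pod
Test-NetConnection www.bing.com -Port 80     # internet: unreachable from the pod, fine from the node
```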
Less important (easy to fix within the pod itself): Windows pods are not using kube-dns for resolving addresses; instead DNS points at the bridge gateway.
Directly querying kube-dns works OK, and resetting the DNS server on the pod's interfaces from within the pod corrects this issue.
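A rough sketch of that per-pod DNS workaround, assuming PowerShell inside the pod (the kube-dns service IP shown is a placeholder; use the cluster's actual DNS service IP):

```powershell
# Inside the Windows pod: check which DNS server the interface is using, then override it.
Get-DnsClientServerAddress -AddressFamily IPv4          # shows the bridge gateway instead of kube-dns
$kubeDns = "192.168.0.10"                               # placeholder: substitute the real kube-dns service IP
$ifIndex = Get-NetIPConfiguration | Select-Object -First 1 -ExpandProperty InterfaceIndex
Set-DnsClientServerAddress -InterfaceIndex $ifIndex -ServerAddresses $kubeDns
Resolve-DnsName kubernetes.default.svc.cluster.local    # verify cluster DNS resolution now works
```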
What you expected to happen:
Windows pods to have full network connectivity (internet and peered subnets) and the DNS server set correctly.
How to reproduce it (as minimally and precisely as possible):
Every configuration combination which uses a custom vnet peered to an ExpressRoute seems to cause this problem. As well as the configuration below, I have tried different IP configurations, no direct IP configuration, etc.
Anything else we need to know:
Linux-based pods have both internet access and access to peered subnets. The custom vnet on the peered network is using 10.59.236.0/23 (all network interfaces attached to nodes are on this vnet and are reachable over the ExpressRoute); the routing table is attached to the subnet. The same commands which fail in a Windows-based pod work on the Windows node. 10.71.44.72 is an on-prem address on the other side of the ExpressRoute.
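One additional check that can help here, not taken from the original report, is comparing the routes Azure has actually programmed on a node NIC against the route table attached to the subnet. A sketch using Azure PowerShell, with placeholder resource names:

```powershell
# Placeholders: use the cluster's actual resource group and the Windows node's NIC name.
Get-AzEffectiveRouteTable -ResourceGroupName "my-k8s-rg" -NetworkInterfaceName "k8s-windowspool-0-nic" |
    Format-Table Source, AddressPrefix, NextHopType, NextHopIpAddress
```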
Linux-based pod:
Windows-based pod:
acs-engine json: