[WIP] Fix #1767 Custom VNET support for RS3 Windows #1810
Conversation
@JiangtianLi Could you add a deployment for
@jackfrancis Sure thing, will update. Note that custom VNET for Windows doesn't fully work yet due to an issue with the Windows container network config, so this is still WIP.
@jackfrancis @JiangtianLi @tamilmani1989 We know that the Azure CNI integration for Windows is not finished. We will take up Windows/Azure CNI once we sort out any remaining issues with Linux.
@sharmasushant Thanks. The issue I referred to is not with Azure network policy or Azure CNI; it is in the Windows CNI config.
@JiangtianLi rebase should be relatively easy, it's due to this reorganization of the
@jackfrancis Thanks! Will rebase after fixing the Windows subnet issue.
@JiangtianLi Could you explain the problem in detail?
@JiangtianLi Can you update us on the estimated time to get the checks completed? They appear to be on hold currently.
@feiskyer The root cause is that the Windows container network only supports a /24 CIDR for its subnet, while a custom VNET usually configures the master and agent subnets in ranges that don't fit this limitation.
@JiangtianLi With regards to the Windows container network only supporting a /24 subnet: surely this shouldn't be an issue, given that when creating an agent pool with a custom VNET we define each agent pool's subnet? So if we have the Windows agent pool configured to a /24, will this work? Or is this a larger issue with the networking between the master and the nodes?
@jay-stillman To clarify: in order for a Windows node to communicate with the master node or another agent node, all the nodes need to be in the same /24 subnet. For example, if the Windows node has IP address 10.240.0.4, the master node has 10.240.255.5, and the other agent node has 10.240.0.5, then the master node can't talk to the Windows node while the agent node can. You need to configure the master and agent subnets to be in the same /24 range.
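The reachability rule described above can be checked mechanically. A minimal sketch using Python's standard `ipaddress` module, with the example IPs from the comment (the function name is illustrative, not part of acs-engine):

```python
import ipaddress

def same_slash24(a: str, b: str) -> bool:
    """True if both addresses fall within the same /24 network."""
    net_a = ipaddress.ip_network(f"{a}/24", strict=False)
    return ipaddress.ip_address(b) in net_a

# IPs from the comment above
windows_node = "10.240.0.4"
master_node = "10.240.255.5"
linux_agent = "10.240.0.5"

print(same_slash24(windows_node, master_node))  # False: master can't reach the Windows node
print(same_slash24(windows_node, linux_agent))  # True: same /24, routing works
```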
@JiangtianLi This is not correct. The VNET CIDR can be something like 10.2.0.0/16 while the master subnet is something like 10.2.10.0/24 and the Linux pool is something like 10.2.16.0/21 when running 5 agent nodes (each node requires its own /24 subnet). This config works with a custom subnet, even with multiple agent pools. So I am guessing you are actually referring to something else, i.e. related to Windows networking. However, if all the subnets are in the VNET and the agent pool subnets are bound to the route table, then they can still route. Can you provide some clearer detail on this? The above works and is how we, for one, run our various ACS environments.
@jay-stillman Sorry for the confusion. There is no limitation on configuring the VNET CIDR in a custom VNET on Azure. The limitation is in Windows container networking on the Windows node, and it restricts the connection/routing to a Windows pod from another subnet, i.e., from the master's subnet. Linux agents have no such issue; it is Windows container networking only.
@JiangtianLi Can you please provide any indication of when this will be resolved? We are currently unable to use Windows containers. Is there any workaround for this?
/cc @madhanrm @dineshgovindasamy @jay-stillman Sorry about the delay. We are working with the networking team (cc-ed) on this. I don't see a straightforward workaround at this point, but @madhanrm @dineshgovindasamy can chime in.
Given the limitation, I understand that it's currently not possible to deploy Windows containers to an existing VNET whose subnets are not properly sized; however, is it possible to deploy a hybrid cluster into an existing VNET that does match how Windows container networking expects the subnet to look? I tried doing that with:
And this as my template:
But instead got the following error:
@lastcoolnameleft You are using master HEAD, not this PR, right? The error is due to https://github.com/Azure/acs-engine/blob/master/parts/k8s/kuberneteswindowssetup.ps1#L57 and is the same as #1767. It happens because your json defines a custom VNET, and the `subnet` variable is not defined in that case: https://github.com/Azure/acs-engine/blob/master/parts/k8s/kubernetesmastervars.t#L199
My apologies, I posted my comment in the PR when I meant to post it in the issue. That said, I believe there's still an issue where the `subnet` variable is not defined in the output template; the acs-engine validator should catch this and fail generation before trying to deploy.
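The validation being asked for here boils down to: if the model uses a custom VNET at all, every profile must carry a subnet ID, otherwise fail fast at generation time rather than at deployment. A hypothetical sketch (the function name and dict keys mirror the apimodel fields loosely; this is not acs-engine's actual validator, which is written in Go):

```python
def validate_custom_vnet(model: dict) -> list[str]:
    """Return a list of validation errors; an empty list means the model passes."""
    master = model.get("masterProfile", {})
    agents = model.get("agentPoolProfiles", [])
    profiles = [("master", master)] + [(a.get("name", "?"), a) for a in agents]

    # Profiles that opted into a custom VNET by specifying a subnet ID.
    custom = [name for name, p in profiles if p.get("vnetSubnetId")]

    errors = []
    if custom and len(custom) != len(profiles):
        # Mixing custom-VNET and default-VNET profiles leaves the derived
        # `subnet` variable undefined, which surfaces only at deploy time.
        missing = [name for name, p in profiles if not p.get("vnetSubnetId")]
        errors.append(f"custom VNET in use, but these profiles lack vnetSubnetId: {missing}")
    return errors

model = {
    "masterProfile": {"vnetSubnetId": "/subscriptions/.../subnets/master"},
    "agentPoolProfiles": [{"name": "windowspool"}],  # no vnetSubnetId -> error
}
print(validate_custom_vnet(model))
```

Failing at validation time would have turned the opaque deploy-time error above into an actionable message.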
@JiangtianLi I've tested this PR with a hybrid cluster and a custom VNET, and I've been able to create a cluster. I'm using this template:
After the cluster creation I updated the VNET subnets to add the k8s cluster route table. A few points:
@JiangtianLi Any update about this issue? |
@mboret acs-engine has switched to using Azure CNI as the default for Windows clusters, and custom VNET is supported with Azure CNI. With kubenet, custom VNET is still under investigation.
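For reference, the plugin choice discussed here lives in the apimodel's `kubernetesConfig`. A minimal sketch, assuming the acs-engine apimodel of this era where `networkPolicy` drove the selection (`"azure"` for Azure CNI, `"none"` for kubenet) and the rest of the cluster definition (masterProfile, agentPoolProfiles, etc.) is filled in as usual:

```json
{
  "apiVersion": "vlabs",
  "properties": {
    "orchestratorProfile": {
      "orchestratorType": "Kubernetes",
      "kubernetesConfig": {
        "networkPolicy": "azure"
      }
    }
  }
}
```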
@JiangtianLi This still doesn't work for me with another acs-engine version and a hybrid cluster with custom VNET, even when I'm using Azure CNI (I ran into the same issue, #2565, with acs-engine v0.15.0, Kubernetes 1.10, and the same definition I posted previously). In summary, I'm able to deploy a hybrid cluster with a custom VNET only if I'm using this PR with kubenet.
@mboret Thanks for reporting this to us. If you use Azure CNI, which is the default in latest master, you don't need this PR for custom VNET. If you use kubenet (`networkPolicy: none`), then you need this PR, but custom VNET doesn't completely work yet. If Azure CNI doesn't work for custom VNET in a hybrid cluster, could you please file a separate issue?
For sure. Created: #2612 |
What this PR does / why we need it:
Fix #1767 Custom VNET support for RS3 Windows