This repository has been archived by the owner on Jan 11, 2023. It is now read-only.

The scale command unsuccessfully tries to modify the VNET address space #1790

Closed
rocketraman opened this issue Nov 17, 2017 · 10 comments

@rocketraman
Contributor

Is this a request for help?: NO


Is this an ISSUE or FEATURE REQUEST? ISSUE


What version of acs-engine?: latest master


Orchestrator and version (e.g. Kubernetes, DC/OS, Swarm):
Kubernetes 1.7

What happened:
When the vnet associated with the cluster has peerings, the acs-engine scale command fails to complete.

What you expected to happen:
The acs-engine scale command should work.

How to reproduce it (as minimally and precisely as possible):
Add VNET peerings to the Kubernetes vnet after deployment, then try to run the scale command.

Anything else we need to know:
This is a followup to #1714.

The CLI reports:

FATA[0016] resources.DeploymentsClient#CreateOrUpdate: Failure sending request: StatusCode=200 -- Original Error: Long running operation terminated with status 'Failed': Code="DeploymentFailed" Message="At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/arm-debug for usage details." 

and the additional failure information in the Deployment in the portal contains:

{"code":"DeploymentFailed","message":"At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/arm-debug for usage details.","details":[{"code":"BadRequest","message":"{\r\n \"error\": {\r\n \"code\": \"VnetAddressSpaceCannotChangeDueToPeerings\",\r\n \"message\": \"Address space of the virtual network /subscriptions/xxx/resourceGroups/my-Dev-Kube1/providers/Microsoft.Network/virtualNetworks/k8s-vnet-999 cannot change when virtual network has peerings.\",\r\n \"details\": []\r\n }\r\n}"}]}

Note that the deployment appears to be modifying the vnet address space itself (i.e. setting it to 10.0.0.0/8), not just adding a single IP for the new node to the VNET.

Also note that I am using custom cluster subnets. I don't know if this would be an issue without custom cluster subnets.
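To see what the deployment is actually changing, the vnet's current address space can be checked before and after running scale. A minimal sketch with the Azure CLI, using the resource names that appear in the error above:

az network vnet show \
  --resource-group my-Dev-Kube1 \
  --name k8s-vnet-999 \
  --query addressSpace.addressPrefixes

Comparing this against the address space in the generated ARM template should show whether scale is trying to overwrite the custom 10.2.0.0/16 with the default 10.0.0.0/8.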

@mwieczorek

@rocketraman Hi, I tried to reproduce the same error, but I was able to scale successfully.

  • I created a 'custom' vnet and subnets
  • I deployed k8s there
  • I added a new vnet and a peering between it and the k8s vnet
  • I ran the 'scale' command
    and everything was OK (I was able to scale up and down)

I used v0.11.0 of acs-engine.
Could you provide more details about your case?
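For reference, step 3 above (adding a new vnet and peering it with the k8s vnet) can be done roughly as follows with the Azure CLI; the names and CIDR here are placeholders, not necessarily what was used in either setup:

# create a second vnet and peer it with the acs-engine vnet (one direction is enough
# to put the k8s vnet into a "has peerings" state)
az network vnet create --resource-group <rg> --name other-vnet --address-prefixes 10.3.0.0/16
az network vnet peering create --resource-group <rg> --vnet-name k8s-vnet-<id> \
  --name k8s-to-other --remote-vnet other-vnet --allow-vnet-access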

@jalberto

@mwieczorek I tried to upgrade (not scale) using 0.11 and got the same error (#2022)

@rocketraman
Contributor Author

@mwieczorek Perhaps my original acs-engine conf will help? Did you try it with subnet overrides like this?

{
  "apiVersion": "vlabs",
  "properties": {
    "orchestratorProfile": {
      "orchestratorType": "Kubernetes",
      "orchestratorRelease": "1.7",
      "kubernetesConfig": {
        "clusterSubnet": "10.2.0.0/17",
        "serviceCidr": "10.2.128.0/18",
        "dnsServiceIP": "10.2.128.10",
        "networkPolicy": "azure",
        "maxPods": 110
      }
    },
    "masterProfile": {
      "count": 1,
      "dnsPrefix": "devkube1",
      "vmSize": "Standard_D2_v2_Promo",
      "firstConsecutiveStaticIP": "10.2.223.239",
      "ipAddressCount": 256,
      "vnetCidr": "10.2.0.0/16"
    },
    "agentPoolProfiles": [
      {
        "name": "agentpool1",
        "count": 6,
        "vmSize": "Standard_DS2_v2_Promo",
        "storageProfile" : "ManagedDisks",
        "availabilityProfile": "AvailabilitySet"
      }
    ],
    "linuxProfile": { ... },
    "servicePrincipalProfile": { ... }
  }
}

@itowlson
Contributor

#2095 might fix this - it should prevent the scale command from trying to modify the vnet - but I'm not sure...

@rocketraman
Contributor Author

Further information -- I removed the network peerings and ran the scale command again. The vnet address space was indeed modified: before the scale-up it was 10.2.0.0/16, and afterwards it was back to the default of 10.0.0.0/8.
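For anyone else whose address space gets reset like this, it can presumably be put back with the Azure CLI once no peerings are attached (a sketch; the vnet name is a placeholder, and the existing subnets must still fit inside the prefix):

az network vnet update \
  --resource-group <rg> \
  --name k8s-vnet-<id> \
  --address-prefixes 10.2.0.0/16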

@itowlson Thanks for the info -- yeah, looks like it might.

@rocketraman
Contributor Author

rocketraman commented Mar 3, 2018

@itowlson Looks like your latest changes on that pull request handle the upgrade path too -- I think that makes it pretty likely this issue will be fixed as a result.

Edit: never mind, I see this issue was for scale, not upgrade.

@jalberto

jalberto commented Mar 30, 2018

This is still happening in 0.14.5 during upgrade.

Is this being worked on?

@rocketraman
Contributor Author

I've recently run scale on a cluster with an updated acs-engine without any issues. I'm going to go ahead and close this. Thanks!

@rocketraman
Contributor Author

Just ran into this again while trying to run an acs-engine upgrade from Kubernetes 1.12.1 to 1.12.2, using acs-engine 0.25.3.

INFO[0209] Finished ARM Deployment (master-18-11-21T23.38.16-648331296). Error: Code="DeploymentFailed" Message="At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/arm-debug for usage details." Details=[{"code":"BadRequest","message":"{\r\n  \"error\": {\r\n    \"code\": \"VnetAddressSpaceCannotChangeDueToPeerings\",\r\n    \"message\": \"Address space of the virtual network /subscriptions/xxx/resourceGroups/my-rg/providers/Microsoft.Network/virtualNetworks/k8s-vnet-36147952 cannot change when virtual network has peerings.\",\r\n    \"details\": []\r\n  }\r\n}"}] 
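Since the failure only occurs while the vnet has peerings, one possible (untested) workaround is to list the peerings, delete them before running upgrade, and recreate them afterwards, similar to removing the peerings before scale earlier in this thread. A sketch with the Azure CLI, using the names from the error above (the peering name is a placeholder):

az network vnet peering list --resource-group my-rg --vnet-name k8s-vnet-36147952 -o table
az network vnet peering delete --resource-group my-rg --vnet-name k8s-vnet-36147952 --name <peering-name>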

@stale

stale bot commented Mar 9, 2019

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contribution. Note that acs-engine is deprecated -- see https://github.com/Azure/aks-engine instead.
