
Unable to decrease node pool size of AKS cluster #15477

Closed

tanalam2411 opened this issue Feb 17, 2022 · 2 comments

tanalam2411 commented Feb 17, 2022

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Terraform (and AzureRM Provider) Version

provider registry.terraform.io/hashicorp/azurerm v2.75.0
Terraform v1.1.5
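
For completeness, a minimal sketch of version constraints matching the above (the actual terraform block is not included in the configuration below, so treat this layout as an assumption):

terraform {
  required_version = "~> 1.1.5"

  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "2.75.0"
    }
  }
}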

Affected Resource(s)

  • azurerm_kubernetes_cluster

Terraform Configuration Files

module "aks" {
  depends_on = [data.azurerm_virtual_network.vnet]
  #######################################
  ## AKS Cluster config
  #######################################
  source               = "Azure/aks/azurerm"
  resource_group_name  = data.azurerm_resource_group.rg.name
  kubernetes_version   = var.kubernetes_version
  orchestrator_version = var.kubernetes_version
  prefix               = var.cluster_name
  sku_tier             = "Paid"

  ## Access Control
  enable_role_based_access_control = true
  rbac_aad_managed                 = true
  enable_azure_policy              = true
  public_ssh_key                   = var.public_ssh_key

  tags = local.tags

  #######################################
  ## Default NodePool / Agent config
  #######################################

  # TODO: enable autoscaling, issue: resource quotasazurerm_XXXXX
  # enable_auto_scaling = true
  agents_pool_name          = "workerpool"
  agents_size               = var.default_nodepool_vm_size
  os_disk_size_gb           = 50
  agents_count              = var.default_nodepool_node_count
  agents_max_pods           = 110
  agents_availability_zones = var.agents_availability_zones
  vnet_subnet_id            = data.azurerm_subnet.subnet.id

  agents_labels = merge(
    {
      "nodepool" : "workerpool"
    }
  )

  agents_tags = merge(
    local.tags,
    {
      "nodepool" : "workerpool"
      "node" : "workernode"
    }
  )

  #######################
  ## Network config
  #######################
  private_cluster_enabled = var.private_cluster_enabled
  network_plugin          = "azure"
  network_policy          = "azure"

  # Internal Kubernetes Service CIDRs
  net_profile_service_cidr   = "x.x.x.x/16"
  net_profile_dns_service_ip = "x.x.x.x"

  net_profile_docker_bridge_cidr = "x.x.x.x/16"
}
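
The node count being changed comes from var.default_nodepool_node_count. The variable declaration itself is not part of the snippet above, so the following is only an assumed sketch of how it is defined; decreasing it from 5 to 4 produces the plan shown below:

variable "default_nodepool_node_count" {
  description = "Number of nodes in the default (worker) node pool"
  type        = number
  default     = 5 # changed to 4 for this apply
}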

Debug Output

   "kubernetesVersion": "1.21.2",
  {
     "name": "workerpool",
     "count": 5,
     "vmSize": "Standard_DS3_v2",
     "osDiskSizeGB": 50,
     "osDiskType": "Managed",
     "kubeletDiskType": "OS",
     "maxPods": 110,
     "type": "VirtualMachineScaleSets",
     "enableAutoScaling": false,
     "provisioningState": "Succeeded",
     "powerState": {
      "code": "Running"
     },
     "orchestratorVersion": "1.21.2",
     "enableNodePublicIP": false,
     "nodeLabels": {
      "nodepool": "workerpool"
     },
     "mode": "System",
     "enableEncryptionAtHost": false,
     "enableUltraSSD": false,
     "osType": "Linux",
     "osSKU": "Ubuntu",
     "nodeImageVersion": "AKSUbuntu-1804gen2containerd-2021.10.23",
     "upgradeSettings": {},
     "enableFIPS": false
    }
Terraform will perform the following actions:

  # module.ihcp_dev.module.aks.azurerm_kubernetes_cluster.main will be updated in-place
  ~ resource "azurerm_kubernetes_cluster" "main" {
      ~ default_node_pool {
            name                         = "workerpool"
          ~ node_count              = 5 -> 4
        }

╷
│ Error: 
│ The Kubernetes/Orchestrator Version "1.21.2" is not available for Node Pool "workerpool".
│ 
│ Please confirm that this version is supported by the Kubernetes Cluster "dev-aks"
│ (Resource Group "rg-x-dev") - which may need to be upgraded first.
│ 
│ The Kubernetes Cluster is running version "1.21.2".
│ 
│ The supported Orchestrator Versions for this Node Pool/supported by this Kubernetes Cluster are:
│  * 1.19.13
│  * 1.19.11
│  * 1.20.15
│  * 1.20.13
│ 
│ Node Pools cannot use a version of Kubernetes that is not supported on the Control Plane. More
│ details can be found at https://aka.ms/version-skew-policy.
│ 
│ 
│   with module.dev.module.aks.azurerm_kubernetes_cluster.main,
│   on .terraform/modules/dev.aks/main.tf line 10, in resource "azurerm_kubernetes_cluster" "main":
│   10: resource "azurerm_kubernetes_cluster" "main" {
│ 
╵
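
Note that in the configuration above both kubernetes_version and orchestrator_version are set from the same var.kubernetes_version, so the provider validates "1.21.2" against the node pool's list of available orchestrator versions, and that list in the error above does not include 1.21.2. Purely as a sketch (the variable names here are assumptions, not part of the real configuration), the two inputs could be driven separately so the node pool version is managed independently of the control plane:

  # Control plane version reported by the cluster ("1.21.2" above).
  kubernetes_version   = var.control_plane_version
  # Node pool version; the provider expects one of the versions listed in the error output.
  orchestrator_version = var.nodepool_version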

Panic Output

Expected Behaviour

terraform apply should apply the change, i.e. it should reduce the node pool size from 5 to 4.

I had previously increased and decreased the node pool count multiple times using Terraform; now, all of a sudden, it is failing to change the node pool size and throwing the error shown above when applying the following plan:

Terraform will perform the following actions:

  # module.ihcp_dev.module.aks.azurerm_kubernetes_cluster.main will be updated in-place
  ~ resource "azurerm_kubernetes_cluster" "main" {
      ~ default_node_pool {
            name                         = "workerpool"
          ~ node_count              = 5 -> 4
        }

Actual Behaviour

terraform apply fails with the orchestrator version error shown above and the node pool count remains at 5.

Steps to Reproduce

  1. terraform apply

Important Factoids

References

  • #0000
tombuildsstuff (Contributor) commented

hi @tanalam2411

Thanks for opening this issue.

Taking a look through, this appears to be a duplicate of #8147 - rather than having multiple issues open tracking the same thing, I'm going to close this issue in favour of that one; would you mind subscribing to #8147 for updates?

Thanks!

github-actions bot commented

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

github-actions bot locked as resolved and limited conversation to collaborators Mar 20, 2022