Terraform (and AzureRM Provider) Version
$ terraform -v
Terraform v1.1.7
on linux_amd64
Affected Resource(s)
azurerm_kubernetes_cluster
Terraform Configuration Files
The provider requirements are shown under Important Factoids below.
Debug Output
I set the TF_LOG environment variable to JSON and then to TRACE but still didn't see any additional output. The resulting stderr trace is captured in this gist: https://gist.github.com/lui2131/f6505bac93759f7ccbf42074868cf1b1
Panic Output
No panic output was observed.
Expected Behaviour
Scaling the AKS cluster from 4 to 5 nodes with Terraform completes successfully.
Actual Behaviour
The Terraform CLI threw an error. The full error output is captured in the gist linked above, but a short version is:
The Kubernetes/Orchestrator Version "1.20.9" is not available for Node Pool "<node-pool-name>".
This is despite the fact that the node pool is indeed running Kubernetes version 1.20.9, which I confirmed both in the Azure Portal UI and in the JSON resource for the cluster I am trying to scale.
Steps to Reproduce
terraform apply
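For context, the change being applied is just a node-count bump on the default node pool. A minimal sketch of what the resource looks like (all names, sizes, and the region are placeholders, not my actual configuration; the real resource lives in modules/cluster/main.tf):

```hcl
# Minimal sketch with placeholder values, not the actual configuration.
resource "azurerm_kubernetes_cluster" "example" {
  name                = "example-aks"
  location            = "eastus"
  resource_group_name = "example-rg"
  dns_prefix          = "example"

  default_node_pool {
    name       = "default"
    vm_size    = "Standard_DS2_v2"
    node_count = 5 # the only change in this apply: bumped from 4 to 5
  }

  # The cluster uses kubenet, per the route table behaviour described below.
  network_profile {
    network_plugin = "kubenet"
  }

  identity {
    type = "SystemAssigned"
  }
}
```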
Important Factoids
The cluster was created originally using the following Terraform providers:
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = ">= 2.51"
    }
    local = {
      source  = "hashicorp/local"
      version = ">= 2.1.0"
    }
    tls = {
      source  = "hashicorp/tls"
      version = ">= 3.1.0"
    }
  }
}
Sadly, this cluster was originally created a while ago, on a Kubernetes version I'm no longer sure of.
I've also tried manually scaling the VMSS node pool (through the Azure Portal UI for the AKS cluster itself) and was unable to scale the VMSS successfully. For some reason the newly created node was not added to the route table that all the other nodes are part of. This is important because without those routes the kubenet network plugin fails, and pods running on the new node cannot connect directly to pods running on the older nodes, which makes for a headache of a debugging problem.
I was hoping that scaling the AKS cluster through Terraform would fix the route table problem, but it seems I can't even apply the current Terraform configuration. If I can't somehow scale the existing cluster, I'll probably also reach out directly to Microsoft support.
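For completeness, the workaround I'm considering is to pin the versions explicitly so the provider doesn't have to validate against a computed (and apparently empty) list. This is a sketch only, with placeholder values; I haven't confirmed it avoids the error:

```hcl
# Workaround sketch (placeholder values): pin kubernetes_version on the
# cluster and orchestrator_version on the default node pool to the version
# the cluster already reports (1.20.9).
resource "azurerm_kubernetes_cluster" "example" {
  name                = "example-aks"
  location            = "eastus"
  resource_group_name = "example-rg"
  dns_prefix          = "example"

  kubernetes_version = "1.20.9" # the version the Portal reports

  default_node_pool {
    name                 = "default"
    vm_size              = "Standard_DS2_v2"
    node_count           = 5
    orchestrator_version = "1.20.9" # keep the node pool in step with the control plane
  }

  identity {
    type = "SystemAssigned"
  }
}
```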
References
#0000
EDIT: Moving formatting around to fit community guidelines
Hi @stephybun, thanks for taking the time to read over the issue. I agree that this issue is closely related to the other open one, but the error shown is different, which is why I wanted to capture it separately. The full error (which I saved to a gist) is shown below:
╷
│ Error:
│ The Kubernetes/Orchestrator Version "1.20.9" is not available for Node Pool "<node-pool-name>".
│
│ Please confirm that this version is supported by the Kubernetes Cluster "<aks-cluster-name>"
│ (Resource Group "<Resource-Group-Name>") - which may need to be upgraded first.
│
│ The Kubernetes Cluster is running version "1.20.9".
│
│ The supported Orchestrator Versions for this Node Pool/supported by this Kubernetes Cluster are:
│
│
│ Node Pools cannot use a version of Kubernetes that is not supported on the Control Plane. More
│ details can be found at https://aka.ms/version-skew-policy.
│
│
│ with module.cluster.azurerm_kubernetes_cluster.<aks-cluster-name>,
│ on modules/cluster/main.tf line 42, in resource "azurerm_kubernetes_cluster" "<aks-cluster-name>":
│ 42: resource "azurerm_kubernetes_cluster" "<aks-cluster-name>" {
│
╵
Releasing state lock. This may take a few moments...
The main difference between issue #8147 and this one is that no orchestrator versions are listed as supported by the Kubernetes cluster, even though I confirmed that the node pool does indeed support version 1.20.9.
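To compare against that empty list, Azure's own view of the available versions can be queried from Terraform with the azurerm_kubernetes_service_versions data source. A sketch, assuming the cluster's actual region is substituted for the placeholder location:

```hcl
# Query the orchestrator versions Azure reports as available in a region,
# to compare against the empty list in the error above.
data "azurerm_kubernetes_service_versions" "current" {
  location       = "eastus" # placeholder: use the cluster's actual region
  version_prefix = "1.20"   # optional: narrow to the 1.20.x line
}

output "aks_versions" {
  value = data.azurerm_kubernetes_service_versions.current.versions
}

output "aks_latest_version" {
  value = data.azurerm_kubernetes_service_versions.current.latest_version
}
```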
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.