azurerm_kubernetes_cluster: Adding node pools causes AKS cluster replacement #3971
Comments
@titilambert can you add this to your list?
When can we expect this to be fixed?
When is this likely to have a fix? It's the only thing in the way of Terraforming hybrid Linux/Windows clusters at the moment, because it fails when trying to create the two node pools during cluster creation. I can work around it by creating the ARM template for the Windows node pool directly, but then Terraform wants to re-create the cluster every time because it doesn't know about that node pool.
I think these problems are all intertwined with issue #4001 as well. There are multiple related issues in Terraform at the moment; it would help if we could consolidate questions there. The AKS team is aware of these issues, and while we work through the main feature we will try to provide proper guidance for TF as well across all of these.
In fact, if you use node pools, even retrying an apply with the very same Terraform configuration might end up triggering a re-creation of the AKS cluster. We can reproduce this very easily by creating an AKS cluster with 3 agent_pool_profile blocks. If you are unlucky, the order in which the Azure API returns the pools will differ from the order they are declared in the configuration. This happens because the agent_pool_profile blocks are treated as an ordered list, so a re-ordered response reads as a change to the pool definitions and forces replacement. You can avoid this bug by changing the order in your Terraform source code, but it's not intuitive at all, and is still a bug from our point of view. Versions used:
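For illustration, a minimal sketch of the kind of configuration that hits this ordering bug; the pool names, location, VM sizes, and other values are assumptions, not from the original comment:

```hcl
variable "client_id" {}
variable "client_secret" {}

resource "azurerm_kubernetes_cluster" "example" {
  name                = "example-aks"
  location            = "West Europe"
  resource_group_name = "example-aks-rg"
  dns_prefix          = "exampleaks"

  # agent_pool_profile is an ordered list of blocks: Terraform compares the
  # blocks positionally against the API response, so if Azure returns the
  # pools in a different order than declared here, the diff reads as changed
  # pool definitions and forces the whole cluster to be replaced.
  agent_pool_profile {
    name    = "poola"
    count   = 1
    vm_size = "Standard_DS2_v2"
    os_type = "Linux"
  }

  agent_pool_profile {
    name    = "poolb"
    count   = 1
    vm_size = "Standard_DS2_v2"
    os_type = "Linux"
  }

  agent_pool_profile {
    name    = "poolc"
    count   = 1
    vm_size = "Standard_DS2_v2"
    os_type = "Linux"
  }

  service_principal {
    client_id     = var.client_id
    client_secret = var.client_secret
  }
}
```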
May I humbly ask for a status update or a roadmap for when we can expect this feature to work correctly? Thank you in advance.
This has been released in version 1.37.0 of the provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading. As an example:

```hcl
provider "azurerm" {
  version = "~> 1.37.0"
}

# ... other configuration ...
```
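The 1.37.0 release also introduced a standalone azurerm_kubernetes_cluster_node_pool resource, so additional pools no longer have to be inline blocks on the cluster. A minimal sketch, assuming a hypothetical cluster resource named azurerm_kubernetes_cluster.example and an illustrative pool name and size:

```hcl
# Adding or removing this pool plans as a change to this resource only,
# not a replacement of the azurerm_kubernetes_cluster it attaches to.
resource "azurerm_kubernetes_cluster_node_pool" "extra" {
  name                  = "extra"
  kubernetes_cluster_id = azurerm_kubernetes_cluster.example.id
  vm_size               = "Standard_DS2_v2"
  node_count            = 1
}
```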
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖🙉, please reach out to my human friends 👉 [email protected]. Thanks!
Community Note
Terraform (and AzureRM Provider) Version
Affected Resource(s)
azurerm_kubernetes_cluster
Terraform Configuration Files
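The original configuration was not captured here. A minimal configuration that should reproduce the behavior on azurerm 1.x; resource names, location, and VM size are illustrative assumptions:

```hcl
variable "client_id" {}
variable "client_secret" {}

resource "azurerm_resource_group" "example" {
  name     = "example-aks-rg"
  location = "West Europe"
}

resource "azurerm_kubernetes_cluster" "example" {
  name                = "example-aks"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
  dns_prefix          = "exampleaks"

  # In azurerm 1.x, node pools live inline on the cluster as an ordered
  # list of agent_pool_profile blocks.
  agent_pool_profile {
    name    = "default"
    count   = 1
    vm_size = "Standard_DS2_v2"
    os_type = "Linux"
  }

  service_principal {
    client_id     = var.client_id
    client_secret = var.client_secret
  }
}
```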
Expected Behavior
A node pool should have been added to the existing AKS cluster without needing to destroy it first.
Actual Behavior
The entire AKS cluster is destroyed and recreated with the additional node pool.
Steps to Reproduce
1. terraform apply
2. Add an additional agent_pool_profile nested block to the azurerm_kubernetes_cluster resource (see the sketch below)
3. terraform apply
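Step 2 above amounts to inserting a second inline block like the following (pool name and size are illustrative); on azurerm versions before 1.37.0 this edit plans as a forced replacement of the whole cluster:

```hcl
agent_pool_profile {
  name    = "secondpool"
  count   = 2
  vm_size = "Standard_DS2_v2"
  os_type = "Linux"
}
```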
Important Factoids
N/A
References
This issue looks related: #3835 (Terraform replacing AKS nodepool cluster when changing VM count)