Modifying node pool for deprecated k8s version is not possible #8147
Comments
We are seeing the same issue with v1.18.4, which stopped being supported about a week ago. We had to completely destroy and rebuild the cluster (on v1.18.6), which is not ideal.
We hit this issue too: simply updating tags on the node pool forced a replacement of the whole node pool, which was on an outdated version of k8s. We "fixed" this by passing the k8s version through to both the control plane and the node pool, which meant we did not need to recreate the cluster but could update it in place. Patch versions are usually supported for a while and do not have breaking API changes, so it should be safe to upgrade the k8s version.
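A minimal sketch of the workaround described in the previous comment, assuming azurerm 2.x attribute names: the same version is pinned on both the control plane (kubernetes_version) and the node pool (orchestrator_version) so Terraform plans an in-place change instead of a destroy/recreate. Resource names, the location, and the version value are placeholders.

```hcl
resource "azurerm_kubernetes_cluster" "cluster" {
  name                = "example-aks"
  location            = "westeurope"
  resource_group_name = "example-rg"
  dns_prefix          = "example"

  # Pin the control-plane version explicitly ...
  kubernetes_version = "1.18.6"

  default_node_pool {
    name       = "default"
    node_count = 1
    vm_size    = "Standard_DS2_v2"

    # ... and pin the same version on the node pool so a later change to
    # tags or node_count does not force the pool to be recreated.
    orchestrator_version = "1.18.6"
  }

  identity {
    type = "SystemAssigned"
  }
}
```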
I ran into this when changing a setting on an existing node pool as well. This is sort of painful, since there are a number of things we can update on an existing node pool without having to check the supported version (tags, node_count, etc.). An example scenario is to scale a node pool running an unsupported version to 0 nodes, verify everything is good, then delete the node pool (see the sketch below).
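A short sketch of that scale-to-zero step, assuming azurerm 2.x attribute names; the pool name, VM size, and cluster reference are placeholders.

```hcl
# Hypothetical user node pool: scale it to 0 nodes first, verify workloads
# have drained, then remove this resource from the configuration to delete
# the pool.
resource "azurerm_kubernetes_cluster_node_pool" "workload" {
  name                  = "workload"
  kubernetes_cluster_id = azurerm_kubernetes_cluster.cluster.id
  vm_size               = "Standard_DS2_v2"
  node_count            = 0
}
```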
Ran into this today when enabling autoscaling for my node pool (a sketch of that change follows the error output below). Control plane and node pool are on 1.20.2 (deprecated), using v2.64.0 of the azure provider:

The Kubernetes/Orchestrator Version "1.20.2" is not available for Node Pool "***".
Please confirm that this version is supported by the Kubernetes Cluster "***"
(Resource Group "***") - which may need to be upgraded first.
The Kubernetes Cluster is running version "1.20.2".
The supported Orchestrator Versions for this Node Pool/supported by this Kubernetes Cluster are:
* 1.18.19
* 1.18.17
* 1.19.11
* 1.19.9
Node Pools cannot use a version of Kubernetes that is not supported on the Control Plane. More
details can be found at https://aka.ms/version-skew-policy.
on ../../modules/azure/cluster/main.tf line 13, in resource "azurerm_kubernetes_cluster" "cluster":
13: resource azurerm_kubernetes_cluster cluster {
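The change that tripped the version check above was, roughly, turning on the cluster autoscaler on an existing pool. A hedged sketch, assuming azurerm 2.x attribute names (enable_auto_scaling, min_count, max_count); the pool name, VM size, and bounds are placeholders.

```hcl
resource "azurerm_kubernetes_cluster_node_pool" "pool" {
  name                  = "pool"
  kubernetes_cluster_id = azurerm_kubernetes_cluster.cluster.id
  vm_size               = "Standard_DS2_v2"

  # Enabling the autoscaler on a pool whose orchestrator version is no longer
  # offered is what surfaces the error quoted above.
  enable_auto_scaling = true
  min_count           = 1
  max_count           = 3
}
```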
Thank you for opening this issue. Unfortunately there's little we can do from the provider side, since we're just talking to the APIs. An in-place update of the cluster seems to be the only option here, so I'm going to close this issue.
@favoretti - can you explain your reasoning a bit more? This issue did not exist before an update to the provider (as linked in my original report), so it seems strange that nothing can be done on the provider side to resolve it.
Checked out for today, sorry, but I'll come back to it either Sunday or early next week. Thank you for your patience!
From my perspective, if Azure's APIs deprecate a version of k8s to the point that it doesn't exist, what can we do? Guess a version you'd need and start an upgrade? That sounds kind of dangerous.
Since that specific version was working previously, I don't think we would be guessing a version. We know that the version works and already have at least one node running on it.
Correct me if I'm wrong, but the error message that appears isn't saying that the k8s version is unsupported by Azure. It's saying it may not be supported by the cluster, which isn't true at all. The original report pertains to a valid change that can be made through the Azure portal, through ARM, or through the Azure CLI, so why wouldn't I be able to do it through the Azure Terraform provider?
Reopening, since this is a bug in the Terraform provider which needs to be fixed - we should look to add the current Kubernetes version used on the control plane/node pools to the validation list, which I believe should fix this issue.
There's an API call for that, although it's region-specific; we could populate it somewhere before initializing the client, perhaps? That said, even if we do, I still don't follow how this will help. Say your TF config says you want a cluster version …
Come to think of it... would setting …
This functionality has been released in v3.4.0 of the Terraform Provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading. For further feature requests or bug reports with this functionality, please create a new GitHub issue following the template. Thank you!
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
Community Note
Terraform (and AzureRM Provider) Version
Terraform v0.12.26
Affected Resource(s)
azurerm_kubernetes_cluster
Terraform Configuration Files
No special configuration is necessary, but you must have an AKS cluster using a version of k8s that is no longer supported. A hypothetical minimal example follows.
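A hypothetical minimal configuration, assuming azurerm 2.x attribute names: once the pinned version falls out of AKS support, editing anything on the node pool (the tags here, for instance) reproduces the error. All names, the location, and the version are placeholders.

```hcl
resource "azurerm_kubernetes_cluster" "cluster" {
  name                = "example-aks"
  location            = "westeurope"
  resource_group_name = "example-rg"
  dns_prefix          = "example"

  # Placeholder for a version that AKS has since stopped offering.
  kubernetes_version = "1.16.9"

  default_node_pool {
    name       = "default"
    node_count = 1
    vm_size    = "Standard_DS2_v2"

    # Changing this value once the version above is deprecated triggers the
    # "not available for Node Pool" error instead of an in-place update.
    tags = {
      purpose = "repro"
    }
  }

  identity {
    type = "SystemAssigned"
  }
}
```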
Debug Output
I have debug output but haven't removed any sensitive data from it. If it's necessary to share, I will do so.
Expected Behavior
Modifying a node pool that uses an unsupported k8s version through Terraform should succeed.
Actual Behavior
An error is displayed:
Steps to Reproduce
Important Factoids
This error appears to have been introduced with the June AKS updates.
I hope this isn't meant to imply that you must be on a supported k8s version in order to make modifications through Terraform. I can still make modifications to my cluster, running an unsupported version, through the Portal UI. I would think this is a pretty common scenario: a tweak may need to be made, but upgrading isn't possible due to the impact a k8s upgrade has on the cluster and deployed services.