Error updating maintenance_window in AKS using Terraform when computed window start is in the past #22762
Comments
Thanks for raising this issue @aydosman. I appreciate that there's information in your configuration that's sensitive, but I am unable to reproduce the error with the information provided. Would you be able to supply a minimal terraform config (no modules, variables etc.) that can reproduce the error?
Getting a similar bug here @stephybun @aydosman
It looks like if you set up a schedule, this startDate is entered into the state. Then if you wait a while and try to change the schedule after that date, you will get this error (the provider doesn't update the date).

Replication steps

First, create a cluster with this maintenance configuration:
automatic_channel_upgrade = "patch"

maintenance_window_auto_upgrade {
  frequency   = "Weekly"
  interval    = 1
  day_of_week = "Friday"
  start_time  = "00:00"
  utc_offset  = "+00:00"
  duration    = 4
}

node_os_channel_upgrade = "NodeImage"

maintenance_window_node_os {
  frequency  = "Daily"
  interval   = 1
  start_time = "00:00"
  utc_offset = "+00:00"
  duration   = 4
}
Then, after the recorded start_date has passed, change the schedule:

automatic_channel_upgrade = "patch"

maintenance_window_auto_upgrade {
  frequency   = "Weekly"
  interval    = 1
  day_of_week = "Saturday"
  start_time  = "00:00"
  utc_offset  = "+00:00"
  duration    = 6
}

node_os_channel_upgrade = "NodeImage"

maintenance_window_node_os {
  frequency   = "Weekly"
  interval    = 1
  day_of_week = "Saturday"
  start_time  = "00:00"
  utc_offset  = "+00:00"
  duration    = 6
}

Extra info

The current terraform state:

"maintenance_window_auto_upgrade": [
{
"day_of_month": 0,
"day_of_week": "Friday",
"duration": 4,
"frequency": "Weekly",
"interval": 1,
"not_allowed": [],
"start_date": "2023-07-14T00:00:00Z",
"start_time": "00:00",
"utc_offset": "+00:00",
"week_index": ""
}
],
"maintenance_window_node_os": [
{
"day_of_month": 0,
"day_of_week": "",
"duration": 4,
"frequency": "Daily",
"interval": 1,
"not_allowed": [],
"start_date": "2023-07-14T00:00:00Z",
"start_time": "00:00",
"utc_offset": "+00:00",
"week_index": ""
}
],
Suggestions

Solution 1: The provider could calculate a timestamp if …

Solution 2: It might be sufficient just to add a helpful warning printed to the console when this happens, or to update some of the provider documentation to resolve this issue.
@bamarch as the provider is calculating the start date already, and in this scenario it doesn't meet the constraints, I think the only valid solution would be for the provider to correctly calculate the start date.
I'm facing a similar issue. But as it comes with the hour, minute and second of the timestamp, I needed to format its value with …
I also am currently running into this issue. I created a maintenance window last month. Now I want to modify the window to update the start time and the duration to new values. When the update runs it fails because it's complaining that my start date is in the past. However, I never provided a start date when I created it and I'm not providing one now with this update.
@TheFuzz4

output "current_time" { … }

maintenance_window_node_os { … }

You can also use the time_rotating resource in terraform, but time_static will not work here.
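The snippet above arrived garbled, so here is a minimal sketch of what it plausibly looked like, assuming the idea is to derive start_date from the plan-time clock (the local name is mine, not from the original comment):

locals {
  # timestamp() returns the current UTC time in RFC 3339; formatdate keeps
  # the date part and pins the time-of-day to midnight, addressing the
  # hour/minute/second problem mentioned in the earlier comment.
  window_start_date = formatdate("YYYY-MM-DD'T'00:00:00'Z'", timestamp())
}

output "current_time" {
  value = local.window_start_date
}

# local.window_start_date would then feed the cluster's
# maintenance_window_node_os { start_date = ... } block.

Note that timestamp() is re-evaluated on every run, which is the churn the next comment objects to.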
@pankaj1203 that's not a solution as it will churn every time a change is made. The workaround is to create a start date from the input and then lock it until the input changes.
@stevehipwell my problem is that if you don't provide a start date, the system defaults to the date when the window was created. With the AZ CLI you can run updates all day and night without providing a start date, so Terraform should be able to work the same way as the CLI.
@TheFuzz4 one of my team opened this issue so I fully understand the context and how it really isn't working correctly at the moment. We also have a working solution involving a …
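The comment above is truncated, so the solution itself isn't shown; what follows is only a plausible sketch of the "create a start date from the input and then lock it until the input changes" approach using the hashicorp/time provider (the resource name and trigger keys are assumptions, not stevehipwell's actual code):

terraform {
  required_providers {
    time = {
      source = "hashicorp/time"
    }
  }
}

# time_static records the current time once and only records a new value
# when one of its triggers changes, so the derived start_date stays stable
# between runs and moves forward whenever the window inputs change.
resource "time_static" "window_inputs" {
  triggers = {
    day_of_week = "Saturday"
    start_time  = "00:00"
    duration    = "6"
  }
}

# Reference this from maintenance_window_auto_upgrade.start_date.
output "locked_start_date" {
  value = formatdate("YYYY-MM-DD'T'00:00:00'Z'", time_static.window_inputs.rfc3339)
}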
I am facing the same issue when I try to update the time schedule. This is my error on terraform:

As a workaround I used start_date in the format below, but the main problem is that the OS (NodeImage) upgrade does not happen. Not sure if I am missing something?

I have set these options:

and a maintenance window (maintenance_window_node_os) set to happen every day (I have tested last week too), but nothing happens or is triggered. AKS version 1.25.6.

Is there a way to check why it does not start, or what the issue is that stops it from being triggered (AKS version etc.)?
Thank you @stevehipwell for opening this issue. I'm working with my POC at MSFT on this issue as well, hoping to get some traction on it.
@TheFuzz4 thank you for your reply. I am not seeing anything in the Activity log. I saw the part about the preview flag, but as mentioned in the section just above it, the prerequisites are for SecurityPatch.

There it is specified that the NodeOsUpgradeChannelPreview feature flag must be enabled only if the SecurityPatch channel is used, whereas I have NodeImage in place.

Should I register the preview flag?
@Klodjangogo we set ours to none for the upgrade channel because there is no need for us to update our K8s automagically; we want to do it when we're ready to do so. For the nodeOsUpgradeChannel we are set to SecurityPatch, so my apologies for not thinking about the flag only being applicable to that particular channel. We are currently patiently waiting for the next SecurityPatch image to be released; right now my nodes are on kernel 5.15.0-1049. I had to use the az cli to update some settings for our window because of this issue, so I'm hoping to see my nodes bounce any day now.
Hi all, I've opened a PR to fix this issue, and here's a workaround which uses the azapi provider to manage the maintenance configs: https://gist.github.com/ms-henglu/df1119f4243f86e25722ab9320c48bfc
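The gist linked above contains the full workaround. Purely as an illustration of the general shape of an azapi-managed maintenance configuration (not code from the gist or the PR — the API version, configuration name, and schema fields below are assumptions based on the ARM maintenanceConfigurations API and may need adjusting):

resource "azapi_resource" "auto_upgrade_schedule" {
  # "aksManagedAutoUpgradeSchedule" is the name AKS uses for the auto-upgrade
  # maintenance configuration; the API version here is an assumption.
  type      = "Microsoft.ContainerService/managedClusters/maintenanceConfigurations@2023-05-01"
  name      = "aksManagedAutoUpgradeSchedule"
  parent_id = azurerm_kubernetes_cluster.example.id # hypothetical cluster resource

  body = jsonencode({
    properties = {
      maintenanceWindow = {
        schedule = {
          weekly = {
            dayOfWeek     = "Friday"
            intervalWeeks = 1
          }
        }
        durationHours = 4
        startTime     = "00:00"
        utcOffset     = "+00:00"
        # startDate omitted: the service defaults it, matching az cli
        # behaviour, so updates can't trip over a past date.
      }
    }
  })
}

Because azapi sends the body as written, leaving startDate out sidesteps the provider's computed-start-date problem.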
Thank you @ms-henglu, any idea when your PR will be merged in? Do you know if this will be backwards compatible or will we need to change all of our providers?
Hi @TheFuzz4, it would be merged by the end of this month. Yes, it will be backwards compatible as long as you didn't specify a start_date which is before the current date in the config.
@ms-henglu yeah, we don't pass in the start date; we just want it to function like the az cli does, where if you don't specify one it just defaults to the current date/time.
@TheFuzz4 I have configured exactly that case as follows: …
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
Is there an existing issue for this?
Community Note
Terraform Version
1.5.4
AzureRM Provider Version
3.67.0
Affected Resource(s)/Data Source(s)
azurerm_kubernetes_cluster
Terraform Configuration
Debug Output/Panic Output
When I apply these changes, I get the following error:
Problem/Expected Behaviour
I am experiencing an issue when attempting to update an existing maintenance_window in Azure Kubernetes Service (AKS) using Terraform version 1.5.4 and the Azure provider (azurerm) version 3.67.0. The issue occurs when running the terraform apply command, specifically when the computed window start time falls in the past.
The AKS cluster is located in a specific region and is running the latest AKS version as of 2023-07-23. I am trying to update the day and time of the maintenance window. Above are the changes I am making.
The key point of this issue is that the logic to calculate the window start date doesn't seem to consider the current time, thereby allowing a past date to be used where only future dates should be valid. This results in a failure during the apply step.
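To make the constraint concrete, here is a small illustration in HCL (hypothetical locals, not the provider's actual Go logic): any computed candidate that compares before "now" has to be rolled forward before being sent to the API.

locals {
  candidate = "2023-07-14T00:00:00Z" # e.g. the start_date recorded in state

  # timecmp returns -1 when its first argument is the earlier timestamp; in
  # that case roll the window forward so the API's "must not be in the past"
  # constraint is satisfied. The 24h step is arbitrary and illustrative.
  start_date = (
    timecmp(local.candidate, timestamp()) < 0
    ? formatdate("YYYY-MM-DD'T'00:00:00'Z'", timeadd(timestamp(), "24h"))
    : local.candidate
  )
}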
I would greatly appreciate any insights into why this might be happening.
Extra
When modifying the maintenance window through the az command-line interface, there are no issues and the action is successful. The start date is set as required.
In addition, deleting the maintenance configuration and re-applying it (even with the date in the past) has no issue.
Details: