Terraform wants to change azurerm_monitor_diagnostic_setting log category settings #5673
Comments
Are these issues related? #2466

Is there any solution for this? I am struggling with this issue while setting up the diagnostic settings for Recovery Services Vault and Azure SQL Database.

Yes, it seems so.

Seeing the same for diagnostic logs on the subscription resource.

The issue is that the activity logs do not support a retention policy, which is mandatory in the Terraform provider. It should probably be optional in the provider's code, since it also appears to be optional (omitEmpty) in the Azure SDK:
"retention_policy": {
},
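To make the mismatch concrete, here is a minimal sketch, assuming a subscription-scoped Activity Log setting (the resource names, the workspace variable and the "Administrative" category are illustrative, not taken from this thread), of what the then-mandatory provider schema forced users to write:

# Minimal sketch, assuming a subscription-scoped diagnostic setting; names
# and the "Administrative" category are illustrative.
variable "log_analytics_workspace_id" {
  type = string
}

data "azurerm_subscription" "current" {}

resource "azurerm_monitor_diagnostic_setting" "activity_log" {
  name                       = "activity-log-diagnostics"
  target_resource_id         = data.azurerm_subscription.current.id
  log_analytics_workspace_id = var.log_analytics_workspace_id

  log {
    category = "Administrative"
    enabled  = true

    # A disabled placeholder had to be declared to satisfy the provider,
    # even though the Activity Log does not support a retention policy.
    retention_policy {
      days    = 0
      enabled = false
    }
  }
}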
I'm not sure I understand why my comment was marked as off-topic. This issue hasn't been fixed by @nyuen's PR, since it's still appearing in provider version
Hi @rudolphjacksonm, the intent of my change was to make the retention policy optional, as the new Activity Log experience no longer seems to provide the option to specify a retention policy (as per the portal UI). To make the workflow idempotent I considered what is returned in the Terraform state: if you look at what the diff indicates on the subsequent terraform apply, you will see that the retention_policy is not stored at all, which is what causes the diff. Below is the Terraform code that I'm now using to create Activity Log diagnostic settings with the changes I've made to the azurerm provider.

Sample code
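The original sample from this comment is not reproduced above. As an illustration only, reusing the assumed names from the earlier sketch rather than the author's code, an Activity Log diagnostic setting written against the patched provider can simply drop the block:

# Illustrative sketch, not the original sample code from this comment.
# With retention_policy optional, the Activity Log setting omits it entirely,
# matching what the new Activity Log experience exposes in the portal.
resource "azurerm_monitor_diagnostic_setting" "activity_log" {
  name                       = "activity-log-diagnostics"
  target_resource_id         = data.azurerm_subscription.current.id
  log_analytics_workspace_id = var.log_analytics_workspace_id

  log {
    category = "Administrative"
    enabled  = true
    # No retention_policy block is declared, and none is stored in state.
  }
}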
Hi @nyuen, I've tried the same on my end but Terraform still wants to change the category for each entry. I've tried applying this several times and inspected the tfstate, which shows the retention_policy value is set to an empty array. Let me know if I'm doing something wrong here:

Sample Code

resource "azurerm_monitor_diagnostic_setting" "aks_cluster_diagnostics" {
count = var.aks_enable_diagnostics == "true" && var.aks_diagnostic_event_hub_name != "" ? 1 : 0
name = "aks-cluster-to-eventhub"
target_resource_id = azurerm_kubernetes_cluster.aks_with_aad_parameters.id
eventhub_name = "aks-cluster-diagnostics"
eventhub_authorization_rule_id = "${data.azurerm_subscription.current.id}/resourceGroups/${var.aks_rg_name}/providers/Microsoft.EventHub/namespaces/${var.aks_diagnostic_event_hub_name}/AuthorizationRules/RootManageSharedAccessKey"
log {
category = "kube-apiserver"
enabled = true
}
log {
category = "kube-controller-manager"
enabled = true
}
log {
category = "kube-scheduler"
enabled = true
}
log {
category = "kube-audit"
enabled = true
}
log {
category = "cluster-autoscaler"
enabled = true
}
metric {
category = "AllMetrics"
enabled = true
}
depends_on = [azurerm_kubernetes_cluster.aks_with_aad_parameters]
}
resource "azurerm_monitor_diagnostic_setting" "aks_nsg_diagnostics" {
count = var.aks_enable_diagnostics == "true" && var.aks_diagnostic_event_hub_name != "" ? 1 : 0
name = "aks-nsg-to-eventhub"
target_resource_id = data.azurerm_resources.aks_cluster_managed_nsg.resources[0].id
eventhub_name = "aks-nsg-diagnostics"
eventhub_authorization_rule_id = "${data.azurerm_subscription.current.id}/resourceGroups/${var.aks_rg_name}/providers/Microsoft.EventHub/namespaces/${var.aks_diagnostic_event_hub_name}/AuthorizationRules/RootManageSharedAccessKey"
log {
category = "NetworkSecurityGroupEvent"
enabled = true
}
log {
category = "NetworkSecurityGroupRuleCounter"
enabled = true
}
depends_on = [
azurerm_kubernetes_cluster.aks_with_aad_parameters
]
}
Plan Output

# module.aks-cluster.azurerm_monitor_diagnostic_setting.aks_cluster_diagnostics[0] will be updated in-place
~ resource "azurerm_monitor_diagnostic_setting" "aks_cluster_diagnostics" {
eventhub_authorization_rule_id = "/subscriptions/000000-00000-00000-00000/resourceGroups/devuks1/providers/Microsoft.EventHub/namespaces/devuks1-logging-ns-primary/AuthorizationRules/RootManageSharedAccessKey"
eventhub_name = "aks-cluster-diagnostics"
id = "/subscriptions/000000-00000-00000-00000/resourcegroups/devuks1/providers/Microsoft.ContainerService/managedClusters/devuks1|aks-cluster-to-eventhub"
name = "aks-cluster-to-eventhub"
target_resource_id = "/subscriptions/000000-00000-00000-00000/resourcegroups/devuks1/providers/Microsoft.ContainerService/managedClusters/devuks1"
- log {
- category = "cluster-autoscaler" -> null
- enabled = true -> null
- retention_policy {
- days = 0 -> null
- enabled = false -> null
}
}
+ log {
+ category = "cluster-autoscaler"
+ enabled = true
}
- log {
- category = "kube-apiserver" -> null
- enabled = true -> null
- retention_policy {
- days = 0 -> null
- enabled = false -> null
}
}
+ log {
+ category = "kube-apiserver"
+ enabled = true
}
- log {
- category = "kube-audit" -> null
- enabled = true -> null
- retention_policy {
- days = 0 -> null
- enabled = false -> null
}
}
+ log {
+ category = "kube-audit"
+ enabled = true
}
- log {
- category = "kube-controller-manager" -> null
- enabled = true -> null
- retention_policy {
- days = 0 -> null
- enabled = false -> null
}
}
+ log {
+ category = "kube-controller-manager"
+ enabled = true
}
- log {
- category = "kube-scheduler" -> null
- enabled = true -> null
- retention_policy {
- days = 0 -> null
- enabled = false -> null
}
}
+ log {
+ category = "kube-scheduler"
+ enabled = true
}
- metric {
- category = "AllMetrics" -> null
- enabled = true -> null
- retention_policy {
- days = 0 -> null
- enabled = false -> null
}
}
+ metric {
+ category = "AllMetrics"
+ enabled = true
}
}
# module.aks-cluster.azurerm_monitor_diagnostic_setting.aks_nsg_diagnostics[0] must be replaced
-/+ resource "azurerm_monitor_diagnostic_setting" "aks_nsg_diagnostics" {
eventhub_authorization_rule_id = "/subscriptions/000000-00000-00000-000009/resourceGroups/devuks1/providers/Microsoft.EventHub/namespaces/devuks1-logging-ns-primary/AuthorizationRules/RootManageSharedAccessKey"
eventhub_name = "aks-nsg-diagnostics"
~ id = "/subscriptions/000000-00000-00000-00000/resourceGroups/mc_devuks1_uksouth/providers/Microsoft.Network/networkSecurityGroups/aks-agentpool-28835032-nsg|aks-nsg-to-eventhub" -> (known after apply)
name = "aks-nsg-to-eventhub"
~ target_resource_id = "/subscriptions/000000-00000-00000-00000/resourceGroups/mc_devuks1_uksouth/providers/Microsoft.Network/networkSecurityGroups/aks-agentpool-28835032-nsg" -> (known after apply) # forces replacement
- log {
- category = "NetworkSecurityGroupEvent" -> null
- enabled = true -> null
- retention_policy {
- days = 0 -> null
- enabled = false -> null
}
}
+ log {
+ category = "NetworkSecurityGroupEvent"
+ enabled = true
}
- log {
- category = "NetworkSecurityGroupRuleCounter" -> null
- enabled = true -> null
- retention_policy {
- days = 0 -> null
- enabled = false -> null
}
}
+ log {
+ category = "NetworkSecurityGroupRuleCounter"
+ enabled = true
}
} "attributes": {
"eventhub_authorization_rule_id": "/subscriptions/00000-00000-00000-00000/resourceGroups/devuks1/providers/Microsoft.EventHub/namespaces/devuks1-logging-ns-primary/AuthorizationRules/RootManageSharedAccessKey",
"eventhub_name": "aks-cluster-diagnostics",
"id": "/subscriptions/00000-00000-00000-00000/resourcegroups/devuks1/providers/Microsoft.ContainerService/managedClusters/devuks1|aks-cluster-to-eventhub",
"log": [
{
"category": "cluster-autoscaler",
"enabled": true,
"retention_policy": []
},
{
"category": "kube-apiserver",
"enabled": true,
"retention_policy": []
},
{
"category": "kube-audit",
"enabled": true,
"retention_policy": []
},
{
"category": "kube-controller-manager",
"enabled": true,
"retention_policy": []
},
{
"category": "kube-scheduler",
"enabled": true,
"retention_policy": []
}
],
My fix specifically addresses the Activity Log, which doesn't support a retention_policy even when the logs are sent to a storage account. For the Kubernetes-related diagnostic settings it seems the retention policy shouldn't be left empty (even though you're not sending the settings to a storage account). I would try:
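The snippet that followed "I would try:" is not preserved above. Based on the advice that the retention policy should not be left empty, a hedged sketch of the kind of change it points at, applied to the aks_cluster_diagnostics resource from the earlier comment, would be:

# Hedged reconstruction, not the original snippet: declare an explicit,
# disabled retention_policy on every log and metric block so the
# configuration matches what Azure returns and the next plan shows no diff.
resource "azurerm_monitor_diagnostic_setting" "aks_cluster_diagnostics" {
  # name, target_resource_id, eventhub_name and
  # eventhub_authorization_rule_id as in the sample above ...

  log {
    category = "kube-apiserver"
    enabled  = true

    retention_policy {
      days    = 0
      enabled = false
    }
  }

  # ... repeat the same retention_policy block for kube-controller-manager,
  # kube-scheduler, kube-audit and cluster-autoscaler ...

  metric {
    category = "AllMetrics"
    enabled  = true

    retention_policy {
      days    = 0
      enabled = false
    }
  }
}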
@nyuen that worked! I've applied the same change to our Event Hub diagnostic settings, which were getting recreated on every apply due to the same issue. Thanks so much for your help, that's been bothering me for ages!
The issue being discussed here is that even though the user has specified all the available diagnostic settings, Terraform still reports a diff; that has been addressed by #6603, so I'm going to close this issue for now. For others who get a diff because they haven't specified all the available diagnostic settings, you can subscribe to #7235 for updates on that issue.
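For the second case mentioned in this comment, a hedged illustration (the category and the Event Hub arguments are assumptions carried over from the earlier samples, not from this comment) of a configuration that declares only a subset of the categories available on the resource, which is the situation #7235 tracks:

# Illustrative only: a single log category is declared even though the AKS
# cluster exposes several. The undeclared categories are what #7235 tracks;
# until that is addressed, such a configuration keeps producing a diff.
resource "azurerm_monitor_diagnostic_setting" "partial_aks_diagnostics" {
  name = "aks-partial-diagnostics"
  # target_resource_id and the Event Hub arguments as in the samples above ...

  log {
    category = "kube-audit"
    enabled  = true

    retention_policy {
      days    = 0
      enabled = false
    }
  }
}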
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!
Community Note
Terraform (and AzureRM Provider) Version
Terraform v0.12.20
Affected Resource(s)
azurerm_monitor_diagnostic_setting
Terraform Configuration Files
Expected Behavior

After an initial terraform apply, when I run terraform plan or another terraform apply, I should see no changes.

Actual Behavior
After an initial terraform apply, when I run terraform plan or another terraform apply, I see settings for log categories that I defined in my configuration being changed.

Steps to Reproduce
1. Run terraform apply to set the initial diagnostic settings.
2. Run terraform apply (or terraform plan) again to observe the planned changes.

References