Terraform Version
[tmiller:jenkins_terraform]$ terraform -v
Terraform v0.9.3
Affected Resource(s)
kubernetes_persistent_volume
Terraform Configuration Files
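The original configuration isn't reproduced here; what follows is a minimal sketch of the shape implied by the plan output and errors below. The resource names jenkins-data and jenkins-home come from those; the sizes, access mode, volume source details, and provider setup are assumptions.

variable "region" {
  default = "us-west-1"
}

provider "aws" {
  region = "${var.region}"
}

# The kubernetes provider configuration (host, credentials) is omitted here.

resource "aws_ebs_volume" "jenkins-data" {
  # Assumed zone and size; only the resource name is taken from the issue.
  availability_zone = "us-west-1b"
  size              = 100
}

resource "kubernetes_persistent_volume" "jenkins-home" {
  metadata {
    # No labels are specified here; Kubernetes adds the failure-domain
    # labels on its own after the volume is created.
    name = "jenkins-home"
  }

  spec {
    capacity {
      storage = "100Gi"
    }

    access_modes = ["ReadWriteOnce"]

    persistent_volume_source {
      aws_elastic_block_store {
        volume_id = "${aws_ebs_volume.jenkins-data.id}"
      }
    }
  }
}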
Expected Behavior
The persistent volume should be created properly in Kubernetes, and subsequent terraform plan/apply runs shouldn't report any further changes.
Actual Behavior
The first run of terraform plan/apply works just fine, but subsequent runs of the same configuration show that Kubernetes has added labels in the background:
~ kubernetes_persistent_volume.jenkins-home
    metadata.0.labels.%: "2" => "0"
    metadata.0.labels.failure-domain.beta.kubernetes.io/region: "us-west-1" => ""
    metadata.0.labels.failure-domain.beta.kubernetes.io/zone: "us-west-1b" => ""
This in itself is not the problem; I have no problem creating those labels in Terraform, but Terraform won't allow me to specify the values as variables. terraform plan then fails with:
2 error(s) occurred:
kubernetes_persistent_volume.jenkins-home: metadata.0.labels ("${aws_ebs_volume.jenkins-data.availability_zone}") must match the regex (([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])? (e.g. 'MyValue' or 'my_value' or '12345')
kubernetes_persistent_volume.jenkins-home: metadata.0.labels ("${var.region}") must match the regex (([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])? (e.g. 'MyValue' or 'my_value' or '12345')
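For reference, the metadata that triggers those errors looks roughly like this. The label keys and interpolations are taken from the error messages above, the rest is a sketch; the validation appears to run against the raw interpolation string rather than its resolved value.

metadata {
  name = "jenkins-home"

  # Re-adding the Kubernetes-managed labels with interpolated values is
  # what the provider's label validation rejects at plan time.
  labels {
    "failure-domain.beta.kubernetes.io/region" = "${var.region}"
    "failure-domain.beta.kubernetes.io/zone"   = "${aws_ebs_volume.jenkins-data.availability_zone}"
  }
}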
Steps to Reproduce
1. Run terraform apply once with no labels specified.
2. Run terraform plan again and see that Kubernetes has added the region and zone labels, so Terraform wants to remove them.
3. Add the region and zone labels to the configuration, declaring the values as variables.
4. Run terraform plan and see the errors above.
Workaround
As a workaround I've used a lifecycle block with ignore_changes, which is not ideal for anyone else seeing the same thing.
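The exact lifecycle arguments used aren't shown above; a sketch of this kind of workaround, where the ignored attribute path is an assumption, is:

resource "kubernetes_persistent_volume" "jenkins-home" {
  # metadata and spec as before ...

  lifecycle {
    # Assumption: ignoring all of metadata hides the Kubernetes-added
    # labels from the plan, at the cost of also ignoring drift in the
    # rest of the metadata.
    ignore_changes = ["metadata"]
  }
}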
Hi @millerthomasj,
Thanks for the report. We already have a filter in place for internal annotations, but I never thought the kubelet would also label resources with internal labels. I was under the impression that labels are for users and annotations are for internal, generally machine-maintained things.
Nonetheless I'm happy to fix this. I just want to reproduce it first, as I think it would be generally helpful for us to have a working K8S cluster in AWS to test against. So far we have only been testing against GKE in Google Cloud, and we do so nightly.
I will keep you posted.
Re the temporary workaround: have you tried ignore_changes = ["metadata.0.labels"]? That should reduce the scope to labels only, so that you can still keep track of changes to other metadata, like the name or namespace.
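In context, that narrower form would look something like this (sketch):

resource "kubernetes_persistent_volume" "jenkins-home" {
  # metadata and spec as before ...

  lifecycle {
    # Only label drift is ignored; changes to other metadata,
    # such as the name, still show up in the plan.
    ignore_changes = ["metadata.0.labels"]
  }
}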
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
ghost locked and limited conversation to collaborators on Apr 11, 2020