kubernetes_persistent_volume automatic labels issue #13716

Closed
millerthomasj opened this issue Apr 17, 2017 · 3 comments · Fixed by #15017

Comments

millerthomasj commented Apr 17, 2017

Terraform Version

[tmiller:jenkins_terraform]$ terraform -v
Terraform v0.9.3

Affected Resource(s)

kubernetes_persistent_volume

Terraform Configuration Files

resource "kubernetes_persistent_volume" "jenkins-home" {
  metadata {
    name = "jenkins-home"
  }

  spec {
    access_modes = ["ReadWriteOnce"]
    capacity {
      storage = "50Gi"
    }
    persistent_volume_source {
      aws_elastic_block_store {
        fs_type = "ext4"
        volume_id = "${aws_ebs_volume.jenkins-data.id}"
      }
    }
  }
}

Expected Behavior

The persistent volume should be created properly in Kubernetes, and subsequent terraform plan/apply runs shouldn't report any changes.

Actual Behavior

The first terraform plan/apply run works just fine, but subsequent runs of the same configuration show that Kubernetes has added labels in the background:

~ kubernetes_persistent_volume.jenkins-home
metadata.0.labels.%: "2" => "0"
metadata.0.labels.failure-domain.beta.kubernetes.io/region: "us-west-1" => ""
metadata.0.labels.failure-domain.beta.kubernetes.io/zone: "us-west-1b" => ""

This in itself is not the problem; I have no issue declaring those labels in Terraform myself. The problem is that Terraform won't allow me to specify the values as interpolated variables, as follows:

resource "kubernetes_persistent_volume" "jenkins-home" {
  metadata {
    name = "jenkins-home"
    labels {
      "failure-domain.beta.kubernetes.io/region" = "${var.region}"
      "failure-domain.beta.kubernetes.io/zone" = "${aws_ebs_volume.jenkins-data.availability_zone}"
    }
  }

  spec {
    access_modes = ["ReadWriteOnce"]
    capacity {
      storage = "50Gi"
    }
    persistent_volume_source {
      aws_elastic_block_store {
        fs_type = "ext4"
        volume_id = "${aws_ebs_volume.jenkins-data.id}"
      }
    }
  }
}

terraform plan:
2 error(s) occurred:

  • kubernetes_persistent_volume.jenkins-home: metadata.0.labels ("${aws_ebs_volume.jenkins-data.availability_zone}") must match the regex (([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])? (e.g. 'MyValue' or 'my_value' or '12345')
  • kubernetes_persistent_volume.jenkins-home: metadata.0.labels ("${var.region}") must match the regex (([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])? (e.g. 'MyValue' or 'my_value' or '12345')
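It looks as if the label-value validation is applied to the raw configuration string before interpolation, which would explain why literal values that match the regex are accepted while the ${...} references are rejected at plan time. A minimal sketch with hard-coded values, purely illustrative and untested:

resource "kubernetes_persistent_volume" "jenkins-home" {
  metadata {
    name = "jenkins-home"

    # Assumption: these literal values match the validation regex above,
    # so only the interpolated form is rejected at plan time.
    labels {
      "failure-domain.beta.kubernetes.io/region" = "us-west-1"
      "failure-domain.beta.kubernetes.io/zone"   = "us-west-1b"
    }
  }

  # spec block unchanged from the configuration above
}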

Steps to Reproduce

  1. Run terraform apply once with no labels specified.
  2. Run terraform plan again and see that Kubernetes added the region and zone labels, so Terraform plans to reconcile them.
  3. Add the labels to the configuration with the region and zone supplied as interpolated variables.
  4. Run terraform plan and see the errors above.

Workaround

As a workaround I've used a lifecycle block with ignore_changes. It is not ideal, but may help anyone seeing the same thing:

resource "kubernetes_persistent_volume" "jenkins-home" {
  metadata {
    name = "jenkins-home"
  }

  spec {
    access_modes = ["ReadWriteOnce"]
    capacity {
      storage = "50Gi"
    }
    persistent_volume_source {
      aws_elastic_block_store {
        fs_type = "ext4"
        volume_id = "${aws_ebs_volume.jenkins-data.id}"
      }
    }
  }

  lifecycle {
    ignore_changes = ["metadata"]
  }
}
@radeksimko
Member

Hi @millerthomasj
thanks for the report. We already have a filter in place for the internal annotations, but I never expected the kubelet would also be labelling resources with internal labels. I was under the impression that labels are for users and annotations are for internal, machine-maintained things.

Nonetheless I'm happy to fix this. I just want to reproduce it first, as I think it would be generally helpful for us to have a working K8s cluster in AWS to test against. So far we have only been testing against GKE in Google Cloud, and we do so nightly.

I will keep you posted.

Re the temporary workaround: have you tried ignore_changes = ["metadata.0.labels"]? That should reduce the scope to labels only, so you can still keep track of changes to other metadata, like name or namespace.
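A minimal sketch of that narrower workaround applied to the resource above, illustrative only:

resource "kubernetes_persistent_volume" "jenkins-home" {
  metadata {
    name = "jenkins-home"
  }

  # spec block unchanged from the configuration above

  lifecycle {
    # Ignore only the labels Kubernetes adds in the background, so
    # changes to other metadata (name, namespace) are still tracked.
    ignore_changes = ["metadata.0.labels"]
  }
}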

@millerthomasj
Author

Thanks for the reply. I am using your ignore_changes filter and that works just fine for now.

@ghost

ghost commented Apr 11, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@ghost locked and limited the conversation to collaborators on Apr 11, 2020.