
[Beta] Compute Resource Policy recreates disk when attaching a policy #4511

Closed
dkothari-clgx opened this issue Sep 19, 2019 · 3 comments

@dkothari-clgx

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment
  • If an issue is assigned to the "modular-magician" user, it is either in the process of being autogenerated, or is planned to be autogenerated soon. If an issue is assigned to a user, that user is claiming responsibility for the issue. If an issue is assigned to "hashibot", a community member has claimed the issue already.

Terraform Version

Terraform v0.11.4
GCP Provider 2.14.0

Affected Resource(s)

  • google_compute_disk
  • google_compute_resource_policy

Terraform Configuration Files

Previous Terraform code

resource "google_compute_disk" "data" {
  name  = "myDisk0"

  labels = "myDisk0"

  zone = "us-west1-a"

  type = "${var.data_volume_type}"
  size = "${var.data_volume_size}"
}
resource "google_compute_disk" "data" {
  name  = "myDisk0"
  provider = "beta" 
  labels = "myDisk0"

  zone = "us-west1-a"

  type = "${var.data_volume_type}"
  size = "${var.data_volume_size}"
resource_policies = ["${google_compute_resource_policy.snapshot_policy.name}"]
}

resource "google_compute_resource_policy" "snapshot" {
  provider = "google-beta"
  name = "myDisk0-snapshot-policy"
  region = "us-west1"
  snapshot_schedule_policy {
    schedule {
      daily_schedule {
        days_in_cycle = 1
        start_time = "07:00"
      }
    }
    retention_policy {
      max_retention_days = "${var.disks_snapshot_rention_days}"
      on_source_disk_delete = "KEEP_AUTO_SNAPSHOTS"
    }
    snapshot_properties {
      labels = "myDisk0"
      storage_locations = ["us"]
      guest_flush = true
    }
  }
}

Expected Behavior

The policy should be created and attached to the existing disk without recreating the disk.

Actual Behavior

The plan shows the disk being destroyed and recreated; the policy is then created and attached to the new disk.

Important Factoids

I understand this feature is in beta.

@ghost ghost added the bug label Sep 19, 2019
@dkothari-clgx dkothari-clgx changed the title Compute Resource Policy recreates disk when attaching a policy [Beta] Compute Resource Policy recreates disk when attaching a policy Sep 19, 2019
@rileykarson
Collaborator

Related: GoogleCloudPlatform/magic-modules#2228 will add a fine-grained resource to allow attaching resource policies (+ bumps the feature to GA) in our next release.

@slevenick
Collaborator

I'd suggest waiting for the next release and using the fine-grained resource mentioned by @rileykarson to manage policies. Currently, changing any field on a compute disk forces a recreate, but the fine-grained resource will help fix this.
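
For reference, a minimal sketch of what that could look like, assuming the fine-grained resource ships as google_compute_disk_resource_policy_attachment with name, disk, and zone arguments (not confirmed in this thread), and that resource_policies is dropped from the disk block so the two don't conflict:

# Hypothetical sketch only; the exact resource name and arguments are assumptions.
# The resource_policies argument would be removed from google_compute_disk.data.
resource "google_compute_disk_resource_policy_attachment" "snapshot" {
  name = "${google_compute_resource_policy.snapshot.name}"
  disk = "${google_compute_disk.data.name}"
  zone = "us-west1-a"
}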

@slevenick slevenick self-assigned this Sep 23, 2019
@ghost

ghost commented Oct 24, 2019

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!

@ghost ghost locked and limited conversation to collaborators Oct 24, 2019