
Deletion of resource_aws_appautoscaling_target Incomplete #1765

Closed

bbernays opened this issue Sep 27, 2017 · 15 comments · Fixed by #1808

Labels
bug Addresses a defect in current functionality.

Comments

@bbernays
Terraform Version

Terraform v0.10.5
AWS Provider v1.0.0

Affected Resource(s)

  • resource_aws_appautoscaling_target

Terraform Configuration Files

resource "random_id" "environment_id" {
  keepers = {
    never = "never"
  }
  byte_length = 4
}

module "table-1" {
  source = "./Table"
  environment = "dev_test"
  environment_id = "${random_id.environment_id.hex}"
  aws_region = "us-west-2"
  table_name = "table-1"
}
# /Table/Main.tf

variable "aws_region" {}
provider "aws" {
  region = "${var.aws_region}"
}
variable "table_name" {}
variable "environment" {}
variable "environment_id" {}

resource "aws_dynamodb_table" "base_table" {
  name           = "${var.table_name}"
  read_capacity  = 1
  write_capacity = 1
  hash_key       = "id"
  stream_enabled = "false"
  attribute {
    name = "id"
    type = "S"
  }
  tags {
    Environment = "${var.environment}"
    environment_id = "${var.environment_id}"
    Terraform   = "True"
  }
}
module "auto_scaling" {
  source = "./AutoScaling/Table"
  aws_region = "us-west-2"
  table_name = "${aws_dynamodb_table.base_table.id}"
}

Module: auto_scaling

variable "table_name" {}
variable "aws_region" {}
provider "aws" {
  region = "${var.aws_region}"
}


resource "aws_appautoscaling_target" "Write" {
  max_capacity       = 10
  min_capacity       = 1
  resource_id        = "table/${var.table_name}"
  role_arn           = "arn:aws:iam::<ACCOUNT-ID>:role/service-role/DynamoDBAutoscaleRole"
  scalable_dimension = "dynamodb:table:WriteCapacityUnits"
  service_namespace  = "dynamodb"
}
resource "aws_appautoscaling_policy" "Write" {
  name = "${aws_appautoscaling_target.Write.id}"
  service_namespace = "dynamodb"
  policy_type = "TargetTrackingScaling"
  resource_id = "table/${var.table_name}"
  scalable_dimension = "dynamodb:table:WriteCapacityUnits"
  target_tracking_scaling_policy_configuration {
    predefined_metric_specification {
      predefined_metric_type = "DynamoDBWriteCapacityUtilization"
    }
    scale_in_cooldown = 10
    scale_out_cooldown = 10
    target_value = 70
  }
  depends_on = ["aws_appautoscaling_target.Write"]
}
resource "aws_appautoscaling_target" "Read" {
  max_capacity       = 10
  min_capacity       = 1
  resource_id        = "table/${var.table_name}"
  role_arn           = "arn:aws:iam::<ACCOUNT-ID>:role/service-role/DynamoDBAutoscaleRole"
  scalable_dimension = "dynamodb:table:ReadCapacityUnits"
  service_namespace  = "dynamodb"
}
resource "aws_appautoscaling_policy" "Read" {
  name = "${aws_appautoscaling_target.Read.id}"
  service_namespace = "dynamodb"
  policy_type = "TargetTrackingScaling"
  resource_id = "table/${var.table_name}"
  scalable_dimension = "dynamodb:table:ReadCapacityUnits"
  target_tracking_scaling_policy_configuration {
    predefined_metric_specification {
      predefined_metric_type = "DynamoDBReadCapacityUtilization"
    }
    scale_in_cooldown = 10
    scale_out_cooldown = 10
    target_value = 70
  }
  depends_on = ["aws_appautoscaling_target.Read"]
}

Expected Behavior

All tables, scaling targets, and policies should be created and then destroyed.

Actual Behavior

Everything is created, but when you destroy the resources Terraform fails to delete some of the Application Auto Scaling targets. It retries for 5 minutes and then produces an error:
module.table-1.module.auto_scaling.aws_appautoscaling_policy.Write (destroy): 1 error(s) occurred:

* aws_appautoscaling_policy.Write: Application AutoScaling Policy: ObjectNotFoundException: No scaling policy found for service namespace: dynamodb, resource ID: table/table-1, scalable dimension: dynamodb:table:WriteCapacityUnits, policy name: table/table-1
	status code: 400, request id: b35e9e6c-a3da-11e7-a34b-fd91ab37b205
* module.table-1.module.auto_scaling.aws_appautoscaling_target.Read (destroy): 1 error(s) occurred:

* aws_appautoscaling_target.Read: Application AutoScaling Target still exists

Steps to Reproduce

  1. terraform apply
  2. terraform destroy
@bflad
Contributor

bflad commented Sep 28, 2017

The problem is actually worse; you're just seeing a symptom of it during deletion.

The aws_appautoscaling_target resource id field within Terraform does not support multiple scalable dimensions (e.g. a read target and write target to the same DynamoDB resource id). The Terraform identifier is currently set as such: https://github.com/terraform-providers/terraform-provider-aws/blob/master/aws/resource_aws_appautoscaling_target.go#L90

d.SetId(d.Get("resource_id").(string))

This is only internal to Terraform; the AWS CLI, for example, always requires a combination of the resource id and scalable dimension flags to identify a target. The Terraform resource id needs to be updated to include some form of the scalable dimension.

@adriantodorov

Encountering the same issue when changing min_capacity and max_capacity to different values after auto scaling is in place for a DynamoDB table that was applied by Terraform. Using almost the same structure as @bbernays.

@humayunjamal

Getting the same issue for an ECS-based service. It gets created fine, but it cannot be updated or destroyed. It even hangs at plan.

@radeksimko
Member

@adriantodorov Hey, can you clarify how this is related to this issue?

If you're getting diffs for read_capacity and/or write_capacity of aws_dynamodb_table, then you'll need to use the lifecycle block as documented at https://www.terraform.io/docs/providers/aws/r/dynamodb_table.html

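The workaround radeksimko points to can be sketched like this (a minimal example; only the lifecycle block is new, the other arguments are as in the repro config above):

```hcl
resource "aws_dynamodb_table" "base_table" {
  # ... other arguments as in the repro above ...

  # Let Application Auto Scaling manage capacity without Terraform
  # trying to revert it on the next apply.
  lifecycle {
    ignore_changes = ["read_capacity", "write_capacity"]
  }
}
```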

@humayunjamal Hey, are you sure it's related to this issue? Can you try applying my patch from #1808 locally, compiling it, and verifying whether it fixes your issue (otherwise wait for it to be released)? If not, can you check that your symptoms don't match any of the existing issues:
https://github.com/terraform-providers/terraform-provider-aws/issues?utf8=%E2%9C%93&q=is%3Aissue%20is%3Aopen%20appautoscaling%20
https://github.com/terraform-providers/terraform-provider-aws/issues?utf8=%E2%9C%93&q=is%3Aissue%20is%3Aopen%20dynamodb
and if they don't, reporting this as a new issue would be greatly appreciated! Thanks.

@bbernays
Author

bbernays commented Oct 9, 2017

@radeksimko: Not sure if I need to open a new issue, but I don't think that ignore_changes works for index capacity (it might not be @humayunjamal's exact use case, but it's still a valid one).

@radeksimko
Member

@bbernays You're right, that won't work because it's a set field with a computed index; you'd have to ignore the whole index field.
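A whole-field ignore for the index case would look something like this (a sketch only; global_secondary_index as the attribute name is my assumption, and ignoring it suppresses all diffs on the index, not just its capacity):

```hcl
resource "aws_dynamodb_table" "base_table" {
  # ... other arguments ...

  lifecycle {
    # Ignores every change to the index, not only its capacity.
    ignore_changes = ["global_secondary_index"]
  }
}
```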

Either way, unless there's an existing issue covering this, please file it separately, as it's not related to this issue.

Thanks all.

@anbotero

I’m having the same issue @bflad describes. Have you created another issue for this? I cannot find anything, so I'm not sure whether I should create one.

When there are multiple targets (read and write, for example) and multiple policies, whenever we try to plan again it will always detect changes and replace one of the targets with the other (and hence the policies as well). This is because, even though both targets have different names, Terraform seems to assign the same id to both based on the resource_id.

@radeksimko
Member

@anbotero I believe this was fixed in #1808 as mentioned above


and released in 1.1.0. Can you try upgrading to that version? If the problem still appears there, then feel free to open an issue. Thanks.

@anbotero

@radeksimko it was the version indeed! I was still on the previous one. Thank you so much for the heads-up! Somehow I missed that last issue reference!

@binarylogic

@radeksimko and @anbotero I'm experiencing the same issue, where the aws_appautoscaling_target.id continually changes every time I plan. I've updated to 1.6.0 of the AWS Terraform provider and it's still happening. Was there anything else you did to resolve this?

@anbotero

@binarylogic Weird. I didn't really find how to update providers, so I just deleted the .terraform folder where I ran the Terraform commands and ran terraform init again.

@bensquire

I am on v1.7.1 of the AWS provider and I'm still seeing this problem... Did you ever work this out, @binarylogic?

@binarylogic

Hi @bensquire, sorry for the delay. No, I never got it to work; I had to move on. We use it for our DynamoDB tables and enable auto scaling manually, while adding the following lifecycle block:

lifecycle {
  ignore_changes  = ["read_capacity", "write_capacity"]
  prevent_destroy = true
}

Hope that helps.

@bensquire

@binarylogic I've also moved away from this for the time being, but I appreciate the time you took to come back to me. We're not using DynamoDB, but your lifecycle idea might work...

@ghost

ghost commented Apr 8, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thanks!

@ghost ghost locked and limited conversation to collaborators Apr 8, 2020