
Add support for scaling DynamoDB in Application Scaling Policy #888

Closed
stephencoe opened this issue Jun 16, 2017 · 13 comments · Fixed by #1650
Labels
enhancement Requests to existing resources that expand the functionality or scope.

Comments

@stephencoe (Contributor)

As of AWS SDK for Go release 1.8.42, DynamoDB is now supported by Application Auto Scaling.

This is an addition to aws_appautoscaling_policy to support TargetTrackingScalingPolicyConfiguration.

SDK Link
http://docs.aws.amazon.com/sdk-for-go/api/service/applicationautoscaling/#PutScalingPolicyInput
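For reference, a target-tracking policy for a DynamoDB table could look roughly like the sketch below (this is only an illustration of the shape such support might take, paired with an aws_appautoscaling_target registration; resource names, capacity bounds, and exact attribute names here are assumptions, to be checked against the provider documentation once the feature lands):

```hcl
# Register the table's read capacity as a scalable target
# (table name and capacity bounds are illustrative).
resource "aws_appautoscaling_target" "dynamodb_table_read" {
  max_capacity       = 100
  min_capacity       = 5
  resource_id        = "table/myTableName"
  scalable_dimension = "dynamodb:table:ReadCapacityUnits"
  service_namespace  = "dynamodb"
}

# Track toward 70% consumed read capacity.
resource "aws_appautoscaling_policy" "dynamodb_table_read" {
  name               = "dynamodb-read-utilization"
  policy_type        = "TargetTrackingScaling"
  resource_id        = "${aws_appautoscaling_target.dynamodb_table_read.resource_id}"
  scalable_dimension = "${aws_appautoscaling_target.dynamodb_table_read.scalable_dimension}"
  service_namespace  = "${aws_appautoscaling_target.dynamodb_table_read.service_namespace}"

  target_tracking_scaling_policy_configuration {
    predefined_metric_specification {
      predefined_metric_type = "DynamoDBReadCapacityUtilization"
    }

    target_value = 70
  }
}
```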

@stephencoe (Contributor, Author)

Question
I started to investigate this in the provider, but noticed that the format for aws_appautoscaling_policy has the keys for StepScalingPolicyConfiguration at the top level. Is this correct? I would have expected the format to be:

resource "aws_appautoscaling_policy" "ecs_policy" {
  name                    = "scale-down"
  resource_id             = "service/clusterName/serviceName"
  scalable_dimension      = "ecs:service:DesiredCount"
  service_namespace       = "ecs"

  step_scaling_policy_configuration {
    adjustment_type         = "ChangeInCapacity"
    cooldown                = 60
    metric_aggregation_type = "Maximum"
    
    step_adjustment {
      metric_interval_upper_bound = 0
      scaling_adjustment          = -1
    }

  } 
...
}

Does HCL not support nested objects, as shown in this example? I can't recall seeing one.

If this is the case, would the target_tracking_scaling_policy_configuration format be top level as follows:

resource "aws_appautoscaling_policy" "ecs_policy" {
  name                    = "scale-down"
  resource_id             = "service/clusterName/serviceName"
  scalable_dimension      = "ecs:service:DesiredCount"
  service_namespace       = "ecs"
  depends_on = ["aws_appautoscaling_target.ecs_target"]

  // step_scaling_policy_configuration
  adjustment_type         = "ChangeInCapacity"
  cooldown                = 60
  metric_aggregation_type = "Maximum"
  
  step_adjustment {
    metric_interval_upper_bound = 0
    scaling_adjustment          = -1
  }

  //target_tracking_scaling_policy_configuration
  customized_metric_specification = {
    dimensions = []
    metric_name = "foo"
    namespace = "dyn"
    statistic = "Average | Minimum | Maximum | SampleCount | Sum"
    unit = 1
  }

  predefined_metric_specification = {
    PredefinedMetricType = "DynamoDBReadCapacityUtilization | DynamoDBWriteCapacityUtilization"
    ResourceLabel = "..."
  }
  
  scale_in_cooldown = 10
  scale_out_cooldown = 10
  target_value = 50.0
}

@radeksimko radeksimko added the enhancement Requests to existing resources that expand the functionality or scope. label Jun 16, 2017
@kfrn commented Jul 19, 2017

Hi there, any update on this PR? :)

@devshorts

Any progress on this? Auto scaling would be really nice

@blaltarriba

+1

@arevell89

+1

1 similar comment
@gleg commented Aug 30, 2017

👍

@Clausewitz45

Imho @stephencoe, I would use a different approach, as I submitted here. If we keep aws_appautoscaling_policy and aws_appautoscaling_target, there will be huge confusion in having the same resource manage two (or more) different types of resource (ECS service, DynamoDB). But I know: coding standards, aligning with the AWS SDK...

@bbernays commented Sep 12, 2017

My short-term workaround is to use a local-exec provisioner to call the CLI once the table has been created.

Below is the code for the workaround:

variable "aws_region" {
  default = "us-east-1"
}

resource "aws_dynamodb_table" "DynamoTableName" {
  # ... table definition ...

  provisioner "local-exec" {
    # A heredoc keeps the multi-line command valid HCL; self.id is used
    # because referencing aws_dynamodb_table.DynamoTableName from inside
    # its own block would create a cycle.
    command = <<EOF
aws application-autoscaling register-scalable-target --service-namespace dynamodb --resource-id "table/${self.id}" --scalable-dimension "dynamodb:table:WriteCapacityUnits" --min-capacity 1 --max-capacity 10 --role-arn arn:aws:iam::<ACCOUNT-ID>:role/service-role/<ROLE-NAME> --region ${var.aws_region}
aws application-autoscaling register-scalable-target --service-namespace dynamodb --resource-id "table/${self.id}" --scalable-dimension "dynamodb:table:ReadCapacityUnits" --min-capacity 1 --max-capacity 10 --role-arn arn:aws:iam::<ACCOUNT-ID>:role/service-role/<ROLE-NAME> --region ${var.aws_region}
aws application-autoscaling put-scaling-policy --service-namespace dynamodb --resource-id "table/${self.id}" --scalable-dimension "dynamodb:table:WriteCapacityUnits" --policy-name "Write-${self.id}" --policy-type "TargetTrackingScaling" --target-tracking-scaling-policy-configuration '{"PredefinedMetricSpecification":{"PredefinedMetricType":"DynamoDBWriteCapacityUtilization"},"ScaleOutCooldown":60,"ScaleInCooldown":60,"TargetValue":50}' --region ${var.aws_region}
aws application-autoscaling put-scaling-policy --service-namespace dynamodb --resource-id "table/${self.id}" --scalable-dimension "dynamodb:table:ReadCapacityUnits" --policy-name "Read-${self.id}" --policy-type "TargetTrackingScaling" --target-tracking-scaling-policy-configuration '{"PredefinedMetricSpecification":{"PredefinedMetricType":"DynamoDBReadCapacityUtilization"},"ScaleOutCooldown":60,"ScaleInCooldown":60,"TargetValue":50}' --region ${var.aws_region}
EOF
  }

  provisioner "local-exec" {
    when = "destroy"

    # Delete the policies before deregistering the targets; deregistering
    # first removes the policies and makes the delete calls fail.
    command = <<EOF
aws application-autoscaling delete-scaling-policy --service-namespace dynamodb --resource-id "table/${self.id}" --scalable-dimension "dynamodb:table:WriteCapacityUnits" --policy-name "Write-${self.id}" --region ${var.aws_region}
aws application-autoscaling delete-scaling-policy --service-namespace dynamodb --resource-id "table/${self.id}" --scalable-dimension "dynamodb:table:ReadCapacityUnits" --policy-name "Read-${self.id}" --region ${var.aws_region}
aws application-autoscaling deregister-scalable-target --service-namespace dynamodb --resource-id "table/${self.id}" --scalable-dimension "dynamodb:table:WriteCapacityUnits" --region ${var.aws_region}
aws application-autoscaling deregister-scalable-target --service-namespace dynamodb --resource-id "table/${self.id}" --scalable-dimension "dynamodb:table:ReadCapacityUnits" --region ${var.aws_region}
EOF
  }
}

UPDATE:

  1. Without the local-exec on destroy, the CloudWatch alarms would be orphaned.
  2. When creating a large number of DynamoDB tables with this method, you have to reduce the number of parallel threads to <3 (e.g. terraform apply -parallelism=2), otherwise creation of the CloudWatch alarms for the auto scaling policies will be throttled.

@Xabur commented Sep 19, 2017

+1

@toamarnath

I am confused.

If I have a DynamoDB table initially created with read/write throughput of 10, and later want to raise both read and write capacity to 1000, how can we do this in Terraform (without using a local-exec provisioner to call the CLI)?

@aterreno commented Nov 1, 2017

The way we are doing this right now is:

resource "aws_dynamodb_table" "table_lab_process" {
  name             = "lab-${var.environment}-process"
  read_capacity    = "${var.process_dynamo_rcu_low}"
  write_capacity   = "${var.process_dynamo_wcu_high}"
  hash_key         = "gid-brand"
  range_key        = "ts"
  stream_enabled   = true
  stream_view_type = "NEW_IMAGE"

  attribute {
    name = "gid-brand"
    type = "S"
  }

  attribute {
    name = "ts"
    type = "S"
  }

  lifecycle {
    ignore_changes = [
      "read_capacity",
      "write_capacity",
    ]
  }
}

This way, TF won't complain when the read/write capacity changes on AWS because of autoscaling, but we can also change the variables and set the capacity again ourselves. Is that more clear now?
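For completeness, the ignore_changes pattern above is usually paired with an aws_appautoscaling_target registration, so that Application Auto Scaling owns the capacity between the configured bounds. A minimal sketch, assuming the DynamoDB support this issue tracks; the var.process_dynamo_rcu_high variable is invented here for illustration:

```hcl
# Let Application Auto Scaling manage read capacity between the two bounds;
# the lifecycle ignore_changes on the table keeps Terraform from fighting it.
resource "aws_appautoscaling_target" "table_lab_process_read" {
  service_namespace  = "dynamodb"
  resource_id        = "table/${aws_dynamodb_table.table_lab_process.name}"
  scalable_dimension = "dynamodb:table:ReadCapacityUnits"
  min_capacity       = "${var.process_dynamo_rcu_low}"
  max_capacity       = "${var.process_dynamo_rcu_high}" # invented variable: the upper bound
}
```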

@toamarnath commented Nov 6, 2017 via email

@ghost commented Apr 10, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thanks!

@ghost ghost locked and limited conversation to collaborators Apr 10, 2020