resource aws_elasticache_replication_group forces new resource when parameter number_cache_clusters changes #236

Closed
hashibot opened this issue Jun 13, 2017 · 7 comments · Fixed by #4504
Labels: enhancement (Requests to existing resources that expand the functionality or scope.), service/elasticache (Issues and PRs that pertain to the elasticache service.)

Comments

@hashibot

This issue was originally opened by @martin-flaregames as hashicorp/terraform#8410. It was migrated here as part of the provider split. The original body of the issue is below.


Not sure if this is really intended by design, but in the AWS console you can add or remove cluster members without having to recreate the whole cluster and replication group.

Terraform Version

Terraform v0.7.1

Affected Resource(s)

  • aws_elasticache_replication_group

Expected Behavior

Terraform should simply reduce or increase the number of cluster members to match the specified value.

This may only be a problem because automatic_failover_enabled = true is required for this to work.

I suspect there is also an edge case when number_cache_clusters = 1, because it depends on how Multi-AZ mode is set up, and automatic_failover_enabled should then be false AFAIK.
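
For illustration, a minimal sketch of how that constraint could be expressed directly in the configuration, assuming Terraform 0.8+ for the conditional expression (the variable name is illustrative and stands in for the template_file data source used in the configuration below):

variable "number_cache_clusters" {
  default = 4
}

resource "aws_elasticache_replication_group" "redis_replication_group" {
  # ... other arguments as in the configuration below ...
  number_cache_clusters = "${var.number_cache_clusters}"

  # Automatic failover requires at least one replica, so it must be
  # disabled when the group has only a single node.
  automatic_failover_enabled = "${var.number_cache_clusters > 1 ? true : false}"
}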

Actual Behavior

The redis replication group and cluster were deleted and created anew.

Steps to Reproduce

  1. Create a replication group with, say, 4 cache cluster members:
resource "aws_elasticache_replication_group" "redis_replication_group" {
  replication_group_id = "${var.replication_group_id}"
  replication_group_description = "${var.replication_group_id} redis replication group"
  node_type = "${var.node_type}"
  number_cache_clusters = 4
  port = "${var.port}"
  automatic_failover_enabled = "${data.template_file.automatic_failover_enabled.rendered}"
  engine_version = "${var.engine_version}"
  parameter_group_name = "${var.parameter_group_name}"
  subnet_group_name = "${aws_elasticache_subnet_group.subnet_group.name}"
  security_group_ids = [ "${module.securitygroup.id}" ]
  maintenance_window = "${var.maintenance_window}"
  apply_immediately = "${var.apply_immediately}"
}
  2. terraform apply (will create the cluster).
  3. Modify number_cache_clusters to another value, such as 2:
resource "aws_elasticache_replication_group" "redis_replication_group" {
  replication_group_id = "${var.replication_group_id}"
  replication_group_description = "${var.replication_group_id} redis replication group"
  node_type = "${var.node_type}"
  number_cache_clusters = 2
  port = "${var.port}"
  automatic_failover_enabled = "${data.template_file.automatic_failover_enabled.rendered}"
  engine_version = "${var.engine_version}"
  parameter_group_name = "${var.parameter_group_name}"
  subnet_group_name = "${aws_elasticache_subnet_group.subnet_group.name}"
  security_group_ids = [ "${module.securitygroup.id}" ]
  maintenance_window = "${var.maintenance_window}"
  apply_immediately = "${var.apply_immediately}"
}
  4. terraform plan reports that the cluster will be destroyed, and this actually happens when running terraform apply:
-/+ module.redis.aws_elasticache_replication_group.redis_replication_group
    apply_immediately:             "true" => "true"
    automatic_failover_enabled:    "true" => "true"
    engine:                        "redis" => "redis"
    engine_version:                "2.8.24" => "2.8.24"
    maintenance_window:            "tue:09:00-tue:10:30" => "tue:09:00-tue:10:30"
    node_type:                     "cache.m3.medium" => "cache.m3.medium"
    number_cache_clusters:         "4" => "2" (forces new resource)
    parameter_group_name:          "default.redis2.8" => "default.redis2.8"
    port:                          "6379" => "6379"
    replication_group_description: "devops-1 replication group" => "devops-1 redis replication group"
    replication_group_id:          "devops-1" => "devops-1"
    security_group_ids.#:          "1" => "1"
    security_group_ids.3159527089: "sg-2b74d751" => "sg-2b74d751"
    security_group_names.#:        "0" => "<computed>"
    snapshot_window:               "07:30-08:30" => "<computed>"
    subnet_group_name:             "zgi-us-vir-devops-1-redeem-sng" => "zgi-us-vir-devops-1-redeem-sng"

Important Factoids

Running in VPC mode.

References

hashibot added the enhancement label Jun 13, 2017
radeksimko added the service/elasticache label Jan 25, 2018
bflad self-assigned this Mar 21, 2018
bflad modified the milestones: v1.12.0, v1.13.0 Mar 21, 2018
bflad modified the milestones: v1.13.0, v1.14.0 Mar 29, 2018
bflad modified the milestones: v1.14.0, v1.15.0 Apr 11, 2018
@bflad commented May 10, 2018

Hi folks 👋 I'm currently working on an implementation for this, which hopefully can get out in the next week or two.

It's probably worth mentioning that the initial implementation will come with some caveats like those noted in the original issue, including but not limited to:

  • Lack of granularity for selecting the naming and preferred availability zone of added replicas
  • Lack of granularity for selecting which replicas are deleted (especially if you are looking for cross-zone coverage) outside of it being a replica and not primary
  • Lack of full plan-time error handling (e.g. attempting to remove the only replica with multi-az enabled, changing the setting on ) -- we may be able to fix some of these later

For full control of replicas, I would suggest checking out the Redis Cluster Mode Disabled example in the aws_elasticache_replication_group resource documentation, which describes a setup using aws_elasticache_cluster resources to fully manage replicas.
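
For reference, a minimal sketch of that pattern (names and values here are placeholders, not taken from this issue): the replication group holds only the primary, and each replica is an explicit aws_elasticache_cluster attached via replication_group_id, which gives full control over replica count, naming, and placement:

resource "aws_elasticache_replication_group" "example" {
  replication_group_id          = "example"
  replication_group_description = "example replication group"
  node_type                     = "cache.m3.medium"
  number_cache_clusters         = 1
  port                          = 6379
}

# Each replica is its own resource; engine settings are inherited from
# the replication group, so only the id and the attachment are specified.
resource "aws_elasticache_cluster" "replica" {
  count = 2

  cluster_id           = "example-replica-${count.index}"
  replication_group_id = "${aws_elasticache_replication_group.example.id}"
}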

@bflad commented May 10, 2018

Enhancement PR submitted: #4504

@bcornils
@bflad did this get merged?

@bflad commented May 25, 2018

@bcornils the linked PR, #4504, is still waiting review.

bflad added this to the v1.21.0 milestone May 30, 2018
@bflad commented May 30, 2018

Initial support for updating number_cache_clusters has been merged and will be released in v1.21.0 of the AWS provider later today.

@bflad commented May 31, 2018

This has been released in version 1.21.0 of the AWS provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading.
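
For anyone upgrading, a minimal sketch of a provider version constraint in the Terraform 0.11-era syntax used in this issue (the region value is only a placeholder):

provider "aws" {
  version = ">= 1.21.0"
  region  = "us-east-1"
}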

@ghost commented Apr 5, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thanks!

ghost locked and limited conversation to collaborators Apr 5, 2020