
Aurora launches instances in at least 3 AZ even if less are specified #1111

Closed
Gufran opened this issue Jul 11, 2017 · 9 comments
Labels
bug Addresses a defect in current functionality. service/rds Issues and PRs that pertain to the rds service.

Comments

@Gufran
Contributor

Gufran commented Jul 11, 2017

Terraform Version

v0.9.11

Affected Resource(s)


  • aws_rds_cluster
  • aws_rds_cluster_instance

Terraform Configuration Files

resource "random_id" "rds_password" {
  byte_length = 20
  keepers {
    version = "${var.password_version}"
  }
}

resource "aws_rds_cluster" "rds" {
  cluster_identifier              = "rds-cluster"
  master_password                 = "${random_id.rds_password.hex}"
  master_username                 = "root"
  availability_zones              = ["us-east-1a", "us-east-1b"] # <- Problem
}

resource "aws_rds_cluster_instance" "master" {
  identifier              = "rds-instance-1"
  cluster_identifier      = "${aws_rds_cluster.rds.id}"
  instance_class          = "db.r3.large"
  promotion_tier          = "0"
}

Expected Behavior

A new RDS cluster is created across AWS availability zones "us-east-1a" and "us-east-1b". After creating the cluster, Terraform should not report any diff on subsequent runs of the same configuration.

Actual Behavior

Aurora replicates the data 6 ways across 3 availability zones. If fewer than three availability zones are provided at cluster creation, Aurora picks additional AZs to bring the count up to 3.
Terraform is not aware of this behaviour and sees the extra availability zones as a diff in the configuration. A subsequent apply will therefore destroy the entire cluster and try to re-create it across the provided AZs, again without success.

Steps to Reproduce

Create an RDS cluster across 2 AZs, then run terraform plan to see the diff.
Run terraform apply, then plan again to repeat the behaviour.

This is not an actual bug in the provider, but at the very least it can leave Terraform in a state where the configuration cannot be used, because of the possibility of complete data loss.

@radeksimko radeksimko added the bug Addresses a defect in current functionality. label Oct 23, 2017
@rfink

rfink commented Nov 29, 2017

Also having this problem in 0.10.8 and 0.11.0

@bflad bflad added the service/rds Issues and PRs that pertain to the rds service. label Jan 28, 2018
@awolski

awolski commented Mar 15, 2018

Also just hit this issue in 1.10.0.

Update:

We got around the problem by removing the availability_zones attribute from the cluster and instead adding availability_zone to the cluster instances using count (where var.availability_zones here is ["eu-west-2a","eu-west-2b"]):

resource "aws_rds_cluster" "db" {
  cluster_identifier           = "db-cluster"
  db_subnet_group_name         = "db-subnet"
  final_snapshot_identifier    = "db-final-snapshot"
  database_name                = "dbname"
  master_username              = "user"
  master_password              = "pass"
  vpc_security_group_ids       = [ "vpc-a1b2c3d4" ]
}

resource "aws_rds_cluster_instance" "db" {
  count = "${length(var.availability_zones)}" 

  identifier           = "db-${count.index + 1}"
  cluster_identifier   = "${aws_rds_cluster.db.id}"
  instance_class       = "db.t2.small"
  db_subnet_group_name = "db-subnet"
  availability_zone    = "${element(var.availability_zones, count.index)}"
}

@rhardouin

Here is the workaround if instances > AZs:

locals {
  num_az = "${length(var.availability_zones)}"
}

resource "aws_rds_cluster_instance" "cluster_instances" {
  count = "${var.instance_count}"
  ...
  availability_zone       = "${element(var.availability_zones, count.index % local.num_az)}"
}

@bbhenry

bbhenry commented Apr 11, 2018

@awolski I tried the workaround but am still getting the same issue: the created cluster still reports being in all availability zones. I only specified eu-central-1a, but it actually produced the following result:

-/+ module.rds-aurora.aws_rds_cluster.this (new resource required)
      id:                                            "warehouse-cluster" => <computed> (forces new resource)
      apply_immediately:                             "" => <computed>
      availability_zones.#:                          "3" => "1" (forces new resource)
      availability_zones.1126047633:                 "eu-central-1a" => "eu-central-1a"
      availability_zones.2903539389:                 "eu-central-1c" => "" (forces new resource)
      availability_zones.3658960427:                 "eu-central-1b" => "" (forces new resource)
      backup_retention_period:                       "7" => "7"

@esteban-angee

Same here with 1.14.1

@errm

errm commented May 4, 2018

We bumped into the same issue, but removing availability_zones from the cluster and adding availability_zone = "${element(var.availability_zones, count.index % length(var.availability_zones))}" to the instances did the trick. There seem to be no downsides to this strategy so far. It also makes it much simpler to start managing clusters that were created manually in the console, since those always seem to list their AZs in a random order.
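
Roughly, the pattern looks like this (a minimal sketch; the resource names, credentials, and the two-zone variable default are illustrative assumptions, not our actual config):

variable "availability_zones" {
  default = ["eu-west-2a", "eu-west-2b"]
}

# No availability_zones on the cluster; Aurora picks the zones itself.
resource "aws_rds_cluster" "example" {
  cluster_identifier = "example-cluster"
  master_username    = "root"
  master_password    = "pass"
}

# Pin each instance to a zone instead, cycling through the list.
resource "aws_rds_cluster_instance" "example" {
  count = "${length(var.availability_zones)}"

  identifier         = "example-${count.index + 1}"
  cluster_identifier = "${aws_rds_cluster.example.id}"
  instance_class     = "db.r3.large"
  availability_zone  = "${element(var.availability_zones, count.index % length(var.availability_zones))}"
}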

@bflad
Contributor

bflad commented Sep 26, 2018

Hi folks! 👋 In this particular case, I would argue that Terraform is working as designed.

Aurora RDS clusters automatically replicate across 3 Availability Zones. If you purposefully configure fewer than 3 Availability Zones, Terraform can and should report the difference, as your Terraform configuration differs from reality.

If you would instead prefer to have Terraform ignore this difference, you can use ignore_changes to hide that attribute change during plan/apply, e.g.

resource "aws_rds_cluster" "example" {
  # ... other configuration ...
  lifecycle {
    ignore_changes = ["availability_zones"]
  }
}

Otherwise, if you would prefer to manage which specific Availability Zones your cluster instances live in, the above trick mentioned in #1111 (comment) seems appropriate. 👍

@bflad bflad closed this as completed Sep 26, 2018
@errm

errm commented Sep 27, 2018

In regions with only 3 AZs it makes sense to just leave this alone, since whatever you do your cluster will replicate 3 ways by design.

It does make sense to use my trick e.g. in us-east-1, where there are more than 3 AZs, if you need to target particular zones...
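
For example, a sketch for us-east-1 (the zone list, count, and names are illustrative assumptions; aws_rds_cluster.example is the cluster with ignore_changes from the comment above):

locals {
  target_azs = ["us-east-1a", "us-east-1c", "us-east-1d"]
}

# Pin each instance to one of the chosen zones; the cluster's own
# availability_zones diff is hidden by ignore_changes.
resource "aws_rds_cluster_instance" "example" {
  count = 3

  identifier         = "example-${count.index + 1}"
  cluster_identifier = "${aws_rds_cluster.example.id}"
  instance_class     = "db.r3.large"
  availability_zone  = "${element(local.target_azs, count.index)}"
}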

@ghost

ghost commented Apr 3, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thanks!

@ghost ghost locked and limited conversation to collaborators Apr 3, 2020