Aurora launches instances in at least 3 AZs even if fewer are specified #1111
Comments
Also having this problem in 0.10.8 and 0.11.0
Also just hit this issue in 1.10.0.

Update: We got around the problem by removing the availability_zones argument from the aws_rds_cluster resource and pinning each instance to a zone instead:

resource "aws_rds_cluster" "db" {
  cluster_identifier        = "db-cluster"
  db_subnet_group_name      = "db-subnet"
  final_snapshot_identifier = "db-final-snapshot"
  database_name             = "dbname"
  master_username           = "user"
  master_password           = "pass"
  vpc_security_group_ids    = ["vpc-a1b2c3d4"]
}

resource "aws_rds_cluster_instance" "db" {
  count                = "${length(var.availability_zones)}"
  identifier           = "db-${count.index + 1}"
  cluster_identifier   = "${aws_rds_cluster.db.id}"
  instance_class       = "db.t2.small"
  db_subnet_group_name = "db-subnet"
  availability_zone    = "${element(var.availability_zones, count.index)}"
}
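The snippet above also assumes an availability_zones variable whose definition was not included in the comment. A minimal sketch in the same 0.11-era syntax, with an illustrative default:

variable "availability_zones" {
  description = "Availability zones to pin the cluster instances to"
  type        = "list"
  default     = ["us-east-1a", "us-east-1b"]
}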
Here is the workaround if the number of instances is greater than the number of availability zones:

locals {
  num_az = "${length(var.availability_zones)}"
}

resource "aws_rds_cluster_instance" "cluster_instances" {
  count = "${var.instance_count}"
  ...
  availability_zone = "${element(var.availability_zones, count.index % local.num_az)}"
}
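With the modulo, instance_count can exceed the number of availability zones: with two zones and instance_count = 4, for example, the instances land in zone indexes 0, 1, 0, 1. The snippet also relies on an instance_count variable that is not shown in the comment; a minimal sketch with an illustrative default:

variable "instance_count" {
  description = "Number of Aurora cluster instances to create"
  default     = 4
}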
@awolski I tried the workaround but I am still getting the same issue: the cluster that is created still reports being in all availability zones, even though I have only specified a subset of them.
Same here with 1.14.1
We bumped into the same issue, but removing the availability_zones argument from the cluster resource resolved it for us.
Hi folks! 👋

In this particular case, I would argue that Terraform is working as designed. Aurora RDS clusters will automatically replicate across 3 Availability Zones. If you are purposefully configuring fewer than 3 Availability Zones, Terraform can and should report the difference, since your Terraform configuration differs from reality. If you would instead prefer to have Terraform ignore this difference, you can use the ignore_changes lifecycle argument:

resource "aws_rds_cluster" "example" {
  # ... other configuration ...

  lifecycle {
    ignore_changes = ["availability_zones"]
  }
}

Otherwise, if you would prefer to manage which specific Availability Zones your cluster instances live in, the trick mentioned in #1111 (comment) above seems appropriate. 👍
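For anyone reading this on a newer Terraform: from 0.12 onward, ignore_changes entries are written as bare attribute references rather than quoted strings. A sketch of the equivalent block in the newer syntax:

resource "aws_rds_cluster" "example" {
  # ... other configuration ...

  lifecycle {
    # Terraform 0.12+ syntax: unquoted attribute reference instead of a string
    ignore_changes = [availability_zones]
  }
}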
In regions with only 3 AZs it makes sense to just leave this alone, since whatever you do your cluster will replicate 3 ways by design. It does make sense to use my trick, e.g. in us-east-1 where there are more than 3 AZs, if you need to target particular zones...
Terraform Version
v0.9.11
Affected Resource(s)
aws_rds_cluster
Terraform Configuration Files
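The configuration files were not included in the report. A minimal configuration along the following lines reproduces the behaviour; identifiers, credentials, and the instance class are illustrative placeholders:

resource "aws_rds_cluster" "example" {
  cluster_identifier  = "example-cluster"
  availability_zones  = ["us-east-1a", "us-east-1b"]
  database_name       = "exampledb"
  master_username     = "exampleuser"
  master_password     = "examplepassword"
  skip_final_snapshot = true
}

resource "aws_rds_cluster_instance" "example" {
  count              = 2
  identifier         = "example-${count.index + 1}"
  cluster_identifier = "${aws_rds_cluster.example.id}"
  instance_class     = "db.r4.large"
}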
Expected Behavior
A new RDS cluster created across AWS availability zones "us-east-1a" and "us-east-1b". After creating the cluster, Terraform should not report any diff on subsequent runs over the same configuration.
Actual Behavior
Aurora replicates the data 6 ways across 3 availability zones. If fewer than three availability zones are provided on cluster creation, Aurora will pick additional AZs to bring the count up to 3.
Terraform is not aware of this behaviour and sees the extra availability zones as a diff in the configuration. A subsequent apply will therefore delete the entire cluster and try to re-create it across the provided AZs, without success.
Steps to Reproduce
Create an RDS cluster across 2 AZs, then run terraform plan again to see the diff. Run terraform apply, and then plan again to repeat the behaviour.
This is not an actual bug in the provider, but it can, at the very least, leave Terraform in a state where the configuration cannot be used because of the possibility of complete data loss.