
Circular dependency when provisioner for destroy references array member for AWS #17944

Closed
artur-fijalkowski opened this issue Apr 27, 2018 · 7 comments · Fixed by #24083
Labels
bug, config, v0.11 (issues, primarily bugs, reported against v0.11 releases), v0.12 (issues, primarily bugs, reported against v0.12 releases)

Comments

@artur-fijalkowski

To resolve a problem with disk attachment on AWS (a volume attachment cannot be destroyed while the disk is mounted at the OS level), I created the following code:

resource "aws_volume_attachment" "attachement_myservice" {
  count			=	"${length(var.network_myservice_subnet_ids)}"
  device_name   = "/dev/xvdg"
  volume_id     =   "${element(aws_ebs_volume.ebs_myservice.*.id, count.index)}"
  instance_id   =   "${element(aws_instance.myservice.*.id, count.index)}"

  provisioner "local-exec" {
    command = "aws ec2 stop-instances --instance-ids ${element(aws_instance.myservice.*.id, count.index)} --region ${var.region} && sleep 30"
    when = "destroy"
  }
}

When trying to rebuild the instances (because the AMI the instances depend on changed), Terraform reports a circular dependency for this code. Experiments show that the problem is the shell command: when the reference ${element(aws_instance.myservice.*.id, count.index)} is replaced with something different, Terraform works like a charm. My production solution is to use the AWS CLI to first find the instance IDs and then use them to stop the instances; a sketch of that approach is shown below.
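A rough sketch of that CLI-based workaround (reconstructed here, not code from the original report; the Name tag filter myservice-* and the running-state filter are assumptions about how the instances are tagged):

resource "aws_volume_attachment" "attachement_myservice" {
  count       = "${length(var.network_myservice_subnet_ids)}"
  device_name = "/dev/xvdg"
  volume_id   = "${element(aws_ebs_volume.ebs_myservice.*.id, count.index)}"
  instance_id = "${element(aws_instance.myservice.*.id, count.index)}"

  # The destroy provisioner resolves the instance IDs through the AWS CLI
  # instead of interpolating aws_instance.myservice.*.id, so it adds no
  # graph edge back to the instances being rebuilt.
  # The tag filter below is an assumed naming convention.
  provisioner "local-exec" {
    command = "aws ec2 stop-instances --region ${var.region} --instance-ids $(aws ec2 describe-instances --region ${var.region} --filters 'Name=tag:Name,Values=myservice-*' 'Name=instance-state-name,Values=running' --query 'Reservations[].Instances[].InstanceId' --output text) && sleep 30"
    when    = "destroy"
  }
}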

Most probably the root cause is a wrong interpretation of the interpolation of an array member when building the graph for the rebuild.

The problem doesn't occur when terraform destroy is executed, only on a terraform apply that is expected to rebuild the instances.

Terraform Version

terraform -v
Terraform v0.11.3

Your version of Terraform is out of date! The latest version
is 0.11.7. You can update by downloading from www.terraform.io/downloads.html

Terraform Configuration Files

...


@jbardin
Member

jbardin commented May 9, 2018

Hi @artur-fijalkowski,

Thanks for filing this.
Is it possible for you to get the debug log output when you encounter the cycle? I haven't been able to reproduce this yet, though I may have oversimplified things a bit.

Also, have you tested this with the current release? There were a few internal changes to the evaluation of destroy provisioners recently which may have indirectly fixed this.

@jbardin added the waiting-response (An issue/pull request is waiting for a response from the community) label on May 9, 2018
@artur-fijalkowski
Author

artur-fijalkowski commented May 9, 2018

@jbardin

I will try to do it. The reason I haven't yet is that this is part of a bigger solution and I had no time to extract it for testing. But if you can't reproduce it, I'll have to do it anyway.

@jbardin
Member

jbardin commented May 9, 2018

Hi @artur-fijalkowski,

I had some more time to experiment and expanded this out to reproduce the cycle.
Thanks!

@jbardin removed the waiting-response (An issue/pull request is waiting for a response from the community) label on May 9, 2018
@jbardin
Member

jbardin commented May 9, 2018

A minimal test case for investigation:

resource "null_resource" "a" {
  triggers = {
    "b" = "${null_resource.b.id}"
  }

  provisioner "local-exec" {
    command = "echo ${null_resource.b.id}"
    when = "destroy"
  }
}

resource "null_resource" "b" {
}

Tainting null_resource.b will trigger the cycle, because null_resource.a references it in its configuration as well as in the destroy provisioner.

* Cycle: null_resource.b (destroy), null_resource.b, null_resource.a (destroy)
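A possible way to avoid that reference (a sketch, not something proposed in this thread, and not verified against 0.11 specifically): since the value is already captured in triggers, the destroy provisioner can read it back through self instead of through null_resource.b, which is the pattern later Terraform releases require for destroy-time provisioners.

resource "null_resource" "a" {
  triggers = {
    "b" = "${null_resource.b.id}"
  }

  provisioner "local-exec" {
    # Reading the value back through self avoids a direct reference to
    # null_resource.b from the destroy provisioner, so the provisioner
    # no longer adds an edge to the other resource.
    command = "echo ${self.triggers.b}"
    when    = "destroy"
  }
}

resource "null_resource" "b" {
}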

@artur-fijalkowski
Author

@jbardin exactly, this is my case simplified to the maximum ;)

I guess the root cause is that the destroy provisioner should not enforce a dependency on the resources referenced in it. Because in general that is what it is for: to remove this dependency prior to destruction ;)

@jbardin
Member

jbardin commented May 9, 2018

An interesting workaround here is adding create_before_destroy to the affected resource.
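Applied to the minimal test case above, that would look roughly like this (a sketch; reading "the affected resource" as the resource being replaced, null_resource.b, is my interpretation):

resource "null_resource" "a" {
  triggers = {
    "b" = "${null_resource.b.id}"
  }

  provisioner "local-exec" {
    command = "echo ${null_resource.b.id}"
    when    = "destroy"
  }
}

resource "null_resource" "b" {
  lifecycle {
    # The new instance of b is created before the old one is destroyed,
    # which inverts the ordering that produced the cycle.
    create_before_destroy = true
  }
}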

The reason for the cycle is that destroy dependencies normally flow in reverse: node a is created before b, so b is destroyed before a. The problem arises when you add a destroy provisioner, which is an action that depends on the referenced resources existing, so the dependencies of the provisioner flow in the opposite direction of the destroy.

The dependencies showing the cycle look like:

[dependency graph image showing the cycle]

And with create_before_destroy, the graph is converted to:

[dependency graph image with create_before_destroy applied]

We'll have to determine how we want to handle this after the next major release. I'm not sure if there's a good solution other than inverting the order, but at least we know it can be detected and reported fairly clearly to the user.

@apparentlymart added the config label and removed the core label on Nov 8, 2018
@hashibot added the v0.11 (issues, primarily bugs, reported against v0.11 releases) and v0.12 (issues, primarily bugs, reported against v0.12 releases) labels on Aug 29, 2019
@ghost

ghost commented Apr 1, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@ghost locked and limited conversation to collaborators on Apr 1, 2020