Allow destroy-time provisioners to access variables #23679
The second use case seems especially important: to be able to define variables in the state of the …. In our use case, the ….
Hi @shaunc,

You can already use the triggers argument for this:

resource "null_resource" "foo" {
  triggers = {
    interpreter = var.local_exec_interpreter
  }

  provisioner "local-exec" {
    when        = destroy
    interpreter = [self.triggers.interpreter, "-c"]
    # ...
  }
}
We don't intend to make an exception for referring to variables, because within descendant modules a variable is just as likely to cause a dependency as anything else. While it is true that root module variables can never depend on anything else by definition, Terraform treats variables the same way in all modules in order to avoid a situation where a module would be invalid as a descendant module but not as a root module.

We'd be interested to hear your use case for customizing the interpreter via a variable like that. Usually a provisioner's interpreter is fixed by whatever syntax the command itself is written in, so it may be more practical to address whatever was causing you to do this in the first place and make it unnecessary.
Thanks for the explanation. The interpreter is customized only because I was contributing to terraform-aws-eks-cluster, so you'd have to ask them about the design; I do imagine there are many more uses for variables in when = destroy provisioners, though.

However, I think that using them in triggers may be exactly what I meant when I talked about "two passes". I'm curious why, in your implementation, referring to variables via triggers doesn't cause loops while referring to them directly does, and whether direct reference could just be syntactic sugar for reference via a trigger. Is there a declarative semantic difference between the two?
I think I'm running into a similar situation for the same reasons, though perhaps more mundane. @shaunc's proposals would seem to have merit. Like him, I also understand the concerns around possibly unknown state at destroy time. While the documentation currently indicates this should be possible, I'm getting the deprecation warning for the ….

This is problematic because the private key file isn't stored locally in the same place on my Mac as on, say, my co-worker's Windows laptop. This crops up in a couple of other places for us with …. The ….

Relatedly, while not a user-supplied variable, …. To elaborate the example briefly, our Vault cluster consists of AWS instances whose names all begin with ….

AFAIK, in none of these cases are we able to store these values in the …. Local environment configuration (i.e. the path to the private key or …) ….
My use case is granting permissions (injecting the connection key) using external variables when running Terraform. How would one accomplish that now?
It would be nice if the error message could be improved. When you read:

…

it's not at all obvious that the root cause is that the variable cannot be interpolated.
Unfortunately, those triggers also cause a new resource to be created if they change; if you have more complex idempotence requirements, this won't work. In my case I compare data from a …. If the trigger value changes, then I need access to the data/config so I can do things. If I were to place the actual data/config into the trigger block, then my ….

I like the fact that Terraform will isolate the destruction provisioner, but that does necessitate an additional block in the ….
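(For illustration, a minimal sketch of the dilemma being described — the external data source, script name, and hashing approach here are illustrative assumptions, not the commenter's actual configuration:)

data "external" "app_config" {
  program = ["./scripts/fetch-config.sh"] # hypothetical script returning a map
}

resource "null_resource" "cleanup" {
  triggers = {
    # Storing only a fingerprint keeps the trigger (and plan diff) small,
    # but then the destroy provisioner can see only the hash, not the data.
    config_hash = sha256(jsonencode(data.external.app_config.result))
  }

  provisioner "local-exec" {
    when    = destroy
    command = "echo config hash was ${self.triggers.config_hash}"
  }
}

Storing the full config in triggers would make it available via self at destroy time, but at the cost of bloating the plan diff and forcing replacement on any change, which is exactly the trade-off being complained about.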
There's also the use case of wanting to use path.module in a destroy-time provisioner: it's not really cool if all my resources get recreated when the path of the module changes (which is what would happen if I stored this value in the triggers).
@sdickhoven the ….
I like the idea of a separate block of variables that don't trigger re-creation on change, but are usable in destroy.
I'm having a similar issue with …, which also results in ….

This wasn't the case with 0.11, and I'm hitting this now with 0.12 (upgrading at the moment).
I'm using a destroy provisioner to uninstall an application on a remote device, and the only way to provide connection details currently is to hard-code them. The ….
Also, would someone from the TF team mind providing an example of how this could produce a circular reference? It's not really clear to me why preventing direct use, while allowing indirect use through triggers, makes a difference.
Still seeing warnings in the create_layer module due to references to path.module. This issue is currently being discussed here: hashicorp/terraform#23679
I have yet another use case where I get this warning and I can't tell how to avoid it. I have ….
Our use case is bastion hosts/users for remote-exec provisioners. They work fine for the forward provisioners, but not for the destroy-time provisioner. The bastion user is calculated from the creation of the bastion host with Terraform, which in turn depends on which cloud operating-system image the end user selects. Again, I can't see how to get around this without splitting the runs into two.
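(A minimal sketch of how the triggers workaround might apply to this case — the resource names and the way the bastion user is derived are assumptions, not the commenter's actual configuration:)

resource "null_resource" "app_cleanup" {
  triggers = {
    host         = aws_instance.app.private_ip
    bastion_host = aws_instance.bastion.public_ip
    bastion_user = local.bastion_user # hypothetically derived from the selected OS image
  }

  provisioner "remote-exec" {
    when   = destroy
    inline = ["sudo /opt/app/cleanup.sh"]

    connection {
      type         = "ssh"
      host         = self.triggers.host
      user         = self.triggers.bastion_user
      bastion_host = self.triggers.bastion_host
      bastion_user = self.triggers.bastion_user
    }
  }
}

Because the connection block only references self.triggers, the destroy provisioner avoids direct references to other resources; the cost is that any change to the bastion details forces this resource to be replaced.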
We are also running into something similar. Following @apparentlymart's suggestion in this thread, I tried to rewrite the following problematic template:

resource "null_resource" "kops_delete_flag" {
  triggers = {
    cluster_delete = module.kops.should_recreate
  }

  provisioner "local-exec" {
    when    = destroy
    command = <<CMD
if kops get cluster "${local.cluster_domain_name}" --state "s3://${local.kops_state_bucket}" 2> /dev/null; then
  kops delete cluster --state "s3://${local.kops_state_bucket}" --name "${local.cluster_domain_name}" --yes
fi
CMD
  }
}

into:

resource "null_resource" "kops_delete_flag" {
  triggers = {
    cluster_delete      = module.kops.should_recreate
    cluster_domain_name = local.cluster_domain_name
    kops_state_bucket   = local.kops_state_bucket
  }

  provisioner "local-exec" {
    when    = destroy
    command = <<CMD
if kops get cluster "${self.triggers.cluster_domain_name}" --state "s3://${self.triggers.kops_state_bucket}" 2> /dev/null; then
  kops delete cluster --state "s3://${self.triggers.kops_state_bucket}" --name "${self.triggers.cluster_domain_name}" --yes
fi
CMD
  }
}

This creates two problems: ….
Here's a use case for variable access: I need to work around #516 / https://support.hashicorp.com/hc/en-us/requests/19325 to avoid Terraform leaking the database master password into the state file.

Note for HashiCorp staff: all of this would be unnecessary if Terraform had the equivalent of CloudFormation's {{resolve:…}} dynamic references, or something similar to make ….

The solution I have is to store that value in SSM using KMS and have a local-exec provisioner populate the RDS instance as soon as it's created. This works well, except that it's now dependent on that destroy provisioner to clean up the SSM parameter before deleting the KMS key.

resource "aws_kms_key" "SharedSecrets" {
  …

  # See the note in the README for why we're generating the secret which this
  # key is used for outside of Terraform. The new password must comply with the
  # restrictions in the RDS documentation:
  # https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Limits.html
  provisioner "local-exec" {
    command = "./scripts/set-db-password-parameter ${aws_kms_key.SharedSecrets.key_id} ${local.db_password_ssm_parameter_name}"
  }

  provisioner "local-exec" {
    when    = destroy
    command = "aws ssm delete-parameter --name ${local.db_password_ssm_parameter_name}"
  }
}
} resource "aws_db_instance" "postgres" {
…
# This is the other side of the dance described in the README to avoid leaking
# the real password into the Terraform state file. We set the DB master
# password to a temporary value and change it immediately after creation. Note
# that both this and the new password must comply with the AWS RDS policy
# requirements:
# https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Limits.html
password = "rotatemerotatemerotateme"
provisioner "local-exec" {
command = "./scripts/set-db-password-from-ssm ${self.identifier} ${local.db_password_ssm_parameter_name}"
}
} |
I had the same problem as @Bowbaq. Presumably it's because the saved tfstate doesn't have an old value saved for the new trigger. (In my case, the local variable I was referencing is based on ….)
Hello, here is my use case:

…

I'm only referencing mandatory variables (${var.region} and ${terraform.workspace}), so where is the cyclic-dependency risk? I have used this kind of thing many times and have never gotten a cycle out of it (creating/destroying infrastructure ten times a day). Furthermore, is it really necessary to mark such a useful way of working as "deprecated"?
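(For illustration, the kind of configuration being described might look like the following sketch; the script name is invented:)

resource "null_resource" "workspace_cleanup" {
  provisioner "local-exec" {
    when = destroy
    # References to var.region and terraform.workspace in a destroy
    # provisioner are what now produce the deprecation warning.
    command = "./scripts/cleanup.sh ${var.region} ${terraform.workspace}"
  }
}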
Colleagues, is there any decision on the topic from the Terraform devs? We're facing pretty much the same issue: some cleanup is required before destroying a VM, and we need to provide a root password that is constantly being changed by a PAM system.

It would be great to understand whether this is ever going to be fixed or whether it's a solid "no".
@cathode911 I think they have settled on no; at the very least, it's gone on long enough that I had to implement a workaround months ago. Now I only use the destroy-time provisioner to trigger an Ansible playbook, which is capable of fetching the correct credentials and executing my cleanup. As a side benefit, it doesn't put anything sensitive into the Terraform state.
@cathode911 Like @ajchiarello, our intent is to work around it. Once I can find the time, I'm going to implement an AWS CloudWatch mechanism that will delete a node from Chef when the instance is destroyed, rather than trying to make that call from Terraform to Chef in a destroy-time provisioner. In your case, maybe pulling the password as a ….

A little bit outside of the direct question you're asking, perhaps you could avoid the situation by ….

Terraform's changes in this area of provisioners, both create- and destroy-time, have made us think harder about how we're handling things that live outside Terraform's control, like our Chef configuration management. I know at least one of Terraform's ideas is to use Packer to make an image for everything, but that's not going to happen for us, so we've had to look at other ways, in combination with Terraform, to manage our stuff.
Thanks for the suggestions, colleagues!
@sdesousa86 This is a shell-injection vulnerability via AWS tags. To do it safely, you have to pass the variables through the environment. Please modify your solution along these lines:
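(A minimal sketch of that pattern — the variable name and filter are illustrative. Passing the value through environment means the shell expands it as data rather than parsing it as code:)

variable "instance_name_tag" {
  type = string
}

resource "null_resource" "cleanup" {
  triggers = {
    name_tag = var.instance_name_tag
  }

  provisioner "local-exec" {
    when = destroy
    # $NAME_TAG is expanded by the shell at run time; Terraform never
    # interpolates the tag value into the command string itself.
    command = "aws ec2 describe-instances --filters \"Name=tag:Name,Values=$NAME_TAG\""
    environment = {
      NAME_TAG = self.triggers.name_tag
    }
  }
}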
Also tagging @baurmatt, who may have followed this advice.
That is a good idea.
One option that I found convenient for this is to store whatever I needed as an output, and call ….

P.S.: It appears that this method doesn't work with ….
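(A sketch of one way this pattern could look, assuming the value is read back with terraform output -raw — an assumption, since the original command reference is missing; the output name is illustrative:)

output "ssm_parameter_name" {
  value = local.db_password_ssm_parameter_name # hypothetical local value
}

resource "null_resource" "cleanup" {
  provisioner "local-exec" {
    when = destroy
    # Read the value back out of the state at destroy time instead of
    # interpolating it into the provisioner block.
    command = "aws ssm delete-parameter --name \"$(terraform output -raw ssm_parameter_name)\""
  }
}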
Hi all, I'm having this same issue; it's something that takes ten extra lines of code in the resource. I've added this to the null_resource provider, which is where I had faced the issue, though similar changes could be implemented for any resource. This allows us to add an extra argument (I call it non_triggers) that functions identically to triggers but does not force a re-create.

internal/provider/resource.go

Line 32, added lines to the schema:

…

Line 45, added function:

…

You can find the changes under the …. Please, @teamterraform, add this functionality. I've tested this working in general cases with a custom provider I created. I fully understand other work might be necessary to add this, or Terraform might not want to allow it for some reason, but it works.
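(For clarity, a sketch of what the proposed usage might look like — non_triggers is hypothetical and not part of the released null provider; the kops names are borrowed from the earlier example in this thread:)

resource "null_resource" "cleanup" {
  triggers = {
    cluster_id = module.kops.cluster_id # changes here force replacement
  }

  # Hypothetical block: stored in state and readable at destroy time,
  # but changes here would NOT force a new resource.
  non_triggers = {
    state_bucket = local.kops_state_bucket
  }

  provisioner "local-exec" {
    when    = destroy
    command = "kops delete cluster --state s3://${self.non_triggers.state_bucket} --yes"
  }
}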
I recently realized in a different context that it is possible to ignore_changes on a particular trigger, which sometimes results in a useful workaround:

variable "foo" { default = "original_foo" }
variable "bar" { default = "original_bar" }

resource "null_resource" "a" {
  triggers = {
    foo = var.foo
    bar = var.bar
  }

  lifecycle {
    ignore_changes = [triggers["bar"]]
  }

  provisioner "local-exec" {
    when    = destroy
    command = "echo foo was ${self.triggers.foo} and bar was ${self.triggers.bar}"
  }
}

output "foo" {
  value = resource.null_resource.a.triggers.foo
}

output "bar" {
  value = resource.null_resource.a.triggers.bar
}

After applying once, changing only the ignored trigger ("bar") causes no replacement. Later on, changing "foo" as well forces replacement, and the destroy provisioner still sees the values stored at creation:

$ terraform apply
null_resource.a: Refreshing state... [id=6595475011890936325]
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
-/+ destroy and then create replacement
Terraform will perform the following actions:
# null_resource.a must be replaced
-/+ resource "null_resource" "a" {
~ id = "6595475011890936325" -> (known after apply)
~ triggers = { # forces replacement
~ "bar" = "original_bar" -> "new_bar"
~ "foo" = "original_foo" -> "new_foo"
}
}
Plan: 1 to add, 0 to change, 1 to destroy.
Changes to Outputs:
~ bar = "original_bar" -> "new_bar"
~ foo = "original_foo" -> "new_foo"
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
null_resource.a: Destroying... [id=6595475011890936325]
null_resource.a: Provisioning with 'local-exec'...
null_resource.a (local-exec): Executing: ["/bin/sh" "-c" "echo foo was original_foo and bar was original_bar"]
null_resource.a (local-exec): foo was original_foo and bar was original_bar
null_resource.a: Destruction complete after 0s
null_resource.a: Creating...
null_resource.a: Creation complete after 0s [id=1459100522563100161]
Apply complete! Resources: 1 added, 0 changed, 1 destroyed.
Outputs:
bar = "new_bar"
foo = "new_foo" The advantage I see of the proposed |
I tweaked the suggestion in the last comment. In my use case, I just need to pass some constant vars to the destroy provisioner. Here is what worked for me:

resource "null_resource" "a" {
  triggers = {
    invokes_me_everytime = uuid()
    foo                  = local.some_constant
    bar                  = local.some_other_constant
  }

  # You might need to add the following if this is a synchronous cleanup provisioner:
  # depends_on = [<SOME RESOURCE THAT IS ULTIMATELY MAPPED TO MY CONSTANT VARS>]

  provisioner "local-exec" {
    when    = destroy
    command = "/bin/bash some-script.sh"
    environment = {
      foo = self.triggers.foo
      bar = self.triggers.bar
    }
  }
}

Explanation:
By referring to the trigger content, the environment and command do not directly refer to external resources. This makes the Terraform dependency resolution happy, because it knows about the lifetime of resources referenced in triggers. Credit for this solution goes to the post here: hashicorp/terraform#23679 (comment)
Is there any reason why this fails when I try to add a …? What a mess, just to prevent bad practices while severely restricting very basic use cases…
@secureoptions, your solution works perfectly, and it also hides the sensitive variables from the debug console. 👍
Still, it's not possible to pass a variable to a non-null_resource resource (triggers are not available there), and when tags are not available either (in my case it's a helm_release resource). I've implemented it in the following manner (note the command and ${each.key}):

variable "karpenter_provisioner_name" {
  type    = string
  default = "default"
}

resource "helm_release" "karpenter" {
  for_each         = toset(split(",", var.karpenter_provisioner_name))
  name             = "karpenter"
  chart            = "./templates/karpenter"
  namespace        = "karpenter"
  create_namespace = true
  values           = [data.template_file.karpenter.rendered]

  provisioner "local-exec" {
    when    = destroy
    command = "aws ec2 terminate-instances --instance-ids $(aws ec2 describe-instances --query 'Reservations[].Instances[].InstanceId' --filters 'Name=tag:Name,Values=karpenter.sh/provisioner-name/${each.key}' --output text)"
  }
}

resource "kubectl_manifest" "karpenter_provisioner" {
  yaml_body = <<-YAML
    apiVersion: karpenter.sh/v1alpha5
    kind: Provisioner
    metadata:
      name: ${var.karpenter_provisioner_name}
    spec:
      requirements:
        ...
  YAML
}

Is there a better way to do that?
@jbardin Many thanks for your solution; it's exactly what I was looking for. I needed a way to deregister and delete task definition revisions in ECS after destroying my infra with Terraform.
Use case: remote server operations through Terraform.

…

There should be support for short-lived credentials in destroy-time provisioners as well 🙏
I am trying to use a destroy-time provisioner within a module to kick off an ansible-playbook run for some cleanup tasks when a vSphere virtual machine is destroyed. To do this I need to reference variables in the module to pass to the ansible-playbook command. I have now discovered this used to be possible but no longer is, with many examples in this issue showing that it should not have been removed and that any issues arising from circular dependencies are the end user's problem to fix. Removing the ability to reference variables makes what should be a simple task incredibly painful, because in my case it can no longer be done within the module, and must instead be done as a separate resource alongside each instance of the module (because we also cannot put provisioners inside module resources for some reason, which would have been better, if not ideal). Please reinstate the ability to reference variables in destroy-time provisioners; it is bonkers that this has been removed when you have been presented with multiple cases that prove we need this functionality.
Sorry that the direction of this issue has remained unclear. As issues accumulate replies, the important updates tend to get hidden. The removal of an existing ability was not undertaken lightly, and much consideration was put into making the decision. You can see the associated PRs and major issues resolved here (#24083) and here (#23252), with more background here (#23559).

The assumption that users can avoid the problems on their own is flawed, because the errors this causes are not directly visible within the configuration. That last PR has a diagram of the simplest case which triggers a cycle; the addition of ….

Since there is no version of this feature which can be arbitrarily composed within modules, or can be restricted in such a way that users don't find themselves with unresolvable states that can't be applied, it's unlikely that it will be reinstated as it was before. Given that destroy provisioners are already inadequate for what users want in many cases, and provisioners in general are a tool of last resort, a more general solution is likely only going to arise from a new workflow of some sort. The fact that some new workflow could be created in the future to handle this type of situation is primarily why the issue has remained open.

As this issue has gotten quite long, I'm going to lock it to reduce noise for anyone still following. New ideas can be submitted separately and linked back here.
Current Terraform Version
Use-cases
Using a local-exec provisioner with when = destroy in a null_resource, to remove local changes; currently, the setup provisioner has:

interpreter = [var.local_exec_interpreter, "-c"]
Attempted Solutions
Include:
interpreter = [var.local_exec_interpreter, "-c"]
in the 'when=destroy' provisioner.
This works for the moment, but produces a deprecation warning.
Proposal
The most straightforward way to address this would be to allow variables to be used in destroy provisioners. Direct use of passed-in variables does not present the danger of cyclic dependencies created by resource references.
A more ambitious alternate solution would be to allow resources to declare arbitrary attributes at create time, which could be referenced at destroy time. E.g.:
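(A hypothetical sketch of such declared attributes, reusing the interpreter use case above — the saved_attributes block name is invented and not real Terraform syntax:)

resource "null_resource" "foo" {
  # Hypothetical block: values evaluated and saved into state at create time,
  # then available via self at destroy time without forcing replacement.
  saved_attributes = {
    interpreter = var.local_exec_interpreter
  }

  provisioner "local-exec" {
    when        = destroy
    interpreter = [self.saved_attributes.interpreter, "-c"]
    command     = "./cleanup.sh"
  }
}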
This would allow the provisioner (or other resources) to access values calculated in state before application. It would require two-pass evaluation (AFAIK), and thus a much more ambitious change to the code base. Even if the community likes this, it would seem quicker (and not really hacky) to allow reference to passed-in variables in destroy provisioners.
References
#23675