feature request to extend/change lifecycle.prevent_destroy #2159
Hi @ketzacoatl - thanks for the well-explained use case. Here's what I believe is the simplest way to accomplish what you're looking for: …

What do you think?
@phinze, I am certainly willing to explore how that plays out. It sounds like it would work to cover all I can throw at it right now.
@phinze I am very interested in this use case as well. I'd like to provision the database along with the other infrastructure. But that database is the only piece of the infrastructure that I want to keep around, even after Terraform destroy actions. However, I'd also like Terraform to still use its variables when re-creating the infrastructure, so it can reconnect to that same database again.
@JeanMertz, maybe the …
@ketzacoatl you mean tainting all but the database? Yes, that would be cumbersome given the size of the infrastructure setup. Also, I am not sure what happens if I taint a resource on which the DB resource depends; it should probably be destroyed as well? I'm not even sure how that would work with the proposal in this issue. If the …
Yes, if you taint a resource the DB has a dependency on, TF will want to rm your db unless you take some action to prevent that. My proposal here would allow me to tell TF not to touch my db, but also not to error out either, just keep on processing the request to recreate the dependency.
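A sketch of what the proposed behavior could look like in configuration. To be clear, the `"skip"` value here is purely hypothetical syntax for the sake of illustration; Terraform's real `prevent_destroy` only accepts a literal `true`/`false` and always aborts the plan rather than skipping:

```hcl
resource "aws_instance" "db" {
  # ... (ami, instance_type, etc.)

  lifecycle {
    # Hypothetical value: instead of aborting the whole run when a
    # destroy of this resource is planned, Terraform would drop the
    # destroy/recreate of this one resource and continue applying
    # the rest of the plan.
    prevent_destroy = "skip"
  }
}
```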
@ketzacoatl yes, but then what? Usually TF makes that decision because that property of the DB instance has to be changed to keep the DB working with the latest infrastructure changes. So if you change your subnet IDs, and TF has to recreate your DB for it to work with the new subnets, but you prevent it from doing so and suppress any errors, then you basically end up with a running but disconnected database, and no errors telling you so?
I'd like to second the second behavior proposed in the initial description, and I'm with @JeanMertz. My ideal use case would be a database that can persist through destroy/create cycles, as I don't want that data destroyed.
After some thinking, I posted a thought in #1139 (comment). I'm going to play around with separating the configs and see if that's a viable solution in the interim.
Also, can someone explain this portion of the error message?
How could we change it so that it's not factored into the diff?
@nathanielks that's meant to refer to scenarios where a user has changed a parameter that's forcing the resource to be replaced. It's probably confusing to users invoking …
@Phize re: other tickets. That sounds about right to me as well!
@JeanMertz, sorry for the delayed reply.
But sometimes TF is just flat out wrong, and I spend precious minutes (or hours!) working to trick Terraform into leaving some particular resource alone when I run …
@ketzacoatl thanks. That makes sense. In fact, I too have been seeing situations where Terraform was incorrectly assuming changes, so I can relate to your problems!
This pull request is a good start as well, by adding an …
The closer the environment is to production, the harder it is to work around this issue. Ideally everything should be recreatable, even MongoDB replica sets, instance by instance, but the reality in our case is that … Using Atlas, and the otherwise great workflow of GitHub pull requests and plan/apply in Atlas, makes this even more painful.
To give you a sense of how horrible it can be in production, I have (manually) enabled termination protection on key instances, and I allow Terraform to think it'll recreate my SG/instances when applying updates, and I allow Terraform to fail in doing so.. I don't want to waste any more time futzing around here, so while it's messy, it has been the least painful method to deal with this while waiting on the issue.
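For what it's worth, the console step can also be expressed in configuration: the AWS provider's `aws_instance` resource has a real `disable_api_termination` argument that manages EC2 termination protection. A minimal sketch, with an illustrative resource name:

```hcl
resource "aws_instance" "db" {
  # ... (ami, instance_type, etc.)

  # Sets EC2 termination protection, so the AWS API itself refuses a
  # terminate call even if Terraform plans one. The apply still fails,
  # as described above -- this just moves the guard out of the console
  # and into the config.
  disable_api_termination = true
}
```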
@ketzacoatl that's what we're doing for the time being, hoping that everything else is applied by Terraform. It does not look nice on Atlas (all runs fail), but it works. Looking forward to a solution, let me know if we can help.
I'm a new user to Terraform, and I just want `prevent_destroy` on the one object I don't want destroyed. If it's a global "don't destroy anything in the entire plan", just make it a global option; what difference does it make if it's on a specific object? The fact that it's on a specific object suggests that it only affects that object, which is a surprise to a new user when the whole plan can't be destroyed. If you need a DSL change to add the behavior we are asking for, perhaps a … would work, but like I said, it makes more sense to provide a global `prevent_destroy` for the block-the-plan behavior, while `prevent_destroy` on a resource only prevents destroying that specific resource.
I'd like to +1 this. I'm also interested in easily tearing down a QA environment except for the database instance. Would also be nice if there could be pre_terminate actions (similar to an OOP destructor method), like making a snapshot of a disk and storing a reference to that snapshot in the state file for future use.
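Something close to a pre_terminate hook can be sketched with a destroy-time provisioner (`when = destroy` is a real provisioner argument in later Terraform versions). The snapshot command and resource names here are illustrative, and note that, unlike the idea above, nothing gets stored back into the state file:

```hcl
resource "aws_ebs_volume" "qa_data" {
  availability_zone = "us-east-1a"
  size              = 100

  # Runs just before the resource is destroyed, similar to the OOP
  # destructor described above. Only self.* references are allowed
  # inside destroy-time provisioners.
  provisioner "local-exec" {
    when    = destroy
    command = "aws ec2 create-snapshot --volume-id ${self.id}"
  }
}
```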
I'm surprised variables aren't allowed here, as that seems like it would help:

```hcl
resource "aws_subnet" "subnet" {
  count = length(var.azs)

  lifecycle {
    # only let people who really want to destroy a subnet do so by
    # passing -var really_destroy_my_subnets=true
    prevent_destroy = !var.really_destroy_my_subnets
  }

  # ...
}
```

```
$ terraform plan -var really_destroy_my_subnets=true

Error: Variables not allowed

  on ../../../modules/vpc_subnet/main.tf line 10, in resource "aws_subnet" "subnet":
  10: prevent_destroy = !var.really_destroy_my_subnets

Variables may not be used here.
```
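Since `prevent_destroy` only accepts a literal boolean, one workaround is Terraform's override-file mechanism (`*_override.tf` files are a real feature, merged over the base configuration argument by argument; the file and resource names here are illustrative): keep the guard on by default, and only add the override file when a destroy is genuinely intended:

```hcl
# main.tf -- the guard is always on by default.
resource "aws_subnet" "subnet" {
  # ...

  lifecycle {
    prevent_destroy = true
  }
}
```

```hcl
# destroy_override.tf -- create this file only when you really do want
# to destroy the subnets; lifecycle arguments in *_override.tf files
# are merged over the matching base resource block.
resource "aws_subnet" "subnet" {
  lifecycle {
    prevent_destroy = false
  }
}
```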
2023, and still no movement on implementing a feature like this? Or is this tracked somewhere else?
Hello, is there a solution for this? Seems people have been waiting 10 years. |
`lifecycle.prevent_destroy` is totally awesome, if only because it prevents you from shooting yourself in the foot (or worse..). But at present, if you enable this flag, and Terraform decides it wants to destroy your instance, you are stuck:
In other words, there are two logical ways to come at using this flag:
Terraform covers the first very nicely, but elects to error out completely. If this were optional, or TF otherwise did not error out hard, we could cover the second. I have also worked around this by going into the AWS console and enabling termination protection on the instances in the cluster.. this effectively allows TF to think it can continue, but has AWS tell it no. I do not like doing that, and must be extremely cautious when doing so, for fear of a mistake.
Is it a bad idea to tell TF to ignore the reason why it wants to destroy an instance of a resource?
My common use case is user_data.. and this often makes life rather annoying and cumbersome. For example, I will create a cluster of hosts using user_data, then go on to do other things, but in between some small request to open a port comes through.. user_data changed, and TF wants to destroy my cluster.. all I want to do is open a port, but now I'm left here fighting TF to keep it from rm'ing my cluster. When in this situation, I want to tell TF to ignore everything but my one change, and I generally have to find innovative ways to trick TF into leaving one resource or another alone.
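For the user_data case specifically, `ignore_changes` (a real `lifecycle` argument) covers exactly this: Terraform stops treating drift in the listed attributes as a reason to replace the resource, while still applying every other change. A minimal sketch, with illustrative resource names and values:

```hcl
resource "aws_instance" "cluster" {
  count     = 3
  user_data = file("bootstrap.sh")
  # ... (ami, instance_type, etc.)

  lifecycle {
    # A change to user_data alone no longer forces destroy/recreate;
    # other changes (e.g. opening a port on an attached SG) still apply.
    ignore_changes = [user_data]
  }
}
```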