Proposal: move app_engine sub-block to its own resource #2118
Comments
It makes sense to remove App Engine from the project resource. App Engine is a thing-in-itself. It's able to create an enormous mess with the resources available to the application, from tasks to Datastore tables. No reason to keep it, given its internal resources are pretty much unmanageable. I'd also put a big warning for users: "resource can't be deleted, terraform is going to ignore destroy. Ask your friendly Google account manager to have it implemented properly".
I've encountered a lot of issues with the current `app_engine` sub-block. How would we deal with people including multiple `app_engine` configs for a single project? That's something which should be caught before it hits the API, IMO.
Let's make sure this error is very descriptive and points to a clear doc explaining the tight coupling and how to get around it.
I agree, but unfortunately, I don't think we have any way to accomplish this currently. I imagine it would work the same as any other singleton resource that's defined against an API.
Agreed. I'm imagining something like:
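(The example originally shown here isn't preserved; the following is only a guess at the general shape of the error being imagined. The wording, resource address, and project name are made up for illustration and are not actual provider output.)

```
Error: google_app_engine_application.app: project "my-project" already has an
App Engine application. Only one App Engine application may exist per project,
and it cannot be deleted once created. See <link to docs on the app_engine
coupling> for how to work around this.
```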
Wording could be changed and tweaked, but that was my general thought for what the error would be. Also, I'm imagining an error at plan time, not at apply time, which should be a bit less disruptive?
At a minimum, could project be a required field for app_engine (instead of inferring it from the provider/context)? I think being explicit will at least give people somewhat less rope to hang themselves with. Also, I presume the following will work (modulo race conditions), but want to confirm:
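(The configuration originally attached here isn't preserved; below is a minimal sketch of the pattern being asked about, with placeholder names. The `google_app_engine_application` resource and its `location_id` field are assumed from the later PR rather than confirmed at this point in the discussion.)

```hcl
resource "google_project" "my_project" {
  name       = "My Project"
  project_id = "my-project-id"
  org_id     = "1234567"
}

resource "google_app_engine_application" "app" {
  # Explicit project rather than inheriting it from the provider block; the
  # interpolation also makes Terraform create the project before the app.
  project     = "${google_project.my_project.project_id}"
  location_id = "us-central"
}
```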
Yes, definitely should be a plan-time error.
That is my understanding. I can't promise that without actually implementing it, but that's the idea. At the very least, I know that if there's an error creating one, removing it from your config won't create an error, because it never wound up in state. I don't know if you'll actually get an error trying to create the second or not, unfortunately--I'll have to see what the API does.
I think that would require a bit more conversation about its impact on module authors/user experience, but I wouldn't say it's off the table entirely.
Deprecate the `app_engine` sub-block of `google_project`, and create a `google_app_engine_application` resource instead. Also, add some tests for its behaviour, as well as some documentation for it.

Note that this is largely an implementation of the ideas discussed in #2118, except we're not using CustomizeDiff to reject deletions without our special flag set, because CustomizeDiff apparently doesn't run on Delete. Who knew? This leaves us rejecting the deletion at apply time, which is less than ideal, but the only other option I see is to silently not delete the resource, and that's... not ideal, either.

This also stops the `app_engine` sub-block on `google_project` from forcing new when it's removed, and sets it to computed, so users can safely move from using the sub-block to using the resource without state surgery or deleting their entire project. This does mean it's impossible to delete an App Engine application from a sub-block now, but seeing as that was the same situation before, and we just papered over it by making the project recreate itself in that situation, and people Were Not Fans of that, I'm considering that an acceptable casualty.
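(Illustrative only, not taken from the PR itself: roughly what the migration described above looks like in config. The values are placeholders, and `location_id` is assumed to be the shared field between the sub-block and the resource.)

```hcl
# Deprecated form: app_engine as a sub-block of google_project.
resource "google_project" "my_project" {
  name       = "My Project"
  project_id = "my-project-id"
  org_id     = "1234567"

  app_engine {
    location_id = "us-central"
  }
}
```

```hcl
# Replacement form: drop the app_engine sub-block from google_project and
# manage the application with the standalone resource instead.
resource "google_app_engine_application" "app" {
  project     = "my-project-id"
  location_id = "us-central"
}
```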
Right now the current behaviour of wanting to delete an entire project (sometimes silently, without warning) just because App Engine was enabled via the Console (and the API was turned on as a result) is, well, impractical at best. I've just had a production, Compute Engine-focused GCP project destroyed along with 50 servers by accident: a targeted apply of an unrelated DNS Made Easy provider resource, using 0.10.7 and the latest provider, went wild, did things outside the scope of the `-target=` specified, and destroyed my project without warning. Thankfully this is easily recovered (for the most part) in GCP, but it could have been a lot worse. I'm now faced with not being able to upgrade the google provider in many projects because I don't know whether people have enabled App Engine via the Console or not. Just my $0.02 worth.
And yes, lesson learned: I'm now on 0.11.8, but I shouldn't have to be on the bleeding edge to avoid top-level resources as important as projects being deleted without warning.
@allandrick Really sorry for that user experience. That shouldn't have happened, and I'm interested in understanding how it did happen so we can prevent future errors.

I worked on a PR for this: #2147. Unfortunately, in testing, we don't actually have the ability to override the diff you see when deleting a resource, so we can't force it to fail at plan time. We could force it to fail at apply time if a field isn't set, but my colleagues who maintain other providers have warned me that that has a bunch of hidden, unintuitive problems (like needing to apply before you can delete, and so on). As a result, #2147 just has a log and a noop for delete on the standalone resource, and some documentation right up front on the page to call out the limitation. For the moment, it's the best we can do. I hear 0.12 is bringing us some new capabilities around this that will let us surface a warning to the end user when they run the command, so I'm satisfied there's a future upgrade path here.

Thank you all for your feedback. If anyone strenuously objects to this path, I'm open to hearing alternatives, but our window of opportunity is closing rapidly. I'm going to close this issue out at the end of the workday today to signal that. I apologise for the short notice; the modification of the plan due to implementation limitations was unexpected, and we're in a time-sensitive period.
@paddycarver so the root cause of my experience was, as it turns out, as follows:
So, ultimately, the thing I missed between two apply executions was another plan, which would have revealed the impending doom. That, together with not using 0.11.x (where terraform apply would have prompted for a yes/no confirmation), was the eye of the storm. So the only likely bug in this case was the fact that the targeted apply of the dme provider didn't limit itself to the dme resource. Reading the https://www.terraform.io/docs/commands/plan.html#resource-targeting docs, `-target` is not recommended for routine operations. Mea culpa on two counts, I guess! We've updated to 0.11.8 ;-)
How would I migrate? I have a project configured with the `app_engine` sub-block and receive a deprecation warning. If I remove the `app_engine` sub-block, I get an error from `terraform apply`.
@ensonic As per #2175, you will need to import your app with `terraform import`.

@paddycarver This migration story still feels pretty poor - forcing manual state surgery for a provider upgrade is suboptimal. Isn't it possible to auto-import App Engine if we try to create it and it already exists (like how APIs are handled)?
It's possible, I'm just not sure it's wise. (I also don't really consider this "manual state surgery", because we usually reserve that term for editing the JSON state file directly or using the `terraform state` subcommands.)
I am now putting this into our dev-scripts:
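(The actual script isn't preserved; below is a guess at the kind of wrapper being described: import the application into state only if Terraform isn't already tracking it. The resource address and project ID are placeholders, and the import ID format should be checked against the resource docs.)

```sh
#!/usr/bin/env bash
set -euo pipefail

PROJECT_ID="my-project-id"  # placeholder

# Import the App Engine application only if it isn't already in state.
if ! terraform state list | grep -q '^google_app_engine_application\.app$'; then
  # Import ID is assumed to be the project ID; verify against the docs.
  terraform import google_app_engine_application.app "${PROJECT_ID}"
fi
```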
It is a bit ugly.
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!
This is a proposal issue to gather feedback and discussion about our contentious `app_engine` design. Currently, App Engine applications are created by adding an `app_engine` sub-block to the `google_project` resource. This has a few benefits, but it comes with drawbacks, too.

What we've heard from feedback is that people don't feel the pros outweigh the cons. As we look into version 2.0.0 of the provider, and have the opportunity to make some changes that we otherwise couldn't, we're reevaluating this decision and seeking input.

My current thought for a design is a `google_app_engine_app` resource, with the same fields as the current block in `google_project`. However, because there's no API to delete App Engine applications, any plan that would involve deleting the app--including `terraform destroy`--would throw an error. To actually delete the app, you'd need to set a special field--possibly called `ack_noop_destroy`, or some other name we'll decide on--on the `google_app_engine_app` resource, acknowledging that you understand that just because Terraform says it's deleted does not mean it's actually deleted. This would allow us to get around the problem of Terraform doing something other than what it says in `plan`, which is an important property of Terraform.

Does this design work for people? Do people have any feedback on this? We'd love any thoughts and experience reports that might influence the design.
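(A sketch of how the proposal above might look in configuration; `google_app_engine_app` and `ack_noop_destroy` are the names floated in the proposal, `location_id` is my assumption about the shared fields, and the values are placeholders. This is not a shipped schema.)

```hcl
resource "google_app_engine_app" "app" {
  project     = "my-project-id"  # placeholder
  location_id = "us-central"

  # Proposed acknowledgement flag: without it, any plan that would destroy
  # this resource errors out, since the API offers no way to delete an
  # App Engine application.
  ack_noop_destroy = true
}
```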
Related: #1561 #1973 #1728 #638