Workspace resource is stuck with external-create-pending annotation if provider-terraform pod is deleted during Create #340
Comments
Unfortunately that won't cover the key case this logic was added for, which is when Observe might not be able to know whether the resource exists. Some resources have non-deterministic identifiers - e.g. when you call Create, the external system generates an identifier and returns it to you. If we have a "partial create" - we successfully created a resource in the external system but didn't record its identifier - it will appear on the next reconcile that the resource was never created, and Observe will move on to calling Create again, creating duplicates. You can read a little more about this in crossplane/crossplane#3037 (comment). So this is kind of "working as intended" - we intentionally take the conservative route and ask that a human check and remove the offending annotation when we get into this state. I can see that this isn't great for provider-terraform, which is at higher risk of getting into this state due to its very long-running Create logic. I'm open to ideas to address this - maybe we make it possible to opt out of this functionality?
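For readers following the thread, here is a minimal, self-contained sketch of the kind of guard being described. The helper name is illustrative (crossplane-runtime's real implementation lives in its meta and managed packages), but the annotation keys are the ones this issue is about:

```go
package main

import "time"

// Annotation keys discussed in this issue.
const (
	annCreatePending   = "crossplane.io/external-create-pending"
	annCreateSucceeded = "crossplane.io/external-create-succeeded"
	annCreateFailed    = "crossplane.io/external-create-failed"
)

// createIncomplete reports whether a create was started (pending is set)
// but neither a success nor a failure was recorded at or after that
// time - i.e. the provider may have died mid-create.
func createIncomplete(ann map[string]string) bool {
	pending, err := time.Parse(time.RFC3339, ann[annCreatePending])
	if err != nil {
		return false // no (parseable) pending timestamp: nothing in flight
	}
	succeeded, sErr := time.Parse(time.RFC3339, ann[annCreateSucceeded])
	failed, fErr := time.Parse(time.RFC3339, ann[annCreateFailed])
	return (sErr != nil || succeeded.Before(pending)) &&
		(fErr != nil || failed.Before(pending))
}
```

Today this check runs at the very top of the reconcile, before Observe, so once the pending annotation is orphaned the resource stays blocked until a human removes it.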
Thanks Nic, that makes sense. What about if we allow the Observe to happen, which would give provider-terraform a chance to remove or update the external-create-pending annotation, and then do the same check that is there today, just after the Observe instead of before? Otherwise I think we would need to add some more complex "opt out" logic so the Reconciler would "know" not to enforce that check. If we let the provider manage the annotations (when it wants to) then the Reconciler can continue as it does today.
We see this behaviour quite often as well with the grafana provider, even without pod restarts. In our observation it often happens when bursts of resources are synchronized (created), and we see throttling warnings from the API server (managed AKS). We have about 500 managed resources for that provider.
@negz @bobh66 With upbound/provider-terraform#231 landed, this is now my team's biggest pain point when using provider-terraform. How do you feel about adding a way to opt out of this check?
@toastwaffle I'm not thrilled about bypassing the annotation, but I agree we have to do something if this is going to be a common problem. How often are you seeing this? What is causing the hung Workspace? Is the provider pod restarting? Is the context timing out? I wonder if provider-terraform should be exempt from this annotation given that it runs terraform apply, which can take many minutes.
It definitely happens when the provider pod is restarted while an apply is running, but I think timeouts might also have the same result, given that an error is still returned on timeout.

In our specific case, our CI/CD pipeline caused the pod to be restarted frequently - we're using a sidecar to provide credentials for our internal module registry, and every time CD ran it rolled the pod. We disabled CI/CD for provider-terraform, and now the pod is mostly only restarting when we want it to (yay!). That means we are seeing this much less often, but it does make releasing changes and upgrading the provider a little more work than it could be.

IIRC, the context used by crossplane-runtime to update the annotations is cancelled when the pod receives the signal to die, which means it can't apply the external-create-success annotation even with the new graceful shutdown from your PR. We'd have to plumb a separate context through (or use one that isn't derived from the cancelled one).

One alternative model we could use is disabling the check on a per-resource level, rather than globally for the controller. That could be done with an annotation, or an explicit field.
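To make the "separate context" idea concrete, a minimal sketch assuming Go 1.21+ (which added context.WithoutCancel) and a controller-runtime client; the helper name is hypothetical:

```go
package main

import (
	"context"
	"time"

	"sigs.k8s.io/controller-runtime/pkg/client"
)

// persistCriticalAnnotations is a hypothetical helper showing how the
// annotation update could survive shutdown: WithoutCancel keeps the
// values of ctx but ignores its cancellation, so a SIGTERM-driven
// cancel no longer aborts the update. A fresh timeout still bounds it.
func persistCriticalAnnotations(ctx context.Context, kube client.Client, obj client.Object) error {
	detached, cancel := context.WithTimeout(context.WithoutCancel(ctx), 10*time.Second)
	defer cancel()
	return kube.Update(detached, obj)
}
```

The trade-off is that the update now races the pod's termination grace period rather than being cleanly cancelled, which is presumably why it would need to be plumbed through deliberately.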
It occurs to me that upbound/provider-terraform#189 could also provide a solution for this. If long-running applies/destroys were performed as separate Jobs, the "external create" which the annotation refers to would actually be creating the Job, which should be ~instantaneous and thus much less likely to hit this problem if the provider restarts. The provider can keep track of the Job in a status field to avoid recreating it.

I think using Jobs in this way would come at the cost of needing to schedule the Job and re-initialise the workspace (with no access to a plugin cache), but for long-running applies that cost should be negligible in comparison, while also providing scalability benefits. This would need to be controlled by a parameter.

@bobh66 what do you think of this? If you agree with the idea in principle, I can hopefully persuade my manager to give me some time to develop a proof of concept.
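A rough sketch of that shape - the helper, image, namespace, and naming scheme below are all illustrative, not provider-terraform's actual API:

```go
package main

import (
	"context"
	"fmt"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// runApplyJob creates a Job that performs the long-running terraform
// apply, so the provider's Create only has to persist the Job - a fast
// operation that is easy to record before any shutdown.
func runApplyJob(ctx context.Context, kube client.Client, wsName string) (string, error) {
	jobName := fmt.Sprintf("tf-apply-%s", wsName)
	job := &batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{Name: jobName, Namespace: "crossplane-system"},
		Spec: batchv1.JobSpec{
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					RestartPolicy: corev1.RestartPolicyNever,
					Containers: []corev1.Container{{
						Name:    "apply",
						Image:   "hashicorp/terraform:1.5", // illustrative
						Command: []string{"terraform", "apply", "-auto-approve"},
					}},
				},
			},
		},
	}
	if err := kube.Create(ctx, job); err != nil {
		return "", err
	}
	// The caller records jobName in the Workspace's status so later
	// reconciles find the existing Job instead of recreating it.
	return jobName, nil
}
```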
We are hitting the same problem in provider-helm and even provider-k8s. Basically, resources are stuck if the create request is interrupted somehow, e.g. by a pod restart or the API server misbehaving.
This doesn't hold for all types of resources, though. It makes sense for a VPC, but not for a Bucket. provider-helm, for example, always knows the name of the release (as it passes it to the client), so there is no risk of leaking releases for a given MR.
I believe opting out for individual managed resources sounds like a decent solution.
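A sketch of what such a per-resource opt-out could look like - the annotation key below is invented for illustration and does not exist in crossplane-runtime:

```go
package main

// Hypothetical opt-out annotation; not an existing crossplane-runtime key.
const annSkipPendingCheck = "example.crossplane.io/skip-create-pending-check"

// shouldEnforcePendingCheck lets resources with deterministic external
// names (helm releases, kubernetes objects, user-named buckets) bypass
// the conservative guard, since a retried Create cannot leak a
// duplicate for them.
func shouldEnforcePendingCheck(ann map[string]string) bool {
	return ann[annSkipPendingCheck] != "true"
}
```

The reconciler would then only block when both this check and the incomplete-create check (sketched earlier in the thread) return true.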
We're having this problem mainly with provider-kubernetes. Given that kubernetes objects have deterministic names, the usefulness of the external-create-pending check seems limited for this provider.
What happened?
The provider-terraform Create() implementation runs terraform apply, which can take "a while" - certainly tens of seconds and often several minutes.
If the provider pod is terminated while terraform apply is running, the external-create-pending annotation can get stuck on the Workspace resource and the managed Reconciler will not process it without manual intervention to remove the annotation.
This is most easily demonstrated when provider-terraform has code that detects the context cancellation and returns failure from Create():
crossplane-contrib/provider-terraform#76
This should cause the Reconciler to set the ExternalCreateFailed annotation, which would allow the resource to be retried on the next cycle. However, the update of the resource to set that annotation fails because the context has already been cancelled, and so the resource is stuck with the pending annotation.
The logs indicate that the critical annotations cannot be updated because the context has been canceled.
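To illustrate why that update fails - a minimal, runnable sketch, not the provider's actual code - any operation that honours an already-cancelled context fails before doing real work:

```go
package main

import (
	"context"
	"fmt"
)

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	cancel() // simulates the pod's root context being torn down on SIGTERM

	// Any context-aware call (such as a client Update) now fails
	// immediately, which is why the external-create-failed annotation
	// never lands on the resource.
	if err := ctx.Err(); err != nil {
		fmt.Println("update skipped:", err) // prints: update skipped: context canceled
	}
}
```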
How can we reproduce it?
1. Create a terraform Workspace object that uses the kubernetes backend, with a time_sleep resource that has a create_duration of 5 minutes.
2. After applying the Workspace object, delete the provider-terraform pod within 5 minutes.
3. Observe the above log messages, and that the resource cannot be managed by the Reconciler because of the orphaned external-create-pending annotation.
What environment did it happen in?
Crossplane version: 1.8.1
Question - should the Create() process be considered idempotent? Should it always be possible to rerun Create(), even when the external-create-pending annotation is set?
The existing implementation of the external-create annotations has the "incomplete" check immediately at the start of reconciliation. I'm wondering if it would make sense to allow the Observe to happen and only check for incomplete external creation if the Observe indicates that the resource exists? If the resource does not exist then maybe Create should be responsible for creating it, even if the pending annotation is set?
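A sketch of that reordering, reusing the createIncomplete helper from the sketch near the top of this page; the types are simplified stand-ins for crossplane-runtime's ExternalClient and ExternalObservation, not the real reconciler:

```go
package main

import (
	"context"
	"errors"
)

// Simplified stand-ins for crossplane-runtime's managed types.
type Observation struct{ ResourceExists bool }

type ExternalClient interface {
	Observe(ctx context.Context, ann map[string]string) (Observation, error)
	Create(ctx context.Context, ann map[string]string) error
}

// reconcileOnce sketches the proposed order: Observe first, then apply
// the incomplete-create guard only if the external resource exists.
func reconcileOnce(ctx context.Context, c ExternalClient, ann map[string]string) error {
	obs, err := c.Observe(ctx, ann)
	if err != nil {
		return err
	}
	if obs.ResourceExists && createIncomplete(ann) {
		// An earlier create may have half-finished; stay conservative
		// and wait for a human to clear the annotation.
		return errors.New("create incomplete; manual intervention required")
	}
	if !obs.ResourceExists {
		// Under this proposal, Create is retried even though the
		// pending annotation may still be set.
		return c.Create(ctx, ann)
	}
	return nil // resource exists and its create completed; carry on as today
}
```

Note the trade-off described at the top of this page still applies: for resources with non-deterministic identifiers, "does not exist" according to Observe can mean "was created but never recorded", so this reordering would presumably need to be opt-in.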
@ytsarev @negz FYI