Plan stalls due to failed tiller during helm_resource state refresh #315
Comments
Able to reproduce this. I'll be working on a fix in the coming days.
I am seeing this issue. Any updates?
I'm seeing this same issue, however Tiller never failed and is still healthy. The plan is stuck on the state refresh. In Tiller, I see:

```
[storage] 2019/10/31 17:14:19 getting last revision of "heroic-seahorse"
[storage] 2019/10/31 17:14:19 getting release history for "heroic-seahorse"
[storage] 2019/10/31 17:14:20 getting last revision of "heroic-seahorse"
[storage] 2019/10/31 17:14:20 getting release history for "heroic-seahorse"
```

Cancelling and running the plan again seems to fix it.
+1, constantly stuck with the same behavior.
Hi, same issue here.
btw, since my resource is recyclable, I worked around the problem by re-creating it.
I found a workaround for this. You need to delete the failed Tiller deployment. After this, the plan runs again.
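A minimal sketch of that workaround, assuming Tiller lives in `kube-system` under the default `tiller-deploy` name (the names are assumptions; adjust for your cluster):

```sh
# Remove the failed Tiller deployment (Helm v2 defaults; names are assumptions)
kubectl delete deployment tiller-deploy -n kube-system

# Re-initialize Tiller so the provider can reach a healthy pod again
helm init --upgrade

# The plan should now get past the state refresh
terraform plan
```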
In my case I had made a mistake and the deployment never started the Tiller pod because the service account could not be found. Deleting the deployment fixed my issue. Can't wait for Helm 3.
I'm experiencing a similar issue. I've created an EKS cluster with Terraform and deployed Tiller and some helm_resources with the helm provider. After that I deleted my EKS cluster. Of course, all my pods were deleted with the cluster, and now I cannot perform any further Terraform commands.
Just to let you guys know...
After removing the Tiller pod manually, Terraform was unable to refresh the state and got stuck. Removing the missing resource from the state file resolved the issue.
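A hedged sketch of that state cleanup, using a hypothetical resource address `helm_release.consul`; substitute whatever `terraform state list` reports:

```sh
# See which resources Terraform is tracking
terraform state list

# Drop the release that points at the now-missing Tiller/cluster
# (helm_release.consul is a hypothetical address)
terraform state rm helm_release.consul

# The refresh no longer blocks on the dead Tiller pod
terraform plan
```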
The issue is intermittent; I am also facing the same.
Closing this issue since it references a version based on Helm 2. If this is still valid against the master branch, please reopen it. Thanks.
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!
Terraform Version
Terraform v0.12.6
Affected Resource(s)
Terraform Configuration Files
main.tf
module file (located in `./helm-consul`)
Debug Output
https://gist.github.com/joatmon08/c77de83d65709c06e5313331f3aa8c4a
Expected Behavior
The Tiller pod should be re-initialized, or the error message "could not find a ready tiller pod" should be returned.
Actual Behavior
The plan stalls waiting for an available Tiller pod during the helm_resource state refresh.
Steps to Reproduce
1. Create a Kubernetes cluster.
2. Run `terraform init` with `install_tiller = true`. Tiller initializes correctly in the cluster.
3. Successfully deploy a `helm_resource` using `terraform apply`. This gets logged into Terraform state.
4. Scale the Tiller deployment down using `kubectl scale deployment/tiller-deploy -n kube-system --replicas=0` (to mimic a failed Tiller).
5. Run `terraform plan`. It waits for an available Tiller pod and times out (see the command sequence sketched below).
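The same steps as a shell sequence (a sketch; it assumes the provider block sets `install_tiller = true` and that the Helm v2 default deployment name is used):

```sh
# Initialize and apply; with install_tiller = true the provider deploys Tiller
terraform init
terraform apply    # deploys the helm_resource and records it in state

# Mimic a failed Tiller by scaling its deployment to zero
kubectl scale deployment/tiller-deploy -n kube-system --replicas=0

# The next plan waits for an available Tiller pod and eventually times out
terraform plan
```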
Initially, we discovered this when we created a managed Kubernetes cluster and updated some configuration. This caused the Kubernetes cluster to destroy and re-create itself. When the cluster re-initialized, Tiller was stuck in a failed state. Running `helm init` again re-deployed the Tiller pod and allowed the plan to complete.

While this does not apply to Helm v3, any cluster running Helm v2 that is re-created could end up with a failed Tiller pod and cause the plan to stall. Initially discussed this with @alexsomesan; posting here to collect input.
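For reference, re-deploying Tiller on Helm v2 looks roughly like this; the `--service-account` value is an assumption for RBAC-enabled clusters:

```sh
# Re-install/upgrade Tiller in the cluster (Helm v2 only)
helm init --service-account tiller --upgrade

# Wait for the new Tiller pod to be ready, then the plan can complete
kubectl rollout status deployment/tiller-deploy -n kube-system
terraform plan
```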
References
N/A