data sources that generate temporary credentials should not persist values in plan tfstate #24886
Comments
This is biting us in the butt (also AWS EKS): the Kubernetes auth token is generated at the very start of the planning process, so when planning takes more than 15 minutes, the apply fails.
Is there any way to influence this 15-minute timeout? (Perhaps from the AWS EKS side?)
There isn't. However, what you can do (and what we have done to work around this) is to wrap all Kubernetes operations in Helm charts, because the Helm provider can refresh credentials at execute time, so the plan can be applied at any point (assuming no state changes occur in the meantime that would invalidate it). Thanks to @bear454 for demonstrating this workaround in the SUSE/cap-terraform PR that GitHub has helpfully linked above.
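As a rough illustration of that workaround (resource names and the chart path are hypothetical, not from the linked PR), the Kubernetes objects move into a chart and are deployed through the Helm provider, whose credentials are read when the release is applied rather than when the plan is created:

```hcl
# Sketch of the Helm-wrapping workaround described above.
# Credentials come from kubeconfig at apply time, so no short-lived
# token is persisted inside the saved plan.
provider "helm" {
  kubernetes {
    config_path = "~/.kube/config" # read when the release is applied
  }
}

resource "helm_release" "app" {
  name  = "app"          # hypothetical release name
  chart = "./charts/app" # hypothetical local chart wrapping the k8s manifests
}
```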
That kind of defeats the purpose of using the Kubernetes Terraform provider to track resources.
Well, if it's broken, sometimes you have to work around it. |
Is anyone working on this?
Anyone addressing this? It's a pain for AWS EKS Kubernetes clusters.
/bump |
@danieldreier you labeled it as an enhancement but it sounds more like a bug. It's impossible at the moment to apply a saved plan due to this issue. Can you relabel it as a bug? |
Hello, we create plans through Atlantis when a Pull Request is opened; then, after the Pull Request is reviewed, the plan can be applied with a comment. There is an issue for this in the Atlantis project as well. It would be really nice to have this fixed. As an idea for how this could be solved: Terraform could re-create the plan whenever a saved plan is applied.
Is there a known workaround here? This is a significant problem for some CI workflows. |
+1 on this issue. This makes it impossible to automate workflows with auth tokens. |
For the kubernetes provider I solved it with the help of exec plugins: https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs#exec-plugins Although it depends on some binary being available, it should be relatively easy to achieve in most CI/CD workflows. For example, on GKE the exec plugin can invoke the GKE auth helper at apply time.
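A minimal sketch of that exec-plugin setup for GKE (assuming the `gke-gcloud-auth-plugin` binary is on the PATH; cluster name and location are hypothetical):

```hcl
data "google_container_cluster" "example" {
  name     = "example-cluster"
  location = "us-central1"
}

provider "kubernetes" {
  host                   = "https://${data.google_container_cluster.example.endpoint}"
  cluster_ca_certificate = base64decode(data.google_container_cluster.example.master_auth[0].cluster_ca_certificate)

  # The token is produced by this command at apply time, so nothing
  # short-lived is persisted in the saved plan.
  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "gke-gcloud-auth-plugin"
  }
}
```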
Terraform Version
Terraform Configuration Files
Debug Output
From terraform apply:
From EKS authorization log:
Note the very stale X-Amz-Date token (2020-05-06 01:09:00Z) relative to the current time (2020-05-06T05:17:59Z). The token date corresponds to the time at which the terraform plan was run, several hours earlier.
Crash Output
Expected Behavior
data.aws_eks_cluster_auth.example.token should be refreshed on apply. Authentication tokens should not be cached as part of the plan.
Actual Behavior
data.aws_eks_cluster_auth.example.token is cached in the plan and reused later on apply, but the token is only valid for 15 minutes.
Steps to Reproduce
1. Run terraform plan and save the plan output
2. Run terraform apply on the existing cached plan
Additional Context
EKS clusters get their authentication tokens via a backdoor mechanism with AWS IAM. The current AWS IAM role is used to communicate with a service in the EKS cluster to obtain a K8S token that can be used for only 15 minutes to access the cluster. The aws_eks_cluster_auth data source is the way to get this token in terraform. It should not be cached as part of the plan, but regenerated on every terraform call.
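The same exec-plugin approach can sidestep the problem on EKS: instead of wiring data.aws_eks_cluster_auth into the provider, the provider shells out to the AWS CLI when the plan is applied. A sketch (cluster name is hypothetical; assumes the aws CLI is available):

```hcl
data "aws_eks_cluster" "example" {
  name = "example-cluster"
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.example.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.example.certificate_authority[0].data)

  # `aws eks get-token` runs at apply time, so the 15-minute token is
  # minted when it is actually needed rather than cached in the plan.
  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    args        = ["eks", "get-token", "--cluster-name", "example-cluster"]
  }
}
```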
References
I filed the issue below on the terraform-provider-aws project, but they said this is actually a shortcoming of Terraform Core.