Configuring provider using another resource's outputs is not possible anymore since v2.0.0 #647
Comments
Thanks for opening this, @mcanevet. Can you share the config you used to configure the provider? I suspect this might be a bug in some additional validation we added when the provider gets configured.
Same error with the following:
Works fine with the following, hence an issue with 2.0.0.
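As a hedged illustration of the pattern the issue title describes (the resource names and chart here are hypothetical, not the reporter's actual code), configuring the Helm provider from another resource's outputs on AKS typically looks something like this:

```hcl
# Hypothetical sketch: an AKS cluster and a Helm release in the same workspace,
# with the Helm provider configured from the cluster resource's outputs.
resource "azurerm_kubernetes_cluster" "example" {
  name                = "example-aks"
  location            = "westeurope"
  resource_group_name = "example-rg"
  dns_prefix          = "example"

  default_node_pool {
    name       = "default"
    node_count = 1
    vm_size    = "Standard_D2_v2"
  }

  identity {
    type = "SystemAssigned"
  }
}

provider "helm" {
  kubernetes {
    host                   = azurerm_kubernetes_cluster.example.kube_config.0.host
    client_certificate     = base64decode(azurerm_kubernetes_cluster.example.kube_config.0.client_certificate)
    client_key             = base64decode(azurerm_kubernetes_cluster.example.kube_config.0.client_key)
    cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.example.kube_config.0.cluster_ca_certificate)
  }
}

resource "helm_release" "ingress" {
  name       = "ingress-nginx"
  repository = "https://kubernetes.github.io/ingress-nginx"
  chart      = "ingress-nginx"
}
```

Values like these are not known until the cluster exists, and v2.0.0 added validation at provider-configuration time, which appears to be what configs of this shape tripped over; the patch release mentioned below moves that validation to after the provider is configured.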
I've put out a patch release that moves the validation logic so it happens after the provider is configured, which should remedy this issue. Please let me know if this error is still surfacing for you.
@jrhouston Thanks for your help. I am experiencing the same issue as identified by @mcanevet even when using the Terraform Helm provider 2.0.1 patch, which I believe represents this commit. Execution of the 'terraform plan' phase results in the following error:
To provide context, I'm spinning up a new AKS cluster as the first phase of our environment formation. The second phase of Terraform modules is focused on workload installation using the Helm provider. I'm using Terraform 0.14.3, Azure RM provider 2.41.0 and the Terraform Helm provider v2.0.1. Any insights are appreciated. I can, of course, package any logs needed. I'll start to debug back from this line as it seems to be the source of the error message.
@mprimeaux Thanks for getting back to me. Are you able to share a gist with more of your config? I have a config similar to the one posted above, broken into two modules (one for the cluster and one for the charts), but I can't get it to error. If you are set up to build the provider, you could also try commenting out these lines: terraform-provider-helm/helm/provider.go, lines 359 to 361 at commit 9de5e32.
Did you have |
Looks like it works for me with v2.0.1.
@mcanevet Thanks for reporting back. We can leave this open for now in case anyone else has issues with this.
@jrhouston After a bit of debugging, I am pleased to report the 2.0.1 patch does indeed work as intended. I no longer experience the error described above. We'll test a bit more across various formations and report back should we hit another exception case. In the near term, we are modifying our formation strategy to reflect infrastructure formation, workload deployment and workload upgrade phases to gracefully avoid the 'pain'. Sincerely appreciate everyone's help on this one.
Glad to hear that @mprimeaux! 😄 Thanks for contributing; please open issues generously if you run into any more problems.
@jrhouston Hi there, |
@FischlerA is there a reason you can't explicitly specify the kubeconfig path? We offer some context for why we made this breaking change in the Upgrade Guide.
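For reference, a minimal sketch of explicitly pointing the 2.x provider at a kubeconfig (the path and context name are hypothetical); the KUBE_CONFIG_PATH environment variable can be set instead of hard-coding the path:

```hcl
provider "helm" {
  kubernetes {
    # Hypothetical path and context; adjust to your environment.
    config_path    = "~/.kube/config"
    config_context = "my-aks-cluster"
  }
}
```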
@jrhouston we are able to specify the path if necessary. Having the ability to specify the path is nice and in some cases definitely necessary. To provide more information: we deploy our Terraform through pipelines and update the kubeconfig right before applying the changes, so we actually only have one kubeconfig and it's always the correct one. Thanks for providing the link to the Upgrade Guide; stating your reasons helped me understand the changes you made a lot better. While I personally do not agree with the reasons listed, I can understand the need for the change. For me personally, the security of not applying my configuration to the wrong cluster should not depend on the path, especially since all of my teammates work with a single kubeconfig file containing multiple cluster configurations, so it is actually of no benefit to us. BUT I don't want to discourage the changes, I just wanted to share my opinion on this topic. Thanks for your time btw :)
@FischlerA Absolutely, I can see that in your use case the problem of picking up the wrong cluster by mistake couldn't happen, because your apply runs in a completely isolated environment, which makes sense and is a good practice I like to see. We did some user research on this, where we talked to a sample of users across the Helm and Kubernetes providers, and uncertainty and confusion around the default config path came up repeatedly. If you feel strongly about this change, please feel free to open a new issue to advocate for reversing it and we can have more discussion there. Thanks for contributing @FischlerA! 😄
Mine still fails in 2.0.1. Everything is:
config:
Reverting back to 1.3.2 and it's fine.
@jurgenweber does it still succeed on v1.3.2 if you set |
@jrhouston yes, I have to put that setting back.
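For context, a hedged sketch of the pre-2.0 style being discussed here: load_config_file was a 1.x attribute that was removed in the 2.0 release (see the Upgrade Guide mentioned above), and the variables below are hypothetical stand-ins for cluster credentials.

```hcl
# Helm provider 1.x style: opt out of reading any kubeconfig file and pass
# credentials explicitly. load_config_file no longer exists in 2.x.
provider "helm" {
  kubernetes {
    load_config_file       = false
    host                   = var.cluster_host
    cluster_ca_certificate = base64decode(var.cluster_ca_certificate)
    token                  = var.cluster_token
  }
}

variable "cluster_host" {}
variable "cluster_ca_certificate" {}
variable "cluster_token" {}
```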
I think there is something wrong with aliasing. If I use,
It breaks with
but if I remove the alias attribute in the provider, it works. I'm trying to pass the provider to a module as
also |
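A hedged sketch of the aliasing pattern being described (the module path and context name are hypothetical): the aliased provider is handed to the child module through the providers argument, and resources inside the module then use it as their default helm provider.

```hcl
provider "helm" {
  alias = "aks"
  kubernetes {
    # Hypothetical; the credentials could equally come from cluster outputs.
    config_path    = "~/.kube/config"
    config_context = "my-aks-cluster"
  }
}

module "workloads" {
  # Hypothetical module containing helm_release resources.
  source = "./modules/workloads"

  providers = {
    helm = helm.aks
  }
}
```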
Hi everyone. My Terraform code breaks completely with the 2.0.1 version of the Helm provider; reverting to < 2.0.0 fixes the problem:
I am installing the latest version of this chart: https://jupyterhub.github.io/helm-chart/
@AndreaGiardini could you file a separate issue with all your info please? This seems unrelated to this particular issue.
I'm still having the same issue with the latest patch, 2.0.1.
The problem is that I can't provide any kubeconfig file, as the cluster is not created yet. Here is my conf:
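As a hypothetical illustration only (it assumes an EKS cluster, since the AWS provider comes up just below, and is not the commenter's actual config), the kubernetes block also accepts exec-based credentials, which avoids needing a kubeconfig file before the cluster exists:

```hcl
# "example" refers to an aws_eks_cluster resource defined elsewhere in the
# same configuration (hypothetical name).
provider "helm" {
  kubernetes {
    host                   = aws_eks_cluster.example.endpoint
    cluster_ca_certificate = base64decode(aws_eks_cluster.example.certificate_authority[0].data)

    exec {
      # The accepted api_version depends on your client versions.
      api_version = "client.authentication.k8s.io/v1beta1"
      command     = "aws"
      args        = ["eks", "get-token", "--cluster-name", aws_eks_cluster.example.name]
    }
  }
}
```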
@ghassencherni can you share your full config so we can see how this provider block is being used? (Also include your Terraform version and debug output.)
@aareet Thank you for your response,
Apart from this error, there is nothing interesting in the debug output:
v1.3.2 is not working for me (even after adding load_config_file). Can you share your final conf please? And your AWS provider version (not sure that it can be related)?
For everyone who still has issues with v2.0.1: did you also upgrade to Terraform v0.14.x in the meantime? I don't have the issue anymore with v2.0.1 on Terraform v0.13.x, but I do have one on Terraform v0.14.x (issue #652).
I have Terraform v0.14.x and Helm provider v2.0.1. I am using an alias in my provider and having the issue (same as @madushan1000). I appreciate any help on this. Even if I explicitly give the kubeconfig path, it still gives the error.
I am passing the provider to a module using an alias and using helm_release for an Ingress deployment. Is there any workaround to make it work? Thanks.
Since 2.0.0, I'm also encountering similar issues. It fails with:
If I downgrade the provider, the error goes away. My provider config looks like this:
Hello everyone, thanks for reporting these issues. I've been unable to reproduce this so far. If someone has a config they can share to help reproduce this issue, it would be very helpful.
I'm using rancher2, and I can usually use the Rancher URL + token to authenticate to the k8s clusters. But it doesn't work with Terraform. My config is as follows:
Then I use it in a tf file to pass it to a module like below:
The module instantiates the "helm" resource to create deployments.
I get this error
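For the Rancher case above, a hedged sketch (the endpoint and variable names are hypothetical, not the commenter's actual config) of pointing the provider straight at a cluster endpoint with a bearer token instead of a kubeconfig file:

```hcl
variable "rancher_cluster_endpoint" {
  description = "Hypothetical Rancher-proxied Kubernetes API endpoint"
  type        = string
}

variable "rancher_token" {
  description = "Hypothetical Rancher API bearer token"
  type        = string
}

provider "helm" {
  kubernetes {
    host  = var.rancher_cluster_endpoint
    token = var.rancher_token
    # Depending on the endpoint's TLS setup, cluster_ca_certificate may also
    # be required (or insecure = true, for testing only).
  }
}
```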
@redeux make sure to move your kubeconfig file before testing (mv ~/.kube/config{,.bak})
Everyone reporting this, we have a small favor to ask 🥰 We've been making changes to credentials handling in both this provider and the Kubernetes provider. At the same time, Terraform itself is moving quite fast and churning out new releases with significant changes. This really increases the potential for corner cases not being caught in our testing. So far we're not having much luck reproducing these issues on our side, which suggests they might be caused by particularities in real-life environments we may not have foreseen in our tests. So please, if you have spare cycles and are seeing issues similar to what's reported here, do us a favor and also test with a build from master of the Kubernetes provider in the same scenario/environment. We're close to a major release there and want to make sure this kind of issue is not carried over. Thanks a lot!
Here is how to reproduce this issue:
I guess the same will also apply to other clusters like GKE or EKS.
Works fine with the following
@derkoe Thanks for the very clear and helpful documentation of this issue! I was able to reproduce this particular setup and found it's present in both version 1 and version 2 of the Helm and Kubernetes providers. This issue appears to be due to Terraform's lack of support for dependency replacement, which could have helped us mark the Kubernetes/Helm resources as dependent on the cluster and therefore replace them when the cluster was re-created. Thanks to this very concrete example, I was able to write an example config and guide to help users work around this. It's currently in progress here. Please let me know if this does not solve the problem you encountered. The above guide will also help anyone who is replacing cluster credentials and passing them into the Kubernetes or Helm providers. The basic idea is that all Kubernetes and Helm resources should be placed into a Terraform module separate from the code that creates the underlying EKS/GKE/AKS cluster. Otherwise you end up passing outdated credentials to the providers, and you get generic provider configuration errors such as:
You may also see errors that were intended to shield users from the vaguer errors above, such as:
All of these errors mean that there is a problem with the configuration being passed into the Helm or Kubernetes provider. Oftentimes the credentials are just cached and expired. I'm hoping these example guides will provide some tips on avoiding this confusing error scenario in AKS, EKS, and GKE.
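A hedged sketch of the layout described above (the module paths and output names are hypothetical): the cluster is created in one child module, the provider is configured in the root from that module's outputs, and all Helm/Kubernetes resources live in a second module.

```hcl
module "cluster" {
  # Hypothetical module that creates the AKS/EKS/GKE cluster and exposes its
  # credentials as outputs.
  source = "./modules/aks-cluster"
}

provider "helm" {
  kubernetes {
    host                   = module.cluster.host
    client_certificate     = base64decode(module.cluster.client_certificate)
    client_key             = base64decode(module.cluster.client_key)
    cluster_ca_certificate = base64decode(module.cluster.cluster_ca_certificate)
  }
}

module "workloads" {
  # Hypothetical module containing the helm_release / kubernetes_* resources.
  source = "./modules/workloads"
}
```

Keeping the cluster and workload phases as separate applies, as described earlier in the thread, further reduces the chance of handing the provider stale credentials.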
Another workaround is to make the changes outside Terraform and then re-import the state. This is also necessary when you want a zero-downtime change of the Kubernetes cluster. This is a good description of how to achieve this: Zero downtime migration of Azure Kubernetes clusters managed by Terraform.
We have cut another release that removes the strict check on the kubernetes block that produces this error. Please report back if you are still seeing an error when upgrading to v2.0.2.
We are going to close this as there hasn't been any additional activity on this issue since the latest release. Please open a new issue with as much detail as you can if you are still experiencing problems with v2 of the provider.
We have a similar issue on v2.0.2:
It seems that the helm provider does not pick up the kubernetes authentication block properly.
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!
Terraform, Provider, Kubernetes and Helm Versions
Affected Resource(s)
Expected Behavior
With provider < 2.x, I used to create an AKS or EKS cluster and deploy some Helm charts in the same Terraform workspace, configuring the Helm provider with credentials coming from the Azure or AWS resources that create the Kubernetes cluster.
It looks like this is not possible anymore.
Actual Behavior
Important Factoids
It used to work with v1.x.
References
Community Note