v2.0.1: Resources cannot be created. Does kubectl reference the kube config properly? #1127
Comments
@tantweiler could you share your whole config and a trace log (https://www.terraform.io/docs/internals/debugging.html)? The error message does not seem to be related to a credential error.
Offhand, this looks related to RBAC rules in the cluster (which may have been installed by the helm chart). This command might help diagnose the permissions issues relating to the service account in the error message.
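For example, something along these lines (the namespace and service account names are placeholders for the ones in your error message):

```shell
# List everything the service account from the error message is allowed to do.
kubectl auth can-i --list --as=system:serviceaccount:NAMESPACE:SERVICE_ACCOUNT
```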
You might be able to compare that list with other users on the cluster:
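For instance, the same check run as your own user, or as another account you know works (impersonating other users requires impersonation rights):

```shell
# Permissions of the credentials kubectl is currently using
kubectl auth can-i --list

# Permissions of another user, for comparison
kubectl auth can-i --list --as=OTHER_USER
```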
And investigate related clusterroles:
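Something along these lines should show which bindings reference the service account and what the bound roles actually grant (names are placeholders):

```shell
# Find cluster-wide and namespaced bindings that mention the service account
kubectl get clusterrolebindings -o wide | grep SERVICE_ACCOUNT
kubectl get rolebindings --all-namespaces -o wide | grep SERVICE_ACCOUNT

# Inspect the rules of a role referenced by one of those bindings
kubectl describe clusterrole ROLE_NAME
```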
My guess is that the chart or Terraform config in question is responsible for creating the service account, and the [cluster] roles and rolebindings, but it might be doing so in the wrong order, or not idempotently (so you get different results on re-install vs the initial install). But we would need to see a configuration that reproduces this error. In my testing of version 2 of the providers on AKS, EKS, GKE, and minikube, I haven't seen this issue come up. Feel free to browse these working examples of building specific clusters and using them with Kubernetes and Helm providers. Giving the config a skim might give you some ideas for troubleshooting further.
I have the same error; today all the CD pipelines to my Kubernetes cluster suddenly stopped working.
Same here. In my case it looks like the provider fails to read the kubeconfig file and use the proper context.
@alon-dotan-starkware @tantweiler @elpapi42 can you share some info about your environment, how your cluster is being provisioned, and how your kubeconfig is generated, so we can try to reproduce this? AFAIK we didn't change anything about the way the kubeconfig gets loaded, just that you now have to explicitly specify the path to the file.
At second glance, this error looks like it is trying to use the default service account. I see this error when I run Terraform inside a pod that doesn't have a service account associated with it. When I assign a service account with the correct permissions, I don't get the error anymore. Are you running Terraform inside a Kubernetes pod but intending to use a config file inside the container instead of the service account token?
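If the intention is to use a config file rather than the in-cluster token, pointing the provider at the file explicitly should avoid that fallback; a minimal sketch, with a placeholder path:

```hcl
provider "kubernetes" {
  # An explicit path stops the provider from falling back to the
  # in-cluster service account token when Terraform runs inside a pod.
  config_path = "/workspace/.kube/config" # placeholder path
}
```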
Hello everyone, let me explain in a bit more detail what I'm doing here. We run a GitLab instance within a GKE cluster. I created a pipeline in GitLab that deploys cloud infrastructure on different hyperscalers (GCP and Azure). To authenticate against each hyperscaler and to be able to install any kind of infrastructure component, we use service accounts (GKE) or service principals (Azure) with administrative rights. Let's have a look at the Helm part of the pipeline:
This is my provider section in the main.tf file, where I pin the Kubernetes provider to version 1.13.3 (and Helm to v1.3.2, which has the state issue I mentioned in my first comment), since I don't run into this issue with those versions:
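For illustration only (this is not the reporter's actual file), a version-pinned provider section along those lines might look like this; the kubeconfig path is a placeholder:

```hcl
terraform {
  required_providers {
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "1.13.3"
    }
    helm = {
      source  = "hashicorp/helm"
      version = "1.3.2"
    }
  }
}

# With the 1.x providers, ~/.kube/config is still the implicit default,
# so config_path can be omitted from the kubernetes provider block.
provider "kubernetes" {}

provider "helm" {
  kubernetes {
    config_path = "~/.kube/config" # placeholder
  }
}
```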
Some folks here mentioned that there might be an issue with insufficient rights or RBAC. Again, the CLIENT_ID and CLIENT_SECRET that we use to authenticate against the Azure cloud have administrative rights! With provider version v1.13.3 everything works fine, but with v2.0.1 something has changed.
@jrhouston
helm chart:
providers.tf.json:
With provider version > 1.13.0 I get the following error:
which looks like the k8s provider can't identify the right context and cluster config from the ~/.kube/config file.
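If the kubeconfig holds several clusters, explicitly selecting the context in the provider block (instead of relying on the file's current-context) may help; a sketch with placeholder names:

```hcl
provider "kubernetes" {
  config_path    = "~/.kube/config"
  config_context = "my-cluster" # placeholder: name of the context to use
}
```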
@aareet I uploaded two logfiles, one for each Kubernetes provider version, to paste.in. Here is the output for v2.0.1, which does not work: And here is the output for v1.13.3, which does work:
@tantweiler In your example I see your helm provider sets config_path inside its kubernetes block, but your kubernetes provider block doesn't set config_path at all. Since v2.0.0 the provider no longer defaults to ~/.kube/config, so you need to set it there as well.
@jrhouston holy moly! That did the trick! I always thought that the config path only had to be defined for the Helm provider (which uses Kubernetes), and that the Kubernetes provider itself used the default, ~/.kube/config. In my pipeline I use the Kubernetes provider to create the namespaces first and then the Helm releases, and the job was already crashing at the point where it tried to create those namespaces. Apparently v2.0.1 changed things so that the provider no longer looks at the default kube config file; v1.13.3 definitely does. From now on I will define the config path for the Kubernetes provider as well!
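For reference, a sketch of what that ends up looking like with the v2 providers (the path is a placeholder):

```hcl
provider "kubernetes" {
  # No implicit default since v2.0.0, so the path must be spelled out.
  config_path = "~/.kube/config"
}

provider "helm" {
  kubernetes {
    config_path = "~/.kube/config"
  }
}
```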
@jrhouston you said you didn't change "anything about the way the kubeconfig gets loaded". But the changelog says something different: "2.0.0 (January 21, 2021) BREAKING CHANGES: Remove default of ~/.kube/config for config_path (#1052)". Honestly I don't understand that. ~/.kube/config is the standard! So why remove a standard that everyone is actually using?
@tantweiler we discuss it in the upgrade guide - one of the reasons was that it was causing confusion for folks who manage multiple clusters with Terraform |
I worded this poorly, sorry for the confusion! We changed how you configure the path to the config file in the provider block (i.e. you have to set it explicitly or use the KUBE_CONFIG_PATH environment variable), not the way the config file itself gets loaded and parsed.
I responded to another user asking about this on the Helm provider, with some backstory, here: hashicorp/terraform-provider-helm#647 (comment). We also talk about this in the Upgrade Guide here: https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/guides/v2-upgrade-guide#changes-in-v200 And we made an issue soliciting community reactions to these changes here: #909. If you feel strongly about this change, please open a new issue advocating to change it back and we can discuss it!

tl;dr: there was a set of users who would get caught out by the implicit default of ~/.kube/config and end up running Terraform against whatever cluster their current context happened to point at.

I see what's happened here, though: because you run Terraform inside Kubernetes but didn't supply a path to a config file, the loader has defaulted to using the in-cluster config. Perhaps this is an argument for adding an explicit option for in-cluster configuration.
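As an aside, pipelines that prefer not to hard-code the path in HCL can use the KUBE_CONFIG_PATH environment variable mentioned above instead; a sketch of how that might look in a CI job (the path is a placeholder, $CI_PROJECT_DIR is GitLab's checkout directory):

```shell
# Point the kubernetes provider at a kubeconfig without editing the .tf files.
export KUBE_CONFIG_PATH="$CI_PROJECT_DIR/.kube/config" # placeholder path
terraform init
terraform plan
```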
Steps to Reproduce
I use a GitLab pipeline to deploy Helm charts on my Kubernetes cluster using the Helm Terraform provider.
Since version v2.0.1 of the Kubernetes provider, the Helm provider is not able to access the kube config file properly. The error message looks like:
The reason why I use Helm provider v1.3.2 is described in this bug report:
hashicorp/terraform-provider-helm#662
Temporary solution
Revert back to version v1.13.3