Support KUBECONFIG environment variable #1973
This is a bad idea, because through terraform you might be managing clusters that are not in your kubeconfig, or that you don't even have access to from your local machine.
@txomon The same argument would be true for kubectl. Should kubectl not have KUBECONFIG support either? Your argument would also apply to the existing KUBE_CONFIG_PATH environment variable support, so I don't understand the point you're making. People do use the KUBECONFIG environment variable to configure their cluster context. Also, the request is not about using the KUBECONFIG environment variable exclusively, but about additionally supporting it as a way to point the provider at a kubeconfig file, which is functionality the provider already has. Please try to consider other people's perspectives first.
So the proposal would be to fall back to KUBECONFIG when no other configuration is provided? Just in case: this is already how the system works. EDIT: It seems I was hitting a debugger config, and it only worked because of that. You are right that it doesn't work. It also seems my tone was a bit off in my earlier comment; apologies for that.
Pretty much the same behaviour as KUBE_CONFIG_PATH, but also supporting the more standard KUBECONFIG environment variable for the same purpose. I've verified that neither this provider nor the helm provider works by simply setting KUBECONFIG; I have to do double bookkeeping by also setting terraform's own environment variable for the kube context to keep the terraform provider satisfied. This is surprising behaviour to a newcomer.

I think a good tool is 'boring', e.g. if kubectl acts on a specific cluster in my CLI, then the terraform provider should also touch the same cluster context (unless explicitly configured to do otherwise). Today that assumption breaks, and every person running into this issue has to discover the solution by themselves, instead of relying on the well-established convention that 'if KUBECONFIG is set, my kubernetes tool will use it'.
There's one caveat I'm seeing that would be a bit different to
So the lack of functionality comes from https://github.com/hashicorp/terraform-provider-kubernetes/blob/main/kubernetes/provider.go#L474. However, taking into account all the different environment configurations there are, I would propose enabling this functionality with the following caveats:
I'm wondering if it would make sense to fold the default config-loading rules into the default provider behaviour, or to put the default behind a flag (e.g.
I've modified the issue description to refer to the default constructor of the k8s client lib.
If no explicit config path is set, fall back to the kubernetes client default behaviour. This allows usage of KUBECONFIG environment variable and also falls back to the default kubectl config location if nothing is set. Fixes hashicorp#1973
I've opened a PR with a proposal on how to handle this. I think this keeps existing use-cases working, except for when a user already has a context configured in

So really the big question is: is the default behaviour of the provider, with no configs and no env vars (e.g. no

If that's the case, we could just drop supporting

My gut feeling tells me that if I run a kubectl command and interact with resources, then terraform should also use the same config to provision resources. So when I run terraform apply, I can immediately check with kubectl without having to tweak terraform to target the same kube context. This should be achieved without any explicit configuration except for setting
Marking this issue as stale due to inactivity. If this issue receives no comments in the next 30 days it will automatically be closed. If this issue was automatically closed and you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. This helps our maintainers find and focus on the active issues. Maintainers may also remove the stale label at their discretion. Thank you!
Commenting so the issue doesn't go stale. Don't think this is done yet.
Wanted to chime in here because this is (I think) preventing us from being able to use Terraform Cloud runs (we pay for TF Cloud and store our state there, but currently can't actually do any runs via the cloud, which hurts the value proposition).

Current State:
Goal: I'd like to be able to fully complete a cloud run in Terraform Cloud, or to turn this into a one-click GitHub Action for my team, rather than everyone having to have the local prerequisites in place.

Challenge:
Current possibilities as far as I know:
In this case, if the Kubernetes provider were able to accept the text of a kubeconfig to override the value set in the provider configuration, the path would (I think) be simple: set that in a Terraform Cloud environment variable for that workspace and be done.
Following, important for us
Description

It is a common expectation that tools that talk to the kubernetes API can be configured with the KUBECONFIG environment variable. When a tool uses the kubernetes client libraries, it can get that functionality out of the box with a default constructor. The terraform kubernetes provider, however, does not seem to be using the default constructor; it uses the KUBE_CONFIG_PATH and KUBE_CONFIG_PATHS environment variables to retrieve the kubeconfig location.

The kubernetes provider should also start supporting the KUBECONFIG variable, to meet the common expectation that setting this environment variable will allow all tools to work with the selected cluster (unless the provider is explicitly configured to use a different configuration).

References