Support KUBECONFIG environment variable #1973

Open
reegnz opened this issue Jan 26, 2023 · 13 comments


reegnz commented Jan 26, 2023

Description

It is a common expectation that tools that talk to the Kubernetes API can be configured with the KUBECONFIG environment variable. When a tool uses the Kubernetes client libraries, it gets that functionality out of the box with a default constructor. The Terraform Kubernetes provider, however, does not seem to use the default constructor; instead it uses the KUBE_CONFIG_PATH and KUBE_CONFIG_PATHS environment variables to locate the kubeconfig.

The Kubernetes provider should also support the KUBECONFIG variable, to meet the common expectation that setting this environment variable makes all tools work with the selected cluster (unless the provider is explicitly configured to use a different configuration).
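For illustration, a minimal sketch of what the default constructor gives out of the box (using client-go's clientcmd package; error handling trimmed):

```go
package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// NewDefaultClientConfigLoadingRules honours the KUBECONFIG env var
	// (splitting it on the OS path-list separator) and falls back to
	// ~/.kube/config when the variable is unset.
	rules := clientcmd.NewDefaultClientConfigLoadingRules()
	clientConfig := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
		rules, &clientcmd.ConfigOverrides{})

	restConfig, err := clientConfig.ClientConfig()
	if err != nil {
		panic(err)
	}
	fmt.Println("API server:", restConfig.Host)
}
```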

References

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

txomon commented Jan 27, 2023

This is a bad idea, because through Terraform you might be managing clusters that are not in your kubeconfig, or that you don't even have direct access to.


reegnz commented Jan 27, 2023

@txomon The same argument would be true for kubectl. Should kubectl not support KUBECONFIG either? Your argument would apply equally to the KUBE_CONFIG_PATH environment variable. I don't understand the point you're making; people do use the environment variable to configure the cluster context.
The provider already supports kubeconfig files, it's just missing support for recognizing the KUBECONFIG environment variable. Are you saying that support for kubeconfig files should be removed entirely, simply because you think it's a 'bad idea' and don't have a use-case for it?

Also, the request is not about using the KUBECONFIG environment variable exclusively, but about additionally supporting it as a way to point the provider at a kubeconfig file, which is something it already supports.

Please try to see this from other people's perspective first.


txomon commented Jan 27, 2023

So the proposal would be to fall back to KUBECONFIG if no other configuration variable is set?

Just in case, this is already how the system works.

EDIT: Seems like I was hitting a debugger config and it was working because of it. You are right on it not working.

Also, it seems like my tone was a bit off in my comment, apologies for that.


reegnz commented Jan 27, 2023

Pretty much the same behavior as KUBE_CONFIG_PATH, but also supporting the more standard KUBECONFIG environment variable for the same purpose.

I've verified that neither this provider nor the helm provider works by simply setting KUBECONFIG; I have to do double bookkeeping by also setting Terraform's own env var for the kube context to keep the provider satisfied. This is surprising behaviour for a newcomer. I think a good tool is 'boring': if kubectl acts on a specific cluster on my CLI, then the Terraform provider should act on the same cluster context (unless explicitly configured to do otherwise).

Today that assumption breaks, and every person running into this issue has to discover the solution on their own, instead of relying on the well-established convention that 'if KUBECONFIG is set, my Kubernetes tool will use it'.


reegnz commented Jan 27, 2023

There's one caveat I'm seeing that would be a bit different from KUBE_CONFIG_PATH:
the KUBECONFIG variable can hold multiple kubeconfig paths separated by :, while the Terraform provider uses a separate KUBE_CONFIG_PATHS environment variable for multiple kubeconfig files. I think this provider also collapses multiple files into a single effective config differently, so supporting multiple configs in KUBECONFIG might be difficult to achieve: it would need a brand-new code path to match how the KUBECONFIG env var is supposed to be consumed, which might not be compatible with the existing KUBE_CONFIG_PATHS mechanism.
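For reference, a minimal sketch of how client-go itself consumes a multi-path KUBECONFIG value (under kubectl's merge rules, the first file to set a value wins; the manual split here is only for illustration, since the default constructor does it internally):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// KUBECONFIG may hold several files, e.g. "a.yaml:b.yaml" on Unix.
	// client-go splits on the OS path-list separator (':' on Unix,
	// ';' on Windows) and merges the files in order.
	paths := strings.Split(os.Getenv("KUBECONFIG"), string(filepath.ListSeparator))
	rules := &clientcmd.ClientConfigLoadingRules{Precedence: paths}

	// Load applies the merge rules and yields one effective config.
	merged, err := rules.Load()
	if err != nil {
		panic(err)
	}
	fmt.Println("contexts after merging:", len(merged.Contexts))
}
```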


txomon commented Jan 27, 2023

So the lack of functionality comes from https://github.com/hashicorp/terraform-provider-kubernetes/blob/main/kubernetes/provider.go#L474, where clientcmd.ClientConfigLoadingRules{} is constructed manually instead of using the constructor from that very same module, clientcmd.NewDefaultClientConfigLoadingRules().

However, taking into account all the different environment configurations out there, I would propose enabling this functionality with the following caveats (see the sketch after this list):

  1. The client should only use the default constructor if the only overrides supplied are namespace and context.
  2. No other options are set.
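For reference, a sketch of the difference between the two constructions (client-go's clientcmd package; the provider's surrounding plumbing is omitted):

```go
package sketch

import "k8s.io/client-go/tools/clientcmd"

// What the provider does today: an empty rule set with an explicit
// Precedence list, so the KUBECONFIG env var is never consulted.
func currentLoadingRules(paths []string) *clientcmd.ClientConfigLoadingRules {
	return &clientcmd.ClientConfigLoadingRules{Precedence: paths}
}

// What the default constructor does: respect KUBECONFIG if set, and
// fall back to ~/.kube/config otherwise.
func defaultLoadingRules() *clientcmd.ClientConfigLoadingRules {
	return clientcmd.NewDefaultClientConfigLoadingRules()
}
```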


reegnz commented Jan 27, 2023

I'm wondering if it would make sense to fold the default config loading rules into the default provider behaviour, or to put the default behind a flag (e.g. use_client_defaults = true). It sounds a bit weird to have to flip a flag to get a default. :) But the alternative risks breaking backward compatibility.


reegnz commented Jan 27, 2023

I've modified the issue description to refer to the default constructor of the k8s client lib.

reegnz added a commit to reegnz/terraform-provider-kubernetes that referenced this issue Jan 27, 2023
If no explicit config path is set, fall back to the kubernetes client
default behaviour.

This allows usage of KUBECONFIG environment variable and also falls back
to the default kubectl config location if nothing is set.

Fixes hashicorp#1973

reegnz commented Jan 27, 2023

I've opened a PR with a proposal on how to handle this.
I went with a different approach: it uses the default client config loading rules unless config_path or config_paths is set (either directly in the provider config or through the KUBE_CONFIG_PATH or KUBE_CONFIG_PATHS env vars).
The rest of the provider settings act as overrides in both cases, so it should not make a difference which loading rules are used. A sketch of the fallback logic is below.
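A minimal sketch of that fallback, assuming a hypothetical explicitPaths value that stands in for the provider's resolution of config_path/config_paths and the corresponding env vars:

```go
package sketch

import "k8s.io/client-go/tools/clientcmd"

// loadingRules picks the kubeconfig loading rules for the provider.
// explicitPaths is hypothetical: it stands in for whatever the provider
// resolved from config_path/config_paths or KUBE_CONFIG_PATH(S).
func loadingRules(explicitPaths []string) *clientcmd.ClientConfigLoadingRules {
	if len(explicitPaths) > 0 {
		// Explicit configuration wins, preserving today's behaviour.
		return &clientcmd.ClientConfigLoadingRules{Precedence: explicitPaths}
	}
	// Otherwise fall back to the client default: KUBECONFIG if set,
	// else ~/.kube/config.
	return clientcmd.NewDefaultClientConfigLoadingRules()
}
```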

I think this keeps existing use-cases working, except when a user already has a context configured in $HOME/.kube/config. There the compatibility breaks, because the original behaviour defaulted to localhost and ignored $HOME/.kube/config.

So really the big question is: how common is it to rely on the provider's default behaviour with no config and no env vars set (e.g. no KUBECONFIG, KUBE_CONFIG_PATH, or KUBE_CONFIG_PATHS)?

If it is common, we could drop support for $HOME/.kube/config altogether and not use the default client config loading rules (but that again is not in line with kubectl users' basic assumptions).

My gut feeling is that if I run a kubectl command and interact with resources, then terraform should use the same config to provision resources. So when I run terraform apply, I can immediately check with kubectl, without having to tweak terraform to target the same kube context. This should work without any explicit config beyond setting KUBECONFIG, and it should respect $HOME/.kube/config as well (if no KUBECONFIG is set).

github-actions bot commented Jan 28, 2024

Marking this issue as stale due to inactivity. If this issue receives no comments in the next 30 days it will automatically be closed. If this issue was automatically closed and you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. This helps our maintainers find and focus on the active issues. Maintainers may also remove the stale label at their discretion. Thank you!

@github-actions github-actions bot added the stale label Jan 28, 2024
Author

reegnz commented Jan 31, 2024

Commenting so the issue doesn't go stale. I don't think this is done yet.

@github-actions github-actions bot removed the stale label Jan 31, 2024

SeanKilleen commented Jul 31, 2024

Wanted to chime in here because this is (I think) preventing us from being able to use Terraform Cloud runs (we pay for TF Cloud and store our state there, but currently we can't actually do any runs via the cloud, which hurts the value proposition).

Current State:

  • We use SAP BTP Kyma, which generates kubeconfigs that use OIDC by default.
  • This requires human interaction with a web browser to log in.
  • I have a kubeconfig in source control for each environment, referenced in the provider config based on the env name.
    • I'm OK with this because no credentials are stored.
    • It also requires a plugin to be installed locally, etc., which ties us to running things locally on a dev machine.

Goal: I'd like to be able to fully complete a cloud run in Terraform cloud or be able to turn this into a 1-click GitHub action for my team rather than everyone having to have the local prerequisites in place.

Challenge:

  • The other providers I need take env variables, which I can set as secrets in Terraform cloud.
  • I've generated a kubeconfig with a service account and token for the purposes of CI. Due to the permission level, and for general security hygiene, this token is not something I'd ever be comfortable putting in source control.
  • With other providers, they accept environment variables, which I can populate from Terraform Cloud via a variable that I can mark as secret/write-only.
  • With the Kubernetes provider, I can't pass this in via env variable.

Current possibilities as far as I know:

  • ❌ Use Env Variable for contents: Provider doesn't support it
  • ❌ Commit kubeconfig files with admin token to source control: This would make me incredibly nervous and I imagine would be frowned upon by security-conscious folks.
  • ❓ Use GitHub Actions or similar to put the file into the CI build from a secret. I might explore this, but all the files (including the one with the token) would then be sent to TF Cloud, so I'm still worried about exposing the token.
  • ❓ Move to a tool like Atlantis or similar to try to get around this. But I'd rather not introduce a whole new toolset.

In this case, if the Kubernetes provider were able to accept the contents of a kubeconfig (overriding the value set in the provider block), the path would (I think) be simple: set that in a Terraform Cloud environment variable for that workspace and be done.

giosueDelgado commented

Following, important for us
