
Provider doesn't properly set up when given only a config_context value without config_path #1274

Closed
mgarstecki opened this issue May 17, 2021 · 4 comments


mgarstecki commented May 17, 2021

Terraform Version, Provider Version and Kubernetes Version

Terraform version: v0.14.7
Kubernetes provider version: v2.2.0
Kubernetes version: v1.19

Affected Resource(s)

  • provider "kubernetes" configuration
  • data source: kubernetes_service

Terraform Configuration Files

provider "kubernetes" {
  config_context = "<a valid context name of your default kubeconfig>"
}

data "kubernetes_service" "kubernetes" {
  metadata {
    name      = "kubernetes"
    namespace = "default"
  }
}

Steps to Reproduce

  1. Have a proper Kubernetes context set up in your kubeconfig (no additional settings, especially nothing in KUBE_CONFIG_PATH)
  2. terraform apply

Expected Behavior

The data source should have loaded properly using my configured context.

Actual Behavior

Terraform fails with the following output:

Error: Get "http://localhost/api/v1/namespaces/default/services/kubernetes": dial tcp [::1]:80: connect: connection refused

  on main.tf line 10, in data "kubernetes_service" "kubernetes":
  10: data "kubernetes_service" "kubernetes" {

Important Factoids

My client setup otherwise works: kubectl --context <a valid context name of your default kubeconfig> ... succeeds.

I have no KUBE_CONFIG_PATH variable set, and this is intentional: I expect the provider to pick up the location of the default kubeconfig when none is set via KUBE_CONFIG_PATH/config_path, like other Kubernetes clients do.

This impacts our setup since workstation setups vary across my team (macOS vs Linux, user-wide config vs local ones, ...), and it is hard to find a single convention for setting env vars or Terraform provider values in this case.

The Kubernetes provider seems to me to go against usual client behaviour by ignoring ~/.kube/config when no override is provided.
Setting only config_context, without specifying the config file, works with the helm provider for example (see the sketch below).
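
For reference, a minimal sketch of such a helm provider configuration (assuming the helm provider's nested kubernetes block; the context name is a placeholder, as above):

provider "helm" {
  kubernetes {
    # No config_path set here; the helm provider still falls back to the
    # default kubeconfig and merely selects the named context.
    config_context = "<a valid context name of your default kubeconfig>"
  }
}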

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment
@mgarstecki mgarstecki added the bug label May 17, 2021
@alexsomesan (Member)

@mgarstecki You mention you expect the provider to pick up the default kubeconfig in the absence of KUBE_CONFIG_PATH, but that is not mentioned in the provider documentation. The docs clearly state that you have to configure the provider explicitly. Have a look here: https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs#authentication

You do have to set either a path to a kubeconfig file or discrete credentials, as sketched below.
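
For example (a minimal sketch; the path, context name, and credential values are placeholders to adapt to your environment):

provider "kubernetes" {
  # Option 1: point at a kubeconfig file explicitly
  # (equivalently, set the KUBE_CONFIG_PATH environment variable).
  config_path    = "~/.kube/config"
  config_context = "<a valid context name of your default kubeconfig>"

  # Option 2: instead of a kubeconfig, supply credentials directly,
  # e.g. host, cluster_ca_certificate, and token.
}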

dak1n1 (Contributor) commented Jun 3, 2021

@mgarstecki We also have an issue open here, where we're collecting feedback regarding automatically picking up $KUBECONFIG. If you want, you can 👍🏻 the original post to help us gauge interest in reading $KUBECONFIG automatically. (That feature was removed in 2.0, due to feedback we had from users at the time).

There is also a ConflictsWith PR in progress that will throw an error if you specify a context without a kubeconfig file, which would have prevented the issue described in this specific bug report. So it could be that ConflictsWith will solve this too.

github-actions bot commented Jun 4, 2022

Marking this issue as stale due to inactivity. If this issue receives no comments in the next 30 days it will automatically be closed. If this issue was automatically closed and you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. This helps our maintainers find and focus on the active issues. Maintainers may also remove the stale label at their discretion. Thank you!

@github-actions github-actions bot added the stale label Jun 4, 2022
@github-actions github-actions bot closed this as completed Jul 5, 2022
github-actions bot commented Aug 4, 2022

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Aug 4, 2022