.kubeconfig file current context used for auth instead of static values #1037

Closed
allymparker opened this issue Oct 15, 2020 · 4 comments

@allymparker

Terraform Version and Provider Version

Windows
Terraform v0.13.4

  • provider registry.terraform.io/hashicorp/azuread v1.0.0
  • provider registry.terraform.io/hashicorp/azurerm v2.31.1
  • provider registry.terraform.io/hashicorp/kubernetes v1.13.2
  • provider registry.terraform.io/hashicorp/vault v2.14.0

Affected Resource(s)

kubernetes_secret - potentially all kubernetes resources

Terraform Configuration Files

provider "kubernetes" {
  load_config_file = "false"

  host                   = azurerm_kubernetes_cluster.aks.kube_config.0.host
  client_certificate     = base64decode(azurerm_kubernetes_cluster.aks.kube_config.0.client_certificate)
  client_key             = base64decode(azurerm_kubernetes_cluster.aks.kube_config.0.client_key)
  cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.aks.kube_config.0.cluster_ca_certificate)
  alias                  = "aks"
}

resource "kubernetes_namespace" "redgate" {
  provider = kubernetes.aks

  metadata {
    name = "redgate"
  }

  depends_on = [azurerm_kubernetes_cluster.aks]
}

resource "kubernetes_secret" "server_details" {
  metadata {
    name      = "server-details"
    namespace = kubernetes_namespace.redgate.metadata.0.name
  }

  data = {
    "serverDetails.json" = <<JSON
{
  "server": "${var.sql_server_fqdn}",
  "user": "${var.platform_instance_name}-sa",
  "password": "${var.sql_server_sa_password}",
  "edition": "Basic" 
}
JSON
  }
}

Expected Behavior

The statically defined credentials passed to the k8s provider should be used instead of the current context found in .kubeconfig.

As per the docs:

If you have both valid configuration in a config file and static configuration, the static one is used as override. i.e. any static field will override its counterpart loaded from the config.
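To illustrate the precedence I expected (the host value below is a placeholder, not from my real setup): even with the config file being loaded, any static field should override its kubeconfig counterpart.

provider "kubernetes" {
  # kubeconfig is still read (the 1.x default)...
  load_config_file = "true"

  # ...but per the docs this static value should override the host
  # taken from the current kubeconfig context.
  host = "https://cluster-a.example.com"
}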

Actual Behavior

The current credentials/context from my .kubeconfig file are used.

Steps to Reproduce

1. Given two clusters, cluster-a and cluster-b.
2. Configure a kubernetes provider with statically provided credentials for cluster-a.
3. Create a kubernetes_secret resource that doesn't exist on either cluster-a or cluster-b.
4. Set your current context to cluster-b: kubectl config use-context cluster-b
5. Run terraform apply.
6. The secret will have been created on cluster-b instead of cluster-a.

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment
@aareet
Contributor

aareet commented Nov 5, 2020

@allymparker can you tell us a bit more about your environment so we can try to reproduce this? Are Cluster A and Cluster B both on AKS?

@dak1n1
Contributor

dak1n1 commented Nov 5, 2020

Check to see if any kubernetes-related environment variables are set in your shell. Variables like KUBE_HOST or KUBE_LOAD_CONFIG_FILE=false might be set. In my testing, those variables did interfere with an empty kubernetes provider block. I checked by running env | grep KUBE and then unset KUBE_HOST to fix mine, but I'm on Linux.

I also noticed that your secret block in your example does not specify a provider alias. It might be best to be more explicit, so you can be sure which cluster the resource will land on. I'll give you an example I used for testing:

provider "kubernetes" {
  load_config_file       = "false"
  host                   = "https://${google_container_cluster.primary.endpoint}"
  username               = google_container_cluster.primary.master_auth[0].username
  password               = google_container_cluster.primary.master_auth[0].password
  cluster_ca_certificate = base64decode(google_container_cluster.primary.master_auth[0].cluster_ca_certificate)
  alias                  = "gke"
}

# this one picks up KUBECONFIG from the default location
provider "kubernetes" {
  load_config_file       = "true"
  alias                  = "minikube"
}

resource "kubernetes_namespace" "testgke" {
  provider = kubernetes.gke
  metadata {
    name = "testgke"
  }
}

resource "kubernetes_namespace" "testminikube" {
  provider = kubernetes.minikube
  metadata {
    name = "testminikube"
  }
}

resource "kubernetes_secret" "test" {
  provider = kubernetes.gke  ### this is the line that I think will fix yours
  metadata {
    name      = "server-details"
    namespace = kubernetes_namespace.testgke.metadata.0.name
  }

  data = {
    "serverDetails.json" = <<JSON
{
  "server": "test",
  "user": "test",
  "password": "test",
  "edition": "test"
}
JSON
  }
}
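For background on why the kubeconfig gets picked up at all: a resource with no provider argument uses the default (un-aliased) kubernetes provider. Since your configuration only declares an aliased provider, Terraform synthesizes an empty default configuration for that resource, roughly equivalent to:

# Implicit default provider used by any resource without an explicit
# provider argument. load_config_file defaults to true in the 1.x
# provider, so this reads whatever the current context is in
# ~/.kube/config.
provider "kubernetes" {}

That would explain why your secret followed the current kubectl context rather than the static AKS credentials.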

@allymparker
Author

allymparker commented Nov 6, 2020

@allymparker can you tell us a bit more about your environment so we can try to reproduce this? Are Cluster A and Cluster B both on AKS?

Yes but....

provider = kubernetes.gke ### this is the line that I think will fix yours

Yep, that sorts it 🤦 (well, provider = kubernetes.aks in my case). Thanks @dak1n1!
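For anyone else hitting this, the fix was just adding the provider alias to the secret resource. A sketch of my corrected block, matching the aliased provider from the original config:

resource "kubernetes_secret" "server_details" {
  provider = kubernetes.aks # the missing line

  metadata {
    name      = "server-details"
    namespace = kubernetes_namespace.redgate.metadata.0.name
  }

  # data block unchanged from the original report
}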

@ghost ghost removed the waiting-response label Nov 6, 2020
@ghost

ghost commented Dec 7, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!

@ghost ghost locked as resolved and limited conversation to collaborators Dec 7, 2020