
Terraform.io to EKS "Error: Kubernetes cluster unreachable" #400

Closed
eeeschwartz opened this issue Feb 7, 2020 · 29 comments


eeeschwartz commented Feb 7, 2020

Terraform Version

0.12.19

Affected Resource(s)

  • helm_release

Terraform Configuration Files

locals {
  kubeconfig = <<KUBECONFIG
apiVersion: v1
clusters:
- cluster:
    server: ${aws_eks_cluster.my_cluster.endpoint}
    certificate-authority-data: ${aws_eks_cluster.my_cluster.certificate_authority.0.data}
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: aws
  name: aws
current-context: aws
kind: Config
preferences: {}
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
        - "token"
        - "-i"
        - "${aws_eks_cluster.my_cluster.name}"
KUBECONFIG
}

resource "local_file" "kubeconfig" {
  content  = local.kubeconfig
  filename = "/home/terraform/.kube/config"
}

resource "null_resource" "custom" {
  depends_on    = [local_file.kubeconfig]

  # change trigger to run every time
  triggers = {
    build_number = "${timestamp()}"
  }

  # download kubectl
  provisioner "local-exec" {
    command = <<EOF
      set -e

      curl -o aws-iam-authenticator https://amazon-eks.s3-us-west-2.amazonaws.com/1.14.6/2019-08-22/bin/linux/amd64/aws-iam-authenticator
      chmod +x aws-iam-authenticator
      mkdir -p $HOME/bin && cp ./aws-iam-authenticator $HOME/bin/aws-iam-authenticator && export PATH=$PATH:$HOME/bin

      echo $PATH

      aws-iam-authenticator

      curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
      chmod +x kubectl

      ./kubectl get po
    EOF
  }
}

resource "helm_release" "testchart" {
  depends_on    = [local_file.kubeconfig]
  name          = "testchart"
  chart         = "../../../resources/testchart"
  namespace     = "default"
}

Debug Output

Note that

  • kubectl get po reaches the cluster and reports "No resources found in default namespace."
  • while helm_release reports: "Error: Kubernetes cluster unreachable"
  • In earlier testing it errored with "Error: stat /home/terraform/.kube/config". Now that I write the local file to that location, it no longer errors. I assume that means it successfully reads the kube config.

https://gist.github.com/eeeschwartz/021c7b0ca66a1b102970f36c42b23a59

Expected Behavior

The testchart is applied.

Actual Behavior

The helm provider is unable to reach the EKS cluster.

Steps to Reproduce

On terraform.io:

  1. terraform apply

Important Factoids

Note that kubectl is able to communicate with the cluster. But something about the terraform.io environment, the .helm/config, or the helm provider itself renders the cluster unreachable.

Note of Gratitude

Thanks for all the work getting helm 3 support out the door. Holler if I'm missing anything obvious or can help diagnose further.


eeeschwartz commented Feb 7, 2020

The token auth configuration below ultimately worked for me. Perhaps this should be the canonical approach for Terraform Cloud -> EKS, rather than using ~/.kube/config.

provider "aws" {
  region = "us-east-1"
}

data "aws_eks_cluster_auth" "cluster-auth" {
  depends_on = [aws_eks_cluster.my_cluster]
  name       = aws_eks_cluster.my_cluster.name
}

provider "helm" {
  alias = "my_cluster"
  kubernetes {
    host                   = aws_eks_cluster.my_cluster.endpoint
    cluster_ca_certificate = base64decode(aws_eks_cluster.my_cluster.certificate_authority.0.data)
    token                  = data.aws_eks_cluster_auth.cluster-auth.token
    load_config_file       = false
  }
}

resource "helm_release" "testchart" {
  provider  = helm.my_cluster
  name       = "testchart"
  chart      = "../../../resources/testchart"
  namespace  = "default"
}


kinihun commented Feb 29, 2020

I don't see how this could possibly work; with Helm 3, it seems to be completely broken.
Below is my configuration, and I can't connect to the cluster.
My kubernetes provider works, but not the kubernetes block within helm, which has the same settings.

data "aws_eks_cluster" "cluster" {
  name = "foobar"
}

data "aws_eks_cluster_auth" "cluster" {
  name = "foobar"
}

provider "kubernetes" {
  version                = "1.10.0"
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  token                  = data.aws_eks_cluster_auth.cluster.token
  load_config_file       = false
}

provider "helm" {
  version                = "1.0.0"

  kubernetes {
    host                   = data.aws_eks_cluster.cluster.endpoint
    cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
    token                  = data.aws_eks_cluster_auth.cluster.token
    load_config_file       = false
  }
}


venky999 commented Mar 3, 2020

Seeing the same issue.

@kharandziuk

@eeeschwartz I can confirm it's working with a newly created cluster. Here's a sample configuration to prove it.

From another perspective: with an already-created cluster I see the same issue, and debug=true doesn't help at all.


vfiset commented Mar 20, 2020

Anyone found a workaround yet?

@kharandziuk

@vfiset I believe there is no workaround. It's just an issue with policies (my guess).

The provider doesn't give you enough debug information, so you will probably need to run helm install manually to find the issue (see the sketch below).
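For example, a manual run along these lines (a sketch, assuming the kubeconfig path and chart location from the original post) can surface the auth error that the provider hides:

# point helm at the kubeconfig the terraform run writes out
export KUBECONFIG=/home/terraform/.kube/config
# --debug prints the underlying API errors the provider swallows
helm install testchart ../../../resources/testchart --namespace default --debug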


rgardam commented Mar 30, 2020

My guess is that the aws-auth config map is blocking access. In the example that @kharandziuk has shown here, there's no aws-auth configmap defined. It's also worth noting that helm is used here in the same terraform run as the EKS one, which means the default EKS credentials are the ones being used to deploy helm.

I have a fairly complicated setup where I'm assuming roles between the different stages of the EKS cluster deployment.
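For reference, managing aws-auth in the same run would look roughly like this (a minimal sketch using the kubernetes provider; the role ARN and username mapping are hypothetical):

resource "kubernetes_config_map" "aws_auth" {
  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }

  data = {
    # hypothetical mapping; grants the role that runs terraform/helm cluster access
    mapRoles = <<-YAML
      - rolearn: arn:aws:iam::111122223333:role/terraform-deployer
        username: terraform
        groups:
          - system:masters
    YAML
  }
}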


leoddias commented Jun 9, 2020

Seeing the same issue using Helm 3. My tf looks like @kinihun's.
It happens on the first run of terraform apply; when I run it again, everything goes well.

@netflash

I deployed a helm chart via the helm provider ages ago. It works fine; I can change things here and there, etc.
Today I wanted to "migrate" a standalone-deployed helm chart to be managed under terraform. So when I tried to run terraform import helm_release.chart namespace/chart, I got this error.

@privomark

Seeing the same issue using Helm 3. My tf looks like @kinihun's.
It happens on the first run of terraform apply; when I run it again, everything goes well.

Same behavior as @leoddias. I've added the helm provider reference to the k8s cluster and even did some local-exec to switch contexts to the correct one. Also getting dial tcp i/o timeout errors, all of which magically resolve on the 2nd apply.


mbelang commented Jul 22, 2020

Anyone found a workaround here? I still get this: Error: Kubernetes cluster unreachable: invalid configuration: client-key-data or client-key must be specified for to use the clientCert authentication method.


mbelang commented Jul 22, 2020

Works after the first apply because of this, but the next plan, even if nothing changes, will still re-generate the token.

------------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
 <= read (data resources)

Terraform will perform the following actions:

  # data.aws_eks_cluster_auth.auth will be read during apply
  # (config refers to values not yet known)
 <= data "aws_eks_cluster_auth" "auth"  {
      + id    = (known after apply)
      + name  = "my_cluster"
      + token = (sensitive value)
    }

Plan: 0 to add, 0 to change, 0 to destroy.

------------------------------------------------------------------------


BrianMusson commented Jul 29, 2020

Same issue here with Helm 3:

provider "helm" {
  version = "~> 1.2.3"

  kubernetes {
    load_config_file       = false
    host                   = data.aws_eks_cluster.cluster.endpoint
    cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
    token                  = data.aws_eks_cluster_auth.kubernetes_token.token
  }
}

@ma-caylent

I deployed a helm chart via the helm provider ages ago. It works fine; I can change things here and there, etc.
Today I wanted to "migrate" a standalone-deployed helm chart to be managed under terraform. So when I tried to run terraform import helm_release.chart namespace/chart, I got this error.

Same issue here.


jvanwygerden commented Sep 17, 2020

Is there a fix available for this issue, by chance (or a timeline)? @HashiCorp team


voron commented Sep 18, 2020

My workaround is to refresh data.aws_eks_cluster_auth before apply:

terraform refresh -target=data.aws_eks_cluster_auth.cluster
terraform apply -target=helm_release.helm-operator -refresh=false


alexsomesan commented Sep 18, 2020 via email


voron commented Sep 20, 2020

You can have the token automatically refreshed if you configure the provider with an exec block with the aws cli or the aws-iam-authenticator.

It doesn't change anything in my tests with terraform apply -refresh=false. And it isn't required when data.aws_eks_cluster_auth.cluster is refreshed by the terraform configuration.
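For context, the exec-based configuration being suggested looks roughly like this (a sketch using aws-iam-authenticator and the OP's resource names):

provider "helm" {
  kubernetes {
    host                   = aws_eks_cluster.my_cluster.endpoint
    cluster_ca_certificate = base64decode(aws_eks_cluster.my_cluster.certificate_authority.0.data)

    exec {
      # the token is fetched at apply time, so it cannot go stale between plan and apply
      api_version = "client.authentication.k8s.io/v1alpha1"
      command     = "aws-iam-authenticator"
      args        = ["token", "-i", aws_eks_cluster.my_cluster.name]
    }
  }
}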


kwahsog commented Oct 7, 2020

@voron thanks! The refresh fix was the only thing that worked for me.

@netflash

Tried import with 1.3.2

Got this

terraform import helm_release.jenkins jenkins/jenkins
helm_release.jenkins: Importing from ID "jenkins/jenkins"...

Error: Kubernetes cluster unreachable: the server has asked for the client to provide credentials

but terraform plan works fine (my terraform config has other helm resources)


derkoe commented Nov 5, 2020

We had this problem with an RBAC-enabled cluster. In this case, the token from the cluster creation did not have enough permissions.
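A quick way to check whether the token's identity has enough permissions (a sketch, assuming kubectl is configured with the same credentials the provider uses):

# ask the API server what this identity is allowed to do
kubectl auth can-i create deployments --namespace default
kubectl auth can-i list secrets --namespace kube-system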

@avodaqstephan

Seeing the same issue using Helm 3. My tf looks like @kinihun's.
It happens on the first run of terraform apply; when I run it again, everything goes well.

Same here.


biswa-r-singh commented Dec 2, 2020

I followed the instructions in #400 (comment) provided by @eeeschwartz in this thread. It would fail on the first apply and work the second time. The only thing that I missed was adding "depends_on = [aws_eks_cluster.my_cluster]" to the data resource, as mentioned in the code snippet. Once I added it, it started working. I created and destroyed the deployment multiple times and it worked.

data "aws_eks_cluster_auth" "cluster-auth" {
// Add the depends_on
name = aws_eks_cluster.my_cluster.name
}

@jroberts235

Switching from 2.0.2 to version 1.3.2 of the Helm provider fixed our config issues.
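For anyone pinning it the same way, something like this (a sketch using the Terraform 0.13+ required_providers syntax) should do it:

terraform {
  required_providers {
    helm = {
      source  = "hashicorp/helm"
      version = "1.3.2"
    }
  }
}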


ghassencherni commented Mar 3, 2021

After unsetting the env vars for kubectl that were pointing to the old cluster, everything worked:

unset KUBECONFIG
unset KUBE_CONFIG_PATH

Not sure why the helm provider reads those vars when the following setup is used:

provider "helm" {  
  kubernetes {
    host                   = data.aws_eks_cluster.cluster.endpoint
    cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)
    token                  = data.aws_eks_cluster_auth.cluster.token  
  }
}

Using the 2.0.2 provider version, which no longer has the load_config_file argument.


dak1n1 commented Apr 3, 2021

I'm going to close this issue since the OP has a solution, and since we have several similar issues open already between the Kubernetes and Helm providers. We are continuing to work on the authentication workflow to make configuration easier. (These are the next steps toward fixing it, if anyone is curious: hashicorp/terraform-provider-kubernetes#1141 and hashicorp/terraform-plugin-sdk#727).

@dak1n1 dak1n1 closed this as completed Apr 3, 2021
@formatlos

I have the same problem. I've tried all of the mentioned solutions, but it doesn't seem to pick up the token properly.

Terraform v0.14.4
hashicorp/helm v2.0.3

this is my config:

data "aws_eks_cluster" "cluster" {
  name = module.eks.cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
  name = module.eks.cluster_id
}

provider "helm" {
  kubernetes {
    host                   = data.aws_eks_cluster.cluster.endpoint
    cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)

    exec {
      api_version = "client.authentication.k8s.io/v1alpha1"
      args        = ["eks", "get-token", "--cluster-name", data.aws_eks_cluster_auth.cluster.id]
      command     = "aws"
    }
  }
}

Any thoughts?


goetzc commented May 3, 2021

@formatlos Running on Terraform Cloud, using Terraform 0.15.1 and Helm provider 2.1.2, your solution with exec authentication works for me. I just changed the args to get the cluster name instead of the id.

provider "helm" {
  kubernetes {
    host                   = data.aws_eks_cluster.eks.endpoint
    cluster_ca_certificate = base64decode(data.aws_eks_cluster.eks.certificate_authority.0.data)

    exec {
      api_version = "client.authentication.k8s.io/v1alpha1"
      command     = "aws"
      args        = ["eks", "get-token", "--cluster-name", data.aws_eks_cluster.eks.name]
    }
  }
}


ghost commented May 4, 2021

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!

@ghost ghost locked as resolved and limited conversation to collaborators May 4, 2021