Terraform.io to EKS "Error: Kubernetes cluster unreachable" #400
Comments
The token auth configuration below ultimately worked for me. Perhaps this should be the canonical approach for Terraform Cloud -> EKS, rather than using
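A minimal sketch of the token-auth pattern being described, assuming data sources named data.aws_eks_cluster.cluster and data.aws_eks_cluster_auth.cluster (these names are placeholders, not taken from the comment):
provider "helm" {
  kubernetes {
    host                   = data.aws_eks_cluster.cluster.endpoint
    cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
    token                  = data.aws_eks_cluster_auth.cluster.token
  }
}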
I don't see how this could possibly work; with Helm 3, it seems to be completely broken.
Seeing the same issue.
@eeeschwartz I can confirm it's working with a newly created cluster. Here is a sample configuration to prove it. On the other hand, with an already created cluster I see the same issue and
Has anyone found a workaround yet?
@vfiset I believe there is no workaround. It's just an issue with policies (my guess). The provider doesn't give you enough debug information, so you will probably need to run
My guess is that the aws-auth config map is blocking access. In the example that @kharandziuk has shown here, there's no aws-auth configmap defined. Also, it's worth noting that the usage of helm here is in the same terraform run as the EKS run, which means that the default credentials for EKS are the ones being used to deploy helm. I have a fairly complicated setup where I'm assuming roles between the different stages of the EKS cluster deployment.
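Where the aws-auth config map is the blocker, one common fix is to map the IAM role that Terraform assumes into the cluster. A minimal sketch of doing that via the kubernetes provider (aws_iam_role.terraform is a hypothetical placeholder for whatever role Terraform assumes; any existing entries, such as the node instance role mapping, must be preserved as well):
resource "kubernetes_config_map" "aws_auth" {
  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }

  data = {
    # Grant the role used by Terraform access to the cluster
    mapRoles = <<-YAML
      - rolearn: ${aws_iam_role.terraform.arn}
        username: terraform
        groups:
          - system:masters
    YAML
  }
}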
Seeing the same issue using Helm 3. My tf looks like @kinihun's ...
I created and deployed a helm chart via the helm provider ages ago. It works fine; I can change things here and there, etc.
Same behavior as @leoddias. I've added the helm provider reference to the k8s cluster and even did some local-exec to switch contexts to the correct one. Also getting
Has anyone found a workaround here? I still get this:
It works after the first apply because of this, but the next plan, even if nothing changes, will still re-generate the token.
Same issue here with Helm 3.
Same issue here.
Is there a fix available for this issue, by chance (or a timeline)? @HashiCorp team
My workaround is to refresh data.aws_eks_cluster_auth before apply:
terraform refresh -target=data.aws_eks_cluster_auth.cluster
terraform apply -target=helm_release.helm-operator -refresh=false
You can have the token automatically refreshed if you configure the provider with an exec block using the aws CLI or the aws-iam-authenticator. Have a look at the Kubeconfig in EKS docs for syntax details.
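For reference, a sketch of the aws-iam-authenticator variant of that exec block (the data source names follow the aws CLI example shown further down this thread and are assumptions):
provider "helm" {
  kubernetes {
    host                   = data.aws_eks_cluster.eks.endpoint
    cluster_ca_certificate = base64decode(data.aws_eks_cluster.eks.certificate_authority.0.data)

    exec {
      api_version = "client.authentication.k8s.io/v1alpha1"
      command     = "aws-iam-authenticator"
      args        = ["token", "-i", data.aws_eks_cluster.eks.name]
    }
  }
}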
It doesn't change anything in my tests with
@voron thanks! The refresh fix was the only thing that worked for me.
Tried import with 1.3.2. Got this:
We had this problem with an RBAC-enabled cluster. In this case the token from the cluster creation did not have enough permissions.
Same here.
I followed the instructions in #400 (comment) provided by @eeeschwartz in this thread. It would fail on the first apply and work the second time. The only thing that I had missed was adding "depends_on = [aws_eks_cluster.my_cluster]" to the data resource, as mentioned in the code snippet. Once I added it, it started working. I created and destroyed the deployment multiple times and it worked. The data "aws_eks_cluster_auth" "cluster-auth" resource is sketched below.
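A sketch of that data resource with the dependency added (the name argument is assumed to point at the aws_eks_cluster.my_cluster resource mentioned in the comment):
data "aws_eks_cluster_auth" "cluster-auth" {
  name       = aws_eks_cluster.my_cluster.name
  depends_on = [aws_eks_cluster.my_cluster]
}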
Switching from 2.0.2 to version 1.3.2 of the Helm provider fixed our config issues.
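If anyone wants to try the same downgrade, pinning the provider in the terraform block is a minimal way to do it (a sketch; this syntax requires Terraform 0.13 or later):
terraform {
  required_providers {
    helm = {
      source  = "hashicorp/helm"
      version = "1.3.2"
    }
  }
}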
After unsetting the env vars for kubectl that were pointing to the old cluster, everything worked.
Not sure why the helm provider reads those vars if the following setup was used:
Using 2.0.2 provider versions, which don't have the "load_config_file" argument available anymore.
I'm going to close this issue since the OP has a solution, and since we have several similar issues open already between the Kubernetes and Helm providers. We are continuing to work on the authentication workflow to make configuration easier. (These are the next steps toward fixing it, if anyone is curious: hashicorp/terraform-provider-kubernetes#1141 and hashicorp/terraform-plugin-sdk#727).
I have the same problem. I've tried all of the mentioned solutions, but it doesn't seem to pick up the token properly. Terraform v0.14.4; this is my config:
Any thoughts?
@formatlos Running on Terraform Cloud, using Terraform 0.15.1 and Helm provider 2.1.2, your solution with exec authentication works for me. I just changed the provider configuration to:
provider "helm" {
  kubernetes {
    host                   = data.aws_eks_cluster.eks.endpoint
    cluster_ca_certificate = base64decode(data.aws_eks_cluster.eks.certificate_authority.0.data)
    exec {
      api_version = "client.authentication.k8s.io/v1alpha1"
      command     = "aws"
      args        = ["eks", "get-token", "--cluster-name", data.aws_eks_cluster.eks.name]
    }
  }
}
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!
Terraform Version
0.12.19
Affected Resource(s)
Terraform Configuration Files
Debug Output
Note that kubectl get po reaches the cluster and reports "No resources found in default namespace."
https://gist.github.com/eeeschwartz/021c7b0ca66a1b102970f36c42b23a59
Expected Behavior
The testchart is applied
Actual Behavior
The helm provider is unable to reach the EKS cluster.
Steps to Reproduce
On terraform.io:
terraform apply
Important Factoids
Note that kubectl is able to communicate with the cluster. But something about the terraform.io environment, the .helm/config, or the helm provider itself renders the cluster unreachable.
Note of Gratitude
Thanks for all the work getting helm 3 support out the door. Holler if I'm missing anything obvious or can help diagnose further.