
Configuring provider using another resource's outputs is not possible anymore since v2.0.0 #647

Closed
mcanevet opened this issue Dec 19, 2020 · 39 comments · Fixed by #648

Comments

@mcanevet

Terraform, Provider, Kubernetes and Helm Versions

Terraform version: 0.14.3
Provider version: 2.0.0
Kubernetes version: 1.8.x
Helm version: 3.4.x

Affected Resource(s)

  • helm_release
  • helm_repository
Error: provider not configured: you must configure a path to your kubeconfig
or explicitly supply credentials via the provider block or environment variables.

See our documentation at: https://registry.terraform.io/providers/hashicorp/helm/latest/docs#authentication

Expected Behavior

With provider versions < 2.x, I used to create an AKS or EKS cluster and deploy some Helm charts in the same Terraform workspace, configuring the Helm provider with credentials coming from the Azure or AWS resources that create the Kubernetes cluster.
It looks like this is no longer possible.

Actual Behavior

Important Factoids

It used to work with v1.x.

References

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment
@mcanevet mcanevet added the bug label Dec 19, 2020
@jrhouston
Contributor

Thanks for opening this, @mcanevet. Can you share the config you used to configure the provider?

I suspect this might be a bug in some additional validation we added when the provider gets configured.

@dwaiba

dwaiba commented Dec 19, 2020

Same error with the following:

terraform {
  required_version = ">= 0.14.3"
  required_providers {
    helm = {
      version = ">=1.3.2"
    }
    null = {
      version = ">=3.0.0"
    }
  }
}
provider "helm" {
  kubernetes {
    host                   = lookup(var.k8s_cluster, "host")
    client_certificate     = base64decode(lookup(var.k8s_cluster, "client_certificate"))
    client_key             = base64decode(lookup(var.k8s_cluster, "client_key"))
    cluster_ca_certificate = base64decode(lookup(var.k8s_cluster, "cluster_ca_certificate"))

  }
}

Works fine with the following, hence an issue with 2.0.0:

terraform {
  required_version = ">= 0.14.3"
  required_providers {
    helm = {
      version = "=1.3.2"
    }
    null = {
      version = ">=3.0.0"
    }
  }
}
provider "helm" {
  kubernetes {
    host                   = lookup(var.k8s_cluster, "host")
    client_certificate     = base64decode(lookup(var.k8s_cluster, "client_certificate"))
    client_key             = base64decode(lookup(var.k8s_cluster, "client_key"))
    cluster_ca_certificate = base64decode(lookup(var.k8s_cluster, "cluster_ca_certificate"))

  }
}

@jrhouston
Contributor

I've put out a patch release that moves the validation logic so it happens after the provider is configured and should remedy this issue. Please let me know if this error is still surfacing for you.

@mprimeaux

mprimeaux commented Dec 20, 2020

@jrhouston Thanks for your help. I am experiencing the same issue as identified by @mcanevet even when using the Terraform Helm provider 2.0.1 patch, which I believe represents this commit.

Execution of the 'terraform plan' phase results in the following error:

Error: provider not configured: you must configure a path to your kubeconfig
or explicitly supply credentials via the provider block or environment variables.

See our authentication documentation at: https://registry.terraform.io/providers/hashicorp/helm/latest/docs#authentication

To provide context, I'm spinning up a new AKS cluster as the first phase of our environment formation. The second phase of Terraform modules is focused on workload installation using the Helm provider.

I'm using Terraform 0.14.3, Azure RM provider 2.41.0 and the Terraform Helm provider v2.0.1. Any insights are appreciated. I can, of course, package any logs needed.

I'll start to debug back from this line as it seems to be the source of the error message.

@jrhouston
Contributor

@mprimeaux Thanks for getting back to me. Are you able to share a gist with more of your config? I have a config similar to the one posted above, broken into two modules (one for the cluster and one for the charts), but I can't get it to error.

If you are set up to build the provider you could also try commenting out these lines:

if err := checkKubernetesConfigurationValid(m.data); err != nil {
    return nil, err
}
If the plan still doesn't succeed, then I suspect this is similar to the progressive apply issue we had in the Kubernetes provider, which is documented here.

Did you have load_config_file set to false in the kubernetes block when using v1.x.x? If not, try setting it to false and see if you get an error; it could be that in v1.x.x the plan is succeeding because the provider is reading your local kubeconfig by default.
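For reference, a minimal sketch of what that v1.x check might look like (the variable names below are placeholders, not taken from any config in this thread):

# Hypothetical v1.x provider block: load_config_file defaults to true there,
# so setting it to false stops the provider from silently falling back to ~/.kube/config.
provider "helm" {
  kubernetes {
    load_config_file       = false
    host                   = var.cluster_endpoint
    token                  = var.cluster_token
    cluster_ca_certificate = base64decode(var.cluster_ca_certificate)
  }
}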

@mcanevet
Author

Looks like it works for me with v2.0.1.
Thanks.
Should I leave this issue open, as it's unclear whether the problem is really solved?

@jrhouston
Contributor

@mcanevet Thanks for reporting back. We can leave this open for now in case anyone else has issues with this.

@mprimeaux

@jrhouston After a bit of debugging, I am pleased to report the 2.0.1 patch does indeed work as intended. I no longer experience the error described above. We'll test a bit more across various formations and report back should we hit another exception case.

In the near term, we are modifying our formation strategy to reflect infrastructure formation, workload deployment and workload upgrade phases to gracefully avoid the 'pain'.

Sincerely appreciate everyone's help on this one.

@jrhouston
Contributor

Glad to hear that @mprimeaux! 😄 Thanks for contributing, please open issues generously if you run into any more problems.

@FischlerA

@jrhouston Hi there,
I'm sorry, but for me the issue isn't fixed: the provider no longer looks for the kubeconfig file in the default path "~/.kube/config" as the old version did.
So far I have relied on not having to specify the default path for the config file.
Any chance this will be re-added?

@jrhouston
Contributor

jrhouston commented Dec 21, 2020

@FischlerA Is there a reason you can't explicitly specify config_path or set the KUBE_CONFIG_PATH environment variable?

We offer some context for why we made this breaking change in the Upgrade Guide.
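For example, a minimal sketch of the explicit v2.x configuration (the path shown is just the usual default; adjust as needed):

# In v2.x the provider no longer reads ~/.kube/config implicitly, so the path
# has to be set explicitly here or via the KUBE_CONFIG_PATH environment variable.
provider "helm" {
  kubernetes {
    config_path = "~/.kube/config"
  }
}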

@FischlerA

@jrhouston We are able to specify the path if necessary.
The question is whether this is the intended behavior.
Personally I prefer not having to be explicit about every little detail, and the default path of the kubeconfig is one of those things I don't expect to have to set explicitly.

Having the ability to specify the path is nice, and in some cases definitely necessary.
In the default case of "I have my kubeconfig and just want to deploy" I don't think it should be required.

Also, to provide more information: we deploy our Terraform through pipelines, updating the kubeconfig right before we apply the changes, so we actually only have one kubeconfig and it's always the correct one.

Thanks for providing the link to the Upgrade Guide; stating your reasons helped me understand the changes a lot better. While I personally do not agree with the reasons listed, I can understand the need for the change.

For me personally, the safety of not applying my configuration to the wrong cluster should not depend on the path, especially since all of my teammates work with a single kubeconfig file containing multiple cluster configurations, so it is actually of no benefit to us. But I don't want to discourage the changes; I just wanted to share my opinion on this topic.

Thanks for your time btw :)

@jrhouston
Contributor

jrhouston commented Dec 21, 2020

@FischlerA Absolutely – I can see that in your use case the problem of picking up the wrong cluster by mistake couldn't happen, because your apply runs in a completely isolated environment, which makes sense and is a good practice I like to see.

We did some user research on this, where we talked to a sample of users across the Helm and Kubernetes providers, and uncertainty and confusion between the default config path, KUBECONFIG, and load_config_file was one of the key findings. We had a bunch of debates internally about this and decided that the best way to provide a consistent experience is to err on the side of explicitness. I definitely appreciate that on the happy path, where your pipeline is so neatly isolated, having to set the path to the kubeconfig feels like an extra detail.

If you feel strongly about this change please feel free to open a new issue to advocate for reversing it and we can have more discussion there.

Thanks for contributing @FischlerA! 😄

@jurgenweber

jurgenweber commented Dec 22, 2020

Mine still fails in 2.0.1.

The error is:

Error: Get "http://localhost/api/v1/namespaces/argocd/secrets/00-dev": dial tcp 127.0.0.1:80: connect: connection refused

config:

provider "helm" {
  alias = "operational"
  kubernetes {
    host                   = element(concat(data.aws_eks_cluster.operational[*].endpoint, list("")), 0)
    cluster_ca_certificate = base64decode(element(concat(data.aws_eks_cluster.operational[*].certificate_authority.0.data, list("")), 0))
    token                  = element(concat(data.aws_eks_cluster_auth.operational[*].token, list("")), 0)
  }
}

Reverting back to 1.3.2 and it's fine.

@jrhouston
Contributor

@jurgenweber does it still succeed on v1.3.2 if you set load_config_file to false?

@jurgenweber

@jrhouston yes, I have to put that setting back.

@madushan1000

I think there is something wrong with aliasing. If I use,

provider "helm" {
  alias = "alpha"
  kubernetes {
    host = "<host>"
    token = var.rancher_token
  }
}

It breaks with

Error: provider not configured: you must configure a path to your kubeconfig
or explicitly supply credentials via the provider block or environment variables.

See our authentication documentation at: https://registry.terraform.io/providers/hashicorp/helm/latest/docs#authentication

but if I remove the alias attribute in the provider, it works.

I'm trying to pass the provider to a module as follows:

module "helm-releases" {
  source = "../helm-releases"
  environment = "alpha"
  providers = {
    helm = helm.alpha
  }
}

@kpucynski

Also, the load_config_file parameter was deleted...

@AndreaGiardini

Hi everyone

My Terraform code breaks completely with the 2.0.1 version of the Helm provider; reverting to < 2.0.0 fixes the problem:

Plan: 0 to add, 2 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

helm_release.jupyterhub_dev: Modifying... [id=jupyterhub-dev]
helm_release.jupyterhub: Modifying... [id=jupyterhub]

Error: template: jupyterhub/templates/proxy/deployment.yaml:28:32: executing "jupyterhub/templates/proxy/deployment.yaml" at <include (print $.Template.BasePath "/hub/secret.yaml") .>: error calling include: template: jupyterhub/templates/hub/secret.yaml:12:52: executing "jupyterhub/templates/hub/secret.yaml" at <$values.custom>: wrong type for value; expected map[string]interface {}; got interface {}

Error: template: jupyterhub/templates/proxy/deployment.yaml:28:32: executing "jupyterhub/templates/proxy/deployment.yaml" at <include (print $.Template.BasePath "/hub/secret.yaml") .>: error calling include: template: jupyterhub/templates/hub/secret.yaml:12:52: executing "jupyterhub/templates/hub/secret.yaml" at <$values.custom>: wrong type for value; expected map[string]interface {}; got interface {}

I am installing the latest version of this chart: https://jupyterhub.github.io/helm-chart/

@aareet
Contributor

aareet commented Jan 6, 2021

@AndreaGiardini could you file a separate issue with all your info please? This seems unrelated to this particular issue.

@ghassencherni

I'm still having the same issue with the latest patch, 2.0.1:

Error: provider not configured: you must configure a path to your kubeconfig
or explicitly supply credentials via the provider block or environment variables.

The problem is that I can't provide any kubeconfig file, as the cluster is not created yet. Here is my config:

provider "helm" {
  kubernetes {
    host                   = aws_eks_cluster.eks-cluster.endpoint
    cluster_ca_certificate = base64decode(aws_eks_cluster.eks-cluster.certificate_authority.0.data)
    token                  = data.aws_eks_cluster_auth.cluster-auth.token
  }
}

@aareet
Contributor

aareet commented Jan 6, 2021

@ghassencherni can you share your full config so we can see how this provider block is being used? (also include your terraform version and debug output)

@ghassencherni

ghassencherni commented Jan 7, 2021

@aareet Thank you for your response.
Versions are:
Terraform v0.14.3

  • provider registry.terraform.io/hashicorp/aws v3.19.0
  • provider registry.terraform.io/hashicorp/helm v2.0.1
And here is the provider config:
provider "aws" {
  region     = var.region
}

provider "helm" {
  kubernetes {             
    host                   = aws_eks_cluster.eks-cluster.endpoint
    cluster_ca_certificate = base64decode(aws_eks_cluster.eks-cluster.certificate_authority.0.data)
    token                  = data.aws_eks_cluster_auth.cluster-auth.token
  }
}

Apart from this error, there is nothing interesting in the debug output:

Error: provider not configured: you must configure a path to your kubeconfig
or explicitly supply credentials via the provider block or environment variables.

@ghassencherni

@jrhouston yes, I have to put that setting back.

v1.3.2 is not working for me (even after adding load_config_file). Can you share your final config, please? And your AWS provider version (not sure whether it can be related)?
Thank you

@mcanevet
Author

mcanevet commented Jan 7, 2021

For all who still have issues with v2.0.1: did you also upgrade to Terraform v0.14.x in the meantime? I don't have the issue anymore with v2.0.1 on Terraform v0.13.x, but I do have one on Terraform v0.14.x (issue #652).

@srinirei

srinirei commented Jan 8, 2021

I have Terraform v0.14.x and Helm provider v2.0.1. I am using an alias in my provider and having the same issue as @madushan1000. I appreciate any help on this. Even if I explicitly give the kubeconfig path, it still gives the error.

provider "helm" {
  alias = "eks"
  kubernetes {
     host                   = data.aws_eks_cluster.main.endpoint
     cluster_ca_certificate = base64decode(data.aws_eks_cluster.main.certificate_authority[0].data)
     token                  = data.aws_eks_cluster_auth.main.token
     config_path = "./kubeconfig_${local.cluster_name}"
  }
  version = "~> 2.0"
}

I am passing the provider to a module using the alias and using helm_release for an Ingress deployment. Is there any workaround to make it work? Thanks.

@aaaaahaaaaa

aaaaahaaaaa commented Jan 8, 2021

Since 2.0.0, I'm also encountering similar issues (tested with v2.0.1 and Terraform v0.14.3).

Fails with:

Error: query: failed to query with labels: secrets is forbidden: User "system:anonymous" cannot list resource "XXX" in API group "" in the namespace "default"

If I downgrade the provider to <2.0.0, my resources are applied successfully.

My provider config looks like this:

provider "helm" {
  kubernetes {
    host                   = google_container_cluster.my_cluster.endpoint
    client_certificate     = base64decode(google_container_cluster.my_cluster.master_auth.0.client_certificate)
    client_key             = base64decode(google_container_cluster.my_cluster.master_auth.0.client_key)
    cluster_ca_certificate = base64decode(google_container_cluster.my_cluster.master_auth.0.cluster_ca_certificate)
  }
}

@redeux
Contributor

redeux commented Jan 8, 2021

Hello everyone, thanks for reporting these issues. I've been unable to reproduce this so far. If someone has a config they can share to help reproduce this issue, it would be very helpful.

@madushan1000

madushan1000 commented Jan 11, 2021

I'm using rancher2, and I can usually use the Rancher URL + token to authenticate to the k8s clusters, but it doesn't work with Terraform. My config is as follows:

» terraform version
Terraform v0.14.4
+ provider registry.terraform.io/hashicorp/helm v2.0.1
+ provider registry.terraform.io/rancher/rancher2 v1.10.3
terraform {
  required_providers {
    rancher2 = {
        source = "rancher/rancher2"
        version = "1.10.3"
    }
    helm = {
      source = "hashicorp/helm"
      version = "2.0.1"
    }
  }
  required_version = ">= 0.13"
}
provider "helm" {
  alias = "alpha"
  kubernetes {
    host = "https://rancher.dev.mycloud.com/k8s/clusters/c-xxxxx"
    token = var.rancher_token
  }
}

Then I use it in a .tf file to pass it to a module like below:

module "helm-releases" {
  source = "../helm-releases"
  environment = "alpha"
  providers = {
    helm = helm.alpha
  }
}

The module instantiates a helm_release resource to create deployments:

resource "helm_release" "grafana" {
  name = "grafana"
  namespace = "monitoring"
  create_namespace = true
  repository = "https://grafana.github.io/helm-charts/"
  chart = "grafana"
  version = "6.1.16"
  timeout = "600"
}

I get this error

Error: provider not configured: you must configure a path to your kubeconfig
or explicitly supply credentials via the provider block or environment variables.

See our authentication documentation at: https://registry.terraform.io/providers/hashicorp/helm/latest/docs#authentication

@madushan1000

@redeux make sure to move your kubeconfig file before testing (mv ~/.kube/config{,.bak})

@alexsomesan
Member

Everyone reporting this, we have a small favor to ask 🥰

We've been making changes to credentials handling in both this provider and the Kubernetes provider. At the same time, Terraform itself has been moving quite fast recently, churning out new releases with significant changes. This really increases the potential for corner cases not being caught in our testing.

I would like to ask all of you who reported seeing this issue to please also test with a build of the Kubernetes provider in the same scenario/environment. So far we're not having much luck reproducing these issues on our side, which suggests they might be caused by particularities in real-life environments that we have not foreseen in our tests.

So please, if you have spare cycles and are seeing issues similar to what's reported here, do us a favor and test with a build from master of the Kubernetes provider as well. We're close to a major release there and want to make sure this kind of issue is not carried over.

Thanks a lot!

@derkoe

derkoe commented Jan 11, 2021

Here is how to reproduce this issue:

  1. Create a Terraform definition with an AKS cluster and a Helm chart deployed to that cluster (here is an example https://gist.github.com/derkoe/bbf4036033a322846edda33c123af092)
  2. Run terraform apply
  3. Change the params of the cluster so that it has to be re-created (in the example change the vm_size)
  4. Run terraform plan (or apply) and you'll get the error:
    Error: provider not configured: you must configure a path to your kubeconfig
    or explicitly supply credentials via the provider block or environment variables.
    

I guess the same also applies to other clusters like GKE or EKS.

@dwaiba

dwaiba commented Jan 12, 2021

Works fine with the following provider.tf for modules:

terraform {
  required_version = ">= 0.14.4"
  required_providers {
    helm = {
      source  = "hashicorp/helm"
      version = ">=2.0.1"
    }
    null = {
      source  = "hashicorp/null"
      version = ">=3.0.0"
    }
    kubernetes-alpha = {
      source  = "hashicorp/kubernetes-alpha"
      version = "0.2.1"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "1.13.3"
    }
  }
}
provider "helm" {
  kubernetes {
    host                   = lookup(var.k8s_cluster, "host")
    client_certificate     = base64decode(lookup(var.k8s_cluster, "client_certificate"))
    client_key             = base64decode(lookup(var.k8s_cluster, "client_key"))
    cluster_ca_certificate = base64decode(lookup(var.k8s_cluster, "cluster_ca_certificate"))

  }
}
provider "kubernetes-alpha" {
  host                   = lookup(var.k8s_cluster, "host")
  client_certificate     = base64decode(lookup(var.k8s_cluster, "client_certificate"))
  client_key             = base64decode(lookup(var.k8s_cluster, "client_key"))
  cluster_ca_certificate = base64decode(lookup(var.k8s_cluster, "cluster_ca_certificate"))
}

provider "kubernetes" {
  host                   = lookup(var.k8s_cluster, "host")
  client_certificate     = base64decode(lookup(var.k8s_cluster, "client_certificate"))
  client_key             = base64decode(lookup(var.k8s_cluster, "client_key"))
  cluster_ca_certificate = base64decode(lookup(var.k8s_cluster, "cluster_ca_certificate"))
}

@dak1n1
Contributor

dak1n1 commented Jan 17, 2021

Here is how to reproduce this issue:

  1. Create a Terraform definition with an AKS cluster and a Helm chart deployed to that cluster (here is an example https://gist.github.com/derkoe/bbf4036033a322846edda33c123af092)
  2. Run terraform apply
  3. Change the params of the cluster so that it has to be re-created (in the example change the vm_size)
  4. Run terraform plan (or apply) and you'll get the error:
    Error: provider not configured: you must configure a path to your kubeconfig
    or explicitly supply credentials via the provider block or environment variables.
    

I guess the same will also apply for other clusters like GKE or EKS.

@derkoe Thanks for the very clear and helpful documentation of this issue! I was able to reproduce this particular setup and found that it's present in both version 1 and version 2 of the Helm and Kubernetes providers. This issue appears to be due to Terraform's lack of support for dependency replacement, which could have helped us mark the Kubernetes/Helm resources as dependent on the cluster and therefore replace them when the cluster was re-created.

Thanks to this very concrete example, I was able to write an example config and guide to help users work around this. It's currently in progress here. Please let me know if this does not solve the problem you encountered.

The above guide will also help anyone who is replacing cluster credentials and passing them into the Kubernetes or Helm providers. The basic idea is that all Kubernetes and Helm resources should be placed into a Terraform module separate from the code that creates the underlying EKS/GKE/AKS cluster. Otherwise you end up passing outdated credentials to the providers, and you get generic provider configuration errors such as:

Error: Kubernetes cluster unreachable: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
Error: Get "http://localhost/api/v1/namespaces/test": dial tcp [::1]:80: connect: connection refused

As well as any errors intended to shield users from the vaguer errors above, such as:

Error: provider not configured: you must configure a path to your kubeconfig or explicitly supply credentials via the provider block or environment variables.

All of these errors mean that there's a problem with the configuration being passed into the Helm or Kubernetes provider. Oftentimes the credentials are just cached and expired. I'm hoping these example guides will provide some tips on avoiding this confusing error scenario in AKS, EKS, and GKE.
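As a rough sketch of that pattern (the variable names and module path below are hypothetical, not the guide's exact code), the Helm resources live in a configuration separate from the one that creates the cluster, and credentials are read fresh from data sources at plan time:

# Hypothetical second configuration, applied after the cluster already exists.
# Credentials come from data sources looked up at plan time, rather than from a
# cluster resource in the same state that may be pending replacement.
variable "cluster_name" {
  type = string
}

data "aws_eks_cluster" "this" {
  name = var.cluster_name
}

data "aws_eks_cluster_auth" "this" {
  name = var.cluster_name
}

provider "helm" {
  kubernetes {
    host                   = data.aws_eks_cluster.this.endpoint
    cluster_ca_certificate = base64decode(data.aws_eks_cluster.this.certificate_authority[0].data)
    token                  = data.aws_eks_cluster_auth.this.token
  }
}

module "workloads" {
  source       = "./modules/helm-releases" # hypothetical module containing the helm_release resources
  cluster_name = var.cluster_name
}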

@derkoe

derkoe commented Jan 19, 2021

Another workaround is to make the changes outside Terraform and then re-import the state. This is also necessary when you want a zero-downtime change of the Kubernetes cluster.

This is a good description of how to achieve this: Zero downtime migration of Azure Kubernetes clusters managed by Terraform.

@jrhouston
Contributor

We have cut another release that removes the strict check on the kubernetes block that produced this error. Please report back if you are still seeing an error after upgrading to v2.0.2.

@jrhouston
Contributor

We are going to close this as there hasn't been any additional activity on this issue since the latest release. Please open a new issue with as much detail as you can if you are still experiencing problems with v2 of the provider.

@project-administrator

project-administrator commented Feb 16, 2021

We have a similar issue on v2.0.2:
Error: query: failed to query with labels: secrets is forbidden: User "system:anonymous" cannot list resource "secrets" in API group "" in the namespace "test-monitoring"
terraform plan fails with the error above.
Downgrading the helm provider version to 1.3.2 resolves the issue for us.
Both kubernetes and helm providers are initialized:

provider "kubernetes" {
  host                   = aws_eks_cluster.main.endpoint
  token                  = data.aws_eks_cluster_auth.main.token
  cluster_ca_certificate = base64decode(aws_eks_cluster.main.certificate_authority.0.data)
  load_config_file       = false
}

provider "helm" {
  kubernetes {
    host                   = aws_eks_cluster.main.endpoint
    token                  = data.aws_eks_cluster_auth.main.token
    cluster_ca_certificate = base64decode(aws_eks_cluster.main.certificate_authority.0.data)
  }
}

It seems that the helm provider does not pick up the kubernetes authentication block properly.

@ghost

ghost commented Mar 4, 2021

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!

@ghost ghost locked as resolved and limited conversation to collaborators Mar 4, 2021