
Configuring provider using another resource's outputs does not work on second run on Terraform v0.14.x #652

Closed
mcanevet opened this issue Jan 7, 2021 · 6 comments

@mcanevet

mcanevet commented Jan 7, 2021

Terraform, Provider, Kubernetes and Helm Versions

Terraform version: 0.14.4
Provider version: 2.0.1
Kubernetes version: 1.18.13
Helm version: 3.4.2

Affected Resource(s)

  • helm_release

Terraform Configuration Files

Provider configuration:

provider "helm" {
  kubernetes {
    insecure = true
    host     = local.kubernetes_host
    username = local.kubernetes_username
    password = local.kubernetes_password
  }
}
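
For context, the locals above presumably reference outputs from another resource or module created in the same configuration. A hypothetical sketch of what that wiring might look like (the module name and output names are illustrative, not taken from the original report):

```hcl
# Hypothetical: the provider credentials come from outputs of a
# cluster module defined elsewhere in the same configuration.
locals {
  kubernetes_host     = module.cluster.kubernetes_host
  kubernetes_username = module.cluster.kubernetes_username
  kubernetes_password = module.cluster.kubernetes_password
}
```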

Expected Behavior

Subsequent runs of terraform apply should succeed.

Actual Behavior

First apply completes:

Apply complete! Resources: 26 added, 0 changed, 0 destroyed.

But subsequent applies fail when refreshing the helm_release resource:

...
module.cluster.module.argocd.helm_release.argocd: Refreshing state... [id=argocd]

Error: provider not configured: you must configure a path to your kubeconfig
or explicitly supply credentials via the provider block or environment variables.

See our authentication documentation at: https://registry.terraform.io/providers/hashicorp/helm/latest/docs#authentication

Important Factoids

It works well with Terraform v0.13.x.
The Helm provider is configured using another resource's outputs, but somehow it tries to read the kubeconfig.

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment
@redeux
Contributor

redeux commented Jan 7, 2021

Thanks for opening this issue @mcanevet. Do you mind sharing your debug log as a gist?

@jrhouston
Contributor

@mcanevet Is this still an issue for you in v2.0.2 of the provider?

@mcanevet
Author

mcanevet commented Feb 5, 2021

@redeux @jrhouston I still have the issue with Terraform 0.14.6 and provider 2.0.2. Note that I also had the issue with Terraform v0.14.x and provider v1.x, so it's probably related to a change in Terraform behavior.
See this gist: https://gist.github.com/mcanevet/898d6093e06df6528ec3a8b7b7af02dd
The first run works fine, but the second run fails with: Error: Kubernetes cluster unreachable: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable

@ghost ghost removed the waiting-response label Feb 5, 2021
@gabeio

gabeio commented Feb 10, 2021

I had an extremely similar issue to this one, with exactly the same two errors. After a lot of unnecessary debugging (had I realized the problem sooner 🤦), I found I had a misplaced module (containing a helm_release) that was apparently running before its dependencies, in this case the module that provided the Kubernetes variables. I had refactored some of my Terraform code and forgotten to move the helm release to the correct location. I'm not sure this is the problem in this case, but I thought I would share in hopes it helps others.

tip: you can look through your Terraform state (terraform state pull) and check whether a helm release has incorrect dependencies recorded.
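
The tip above can be sketched as a one-liner, assuming jq is available; the .resources[].instances[].dependencies structure matches the JSON state format used by Terraform 0.13 and later:

```shell
# Pull the current state and list the recorded dependencies
# of every helm_release resource.
terraform state pull | jq '.resources[]
  | select(.type == "helm_release")
  | {name: .name, dependencies: [.instances[].dependencies]}'
```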

Hope this helps 🌮

@mcanevet
Author

OK, I found my issue. I was running terraform init -upgrade before the second run, and since I was storing my kubeconfig file inside the .terraform directory, it was purged. It looks like Terraform 0.13 does not purge the .terraform directory when running terraform init -upgrade, but 0.14 does.
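
For anyone hitting the same thing: the fix is simply to write the kubeconfig somewhere terraform init -upgrade will not purge. A hedged sketch using the local_file resource from the hashicorp/local provider (the module output and filename are illustrative):

```hcl
# Write the generated kubeconfig outside .terraform/ so that
# "terraform init -upgrade" cannot purge it between runs.
resource "local_file" "kubeconfig" {
  content  = module.cluster.kubeconfig   # hypothetical module output
  filename = "${path.root}/kubeconfig.yaml"
}
```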

@ghost

ghost commented Mar 19, 2021

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!

@ghost ghost locked as resolved and limited conversation to collaborators Mar 19, 2021