
helm_repository does not refresh on apply #335

Closed
iggy opened this issue Sep 11, 2019 · 8 comments · Fixed by #466

Comments

iggy commented Sep 11, 2019

Brief description

The provider doesn't work well in CI environments where each step runs in a completely clean environment (e.g. Terraform Cloud/Enterprise). It expects local state written during the plan stage (the Helm home directory's repositories.yaml) to still be present at apply time when using a plan-to-file -> apply-plan-file workflow.

Terraform Version

terraform version
Terraform v0.12.8

  • provider.aws v2.27.0
  • provider.helm v0.10.2
  • provider.kubernetes v1.9.0
  • provider.local v1.3.0
  • provider.null v2.1.2
  • provider.template v2.1.2

Affected Resource(s)

  • helm_repository

Terraform Configuration Files

data "helm_repository" "incubator" {
  name     = "incubator"
  url      = "https://kubernetes-charts-incubator.storage.googleapis.com"
  username = "none"
}
resource "helm_release" "aws_alb_ingress_controller" {
  provider   = helm.module
  name       = "alb-ingress-controller"
  repository = "${data.helm_repository.incubator.metadata.0.name}"
  chart      = "aws-alb-ingress-controller"
  namespace  = "kube-system"
  version    = "0.1.10"
  depends_on = ["module.mod_eks"]

  set {
    name  = "clusterName"
    value = var.cluster_name
  }

  set {
    name  = "autoDiscoverAwsRegion"
    value = "true"
  }

  set {
    name  = "autoDiscoverAwsVpcID"
    value = "true"
  }
}

Debug Output

I didn't run terraform in debug because I doubt it would help with the current problem.

I did gather the state of the system at apply time. Below is the ~/.helm/repository/repositories.yaml file. As you can see, it's missing the incubator repo.

apiVersion: v1
generated: "2019-09-11T02:16:19.961453812Z"
repositories:
- caFile: ""
  cache: /home/terraform/.helm/repository/cache/stable-index.yaml
  certFile: ""
  keyFile: ""
  name: stable
  password: ""
  url: https://kubernetes-charts.storage.googleapis.com
  username: ""
- caFile: ""
  cache: /home/terraform/.helm/repository/cache/local-index.yaml
  certFile: ""
  keyFile: ""
  name: local
  password: ""
  url: http://127.0.0.1:8879/charts
  username: ""

Expected Behavior

The provider should have updated the repositories.yaml file during the apply phase.

Actual Behavior

Error: repo incubator not found

and indeed the repo does not exist in the repositories.yaml file

Steps to Reproduce

The easiest way I can think of to reproduce this would be to run terraform plan -out=/path/to/plan.out in one Docker container, then run terraform apply /path/to/plan.out in a separate container that has access to the same plan file.
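The split-container reproduction could be sketched like this (the image tag matches the reported Terraform version; the volume mount and paths are illustrative assumptions, not from the report):

```shell
# Plan in one ephemeral container, persisting only the plan file to the host
docker run --rm -v "$PWD":/workspace -w /workspace hashicorp/terraform:0.12.8 \
  plan -out=plan.out

# Apply in a fresh container: the plan file survives via the mount, but the
# Helm home directory (~/.helm/repository/repositories.yaml) written during
# plan does not, so the apply fails with "repo incubator not found"
docker run --rm -v "$PWD":/workspace -w /workspace hashicorp/terraform:0.12.8 \
  apply plan.out
```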

Important Factoids

This is all currently running in Terraform Cloud (the hosted Enterprise product). When it ran locally on my laptop, it was fine.

@Aaron-ML

Seeing this as well on Terraform Cloud.

@NickCarton

I've been seeing this everywhere on Terraform Cloud.
It's becoming a major blocker for our infrastructure upgrades across all environments, including production.

We're also seeing occasional "error: Unauthorized" messages. I believe this is an overall issue with the design of this provider.

alanbrent commented Oct 10, 2019

I ran into this as well, and this is a working fix in my environment. Our helm provider was configured with a relative path for home, which appears to have been broken by Terraform 0.12.x. Switching to an absolute path resolved it.
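For illustration, a hedged sketch of that change (the paths shown are examples, not the reporter's actual values; the `home` argument existed on the Helm 2-era versions of the provider):

```hcl
# Before: a relative home path, which broke under Terraform 0.12.x
# when plan and apply run from different working environments
# provider "helm" {
#   home = ".helm"
# }

# After: an absolute path resolved the missing-repository error
provider "helm" {
  home = "/home/terraform/.helm"
}
```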


eskp commented Oct 29, 2019

Using the repository URL directly in the repository parameter should work, instead of going through the data source.
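Applied to the configuration from the original report, that workaround would look roughly like this sketch: the helm_repository data source is dropped entirely and the URL is inlined (set blocks omitted for brevity):

```hcl
resource "helm_release" "aws_alb_ingress_controller" {
  name       = "alb-ingress-controller"
  # Repository URL inlined, so no local repositories.yaml entry is needed
  repository = "https://kubernetes-charts-incubator.storage.googleapis.com"
  chart      = "aws-alb-ingress-controller"
  namespace  = "kube-system"
  version    = "0.1.10"
}
```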

mgrecar commented Feb 6, 2020

I've found that the same problem still occurs on v1.0.0 of the provider, and I had to resort to using the repo URL instead of the data source.

dbirks commented Feb 24, 2020

With v1.0.0 of the provider (and Helm v3, which no longer includes the stable repo by default), I was able to add this data source and deploy without issue:

data "helm_repository" "stable" {
  name = "stable"
  url  = "https://kubernetes-charts.storage.googleapis.com"
}

@jrhouston (Contributor)
I think this is the same issue I have discussed here: #416 (comment)

ghost commented May 19, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!

@ghost ghost locked and limited conversation to collaborators May 19, 2020