
dial tcp 127.0.0.1:80: connect: connection refused error during terraform plan #1635

Closed
poojaac opened this issue Mar 11, 2022 · 6 comments

@poojaac commented Mar 11, 2022

Description

We are upgrading the EKS module from 17.x to 18.x. Since we create Kubernetes resources such as secrets and configmaps in the same state as the cluster, we have configured the kubernetes provider block. However, the kubernetes provider is unable to talk to the cluster, and terraform plan fails with the error: Error: Get "http://localhost/api/v1/namespaces/kube-system/secrets/git-credentials": dial tcp 127.0.0.1:80: connect: connection refused

Terraform Version, Provider Version and Kubernetes Version

Terraform version: 1.0.0
Kubernetes provider version: >=2.1.0
Kubernetes version: 1.21

Affected Resource(s)

kubernetes_secret

Terraform Configuration Files

Gist: https://gist.github.com/poojaac/16da64eb70f897a7f042e986395c1068
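
For reference, the usual provider wiring for this module version looks roughly like the sketch below. This is not taken from the gist; "module.eks" and its outputs are the documented terraform-aws-eks 18.x names.

# Sketch only: output names (cluster_endpoint,
# cluster_certificate_authority_data, cluster_id) follow the
# terraform-aws-eks 18.x documentation.
provider "kubernetes" {
  host                   = module.eks.cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)

  # Fetch short-lived credentials at plan/apply time rather than
  # storing a static token in state.
  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    args        = ["eks", "get-token", "--cluster-name", module.eks.cluster_id]
  }
}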

Debug Output

Gist: https://gist.github.com/poojaac/a4587de0f5a11bd18933fa794edad5c7

Steps to Reproduce

terraform plan

Expected Behavior

The provider should connect to the cluster successfully, and terraform plan should complete without errors.

Actual Behavior

The provider is not using the given configuration to connect to the k8s cluster; it falls back to localhost when creating Kubernetes resources.

@rescribet

We're seeing this too on Terraform Cloud. We weren't up to date, and bumping dependencies seemed to help, but today the problem reappeared. Neither platform provider logged any outages or interruptions on its status page.

Last week: Terraform 0.15, kubernetes provider 2.0.3, digitalocean provider 2.6.0, helm 2.0.3
Today: Terraform 1.1.7, kubernetes provider 2.8.0, digitalocean provider 2.17.1, helm 2.4.1

@bradenmacdonald

I have a similar setup and was running into a similar issue. I was able to diagnose it by running terraform plan -target and targeting my kubernetes cluster data source. Then I saw this:

-/+ resource "digitalocean_kubernetes_cluster" "main" {
      ...
      ~ version        = "1.21.10-do.0" -> "1.21.5-do.0" # forces replacement

So what happened is:

  • DigitalOcean automatically upgraded the cluster to a newer point release of Kubernetes
  • My Terraform code still specifies the older Kubernetes version for the cluster. As a result, Terraform thinks that the whole cluster is invalid and needs to be replaced (downgraded).
  • The data source won't load any data from the current cluster because Terraform thinks it's going to be replaced, so no configuration gets passed into the Kubernetes provider.
  • The kubernetes provider silently ignores the empty configuration coming from the data source and reverts to its default config, trying to contact the k8s API at localhost, which of course fails (see the sketch after this list).
    • This seems like a bug in the kubernetes provider or how Terraform passes data around - it silently ignored the issue instead of surfacing it.
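
For context, the wiring in question looks roughly like this. A sketch only: resource names are illustrative, and the attribute layout follows the digitalocean provider docs.

# When Terraform plans to replace the cluster, these attributes become
# unknown, and the kubernetes provider silently falls back to its
# default of localhost:80.
data "digitalocean_kubernetes_cluster" "main" {
  name = digitalocean_kubernetes_cluster.main.name
}

provider "kubernetes" {
  host  = data.digitalocean_kubernetes_cluster.main.endpoint
  token = data.digitalocean_kubernetes_cluster.main.kube_config[0].token
  cluster_ca_certificate = base64decode(
    data.digitalocean_kubernetes_cluster.main.kube_config[0].cluster_ca_certificate
  )
}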

The simple solutions are to update the kubernetes version specified in your cluster resource to match what the cluster is actually running, and/or to use lifecycle { ignore_changes = [version] } to prevent this from happening in the future.
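
Something like the following (a sketch; the version string should match whatever the cluster is actually running now):

resource "digitalocean_kubernetes_cluster" "main" {
  # ... other required arguments (name, region, node_pool) elided ...
  version = "1.21.10-do.0" # updated to match the auto-upgraded cluster

  lifecycle {
    # Ignore DigitalOcean's automatic point-release upgrades so they
    # don't force cluster replacement on the next plan.
    ignore_changes = [version]
  }
}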

@rescribet

Can confirm your fix is working, @bradenmacdonald. Very nice.

@simwak commented Mar 22, 2022

We are seeing the same, but on the refresh before anything changes. The endpoints are correct in the state, but it's just using localhost for whatever reason, so the workaround doesn't help in this case.

@alexsomesan (Member) commented

@poojaac The versions of Terraform you are using are quite old. I tried to reproduce the issue using the latest versions and could not get it to fail. It works as expected:

➤ terraform version
Terraform v1.1.7
on darwin_arm64
+ provider registry.terraform.io/hashicorp/aws v4.6.0
+ provider registry.terraform.io/hashicorp/cloudinit v2.2.0
+ provider registry.terraform.io/hashicorp/kubernetes v2.9.0
+ provider registry.terraform.io/hashicorp/tls v3.1.0

@github-actions

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

github-actions bot locked this issue as resolved and limited conversation to collaborators Apr 24, 2022