dial tcp [::1]:80: connect: connection refused #2501

Closed · bailey-oa opened this issue Mar 2, 2023 · 6 comments

bailey-oa commented Mar 2, 2023

  • Terraform version: Terraform Cloud [1.3.9]
  • Provider version(s):

    aws = {
      source  = "hashicorp/aws"
      version = ">= 4.28.0"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "2.10.0"
    }
    helm = {
      source  = "hashicorp/helm"
      version = "2.5.1"
    }

Reproduction Code [Required]

resource "kubernetes_secret" "tailscale" {
  count = 1
  metadata {
    name = "tailscale-subnet-router-secrets"
  }
  data = {
    "AUTH_KEY" = jsondecode(data.aws_secretsmanager_secret_version.oak_external.secret_string)["TAILSCALE_AUTH_KEY"]
  }
}

locals {
  tailscale_sets = {
    "fullnameOverride" = "ts-subnet-router"
    "image.repository" = "tailscale/tailscale"
    "image.tag"        = "v1.32"
  }
}

resource "helm_release" "tailscale" {
  count         = 1
  name          = "tailscale"
  repository    = "https://gtaylor.github.io/helm-charts"
  chart         = "tailscale-subnet-router"
  version       = "1.1.1"
  timeout       = 300
  atomic        = true
  recreate_pods = true

  dynamic "set" {
    for_each = local.tailscale_sets

    content {
      name  = set.key
      value = set.value
    }
  }

  dynamic "set" {
    for_each = zipmap(range(length(var.tailscale_subnets)), var.tailscale_subnets)

    content {
      name  = "tailscale.routes[${set.key}]"
      value = set.value
    }
  }
}
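
For illustration (with hypothetical example values), if var.tailscale_subnets = ["10.4.0.0/16", "10.5.0.0/16"], the second dynamic block expands to the equivalent of:

set {
  # the "0"/"1" keys come from zipmap(range(length(...)), ...); values are the subnet CIDRs
  name  = "tailscale.routes[0]"
  value = "10.4.0.0/16"
}
set {
  name  = "tailscale.routes[1]"
  value = "10.5.0.0/16"
}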

Steps to reproduce the behavior:

Running the deployment on Terraform Cloud.
The run instantly fails on modules/helm/helm-tailscale.tf line 1, in resource "kubernetes_secret" "tailscale", with the error:
Error: Post "http://localhost/api/v1/namespaces/default/secrets": dial tcp [::1]:80: connect: connection refused

It then fails on modules/helm/helm-tailscale.tf line 19, in resource "helm_release" "tailscale", with the error:
Error: release tailscale failed, and has been uninstalled due to atomic being set: timed out waiting for the condition

Expected behavior

Adds the Helm Chart to my EKS cluster

Actual behavior

Errors

Additional context

This is my EKS Cluster main.tf:
data "aws_caller_identity" "current" {}

provider "kubernetes" {
  host                   = module.eks.cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)

  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    args        = ["eks", "get-token", "--cluster-name", module.eks.cluster_name]
    command     = "aws"
  }
}
    
locals {
  name   = "dev"
}

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "18.31.2"

  cluster_name                    = local.name
  cluster_version                 = "1.25"
  cluster_endpoint_private_access = true
  cluster_endpoint_public_access  = true

  vpc_id     = var.vpc
  subnet_ids = var.public_subnets

  cloudwatch_log_group_retention_in_days = 30

  manage_aws_auth_configmap = true

  aws_auth_roles = [
    {
      rolearn  = "arn:aws:iam::${data.aws_caller_identity.current.account_id}:role/OrganizationAccountAccessRole"
      username = "OrganizationAccountAccessRole"
      groups   = ["system:masters"]
    }
  ]

  node_security_group_additional_rules = {
    ingress_self_all = {
      description = "Node to node all ports/protocols"
      protocol    = "-1"
      from_port   = 0
      to_port     = 0
      type        = "ingress"
      self        = true
    }
    egress_all = {
      description = "Node all egress to self"
      protocol    = "-1"
      from_port   = 0
      to_port     = 0
      type        = "egress"
      self        = true
    }
  }

  eks_managed_node_groups = {
    "${local.name}-node" = {
      min_size     = 1
      max_size     = 10
      desired_size = 1

      instance_types = ["t3a.large"]
      capacity_type  = "ON_DEMAND"
    }

    "jobs-node" = {
      min_size     = 0
      max_size     = 10
      desired_size = 0

      instance_types = ["t3a.large"]
      capacity_type  = "ON_DEMAND"

      labels = {
        jobs_node = "true"
      }
      taints = [
        {
          key    = "jobs"
          value  = "true"
          effect = "NO_SCHEDULE"
        }
      ]
    }
  }

  eks_managed_node_group_defaults = {
    ami_type       = "BOTTLEROCKET_x86_64"
    platform       = "bottlerocket"
    instance_types = ["t3a.large"]
    subnet_ids     = [var.private_subnets[0]] # pin to us-east-1a
    metadata_options = {
      http_endpoint = "enabled"
      http_tokens   = "required"
    }
  }

  cluster_addons = {
    aws-ebs-csi-driver = {
      resolve_conflicts        = "OVERWRITE"
      service_account_role_arn = module.iam_eks_aws-ebs-csi-driver.iam_role_arn
    }
  }
}

module "iam_eks_aws-ebs-csi-driver" {
  source  = "terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts-eks"

  attach_ebs_csi_policy = true
  role_name_prefix      = local.name

  oidc_providers = {
    main = {
      provider_arn               = module.eks.oidc_provider_arn
      namespace_service_accounts = ["kube-system:ebs-csi-controller-sa"]
    }
  }
}
@chulkilee

Probably related to hashicorp/terraform-provider-kubernetes#1028

@chriskinsman

Seeing the same issue. It is trying to hit localhost instead of the cluster for some reason...

@streamnsight

streamnsight commented Mar 14, 2023

Try using a data source to get the kubeconfig, rather than the module output.
In my experience, with Terraform 1.0 the data source was not refreshed until after the provider was configured, which caused this issue: the config was not found and the provider defaulted to dialing localhost.
In Terraform 1.2 with Kubernetes provider 2.18.1, the data source is refreshed before the provider is configured, and it works fine.

Now, in the case above you're not using a data source but the module output directly, and that may not be refreshed until after the provider is configured, hence you still hit the issue.

Instead, use the pattern set in the example:
https://github.com/hashicorp/terraform-provider-kubernetes/blob/main/_examples/eks/kubernetes-config/main.tf

data "aws_eks_cluster" "default" {
  name = var.cluster_name
}

data "aws_eks_cluster_auth" "default" {
  name = var.cluster_name
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.default.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.default.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.default.token
}

That should resolve the problem, since the data source is now refreshed before the provider is configured.
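
The same pattern applies to the Helm provider, since the helm_release above also needs cluster credentials (otherwise it too falls back to dialing localhost). A minimal sketch reusing the same data sources, assuming the same var.cluster_name:

provider "helm" {
  # the Helm provider takes its cluster connection in a nested kubernetes block
  kubernetes {
    host                   = data.aws_eks_cluster.default.endpoint
    cluster_ca_certificate = base64decode(data.aws_eks_cluster.default.certificate_authority[0].data)
    token                  = data.aws_eks_cluster_auth.default.token
  }
}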

@github-actions

This issue has been automatically marked as stale because it has been open for 30 days
with no activity. Remove the stale label or comment, or this issue will be closed in 10 days.

github-actions bot added the stale label Apr 14, 2023
@github-actions

This issue was automatically closed because it remained stale for 10 days.

github-actions bot closed this as not planned Apr 24, 2023
@github-actions

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

github-actions bot locked as resolved and limited conversation to collaborators May 24, 2023