This repository has been archived by the owner on Mar 29, 2023. It is now read-only.

Nodes created in private cluster cannot access the internet or docker hub #94

Closed
btomasini opened this issue Jun 6, 2020 · 8 comments


@btomasini

I am looking to create a basic GKE cluster behind a VPN using your module. However, the nodes it creates are not able to access Docker Hub. Can you provide guidance on how to create a cluster that is privately accessible but can still download images from Docker Hub? Any suggestions would be appreciated.

Perhaps I misunderstand the usage of "private nodes".

module "network" {
  source               = "github.com/gruntwork-io/terraform-google-network.git//modules/vpc-network?ref=v0.4.0"
  name_prefix          = "management"
  project              = var.project
  region               = var.region
  cidr_block           = "10.100.0.0/16"
  secondary_cidr_block = "10.101.0.0/16"
}


module "gke_cluster" {

  source = "github.com/gruntwork-io/terraform-google-gke.git//modules/gke-cluster?ref=v0.4.3"

  name = "management"

  project  = var.project
  location = var.region
  network  = module.network.network
  subnetwork = module.network.public_subnetwork

  master_ipv4_cidr_block = "10.102.0.0/28"

  enable_private_nodes = true

  # Disable the public endpoint so the cluster master is reachable only from
  # within the network boundary, e.g. via a bastion host or VPN.
  disable_public_endpoint = true

  # With a private cluster, it is highly recommended to restrict access to the
  # cluster master. Here we allow only the internal subnets and the home VPN.
  master_authorized_networks_config = [
    {
      cidr_blocks = [
        {
          cidr_block   = "10.100.0.0/16"
          display_name = "management 1"
        },
        {
          cidr_block   = "10.101.0.0/16"
          display_name = "management 2"
        },
        {
          cidr_block   = "10.11.0.0/24"
          display_name = "home vpn"
        },
      ]
    },
  ]

  cluster_secondary_range_name = module.network.public_subnetwork_secondary_range_name

  enable_vertical_pod_autoscaling = true
}

# ---------------------------------------------------------------------------------------------------------------------
# CREATE A NODE POOL
# ---------------------------------------------------------------------------------------------------------------------

resource "google_container_node_pool" "node_pool" {
  provider = google-beta

  name     = "pool"
  project  = var.project
  location = var.region
  cluster  = module.gke_cluster.name

  initial_node_count = 1

  autoscaling {
    min_node_count = 1
    max_node_count = 5
  }

  management {
    auto_repair  = true
    auto_upgrade = true
  }

  node_config {
    image_type   = "COS"
    machine_type = "n1-standard-1"

    labels = {
      private-pools-example = "true"
    }

    tags = [
      module.network.public,
    ]

    disk_size_gb = 30
    disk_type    = "pd-standard"
    preemptible  = false

    service_account = module.gke_service_account.email

    oauth_scopes = [
      "https://www.googleapis.com/auth/cloud-platform",
    ]
  }

  lifecycle {
    ignore_changes = [initial_node_count]
  }

  timeouts {
    create = "30m"
    update = "30m"
    delete = "30m"
  }
}

# ---------------------------------------------------------------------------------------------------------------------
# CREATE A CUSTOM SERVICE ACCOUNT TO USE WITH THE GKE CLUSTER
# ---------------------------------------------------------------------------------------------------------------------

module "gke_service_account" {
  source = "github.com/gruntwork-io/terraform-google-gke.git//modules/gke-service-account?ref=v0.4.3"

  name        = "management-gke"
  project     = var.project
  description = "Management GKE"
}

@btomasini
Author

I tried this configuration with the node pool tags set to both module.network.public and module.network.private, with the same result.

@silazare

silazare commented Jul 6, 2020

@btomasini I have the same issue. Per my understanding, this is because NAT is enabled only for the private subnet in the vpc-network module. This behaviour was changed in this PR.
So after GKE cluster creation we have NAT mapped to the private subnet, but the nodes are created in the public subnet (which is in fact private for a private GKE cluster), and they cannot reach the internet without NAT.

My quick workaround was to add the public subnet to the NAT mapping. I am not sure whether that is best practice for a private GKE cluster; otherwise, either use a mirror or use GCR for all images.
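The workaround above can be sketched in Terraform as a Cloud NAT that explicitly lists the "public" subnetwork hosting the GKE nodes. The resource and router names here are hypothetical, and the `module.network.*` outputs are assumed to match the vpc-network module used in this issue; adjust to your setup.

```hcl
# Sketch of the workaround: include the "public" subnetwork (where the GKE
# nodes live) in the Cloud NAT mapping so nodes can pull images from Docker Hub.
resource "google_compute_router_nat" "nat" {
  name   = "management-nat"       # hypothetical name
  router = "management-router"    # hypothetical; the router created for the VPC
  region = var.region
  project = var.project

  nat_ip_allocate_option = "AUTO_ONLY"

  # Instead of NAT-ing only the private subnet, list subnetworks explicitly
  # and include the public one.
  source_subnetwork_ip_ranges_to_nat = "LIST_OF_SUBNETWORKS"

  subnetwork {
    name                    = module.network.public_subnetwork
    source_ip_ranges_to_nat = ["ALL_IP_RANGES"]
  }
}
```

Note that private GKE nodes have no external IPs, so without a NAT mapping covering their subnetwork they can only reach Google services (via Private Google Access), not the public internet.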

@Jojoooo1

Jojoooo1 commented Aug 4, 2020

It would be really helpful to have a clarification in the documentation.

@snyman

snyman commented Aug 4, 2020

The vpc-network module documentation is already quite clear: it states that the public subnet is supposed to have NAT for outbound access, and the private subnet is supposed to be restricted to internal access (plus Google's services). This arrangement makes sense to me, so PR#53 seems to be in error here.

@Jojoooo1

Jojoooo1 commented Aug 4, 2020

Confirmed by the comment in the gke-private-example

# We're deploying the cluster in the 'public' subnetwork to allow outbound internet access

@Jojoooo1

reverted in release 0.5.0
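Assuming the revert landed in v0.5.0 of the terraform-google-network module (as the comment above suggests), picking it up should just be a matter of bumping the module ref; the inputs are unchanged from the configuration earlier in this thread:

```hcl
# Sketch: bump the vpc-network module to the release containing the revert,
# so the public subnetwork is NAT-ed again and nodes regain outbound access.
module "network" {
  source               = "github.com/gruntwork-io/terraform-google-network.git//modules/vpc-network?ref=v0.5.0"
  name_prefix          = "management"
  project              = var.project
  region               = var.region
  cidr_block           = "10.100.0.0/16"
  secondary_cidr_block = "10.101.0.0/16"
}
```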

@Etiene
Contributor

Etiene commented Jan 20, 2022

Apologies for the delay in responding to this issue. Please see below:

Sunset notice

We believe there is an opportunity to create a truly outstanding developer experience for deploying to the cloud; however, developing this vision requires that we temporarily limit our focus to just one cloud. Gruntwork has hundreds of customers currently using AWS, so we have temporarily suspended our maintenance efforts on this repo. Once we have implemented and validated our vision for the developer experience on the cloud, we look forward to picking this up. In the meantime, you are welcome to use this code in accordance with the open source license; however, we will not be responding to GitHub Issues or Pull Requests.

If you wish to be the maintainer for this project, we are open to considering that. Please contact us at [email protected].

@eak12913
Contributor

Closing due to repo sunset
