
Cloud Run VPC Connector does not associate with Cloud Run instance #735

Closed
sethmoon opened this issue Jul 13, 2022 · 3 comments

Labels
bug Something isn't working

Comments

@sethmoon
Contributor
When using the cloud-run template, the vpc_connector does not get associated with the Cloud Run service in GCP. This appears to be caused by the annotations being set on google_cloud_run_service.metadata rather than on google_cloud_run_service.template.metadata, as documented in the GCP Cloud Run documentation.

Example from Google Documentation:

# Cloud Run service
resource "google_cloud_run_service" "gcr_service" {
  name     = "mygcrservice"
  provider = google-beta
  location = "us-west1"

  template {
    spec {
      containers {
        image = "us-docker.pkg.dev/cloudrun/container/hello"
        resources {
          limits = {
            cpu = "1000m"
            memory = "512M"
          }
        }
      }
      # the service uses this SA to call other Google Cloud APIs
      # service_account_name = myservice_runtime_sa
    }

    metadata {
      annotations = {
        # Limit scale up to prevent any cost blow outs!
        "autoscaling.knative.dev/maxScale" = "5"
        # Use the VPC Connector
        "run.googleapis.com/vpc-access-connector" = google_vpc_access_connector.connector.name
        # all egress from the service should go through the VPC Connector
        "run.googleapis.com/vpc-access-egress" = "all-traffic"
      }
    }
  }
  autogenerate_revision_name = true
}
@sethmoon
Contributor Author

With the following Terraform configuration, the deployed service produces the YAML below.

module "cloud_run_service" {
  source = "github.com/GoogleCloudPlatform/cloud-foundation-fabric//modules/cloud-run"

  project_id      = var.project
  name            = "${var.service_name}-tftest"
  region          = var.region
  service_account = module.base.runner_sa.email
  vpc_connector = {
    create          = false
    name            = "default-connector"
    egress_settings = "private-ranges-only"
  }
  volumes = var.cloud_run_volumes

  containers = [
    {
      image = "${var.image.region}-docker.pkg.dev/${var.project}/${var.image.repo_name}/${var.image.image_name}:${var.image.image_tag}"
      options = {
        args     = null
        command  = null
        env      = var.cloud_run_env
        env_from = var.cloud_run_env_from
      }
      ports = [
        {
          name           = "h2c"
          protocol       = "TCP"
          container_port = "8080"
        }
      ]
      resources        = null
      ingress_settings = "all"
    }
  ]
}
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: service-tftest
  namespace: '000000000000'
  selfLink: >-
    /apis/serving.knative.dev/v1/namespaces/000000000000/services/service-tftest
  uid: fa91fb03-2533-4179-9bd4-97d25aaf3fdd
  resourceVersion: XXXXXXXXXXX
  generation: 1
  creationTimestamp: '2022-07-13T16:51:10.437610Z'
  labels:
    cloud.googleapis.com/location: us-central1
  annotations:
    serving.knative.dev/creator: >-
      [email protected]
    serving.knative.dev/lastModifier: >-
      [email protected]
    run.googleapis.com/vpc-access-egress: private-ranges-only
    run.googleapis.com/vpc-access-connector: default-connector
    run.googleapis.com/ingress: all
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/maxScale: '100'
    spec:
      containerConcurrency: 80
      timeoutSeconds: 300
      serviceAccountName: >-
        [email protected]
      containers:
      - image: >-
          us-central1-docker.pkg.dev/myproject/repo/image:latest
        ports:
        - name: h2c
          containerPort: 8080
        resources:
          limits:
            memory: 512Mi
            cpu: 1000m
  traffic:
  - percent: 100
    latestRevision: true

However, if I edit the YAML to move the vpc-access-connector and vpc-access-egress annotations into the template metadata, the service now uses the VPC connector.

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: service-tftest
  namespace: '000000000000'
  selfLink: >-
    /apis/serving.knative.dev/v1/namespaces/000000000000/services/service-tftest
  uid: fa91fb03-2533-4179-9bd4-97d25aaf3fdd
  resourceVersion: XXXXXXXXXXX
  generation: 2
  creationTimestamp: '2022-07-13T16:51:10.437610Z'
  labels:
    cloud.googleapis.com/location: us-central1
  annotations:
    serving.knative.dev/creator: >-
      [email protected]
    serving.knative.dev/lastModifier: >-
      [email protected]
    run.googleapis.com/ingress: all
    run.googleapis.com/ingress-status: all
spec:
  template:
    metadata:
      annotations:
        run.googleapis.com/vpc-access-egress: private-ranges-only
        autoscaling.knative.dev/maxScale: '100'
        run.googleapis.com/vpc-access-connector: default-connector
    spec:
      containerConcurrency: 80
      timeoutSeconds: 300
      serviceAccountName: >-
        [email protected]
      containers:
      - image: >-
          us-central1-docker.pkg.dev/myproject/repo/image:latest
        ports:
        - name: h2c
          containerPort: 8080
        resources:
          limits:
            cpu: 1000m
            memory: 512Mi
  traffic:
  - percent: 100
    latestRevision: true

@ludoo ludoo added the bug Something isn't working label Jul 13, 2022
@ludoo
Collaborator

ludoo commented Jul 13, 2022

Thank you Seth, care to send a PR? Otherwise we'll get to it ourselves.

@sethmoon
Contributor Author

I have never opened a PR before, but moving the annotations into the template metadata has resolved the issue on my end.
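
For reference, the kind of change involved can be sketched as follows. This is a minimal sketch, not the module's actual source: it assumes the module builds a google_cloud_run_service resource and currently emits the VPC annotations in the service-level metadata block, and all resource and variable names here are illustrative. The fix is to emit the VPC annotations under template.metadata instead:

```hcl
resource "google_cloud_run_service" "service" {
  name     = var.name
  project  = var.project_id
  location = var.region

  # Service-level metadata: keep only annotations that apply to the
  # service itself, such as ingress. The VPC annotations do NOT work here.
  metadata {
    annotations = {
      "run.googleapis.com/ingress" = var.ingress_settings
    }
  }

  template {
    # Revision-level metadata: Cloud Run reads the VPC connector and
    # egress settings from the revision template, per the Google example above.
    metadata {
      annotations = {
        "run.googleapis.com/vpc-access-connector" = var.vpc_connector.name
        "run.googleapis.com/vpc-access-egress"    = var.vpc_connector.egress_settings
      }
    }
    spec {
      containers {
        image = var.image
      }
    }
  }
}
```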
