
Add callysto cluster and hub #1649

Merged · 18 commits · Sep 5, 2022
Conversation

@GeorgianaElena (Member) commented Aug 23, 2022

Reference #1439

Creates a regional cluster in northamerica-northeast1-b (Montreal; GPU machines are also available in this zone, in case we need them).

Another assumption is that the storage buckets can be disabled, since they probably won't be needed for an educational hub like this one. Also, although this would be a single-tenant cluster right now, that may change in the future, so I left `enable_network_policy` set to true, since I don't think we are that strict on costs.

The config was inspired by the cloudbank cluster, since that one also holds educational hubs.

@GeorgianaElena (Member Author) commented Aug 23, 2022

Terraform plan output
Terraform will perform the following actions:

  # google_container_cluster.cluster will be created
  + resource "google_container_cluster" "cluster" {
      + cluster_ipv4_cidr           = (known after apply)
      + datapath_provider           = (known after apply)
      + default_max_pods_per_node   = (known after apply)
      + enable_binary_authorization = false
      + enable_intranode_visibility = (known after apply)
      + enable_kubernetes_alpha     = false
      + enable_l4_ilb_subsetting    = false
      + enable_legacy_abac          = false
      + enable_shielded_nodes       = true
      + enable_tpu                  = false
      + endpoint                    = (known after apply)
      + id                          = (known after apply)
      + initial_node_count          = 1
      + label_fingerprint           = (known after apply)
      + location                    = "northamerica-northeast1"
      + logging_service             = (known after apply)
      + master_version              = (known after apply)
      + monitoring_service          = (known after apply)
      + name                        = "callysto-cluster"
      + network                     = "default"
      + networking_mode             = (known after apply)
      + node_locations              = [
          + "northamerica-northeast1-b",
        ]
      + node_version                = (known after apply)
      + operation                   = (known after apply)
      + private_ipv6_google_access  = (known after apply)
      + project                     = "callysto-202316"
      + remove_default_node_pool    = true
      + self_link                   = (known after apply)
      + services_ipv4_cidr          = (known after apply)
      + subnetwork                  = (known after apply)
      + tpu_ipv4_cidr_block         = (known after apply)

      + addons_config {
          + cloudrun_config {
              + disabled           = (known after apply)
              + load_balancer_type = (known after apply)
            }

          + config_connector_config {
              + enabled = (known after apply)
            }

          + dns_cache_config {
              + enabled = (known after apply)
            }

          + gce_persistent_disk_csi_driver_config {
              + enabled = (known after apply)
            }

          + gcp_filestore_csi_driver_config {
              + enabled = (known after apply)
            }

          + horizontal_pod_autoscaling {
              + disabled = true
            }

          + http_load_balancing {
              + disabled = true
            }

          + istio_config {
              + auth     = (known after apply)
              + disabled = (known after apply)
            }

          + kalm_config {
              + enabled = (known after apply)
            }

          + network_policy_config {
              + disabled = (known after apply)
            }
        }

      + authenticator_groups_config {
          + security_group = (known after apply)
        }

      + cluster_autoscaling {
          + autoscaling_profile = "OPTIMIZE_UTILIZATION"
          + enabled             = false

          + auto_provisioning_defaults {
              + image_type       = (known after apply)
              + min_cpu_platform = (known after apply)
              + oauth_scopes     = (known after apply)
              + service_account  = (known after apply)
            }
        }

      + cluster_telemetry {
          + type = (known after apply)
        }

      + confidential_nodes {
          + enabled = (known after apply)
        }

      + database_encryption {
          + key_name = (known after apply)
          + state    = (known after apply)
        }

      + default_snat_status {
          + disabled = (known after apply)
        }

      + identity_service_config {
          + enabled = (known after apply)
        }

      + ip_allocation_policy {
          + cluster_ipv4_cidr_block       = (known after apply)
          + cluster_secondary_range_name  = (known after apply)
          + services_ipv4_cidr_block      = (known after apply)
          + services_secondary_range_name = (known after apply)
        }

      + logging_config {
          + enable_components = (known after apply)
        }

      + master_auth {
          + client_certificate     = (known after apply)
          + client_key             = (sensitive value)
          + cluster_ca_certificate = (known after apply)

          + client_certificate_config {
              + issue_client_certificate = (known after apply)
            }
        }

      + monitoring_config {
          + enable_components = (known after apply)
        }

      + network_policy {
          + enabled = true
        }

      + node_config {
          + disk_size_gb      = (known after apply)
          + disk_type         = (known after apply)
          + guest_accelerator = (known after apply)
          + image_type        = (known after apply)
          + labels            = (known after apply)
          + local_ssd_count   = (known after apply)
          + machine_type      = (known after apply)
          + metadata          = (known after apply)
          + oauth_scopes      = (known after apply)
          + preemptible       = false
          + service_account   = (known after apply)
          + spot              = false
          + taint             = (known after apply)

          + shielded_instance_config {
              + enable_integrity_monitoring = (known after apply)
              + enable_secure_boot          = (known after apply)
            }

          + workload_metadata_config {
              + mode = (known after apply)
            }
        }

      + node_pool {
          + initial_node_count          = (known after apply)
          + instance_group_urls         = (known after apply)
          + managed_instance_group_urls = (known after apply)
          + max_pods_per_node           = (known after apply)
          + name                        = (known after apply)
          + name_prefix                 = (known after apply)
          + node_count                  = (known after apply)
          + node_locations              = (known after apply)
          + version                     = (known after apply)

          + autoscaling {
              + max_node_count = (known after apply)
              + min_node_count = (known after apply)
            }

          + management {
              + auto_repair  = (known after apply)
              + auto_upgrade = (known after apply)
            }

          + network_config {
              + create_pod_range    = (known after apply)
              + pod_ipv4_cidr_block = (known after apply)
              + pod_range           = (known after apply)
            }

          + node_config {
              + boot_disk_kms_key = (known after apply)
              + disk_size_gb      = (known after apply)
              + disk_type         = (known after apply)
              + guest_accelerator = (known after apply)
              + image_type        = (known after apply)
              + labels            = (known after apply)
              + local_ssd_count   = (known after apply)
              + machine_type      = (known after apply)
              + metadata          = (known after apply)
              + min_cpu_platform  = (known after apply)
              + node_group        = (known after apply)
              + oauth_scopes      = (known after apply)
              + preemptible       = (known after apply)
              + service_account   = (known after apply)
              + spot              = (known after apply)
              + tags              = (known after apply)
              + taint             = (known after apply)

              + ephemeral_storage_config {
                  + local_ssd_count = (known after apply)
                }

              + gcfs_config {
                  + enabled = (known after apply)
                }

              + kubelet_config {
                  + cpu_cfs_quota        = (known after apply)
                  + cpu_cfs_quota_period = (known after apply)
                  + cpu_manager_policy   = (known after apply)
                }

              + linux_node_config {
                  + sysctls = (known after apply)
                }

              + sandbox_config {
                  + sandbox_type = (known after apply)
                }

              + shielded_instance_config {
                  + enable_integrity_monitoring = (known after apply)
                  + enable_secure_boot          = (known after apply)
                }

              + workload_metadata_config {
                  + mode = (known after apply)
                }
            }

          + upgrade_settings {
              + max_surge       = (known after apply)
              + max_unavailable = (known after apply)
            }
        }

      + notification_config {
          + pubsub {
              + enabled = (known after apply)
              + topic   = (known after apply)
            }
        }

      + release_channel {
          + channel = "UNSPECIFIED"
        }

      + workload_identity_config {
          + workload_pool = "callysto-202316.svc.id.goog"
        }
    }

  # google_container_node_pool.core will be created
  + resource "google_container_node_pool" "core" {
      + cluster                     = "callysto-cluster"
      + id                          = (known after apply)
      + initial_node_count          = 1
      + instance_group_urls         = (known after apply)
      + location                    = "northamerica-northeast1"
      + managed_instance_group_urls = (known after apply)
      + max_pods_per_node           = (known after apply)
      + name                        = "core-pool"
      + name_prefix                 = (known after apply)
      + node_count                  = (known after apply)
      + node_locations              = (known after apply)
      + operation                   = (known after apply)
      + project                     = "callysto-202316"
      + version                     = (known after apply)

      + autoscaling {
          + max_node_count = 5
          + min_node_count = 1
        }

      + management {
          + auto_repair  = true
          + auto_upgrade = false
        }

      + node_config {
          + disk_size_gb      = 30
          + disk_type         = (known after apply)
          + guest_accelerator = (known after apply)
          + image_type        = (known after apply)
          + labels            = {
              + "hub.jupyter.org/node-purpose" = "core"
              + "k8s.dask.org/node-purpose"    = "core"
            }
          + local_ssd_count   = (known after apply)
          + machine_type      = "n1-highmem-4"
          + metadata          = (known after apply)
          + oauth_scopes      = [
              + "https://www.googleapis.com/auth/cloud-platform",
            ]
          + preemptible       = false
          + service_account   = (known after apply)
          + tags              = []
          + taint             = (known after apply)

          + shielded_instance_config {
              + enable_integrity_monitoring = (known after apply)
              + enable_secure_boot          = (known after apply)
            }

          + workload_metadata_config {
              + mode = (known after apply)
            }
        }

      + upgrade_settings {
          + max_surge       = (known after apply)
          + max_unavailable = (known after apply)
        }
    }

  # google_container_node_pool.notebook["user"] will be created
  + resource "google_container_node_pool" "notebook" {
      + cluster                     = "callysto-cluster"
      + id                          = (known after apply)
      + initial_node_count          = 0
      + instance_group_urls         = (known after apply)
      + location                    = "northamerica-northeast1"
      + managed_instance_group_urls = (known after apply)
      + max_pods_per_node           = (known after apply)
      + name                        = "nb-user"
      + name_prefix                 = (known after apply)
      + node_count                  = (known after apply)
      + node_locations              = (known after apply)
      + operation                   = (known after apply)
      + project                     = "callysto-202316"
      + version                     = (known after apply)

      + autoscaling {
          + max_node_count = 20
          + min_node_count = 0
        }

      + management {
          + auto_repair  = true
          + auto_upgrade = false
        }

      + node_config {
          + disk_size_gb      = (known after apply)
          + disk_type         = "pd-balanced"
          + guest_accelerator = (known after apply)
          + image_type        = (known after apply)
          + labels            = {
              + "hub.jupyter.org/node-purpose" = "user"
              + "k8s.dask.org/node-purpose"    = "scheduler"
            }
          + local_ssd_count   = (known after apply)
          + machine_type      = "n1-highmem-4"
          + metadata          = (known after apply)
          + oauth_scopes      = [
              + "https://www.googleapis.com/auth/cloud-platform",
            ]
          + preemptible       = false
          + service_account   = (known after apply)
          + tags              = []
          + taint             = [
              + {
                  + effect = "NO_SCHEDULE"
                  + key    = "hub.jupyter.org_dedicated"
                  + value  = "user"
                },
            ]

          + shielded_instance_config {
              + enable_integrity_monitoring = (known after apply)
              + enable_secure_boot          = (known after apply)
            }

          + workload_metadata_config {
              + mode = "GKE_METADATA"
            }
        }

      + upgrade_settings {
          + max_surge       = (known after apply)
          + max_unavailable = (known after apply)
        }
    }

  # google_project_iam_custom_role.requestor_pays will be created
  + resource "google_project_iam_custom_role" "requestor_pays" {
      + deleted     = (known after apply)
      + description = "Minimal role for hub users on callysto to identify as current project"
      + id          = (known after apply)
      + name        = (known after apply)
      + permissions = [
          + "serviceusage.services.use",
        ]
      + project     = "callysto-202316"
      + role_id     = "callysto_requestor_pays"
      + stage       = "GA"
      + title       = "Identify as project role for users in callysto"
    }

  # google_project_iam_member.cd_sa_roles["roles/artifactregistry.writer"] will be created
  + resource "google_project_iam_member" "cd_sa_roles" {
      + etag    = (known after apply)
      + id      = (known after apply)
      + member  = (known after apply)
      + project = "callysto-202316"
      + role    = "roles/artifactregistry.writer"
    }

  # google_project_iam_member.cd_sa_roles["roles/container.admin"] will be created
  + resource "google_project_iam_member" "cd_sa_roles" {
      + etag    = (known after apply)
      + id      = (known after apply)
      + member  = (known after apply)
      + project = "callysto-202316"
      + role    = "roles/container.admin"
    }

  # google_project_iam_member.cluster_sa_roles["roles/artifactregistry.reader"] will be created
  + resource "google_project_iam_member" "cluster_sa_roles" {
      + etag    = (known after apply)
      + id      = (known after apply)
      + member  = (known after apply)
      + project = "callysto-202316"
      + role    = "roles/artifactregistry.reader"
    }

  # google_project_iam_member.cluster_sa_roles["roles/logging.logWriter"] will be created
  + resource "google_project_iam_member" "cluster_sa_roles" {
      + etag    = (known after apply)
      + id      = (known after apply)
      + member  = (known after apply)
      + project = "callysto-202316"
      + role    = "roles/logging.logWriter"
    }

  # google_project_iam_member.cluster_sa_roles["roles/monitoring.metricWriter"] will be created
  + resource "google_project_iam_member" "cluster_sa_roles" {
      + etag    = (known after apply)
      + id      = (known after apply)
      + member  = (known after apply)
      + project = "callysto-202316"
      + role    = "roles/monitoring.metricWriter"
    }

  # google_project_iam_member.cluster_sa_roles["roles/monitoring.viewer"] will be created
  + resource "google_project_iam_member" "cluster_sa_roles" {
      + etag    = (known after apply)
      + id      = (known after apply)
      + member  = (known after apply)
      + project = "callysto-202316"
      + role    = "roles/monitoring.viewer"
    }

  # google_project_iam_member.cluster_sa_roles["roles/stackdriver.resourceMetadata.writer"] will be created
  + resource "google_project_iam_member" "cluster_sa_roles" {
      + etag    = (known after apply)
      + id      = (known after apply)
      + member  = (known after apply)
      + project = "callysto-202316"
      + role    = "roles/stackdriver.resourceMetadata.writer"
    }

  # google_service_account.cd_sa will be created
  + resource "google_service_account" "cd_sa" {
      + account_id   = "callysto-cd-sa"
      + disabled     = false
      + display_name = "Continuous Deployment SA for callysto"
      + email        = (known after apply)
      + id           = (known after apply)
      + name         = (known after apply)
      + project      = "callysto-202316"
      + unique_id    = (known after apply)
    }

  # google_service_account.cluster_sa will be created
  + resource "google_service_account" "cluster_sa" {
      + account_id   = "callysto-cluster-sa"
      + disabled     = false
      + display_name = "Service account used by nodes of cluster callysto"
      + email        = (known after apply)
      + id           = (known after apply)
      + name         = (known after apply)
      + project      = "callysto-202316"
      + unique_id    = (known after apply)
    }

  # google_service_account_key.cd_sa will be created
  + resource "google_service_account_key" "cd_sa" {
      + id                 = (known after apply)
      + key_algorithm      = "KEY_ALG_RSA_2048"
      + name               = (known after apply)
      + private_key        = (sensitive value)
      + private_key_type   = "TYPE_GOOGLE_CREDENTIALS_FILE"
      + public_key         = (known after apply)
      + public_key_type    = "TYPE_X509_PEM_FILE"
      + service_account_id = (known after apply)
      + valid_after        = (known after apply)
      + valid_before       = (known after apply)
    }

Plan: 14 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + buckets                   = {}
  + ci_deployer_key           = (sensitive value)
  + kubernetes_sa_annotations = {}
  + registry_sa_keys          = (sensitive value)
──────────────────────────────────────────

@GeorgianaElena GeorgianaElena requested a review from a team August 23, 2022 13:27
@GeorgianaElena (Member Author)

According to our docs, I believe I first need approval for the config before running `terraform apply` and adding the cluster to the various workflows.

@yuvipanda (Member) left a comment

This LGTM!

For setting up new infra, I think there's no need to wait on first review to start iterating. I think it's covered in https://infrastructure.2i2c.org/en/latest/contributing/code-review.html?highlight=terraform#changes-to-set-up-new-infrastructure.

I'm not sure we need to redeploy either!

@GeorgianaElena GeorgianaElena changed the title Add tfvars file to create callysto cluster Add callysto cluster and hub Aug 25, 2022
@GeorgianaElena (Member Author)

> For setting up new infra, I think there's no need to wait on first review to start iterating. I think it's covered in https://infrastructure.2i2c.org/en/latest/contributing/code-review.html?highlight=terraform#changes-to-set-up-new-infrastructure.
>
> I'm not sure we need to redeploy either!

Sorry about it! Thank you @yuvipanda!

@yuvipanda (Member) left a comment

I actually have a suggestion here - let's use Google Filestore and not manually manage an NFS server. I think we should move off manually managed NFS servers everywhere possible, and just rely on Filestore. The only disadvantage of filestore is the minimum base cost, but I think at Callysto's scale that isn't a big deal.

@GeorgianaElena GeorgianaElena changed the title Add callysto cluster and hub [WIP] Add callysto cluster and hub Aug 29, 2022
@GeorgianaElena (Member Author)

> I actually have a suggestion here - let's use Google Filestore and not manually manage an NFS server.

Thanks for the suggestion @yuvipanda! I've deleted the manually deployed VM and switched the cluster to use GFS. But just to make sure I understand, the reason behind this is to avoid performing those manual steps to create the VM, right? Are there any other technicalities that make GFS the better choice? Thanks!

Update on the state of the cluster and hub

  • there is a new regional callysto cluster that's running and uses GFS to store the user data
  • all support components are up and running
  • there is a staging hub running at https://staging.callysto.2i2c.cloud/ which uses CILogon authentication and allows logging in with Google + Microsoft.

HOWEVER

  • The usernames are opaque IDs, corresponding to the `oidc` claim returned by CILogon, which is in turn the `sub` claim returned by Google and Microsoft (more about this at https://www.cilogon.org/oidc), and which should be unique.

IMPASSE

I am at an impasse, however, on how to deal with `allowed_users` and `admin_users` in this case, because we cannot know these unique IDs in advance, and even if we could, that would mean maintaining these lists by hand.

There is also the issue of tracing support requests back to the users, which in theory would be possible with the current hub version if `auth_state` showed up in the admin panel like:
[Screenshot 2022-08-29 at 16 54 49]

Context
The utoronto hub is another hub that uses unique identifiers for its users, but there access control is handled by only allowing login through the utoronto portal. That is not the case here, since we want to allow logging in with both Google and Microsoft.

Where I need help

The only solution I can think of right now is to override the `authenticate` method of the `CILogonOAuthenticator` so that it performs the `allowed_domain` check on the email claim, but returns the authenticated user as the one identified by the `oidc` claim.

But maybe there's something better that can be done? I would really appreciate a set of fresh eyes looking at this, since I feel I am going in a bit of a loop (cc @yuvipanda)
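For illustration, the idea above can be sketched as plain Python, modeling only the claim-selection logic: validate the `email` claim's domain, but return the opaque `oidc` claim as the username. The `ALLOWED_DOMAINS` value and the `resolve_username` helper are hypothetical; a real change would subclass `CILogonOAuthenticator` from the oauthenticator package and override its `authenticate` method.

```python
# Sketch of the proposed username-selection logic (illustrative only):
# check the allowed-domain rule against the `email` claim, but identify
# the user by the opaque `oidc` claim returned by CILogon.
from typing import Optional

ALLOWED_DOMAINS = {"example.edu"}  # hypothetical allowed domain


def resolve_username(userinfo: dict) -> Optional[str]:
    """Return the opaque `oidc` claim as the username when the
    `email` claim belongs to an allowed domain, else None."""
    email = userinfo.get("email", "")
    domain = email.rpartition("@")[2]
    if domain not in ALLOWED_DOMAINS:
        return None  # login rejected
    return userinfo.get("oidc")  # opaque, stable user id


# Example: accepted user keeps the opaque id as their hub username
print(resolve_username({"email": "jo@example.edu", "oidc": "abc123"}))
# Example: disallowed domain is rejected
print(resolve_username({"email": "jo@gmail.com", "oidc": "xyz"}))
```

The trade-off this models is exactly the one discussed above: the email claim is only used for the access check, while everything stored on the hub (usernames, home directories) keys off the stable opaque ID.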

@GeorgianaElena (Member Author)

Sorry for the extra ping @yuvipanda and @2i2c-org/tech-team, but can you please help unblock me on the matter above ⬆️? Thank you!

@yuvipanda (Member)

> But just to make sure I understand, the reason behind this is to avoid performing those manual steps to create the VM, right? Are there any other technicalities that make GFS the better choice? Thanks!

Yes - GFS is fully automated, while manual NFS isn't. Plus, it is much easier to resize a GFS instance than a manually deployed NFS VM.

@yuvipanda (Member)

@GeorgianaElena re: authentication, were we planning on allowing anyone with a Microsoft or Google account (based on #1439 (comment))? If that's the case, I think for admins we should ask the admins to log in, and ask for their opaque ID. It can be found if they go to the JupyterHub Control Panel. For example, if I go to https://jupyter.utoronto.ca/hub/home I can see my user id on the top right:
[Screen Shot 2022-08-30 at 11 44 47 PM]

We can add those to the admin list once admins log in and report their opaque UID to us. This is also how support can be provided - users can self-report their Opaque UID when asking for support.

Hope this helps!

@GeorgianaElena (Member Author)

> @GeorgianaElena re: authentication, were we planning on allowing anyone with a Microsoft or Google account (based on #1439 (comment))?

Hmm, not sure if they want to allow anyone, but I will ask. I assumed they would like to keep a list of allowed users at least, especially since we haven't had great experiences with the hubs where we enabled this kind of loose access.

> We can add those to the admin list once admins log in and report their opaque UID to us. This is also how support can be provided - users can self-report their Opaque UID when asking for support.

I believe not knowing each user's opaque UID rules out an allowed list, because people would need to log in first just to find their ID.

But CILogon provides a service, running at https://cilogon.org/testidp/, which I believe shows the attributes an identity provider returns. You can log in there with the IdP and account you'd like to use to access the hub, go to the User Attributes section, and copy the OpenID returned, which is what we're using to identify the users in this hub (I double-checked with my account and this is it).

We could use this to find out the admins' opaque IDs, and the admins could ask the users to get their OIDs so we can provide them access to the hub.

WDYT @yuvipanda ?

It looks like this:
[Screenshot 2022-08-31 at 12 53 53]

@github-actions (bot)

Support and Staging deployments

| Cloud Provider | Cluster Name | Upgrade Support? | Reason for Support Redeploy | Upgrade Staging? | Reason for Staging Redeploy |
| --- | --- | --- | --- | --- | --- |
| gcp | callysto | Yes | Following helm chart values files were modified: enc-support.secret.values.yaml, support.values.yaml | Yes | Following helm chart values files were modified: staging.values.yaml, enc-staging.secret.values.yaml, common.values.yaml |

Production deployments

| Cloud Provider | Cluster Name | Hub Name | Reason for Redeploy |
| --- | --- | --- | --- |
| gcp | callysto | prod | Following helm chart values files were modified: prod.values.yaml, common.values.yaml, enc-prod.secret.values.yaml |

@GeorgianaElena GeorgianaElena changed the title [WIP] Add callysto cluster and hub Add callysto cluster and hub Aug 31, 2022
@GeorgianaElena GeorgianaElena mentioned this pull request Aug 31, 2022
@github-actions (bot) commented Sep 1, 2022

Support and Staging deployments

| Cloud Provider | Cluster Name | Upgrade Support? | Reason for Support Redeploy | Upgrade Staging? | Reason for Staging Redeploy |
| --- | --- | --- | --- | --- | --- |
| gcp | callysto | Yes | Following helm chart values files were modified: enc-support.secret.values.yaml, support.values.yaml | Yes | Following helm chart values files were modified: common.values.yaml, enc-staging.secret.values.yaml, staging.values.yaml |

Production deployments

| Cloud Provider | Cluster Name | Hub Name | Reason for Redeploy |
| --- | --- | --- | --- |
| gcp | callysto | prod | Following helm chart values files were modified: common.values.yaml, prod.values.yaml, enc-prod.secret.values.yaml |

@GeorgianaElena (Member Author)

@2i2c-org/tech-team, can you please log in to https://2i2c.callysto.ca and retrieve your hub username from there, so I can add it as a hub admin?

@sgibson91 (Member)

I am 115722756968212778437 (We should maybe add comments in the config to map who is who... you've probably already thought of that!)

@github-actions (bot) commented Sep 1, 2022

Support and Staging deployments

| Cloud Provider | Cluster Name | Upgrade Support? | Reason for Support Redeploy | Upgrade Staging? | Reason for Staging Redeploy |
| --- | --- | --- | --- | --- | --- |
| gcp | callysto | Yes | Following helm chart values files were modified: enc-support.secret.values.yaml, support.values.yaml | Yes | Following helm chart values files were modified: common.values.yaml, enc-staging.secret.values.yaml, staging.values.yaml |

Production deployments

| Cloud Provider | Cluster Name | Hub Name | Reason for Redeploy |
| --- | --- | --- | --- |
| gcp | callysto | prod | Following helm chart values files were modified: enc-prod.secret.values.yaml, common.values.yaml, prod.values.yaml |

@GeorgianaElena (Member Author)

> We should maybe add comments in the config to map who is who... you've probably already thought of that!

Kind of 😅 I believe the community rep wants to keep theirs private, so I'll probably have staff ones mapped and add the others through the hub Admin Panel.

@sgibson91 (Member)

Yeah, I meant, but didn't articulate, staff IDs!

@github-actions (bot) commented Sep 1, 2022

Support and Staging deployments

| Cloud Provider | Cluster Name | Upgrade Support? | Reason for Support Redeploy | Upgrade Staging? | Reason for Staging Redeploy |
| --- | --- | --- | --- | --- | --- |
| gcp | callysto | Yes | Following helm chart values files were modified: enc-support.secret.values.yaml, support.values.yaml | Yes | Following helm chart values files were modified: enc-staging.secret.values.yaml, staging.values.yaml, common.values.yaml |

Production deployments

| Cloud Provider | Cluster Name | Hub Name | Reason for Redeploy |
| --- | --- | --- | --- |
| gcp | callysto | prod | Following helm chart values files were modified: prod.values.yaml, enc-prod.secret.values.yaml, common.values.yaml |

@sgibson91 (Member) commented Sep 1, 2022

Hmmm, I'm going to have to pause that workflow that posts the plans and figure out why it's posting multiple times instead of updating an existing comment...

Edit: I opened #1675 to track

@GeorgianaElena (Member Author)

> Edit: I opened #1675 to track

Thanks a lot @sgibson91

Feedback

I believe this is ready for review again. I will check with the callysto folks whether they are able to log in, but my plan is to get this merged, then iterate on further requests from them in other PRs.

@GeorgianaElena GeorgianaElena requested a review from a team September 2, 2022 07:48
@yuvipanda (Member) left a comment

With the understanding that this would probably have trouble with cryptocurrency miners in some form as is (but I think @ianabc already knows that :D), I'm happy to merge this.

@ianabc @GeorgianaElena let's open a separate issue to talk about locking this down some more to protect against that? We already run cryptnono, but I'd imagine we need more.

@GeorgianaElena (Member Author)

Thanks @yuvipanda! I opened #1678 and will merge this now as it is, but I plan to open a follow-up PR to use an allowed list of users until we find other abuse-protection mechanisms, or until the hub needs to be used.

@GeorgianaElena GeorgianaElena merged commit 10049a0 into 2i2c-org:master Sep 5, 2022
@github-actions (bot) commented Sep 5, 2022

🎉🎉🎉🎉

Monitor the deployment of the hubs here 👉 https://github.com/2i2c-org/infrastructure/actions/runs/2991831003
