Can not use dynamic Service Account #27
Hi @AdrienWalkowiak, is this still an issue for you? I've been unable to reproduce it. I'm able to plan and apply correctly using almost exactly the Terraform you pasted; I've put what I'm using below. If you're still running into this, could you send me your whole project? Best, Rishi

My main.tf:

module "gke" {
source = "github.com/terraform-google-modules/terraform-google-kubernetes-engine"
project_id = "${var.project_id}"
name = "deploy-service-cluster"
region = "${var.region}"
network = "${var.network}"
subnetwork = "${var.subnetwork}"
ip_range_pods = "${var.ip_range_pods}"
ip_range_services = "${var.ip_range_services}"
http_load_balancing = true
horizontal_pod_autoscaling = true
kubernetes_dashboard = true
network_policy = true
kubernetes_version = "1.11.2-gke.18"
node_pools = [
{
name = "default-node-pool"
machine_type = "n1-standard-2"
min_count = 1
max_count = 10
disk_size_gb = 100
disk_type = "pd-standard"
image_type = "COS"
auto_repair = true
auto_upgrade = true
},
]
node_pools_labels = {
all = {}
default-node-pool = {
default-node-pool = "true"
}
}
node_pools_taints = {
all = []
default-node-pool = [
{
key = "default-node-pool"
value = "true"
effect = "PREFER_NO_SCHEDULE"
},
]
}
node_pools_tags = {
all = []
default-node-pool = [
"default-node-pool",
]
}
}

My vars:

/**
* Copyright 2018 Google LLC
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
variable "project_id" {
description = "The project ID to host the cluster in (required)"
}
variable "name" {
description = "The name of the cluster (required)"
}
variable "description" {
description = "The description of the cluster"
default = ""
}
variable "regional" {
description = "Whether to create a regional cluster (a zonal cluster is created if set to false. WARNING: changing this after cluster creation is destructive!)"
default = true
}
variable "region" {
description = "The region to host the cluster in (required)"
}
variable "zones" {
type = "list"
description = "The zones to host the cluster in (optional if regional cluster / required if zonal)"
default = []
}
variable "network" {
description = "The VPC network to host the cluster in (required)"
}
variable "network_project_id" {
description = "The project ID of the shared VPC's host (for shared vpc support)"
default = ""
}
variable "subnetwork" {
description = "The subnetwork to host the cluster in (required)"
}
variable "kubernetes_version" {
description = "The Kubernetes version of the masters. If set to 'latest' it will pull latest available version in the selected region."
default = "1.10.6"
}
variable "node_version" {
description = "The Kubernetes version of the node pools. Defaults to the kubernetes_version (master) variable and can be overridden for individual node pools by setting the version key on them. Must be empty or set to the same version as the master at cluster creation."
default = ""
}
variable "master_authorized_networks_config" {
type = "list"
description = <<EOF
The desired configuration options for master authorized networks. Omit the nested cidr_blocks attribute to disallow external access (except the cluster node IPs, which GKE automatically whitelists)
### example format ###
master_authorized_networks_config = [{
cidr_blocks = [{
cidr_block = "10.0.0.0/8"
display_name = "example_network"
}],
}]
EOF
default = []
}
variable "horizontal_pod_autoscaling" {
description = "Enable horizontal pod autoscaling addon"
default = false
}
variable "http_load_balancing" {
description = "Enable the HTTP load balancing addon"
default = true
}
variable "kubernetes_dashboard" {
description = "Enable kubernetes dashboard addon"
default = false
}
variable "network_policy" {
description = "Enable network policy addon"
default = false
}
variable "maintenance_start_time" {
description = "Time window specified for daily maintenance operations in RFC3339 format"
default = "05:00"
}
variable "ip_range_pods" {
description = "The secondary ip range to use for pods"
}
variable "ip_range_services" {
description = "The secondary ip range to use for services"
}
variable "node_pools" {
type = "list"
description = "List of maps containing node pools"
default = [
{
name = "default-node-pool"
},
]
}
variable "node_pools_labels" {
type = "map"
description = "Map of maps containing node labels by node-pool name"
default = {
all = {}
default-node-pool = {}
}
}
variable "node_pools_taints" {
type = "map"
description = "Map of lists containing node taints by node-pool name"
default = {
all = []
default-node-pool = []
}
}
variable "node_pools_tags" {
type = "map"
description = "Map of lists containing node network tags by node-pool name"
default = {
all = []
default-node-pool = []
}
}
variable "stub_domains" {
type = "map"
description = "Map of stub domains and their resolvers to forward DNS queries for a certain domain to an external DNS server"
default = {}
}
variable "non_masquerade_cidrs" {
type = "list"
description = "List of strings in CIDR notation that specify the IP address ranges that do not use IP masquerading."
default = ["10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16"]
}
variable "ip_masq_resync_interval" {
description = "The interval at which the agent attempts to sync its ConfigMap file from the disk."
default = "60s"
}
variable "ip_masq_link_local" {
description = "Whether to masquerade traffic to the link-local prefix (169.254.0.0/16)."
default = "false"
}
variable "logging_service" {
description = "The logging service that the cluster should write logs to. Available options include logging.googleapis.com, logging.googleapis.com/kubernetes (beta), and none"
default = "logging.googleapis.com"
}
variable "monitoring_service" {
description = "The monitoring service that the cluster should write metrics to. Automatically sends metrics from pods in the cluster to the Google Cloud Monitoring API. VM metrics are collected by Google Compute Engine regardless of this setting. Available options include monitoring.googleapis.com, monitoring.googleapis.com/kubernetes (beta), and none"
default = "monitoring.googleapis.com"
}

And my outputs:

/**
* Copyright 2018 Google LLC
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
output "name_example" {
description = "Cluster name"
value = "${module.gke.name}"
}
output "endpoint_example" {
sensitive = true
description = "Cluster endpoint"
value = "${module.gke.endpoint}"
}
output "location_example" {
description = "Cluster location"
value = "${module.gke.location}"
}
output "zones_example" {
description = "List of zones in which the cluster resides"
value = "${module.gke.zones}"
}
output "node_pools_names_example" {
value = "${module.gke.node_pools_names}"
}
output "node_pools_versions_example" {
value = "${module.gke.node_pools_versions}"
} |
Thank you for checking. I tried using your code and it seems to get past the error, so I will close this issue and see what's wrong on my end; probably a syntax issue. Thanks |
This is definitely an issue. I get the following error:
My config is pretty straightforward.
I am thinking this should fix it ... it needs to be applied in both regional.tf and zonal.tf
|
I have the same issue now. Trying to deploy a GKE cluster, I keep getting this error.
The taint option does not seem to be available now, so I removed it. Also, the error suggests using the beta provider, which I did, but from my understanding this option has been deprecated. |
Same for me with a minimal config:

provider "google-beta" {
project = "${var.project_id}"
region = "${var.region}"
}
module "gke" {
source = "terraform-google-modules/kubernetes-engine/google"
project_id = "${var.project_id}"
name = "${var.cluster_name}"
region = "${var.region}"
zones = ["${var.cluster_zone}"]
network = "${var.network_name}"
subnetwork = "${var.subnetwork_name}"
ip_range_pods = "${var.ip_range_pods}"
ip_range_services = "${var.ip_range_services}"
service_account = "[SERVICE ACCOUNT NAME]"
kubernetes_dashboard = true
}
|
Same here:
plan output:
|
@deenski @tommyknows @faizan82 Can any of you let me know what version of the provider you're using? I can confirm the issue on 2.0. |
Ok, so the documentation states that you need version 1.8 of the provider, and that is the supported configuration.
I can confirm that the examples work with the provider pinned to 1.8. Using the 2.0 version of the provider is unsupported; if you use it, you do need to be on the beta version of the Google provider. This configuration seems to work for me:

provider "google-beta" {
version = "~> 2.0.0"
project = "${var.project_id}"
region = "${var.region}"
}
module "gke" {
providers = {
google = "google-beta"
}
source = "terraform-google-modules/kubernetes-engine/google"
project_id = "${var.project_id}"
name = "issue27-test-cluster"
region = "us-east4"
zones = ["us-east4-a"]
network = "${var.network}"
subnetwork = "${var.subnetwork}"
ip_range_pods = "${var.ip_range_pods}"
ip_range_services = "${var.ip_range_services}"
http_load_balancing = true
horizontal_pod_autoscaling = true
kubernetes_dashboard = true
network_policy = true
}

Note that the only difference is that you have to explicitly pass the beta provider into the module, so that it inherits correctly. |
Can confirm, I was on the 2.0 version. The configuration @ogreface provided also works for me. Edit: sorry for the delay. |
@ogreface -- we're giving this a shot right now. We even tried to explicitly pin to 2.0.0 of the beta provider and no dice. Seems this is now completely borked? |
@wadadli Could you paste your code? Happy to take a look, but the example above still seems to work for me. |
Here's the tf that is resulting in the following error
We have tried adding the
to both |
@wadadli That TF pretty much works for me in terms of validation. Are you specifying the provider that's being passed in?
|
I still have this issue. I'm using the master code of this repo, downloaded a few minutes ago.
This is my provider:
I'm still getting these errors:
Is there some mistake I made? |
Folks, I think I've found the issue. You must specifically use a variable. If you use a local or a module output, it fails.

node_pools = [
{
name = "default-node-pool"
machine_type = "n1-standard-2"
min_count = 0
max_count = 1
disk_size_gb = 100
disk_type = "pd-standard"
image_type = "COS"
auto_repair = true
auto_upgrade = true
service_account = "${local.default_service_account}"
preemptible = false
initial_node_count = 0
},
]

does not work, but

node_pools = [
{
name = "default-node-pool"
machine_type = "n1-standard-2"
min_count = 0
max_count = 1
disk_size_gb = 100
disk_type = "pd-standard"
image_type = "COS"
auto_repair = true
auto_upgrade = true
service_account = "${var.default_service_account}"
preemptible = false
initial_node_count = 0
},
]

works. |
Hi all. I apologize for the persistence of this issue. A workaround, in addition to the one shared by @thiagonache, is to allow the module to create a dedicated service account for the cluster:

module "kubernetes_engine" {
# ...
service_account = "create"
} |
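To make the workaround concrete, here is a fuller sketch of such an invocation. This is not from the thread: every name and variable besides service_account = "create" is a hypothetical placeholder.

```hcl
# Sketch only: all values except service_account are placeholders.
module "kubernetes_engine" {
  source = "terraform-google-modules/kubernetes-engine/google"

  project_id        = "${var.project_id}"
  name              = "example-cluster"
  region            = "${var.region}"
  network           = "${var.network}"
  subnetwork        = "${var.subnetwork}"
  ip_range_pods     = "${var.ip_range_pods}"
  ip_range_services = "${var.ip_range_services}"

  # "create" tells the module to create its own dedicated service
  # account, sidestepping the dynamic service_account interpolation issue.
  service_account = "create"
}
```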
Not reproducible.

locals {
cluster_type = "node-pool"
mysa = "[email protected]"
}
provider "google" {
version = "~> 2.9.0"
region = var.region
}
provider "google-beta" {
version = "~> 2.9.0"
region = var.region
}
module "sa" {
source = "./sa"
}
module "gke" {
source = "../terraform-google-kubernetes-engine"
project_id = var.project_id
name = "${local.cluster_type}-cluster${var.cluster_name_suffix}"
regional = false
region = var.region
zones = var.zones
network = var.network
subnetwork = var.subnetwork
ip_range_pods = var.ip_range_pods
ip_range_services = var.ip_range_services
remove_default_node_pool = true
disable_legacy_metadata_endpoints = false
node_pools = [
{
name = "pool-01"
min_count = 1
max_count = 2
service_account = module.sa.name
auto_upgrade = false
},
{
name = "pool-02"
min_count = 1
max_count = 2
service_account = local.mysa
auto_upgrade = false
},
]
}

Module file sa.tf:

locals {
prefix = "xxxxxx"
suffix = "xxxxxx.iam.gserviceaccount.com"
}
output "name" {
value = "${local.prefix}@${local.suffix}"
} |
@kopachevsky Please attempt to reproduce when you include the SA in the same config as your module invocation, i.e.:
|
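For instance, a combined config of that shape might look like this sketch; the resource name, account_id, and cluster values are hypothetical, not taken from the thread.

```hcl
# Hypothetical repro: service account resource and module invocation
# in the same configuration, with the module consuming the SA's email.
resource "google_service_account" "gke" {
  project      = var.project_id
  account_id   = "gke-nodes"
  display_name = "GKE node service account"
}

module "gke" {
  source            = "../terraform-google-kubernetes-engine"
  project_id        = var.project_id
  name              = "repro-cluster"
  region            = var.region
  network           = var.network
  subnetwork        = var.subnetwork
  ip_range_pods     = var.ip_range_pods
  ip_range_services = var.ip_range_services

  # The dynamic reference under test:
  service_account = google_service_account.gke.email
}
```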
@morgante this scenario is working fine; I tested it several times. See the gist that works for me: https://gist.github.com/kopachevsky/6152449ac8e2a177e0759564915ed84f

So a dynamic service account definition in the node_pools parameter works:

node_pools = [
{
name = "pool-01"
min_count = 1
max_count = 1
service_account = google_service_account.sa.email
auto_upgrade = false
auto_repair = false
},
]

But if I set the service account for the default pool via the top-level service_account parameter:

module "gke" {
source = "../terraform-google-kubernetes-engine"
project_id = "gl-akopachevskyy-gke"
initial_node_count = 1
service_account = google_service_account.gke.email
}

I'm getting the following error:
A possible solution is to add a new boolean parameter, create_service_account, true by default, and use it like this:

module "gke" {
source = "../terraform-google-kubernetes-engine"
project_id = "gl-akopachevskyy-gke"
initial_node_count = 1
create_service_account = false
service_account = google_service_account.gke.email
//..other props
}

What do you think? |
@kopachevsky That sounds good to me. |
Bugfix: Can not use dynamic Service Account #27
Added a boolean create_service_account variable, true by default. After this change, google_service_account.cluster_service_account.count depends on the new variable instead of the service_account variable, which means the service_account variable can be dynamic from now on.
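Roughly, the shape of such a change could be sketched as follows. This is a simplified illustration, not the actual diff; the account_id and display_name are invented for the example.

```hcl
# Sketch: count now depends on the statically-known boolean
# create_service_account rather than on the possibly-dynamic
# service_account value, so Terraform can compute it at plan time.
variable "create_service_account" {
  description = "Whether the module should create a service account for the cluster"
  default     = true
}

resource "google_service_account" "cluster_service_account" {
  count        = var.create_service_account ? 1 : 0
  project      = var.project_id
  account_id   = "tf-gke-cluster"
  display_name = "Terraform-managed service account for the GKE cluster"
}
```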
…27/dynamic-sa Bugfix: Can not use dynamic Service Account terraform-google-modules#27
I am trying to use this module, based on the provided examples, but can't seem to get it to work. It used to be fine a few days ago, but not anymore.
Here is the error I get:
And here is the terraform used: