Crash #88
Comments
I noticed
Is this crash reproducible with a 1.17.x cluster @aronneagu?
Sorry, I had kubectl use an existing context that had a 1.16 cluster in it.
To deploy a 1.17.7 GKE cluster via Terraform, consider the following:

```hcl
resource "google_compute_network" "vpc" {
  name                    = "kubernetes-tf-network"
  auto_create_subnetworks = "false"
}

resource "google_compute_subnetwork" "subnet" {
  name          = "kubernetes-tf-subnet"
  region        = var.region
  network       = google_compute_network.vpc.name
  ip_cidr_range = "10.10.0.0/24"
}

data "google_container_engine_versions" "east4" {
  provider       = google-beta
  project        = var.project_id
  location       = var.region
  version_prefix = "1.17."
}

resource "google_container_cluster" "primary" {
  provider                 = google-beta
  name                     = "kubernetes-tf-cluster"
  location                 = var.region
  remove_default_node_pool = true
  initial_node_count       = 1
  node_version             = data.google_container_engine_versions.east4.release_channel_default_version["RAPID"]
  min_master_version       = data.google_container_engine_versions.east4.release_channel_default_version["RAPID"]

  release_channel {
    channel = "RAPID"
  }

  network    = google_compute_network.vpc.name
  subnetwork = google_compute_subnetwork.subnet.name

  master_auth {
    client_certificate_config {
      issue_client_certificate = false
    }
  }
}

resource "google_container_node_pool" "primary_nodes" {
  provider           = google-beta
  name               = "${google_container_cluster.primary.name}-node-pool"
  location           = var.region
  cluster            = google_container_cluster.primary.name
  initial_node_count = 1

  autoscaling {
    min_node_count = 0
    max_node_count = 6
  }

  node_config {
    oauth_scopes = [
      "https://www.googleapis.com/auth/logging.write",
      "https://www.googleapis.com/auth/monitoring",
    ]

    labels = {
      env = var.project_id
    }

    # preemptible  = true
    machine_type = var.machine_type
    tags         = ["gke-node", "${google_container_cluster.primary.name}"]

    metadata = {
      disable-legacy-endpoints = "true"
    }
  }
}

output "kubernetes_cluster_name" {
  value       = google_container_cluster.primary.name
  description = "GKE cluster name"
}

output "version" {
  value       = google_container_cluster.primary.master_version
  description = "Master version"
}

output "project_id" {
  value       = var.project_id
  description = "GCP project ID"
}

output "region" {
  value       = var.region
  description = "Region"
}
```

The outputs allow you to connect to the cluster via:

```shell
gcloud container clusters get-credentials $(terraform output kubernetes_cluster_name) \
  --region $(terraform output region) \
  --project $(terraform output project_id)
```
Hi @jefflantz, thanks for that snippet, but I am looking to deploy to Azure AKS.
Sorry about that, I confused this issue with a different one. Perhaps a similar method exists for Amazon?
@aronneagu Your crash is caused by inconsistent credential values coming from the AKS data sources, likely because you are creating the cluster in the same apply operation (am I right?). This is a known Terraform limitation that can sometimes cause problems. The situation should improve with #65, but it's still not entirely solved. However, even once that's fixed, your use case will still not work as expected, because you create multiple K8s resources that depend on one another. This currently falls under a documented limitation of the provider (see the README) and will be fixed when PR #41 merges.
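The failure mode described here can be sketched with a minimal, hypothetical fragment. The resource and attribute names below (`azurerm_kubernetes_cluster.main`, the `host`/`cluster_ca_certificate` provider arguments) are illustrative assumptions, not taken from this thread: the point is only that when the cluster and the kubernetes-alpha provider live in the same configuration, the provider's credentials are unknown at plan time.

```hcl
# Hypothetical sketch: cluster and K8s workloads in ONE apply.
# The kubernetes-alpha provider is configured from attributes of a
# cluster that does not exist yet, so those values are unknown during
# plan and the provider cannot reach any API server to plan its
# kubernetes_manifest resources.
provider "kubernetes-alpha" {
  # Unknown until the cluster is actually created:
  host                   = azurerm_kubernetes_cluster.main.kube_config[0].host
  cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.main.kube_config[0].cluster_ca_certificate)
}
```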
Hi @alexsomesan, you are right, I was trying to use the kubernetes-alpha provider in the same apply as the one that was creating the cluster. In the end, I separated the apply into two runs: one that creates the Kubernetes cluster, and a second one that uses kubernetes-alpha to deploy Istio. Thanks for explaining the cause; it wasn't immediately obvious why it didn't work.
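The second of the two runs described above might look something like this. This is a sketch under stated assumptions: the `config_path` argument and the `kubernetes_manifest` resource follow the kubernetes-alpha v0.1.x documentation, and it assumes cluster credentials were fetched into the default kubeconfig (e.g. with `gcloud container clusters get-credentials`) between the two applies.

```hcl
# Second, separate configuration, applied only after the cluster
# already exists, so the provider has real credentials at plan time.
provider "kubernetes-alpha" {
  config_path = "~/.kube/config" # populated between the two runs
}

# Example workload resource (a namespace for Istio); any
# kubernetes_manifest resources for the actual Istio install would
# follow the same pattern.
resource "kubernetes_manifest" "istio_namespace" {
  provider = kubernetes-alpha

  manifest = {
    apiVersion = "v1"
    kind       = "Namespace"
    metadata = {
      name = "istio-system"
    }
  }
}
```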
Terraform Version and Provider Version
Terraform v0.12.28
Used the binary from https://github.com/hashicorp/terraform-provider-kubernetes-alpha/releases/download/v0.1.0/terraform-provider-kubernetes-alpha_0.1.0_linux_amd64.zip
Kubernetes Version
kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2", GitCommit:"52c56ce7a8272c798dbc29846288d7cd9fbae032", GitTreeState:"clean", BuildDate:"2020-04-16T11:56:40Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.8", GitCommit:"ec6eb119b81be488b030e849b9e64fda4caaf33c", GitTreeState:"clean", BuildDate:"2020-03-13T02:33:08Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
Affected Resource(s)
Terraform Configuration Files
main.tf
variables.tf
Debug Output
Panic Output
https://gist.github.com/aronneagu/2480e338d056cd955d7a2154bd1f5a2d
Expected Behavior
What should have happened?
Actual Behavior
What actually happened?
Steps to Reproduce
Important Factoids
References
Community Note