This repository has been archived by the owner on Mar 29, 2023. It is now read-only.

Merge pull request #29 from gruntwork-io/yori-update-to-latest-tiller
Update to latest method of deploying tiller
yorinasub17 authored May 6, 2019
2 parents 2f95540 + df1255c commit 0f3cf50
Showing 7 changed files with 260 additions and 79 deletions.
2 changes: 1 addition & 1 deletion .circleci/config.yml
@@ -3,7 +3,7 @@ defaults: &defaults
environment:
GRUNTWORK_INSTALLER_VERSION: v0.0.21
TERRATEST_LOG_PARSER_VERSION: v0.13.13
KUBERGRUNT_VERSION: v0.3.6
KUBERGRUNT_VERSION: v0.3.8
HELM_VERSION: v2.11.0
MODULE_CI_VERSION: v0.13.3
TERRAFORM_VERSION: 0.11.8
45 changes: 25 additions & 20 deletions examples/gke-basic-tiller/README.md
@@ -1,7 +1,10 @@
# GKE Basic Helm Example

This example shows how to use Terraform to launch a GKE cluster with Helm configured and installed. We achieve this by
calling out to our `kubergrunt` utility in order to securely deploy Tiller - the server component of Helm.
utilizing the [k8s-tiller module in the terraform-kubernetes-helm
repository](https://github.com/gruntwork-io/terraform-kubernetes-helm/tree/master/modules/k8s-tiller).
Note that we utilize our `kubergrunt` utility to securely manage TLS certificate key pairs used by Tiller - the server
component of Helm.

## Background

@@ -11,7 +14,7 @@ before continuing with this guide for a background on Helm, Tiller, and the secu

## Overview

In this guide we will walk through the steps necessary to get up and running with deploying Tiller on GKE using this
module. Here are the steps:

1. [Install the necessary tools](#installing-necessary-tools)
@@ -23,8 +26,8 @@ module. Here are the steps:
## Installing necessary tools

In addition to `terraform`, this guide relies on the `gcloud` and `kubectl` tools to manage the cluster. In addition
we use `kubergrunt` to manage the deployment of Tiller. You can read more about the decision behind this approach in
[the Appendix](#appendix-a-why-kubergrunt) of this guide.
we use `kubergrunt` to manage the TLS certificate key pairs for Tiller. You can read more about the decision behind this
approach in [the Appendix](#appendix-a-why-kubergrunt) of this guide.

This means that your system needs to be configured to be able to find `terraform`, `gcloud`, `kubectl`, `kubergrunt`,
and `helm` client utilities on the system `PATH`. Here are the installation guides for each tool:
@@ -33,7 +36,7 @@ and `helm` client utilities on the system `PATH`. Here are the installation guid
1. [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/)
1. [`terraform`](https://learn.hashicorp.com/terraform/getting-started/install.html)
1. [`helm` client](https://docs.helm.sh/using_helm/#installing-helm)
1. [`kubergrunt`](https://github.com/gruntwork-io/kubergrunt#installation) (Minimum version: v0.3.6)
1. [`kubergrunt`](https://github.com/gruntwork-io/kubergrunt#installation) (Minimum version: v0.3.8)
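
A quick, non-authoritative way to confirm that every binary is discoverable from a POSIX shell (a hedged sketch; adjust for your environment):

```sh
# Report which of the required CLIs are on the PATH and where they live.
for tool in gcloud kubectl terraform helm kubergrunt; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "found:   $tool -> $(command -v "$tool")"
  else
    echo "missing: $tool"
  fi
done
```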

Make sure the binaries are discoverable in your `PATH` variable. See [this Stack Overflow
post](https://stackoverflow.com/questions/14637979/how-to-permanently-set-path-on-linux-unix) for instructions on
@@ -57,21 +60,26 @@ Now that all the prerequisite tools are installed, we are ready to deploy the GK
- `terraform apply`
- Fill in the required variables based on your needs. <!-- TODO: show example inputs here -->
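
As a rough illustration of example inputs (hedged: the values below are placeholders, and any other required variables will be prompted for interactively):

```sh
# Initialize the working directory, then apply with example inputs.
# The variable names (project, region, cluster_name) come from this example; the values are hypothetical.
terraform init
terraform apply \
  -var 'project=my-gcp-project' \
  -var 'region=europe-west3' \
  -var 'cluster_name=example-cluster'
```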

**Note:** For simplicity this example uses `kubergrunt` to install Tiller into the `kube-system` namespace. However in
a production deployment we strongly recommend you segregate the Tiller resources into a separate namespace.
**Note:** For simplicity this example installs Tiller into the `kube-system` namespace. However in a production
deployment we strongly recommend you segregate the Tiller resources into a separate namespace.

As part of the deployment, `kubergrunt` will:
This Terraform code will:

- Create a new TLS certificate key pair to use as the CA and upload it to Kubernetes as a `Secret` in the `kube-system`
namespace.
- Using the generated CA TLS certificate key pair, create a signed TLS certificate key pair to use to identify the
Tiller server and upload it to Kubernetes as a `Secret` in `kube-system`.
- Deploy a publicly accessible GKE cluster
- Use `kubergrunt` to:
- Create a new TLS certificate key pair to use as the CA and upload it to Kubernetes as a `Secret` in the
`kube-system` namespace.
- Using the generated CA TLS certificate key pair, create a signed TLS certificate key pair to use to identify the
Tiller server and upload it to Kubernetes as a `Secret` in `kube-system`.

- Create a new `ServiceAccount` for Tiller in the `kube-system` namespace and bind admin permissions to it.
- Deploy Tiller with the following configurations turned on:
- TLS verification
- `Secrets` as the storage engine
- Provisioned in the `kube-system` namespace using the `default` service account.

- Grant access to the provided RBAC entity and configure the local helm client to use those credentials:
- Once Tiller is deployed, once again call out to `kubergrunt` to grant access to the provided RBAC entity and configure
the local helm client to use those credentials:
- Using the CA TLS certificate key pair, create a signed TLS certificate key pair to use to identify the client.
- Upload the certificate key pair to the `kube-system`.
- Grant the RBAC entity access to:
@@ -82,8 +90,8 @@ As part of the deployment, `kubergrunt` will:

- Install the client certificate key pair to the helm home directory so the client can use it.
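
As a quick sanity check (a hedged sketch; the label selector matches the `--secret-label` flags used by the `kubergrunt tls gen` calls in this example's `main.tf`), you can list the Secrets created along the way:

```sh
# List the Tiller TLS Secrets that kubergrunt uploads to the kube-system namespace.
kubectl get secrets -n kube-system -l gruntwork.io/tiller-credentials=true
```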

You should now have a working Tiller deployment with your helm client configured to access it.
So let's verify that in the next step!
At the end of the `terraform apply`, you should now have a working Tiller deployment with your helm client configured to
access it. So let's verify that in the next step!

## Verify Tiller Deployment
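
A minimal, hedged sketch of such a check (assuming `kubergrunt helm configure` populated `~/.helm` as described above; `tiller-deploy` is the conventional Deployment name and may differ in your cluster):

```sh
# Confirm the Tiller Deployment finished rolling out in the kube-system namespace.
kubectl rollout status deployment tiller-deploy -n kube-system

# Confirm the local helm client can reach Tiller over TLS with the configured credentials.
helm version --tls --tiller-namespace kube-system
```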

@@ -126,14 +134,11 @@ to implementing the functionalities using pure Terraform providers. This approac
That said, we decided to use this approach because of limitations in the existing providers to implement the
functionalities here in pure Terraform code:

- The Helm provider does not have [a resource that manages
Tiller](https://github.com/terraform-providers/terraform-provider-helm/issues/134).
- The [TLS provider](https://www.terraform.io/docs/providers/tls/index.html) stores the certificate key pairs in plain
text into the Terraform state.
- The Kubernetes Secret resource in the provider [also stores the value in plain text in the Terraform
state](https://www.terraform.io/docs/providers/kubernetes/r/secret.html).
- The grant and configure workflows are better suited as CLI tools than in Terraform.

Note that [we intend to implement a pure Terraform version of this when the Helm provider is
updated](https://github.com/gruntwork-io/terraform-kubernetes-helm/issues/13), but we plan to continue to maintain the
`kubergrunt` approach for folks who are wary of leaking secrets into Terraform state.
Note that we intend to implement a pure Terraform version of this in the near future, but we plan to continue to
maintain the `kubergrunt` approach for folks who are wary of leaking secrets into Terraform state.
115 changes: 100 additions & 15 deletions examples/gke-basic-tiller/main.tf
@@ -189,11 +189,11 @@ resource "random_string" "suffix" {
}

module "vpc_network" {
source = "git::[email protected]:gruntwork-io/terraform-google-network.git//modules/vpc-network?ref=v0.0.2"
source = "git::[email protected]:gruntwork-io/terraform-google-network.git//modules/vpc-network?ref=v0.0.3"

name = "${var.cluster_name}-network-${random_string.suffix.result}"
project = "${var.project}"
region = "${var.region}"
name_prefix = "${var.cluster_name}-network-${random_string.suffix.result}"
project = "${var.project}"
region = "${var.region}"

cidr_block = "${var.vpc_cidr_block}"
secondary_cidr_block = "${var.vpc_secondary_cidr_block}"
@@ -212,6 +212,14 @@ resource "null_resource" "configure_kubectl" {
depends_on = ["google_container_node_pool.node_pool"]
}

# Create a ServiceAccount for Tiller
resource "kubernetes_service_account" "tiller" {
metadata {
name = "tiller"
namespace = "${local.tiller_namespace}"
}
}

resource "kubernetes_cluster_role_binding" "user" {
metadata {
name = "admin-user"
@@ -229,14 +237,16 @@ resource "kubernetes_cluster_role_binding" "user" {
api_group = "rbac.authorization.k8s.io"
}

# We give the Tiller ServiceAccount cluster admin status so that we can deploy anything in any namespace using this
# Tiller instance for testing purposes. In production, you might want to use a more restricted role.
subject {
# this is a workaround for https://github.com/terraform-providers/terraform-provider-kubernetes/issues/204.
# we have to set an empty api_group or the k8s call will fail. It will be fixed in v1.5.2 of the k8s provider.
api_group = ""

kind = "ServiceAccount"
name = "default"
namespace = "kube-system"
name = "${kubernetes_service_account.tiller.metadata.0.name}"
namespace = "${local.tiller_namespace}"
}

subject {
@@ -246,29 +256,104 @@ resource "kubernetes_cluster_role_binding" "user" {
}
}

# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# GENERATE TLS CERTIFICATES FOR USE WITH TILLER
# This will use kubergrunt to generate TLS certificates, and upload them as Kubernetes Secrets that can then be used by
# Tiller.
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

resource "null_resource" "tiller_tls_certs" {
provisioner "local-exec" {
command = <<-EOF
# Generate CA TLS certs
kubergrunt tls gen --ca --namespace kube-system --secret-name ${local.tls_ca_secret_name} --secret-label gruntwork.io/tiller-namespace=${local.tiller_namespace} --secret-label gruntwork.io/tiller-credentials=true --secret-label gruntwork.io/tiller-credentials-type=ca --tls-subject-json '${jsonencode(var.tls_subject)}' ${local.tls_algorithm_config} ${local.kubectl_auth_config}
# Then use that CA to generate server TLS certs
kubergrunt tls gen --namespace ${local.tiller_namespace} --ca-secret-name ${local.tls_ca_secret_name} --ca-namespace kube-system --secret-name ${local.tls_secret_name} --secret-label gruntwork.io/tiller-namespace=${local.tiller_namespace} --secret-label gruntwork.io/tiller-credentials=true --secret-label gruntwork.io/tiller-credentials-type=server --tls-subject-json '${jsonencode(var.tls_subject)}' ${local.tls_algorithm_config} ${local.kubectl_auth_config}
EOF

# Use environment variables for Kubernetes credentials to avoid leaking into the logs
environment = {
KUBECTL_SERVER_ENDPOINT = "${data.template_file.gke_host_endpoint.rendered}"
KUBECTL_CA_DATA = "${base64encode(data.template_file.cluster_ca_certificate.rendered)}"
KUBECTL_TOKEN = "${data.template_file.access_token.rendered}"
}
}
}

# ---------------------------------------------------------------------------------------------------------------------
# DEPLOY TILLER TO THE GKE CLUSTER USING KUBERGRUNT
# DEPLOY TILLER TO THE GKE CLUSTER
# ---------------------------------------------------------------------------------------------------------------------

# We install an older version of Tiller as the provider expects this.
resource "null_resource" "tiller" {
module "tiller" {
source = "git::[email protected]:gruntwork-io/terraform-kubernetes-helm.git//modules/k8s-tiller?ref=v0.3.0"

tiller_service_account_name = "${kubernetes_service_account.tiller.metadata.0.name}"
tiller_service_account_token_secret_name = "${kubernetes_service_account.tiller.default_secret_name}"
tiller_tls_secret_name = "${local.tls_secret_name}"
namespace = "${local.tiller_namespace}"
tiller_image_version = "${local.tiller_version}"

# Kubergrunt will store the private key under the key "tls.pem" in the corresponding Secret resource, which will be
# accessed as a file when mounted into the container.
tiller_tls_key_file_name = "tls.pem"

dependencies = ["${null_resource.tiller_tls_certs.id}", "${kubernetes_cluster_role_binding.user.id}"]
}

# The Deployment resources created in the module call to `k8s-tiller` will finish creating before the rollout is
# complete. We use kubergrunt here to wait for the deployment to complete, so that when this resource is done creating,
# any resources that depend on this can assume Tiller is successfully deployed and up at that point.
resource "null_resource" "wait_for_tiller" {
provisioner "local-exec" {
command = "kubergrunt helm deploy --service-account default --resource-namespace default --tiller-namespace kube-system ${local.tls_algorithm_config} --tls-subject-json '${jsonencode(var.tls_subject)}' --client-tls-subject-json '${jsonencode(var.client_tls_subject)}' --helm-home ${pathexpand("~/.helm")} --tiller-version v2.11.0 --rbac-user ${data.google_client_openid_userinfo.terraform_user.email}"
command = "kubergrunt helm wait-for-tiller --tiller-namespace ${local.tiller_namespace} --tiller-deployment-name ${module.tiller.deployment_name} --expected-tiller-version ${local.tiller_version} ${local.kubectl_auth_config}"
}
}

# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# CONFIGURE OPERATOR HELM CLIENT
# To allow usage of the helm client immediately, we grant access to the admin RBAC user and configure the local helm
# client.
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

resource "null_resource" "grant_and_configure_helm" {
provisioner "local-exec" {
command = "kubergrunt helm undeploy --helm-home ${pathexpand("~/.helm")} --tiller-namespace kube-system ${local.undeploy_args}"
when = "destroy"
command = <<-EOF
kubergrunt helm grant --tiller-namespace ${local.tiller_namespace} --tls-subject-json '${jsonencode(var.client_tls_subject)}' --rbac-user ${data.google_client_openid_userinfo.terraform_user.email} ${local.kubectl_auth_config}
kubergrunt helm configure --helm-home ${pathexpand("~/.helm")} --tiller-namespace ${local.tiller_namespace} --resource-namespace ${local.resource_namespace} --rbac-user ${data.google_client_openid_userinfo.terraform_user.email} ${local.kubectl_auth_config}
EOF
}

depends_on = ["null_resource.configure_kubectl", "kubernetes_cluster_role_binding.user"]
depends_on = ["null_resource.wait_for_tiller"]
}

# Interpolate and construct kubergrunt deploy command args
# ---------------------------------------------------------------------------------------------------------------------
# COMPUTATIONS
# These locals set constants and compute various useful information used throughout this Terraform module.
# ---------------------------------------------------------------------------------------------------------------------

locals {
# For this example, we hardcode our tiller namespace to kube-system. In production, you might want to consider using a
# different Namespace.
tiller_namespace = "kube-system"

# For this example, we setup Tiller to manage the default Namespace.
resource_namespace = "default"

# We install an older version of Tiller to match the Helm library version used in the Terraform helm provider.
tiller_version = "v2.11.0"

# We store the CA Secret in the kube-system Namespace, given that only cluster admins should access these.
tls_ca_secret_namespace = "kube-system"

# We name the TLS Secrets to be compatible with the `kubergrunt helm grant` command
tls_ca_secret_name = "${local.tiller_namespace}-namespace-tiller-ca-certs"
tls_secret_name = "tiller-certs"
tls_algorithm_config = "--tls-private-key-algorithm ${var.private_key_algorithm} ${var.private_key_algorithm == "ECDSA" ? "--tls-private-key-ecdsa-curve ${var.private_key_ecdsa_curve}" : "--tls-private-key-rsa-bits ${var.private_key_rsa_bits}"}"

undeploy_args = "${var.force_undeploy ? "--force" : ""} ${var.undeploy_releases ? "--undeploy-releases" : ""}"
# These will be filled in by the shell environment
kubectl_auth_config = "--kubectl-server-endpoint \"$KUBECTL_SERVER_ENDPOINT\" --kubectl-certificate-authority \"$KUBECTL_CA_DATA\" --kubectl-token \"$KUBECTL_TOKEN\""
}
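
The `KUBECTL_*` values referenced by `kubectl_auth_config` are supplied through the provisioners' `environment` blocks so the cluster credentials never appear in logs. Purely as a hedged illustration of the same pattern run by hand (the endpoint, CA file, and Deployment name below are placeholders):

```sh
# Export the same env vars the provisioners set, then call kubergrunt directly.
export KUBECTL_SERVER_ENDPOINT="https://<your-gke-endpoint>"       # placeholder
export KUBECTL_CA_DATA="$(base64 < cluster-ca.pem | tr -d '\n')"   # base64-encoded cluster CA certificate
export KUBECTL_TOKEN="$(gcloud auth print-access-token)"

kubergrunt helm wait-for-tiller \
  --tiller-namespace kube-system \
  --tiller-deployment-name tiller-deploy \
  --expected-tiller-version v2.11.0 \
  --kubectl-server-endpoint "$KUBECTL_SERVER_ENDPOINT" \
  --kubectl-certificate-authority "$KUBECTL_CA_DATA" \
  --kubectl-token "$KUBECTL_TOKEN"
```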

# ---------------------------------------------------------------------------------------------------------------------
8 changes: 4 additions & 4 deletions examples/gke-private-cluster/main.tf
@@ -148,11 +148,11 @@ module "gke_service_account" {
# ---------------------------------------------------------------------------------------------------------------------

module "vpc_network" {
source = "git::[email protected]:gruntwork-io/terraform-google-network.git//modules/vpc-network?ref=v0.0.2"
source = "git::[email protected]:gruntwork-io/terraform-google-network.git//modules/vpc-network?ref=v0.0.3"

name = "${var.cluster_name}-network-${random_string.suffix.result}"
project = "${var.project}"
region = "${var.region}"
name_prefix = "${var.cluster_name}-network-${random_string.suffix.result}"
project = "${var.project}"
region = "${var.region}"

cidr_block = "${var.vpc_cidr_block}"
secondary_cidr_block = "${var.vpc_secondary_cidr_block}"
