This repository has been archived by the owner on Mar 29, 2023. It is now read-only.

Add basic Helm on GKE example #15

Merged: 16 commits, Feb 28, 2019
23 changes: 22 additions & 1 deletion .circleci/config.yml
@@ -3,6 +3,8 @@ defaults: &defaults
environment:
GRUNTWORK_INSTALLER_VERSION: v0.0.21
TERRATEST_LOG_PARSER_VERSION: v0.13.13
KUBERGRUNT_VERSION: v0.3.2
HELM_VERSION: v2.11.0
MODULE_CI_VERSION: v0.13.3
TERRAFORM_VERSION: 0.11.8
TERRAGRUNT_VERSION: NONE
@@ -26,6 +28,15 @@ install_gruntwork_utils: &install_gruntwork_utils
--go-version ${GOLANG_VERSION} \
--go-src-path test

install_helm_client: &install_helm_client
name: install helm client
command: |
# install helm client
curl -Lo helm.tar.gz https://storage.googleapis.com/kubernetes-helm/helm-${HELM_VERSION}-linux-amd64.tar.gz
tar -xvf helm.tar.gz
chmod +x linux-amd64/helm
sudo mv linux-amd64/helm /usr/local/bin/

version: 2
jobs:
build:
@@ -66,7 +77,17 @@ jobs:
- checkout
- run: echo 'export PATH=$HOME/terraform:$HOME/packer:$PATH' >> $BASH_ENV
- run:
<<: *install_gruntwork_utils

# Install helm
- run:
<<: *install_helm_client

# Install kubergrunt
- run:
name: Install kubergrunt
command: gruntwork-install --binary-name "kubergrunt" --repo "https://github.com/gruntwork-io/kubergrunt" --tag "${KUBERGRUNT_VERSION}"

- run:
name: update gcloud
command: |
139 changes: 139 additions & 0 deletions examples/gke-basic-tiller/README.md
@@ -0,0 +1,139 @@
# GKE Basic Helm Example

This example shows how to use Terraform to launch a GKE cluster with Helm configured and installed. We achieve this by calling out to our `kubergrunt` utility to securely deploy Tiller, the server component of Helm.


## Background

We strongly recommend reading [our guide on Helm](https://github.com/gruntwork-io/kubergrunt/blob/master/HELM_GUIDE.md)
before continuing with this guide for a background on Helm, Tiller, and the security model backing it.


## Overview

In this guide we will walk through the steps necessary to get up and running with deploying Tiller on GKE using this
module:

1. [Install the necessary tools](#installing-necessary-tools)
1. [Apply the Terraform code](#apply-the-terraform-code)
1. [Verify the deployment](#verify-tiller-deployment)
1. [Grant access to additional users](#granting-access-to-additional-users)
1. [Upgrade the deployed Tiller instance](#upgrading-deployed-tiller)

## Installing necessary tools

In addition to `terraform`, this guide relies on the `gcloud` and `kubectl` tools to manage the cluster. We also use
`kubergrunt` to manage the deployment of Tiller. You can read more about the reasoning behind this approach in
[the Appendix](#appendix-a-why-kubergrunt) of this guide.

This means that your system needs to be configured so that the `terraform`, `gcloud`, `kubectl`, `kubergrunt`,
and `helm` client utilities can be found on the system `PATH`. Here are the installation guides for each tool:

1. [`gcloud`](https://cloud.google.com/sdk/gcloud/)
1. [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/)
1. [`terraform`](https://learn.hashicorp.com/terraform/getting-started/install.html)
1. [`helm` client](https://docs.helm.sh/using_helm/#installing-helm)
1. [`kubergrunt`](https://github.com/gruntwork-io/kubergrunt#installation)

Make sure the binaries are discoverable in your `PATH` variable. See [this Stack Overflow
post](https://stackoverflow.com/questions/14637979/how-to-permanently-set-path-on-linux-unix) for instructions on
setting up your `PATH` on Unix, and [this
post](https://stackoverflow.com/questions/1618280/where-can-i-set-path-to-make-exe-on-windows) for instructions on
Windows.
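
As a quick sanity check before proceeding, you can confirm that every required binary resolves on your `PATH`. This is
a minimal sketch; adjust the tool list as needed:

```
# Report any required tool that is missing from the PATH
for tool in terraform gcloud kubectl helm kubergrunt; do
  command -v "$tool" > /dev/null || echo "$tool is missing from PATH"
done
```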

## Apply the Terraform Code

Now that all the prerequisite tools are installed, we are ready to deploy a GKE cluster with Tiller!

1. If you haven't already, clone this repo:
- `git clone https://github.com/gruntwork-io/terraform-google-gke.git`
1. Make sure you are in the `gke-basic-tiller` example folder:
- `cd examples/gke-basic-tiller`
1. Initialize terraform:
- `terraform init`
1. Check the terraform plan:
- `terraform plan`
1. Apply the terraform code:
- `terraform apply`
- Fill in the required variables based on your needs; a hypothetical example follows below. <!-- TODO: show example inputs here -->

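For illustration only, an invocation might look like the following. The variable names (`project`, `location`,
`cluster_name`) are placeholders and may not match this example's actual `variables.tf`:

```
# Hypothetical inputs; substitute values for your own GCP project
terraform apply \
  -var project="my-gcp-project" \
  -var location="europe-west3" \
  -var cluster_name="example-cluster"
```
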
**Note:** For simplicity, this example uses `kubergrunt` to install Tiller into the `kube-system` namespace. However,
in a production deployment we strongly recommend segregating the Tiller resources into a separate namespace.

As part of the deployment, `kubergrunt` will:

- Create a new TLS certificate key pair to use as the CA and upload it to Kubernetes as a `Secret` in the `kube-system`
namespace.
- Using the generated CA TLS certificate key pair, create a signed TLS certificate key pair to use to identify the
Tiller server and upload it to Kubernetes as a `Secret` in `kube-system`.
- Deploy Tiller with the following configuration:
  - TLS verification enabled
  - `Secrets` as the storage engine
  - deployed in the `kube-system` namespace using the `default` service account

- Grant access to the provided RBAC entity and configure the local helm client to use those credentials:
- Using the CA TLS certificate key pair, create a signed TLS certificate key pair to use to identify the client.
- Upload the certificate key pair as a `Secret` in the `kube-system` namespace.
- Grant the RBAC entity access to:
- Get the client certificate `Secret` (`kubergrunt helm configure` uses this to install the client certificate
key pair locally)
- Get and List pods in `kube-system` namespace (the `helm` client uses this to find the Tiller pod)
- Create a port forward to the Tiller pod (the `helm` client uses this to make requests to the Tiller pod)

- Install the client certificate key pair to the helm home directory so the client can use it.

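Under the hood, all of the above amounts to a single `kubergrunt helm deploy` invocation run by Terraform. Here is a
rough sketch of what that call looks like. The TLS flags mirror the arguments assembled in `dependencies.tf` in this
example, while the namespace, service account, and subject values shown are illustrative assumptions (consult
`kubergrunt helm deploy --help` for the authoritative flag list):

```
# Sketch only: flag values below are placeholders, not defaults
kubergrunt helm deploy \
  --tiller-namespace kube-system \
  --service-account default \
  --tls-private-key-algorithm ECDSA \
  --tls-private-key-ecdsa-curve P256 \
  --tls-common-name tiller \
  --tls-org Gruntwork \
  --client-tls-common-name admin \
  --client-tls-org Gruntwork
```
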
You should now have a working Tiller deployment with your helm client configured to access it.
So let's verify that in the next step!

## Verify Tiller Deployment

To start using `helm` with the configured credentials, you need to specify the following:

- that TLS verification should be enabled
- the TLS credentials to use for authentication
- the namespace where Tiller is deployed

These are specified through command-line arguments. If everything is configured correctly, you should be able to
access the deployed Tiller with the following arguments:

```
helm --tls --tls-verify --tiller-namespace NAMESPACE_OF_TILLER version
```

If you have access to Tiller, this should return both the client version and the server version of Helm.
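
If everything is wired up correctly, the output will look roughly like the following sample, which assumes the Helm
v2.11.0 client installed above (commit hashes elided):

```
Client: &version.Version{SemVer:"v2.11.0", GitCommit:"...", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.11.0", GitCommit:"...", GitTreeState:"clean"}
```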

Note that you need to pass the above CLI arguments every time you want to use `helm`. This can be cumbersome, so
`kubergrunt` installs an environment file into your helm home directory that you can dot-source to set environment
variables that tell `helm` to use those options:

```
. ~/.helm/env
helm version
```

## Appendix A: Why kubergrunt?

This Terraform example is not idiomatic Terraform code, in that it relies on an external binary, `kubergrunt`, as
opposed to implementing the functionality using pure Terraform providers. This approach has some noticeable drawbacks:

- You have to install extra tools, so it is not a minimal `terraform init && terraform apply` workflow.
- There are portability concerns in the setup, as there is no guarantee the tools work cross-platform. We make every
  effort to test across the major operating systems (Linux, Mac OSX, and Windows), but we can't possibly test every
  combination, so there are bound to be portability issues.
- You don't have the declarative Terraform features that you've come to love, such as `plan`, updates through `apply`,
  and `destroy`.

That said, we decided to use this approach because of limitations in the existing providers that prevent implementing
this functionality in pure Terraform code:

- The Helm provider does not have [a resource that manages
Tiller](https://github.com/terraform-providers/terraform-provider-helm/issues/134).
- The [TLS provider](https://www.terraform.io/docs/providers/tls/index.html) stores the certificate key pairs in plain
  text in the Terraform state.
- The Kubernetes Secret resource in the provider [also stores the value in plain text in the Terraform
state](https://www.terraform.io/docs/providers/kubernetes/r/secret.html).
- The grant and configure workflows are better suited to CLI tools than to Terraform.

Note that [we intend to implement a pure Terraform version of this example when the Helm provider is
updated](https://github.com/gruntwork-io/terraform-kubernetes-helm/issues/13), but we plan to continue maintaining the
`kubergrunt` approach for folks who are wary of leaking secrets into Terraform state.
20 changes: 20 additions & 0 deletions examples/gke-basic-tiller/dependencies.tf
@@ -0,0 +1,20 @@
# ---------------------------------------------------------------------------------------------------------------------
# INTERPOLATE AND CONSTRUCT KUBERGRUNT HELM DEPLOY COMMAND ARGUMENTS
# ---------------------------------------------------------------------------------------------------------------------

locals {
tls_config = "--tls-private-key-algorithm ${var.private_key_algorithm} ${local.tls_algorithm_config} --tls-common-name ${lookup(var.tls_subject, "common_name")} --tls-org ${lookup(var.tls_subject, "org")} ${local.tls_org_unit} ${local.tls_city} ${local.tls_state} ${local.tls_country}"
tls_algorithm_config = "${var.private_key_algorithm == "ECDSA" ? "--tls-private-key-ecdsa-curve ${var.private_key_ecdsa_curve}" : "--tls-private-key-rsa-bits ${var.private_key_rsa_bits}"}"
tls_org_unit = "${lookup(var.tls_subject, "org_unit", "") != "" ? "--tls-org-unit ${lookup(var.tls_subject, "org_unit", "")}" : ""}"
tls_city = "${lookup(var.tls_subject, "city", "") != "" ? "--tls-city ${lookup(var.tls_subject, "city", "")}" : ""}"
tls_state = "${lookup(var.tls_subject, "state", "") != "" ? "--tls-state ${lookup(var.tls_subject, "state", "")}" : ""}"
tls_country = "${lookup(var.tls_subject, "country", "") != "" ? "--tls-country ${lookup(var.tls_subject, "country", "")}" : ""}"

client_tls_config = "--client-tls-common-name ${lookup(var.client_tls_subject, "common_name")} --client-tls-org ${lookup(var.client_tls_subject, "org")} ${local.client_tls_org_unit} ${local.client_tls_city} ${local.client_tls_state} ${local.client_tls_country}"
client_tls_org_unit = "${lookup(var.client_tls_subject, "org_unit", "") != "" ? "--client-tls-org-unit ${lookup(var.client_tls_subject, "org_unit", "")}" : ""}"
client_tls_city = "${lookup(var.client_tls_subject, "city", "") != "" ? "--client-tls-city ${lookup(var.client_tls_subject, "city", "")}" : ""}"
client_tls_state = "${lookup(var.client_tls_subject, "state", "") != "" ? "--client-tls-state ${lookup(var.client_tls_subject, "state", "")}" : ""}"
client_tls_country = "${lookup(var.client_tls_subject, "country", "") != "" ? "--client-tls-country ${lookup(var.client_tls_subject, "country", "")}" : ""}"

undeploy_args = "${var.force_undeploy ? "--force" : ""} ${var.undeploy_releases ? "--undeploy-releases" : ""}"
}
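
# Illustrative example (an assumption, not part of this module's docs): given
#   tls_subject = { common_name = "tiller", org = "Gruntwork" }
# with private_key_algorithm = "ECDSA" and private_key_ecdsa_curve = "P256",
# local.tls_config evaluates (modulo extra whitespace from the empty optional
# flags) to:
#   --tls-private-key-algorithm ECDSA --tls-private-key-ecdsa-curve P256 --tls-common-name tiller --tls-org Gruntwork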