Merge pull request #41 from DataDog/aaron.kalin/new_terraform
Unified Terraform Deployments (Starting with Digital Ocean)
Aaron Kalin authored Oct 30, 2020
2 parents d997cbc + e515998 commit e960b94
Showing 14 changed files with 248 additions and 44 deletions.
10 changes: 10 additions & 0 deletions .gitignore
@@ -7,3 +7,13 @@
.vscode
sandbox/tmp/
__pycache__

# Terraform
*.tfstate
*.tfstate.backup
*.tfstate.lock.info
.terraform/
kube_config_server.yaml

# HELM
helm-values.yaml
27 changes: 1 addition & 26 deletions README.md
@@ -31,32 +31,7 @@ Feel free to [follow along](https://www.katacoda.com/DataDog/scenarios/ecommerce

## Deploying the application

The `deploy` folder contains the different tested ways in which this application can be deployed:

* `aws`: Deployments to Amazon Web Services
* `aws/ecs`: Deployment to Amazon ECS
* `gcp`: Deployments to Google Cloud Platform
* `gke`: Deployment to Google Kubernetes Engine
* `vms`: Deployment to GCP VMs using Terraform
* `generic-k8s`: Generic Kubernetes manifests
* `openshift`: Manifests to deploy the application to Openshift
* `docker-compose`: Docker compose to run the application locally

### Running the Application Locally

The application itself runs on `docker-compose`. First, install Docker along with docker-compose. Then sign up with a trial [Datadog account](https://www.datadoghq.com/), and grab your API key from the Integrations->API tab.

Each of the scenarios uses a different `docker-compose` file in the `docker-compose-files` folder. To run any of the scenarios:

```bash
$ git clone https://github.com/DataDog/ecommerce-workshop.git
$ cd ecommerce-workshop/docker-compose-files
$ POSTGRES_USER=postgres POSTGRES_PASSWORD=postgres DD_API_KEY=<YOUR_API_KEY> docker-compose -f <docker_compose_with_your_selected_scenario> up
```

With this, the docker images will be pulled, and you'll be able to visit the app.

When you go to the homepage, you'll notice that, although the site takes a while to load, it mostly looks as if it works. Indeed, there are only a few views that are broken. Try navigating around the site to see if you can't discover the broken pieces.
The `deploy` folder contains the different tested ways in which this application can be deployed.

## Enabling Real User Monitoring (RUM)

39 changes: 26 additions & 13 deletions deploy/README.md
@@ -1,26 +1,39 @@
This folder contains the different tested ways in which this application can be deployed:

* `aws`: Deployments to Amazon Web Services
  * `aws/ecs`: Deployment to Amazon ECS
* `datadog`: Deploying Datadog via Helm or Kubernetes manifests
* `gcp`: Deployments to Google Cloud Platform
  * `gke`: Deployment to Google Kubernetes Engine
  * `vms`: Deployment to GCP VMs using Terraform
* `generic-k8s`: Generic Kubernetes manifests
* `openshift`: Manifests to deploy the application to OpenShift
* `docker-compose`: Docker Compose files to run the application locally
* `terraform`: Terraform-based deployments separated by platform

## Running the Application Locally

Look at the `docker-compose` folder README for details.

## Installing Datadog via Helm Chart

### Requirements

* Install [Helm v3](https://helm.sh/docs/intro/install/)
* [Generate a Datadog API Key](https://app.datadoghq.com/account/settings#api)
* Optionally, [generate a Datadog Application Key](https://app.datadoghq.com/account/settings#api) if you are deploying the Cluster Agent

### Installing

* Make sure you have a working `kubectl`; you may need to switch to the platform folder first
* Run `helm repo add datadog https://helm.datadoghq.com` to track our official Helm repo
* Run `helm repo update` to sync the latest chart
* Create a secret for the API key: `export DATADOG_SECRET_API_KEY_NAME=datadog-api-secret && kubectl create secret generic $DATADOG_SECRET_API_KEY_NAME --from-literal api-key="<DATADOG_API_KEY>" --namespace="default"`
* If you want to install the Cluster Agent, also create a secret for the application key: `export DATADOG_SECRET_APP_KEY_NAME=datadog-app-secret && kubectl create secret generic $DATADOG_SECRET_APP_KEY_NAME --from-literal app-key="<DATADOG_APP_KEY>" --namespace="default"`
* Make your own copy of `helm-values.yaml.example` in the datadog folder (`cp datadog/helm-values.yaml.example datadog/helm-values.yaml`) and make any changes you would like, or just deploy the defaults
* If you are not installing the Cluster Agent, run `helm install datadog-agent datadog/datadog --set datadog.apiKeyExistingSecret=$DATADOG_SECRET_API_KEY_NAME --values datadog/helm-values.yaml`
* If you are installing the Cluster Agent, run `helm install datadog-agent datadog/datadog --set datadog.apiKeyExistingSecret=$DATADOG_SECRET_API_KEY_NAME --set datadog.appKeyExistingSecret=$DATADOG_SECRET_APP_KEY_NAME --values datadog/helm-values.yaml`

If you ever want to change the values in the chart, you can apply them via a Helm upgrade:

`helm upgrade datadog-agent datadog/datadog --set datadog.apiKeyExistingSecret=$DATADOG_SECRET_API_KEY_NAME --set datadog.appKeyExistingSecret=$DATADOG_SECRET_APP_KEY_NAME --values datadog/helm-values.yaml`
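
Once installed, a quick sanity check might look like this (a sketch assuming the release name `datadog-agent` and the default namespace used above):

```bash
# Confirm the Helm release and that agent pods are running.
helm status datadog-agent
kubectl get pods --namespace default | grep datadog
```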
41 changes: 41 additions & 0 deletions deploy/datadog/helm-values.yaml.example
@@ -0,0 +1,41 @@
# This is a reasonable set of default HELM settings.
# Enable/disable as you see fit for your deployment.

# Enable this block if you want to use the newest agent
# agents:
#   image:
#     repository: datadog/agent
#     tag: latest
#     pullPolicy: Always

# Enable this block to get Kubernetes Beta metrics (https://www.datadoghq.com/blog/explore-kubernetes-resources-with-datadog/)
# clusterAgent:
#   enabled: true
#   image:
#     repository: datadog/cluster-agent
#     tag: latest
#     pullPolicy: Always

datadog:
  clusterName: "ecommerce"

  apm:
    enabled: true

  # Enable this block to get all logs from the pods/containers
  # logs:
  #   enabled: true
  #   containerCollectAll: true

  # Enable this block for the Kubernetes Beta metrics
  # orchestratorExplorer:
  #   enabled: true

  # Enable this block for process collection. It is required for Kubernetes Beta metrics
  # processAgent:
  #   processCollection: true

  # Enable this block for network and DNS metric collection
  # systemProbe:
  #   enabled: true
  #   collectDNSStats: true
16 changes: 15 additions & 1 deletion deploy/docker-compose/README.md
@@ -1,4 +1,4 @@
# Docker Compose Files for Live Development
# Docker Compose Files for Live Development or Local Deployment

These files allow for different ways of deploying the ecommerce application locally for development.

@@ -11,3 +11,17 @@ They currently exist in three different versions:
`docker-compose-broken-instrumented`: View a broken application and diagnose it with Datadog

`docker-compose-fixed-instrumented`: View a fixed application and compare it to the previously broken deployment.

The application itself runs on `docker-compose`. First, install Docker along with docker-compose. Then sign up with a trial [Datadog account](https://www.datadoghq.com/), and grab your API key from the Integrations->API tab.

To run any of the scenarios:

```bash
$ git clone https://github.com/DataDog/ecommerce-workshop.git
$ cd ecommerce-workshop/deploy/docker-compose
$ POSTGRES_USER=postgres POSTGRES_PASSWORD=postgres DD_API_KEY=<YOUR_API_KEY> docker-compose -f <docker_compose_with_your_selected_scenario> up
```

With this, the docker images will be pulled, and you'll be able to visit the app.

When you go to the homepage, you'll notice that, although the site takes a while to load, it mostly looks as if it works. Indeed, there are only a few views that are broken. Try navigating around the site to see if you can't discover the broken pieces.
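
If the site doesn't come up, a couple of hedged commands for inspecting the running scenario (using the same placeholder compose file name as above):

```bash
# Check container status for the scenario you started, then tail its logs.
docker-compose -f <docker_compose_with_your_selected_scenario> ps
docker-compose -f <docker_compose_with_your_selected_scenario> logs -f
```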
2 changes: 2 additions & 0 deletions deploy/generic-k8s/ecommerce-app/advertisements.yaml
@@ -38,6 +38,8 @@ spec:
value: "user"
- name: DATADOG_SERVICE_NAME
value: "advertisements-service"
- name: DD_TAGS
value: "env:ruby-shop"
- name: DD_AGENT_HOST
valueFrom:
fieldRef:
2 changes: 2 additions & 0 deletions deploy/generic-k8s/ecommerce-app/discounts.yaml
@@ -46,6 +46,8 @@ spec:
value: "true"
- name: DD_ANALYTICS_ENABLED
value: "true"
- name: DD_TAGS
value: "env:ruby-shop"
ports:
- containerPort: 5001
resources: {}
11 changes: 7 additions & 4 deletions deploy/generic-k8s/ecommerce-app/frontend.yaml
@@ -41,6 +41,8 @@ spec:
fieldPath: status.hostIP
- name: DD_LOGS_INJECTION
value: "true"
- name: DD_TAGS
value: "env:ruby-shop"
- name: DD_ANALYTICS_ENABLED
value: "true"
# Enable RUM
@@ -69,10 +71,11 @@ metadata:
name: frontend
spec:
   ports:
-    - port: 3000
-      protocol: TCP
-      targetPort: 3000
+    - port: 80
+      protocol: TCP
+      targetPort: 3000
+      name: http
   selector:
     service: frontend
     app: ecommerce
-  type: ClusterIP
+  type: LoadBalancer
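
Because the frontend `Service` is now a `LoadBalancer` listening on port 80, a hedged way to find its external address after applying this manifest:

```bash
# EXTERNAL-IP shows <pending> until the cloud provider finishes provisioning the load balancer.
kubectl get service frontend --watch
```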
17 changes: 17 additions & 0 deletions deploy/terraform/README.md
@@ -0,0 +1,17 @@
# Terraform Deployment

This area of the repo is dedicated to Terraform-based deployments of the ecommerce-workshop application.

## Requirements

* [Terraform 0.13+](https://terraform.io) (Check your current version with `terraform version`)

## Platform Folders

To set up the infrastructure for any of the deployment options, we have created separate provider-based folders for you to use. Just `cd` into the provider of your preference and follow its README for further instructions.

## Adding a new platform

To add another deployment platform, copy an existing one and name the folder after the platform target. For example, you can `cp -R digitalocean aks` to start an Azure Kubernetes Service deployment target, but you will have to modify all the files to use the Azure Terraform provider and its resources. Of course, don't forget to update the README with instructions.

Your platform folder should also define an output that writes the kubectl configuration to the deploy folder. If you need an example, look at `output.tf` in the digitalocean folder.
41 changes: 41 additions & 0 deletions deploy/terraform/digitalocean/README.md
@@ -0,0 +1,41 @@
# DigitalOcean

This Terraform module sets up and configures a k8s cluster as a deployment target for both Datadog and the ecommerce application.

## Initial Setup

* Export the following environment variable:
* `TF_VAR_do_token` with your DigitalOcean API Token
* Review the variables in the `variables.tf` file if you want to make any adjustments
* Run `terraform init` to install all of the needed terraform modules
* Run `terraform apply` to spin up the cluster; the cluster's location will be output at the end. (The whole sequence is sketched below.)
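
A minimal end-to-end sketch of the steps above, assuming you substitute your own DigitalOcean API token:

```bash
# Run from deploy/terraform/digitalocean; the token value is your own DigitalOcean API token.
export TF_VAR_do_token="<YOUR_DIGITALOCEAN_API_TOKEN>"
terraform init    # downloads the providers pinned in versions.tf
terraform apply   # review the plan, then confirm to create the cluster
```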

## Automatic `kubectl` config

When `terraform apply` succeeds, this Terraform configuration automatically writes a `kube_config_server.yaml` file for use with `kubectl`. To use that config automatically, export the `KUBECONFIG` environment variable pointing to the file like so:

```bash
export KUBECONFIG="$(pwd)/kube_config_server.yaml"
```

If you use [direnv](https://direnv.net/), you can put the above line into an `.envrc` in this directory so the config is loaded automatically each time you visit this directory. If you can't or don't want to do that, just make sure you export the `KUBECONFIG` variable as above, or put it in front of the `kubectl` command so it knows where to find the kubeconfig file. Both options are sketched below.
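
For example (a hedged sketch; both variants assume you are in this directory and that the apply step already wrote `kube_config_server.yaml`):

```bash
# Option 1: a one-line .envrc for direnv users (run `direnv allow` after creating it)
echo 'export KUBECONFIG="$(pwd)/kube_config_server.yaml"' > .envrc
direnv allow

# Option 2: set the variable inline for a single command instead
KUBECONFIG="$(pwd)/kube_config_server.yaml" kubectl get nodes
```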

To verify you have this configured correctly, try the following command:

```bash
$ kubectl get pods
```

You should see output like this:

```bash
No resources found in default namespace.
```

Now that you have a working Kubernetes cluster, you can deploy the Datadog Helm chart or the ecommerce manifests to start monitoring. For those instructions, see the `README.md` in the deploy folder above this one.
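
For instance, once the Datadog agent is in place, one hedged way to apply the generic manifests from this folder (the relative path follows the repository layout in this commit):

```bash
# Apply the ecommerce app manifests that live under deploy/generic-k8s/ in this repo.
kubectl apply -f ../../generic-k8s/ecommerce-app/
```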

## Important Notes

If you want to upgrade k8s versions on DigitalOcean, you will need at least two nodes (or one larger node) to perform this operation, due to resource constraints in the upgrade process.

Once you destroy the cluster, check for any leftover load balancers in your DigitalOcean account. Any time you apply a k8s manifest containing a `LoadBalancer` service, Kubernetes spins up an actual DigitalOcean load balancer for you, which is unknown to Terraform and won't be cleaned up when you destroy the k8s cluster.
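
If you have the `doctl` CLI installed and authenticated, a hedged way to spot and remove strays:

```bash
# List any load balancers left behind after `terraform destroy`, then delete orphans by ID.
doctl compute load-balancer list
doctl compute load-balancer delete <LOAD_BALANCER_ID>
```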
31 changes: 31 additions & 0 deletions deploy/terraform/digitalocean/main.tf
@@ -0,0 +1,31 @@
data "digitalocean_kubernetes_versions" "stable" {
version_prefix = "1.19."
}

resource "digitalocean_kubernetes_cluster" "k8s_cluster" {
# See variables.tf for adjustable options
name = var.cluster_name
region = var.region
# Set this to false if you want to disable automatic upgrading of your cluster
auto_upgrade = true
version = data.digitalocean_kubernetes_versions.stable.latest_version
tags = ["development"]

node_pool {
name = var.node_pool_name
size = var.node_size
node_count = var.node_count
}
}

provider "digitalocean" {
token = var.do_token
}

provider "kubernetes" {
host = digitalocean_kubernetes_cluster.k8s_cluster.endpoint
token = digitalocean_kubernetes_cluster.k8s_cluster.kube_config[0].token
cluster_ca_certificate = base64decode(
digitalocean_kubernetes_cluster.k8s_cluster.kube_config[0].cluster_ca_certificate
)
}
5 changes: 5 additions & 0 deletions deploy/terraform/digitalocean/output.tf
@@ -0,0 +1,5 @@
resource "local_file" "kube_config_server_yaml" {
filename = format("%s/../../%s", path.root, "kube_config_server.yaml")
sensitive_content = digitalocean_kubernetes_cluster.k8s_cluster.kube_config[0].raw_config
file_permission = "0600"
}
33 changes: 33 additions & 0 deletions deploy/terraform/digitalocean/variables.tf
@@ -0,0 +1,33 @@
# DigitalOcean API Token. This can also be set via the
# TF_VAR_do_token environment variable
variable "do_token" {}

variable "region" {
description = "The DigitalOcean region to deploy the k8s cluster into"
type = string
default = "nyc1"
}

variable "cluster_name" {
description = "Kubernetes cluster name"
type = string
default = "ecommerce"
}

variable "node_pool_name" {
description = "Name of the Kubernetes worker pool nodes"
type = string
default = "worker"
}

variable "node_size" {
description = "Cluster node size. See https://slugs.do-api.dev/ for slug options."
type = string
default = "s-2vcpu-2gb"
}

variable "node_count" {
description = "Number of nodes in the Kubernetes pool"
type = number
default = 2
}
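
Any of these defaults can be overridden at apply time with `-var` flags (or `TF_VAR_*` environment variables); the values below are just illustrative DigitalOcean slugs:

```bash
# Hypothetical override: a three-node pool of larger droplets in the sfo3 region.
terraform apply -var="node_count=3" -var="node_size=s-4vcpu-8gb" -var="region=sfo3"
```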
17 changes: 17 additions & 0 deletions deploy/terraform/digitalocean/versions.tf
@@ -0,0 +1,17 @@
terraform {
required_providers {
digitalocean = {
source = "digitalocean/digitalocean"
version = "~> 1.23.0"
}
kubernetes = {
source = "hashicorp/kubernetes"
version = "~> 1.13.2"
}
local = {
source = "hashicorp/local"
version = "~> 2.0.0"
}
}
required_version = ">= 0.13"
}
