Merge pull request #41 from DataDog/aaron.kalin/new_terraform
Unified Terraform Deployments (Starting with Digital Ocean)
Showing 14 changed files with 248 additions and 44 deletions.

@@ -1,26 +1,39 @@
 This folder contains the different tested ways in which this application can be deployed:
 
 * `aws`: Deployments to Amazon Web Services
-* `aws/ecs`: Deployment to Amazon ECS
+  * `aws/ecs`: Deployment to Amazon ECS
+* `datadog`: Deploying datadog via HELM or kubernetes manifests
 * `gcp`: Deployments to Google Cloud Platform
-* `gke`: Deployment to Google Kubernetes Engine
-* `vms`: Deployment to GCP VMs using Terraform
+  * `gke`: Deployment to Google Kubernetes Engine
+  * `vms`: Deployment to GCP VMs using Terraform
 * `generic-k8s`: Generic Kubernetes manifests
 * `openshift`: Manifests to deploy the application to Openshift
 * `docker-compose`: Docker compose to run the application locally
+* `terraform`: Terraform based deployments separated by platform
 
-### Running the Application Locally
+## Running the Application Locally
 
-The application itself runs on `docker-compose`. First, install Docker along with docker-compose. Then sign up with a trial [Datadog account](https://www.datadoghq.com/), and grab your API key from the Integrations->API tab.
+Look at the `docker-compose` folder README for details.
 
-Each of the scenarios use a different `docker-compose` file in the `deploy/docker-compose` folder. To run any of the scenarios:
+## Installing Datadog via HELM Chart
 
-```bash
-$ git clone https://github.com/DataDog/ecommerce-workshop.git
-$ cd ecommerce-workshop/deploy/docker-compose
-$ POSTGRES_USER=postgres POSTGRES_PASSWORD=postgres DD_API_KEY=<YOUR_API_KEY> docker-compose -f <docker_compose_with_your_selected_scenario> up
-```
+### Requirements
 
-With this, the docker images will be pulled, and you'll be able to visit the app.
+* Install [HELM v3](https://helm.sh/docs/intro/install/)
+* [Generate a Datadog API Key](https://app.datadoghq.com/account/settings#api)
+* Optionally [Generate a Datadog Application Key](https://app.datadoghq.com/account/settings#api) if you are deploying the cluster monitor
 
-When you go to the homepage, you'll notice that, although the site takes a while to load, it mostly looks as if it works. Indeed, there are only a few views that are broken. Try navigating around the site to see if you can't discover the broken pieces.
+### Installing
+
+* Make sure you have a working `kubectl`; you may need to switch to the platform folder first
+* Run `helm repo add datadog https://helm.datadoghq.com` to track our official HELM repo
+* Run `helm repo update` to sync up the latest chart
+* Create a secret for the API Key: `export DATADOG_SECRET_API_KEY_NAME=datadog-api-secret && kubectl create secret generic $DATADOG_SECRET_API_KEY_NAME --from-literal api-key="<DATADOG_API_KEY>" --namespace="default"`
+* If you want to install the Cluster Agent, also export an APP Key: `export DATADOG_SECRET_APP_KEY_NAME=datadog-app-secret && kubectl create secret generic $DATADOG_SECRET_APP_KEY_NAME --from-literal app-key="<DATADOG_APP_KEY>" --namespace="default"`
+* Make your own copy of the `helm-values.yaml.example` in the datadog folder (`cp datadog/helm-values.yaml.example datadog/helm-values.yaml`) and make any changes you would like, or just deploy the defaults
+* If you are not installing the Cluster Agent, run `helm install datadog-agent datadog/datadog --set datadog.apiKeyExistingSecret=$DATADOG_SECRET_API_KEY_NAME --values datadog/helm-values.yaml`
+* If you are installing the Cluster Agent, run `helm install datadog-agent datadog/datadog --set datadog.apiKeyExistingSecret=$DATADOG_SECRET_API_KEY_NAME --set datadog.appKeyExistingSecret=$DATADOG_SECRET_APP_KEY_NAME --values datadog/helm-values.yaml`
+
+If you ever want to change the values in the chart, you can apply them via a HELM upgrade:
+
+`helm upgrade datadog-agent datadog/datadog --set datadog.apiKeyExistingSecret=$DATADOG_SECRET_API_KEY_NAME --set datadog.appKeyExistingSecret=$DATADOG_SECRET_APP_KEY_NAME --values datadog/helm-values.yaml`
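
After installing or upgrading the chart as described above, a quick sanity check can confirm the release went out. This is a sketch using standard Helm and kubectl commands; `datadog-agent` is the release name used in those steps:

```bash
# The release should show a "deployed" status
helm list

# The node agent runs as a DaemonSet, so each node should end up with a running agent pod
kubectl get daemonset
kubectl get pods
```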

@@ -0,0 +1,41 @@
# This is a reasonable set of default HELM settings.
# Enable/disable as you see fit for your deployment.

# Enable this block if you want to use the newest agent
# agents:
#   image:
#     repository: datadog/agent
#     tag: latest
#     pullPolicy: Always

# Enable this block to get Kubernetes Beta metrics (https://www.datadoghq.com/blog/explore-kubernetes-resources-with-datadog/)
# clusterAgent:
#   enabled: true
#   image:
#     repository: datadog/cluster-agent
#     tag: latest
#     pullPolicy: Always

datadog:
  clusterName: "ecommerce"

  apm:
    enabled: true

  # Enable this block to get all logs from the pods/containers
  # logs:
  #   enabled: true
  #   containerCollectAll: true

  # Enable this block for the Kubernetes Beta metrics
  # orchestratorExplorer:
  #   enabled: true

  # Enable this block for process collection. It is required for Kubernetes Beta metrics
  # processAgent:
  #   processCollection: true

  # Enable this block for network and DNS metric collection
  # systemProbe:
  #   enabled: true
  #   collectDNSStats: true

@@ -0,0 +1,17 @@
# Terraform Deployment

This area of the repo is dedicated to a Terraform-based deployment of the ecommerce-workshop application.

## Requirements

* [Terraform 0.13+](https://terraform.io) (check your current version with `terraform version`)

## Platform Folders

To set up the infrastructure for any of the deployment options, we have created separate provider-based folders for you to use. Just `cd` into the provider of your preference and follow its README for further instructions (see the example below).
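
For example, to work with the DigitalOcean target included in this repo (the path assumes you start from the repository root):

```bash
# Move into the DigitalOcean platform folder and follow its README
cd deploy/terraform/digitalocean
```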

## Adding a new platform

To add another deployment platform, copy an existing one and name the folder after the platform target. For example, you can `cp -R digitalocean aks` to create an Azure Kubernetes Service target, but you will have to modify all the files to match the Azure Terraform provider and its resources. Of course, don't forget to update the README with instructions.

Your integration should also provide an output that writes the kubectl configuration to the deploy folder. If you need an example, look at `output.tf` in the digitalocean folder.

@@ -0,0 +1,41 @@
# Digital Ocean

This Terraform module sets up and configures a k8s cluster as a deployment target for both Datadog and the Ecommerce application.

## Initial Setup

* Export the following environment variable:
  * `TF_VAR_do_token`, set to your DigitalOcean API Token
* Review the variables in `variables.tf` if you want to make any adjustments
* Run `terraform init` to install all of the needed Terraform providers
* Run `terraform apply` to spin up the cluster. The cluster's location will be output at the end. (A condensed sketch of these steps follows this list.)
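
Put together, the flow looks roughly like this (a sketch; the token value is a placeholder for your own DigitalOcean API token):

```bash
# Terraform reads TF_VAR_-prefixed variables from the environment
export TF_VAR_do_token="<YOUR_DIGITALOCEAN_API_TOKEN>"

# Download the pinned providers, then create the cluster
terraform init
terraform apply
```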

## Automatic `kubectl` config

When `terraform apply` succeeds, this Terraform configuration automatically writes a `kube_config_server.yaml` file for use with `kubectl`. To use that config automatically, export the `KUBECONFIG` environment variable pointing to the file:

```bash
export KUBECONFIG="$(pwd)/kube_config_server.yaml"
```

If you use [direnv](https://direnv.net/), you can put the above line into an `.envrc` in this directory so the config is loaded for you each time you enter it (an example follows below). If you can't or don't want to do that, just make sure you export the `KUBECONFIG` variable as above, or put it in front of your `kubectl` command so it knows where to find the kubeconfig file.
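
A minimal `.envrc` for this directory, assuming direnv is installed and you have run `direnv allow` here:

```bash
# .envrc — direnv loads this whenever you cd into the directory
export KUBECONFIG="$(pwd)/kube_config_server.yaml"
```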

To verify you have this configured correctly, try the following command:

```bash
$ kubectl get pods
```

You should see output like this:

```bash
No resources found in default namespace.
```

Now that you have a working Kubernetes cluster, you can deploy the Datadog HELM Chart or the Ecommerce manifests to start monitoring. For those instructions, see the `README.md` in the deploy folder above this one.

## Important Notes

If you want to upgrade k8s versions on Digital Ocean, you will need at least two nodes, or one larger node, to perform the operation due to resource constraints in the upgrade process.

Once you destroy the cluster, check for any leftover load balancers in your DigitalOcean account. Any time you apply a k8s manifest containing a `LoadBalancer` service, DigitalOcean spins up an actual load balancer for you; Terraform doesn't know about it and won't clean it up when you destroy the k8s cluster (one way to check is shown below).
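
One way to check for stragglers after a `terraform destroy`, assuming the DigitalOcean CLI (`doctl`) is installed and authenticated:

```bash
# List any load balancers still attached to your account
doctl compute load-balancer list

# Delete a leftover one by its ID
doctl compute load-balancer delete <LOAD_BALANCER_ID>
```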

@@ -0,0 +1,31 @@
data "digitalocean_kubernetes_versions" "stable" {
  version_prefix = "1.19."
}

resource "digitalocean_kubernetes_cluster" "k8s_cluster" {
  # See variables.tf for adjustable options
  name   = var.cluster_name
  region = var.region
  # Set this to false if you want to disable automatic upgrading of your cluster
  auto_upgrade = true
  version      = data.digitalocean_kubernetes_versions.stable.latest_version
  tags         = ["development"]

  node_pool {
    name       = var.node_pool_name
    size       = var.node_size
    node_count = var.node_count
  }
}

provider "digitalocean" {
  token = var.do_token
}

provider "kubernetes" {
  host  = digitalocean_kubernetes_cluster.k8s_cluster.endpoint
  token = digitalocean_kubernetes_cluster.k8s_cluster.kube_config[0].token
  cluster_ca_certificate = base64decode(
    digitalocean_kubernetes_cluster.k8s_cluster.kube_config[0].cluster_ca_certificate
  )
}
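
The `digitalocean_kubernetes_versions` data source above pins the cluster to the latest 1.19.x release. If you want to change `version_prefix`, one way to see which version slugs DigitalOcean currently offers is the sketch below, assuming the `doctl` CLI is installed and authenticated:

```bash
# List the Kubernetes versions DigitalOcean can provision right now
doctl kubernetes options versions
```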

@@ -0,0 +1,5 @@
resource "local_file" "kube_config_server_yaml" {
  filename          = format("%s/../../%s", path.root, "kube_config_server.yaml")
  sensitive_content = digitalocean_kubernetes_cluster.k8s_cluster.kube_config[0].raw_config
  file_permission   = "0600"
}

@@ -0,0 +1,33 @@
# DigitalOcean API Token. This can also be set via the
# TF_VAR_do_token environment variable
variable "do_token" {}

variable "region" {
  description = "The DigitalOcean region to deploy the k8s cluster into"
  type        = string
  default     = "nyc1"
}

variable "cluster_name" {
  description = "Kubernetes cluster name"
  type        = string
  default     = "ecommerce"
}

variable "node_pool_name" {
  description = "Name of the Kubernetes worker pool nodes"
  type        = string
  default     = "worker"
}

variable "node_size" {
  description = "Cluster node size. See https://slugs.do-api.dev/ for slug options."
  type        = string
  default     = "s-2vcpu-2gb"
}

variable "node_count" {
  description = "Number of nodes in the Kubernetes pool"
  type        = number
  default     = 2
}

@@ -0,0 +1,17 @@
terraform {
  required_providers {
    digitalocean = {
      source  = "digitalocean/digitalocean"
      version = "~> 1.23.0"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "~> 1.13.2"
    }
    local = {
      source  = "hashicorp/local"
      version = "~> 2.0.0"
    }
  }
  required_version = ">= 0.13"
}
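
With these constraints in place, `terraform init` resolves and downloads matching provider builds; a quick way to see what was selected, using standard Terraform commands:

```bash
# Install providers per the constraints above, then list what the configuration requires
terraform init
terraform providers
```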