
Unified Terraform Deployments (Starting with Digital Ocean) #41

Merged · 30 commits merged into master from aaron.kalin/new_terraform on Oct 30, 2020

Conversation

@martinisoft (Contributor) commented on Oct 7, 2020:

This is a first pass at unifying the deployment options with Terraform. My PoC is with DigitalOcean and their Kubernetes product, but it will eventually branch out to other services as optional targets once I can make it a generic module.

Currently in draft form until I can make this reproducible and well documented. Completes #43

Task list:

  • Add Datadog HELM chart
  • Add ecommerce app k8s manifests
  • DigitalOcean deployment target
  • Make k8s deployment into a generic module
  • Re-build documentation READMEs in the deploy folder

@martinisoft self-assigned this on Oct 7, 2020
@martinisoft (Contributor, Author) commented on Oct 14, 2020:

This is currently pretty "stable" as a proof of concept on DigitalOcean's managed Kubernetes service. It writes out a kubeconfig file for you after provisioning, which makes it super quick to jump right into kubectl commands should you need to. My next steps for this PR are tracked in the initial PR comment. I would like some 👀 on this, and suggestions for how it can be made easier to use. Specifically, I want ideas for exposing configurable options on the Datadog HELM chart to enable features like the Cluster Agent, Logs, etc.
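For reference, the kubeconfig write-out can be expressed with a minimal Terraform sketch like the one below. The resource names, region, version slug, and output path are illustrative, not necessarily what this PR uses.

```hcl
# Illustrative sketch: provision a DigitalOcean Kubernetes cluster and
# persist its raw kubeconfig to disk for immediate kubectl use.
resource "digitalocean_kubernetes_cluster" "ecommerce" {
  name    = "ecommerce"
  region  = "nyc1"
  version = "1.19.3-do.0" # pick a current slug via `doctl kubernetes options versions`

  node_pool {
    name       = "default"
    size       = "s-2vcpu-4gb"
    node_count = 3
  }
}

# Write the kubeconfig next to the Terraform root so `kubectl --kubeconfig`
# (or exporting KUBECONFIG) works right after `terraform apply`.
resource "local_file" "kubeconfig" {
  content         = digitalocean_kubernetes_cluster.ecommerce.kube_config[0].raw_config
  filename        = "${path.module}/kubeconfig.yaml"
  file_permission = "0600"
}
```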

@martinisoft (Contributor, Author) commented on Oct 16, 2020:

After doing the initial extraction, I think this needs a different approach. I also uncovered a fun bug in the Kubernetes provider for Terraform, which has me rethinking the structure. Here is the plan:

  • Re-organize the folders again so there are root-level modules for each provider. This gives the `deploy/terraform` folder a structure like `deploy/terraform/digitalocean`, with shared code under `deploy/terraform/modules/datadog` and `deploy/terraform/modules/ecommerce`, for example (see the sketch after this list)
  • Move the Datadog HELM configs into a shared module with specific exposed attributes
  • Turn the ecommerce app into a HELM chart to unify and simplify the overall deployment (see #40, Convert generic k8s manifests to HELM chart)
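To illustrate the proposed layout, each provider folder becomes a root module that wires in the shared modules, roughly as below. The module inputs are hypothetical, and note that the dedicated Datadog module was later dropped (see the decision further down).

```hcl
# Hypothetical deploy/terraform/digitalocean/main.tf under the proposed layout:
# each provider folder is a root module that consumes the shared modules.

module "datadog" {
  source = "../modules/datadog"

  # Illustrative input a shared Datadog module might expose.
  api_key = var.datadog_api_key
}

module "ecommerce" {
  source = "../modules/ecommerce"
}
```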

@martinisoft (Contributor, Author) commented:

Decided that it would be better to avoid wrapping a single HELM release for installing Datadog in a module of its own, something even the Terraform docs tell you not to do, because modules are meant to house collections of resources (not providers). With that in mind, I am shipping this PR as a decent example of a provider-specific infrastructure deployment option.

@martinisoft marked this pull request as ready for review on October 20, 2020.
@martinisoft changed the title from "Unified Terraform Deployments" to "Unified Terraform Deployments (Starting with Digital Ocean)" on Oct 20, 2020.
@arapulido (Collaborator) left a review:

In general, looks good to me, just a couple of comments/questions.

Thanks!

(Resolved review threads, now outdated: `deploy/terraform/digitalocean/README.md`, `deploy/terraform/digitalocean/variables.tf`)
@martinisoft (Contributor, Author) commented:
This is now officially ready for review. Let me know what you think @burningion @arapulido

@arapulido (Collaborator) left a review:

I left some comments on minor things, but other than that, looks good to me 👍

* Run `helm repo add datadog https://helm.datadoghq.com` to track our official HELM repo
* Run `helm repo update` to sync up the latest chart
* Create a secret for the API Key `export DATADOG_SECRET_API_KEY_NAME=datadog-api-secret && kubectl create secret generic $DATADOG_SECRET_API_KEY_NAME --from-literal api-key="<DATADOG_API_KEY>" --namespace="default"`
@arapulido (Collaborator) commented:
`helm install datadog-agent datadog/datadog --set datadog.apiKey` already creates a Kubernetes secret for the API key, so it might be more straightforward (one step instead of two)
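In the Terraform version of this setup, that one-step approach maps onto a `helm_release` resource along these lines. This is a hedged sketch: the release name mirrors the README command, while the variable name and the chart version pin (suggested in a later review comment) are illustrative.

```hcl
# Illustrative sketch: install the Datadog chart and let it create the
# API-key secret itself, replacing the separate `kubectl create secret` step.
resource "helm_release" "datadog" {
  name       = "datadog-agent"
  repository = "https://helm.datadoghq.com"
  chart      = "datadog"
  version    = "2.4.39" # pin the chart version this was tested with (illustrative)

  # set_sensitive keeps the key out of plan/apply output.
  set_sensitive {
    name  = "datadog.apiKey"
    value = var.datadog_api_key
  }
}

variable "datadog_api_key" {
  description = "Datadog API key, passed through to the chart"
  type        = string
}
```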

@@ -0,0 +1,41 @@
# This is a reasonable set of default HELM settings.
@arapulido (Collaborator) commented:

I would add a comment with the chart version this was tested with (as it may break in the future)

@@ -38,6 +38,8 @@ spec:
value: "user"
- name: DATADOG_SERVICE_NAME
value: "advertisements-service"
- name: DD_TAGS
@arapulido (Collaborator) commented:
Shall we move to the new unified tagging? (DD_ENV, instead of the env tag)

@martinisoft (Contributor, Author) replied:

Oh that's a good point. I will incorporate this into the HELM chart.

Commit messages included in this pull request:

  • Used DigitalOcean as a test of this before bringing in other platforms.
  • Forgot to turn this on in the editor to correctly format the Terraform files for me.
  • This is a super rough draft. Need some 👀 on this implementation because it doesn't feel very elegant to configure the HELM chart this way, and I need a better or easier solution. Not sure if that means having a data structure which gets applied? Maybe a commented YAML? This isn't smoothed out yet; just jotting it down until it can be a better story for everyone.
  • Without this option, Terraform will happily nuke the kubeconfig file first, even though it may be referenced by the KUBECONFIG environment variable, which produces this error when destroying: `Error: Kubernetes cluster unreachable: stat kube_config_server.yaml: no such file or directory` (see the provider sketch after this list).
  • This should make for less noise in the terraform apply diffs because it sorts on the keys of the options. I have also enabled the system-probe agent for network stats.
  • This technically doesn't matter right now because the agent versions are set to latest, but it will help in the future.
  • This is the first pass at trying providers as a module, and I am not sure I like this approach. I think a better approach will be to have folders for each provider so you can more or less choose the deployment target, then push any common resources out into a shared module folder that you interact with. This makes more sense than trying to wrap k8s and HELM in Terraform, which simply makes things more complex than they really need to be.
  • We pinned them in the previous module setup and forgot to do that here.
  • Needed to separate the information between providers and the main folder. Also extracted some more tunable variables with sensible defaults.
  • Not super sure about the workflow here because it's split across resources and platforms. I am thinking each Terraform root should write the kubectl config into the deploy directory instead, for easier command running.
  • Still recommend 3 nodes here, but it works well enough with only 2. Also a slight formatting tweak; this will visually separate everything better when scanning in GitHub and in a text editor.
  • Moved the Docker Compose instructions to that folder so they're closer to where they are needed.
  • Turns out copy pasta can be bad for you sometimes.
  • I think this makes it easier to run kubectl commands from the deploy folder for HELM charts and all that jazz. Also document more clearly the need to write out a config file.
  • This was originally just a ClusterIP, so you had to manually apply an ingress of some kind. We could separate these if needed.
  • This is really a bug, and honestly the wrong environment, so in APM this shows up as env:none by default. Patching this for now until we can make the tags more realistic, like "development", "staging", and "production".
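The KUBECONFIG commit above does not name the exact option used. One common way to avoid depending on the kubeconfig file at destroy time, assuming the DigitalOcean cluster resource sketched earlier, is to configure the helm provider directly from the cluster's attributes instead of from the written file:

```hcl
# Illustrative sketch: derive helm's connection details from the cluster
# resource so destroy never needs a kubeconfig file on disk.
provider "helm" {
  kubernetes {
    host  = digitalocean_kubernetes_cluster.ecommerce.endpoint
    token = digitalocean_kubernetes_cluster.ecommerce.kube_config[0].token
    cluster_ca_certificate = base64decode(
      digitalocean_kubernetes_cluster.ecommerce.kube_config[0].cluster_ca_certificate
    )
  }
}
```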
@martinisoft force-pushed the aaron.kalin/new_terraform branch from adc4427 to e515998 on October 30, 2020.
@martinisoft merged commit e960b94 into master on Oct 30, 2020.
@martinisoft deleted the aaron.kalin/new_terraform branch on October 30, 2020.
@martinisoft mentioned this pull request on Nov 9, 2020.