Unified Terraform Deployments (Starting with Digital Ocean) #41
Conversation
This is currently pretty "stable" as a Proof of Concept via DigitalOcean using their managed kubernetes service. It writes out a kubeconfig file for you after provisioning which makes it super quick to jump right into kubectl commands should you need to. Next steps for me in this PR are updated in the initial PR comment. Would like some 👀 on this and suggestions for how it can be made easier to use. Specifically, I want any ideas of how to make configurable options around the datadog HELM chart to enable features like Cluster Agent, Logs, etc.
So after doing the initial extraction I think this needs a different approach. I also uncovered a fun bug in the kubernetes provider with Terraform which has me thinking I need to approach this in another way, so here is what I think.
Decided that it would be better to avoid wrapping a single HELM resource for installing Datadog in a module, which even the Terraform docs tell you not to do, because modules are meant to house collections of resources (not a single provider's resources). With that in mind, I am shipping this PR as a decent example of making a provider-specific infrastructure deployment option.
In general, looks good to me, just a couple of comments/questions
Thanks!
This is now officially ready for review. Let me know what you think @burningion @arapulido
I left some comments of minor things, but other than that, looks good to me 👍
* Run `helm repo add datadog https://helm.datadoghq.com` to track our official HELM repo
* Run `helm repo update` to sync up the latest chart
* Create a secret for the API Key: `export DATADOG_SECRET_API_KEY_NAME=datadog-api-secret && kubectl create secret generic $DATADOG_SECRET_API_KEY_NAME --from-literal api-key="<DATADOG_API_KEY>" --namespace="default"`
`helm install datadog-agent datadog/datadog --set datadog.apiKey` already creates a Kubernetes secret for the API key, so it might be more straightforward (1 step instead of 2)
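The one-step flow this review comment suggests might look like the following; the release name `datadog-agent` matches the command above, and the API key placeholder is illustrative:

```shell
# Add and refresh the official Datadog HELM repo, then install the chart.
# Passing datadog.apiKey lets the chart create the Kubernetes secret
# itself, replacing the separate `kubectl create secret` step.
helm repo add datadog https://helm.datadoghq.com
helm repo update
helm install datadog-agent datadog/datadog \
  --set datadog.apiKey="<DATADOG_API_KEY>"
```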
@@ -0,0 +1,41 @@
# This is a reasonable set of default HELM settings. |
I would add a comment with the chart version this was tested with (as it may break in the future)
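One way to capture that suggestion at the top of the values file; the version string below is a placeholder, not the version that was actually tested:

```yaml
# This is a reasonable set of default HELM settings.
# Tested with chart datadog/datadog version <CHART_VERSION>; newer chart
# releases may change defaults, so re-verify before upgrading.
```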
@@ -38,6 +38,8 @@ spec:
value: "user"
- name: DATADOG_SERVICE_NAME
value: "advertisements-service"
- name: DD_TAGS |
Shall we move to the new unified tagging (`DD_ENV` instead of the `env` tag)?
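In manifest terms, the unified-tagging suggestion would swap the `env` entry in `DD_TAGS` for the dedicated environment variables; the values below are illustrative:

```yaml
        - name: DD_ENV
          value: "development"
        - name: DD_SERVICE
          value: "advertisements-service"
        - name: DD_VERSION
          value: "1.0.0"
```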
Oh that's a good point. I will incorporate this into the HELM chart.
Used DigitalOcean as a test of this before bringing other platforms
Forgot to turn this on in the editor to correctly format the terraform files for me
This is a super rough draft. Need some 👀 on this implementation because it doesn't feel very elegant to configure the HELM chart this way and I need a better or easier solution. Not sure if that means having a data structure which gets applied? Maybe a commented YAML?
This isn't smoothed out yet, just jotting it down until it can be a better story for everyone.
Without this, Terraform will happily nuke the kubeconfig file first, even though it might still be referenced by the environment variable KUBECONFIG, which produces this error when destroying: > Error: Kubernetes cluster unreachable: stat kube_config_server.yaml: no such file or directory
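An alternative that avoids the file dependency entirely is to configure the Kubernetes provider straight from the cluster resource's attributes; this is only a sketch, and the resource name `main` is an assumption:

```hcl
# Illustrative: derive Kubernetes provider credentials directly from the
# DigitalOcean cluster resource, so destroy never depends on a kubeconfig
# file existing on disk.
provider "kubernetes" {
  host  = digitalocean_kubernetes_cluster.main.endpoint
  token = digitalocean_kubernetes_cluster.main.kube_config[0].token
  cluster_ca_certificate = base64decode(
    digitalocean_kubernetes_cluster.main.kube_config[0].cluster_ca_certificate
  )
}
```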
This should make for less noise in the `terraform apply` diffs because it sorts on the keys of the options. I have also enabled the system-probe agent for network stats.
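A sketch of how sorted options might feed the chart; `var.datadog_helm_values` is a hypothetical map variable, and Terraform iterates maps in key order, which is what keeps the rendered plan diff stable:

```hcl
resource "helm_release" "datadog" {
  name       = "datadog-agent"
  repository = "https://helm.datadoghq.com"
  chart      = "datadog"

  dynamic "set" {
    # Map iteration is sorted by key, so plan diffs stay stable
    # as options are added or removed.
    for_each = var.datadog_helm_values
    content {
      name  = set.key
      value = set.value
    }
  }
}
```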
This technically doesn't matter right now because I am pinning agent versions to latest, but it will help in the future.
This is the first pass at trying providers as a module and I am not sure I like this approach. I think a better approach will be to have folders for each provider so you can more or less choose the deployment target, then push out any common resources into a shared module folder that you interact with.
This makes more sense than trying to wrap k8s and HELM in Terraform, which simply makes it more complex than it really needs to be.
We pinned them in the previous module setup and forgot to do that here.
Needed to separate the information between providers and the main folder.
Also extract some more tunable variables with sensible defaults
Not super sure about the workflow here because it's split across resources and platforms. I am thinking each terraform should write the kubectl config into the deploy directory instead for easier command running.
Still recommend 3 here, but it works well enough with only 2 nodes.
Also a slight formatting tweak: this will visually separate everything better when scanning in GitHub and in a text editor. Moved docker compose instructions to that folder so they're closer to where they are needed.
Turns out copy pasta can be bad for you sometimes
I think this makes it easier to run kubectl commands from the deploy folder for HELM charts and all that jazz.
Also document more clearly needing to write out a config file
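A possible workflow from the deploy folder might look like this; the file name `kube_config_server.yaml` is taken from the destroy error mentioned in this PR and may differ per setup:

```shell
# Illustrative: point kubectl at the config file Terraform wrote out
# into the deploy directory, then run commands as usual.
export KUBECONFIG="$(pwd)/kube_config_server.yaml"
kubectl get nodes
```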
This was originally just a ClusterIP service, so you had to manually apply an ingress of some kind. We could separate these if needed.
This is really a bug: the environment is honestly wrong, so in APM this shows up as env:none by default. Patching this for now until we can make the tags more realistic, like "development", "staging", and "production".
Force-pushed: adc4427 to e515998
This is a first pass at unifying the deployment options with Terraform. My PoC is with DigitalOcean and their Kubernetes product, but it will eventually branch out to other services as optional targets once I can make it a generic module.
Currently in draft form until I can get this reproducible and well documented. Completes #43
Task list:
* Add Datadog HELM chart
* Add ecommerce app k8s manifests
* Make k8s deployment into a generic module