From 8dd8c5a88f6ddafb7a3521c5827e3cf0d4a366d2 Mon Sep 17 00:00:00 2001 From: Henrik Fricke Date: Fri, 20 May 2022 15:53:04 +0200 Subject: [PATCH] =?UTF-8?q?fix:=20use=20proper=20single=20quotes=20?= =?UTF-8?q?=F0=9F=99=88?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- 1-getting-started/README.md | 16 ++++++++-------- 2-modules/README.md | 8 ++++---- 3-composition/README.md | 6 +++--- 4-parameterization/README.md | 6 +++--- 4 files changed, 18 insertions(+), 18 deletions(-) diff --git a/1-getting-started/README.md b/1-getting-started/README.md index dbd4690..b51ba4b 100644 --- a/1-getting-started/README.md +++ b/1-getting-started/README.md @@ -1,6 +1,6 @@ # Getting started -Let's get started by bootstrapping Terraform and deploying some resources to AWS. Instead of just deploying some random resources, we want to create an S3 bucket and enable static website hosting. Ultimately, we serve a static HTML file. +Let’s get started by bootstrapping Terraform and deploying some resources to AWS. Instead of just deploying some random resources, we want to create an S3 bucket and enable static website hosting. Ultimately, we serve a static HTML file. ## Bootstrap Terraform @@ -38,7 +38,7 @@ Let's get started by bootstrapping Terraform and deploying some resources to AWS We just created an empty S3 bucket. Go to the [S3 console](https://s3.console.aws.amazon.com/s3/buckets) and verify its existence. -What's going on here? We created a `.tf` file: Terraform comes with its syntax called [HCL](https://www.terraform.io/language/syntax/configuration). In the `main.tf` file, we set the required Terraform version. After that, we configure a provider. Throughout the workshop, we focus on AWS and only deploy AWS resources. The [AWS provider](https://www.terraform.io/language/providers) gives us all the resources and data sources we need to describe AWS infrastructure. +What’s going on here? 
We created a `.tf` file: Terraform comes with its own syntax called [HCL](https://www.terraform.io/language/syntax/configuration). In the `main.tf` file, we set the required Terraform version. After that, we configure a provider. Throughout the workshop, we focus on AWS and only deploy AWS resources. The [AWS provider](https://www.terraform.io/language/providers) gives us all the resources and data sources we need to describe AWS infrastructure. Bear in mind that Terraform provides [dozens of providers](https://registry.terraform.io/browse/providers) (e.g. Azure, Google Cloud Platform or even Auth0). @@ -48,7 +48,7 @@ Finally, we run `terraform apply` to deploy the resources. ## Outputs -Let's extend the stack and deploy more resources: +Let’s extend the stack and deploy more resources: 1. Create a new file `index.html` next to the `main.tf`: 2. Add the following lines to the HTML file: @@ -116,9 +116,9 @@ Thanks to the output, we can easily find the endpoint of the static website with ## Remote Backend -Before we continue and go to the next lab, we need to talk about the Terraform state. As we apply changes, Terraform is always smart enough to update the AWS resources. How does it work? You might have noticed the auto-generated files `terraform.tfstate` and `terraform.tfstate.backup`. Terraform persists every single state of every AWS resource in the Terraform state. When applying a new update, Terraform compares the desired state with the current state and calculates a diff. Based on the diff, Terraform updates the AWS resources and also updates the state afterward. Without the Terraform state, Terraform would lose the connection to the AWS resources and wouldn't know how to handle updates. As you can see, the Terraform state is very crucial. +Before we continue and go to the next lab, we need to talk about the Terraform state. As we apply changes, Terraform is always smart enough to update the AWS resources. How does it work? 
You might have noticed the auto-generated files `terraform.tfstate` and `terraform.tfstate.backup`. Terraform persists the state of every AWS resource in the Terraform state. When applying a new update, Terraform compares the desired state with the current state and calculates a diff. Based on the diff, Terraform updates the AWS resources and also updates the state afterward. Without the Terraform state, Terraform would lose the connection to the AWS resources and wouldn’t know how to handle updates. As you can see, the Terraform state is crucial. -Until now, we used local files for the Terraform state. That's okay for a workshop but doesn't work for production workloads. The problem is, that we always need the state to apply changes. So if you want to work on the same stack with a team or some form of automation, then you need to share the state with others. The recommended solution is a remote backend. In this workshop, we focus on an S3 bucket, but you have [different options](https://www.terraform.io/language/settings/backends). Instead of keeping the state locally, we upload the state to the S3 bucket and read the current status from there. +Until now, we used local files for the Terraform state. That’s okay for a workshop but doesn’t work for production workloads. The problem is that we always need the state to apply changes. So if you want to work on the same stack with a team or some form of automation, then you need to share the state with others. The recommended solution is a remote backend. In this workshop, we focus on an S3 bucket, but you have [different options](https://www.terraform.io/language/settings/backends). Instead of keeping the state locally, we upload the state to the S3 bucket and read the current state from there. 1. Create a new S3 bucket in the [AWS Management Console](https://s3.console.aws.amazon.com/s3/bucket/create?region=eu-west-1). Copy the name of the bucket afterward. 2. 
Go to the file `main.tf` and replace it: @@ -162,10 +162,10 @@ Until now, we used local files for the Terraform state. That's okay for a worksh 3. Run `terraform init`. The command asks for the bucket name. Answer the question **Do you want to copy existing state to the new backend?** with **yes**. 4. Run `terraform apply`. Everything should still work. -Go to the S3 bucket in the AWS Management Console and check out the files. You should see a new file in the bucket. It's still the same file like the one we had locally, but now in the cloud. Terraform takes care of updating the Terraform state automatically. +Go to the S3 bucket in the AWS Management Console and check out the files. You should see a new file in the bucket. It’s still the same file as the one we had locally, but now in the cloud. Terraform takes care of updating the Terraform state automatically. -You might have noticed the manual creation of the S3 bucket. To keep it simple for the sake of the workshop, we create the bucket directly in the AWS Management Console. It's a classic chicken and egg situation because we would like to use *infrastructure as code* to create the bucket for the remote backend as well, but therefore we need also a Terraform state. Though workarounds and solutions exist, but we won't cover them here. +You might have noticed the manual creation of the S3 bucket. To keep it simple for the sake of the workshop, we create the bucket directly in the AWS Management Console. It’s a classic chicken-and-egg situation because we would like to use *infrastructure as code* to create the bucket for the remote backend as well, but for that we would already need a Terraform state. Workarounds and solutions exist, but we won’t cover them here. ## Next -That's it for the first lab. We learned more about the Terraform Language (provider, data sources, resources and outputs) and deployed some AWS resources. In the [next lab](../2-modules/), we extend the stack and use a third-party module. 
+That’s it for the first lab. We learned more about the Terraform Language (provider, data sources, resources and outputs) and deployed some AWS resources. In the [next lab](../2-modules/), we extend the stack and use a third-party module. diff --git a/2-modules/README.md b/2-modules/README.md index 703ef4c..d77347d 100644 --- a/2-modules/README.md +++ b/2-modules/README.md @@ -1,6 +1,6 @@ # Modules -In the first lab, we bootstrapped Terraform and got familiar with the very basics. Let's extend the stack and add a simple *Hello World API*. We want to use [Amazon API Gateway](https://aws.amazon.com/api-gateway/) and [AWS Lambda](https://aws.amazon.com/lambda/) (with Node.js) for a serverless API returning a hello world statement. +In the first lab, we bootstrapped Terraform and got familiar with the very basics. Let’s extend the stack and add a simple *Hello World API*. We want to use [Amazon API Gateway](https://aws.amazon.com/api-gateway/) and [AWS Lambda](https://aws.amazon.com/lambda/) (with Node.js) for a serverless API returning a hello world statement. ## Lambda function @@ -70,15 +70,15 @@ In the first lab, we bootstrapped Terraform and got familiar with the very basic } ``` -So far, we used *resources* to describe AWS infrastructure. Think of it as a low-level component to describe one specific entity in AWS (like an IAM user or an S3 bucket). Sometimes we have a combination of resources widely used we would like to bundle into an abstraction layer. That's where modules come in. +So far, we used *resources* to describe AWS infrastructure. Think of it as a low-level component to describe one specific entity in AWS (like an IAM user or an S3 bucket). Sometimes there is a widely used combination of resources that we would like to bundle into an abstraction layer. That’s where modules come in. The good part is that we can write our own modules or use third-party modules. 
In this case, we used a third-party module called [*terraform-aws-modules/lambda/aws*](https://registry.terraform.io/modules/terraform-aws-modules/lambda/aws/latest). So instead of wiring up many resources by ourselves to deploy a simple Lambda function, we can just use the module. It bundles the source code and handles the IAM policies in the background. The Terraform community is very vibrant and you can find thousands of modules. Before reinventing the wheel, check out the [Terraform Registry](https://registry.terraform.io). -For third-party modules, it's [good practice](https://www.terraform.io/language/expressions/version-constraints#module-versions) to add the version attribute and define a specific version. That ensures you don't accidentally upgrade third-party modules. +For third-party modules, it’s [good practice](https://www.terraform.io/language/expressions/version-constraints#module-versions) to add the version attribute and define a specific version. That ensures you don’t accidentally upgrade third-party modules. -That's it for the Lambda function. Let's go to the API Gateway. +That’s it for the Lambda function. Let’s go to the API Gateway. ## API Gateway diff --git a/3-composition/README.md b/3-composition/README.md index 7541f8c..c48cb3e 100644 --- a/3-composition/README.md +++ b/3-composition/README.md @@ -132,7 +132,7 @@ The previous lab introduced a third-party module to easily deploy a Lambda funct rm -rf .terraform builds ``` -That might be very overwhelming and understanding the big picture at this point is not easy. Before we get into the details, let's quickly add the Terraform stack for a staging environment. +That might be very overwhelming and understanding the big picture at this point is not easy. Before we get into the details, let’s quickly add the Terraform stack for a staging environment. 
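The staging stack that this lab builds up can be sketched as a minimal composition (the module paths and the `environment` input variable follow the workshop's layout; the real files contain more wiring, so treat this as an illustration, not the exact contents):

```hcl
# staging/main.tf — a minimal sketch, assuming modules/api and modules/website
# expose an `environment` input variable as described in this lab
module "api" {
  source      = "../modules/api"
  environment = "staging"
}

module "website" {
  source      = "../modules/website"
  environment = "staging"
}
```

A production environment then only needs a second folder whose `main.tf` sets `environment = "prod"`.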
## Staging Environment @@ -196,7 +196,7 @@ That might be very overwhelming and understanding the big picture at this point You might have noticed that we took everything from the previous labs and introduced two modules. One module for the API, one module for the website. What is new is that we also created *input variables*. With input variables in Terraform, we can define a public interface for modules. So far, we introduced a simple input variable to pass an environment identifier to the modules. We use the identifier to create unique names for AWS resources (like the S3 bucket name). -The modules folder itself functions as a library. We need a Terraform stack wiring up the modules, configuring the input variables and connecting the dots essentially. Therefore, we created the `staging` folder. As you can see in the `main.tf` file inside the staging folder, we import our modules and configure the `environment` variable. That's all we have to do here as the core business logic lives now in re-usable modules. +The modules folder itself functions as a library. We need a Terraform stack wiring up the modules, configuring the input variables and essentially connecting the dots. Therefore, we created the `staging` folder. As you can see in the `main.tf` file inside the staging folder, we import our modules and configure the `environment` variable. That’s all we have to do here as the core business logic now lives in reusable modules. So staging is live, why not deploy prod? @@ -215,4 +215,4 @@ After all, you should have two environments running in your AWS account. ## Next -In the [next lab](../4-parameterization/), we want to introduce another input variable to deploy a new feature to the staging environment while the production environment shouldn't deliver the new upcoming feature. 
+In the [next lab](../4-parameterization/), we want to introduce another input variable to deploy a new feature to the staging environment while the production environment shouldn’t deliver the upcoming feature. diff --git a/4-parameterization/README.md b/4-parameterization/README.md index ee7a83b..ace37db 100644 --- a/4-parameterization/README.md +++ b/4-parameterization/README.md @@ -66,9 +66,9 @@ The API becomes more powerful in this lab, but we want to be careful and only ro }; ``` -We extended the *API* module by introducing a new input variable `enable_greeting_feature`. The default is set to `false`, so we can't accidentally distribute the new feature. In the `main.tf` file, we simply pass the input variable down to the AWS Lambda function as an environment variable. Finally, in the Lambda function, we use the environment variable to flip on the new feature. +We extended the *API* module by introducing a new input variable `enable_greeting_feature`. The default is set to `false`, so we can’t accidentally distribute the new feature. In the `main.tf` file, we simply pass the input variable down to the AWS Lambda function as an environment variable. Finally, in the Lambda function, we use the environment variable to flip on the new feature. -The new feature wouldn't appear after deployment (feel free to try it and deploy your staging and production environment). We need to configure the new input variable explicitly. Let's do it. +The new feature wouldn’t appear after deployment (feel free to try it and deploy your staging and production environment). We need to configure the new input variable explicitly. Let’s do it. ## Rollout @@ -111,7 +111,7 @@ The new feature wouldn't appear after deployment (feel free to try it and deploy ``` 5. Here we go! The new feature works on staging. -With input variables, we can make modules configurable for different scenarios. 
In this case, we only want to deploy a new feature to the staging environment, but not to production. In practice, it's a common requirement to configure environments differently. For example, we want to configure provisioned capacities (like CPU or memory allocation), a global CDN or custom domains with SSL certificates. +With input variables, we can make modules configurable for different scenarios. In this case, we only want to deploy a new feature to the staging environment, but not to production. In practice, it’s a common requirement to configure environments differently. For example, we want to configure provisioned capacities (like CPU or memory allocation), a global CDN or custom domains with SSL certificates. ## Final words
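The feature-flag pattern from this last lab can be summarized in one short sketch (the names mirror the labs above; the exact files in the workshop differ in detail, so read this as an illustration):

```hcl
# modules/api/variables.tf — the flag defaults to false, so an environment
# only gets the new feature when it opts in explicitly
variable "enable_greeting_feature" {
  type    = bool
  default = false
}

# staging/main.tf — staging opts in; production simply omits the argument
# and therefore keeps the default
module "api" {
  source                  = "../modules/api"
  environment             = "staging"
  enable_greeting_feature = true
}
```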