diff --git a/content/ecs-spot-capacity-providers/Introduction/EC2 Spot Instances/_index.md b/content/ecs-spot-capacity-providers/Introduction/EC2 Spot Instances/_index.md
deleted file mode 100644
index 9e424798..00000000
--- a/content/ecs-spot-capacity-providers/Introduction/EC2 Spot Instances/_index.md
+++ /dev/null
@@ -1,11 +0,0 @@
-+++
-title = "Introduction to EC2 Spot Instances"
-weight = 15
-+++
-
-[EC2 Spot Instances] (https://aws.amazon.com/ec2/spot/) offer spare compute capacity available in the AWS Cloud at steep discounts compared to On-Demand prices. EC2 can interrupt Spot Instances with two minutes of notification when EC2 needs the capacity back. You can use Spot Instances for various fault-tolerant and flexible applications. Some examples are analytics, containerized workloads, high-performance computing (HPC), stateless web servers, rendering, CI/CD, and other test and development workloads.
-
-### Spot Instances in Containerized workloads
-
-
-Many containerized workloads are usually stateless and fault tolerant and are great fit for running them on EC2 Spot. In this workshop we will explore how to run containers on inturruptable EC2 Spot instances and save significantly compared to running them on EC2 On-Demand instances.
diff --git a/content/ecs-spot-capacity-providers/Introduction/_index.md b/content/ecs-spot-capacity-providers/Introduction/_index.md
index 6c258df4..034924e7 100644
--- a/content/ecs-spot-capacity-providers/Introduction/_index.md
+++ b/content/ecs-spot-capacity-providers/Introduction/_index.md
@@ -3,9 +3,8 @@ title = "Introduction"
weight = 10
+++
-Introduction to Docker Containers, Amazon ECS, EC2 Spot Instances and Application Scaling
----
-This self-paced workshop is designed to educate engineers that might not be familiar with Fargate, ECS, EC2 Spot, and possibly even Docker container workflow.
+If you're already familiar with the concepts below, or you already have experience operating ECS clusters, you can skip the introduction and proceed to the [**Setup the workshop environment on AWS**](/ecs-spot-capacity-providers/workshopsetup.html) section to start the workshop.
+
{{% children %}}
\ No newline at end of file
diff --git a/content/ecs-spot-capacity-providers/Introduction/about_containers/_index.md b/content/ecs-spot-capacity-providers/Introduction/about_containers/_index.md
index 01a9b36d..d3eb24c3 100644
--- a/content/ecs-spot-capacity-providers/Introduction/about_containers/_index.md
+++ b/content/ecs-spot-capacity-providers/Introduction/about_containers/_index.md
@@ -19,9 +19,4 @@ Why Containers?
Benefits of Containers
---
-- Containers are powerful way for developers to package and deploy their applications. They are lightweight and provide a consistent, portable software environment for applications to easily run and scale anywhere. Building and deploying microservices, running batch jobs, for machine learning applications, and moving existing applications into the cloud are just some of the popular use cases for containers. Some other benefits of containers include:
-
-Speed
----
-- Workload Isolation
-- Single artifact to test from local to production, avoid drift
\ No newline at end of file
+- Containers are a powerful way for developers to package and deploy their applications. They are lightweight and provide a consistent, portable software environment for applications to easily run and scale anywhere. Building and deploying microservices, running batch jobs, powering machine learning applications, and moving existing applications into the cloud are just some of the popular use cases for containers.
\ No newline at end of file
diff --git a/content/ecs-spot-capacity-providers/Introduction/about_ecs/ecs_cluster.md b/content/ecs-spot-capacity-providers/Introduction/about_ecs/ecs_cluster.md
index f56d2be0..4d38270c 100644
--- a/content/ecs-spot-capacity-providers/Introduction/about_ecs/ecs_cluster.md
+++ b/content/ecs-spot-capacity-providers/Introduction/about_ecs/ecs_cluster.md
@@ -6,10 +6,9 @@ weight = 15
An Amazon ECS cluster is a logical grouping of tasks or services.
-If you are running tasks or services that use the EC2 launch type, a cluster is also a grouping of container instances.
-
-If you are using capacity providers, a cluster is also a logical grouping of capacity providers.
-A Cluster can be a combination of Fargate and EC2 launch types.
+- If you are running tasks or services that use the EC2 launch type, a cluster is also a grouping of container instances.
+- If you are using capacity providers, a cluster is also a logical grouping of capacity providers.
+- A cluster can be a combination of Fargate and EC2 launch types.
When you first use Amazon ECS, a default cluster is created for you, but you can create multiple clusters in an account to keep your resources separate.
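+
+As a quick, hedged illustration (the cluster name below is hypothetical and not part of this workshop), creating and listing ECS clusters from the AWS CLI can look like this:
+
+```bash
+# Create an additional, empty ECS cluster (no instances or capacity providers yet)
+aws ecs create-cluster --cluster-name my-demo-cluster
+
+# List all clusters in the current account and region
+aws ecs list-clusters
+```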
diff --git a/content/ecs-spot-capacity-providers/Introduction/about_ecs/fargate.md b/content/ecs-spot-capacity-providers/Introduction/about_ecs/fargate.md
index 4d86cf44..c15ff78f 100644
--- a/content/ecs-spot-capacity-providers/Introduction/about_ecs/fargate.md
+++ b/content/ecs-spot-capacity-providers/Introduction/about_ecs/fargate.md
@@ -1,5 +1,5 @@
+++
-title = "Serverless Compute"
+title = "Fargate"
weight = 35
+++
diff --git a/content/ecs-spot-capacity-providers/Introduction/about_ecs/service_discovery.md b/content/ecs-spot-capacity-providers/Introduction/about_ecs/service_discovery.md
index cbe4a7e2..89e81384 100644
--- a/content/ecs-spot-capacity-providers/Introduction/about_ecs/service_discovery.md
+++ b/content/ecs-spot-capacity-providers/Introduction/about_ecs/service_discovery.md
@@ -10,4 +10,4 @@ AWS Cloud Map is a cloud resource discovery service. With Cloud Map, you can def
Cloud Map natively integrates with ECS, and as we build services in the workshop, will see this firsthand. For more information on service discovery with ECS, please see [here](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-discovery.html).
-![Service Discovery](/images/ecs-spot-capacity-providers/cloudmapproduct.png)
\ No newline at end of file
+![Service Discovery](/images/ecs-spot-capacity-providers/cloudmapproduct.png)
\ No newline at end of file
diff --git a/content/ecs-spot-capacity-providers/Introduction/about_ecs/services.md b/content/ecs-spot-capacity-providers/Introduction/about_ecs/services.md
index 5e37b986..441c2a9e 100644
--- a/content/ecs-spot-capacity-providers/Introduction/about_ecs/services.md
+++ b/content/ecs-spot-capacity-providers/Introduction/about_ecs/services.md
@@ -1,22 +1,17 @@
+++
-title = "Services, Relica, and Deamon"
+title = "Services"
weight = 30
+++
-Services
----
-
Amazon ECS allows you to run and maintain a specified number of instances of a task definition simultaneously in an Amazon ECS cluster. This is called a service. If any of your tasks should fail or stop for any reason, the Amazon ECS service scheduler launches another instance of your task definition to replace it and maintain the desired count of tasks in the service depending on the scheduling strategy used.
-In addition to maintaining the desired count of tasks in your service, you can optionally run your service behind a load balancer. The load balancer distributes traffic across the tasks that are associated with the service.
+In addition to maintaining the desired count of tasks in your service, you can optionally run your service behind a load balancer. The load balancer distributes traffic across the tasks that are associated with the service.
There are two service scheduler strategies available:
-Replica
--------
-The replica scheduling strategy places and maintains the desired number of tasks across your cluster. By default, the service scheduler spreads tasks across Availability Zones. You can use task placement strategies and constraints to customize task placement decisions. For more information, see [Replica](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs_services.html#service_scheduler_replica).
+- REPLICA:
+ - The replica scheduling strategy places and maintains the desired number of tasks across your cluster. By default, the service scheduler spreads tasks across Availability Zones. You can use task placement strategies and constraints to customize task placement decisions. For more information, see [Replica](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs_services.html#service_scheduler_replica).
-Deamon
-------
-The daemon scheduling strategy deploys exactly one task on each active container instance that meets all of the task placement constraints that you specify in your cluster. The service scheduler evaluates the task placement constraints for running tasks and will stop tasks that do not meet the placement constraints. When using this strategy, there is no need to specify a desired number of tasks, a task placement strategy, or use Service Auto Scaling policies. For more information, see [Daemon](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs_services.html#service_scheduler_daemon).
\ No newline at end of file
+- DAEMON:
+ - The daemon scheduling strategy deploys exactly one task on each active container instance that meets all of the task placement constraints that you specify in your cluster. The service scheduler evaluates the task placement constraints for running tasks and will stop tasks that do not meet the placement constraints. When using this strategy, there is no need to specify a desired number of tasks, a task placement strategy, or use Service Auto Scaling policies. For more information, see [Daemon](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs_services.html#service_scheduler_daemon).
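+
+As a hedged sketch (the cluster, service, and task definition names below are hypothetical, not from this workshop), the scheduling strategy is chosen at service creation time with the `--scheduling-strategy` flag:
+
+```bash
+# Replica service: ECS maintains the desired count of tasks across the cluster
+aws ecs create-service --cluster my-cluster --service-name web \
+    --task-definition webapp:1 --desired-count 3 --scheduling-strategy REPLICA
+
+# Daemon service: exactly one task per active container instance, no desired count needed
+aws ecs create-service --cluster my-cluster --service-name log-agent \
+    --task-definition log-agent:1 --scheduling-strategy DAEMON
+```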
\ No newline at end of file
diff --git a/content/ecs-spot-capacity-providers/Introduction/application_scaling/_index.md b/content/ecs-spot-capacity-providers/Introduction/application_scaling/_index.md
index 36aa391c..59ee2656 100644
--- a/content/ecs-spot-capacity-providers/Introduction/application_scaling/_index.md
+++ b/content/ecs-spot-capacity-providers/Introduction/application_scaling/_index.md
@@ -1,12 +1,12 @@
+++
-title = "Application Scaling-Infrastructure First Approach"
+title = "Application Scaling"
weight = 20
+++
-Its estimating how much compute capacity your application might need and provision server components based on it. In other words, your Infrastructure will start first before you application starts which is a notion we call - Infrastructure First. However this has few challenges.
+The infrastructure-first approach means estimating how much compute capacity your application might need and provisioning server components (EC2 instances) based on that estimate. In other words, your infrastructure starts before your application does, which is a notion we call Infrastructure First. However, this approach has a few challenges.
-In the existing architecture with EC2 ASG used for ECS Cluster, you provision the infrastructure first (i.e. EC2 ASG) which will create instances (i.e. capacity) and then run your application services/tasks on this capacity using the EC2 Launch Type. In this case, running any task/service fails if there are no instances (zero capacity) in the ASG.
+In the existing architecture with EC2 Auto Scaling groups used for your ECS cluster, you provision the infrastructure first, which creates EC2 instances, and then run (schedule) the ECS tasks on this capacity using the EC2 launch type. In this case, running any task/service fails if there are no instances (zero capacity) in the Auto Scaling group.
1. ECS is unaware of the EC2 ASGs. So there is disconnect between the application tasks resource requirements and EC2 ASG scale out/in policies. The ASG scale out/in polices are based on the tasks or instances which are already running in that cluster and does not account for the new application tasks which needs to be scheduled. This means ASG custom scaling policies may not scale out/in well as per the application requirements.
diff --git a/content/ecs-spot-capacity-providers/Introduction/ec2_spot_instances/_index.md b/content/ecs-spot-capacity-providers/Introduction/ec2_spot_instances/_index.md
new file mode 100644
index 00000000..4ded48d4
--- /dev/null
+++ b/content/ecs-spot-capacity-providers/Introduction/ec2_spot_instances/_index.md
@@ -0,0 +1,11 @@
++++
+title = "Amazon EC2 Spot Instances"
+weight = 15
++++
+
+[Amazon EC2 Spot Instances](https://aws.amazon.com/ec2/spot/) offer spare compute capacity available in the AWS Cloud at steep discounts compared to On-Demand prices. EC2 can interrupt Spot Instances with a two-minute notification when EC2 needs the capacity back. You can use Spot Instances for various fault-tolerant and flexible applications. Some examples are analytics, containerized workloads, high-performance computing (HPC), stateless web servers, rendering, CI/CD, and other test and development workloads.
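+
+As a minimal, hedged sketch (this is the standard EC2 instance metadata mechanism, not something specific to this workshop), an application or agent running on a Spot Instance can poll the instance metadata service for the two-minute interruption notice:
+
+```bash
+# Request an IMDSv2 token, then check for a pending Spot interruption.
+# The spot/instance-action path returns 404 until an interruption is scheduled.
+TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
+    -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
+curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
+    http://169.254.169.254/latest/meta-data/spot/instance-action
+```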
+
+### Spot Instances in Containerized workloads
+
+
+Many containerized workloads are stateless and fault tolerant, which makes them a great fit for running on EC2 Spot Instances. In this workshop we will explore how to run containers on interruptible EC2 Spot Instances and achieve significant cost savings.
\ No newline at end of file
diff --git a/content/ecs-spot-capacity-providers/WorkshopSetup/_index.md b/content/ecs-spot-capacity-providers/WorkshopSetup/_index.md
index a6d9d2ac..6a74ada0 100644
--- a/content/ecs-spot-capacity-providers/WorkshopSetup/_index.md
+++ b/content/ecs-spot-capacity-providers/WorkshopSetup/_index.md
@@ -8,11 +8,9 @@ Launch the CloudFormation stack
To save time on the initial setup, a CloudFormation template will be used to create the required resources needed for the workshop.
-To create the stack
-
1. You can view and download the CloudFormation template from GitHub [here, Change location before making it live] (https://github.com/ec2-spot-workshops/workshops/ecs-spot-capacity-providers/ecs-spot-workshop-cfn.yaml).
2. Take a moment to review the CloudFormation template so you understand the resources it will be creating.
-3. Browse to the [AWS CloudFormation console] (https://console.aws.amazon.com/cloudformation). Make sure you are in AWS Region designated by the facilitators of the workshop
+3. Browse to the [AWS CloudFormation console](https://console.aws.amazon.com/cloudformation). Make sure you are in the AWS region designated by the facilitators of the workshop.
4. Click *Create stack*.
5. In the *Specify template* section, select *Upload a template file*. Click *Choose file* and, select the template you downloaded in step 1.
6. Click *Next*.
@@ -35,19 +33,17 @@ The *Events* tab displays each major step in the creation of the stack sorted by
The *CREATE_IN_PROGRESS* event is logged when AWS CloudFormation reports that it has begun to create the resource. The *CREATE_COMPLETE* event is logged when the resource is successfully created.
When AWS CloudFormation has successfully created the stack, you will see the *CREATE_COMPLETE* event at the top of the Events tab:
-Take a moment and checkout all the resources created by this stack.
+Take a moment and check out all the resources created by this stack.
![Cloud Formation Stack](/images/ecs-spot-capacity-providers/stack1.png)
-The Cloud formation creates the following Resources which we will be using later during the workshop.
-
-
-* One VPC with 3 public and 3 private subnets
-* Application Load Balancer (ALB) with its own security group
-* Target Group(TG) and an ALB listener to forward the traffic to this TG
-* IAM Role for Cloud 9 Environment
-* Security Group for ECS Container Instance
-* EC2 Launch Template with ECS optimized AMI and required ECS config in the user data section to bootstrap the instance
-* ECR Repository
+The CloudFormation template creates the following resources, which we will use later during the workshop:
+* One VPC with 3 public and 3 private subnets.
+* Application Load Balancer (ALB) with its own security group.
+* Target Group (TG) and an ALB listener to forward the traffic to this TG.
+* IAM Role for the Cloud9 Environment.
+* Security Group for the ECS Container Instances.
+* EC2 Launch Template configured with the ECS-optimized AMI and the ECS configuration in the user data section to bootstrap the EC2 instances into the ECS cluster.
+* ECR Repository to host our container images.
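+
+If you prefer to launch the stack from the command line instead of the console, a hedged sketch looks like the following (the stack name and local file name are assumptions; the template creates IAM roles, so the IAM capability must be acknowledged):
+
+```bash
+# Create the workshop stack from the downloaded template and wait for it to finish
+aws cloudformation create-stack --stack-name EcsSpotWorkshop \
+    --template-body file://ecs-spot-workshop-cfn.yaml \
+    --capabilities CAPABILITY_NAMED_IAM
+aws cloudformation wait stack-create-complete --stack-name EcsSpotWorkshop
+```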
\ No newline at end of file
diff --git a/content/ecs-spot-capacity-providers/WorkshopSetup/attach_iam_role.md b/content/ecs-spot-capacity-providers/WorkshopSetup/attach_iam_role.md
index 53d1209d..9f82a4b1 100644
--- a/content/ecs-spot-capacity-providers/WorkshopSetup/attach_iam_role.md
+++ b/content/ecs-spot-capacity-providers/WorkshopSetup/attach_iam_role.md
@@ -3,14 +3,11 @@ title: "Attach the IAM role to your Workspace"
weight: 10
---
-Attach the IAM role for your Workspace
----
-
-In order to work with ECS from our workstation, we will need the appropriate permissions for our developer workstation instance.
+In order to work with ECS from our new Cloud9 IDE environment, we will need the appropriate permissions.
-* Find your Cloud9 EC2 instance from [here] (https://console.aws.amazon.com/ec2/v2/home?#Instances)
+* Find your Cloud9 EC2 instance [here](https://console.aws.amazon.com/ec2/v2/home?#Instances)
-* Select the instance, then choose Actions / Instance Settings / Attach/Replace IAM Role
+* Select the instance, then choose Actions -> Instance Settings -> Attach/Replace IAM Role
* Choose **EcsSpotWorkshop-Cloud9InstanceProfile** from the *IAM Role* drop down, and select *Apply*
![Attach IAM Role](/images/ecs-spot-capacity-providers/c9_1.png)
@@ -29,7 +26,7 @@ Use the [GetCallerIdentity] (https://docs.aws.amazon.com/cli/latest/reference/st
aws sts get-caller-identity
```
-The output assumed-role name should contain:
+The assumed-role ARN in the output should contain the name of the role:
```
{
@@ -37,5 +34,4 @@ The output assumed-role name should contain:
"Account": "0004746XXXXX",
"Arn": "arn:aws:sts::0004746XXXXX:assumed-role/EcsSpotWorkshop-Cloud9InstanceRole/i-0eedc304975256fac"
}
-```
-
+```
\ No newline at end of file
diff --git a/content/ecs-spot-capacity-providers/WorkshopSetup/cli_setup.md b/content/ecs-spot-capacity-providers/WorkshopSetup/cli_setup.md
index ec1bbe9a..96914942 100644
--- a/content/ecs-spot-capacity-providers/WorkshopSetup/cli_setup.md
+++ b/content/ecs-spot-capacity-providers/WorkshopSetup/cli_setup.md
@@ -58,4 +58,4 @@ do
done
```
-***Congratulations !!!*** Now you are done with workspace setup, Proceed to Module-1 of this workshop.
\ No newline at end of file
+***Congratulations***, your Cloud9 workspace setup is complete, and you can proceed to the next steps of this workshop.
\ No newline at end of file
diff --git a/content/ecs-spot-capacity-providers/WorkshopSetup/workspace1.md b/content/ecs-spot-capacity-providers/WorkshopSetup/workspace1.md
index 2dcfabe0..3da9248f 100644
--- a/content/ecs-spot-capacity-providers/WorkshopSetup/workspace1.md
+++ b/content/ecs-spot-capacity-providers/WorkshopSetup/workspace1.md
@@ -4,18 +4,16 @@ weight: 5
---
-If you are running the workshop on your own, the Cloud9 workspace should be built by an IAM user with Administrator privileges, not the root account user. Please ensure you are logged in as an IAM user, not the root account user.
+If you are running the workshop on your own, the Cloud9 workspace should be built by an IAM user with Administrator privileges, not the root account user. Please ensure you are logged in as an IAM user.
We will create a Cloud9 environment first to execute all the commands needed for this workshop.
-1. Login into AWS console with your account credentials
-1. When working on AWS provided account your facilitator provides which Region to choose.
-1. On your own AWS account, select any region of your choice
-1. Select **Services** and type cloud9
+1. Log in to the AWS console with your account credentials and choose the region where you deployed the CloudFormation template.
+1. Select **Services** and type Cloud9
1. Select **Create environment**
-1. Name it **ecsspotworkshop**. Click " **Next Step**", keep all other defaults and click " **Next Step**". keep all other defaults and click " **Create Environment**"
-1. When it comes up, customize the environment by closing the **welcome tab** and **lower work area** , and opening a new **terminal** tab in the main work area:
-1. If you like this theme, you can choose it yourself by selecting **View / Themes / Solarized / Solarized Dark** in the Cloud9 workspace menu.
+1. Name it **ecsspotworkshop**. Click "**Next Step**", keep all other defaults and click "**Next Step**". Keep all other defaults and click "**Create Environment**".
+1. When it comes up, customize the environment by closing the **welcome tab** and **lower work area**, and opening a new **terminal** tab in the main work area:
+1. If you like the dark theme seen below, you can choose it yourself by selecting **View / Themes / Solarized / Solarized Dark** in the Cloud9 workspace menu.
#### Your workspace should now look like this:
diff --git a/content/ecs-spot-capacity-providers/_index.md b/content/ecs-spot-capacity-providers/_index.md
index 462556ac..2ef71648 100644
--- a/content/ecs-spot-capacity-providers/_index.md
+++ b/content/ecs-spot-capacity-providers/_index.md
@@ -7,9 +7,9 @@ pre: "⁃ "
## Overview
-Welcome! In this workshop you learn how to **cost optimize** running a sample container based web application, using Amazon ECS and EC2 Spot Instances.
+Welcome! In this workshop you will learn how to **cost optimize** your Amazon ECS clusters using EC2 Spot Instances.
-The **learning objective** of this hands-on workshop is to help understand the different options to cost optimize container workloads running on **[Amazon ECS](https://aws.amazon.com/ecs/)** using **[EC2 Spot Instances](https://aws.amazon.com/ec2/spot/)** and **[Amazon Fargate Spot](https://aws.amazon.com/fargate/)**. This workshop covers topics such as ECS Cluster Auto scaling and how to use scale efficiently with **[Capacity Providers](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/cluster-capacity-providers.html)** to spread your tasks across a mix of resources, both on AWS Fargate and AWS Fargate Spot and EC2 OnDemand and Spot Instances.
+The **learning objective** of this hands-on workshop is to help you understand the different options to cost optimize container workloads running on **[Amazon ECS](https://aws.amazon.com/ecs/)** using **[EC2 Spot Instances](https://aws.amazon.com/ec2/spot/)** and **[Amazon Fargate Spot](https://aws.amazon.com/fargate/)**. This workshop covers topics such as ECS cluster auto scaling and how to scale efficiently with **[Capacity Providers](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/cluster-capacity-providers.html)** to spread your tasks across a mix of resources, both on AWS Fargate and AWS Fargate Spot as well as EC2 On-Demand and Spot Instances.
{{% children showhidden="false" %}}
@@ -17,9 +17,9 @@ The **learning objective** of this hands-on workshop is to help understand the d
The estimated time for completing the workshop is **90 to 120 minutes**. The estimated cost will be less than **$5**.
{{% /notice %}}
-These labs are designed to be completed in sequence. If you are reading this at a live AWS event, the workshop attendants will give you a high level run down of the labs. Then it's up to you to follow the instructions below to complete the labs. Don't worry if you're embarking on this journey in the comfort of your office or home, this site contains all the materials for you'll need to complete this workshop.
+The workshop is designed to be completed in sequence. If you are reading this at a live AWS event, the workshop facilitators will give you a high-level rundown of the workshop. Then it's up to you to follow the instructions below to completion. Don't worry if you're embarking on this journey in the comfort of your office or home; this site contains all the materials you'll need to complete this workshop.
### About Spot Instances in Containerized workloads
-Many containerized workloads are usually stateless and fault tolerant and are great fit for running them on EC2 Spot. In this workshop we will explore how to run containers on interruptible EC2 Spot instances and save significantly compared to running them on EC2 On-Demand instances.
+Many containerized workloads are stateless and fault tolerant, which makes them a great fit for running on EC2 Spot Instances. In this workshop we will explore how to run containers on interruptible EC2 Spot Instances and achieve significant cost savings.
diff --git a/content/ecs-spot-capacity-providers/architecture.md b/content/ecs-spot-capacity-providers/architecture.md
index 22c474a1..b2499ac9 100644
--- a/content/ecs-spot-capacity-providers/architecture.md
+++ b/content/ecs-spot-capacity-providers/architecture.md
@@ -7,13 +7,13 @@ weight: 4
Your Challenge
---
-Your company hosts an external facing Apache web server serving millions of users across the globe. The web servers are based on the micro service architecture and running as docker containers on AWS ECS Cluster. The underlying computing platform/dataplane for the ECS Cluster is completely based on EC2 on-demand instances. The current auto scale in/out of the EC2 instances is based on the vCPU based metrics. However it is observed that ECS Cluster does not scale fast enough to handle the sudden surge of web traffic during peak hours. And during the scale in, sometimes EC2 instances that are actively running ECS tasks are getting terminated, causing disruption to the web service.
+Your company hosts an external-facing Apache web server serving millions of users across the globe, based on a microservice architecture and running inside Docker containers on an Amazon ECS cluster. The underlying compute for the ECS cluster is based entirely on EC2 On-Demand Instances. The current auto scale in/out of the EC2 instances is based on vCPU utilization metrics. However, the ECS cluster does not scale fast enough to handle the sudden surge of web traffic during peak hours, and during scale in, EC2 instances that are actively running ECS tasks are sometimes terminated, causing disruption to the web service.
-Along with faster service response time, the company is also looking to optimize costs. Also as a long term strategy, your company does not want invest resources in undifferentiated heavy lifting such as managing the underlying computing infrastructure. Also, wants to leverage any serverless options and focus on their business critical applications.
+Along with faster service response times, the company is also looking to optimize costs. As a long-term strategy, your company does not want to invest resources in undifferentiated heavy lifting such as managing the underlying compute infrastructure. The company also wants to evaluate running some of the containerized workloads on a serverless container platform, to further focus on the application and not the infrastructure.
-You were introduced to Amazon EC2 Spot Instances and a few ECS features that can improve autoscaling configuration and efficiency. You were asked by your manager to re-architect the existing the solution with EC2 Spot and explore both EC2 Spot Instances and serverless options such as Fargate and Fargate Spot. Apart from cost optimization, you are also expected to solve the cluster scaling issues and increase the resilience of the application.
+You were introduced to Amazon EC2 Spot Instances and a few ECS features that can improve auto scaling configuration and efficiency. You were asked by your manager to re-architect the existing solution with EC2 Spot and explore both EC2 Spot Instances and serverless options such as Fargate and Fargate Spot. Apart from cost optimization, you are also expected to solve the cluster scaling issues and increase the resilience of the application.
-What are the various options do you have to incorporate Spot Instances in your solution?
+What are the various options you have to incorporate Spot Instances in your architecture?
How do you decide which one is the right solution for the workload? How do you plan to fix the scaling issue?
Here is the overall architecture of what you will be building throughout this workshop. By the end of the workshop, you will achieve the following
diff --git a/content/ecs-spot-capacity-providers/before/_index.md b/content/ecs-spot-capacity-providers/before/_index.md
index afdb4391..806e60ca 100644
--- a/content/ecs-spot-capacity-providers/before/_index.md
+++ b/content/ecs-spot-capacity-providers/before/_index.md
@@ -9,6 +9,4 @@ To start the workshop, follow one of the following pages, depending on whether y
{{% children %}}
-Once you are done with either setup, continue with [**Modules**](/ecs-spot-capacity-providers/modules.html)
-
diff --git a/content/ecs-spot-capacity-providers/before/aws_event/_index.md b/content/ecs-spot-capacity-providers/before/aws_event/_index.md
index 73f4f1ad..efcd411d 100644
--- a/content/ecs-spot-capacity-providers/before/aws_event/_index.md
+++ b/content/ecs-spot-capacity-providers/before/aws_event/_index.md
@@ -1,6 +1,6 @@
+++
title = "...At an AWS event"
-weight = 5
+weight = 1
+++
{{% notice warning %}}
@@ -15,11 +15,12 @@ If you are at an AWS event, an AWS acccount was created for you to use throughou
2. Enter the Hash in the text box, and click **Proceed**
3. In the User Dashboard screen, click **AWS Console**
4. In the popup page, click **Open Console**
-5. Select the AWS Region specified by your facilitator.
+5. Select the AWS region specified by your facilitator.
You are now logged in to the AWS console in an account that was created for you, and will be available only throughout the workshop run time.
-You can now start the workshop by heading to [**Modules**](/ecs-spot-capacity-providers/modules.html)
+You can now proceed to the workshop setup steps: [**Setup the workshop environment on AWS**](/ecs-spot-capacity-providers/workshopsetup.html)
+
Optional:
-If you want to read through basic concepts on Amazon ECS before doing hands-on Modules, you may go to [**Introduction**](/ecs-spot-capacity-providers/introduction.html)
\ No newline at end of file
+If you want to read through the basic concepts of Amazon ECS before doing the workshop steps, you may go to the [**Introduction**](/ecs-spot-capacity-providers/introduction.html) section.
\ No newline at end of file
diff --git a/content/ecs-spot-capacity-providers/before/self_paced/_index.md b/content/ecs-spot-capacity-providers/before/self_paced/_index.md
index ab4cc266..7c270eec 100644
--- a/content/ecs-spot-capacity-providers/before/self_paced/_index.md
+++ b/content/ecs-spot-capacity-providers/before/self_paced/_index.md
@@ -1,6 +1,6 @@
+++
-title = "...On your own(Self-paced)"
-weight = 5
+title = "...On your own (self-paced)"
+weight = 10
+++
### Running the workshop self-paced, in your own AWS account
@@ -8,6 +8,8 @@ weight = 5
To complete this workshop, have access to an AWS account with administrative permissions. An IAM user with administrator access (**arn:aws:iam::aws:policy/AdministratorAccess**) will do nicely.
-You can start the workshop by heading to [**Introduction**](/ecs-spot-capacity-providers/introduction.html)
+If you need an introduction to containers, Amazon ECS, and AWS Fargate, continue to the [**Introduction**](/ecs-spot-capacity-providers/introduction.html) section.
+
+Otherwise, you can go directly to the [**Setup the workshop environment on AWS**](/ecs-spot-capacity-providers/workshopsetup.html) section.
To avoid unwanted costs in your account, don't forget to go through the [**Cleanup step**](/ecs-spot-capacity-providers/cleanup.html) when you finish the workshop, or if you deploy the CloudFormation template but don't complete the workshop.
diff --git a/content/ecs-spot-capacity-providers/module-1/_index.md b/content/ecs-spot-capacity-providers/module-1/_index.md
index 793f62aa..f384a678 100644
--- a/content/ecs-spot-capacity-providers/module-1/_index.md
+++ b/content/ecs-spot-capacity-providers/module-1/_index.md
@@ -1,21 +1,19 @@
---
-title: "Module-1: Savings costs using EC2 spot with Auto Scaling Group Capacity Providers"
+title: "Cost optimizing ECS using Spot Instances with Auto Scaling groups Capacity Providers"
+chapter: true
weight: 20
---
-Amazon ECS Cluster Auto Scaling
----
+### Amazon ECS Cluster Auto Scaling
Amazon ECS cluster auto scaling enables you to have more control over how you scale tasks within a cluster. Each cluster has one or more capacity providers and an optional default capacity provider strategy. The capacity providers determine the infrastructure to use for the tasks, and the capacity provider strategy determines how the tasks are spread across the capacity providers. When you run a task or create a service, you may either use the cluster's default capacity provider strategy or specify a capacity provider strategy that overrides the cluster's default strategy
-Amazon ECS Capacity Providers
----
+### Amazon ECS Capacity Providers
Amazon ECS capacity providers use EC2 Auto Scaling groups to manage the Amazon EC2 instances registered to their clusters.
-Amazon ECS Capacity Provider - Managed Scaling
----
+### Amazon ECS Capacity Provider - Managed Scaling
When creating a capacity provider, you can optionally enable managed scaling. When managed scaling is enabled, Amazon ECS manages the scale-in and scale-out actions of the Auto Scaling group. On your behalf, Amazon ECS creates an AWS Auto Scaling scaling plan with a target tracking scaling policy based on the target capacity value you specify. Amazon ECS then associates this scaling plan with your Auto Scaling group. For each of the capacity providers with managed scaling enabled, an Amazon ECS managed CloudWatch metric with the prefix AWS/ECS/ManagedScaling is created along with two CloudWatch alarms. The CloudWatch metrics and alarms are used to monitor the container instance capacity in your Auto Scaling groups and will trigger the Auto Scaling group to scale in and scale out as needed.
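+
+As a hedged illustration of the concepts above (the capacity provider names and base/weight values mirror the strategy used later in this module, while the workshop itself applies an equivalent strategy at the service level), associating capacity providers with a cluster and setting a default capacity provider strategy from the CLI could look like this:
+
+```bash
+# Associate two existing capacity providers with the cluster and define a default strategy:
+# at least 2 tasks on CP-OD, then 1 task on CP-OD for every 3 tasks on CP-SPOT
+aws ecs put-cluster-capacity-providers --cluster EcsSpotWorkshop \
+    --capacity-providers CP-OD CP-SPOT \
+    --default-capacity-provider-strategy \
+        capacityProvider=CP-OD,weight=1,base=2 \
+        capacityProvider=CP-SPOT,weight=3
+```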
diff --git a/content/ecs-spot-capacity-providers/module-1/application_view.md b/content/ecs-spot-capacity-providers/module-1/application_view.md
index ff4dc4bf..13ea31fd 100644
--- a/content/ecs-spot-capacity-providers/module-1/application_view.md
+++ b/content/ecs-spot-capacity-providers/module-1/application_view.md
@@ -1,25 +1,25 @@
---
-title: "Explore ECS Service-Test web application"
+title: "Explore the service"
weight: 65
---
-Let’s first check if our application is up and running fine. Go to the Target Group in the AWS console. check click on the targets. Ensure that all the targets are healthy.
+In this step, we check that our application is available and see how our tasks were distributed across our ECS container instances.
-Get the DNS name of the Applicaton Load Balancer from the output section of the Cloud formation stack.
+Get the DNS name of the Application Load Balancer from the output section of the CloudFormation stack.
![Get DNS](/images/ecs-spot-capacity-providers/CFN.png)
-Open a brower tab and enter this DNS Name. You should see a simple web page displaying various useful info about the underlyong infrastucture used to run this application inside a docker container.
+Open a browser tab and enter this URL. You should see a simple web page displaying various useful info about the underlying infrastructure used to run this application inside a Docker container.
![Application](/images/ecs-spot-capacity-providers/app.png)
-As you keep refresh the web page, you will notice that some of the above data changing as ALB keeps routing the requests to different docker containers across the CPs in the ECS Cluster.
+As you keep refreshing the web page, you will notice some of the above data changing as the ALB keeps routing requests to different tasks across the instances in the ECS cluster.
-Now let’s check if the tasks are distributed on on-demand and spot Capacity Providers as per the strategy.
+Now let's check if the tasks are distributed on On-Demand and Spot Capacity Providers according to the Capacity Provider Strategy that we configured.
Run the below command to see how tasks are distributed across the Capacity Providers.
-```
+```bash
export cluster_name=EcsSpotWorkshop
export service_name=ec2-service-split
aws ecs describe-tasks \
@@ -30,10 +30,8 @@ aws ecs describe-tasks \
--output table
```
-The output of the above command should display a table like this below.
+You will see a result table similar to the one below:
![Results Table](/images/ecs-spot-capacity-providers/table.png)
-What did you notice? Do you have an explanation for the above distribution of 4 tasks on CP-OD and 6 on CP-SPOT? Take a guess before reading further.
-
-Alternative let’s look at the C3VIS dashboard to see the visual representation of the ECS cluster and the distribution of tasks on different CPs. Before you see the visual representation, try calculating yourself what the task distribution would be as per the CPS? Notice the CPS used for this service is CP-OD,base=2,weight=1, CP-SPOT,weight=3
\ No newline at end of file
+Does the split between CP-OD and CP-SPOT adhere to our Capacity Provider Strategy? Move to the next step in the workshop to dive deeper into the result.
\ No newline at end of file
diff --git a/content/ecs-spot-capacity-providers/module-1/asg_with_od.md b/content/ecs-spot-capacity-providers/module-1/asg_with_od.md
index d9260105..91f5ab47 100644
--- a/content/ecs-spot-capacity-providers/module-1/asg_with_od.md
+++ b/content/ecs-spot-capacity-providers/module-1/asg_with_od.md
@@ -1,33 +1,32 @@
---
-title: "Create an Auto Scaling Group (ASG) with EC2 On-Demand Instances"
+title: "Create an Auto Scaling group with EC2 On-Demand Instances"
weight: 10
---
-In this section, we will create an EC2 Auto Scaling Group for On-Demand Instances using the Launch Template created in previous section.
+In this section, we will create an EC2 Auto Scaling group (ASG) for On-Demand Instances using the Launch Template created in the previous section.
-Copy the file **templates/asg.json** for the Auto scaling group configuration.
+Copy the file **templates/asg.json** for the Auto Scaling group configuration.
-```
+```bash
cp templates/asg.json ./asg_od.json
```
-Take a moment to look at the user asg_od.json to see various configuration options in the ASG.
+Take a moment to look at the **asg_od.json** file to see various configuration options for the EC2 Auto Scaling group.
-Set the following commands to set variables and substitute them in the template
+Run the following commands to set variables and substitute them in the template
-```
+```bash
export ASG_NAME=ecs-spot-workshop-asg-od
export OD_PERCENTAGE=100 # Note that ASG will have 100% On-Demand, 0% Spot
sed -i -e "s#%ASG_NAME%#$ASG_NAME#g" -e "s#%OD_PERCENTAGE%#$OD_PERCENTAGE#g" -e "s#%PUBLIC_SUBNET_LIST%#$VPCPublicSubnets#g" asg_od.json
```
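+
+Before creating the ASG, and assuming the substituted file follows the standard `create-auto-scaling-group` input shape with a mixed instances policy (the field names below are the generic Auto Scaling API fields, not copied from the workshop template), you can optionally inspect the key settings with `jq`:
+
+```bash
+# Show the On-Demand/Spot split and the instance type overrides
+jq '.MixedInstancesPolicy.InstancesDistribution' asg_od.json
+jq '.MixedInstancesPolicy.LaunchTemplate.Overrides' asg_od.json
+
+# Show the capacity settings
+jq '{MinSize, MaxSize, DesiredCapacity}' asg_od.json
+```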
-Create the ASG for the On Demand Instances
-
-```
-aws autoscaling create-auto-scaling-group --cli-input-json file://asg_od.json
-```
-The output of the above command looks like below
+Create the ASG for the On-Demand Instances
+```bash
+aws autoscaling create-auto-scaling-group --cli-input-json file://asg_od.json
```
+The output of the above command looks like the below:
+```plaintext
EcsSpotWorkshop-ASG-OD ARN=arn:aws:autoscaling:us-east-1:000474600478:autoScalingGroup:1e9de503-068e-4d78-8272-82536fc92d14:autoScalingGroupName/EcsSpotWorkshop-ASG-OD
```
The above auto scaling group looks like below in the console
diff --git a/content/ecs-spot-capacity-providers/module-1/asg_with_spot.md b/content/ecs-spot-capacity-providers/module-1/asg_with_spot.md
index d55d0ec3..6a610d31 100644
--- a/content/ecs-spot-capacity-providers/module-1/asg_with_spot.md
+++ b/content/ecs-spot-capacity-providers/module-1/asg_with_spot.md
@@ -3,54 +3,56 @@ title: "Create an Auto Scaling Group (ASG) with EC2 Spot Instances"
weight: 15
---
-In this section, let us create an Auto Scaling group for EC2 Spot Instances using the Launch Template created in previous section. This procedure is exactly same as the previous section except the few changes specific to the configuration for EC2 Spot instances.
+In this section, you create an Auto Scaling group for EC2 Spot Instances using the Launch Template created in the previous section. This procedure is exactly the same as in the previous section, except for a few changes specific to the configuration for Spot Instances.
-One of the best practices for adoption of Spot Instances is to diversify the EC2 instances across different instance types and availability zones, in order to tap into multiple spare capacity pools. The ASG currently will support up to 20 different instance type configurations for diversification.
+One of the best practices for adopting Spot Instances is to diversify the EC2 instances across different instance types and Availability Zones, in order to tap into multiple spare capacity pools.
-One key criteria for choosing the instance size can be based on the ECS Task vCPU and Memory limit configuration. For example, look at the ECS task resource limits in the file **webapp-ec2-task.json**
+One key criterion for choosing the instance size is the ECS task vCPU and memory limit configuration. For example, look at the ECS task resource limits in the file **webapp-ec2-task.json**:
+```plaintext
_**"cpu": "256", "memory": "1024"**_
+```
-This means the ratio for vCPU:Memory is **1:4**. So it would be ideal to select instance size which satisfy this criteria. The instance lowest size which satisfy this critera are of large size. Please note there may be bigger sizes which satisfy 1:4 ratio. But in this workshop, let's select the smallest size i.e. large to illustrate the aspect of EC2 spot diversification.
+This means the vCPU:Memory ratio of the ECS task that will run in the cluster is **1:4**. Ideally, we should select instance types with a similar vCPU:Memory ratio, in order to have good utilization of the resources in the EC2 instances. The smallest instance type from the latest generation of x86_64 EC2 instance types that satisfies this criterion is m5.large. To learn more about EC2 instance types, click [here](https://aws.amazon.com/ec2/instance-types/).
-So let's select different instance types and generations for large size using the Instance Types console within the AWS EC2 console as follows.
+To adhere to EC2 Spot best practices and diversify our use of instance types (so we can tap into multiple spare capacity pools), we can use the EC2 Instance Types console to find instance types that have hardware characteristics similar to the m5.large. The t2 & t3 instance types are burstable instance types, which also fit our objective in this workshop. To learn more about EC2 burstable instance types, click [here](https://aws.amazon.com/ec2/instance-types/t3/).
![OD ASG](/images/ecs-spot-capacity-providers/ec1.png)
![OD ASG](/images/ecs-spot-capacity-providers/ec2.png)
![OD ASG](/images/ecs-spot-capacity-providers/ec3.png)
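+
+As a hedged alternative to browsing the console, you can query for instance types with the same vCPU and memory footprint as the m5.large (2 vCPUs, 8 GiB) directly from the CLI; the filter names below are the documented `describe-instance-types` filters:
+
+```bash
+# List current-generation instance types with 2 vCPUs and 8 GiB (8192 MiB) of memory
+aws ec2 describe-instance-types \
+    --filters "Name=vcpu-info.default-vcpus,Values=2" \
+              "Name=memory-info.size-in-mib,Values=8192" \
+              "Name=current-generation,Values=true" \
+    --query "InstanceTypes[].InstanceType" --output table
+```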
-We selected 10 different instant types as seen asg.json but you can configure up to 20 different instance types in an Autoscaling group.
+We selected 10 different instance types, as can be seen in asg.json, but you can configure up to 20 different instance types in an ASG. We chose instance types with similar hardware characteristics in order to have a consistent auto scaling experience.
-Copy the file **templates/asg.json** for the Auto scaling group configuration.
+Copy the file **templates/asg.json** for the Auto Scaling group configuration.
-```
+```bash
cp templates/asg.json ./asg_spot.json
```
Take a moment to look at the user asg_spot.json to see various configuration options in the ASG.
-Set the following commands to set variables and substitute them in the template
+Run the following commands to set variables and substitute them in the template
-```
+```bash
export ASG_NAME=ecs-spot-workshop-asg-spot
export OD_PERCENTAGE=0 # Note that ASG will have 0% On-Demand, 100% Spot
sed -i -e "s#%ASG_NAME%#$ASG_NAME#g" -e "s#%OD_PERCENTAGE%#$OD_PERCENTAGE#g" -e "s#%PUBLIC_SUBNET_LIST%#$PUBLIC_SUBNET_LIST#g" -e "s#%SERVICE_ROLE_ARN%#$SERVICE_ROLE_ARN#g" asg_spot.json
```
-Create the Auto scaling group for EC2 spot
+Create the Auto Scaling group that will run Spot Instances in our cluster
-```
+```bash
aws autoscaling create-auto-scaling-group --cli-input-json file://asg_spot.json
ASG_ARN=$(aws autoscaling describe-auto-scaling-groups --auto-scaling-group-name $ASG_NAME | jq -r '.AutoScalingGroups[0].AutoScalingGroupARN')
echo "$ASG_NAME ARN=$ASG_ARN"
```
-The output for the above command looks like this
+The output of the above command looks like the below:
-```
+```plaintext
EcsSpotWorkshop-ASG-SPOT ARN=arn:aws:autoscaling:us-east-1:000474600478:autoScalingGroup:dd7a67e0-4df0-4cda-98d7-7e13c36dec5b:autoScalingGroupName/EcsSpotWorkshop-ASG-SPOT
```
-The above auto scaling looks like below in console
+Your Auto Scaling group should look like this in the AWS Management Console:
![Spot ASG](/images/ecs-spot-capacity-providers/22.png)
\ No newline at end of file
diff --git a/content/ecs-spot-capacity-providers/module-1/cloudwatch_dashboard.md b/content/ecs-spot-capacity-providers/module-1/cloudwatch_dashboard.md
index 858fd2b9..60058902 100644
--- a/content/ecs-spot-capacity-providers/module-1/cloudwatch_dashboard.md
+++ b/content/ecs-spot-capacity-providers/module-1/cloudwatch_dashboard.md
@@ -1,5 +1,5 @@
---
-title: "Create Cloudwatch Dashboard to view key metrics of the ECS Cluster"
+title: "Create a Cloudwatch Dashboard to view key metrics of the ECS Cluster"
weight: 40
---
@@ -12,22 +12,27 @@ aws cloudwatch put-dashboard --dashboard-name EcsSpotWorkshop --dashboard-body f
```
The output of the command looks like below
+```plaintext
{
"DashboardValidationMessages": []
}
+```
In the [AWS Cloudwatch console] (https://console.aws.amazon.com/cloudwatch/home?region=us-east-1#) select this newly created dashboard, drag it right/down to expand to view the graphs properly and save the dashboard.
![Cloud Watch](/images/ecs-spot-capacity-providers/cwt4.png)
-Now observer the initial values for the CPR metric for both ASG CPs i.e. CP-OD and CP-SPOT. What values do you expect for both them initially when they are no tasks/instances running in the Cluster. Make a guess before you see next graph.
+{{%expand "Question: What are the values for the cluster's ManagedScaling metrics when there are no tasks/instances running in the cluster, and why? Click to expand the answer." %}}
![CPR Metric](/images/ecs-spot-capacity-providers/CP3.png)
-Why do you think both values are 100 intially?
+Why are the values 100?
+Consider the Managed Scaling formula: **Capacity Provider Reservation = M/N * 100**.
+
+As explained in the introduction section of the workshop, M is just a proportion value relative to N. As this is a brand new cluster, there are no instances running in the cluster, meaning N = 0. M is also calculated as zero because there is no need for capacity to facilitate tasks (since none are running). In other words, the capacity that the cluster requested (M=0) is identical to the available capacity (N=0), which means that the Capacity Providers satisfy the target capacity value of 100. As a concrete example of a different state, if the tasks in the cluster needed 5 instances (M=5) while only 4 were running (N=4), the metric would be 125, and managed scaling would scale the ASG out until M and N match again.
-Well, let’s re-look at the formula once again. CPR = M/N * 100. As explained earlier, M is just a relative propotion value to N. As it is a brand new cluster, there are no instances i.e. N = 0. M is also caluclated as zero. In other words, what the cluster needed (M=0) is same what is available capacity (N=0) which means CP satisfy the TC value of 100.
+For more details on how ECS cluster auto scaling works, refer to this [blog](https://aws.amazon.com/blogs/containers/deep-dive-on-amazon-ecs-cluster-auto-scaling/).
-For more details on the how the ECS cluster autoscaler works, refer to ths [blog] (https://aws.amazon.com/blogs/containers/deep-dive-on-amazon-ecs-cluster-auto-scaling/).
+{{% /expand%}}
-Now let’s us deploy some tasks on this cluster and see the CP Managed Scaling in Action !!!
+Continue to the next step in the workshop to start deploying tasks to the cluster and see ECS Managed Scaling in action.
\ No newline at end of file
diff --git a/content/ecs-spot-capacity-providers/module-1/cluster_visualizer.md b/content/ecs-spot-capacity-providers/module-1/cluster_visualizer.md
index 68df42b7..f6386910 100644
--- a/content/ecs-spot-capacity-providers/module-1/cluster_visualizer.md
+++ b/content/ecs-spot-capacity-providers/module-1/cluster_visualizer.md
@@ -1,74 +1,70 @@
---
-title: "Explore ECS Service-ECS Cluster visualizer by C3Vis"
+title: "Explore the ECS service with C3Vis"
weight: 70
---
-Refersh the C3Vis page by clicking on the “Roload Server Cache” and click CPU metric. You will see something like below.
+Refresh the C3Vis page by clicking on the “Reload Server Cache” button and click the CPU metric. Your result should be similar to the below:
![Visualize](/images/ecs-spot-capacity-providers/cp13.png)
-Please note that the exact distribution of tasks across instances within a CP-OD or CP-SPOT may be different in than what is shown above, depending upon when the instances are ready for task placement.
-But ECS will respect the CP strategy w.r.t the number of tasks to be placed on CP-OD and CP-SPOT.
+Please note that the exact distribution of tasks across instances within the Capacity Providers may be different from what is shown above, depending on when the instances were ready for task placement.
+ECS will respect the CP strategy for the number of tasks to be placed on CP-OD and CP-SPOT.
-What did you notice in the above placement? Well, there are 4 instances running 14 tasks including 10 application tasks (pink color) and 4 cwt container insights daemons (blue color)
+What did you notice in the above placement? Well, there are 4 instances running 14 tasks, including 10 application tasks (pink color) and 4 CloudWatch Container Insights Daemons (blue color).
+
+But how are these 10 tasks distributed across the CPs? We only see the instances' IP addresses in this tool. To check which CP (OD or SPOT) launched an instance, right-click on the IP address and select the option “Open ECS container instance console”, which will send you back to the ECS console's Task page.
-But how are these 10 tasks distributed across CPs? We see only IP address of the instances in this tool. To check which CP (OD or SPOT) does this instance belongs to, right click on the IP address and select the option “Open ECS container instance console”
![Visualize](/images/ecs-spot-capacity-providers/cp16.png)
![Visualize](/images/ecs-spot-capacity-providers/cp17.png)
-So the instance with IP 10.0.171.50 belongs to CP-OD and runs 3 tasks. Now let’s check all other instance details
+In this example, the instance with IP 10.0.171.50 belongs to CP-OD and runs 3 tasks. Now let’s check all the other instance details.
![Visualize](/images/ecs-spot-capacity-providers/cp18.png)
![Visualize](/images/ecs-spot-capacity-providers/cp20.png)
![Visualize](/images/ecs-spot-capacity-providers/cp21.png)
-Now let’s label these instances and summarize our finding in this table for easy understanding
+Summarizing the result:
![Summary](/images/ecs-spot-capacity-providers/summary.png)
------
-Can you explain why CP-SPOT has 6 and CP-OD has only 4?
+{{%expand "Question: Why were 6 tasks launched on CP-Spot and 4 tasks on CP-OD?" %}}
-Let’s understand the CPS first. CP-OD,base=2,weight=1, CP-SPOT,weight=3
-CP-OD has base=2 which means at CP-OD should always have min of 2 tasks running first. This can be transalted to the bare minimum required number of tasks needed on on-demand to support your business critical application services. So ECS first assigns 2 tasks out of 10 to CP-OD as per the base parameter value.
-Then the remaining 8 (10-2) will distributed according to the weights. CP-OD weight is 1 and CP-SPOT weight is 3. That means for every 1 task assigned to CP-OD, 3 will be assigned to CP-SPOT. This translated to 2 to CP-OD and 6 to CP-SPOT.
+Quick reminder about our Capacity Provider Strategy: CP-OD base=2, weight=1; CP-SPOT weight=3.
+CP-OD has base=2, which means CP-OD should always run a baseline of 2 tasks first. This can be translated to the bare minimum number of tasks needed to support your business-critical application services. So ECS first assigns 2 tasks out of 10 to CP-OD as per the base parameter value.
+Then the remaining 8 tasks will be distributed according to the weights. CP-OD weight is 1 and CP-SPOT weight is 3. That means that for every 1 task assigned to CP-OD, 3 will be assigned to CP-SPOT. This is translated to 2 to CP-OD and 6 to CP-SPOT.
-Now let’s look the new values of CPR in the CWT dashboard for these 2 CPs and also look at the other metrics w.r.t number of instances and tasks.
+Now let’s look at the new values of Capacity Provider Reservation in the CloudWatch dashboard for these 2 CPs, and also look at the other metrics around the number of instances and tasks.
![CPR](/images/ecs-spot-capacity-providers/cp24.png)
-So Why do you think CPR is changed from 200 to 100? As you can guess, the value of M is 4 which is same as N value which is 4, hence CPR is 100 which means cluster is stable. You can also notice the graph reflect the change in number of tasks and instances.
-
+So why do you think the CPR changed from 200 to 100? As you can guess, the value of M is 4, which is the same as the value of N (also 4), hence the Capacity Provider Reservation is 100, which means that all the capacity required to run the ECS tasks is fulfilled. You can also notice the graph reflecting the change in the number of tasks and instances.
+{{% /expand%}}
-Now let’s do some the scale in actions on this Cluster by reducing the number of tasks from 10 to 6.
-
-Optional Exercise:
-try scale down the service by reducing task count from 10 to 6
+Now let’s test the scale in behavior on this cluster by reducing the number of tasks from 10 to 6.
-```
+```bash
aws ecs update-service --cluster EcsSpotWorkshop \
--service ec2-service-split --desired-count 6
```
-What do you think should happen now w.r.t CPR and task distribution? Let’s look at C3VIS tool again
+What would be the result of decreasing the desired count for the tasks in the service? Check C3Vis to see the result.
![Visualizer](/images/ecs-spot-capacity-providers/cp25.png)
-And the table looks below after the service scale in activity.
-
+And this is our result:
![Table](/images/ecs-spot-capacity-providers/table2.png)
-Can you explain why both CP-SPOT and CP-OD has 3 tasks each?
-Out of 6, CP-OD will have 2 as per base and remaining 4 (6-2) will be split with 1 on CP-OD and 3 on CP-SPOT.
+Out of 6 tasks remaining, CP-OD will have 2 as per the base configuration, and the remaining 4 tasks will be split with 1 on CP-OD and 3 on CP-SPOT.
-Did you notice service scale in does not translate to ASG scale in in this case i.e. no reduction in the number on instances in ASGs. This is because there are no idle instances without any tasks. Since there is no change in desired number of instances (i,e, M), so there is no change in CPR value.
+Did you notice that the service scale in does not translate to an ASG scale in in this case? There was no reduction in the number of instances in the ASGs. This is because there are no idle instances without any tasks. Since there is no change in the desired number of instances (M), there is no change in the Capacity Provider Reservation value.
Earlier the task distribution was SPOT1 → 3, SPOT2 → 3, OD1 → 2, OD2 → 2.
The new distribution is SPOT1 → 1, SPOT2 → 2, OD1 → 1, OD2 → 2.
@@ -124,4 +120,3 @@ The task distribution after the scale in activity looks like below.
Earlier the task distribution was SPOT1 → 1, SPOT2 → 2, OD1 → 1, OD2 → 2.
The new distribution is SPOT2 → 1, OD1 → 1, OD2 → 2.
-***Congratulations !!!*** you have successfully completed Module-1 and learnt how to create ASG CPs and schedule ECS services across Spot and On-demand CPs.
diff --git a/content/ecs-spot-capacity-providers/module-1/cp_with_ec2od.md b/content/ecs-spot-capacity-providers/module-1/cp_with_ec2od.md
index 2ccb98ea..cdea5ab1 100644
--- a/content/ecs-spot-capacity-providers/module-1/cp_with_ec2od.md
+++ b/content/ecs-spot-capacity-providers/module-1/cp_with_ec2od.md
@@ -1,32 +1,31 @@
---
-title: "Create a Capacity Provider using ASG with EC2 On-demand instances."
+title: "Create a Capacity Provider using ASG with EC2 On-demand instances"
weight: 11
---
To create the CP, follow these steps:
-* Open the [ECS console] (https://console.aws.amazon.com/ecs/home) in the region where you are looking to launch your cluster.
+* Open the [ECS console](https://console.aws.amazon.com/ecs/home) in the region where you deployed the CloudFormation template.
* Click *Clusters*
-* Click [EcsSpotWorkshop] (https://console.aws.amazon.com/ecs/home?region=us-east-1#/clusters/EcsSpotWorkshop)
-* Click on the tab *Capacity Providers*
-* Click on the *Create*
+* Click [EcsSpotWorkshop](https://console.aws.amazon.com/ecs/home#/clusters/EcsSpotWorkshop)
+* Click the tab *Capacity Providers*
+* Click *Create*
* For Capacity provider name, enter *CP-OD*
* For Auto Scaling group, select **EcsSpotWorkshop-ASG-OD**
* For Managed Scaling, leave with default selection of *Enabled*
* For Target capacity %, enter *100*
* For Managed termination protection, leave with default selection of *Enabled*
-* Click on the *Create* *on the right bottom
+* Click on *Create* on the bottom right
-Here is the description of few important configuration when creating a capacity provider
+![Capacity Provider on OD ASG](/images/ecs-spot-capacity-providers/CP_OD.png)
+Some explanations about what you are building:
-* *Managed Scaling*: When managed scaling is enabled, Amazon ECS manages the scale-in and scale-out actions of the Auto Scaling group through the use of AWS Auto Scaling scaling plans. When managed scaling is disabled, you manage your Auto Scaling groups yourself.
+* *Managed Scaling*: When managed scaling is enabled, Amazon ECS manages the scale-in and scale-out actions of the Auto Scaling group through the use of AWS Auto Scaling scaling plans. When managed scaling is disabled, you manage the scaling aspect of your Auto Scaling groups yourself.
* *Managed termination protection*: When managed termination protection is enabled, Amazon ECS prevents Amazon EC2 instances that contain tasks and that are in an Auto Scaling group from being terminated during a scale-in action. Managed termination protection can only be enabled if the Auto Scaling group also has instance protection from scale in enabled
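+
+If you prefer the CLI over the console, an equivalent command could look like the sketch below (substitute the ARN of the EcsSpotWorkshop-ASG-OD Auto Scaling group; the placeholder is not a real value):
+
+```bash
+# Create the CP-OD capacity provider with managed scaling and managed termination protection
+aws ecs create-capacity-provider \
+  --name CP-OD \
+  --auto-scaling-group-provider "autoScalingGroupArn=<EcsSpotWorkshop-ASG-OD-arn>,managedScaling={status=ENABLED,targetCapacity=100},managedTerminationProtection=ENABLED"
+```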
-![Capacity Provider on OD ASG](/images/ecs-spot-capacity-providers/CP_OD.png)
-
-Refresh the tab *Capacity Providers* and you will see the CP-OD is created and attachd to the cluster.
+Refresh the *Capacity Providers* tab and you will see that CP-OD is created and attached to the cluster.
![Capacity Provider on OD ASG](/images/ecs-spot-capacity-providers/CP_OD1.png)
diff --git a/content/ecs-spot-capacity-providers/module-1/cp_with_ec2spot.md b/content/ecs-spot-capacity-providers/module-1/cp_with_ec2spot.md
index 904c3e67..4318be62 100644
--- a/content/ecs-spot-capacity-providers/module-1/cp_with_ec2spot.md
+++ b/content/ecs-spot-capacity-providers/module-1/cp_with_ec2spot.md
@@ -8,36 +8,32 @@ To create the CP, follow these steps:
* Open the [ECS console] (https://console.aws.amazon.com/ecs/home) in the region where you are looking to launch your cluster.
* Click *Clusters*
* Click [EcsSpotWorkshop] (https://console.aws.amazon.com/ecs/home?region=us-east-1#/clusters/EcsSpotWorkshop)
-* Click on the tab *Capacity Providers*
-* Click on the *Create*
+* Click the tab *Capacity Providers*
+* Click *Create*
* For Capacity provider name, enter *CP-SPOT*
* For Auto Scaling group, select *EcsSpotWorkshop-ASG-SPOT*
* For Managed Scaling, leave with default selection of *Enabled*
* For Target capacity %, enter *100*
* For Managed termination protection, leave with default selection of *Enabled*
-* Click on the *Create* on the right bottom
-
+* Click on *Create* on the bottom right
+
![Capacity Provider on Spot ASG](/images/ecs-spot-capacity-providers/CP_SPOT.png)
-Refresh the tab “*Capacity Providers” *and you will see the CP-SPOT is created and attachd to the cluster.
+Refresh the *Capacity Providers* tab and you will see the CP-SPOT is created and attached to the cluster.
![Capacity Provider on Spot ASG](/images/ecs-spot-capacity-providers/CP_SPOT1.png)
-Now you will see that the CP creates a target tracking policy on the EcsSpotWorkshop-ASG-SPOT. Go to the AWS EC2 Console and select this scaling policies tab on this ASG.
+The CP creates a target tracking policy on the EcsSpotWorkshop-ASG-SPOT. Go to the EC2 Management Console and select the scaling policies tab on this ASG.
![Spot ASG](/images/ecs-spot-capacity-providers/ASG2.png)
-The ECS cluster should now contain 4 Capacity Providers: 2 from Auto Scaling groups (1 for OD and 1 for Spot), 1 from FARGATE and 1 from FARGATE_SPOT
-
-
-
### Update ECS Cluster with Auto Scaling Capacity Providers
So far we created two Auto Scaling Capacity Providers. Now let's update our existing ECS Cluster with these Capacity Providers.
-Run the following command to create the ECS Cluster
+Run the following command to associate the Capacity Providers with the ECS cluster:
-```
+```bash
aws ecs put-cluster-capacity-providers \
 --cluster EcsSpotWorkshop \
 --capacity-providers FARGATE FARGATE_SPOT CP-OD CP-SPOT \
@@ -45,12 +41,12 @@ aws ecs put-cluster-capacity-providers \
--region $AWS_REGION
```
-The ECS cluster should now contain 4 Capacity Providers: 2 from Auto Scaling groups (1 for OD and 1 for Spot), 1 from FARGATE and 1 from FARGATE_SPOT
+The ECS cluster should now contain 4 Capacity Providers: 2 from Auto Scaling groups (1 for On-Demand and 1 for Spot), 1 from FARGATE and 1 from FARGATE_SPOT. The FARGATE and FARGATE_SPOT Capacity Providers are provided by ECS out of the box and only need to be associated with the cluster.
-Also note the default capacity provider strategy used in the above command. It sets base=1 and weight=1 for On-demand Auto Scaling Group Capacity Provider. This will override the previous default capacity strategy which is set to FARGATE capacity provider.
+Also note the default capacity provider strategy used in the above command. It sets base=1 and weight=1 for the On-demand Auto Scaling group Capacity Provider. This will override the previous default capacity provider strategy which is set to FARGATE capacity provider.
Click on the **Update Cluster** on the top right corner to see default Capacity Provider Strategy. As shown base=1 is set for OD Capacity Provider.
-That means if there is no capacity provider strategy specified during the deploying Tasks/Services, ECS by default chooses the OD Capacity Provider to launch them.
+That means if there is no capacity provider strategy specified during the deployment of ECS Tasks or Services, ECS by default chooses the OD Capacity Provider to launch them.
Click on Cancel as we don't want to change the default strategy for now.
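+
+Before moving on, you can optionally double-check which Capacity Providers are now associated with the cluster and what the default strategy is (assuming the cluster is named EcsSpotWorkshop, as in the rest of this workshop):
+
+```bash
+# List the capacity providers attached to the cluster and the default strategy
+aws ecs describe-clusters \
+  --clusters EcsSpotWorkshop \
+  --query 'clusters[0].{capacityProviders:capacityProviders,defaultStrategy:defaultCapacityProviderStrategy}'
+```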
diff --git a/content/ecs-spot-capacity-providers/module-1/create_ec2_launch_template.md b/content/ecs-spot-capacity-providers/module-1/create_ec2_launch_template.md
index bdd08bf3..76571579 100644
--- a/content/ecs-spot-capacity-providers/module-1/create_ec2_launch_template.md
+++ b/content/ecs-spot-capacity-providers/module-1/create_ec2_launch_template.md
@@ -5,9 +5,9 @@ weight: 5
- EC2 Launch Templates reduce the number of steps required to create an instance by capturing all launch parameters within one resource.
-- You can create a launch template that contains the configuration information to launch an instance. Launch templates enable you to store launch parameters so that you do not have to specify them every time you launch an instance. For example, a launch template can contain the ECS optimized AMI, instance type, User data section, Instance Profile / Role and network settings that you typically use to launch instances. When you launch an instance using the Amazon EC2 console, an AWS SDK, or a command line tool, you can specify the launch template to use. Instance user data required to bootstrap the instance into the ECS Cluster.
+- For example, a launch template can contain the ECS optimized AMI, instance type, User data section, Instance Profile / Role, and network settings that you typically use to launch instances. When you launch an instance using the Amazon EC2 console, an AWS SDK, a command line tool or an EC2 Auto Scaling group (like we will use in this workshop), you can specify the launch template to use.
-- The Ec2 Launch Template is already created using the CFN stack. Take a moment to see the configuration. Please note that Launch templates are mandatory to use Mixed Instance Group (i.e. using on-demand and spot purchase options) in an Autoscaling group.
+- The EC2 Launch Template has already been created by the CloudFormation stack - you can use the AWS Management Console to review its configuration. Please note that Launch Templates are mandatory in order to use EC2 Auto Scaling groups with a mixed instances policy (which allows mixing On-Demand and Spot Instances in an Auto Scaling group and diversifying the instance type selection).
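+
+The exact user data is defined in the CloudFormation stack, but conceptually the part that bootstraps an instance into the ECS cluster boils down to a couple of lines like the sketch below (names taken from this workshop; check the Launch Template in the console for the actual content):
+
+```bash
+#!/bin/bash
+# Register the instance with the workshop's ECS cluster
+echo "ECS_CLUSTER=EcsSpotWorkshop" >> /etc/ecs/ecs.config
+# Drain the instance automatically when a Spot interruption notice is received
+echo "ECS_ENABLE_SPOT_INSTANCE_DRAINING=true" >> /etc/ecs/ecs.config
+```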
![Launch Template](/images/ecs-spot-capacity-providers/c9_6.png)
diff --git a/content/ecs-spot-capacity-providers/module-1/modify_default_cps.md b/content/ecs-spot-capacity-providers/module-1/modify_default_cps.md
index a78ed414..dd27d7bb 100644
--- a/content/ecs-spot-capacity-providers/module-1/modify_default_cps.md
+++ b/content/ecs-spot-capacity-providers/module-1/modify_default_cps.md
@@ -1,22 +1,22 @@
---
-title: "Modify the default capacity provider strategy (CPS)"
+title: "Modify the default capacity provider strategy"
weight: 25
---
To modify the CP, follow these steps:
-* Click on the tab *Capacity Providers*
-* Click on the *Update Cluster* on the top right
-* For Capacity provider name, enter *CP-SPOT*
+* Click on the *Capacity Providers* tab
+* Click on the *Update Cluster* option on the top right
+* For Capacity Provider name, enter *CP-SPOT*
* Click on *Add another provider*
* Click on *Add another provider* one more time
-* For Provider 1, select *CP-OD*, set base value to *2* and leave weight to default value of *1*
-* For Provider 2, select *CP-SPOT*, leave base to default value of *0* and set weight to *3*
+* For Provider 1, select *CP-OD*, set base value to *2* and leave weight to default value of *1*
+* For Provider 2, select *CP-SPOT*, leave base to default value of *0* and set weight to *3*
* Click on *Update* on bottom right
![Capacity Provider Strategy](/images/ecs-spot-capacity-providers/CPS.png)
-Also note the default capacity provider strategy used in the above command. It sets base=2 and weight=1 for On-demand ASG CP and weight of 3 for CP-SPOT. That means, ECS will first place 2 tasks (since base=2) on CP-OD and splits the remaining tasks between CP-OD and CP-SOT in 1:3 ratio, which means for every 1 task on CP-OD, 3 will be placed on CP-SPOT.
+Also note the default capacity provider strategy configured above. It sets base=2 and weight=1 for the On-Demand ASG Capacity Provider (CP-OD) and a weight of 3 for CP-SPOT. That means ECS will first place 2 tasks (since base=2) on CP-OD and split the remaining tasks between CP-OD and CP-SPOT in a 1:3 ratio, which means that for every 1 task placed on CP-OD, 3 will be placed on CP-SPOT.
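+
+The same default strategy can also be set from the CLI. A sketch of the equivalent command (assuming all four Capacity Providers stay associated with the cluster) could be:
+
+```bash
+# Set CP-OD (base=2, weight=1) and CP-SPOT (weight=3) as the cluster's default strategy
+aws ecs put-cluster-capacity-providers \
+  --cluster EcsSpotWorkshop \
+  --capacity-providers CP-OD CP-SPOT FARGATE FARGATE_SPOT \
+  --default-capacity-provider-strategy \
+      capacityProvider=CP-OD,base=2,weight=1 \
+      capacityProvider=CP-SPOT,weight=3
+```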
-You can override this default CPS and specify a different custom strategy for each service independently.
+You can override this default Capacity Provider strategy and specify a different strategy for each service independently.
diff --git a/content/ecs-spot-capacity-providers/module-1/service.md b/content/ecs-spot-capacity-providers/module-1/service.md
index 619ebd23..3528d86c 100644
--- a/content/ecs-spot-capacity-providers/module-1/service.md
+++ b/content/ecs-spot-capacity-providers/module-1/service.md
@@ -1,19 +1,17 @@
---
-title: "Create ECS Service"
+title: "Create an ECS Service"
weight: 55
---
+In this section, we will create an ECS Service which distributes tasks on CP-OD and CP-SPOT with a custom strategy: **CP-OD base=2 & weight=1** and **CP-SPOT weight=3**. This Capacity Provider Strategy is driven by the following application requirements:
-In this section, we will create an ECS Service which distributes tasks on CP-OD and CP-SPOT with a custom strategy with CP-OD base=2 weight=1 and CP-SPOT weight=3. This Capacity Provider Strategy results from the following application requirements
-
-* There should be always at least 2 tasks running all the time for the regular traffic. The base=2 configuration satisfies this requirement.
-* Any spiky or elastic traffic should be handled by tasks deployed on on-demand and spot instances in the ratio 1:3
-
+* There should be at least a baseline of 2 tasks running for normal traffic - the **base=2** configuration satisfies this requirement.
+* Any spiky traffic should be handled by tasks deployed on On-Demand and Spot Instances in the ratio of 1:3
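+
+For reference, a rough CLI equivalent of the service you are about to create through the console could look like this sketch (it explicitly passes the same strategy that was set as the cluster default, and assumes a desired count of 10 as used later in this module):
+
+```bash
+# Create the service with an explicit capacity provider strategy
+aws ecs create-service \
+  --cluster EcsSpotWorkshop \
+  --service-name ec2-service-split \
+  --task-definition ec2-task:1 \
+  --desired-count 10 \
+  --capacity-provider-strategy \
+      capacityProvider=CP-OD,base=2,weight=1 \
+      capacityProvider=CP-SPOT,weight=3
+```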
To create the service, follow these steps:
-* Click on the tab *Services*
-* Click on the *Create*
+* Click on the *Services* tab
+* Click on *Create*
* For Capacity provider strategy, leave it to default value *Cluster default Strategy*
* For Task Definition Family, select *ec2-task*
* For Task Definition Revision, select *1*
diff --git a/content/ecs-spot-capacity-providers/module-1/service_view.md b/content/ecs-spot-capacity-providers/module-1/service_view.md
index 36331cea..b21cfa7c 100644
--- a/content/ecs-spot-capacity-providers/module-1/service_view.md
+++ b/content/ecs-spot-capacity-providers/module-1/service_view.md
@@ -1,24 +1,22 @@
---
-title: "Explore ECS Service"
+title: "Managed Scaling in action"
weight: 60
---
-Click on this Service in the [AWS ECS Console](https://console.aws.amazon.com/ecs/home?#/clusters) and it looks like below
+Click the service name in the [ECS Console](https://console.aws.amazon.com/ecs/home?#/clusters)
![Capacity Provider](/images/ecs-spot-capacity-providers/CP4.png)
-What did you notice?
-
-Look at the pending task count of 10, which will cause the CPR value to change as ECS calculates new value for M (from initial zero) to accommodate these pending tasks. Let’s looks at the CWT dashboard for the new CPR values for both CPs.
+The pending task count is 10, which will cause the Capacity Provider Reservation value to change as ECS calculates a new value for M (from zero initially) to accommodate these pending tasks. Let's look at the CloudWatch Dashboard for the new Capacity Provider Reservation values for both Capacity Providers.
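+
+If you prefer the CLI over the dashboard, you can pull the same metric with a command along these lines (a sketch, assuming the AWS/ECS/ManagedScaling namespace with the ClusterName and CapacityProviderName dimensions; adjust the time window and date syntax for your shell if needed):
+
+```bash
+# Fetch recent CapacityProviderReservation datapoints for CP-SPOT
+aws cloudwatch get-metric-statistics \
+  --namespace AWS/ECS/ManagedScaling \
+  --metric-name CapacityProviderReservation \
+  --dimensions Name=CapacityProviderName,Value=CP-SPOT Name=ClusterName,Value=EcsSpotWorkshop \
+  --statistics Average \
+  --period 60 \
+  --start-time $(date -u -d '15 minutes ago' +%Y-%m-%dT%H:%M:%SZ) \
+  --end-time $(date -u +%Y-%m-%dT%H:%M:%SZ)
+```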
![Capacity Provider Reservation](/images/ecs-spot-capacity-providers/cp5.png)
-So CPR is 200 which means twice the earlier value of 100. This indicates the new value of M is higher than N by a factor 2X which indicates the scaling (out) factor. After 1 min, let’s see if the ASG target tracking CWT Alarm is fired. Go to the CWT consile and click on the Alarms and you should see something like below.
+The Capacity Provider Reservation metric value is 200. This means the new value of M is twice the value of N, i.e. a 2x scale-out factor. After 1 min, let’s see if the ASG target tracking CloudWatch Alarm is triggered. Go to the CloudWatch console and click on the Alarms section.
![Cloud Watch Alarms](/images/ecs-spot-capacity-providers/cp6.png)
-These alarms will cause the scale out action on both ASGs. Go to EC2 console, select any of the two ASGs and click on the Activity History. You will see two instances are launched.
+These alarms will cause the scale-out action on both ASGs. Go to the EC2 console, select either of the two ASGs and click on the Activity tab. You will see that two instances have been launched.
![ASG Scale Out](/images/ecs-spot-capacity-providers/cp10.png)
-So we see that CP Managed Scaling did its job of responding to the application service intent and scale out 2 instancs from zero capacity. Then what about task distributiuon on these CPs? Well, as you can recall, that is dictated by the CPS.
+So we see that Capacity Provider Managed Scaling did its job of responding to the application service intent, and scaled out by launching 2 instances. Move to the next step in the workshop to examine how the tasks were distributed across the On-Demand and Spot Capacity Providers.
\ No newline at end of file
diff --git a/content/ecs-spot-capacity-providers/module-1/spot_inturruption_handling.md b/content/ecs-spot-capacity-providers/module-1/spot_inturruption_handling.md
index be20ea10..7c9c48f1 100644
--- a/content/ecs-spot-capacity-providers/module-1/spot_inturruption_handling.md
+++ b/content/ecs-spot-capacity-providers/module-1/spot_inturruption_handling.md
@@ -5,41 +5,39 @@ weight: 80
Amazon EC2 terminates your Spot Instance when it needs the capacity back. Amazon EC2 provides a Spot Instance interruption notice, which gives the instance a two-minute warning before it is interrupted.
-When Amazon EC2 is going to interrupt your Spot Instance, the interruption notification will be available in two ways
+When Amazon EC2 is going to interrupt your Spot Instance, the interruption notification will be available in two ways:
1. ***Amazon EventBridge Events:*** EC2 service emits an event two minutes prior to the actual interruption. This event can be detected by Amazon CloudWatch Events.
-1. ***Instance-action in the MetaData service (IMDS):*** If your Spot Instance is marked to be stopped or terminated by the Spot service, the instance-action item is present in your instance metadata.
+1. ***EC2 Instance Metadata service (IMDS):*** If your Spot Instance is marked for termination by EC2, the instance-action item is present in your instance metadata.
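+
+As an illustration of the first option, an EventBridge rule that matches the two-minute interruption warning can be created with a command like the sketch below (the rule name is just an example, and you would still attach a target such as an SNS topic or a Lambda function to it):
+
+```bash
+# Match EC2 Spot interruption warnings emitted two minutes before reclaim
+aws events put-rule \
+  --name EcsSpotWorkshop-SpotInterruptionWarning \
+  --event-pattern '{"source":["aws.ec2"],"detail-type":["EC2 Spot Instance Interruption Warning"]}'
+```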
-look at the user data section in the Launch template configuration.
-
-```
+In the Launch Template configuration, we added:
+```plaintext
echo "ECS_ENABLE_SPOT_INSTANCE_DRAINING=true" >> /etc/ecs/ecs.config
```
+When Amazon ECS Spot Instance draining is enabled on the instance, ECS receives the Spot Instance interruption notice and places the instance in DRAINING status. When a container instance is set to DRAINING, Amazon ECS prevents new tasks from being scheduled for placement on the container instance. [Click here](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/container-instance-spot.html) to learn more.
-The above configuration enables automatic draining of spot instances at the time of spot interruption notice. The ECS container agent runnining on the ECS container instances handles the interruption using the Instance Metadata service.
+The web application (app.py) we used to build the Docker image shows two ways to handle the EC2 Spot interruption within a Docker container. This allows you to perform actions such as preventing the processing of new work, checkpointing the progress of a batch job, or gracefully exiting the application to complete tasks such as ensuring database connections are properly closed.
-If the application can also handle the interruption to implement any checkpointing or saving the data. The web application (app.py) we used to buld docker image in the Module-2 shows two ways to handle the spot interruption within a docker container.
+In the first method, it checks the instance metadata service for a Spot interruption and displays a message on the web page notifying the users (this is, of course, just a demonstration and not meant for real-life scenarios).
-In the first method, it check the instance metadata service for spot interruption and display a message to web page notifying the users.
+{{% notice warning %}}
+In a production environment, you should not provide access from the ECS tasks to the IMDS. This is done in this workshop for simplification purposes.
+{{% /notice %}}
-Note: The ECS tasks should not be accessing EC2 metadata. For security reasons, this should be blocked this in a Prod environment.
-```
+```python
URL = "http://169.254.169.254/latest/meta-data/spot/termination-time"
SpotInt = requests.get(URL)
if SpotInt.status_code == 200:
-    response += "This Spot Instance Got Interruption and Termination Date is {}".format(SpotInt.text)
+    response += "This Spot Instance will be terminated at: {}".format(SpotInt.text)
```
-In the second method, it listens to the **SIGTERM** signal. The ECS container agent calls StopTask API to stop all the tasks running on the Spot Instance.
+In the second method, it listens to the **SIGTERM** signal. The ECS container agent calls the StopTask API to stop all the tasks running on the Spot Instance.
When StopTask is called on a task, the equivalent of docker stop is issued to the containers running in the task. This results in a **SIGTERM** value and a default 30-second timeout, after which the SIGKILL value is sent and the containers are forcibly stopped. If the container handles the **SIGTERM** value gracefully and exits within 30 seconds from receiving it, no SIGKILL value is sent.
-
-The application can listen to the **SIGTERM** signal and handle the interruption gracefully.
-
-```
+```python
class Ec2SpotInterruptionHandler:
signals = {
signal.SIGINT: 'SIGINT',
@@ -71,5 +69,4 @@ is using. Specifying a stopTimeout value gives you time between the moment the t
• The **SIGTERM** signal must be received from within the container to perform any cleanup actions.
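+
+To illustrate where the stopTimeout setting lives, here is a minimal, hypothetical task definition registration (it is not the task definition used in this workshop; the family, container name and image are placeholders):
+
+```bash
+# stopTimeout gives the container up to 120 seconds to exit after SIGTERM before SIGKILL is sent
+aws ecs register-task-definition \
+  --family graceful-shutdown-demo \
+  --requires-compatibilities EC2 \
+  --container-definitions '[{"name": "webapp", "image": "<your-image-uri>", "memory": 256, "essential": true, "stopTimeout": 120}]'
+```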
-
-***Congratulations !!!*** you have successfully completed the workshop module and learnt how to create ASG CPs and schedule ECS services across Spot and On-demand CPs. You may proceed to optional module using Fargate Spot Capacity Providers.
\ No newline at end of file
+***Congratulations !!!*** You have learnt how to create ASG Capacity Providers and schedule ECS services across Spot and On-Demand Capacity Providers. You may proceed to the next (optional) section to utilize the Fargate and Fargate Spot Capacity Providers.
diff --git a/content/ecs-spot-capacity-providers/module-1/visualizer.md b/content/ecs-spot-capacity-providers/module-1/visualizer.md
index c8e95200..06dcb3e9 100644
--- a/content/ecs-spot-capacity-providers/module-1/visualizer.md
+++ b/content/ecs-spot-capacity-providers/module-1/visualizer.md
@@ -4,11 +4,11 @@ weight: 35
---
-The [C3vis] (https://github.com/ExpediaDotCom/c3vis) is a useful to show the visual representation of the tasks placements across instances in an ECS Cluster.
+The [C3vis](https://github.com/ExpediaDotCom/c3vis) open-source tool is useful to visualize task placement across instances in an ECS cluster.
-Run the following commands on a new terminal in the Cloud 9 environment.
+Run the following commands on a new terminal in the Cloud9 environment.
-```
+```bash
 git clone https://github.com/ExpediaDotCom/c3vis.git
cd c3vis
docker build -t c3vis .
@@ -28,6 +28,6 @@ Open the application in a new window as follows
![c3vis](/images/ecs-spot-capacity-providers/c3vis3.png)
-The initial screen looks below since there are no tasks or instances running in the cluster.
+The initial screen will look like the one below, since there are no tasks or instances running in the cluster yet.
![c3vis](/images/ecs-spot-capacity-providers/c3vis2.png)
diff --git a/content/ecs-spot-capacity-providers/module-1/webapp.md b/content/ecs-spot-capacity-providers/module-1/webapp.md
index 285f8f21..9b0e4f2c 100644
--- a/content/ecs-spot-capacity-providers/module-1/webapp.md
+++ b/content/ecs-spot-capacity-providers/module-1/webapp.md
@@ -3,9 +3,9 @@ title: "Building the webapp container"
weight: 45
---
-Run the below command to build the container
+Run the command below to build the container image, which we will run inside an ECS task in our cluster.
-```
+```bash
cd webapp
docker build --no-cache -t ecs-spot-workshop/webapp .
@@ -17,19 +17,21 @@ docker push $ECR_REPO_URI:latest
Copy the template file *templates/ec2-task.json* to the current directory and substitute the template with actual values.
-```
+```bash
cd ..
cp -Rfp templates/ec2-task.json .
sed -i -e "s#DOCKER_IMAGE_URI#$ECR_REPO_URI:latest#g" ec2-task.json
```
+## Creating a task definition
+
 In this section, we will create a task definition for the tasks to be launched on the Auto Scaling Capacity Providers.
-### Run the below command to create the task definition
+Run the below command to create the task definition:
-```
+```bash
aws ecs register-task-definition --cli-input-json file://ec2-task.json
```
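+
+Optionally, you can confirm the registration from the CLI before looking at the console, for example:
+
+```bash
+# Show the family, revision and status of the task definition we just registered
+aws ecs describe-task-definition \
+  --task-definition ec2-task \
+  --query '{family:taskDefinition.family,revision:taskDefinition.revision,status:taskDefinition.status}'
+```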
-### The task will look like this in console
+The task definition will look like this in the console:
![Task](/images/ecs-spot-capacity-providers/task1.png)
\ No newline at end of file
diff --git a/content/ecs-spot-capacity-providers/module-2/_index.md b/content/ecs-spot-capacity-providers/module-2/_index.md
index b8d67195..b64e8d1f 100644
--- a/content/ecs-spot-capacity-providers/module-2/_index.md
+++ b/content/ecs-spot-capacity-providers/module-2/_index.md
@@ -1,76 +1,42 @@
---
-title: "Module-2: Spot Interruption Handling"
+title: "Saving costs using AWS Fargate Spot Capacity Providers (Optional)"
weight: 40
---
-Inturruption Handling On EC2 Spot Instances
+AWS Fargate Capacity Providers
---
-Amazon EC2 terminates your Spot Instance when it needs the capacity back. Amazon EC2 provides a Spot Instance interruption notice, which gives the instance a two-minute warning before it is interrupted.
+Amazon ECS cluster capacity providers enable you to use both Fargate and Fargate Spot capacity with your Amazon ECS tasks. With Fargate Spot you can run interruption-tolerant Amazon ECS tasks at a discounted rate compared to the Fargate price. Fargate Spot runs tasks on spare compute capacity. When AWS needs the capacity back, your tasks will be interrupted with a two-minute warning.
-When Amazon EC2 is going to interrupt your Spot Instance, the interruption notification will be available in two ways
-
-1. ***Amazon EventBridge Events:*** EC2 service emits an event two minutes prior to the actual interruption. This event can be detected by Amazon CloudWatch Events.
+Creating a New ECS Cluster That Uses Fargate Capacity Providers
+---
-1. ***Instance-action in the MetaData service (IMDS):*** If your Spot Instance is marked to be stopped or terminated by the Spot service, the instance-action item is present in your instance metadata.
+When a new Amazon ECS cluster is created, you specify one or more capacity providers to associate with the cluster. The associated capacity providers determine the infrastructure your tasks run on.
-look at the user data section in the Launch template configuration.
+Run the following command to create a new cluster and associate both the Fargate and Fargate Spot capacity providers with it.
```
-echo "ECS_ENABLE_SPOT_INSTANCE_DRAINING=true" >> /etc/ecs/ecs.config
+aws ecs create-cluster \
+--cluster-name EcsSpotWorkshop \
+--capacity-providers FARGATE FARGATE_SPOT \
+--region $AWS_REGION \
+--default-capacity-provider-strategy capacityProvider=FARGATE,base=1,weight=1
```
-
-The above configuration enables automatic draining of spot instances at the time of spot interruption notice. The ECS container agent runnining on the ECS container instances handles the interruption using the Instance Metadata service.
-
-If the application can also handle the interruption to implement any checkpointing or saving the data. The web application (app.py) we used to buld docker image in the Module-2 shows two ways to handle the spot interruption within a docker container.
-
-In the first method, it check the instance metadata service for spot interruption and display a message to web page notifying the users.
-
-Note: The ECS tasks should not be accessing EC2 metadata. For security reasons, this should be blocked this in a Prod environment.
+If the above command fails with the error below, run the command again; it should succeed and create the cluster on the second attempt.
```
-URL = "http://169.254.169.254/latest/meta-data/spot/termination-time"
-SpotInt = requests.get(URL)
-if SpotInt.status_code == 200:
- response += "
This Spot Instance Got Interruption and Termination Date is {}
".format(SpotInt.text)
+“An error occurred (InvalidParameterException) when calling the CreateCluster operation: Unable to assume the service linked role. Please verify that the ECS service linked role exists.“
```
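+
+This error usually just means that the ECS service-linked role did not exist in the account yet. Instead of simply retrying, you can also create the role explicitly and then re-run the create-cluster command:
+
+```bash
+# Create the ECS service-linked role (only needed once per account)
+aws iam create-service-linked-role --aws-service-name ecs.amazonaws.com
+```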
-In the second method, it listens to the **SIGTERM** signal. The ECS container agent calls StopTask API to stop all the tasks running on the Spot Instance.
-
-When StopTask is called on a task, the equivalent of docker stop is issued to the containers running in the task. This results in a **SIGTERM** value and a default 30-second timeout, after which the SIGKILL value is sent and the containers are forcibly stopped. If the container handles the **SIGTERM** value gracefully and exits within 30 seconds from receiving it, no SIGKILL value is sent.
-
+The ECS cluster will look like the screenshot below in the AWS Console. Select ECS under **Services** and click on **Clusters** in the left panel.
-The application can listen to the **SIGTERM** signal and handle the interruption gracefully.
-
-```
-class Ec2SpotInterruptionHandler:
- signals = {
- signal.SIGINT: 'SIGINT',
- signal.SIGTERM: 'SIGTERM'
- }
-
-def __init__(self):
- signal.signal(signal.SIGINT, self.exit_gracefully)
- signal.signal(signal.SIGTERM, self.exit_gracefully)
-
-def exit_gracefully(self, signum, frame):
- print("\nReceived {} signal".format(self.signals[signum]))
- if self.signals[signum] == 'SIGTERM':
- print("Looks like there is a Spot Interruption. Let's wrap up the processing to avoid forceful killing of the applucation in next 30 sec ...")
-```
-
-Spot Interruption Handling on ECS Fargate Spot
----
+![ECS Cluster](/images/ecs-spot-capacity-providers/c1.png)
-When tasks using Fargate Spot capacity are stopped due to a Spot interruption, a two-minute warning is sent before a task is stopped. The warning is sent as a task state change event to Amazon EventBridge
-and a SIGTERM signal to the running task. When using Fargate Spot as part of a service, the service
-scheduler will receive the interruption signal and attempt to launch additional tasks on Fargate Spot if
-capacity is available.
+Note that the above ECS cluster create command also specifies a default capacity provider strategy.
-To ensure that your containers exit gracefully before the task stops, the following can be configured:
+The strategy sets FARGATE as the default capacity provider. That means if there is no capacity provider strategy specified during the deployment of Tasks/Services, ECS by default chooses the FARGATE Capacity Provider to launch them.
-• A stopTimeout value of 120 seconds or less can be specified in the container definition that the task
-is using. Specifying a stopTimeout value gives you time between the moment the task state change event is received and the point at which the container is forcefully stopped.
+Click _***Update Cluster***_ on the top right corner to see the default Capacity Provider Strategy. As shown, base=1 is set for the FARGATE Capacity Provider.
-• The **SIGTERM** signal must be received from within the container to perform any cleanup actions.
+![ECS Cluster](/images/ecs-spot-capacity-providers/c2.png)
diff --git a/content/ecs-spot-capacity-providers/module-3/fargate_service.md b/content/ecs-spot-capacity-providers/module-2/fargate_service.md
similarity index 98%
rename from content/ecs-spot-capacity-providers/module-3/fargate_service.md
rename to content/ecs-spot-capacity-providers/module-2/fargate_service.md
index 31a87576..585fc1ed 100644
--- a/content/ecs-spot-capacity-providers/module-3/fargate_service.md
+++ b/content/ecs-spot-capacity-providers/module-2/fargate_service.md
@@ -97,4 +97,4 @@ As you see 3 tasks were placed on FARGATE and 1 is placed on FARGATE_SPOT Capaci
***Optional Exercise:***
 Try changing the Capacity Provider Strategy by assigning different weights to the FARGATE and FARGATE_SPOT Capacity Providers and updating the service; one possible approach is sketched below.
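+
+One way to do this (a sketch; substitute your actual service name and preferred weights) is to update the service from the CLI and force a new deployment so that the new strategy takes effect:
+
+```bash
+# Redeploy the service with a different FARGATE / FARGATE_SPOT split
+aws ecs update-service \
+  --cluster EcsSpotWorkshop \
+  --service <your-fargate-service-name> \
+  --capacity-provider-strategy \
+      capacityProvider=FARGATE,weight=1 \
+      capacityProvider=FARGATE_SPOT,weight=3 \
+  --force-new-deployment
+```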
-***Congratulations !!!*** you have successfully completed Module-3.
+***Congratulations !!!*** You have successfully completed the workshop!
diff --git a/content/ecs-spot-capacity-providers/module-3/fargate_task.md b/content/ecs-spot-capacity-providers/module-2/fargate_task.md
similarity index 100%
rename from content/ecs-spot-capacity-providers/module-3/fargate_task.md
rename to content/ecs-spot-capacity-providers/module-2/fargate_task.md
diff --git a/content/ecs-spot-capacity-providers/module-3/_index.md b/content/ecs-spot-capacity-providers/module-3/_index.md
deleted file mode 100644
index e3fbf823..00000000
--- a/content/ecs-spot-capacity-providers/module-3/_index.md
+++ /dev/null
@@ -1,42 +0,0 @@
----
-title: "Module-3 (Optional): Saving costs using AWS Fargate Spot Capacity Providers"
-weight: 40
----
-
-AWS Fargate Capacity Providers
----
-
-Amazon ECS cluster capacity providers enable you to use both Fargate and Fargate Spot capacity with your Amazon ECS tasks. With Fargate Spot you can run interruption tolerant Amazon ECS tasks at a discounted rate compared to the Fargate price. Fargate Spot runs tasks on spare compute capacity. When AWS needs the capacity back, your tasks will be interrupted with a two-minute warning
-
-Creating a New ECS Cluster That Uses Fargate Capacity Providers
----
-
-When a new Amazon ECS cluster is created, you specify one or more capacity providers to associate with the cluster. The associated capacity providers determine the infrastructure to run your tasks on. Set the following global variables for the names of resources be created in this workshop
-
-Run the following command to create a new cluster and associate both the Fargate and Fargate Spot capacity providers with it.
-
-```
-aws ecs create-cluster \
---cluster-name EcsSpotWorkshop \
---capacity-providers FARGATE FARGATE_SPOT \
---region $AWS_REGION \
---default-capacity-provider-strategy capacityProvider=FARGATE,base=1,weight=1
-```
-If the above command fails with below error, run the command again. It should create the cluster now.
-
-```
-“An error occurred (InvalidParameterException) when calling the CreateCluster operation: Unable to assume the service linked role. Please verify that the ECS service linked role exists.“
-```
-
-The ECS cluster will look like below in the AWS Console. Select ECS in **Services** and click on **Clusters** on left panel
-
-![ECS Cluster](/images/ecs-spot-capacity-providers/c1.png)
-
-Note that above ECS cluster create command also specifies a default capacity provider strategy.
-
-The strategy sets FARGATE as the default capacity provider. That means if there is no capacity provider strategy specified during the deployment of Tasks/Services, ECS by default chooses the FARGATE Capacity Provider to launch them.
-
-Click _***Update Cluster***_ on the top right corner to see default Capacity Provider Strategy. As shown base=1 is set for FARGATE Capacity Provider.
-
-![ECS Cluster](/images/ecs-spot-capacity-providers/c2.png)
-
diff --git a/content/ecs-spot-capacity-providers/modules.md b/content/ecs-spot-capacity-providers/modules.md
deleted file mode 100644
index 30661cd5..00000000
--- a/content/ecs-spot-capacity-providers/modules.md
+++ /dev/null
@@ -1,15 +0,0 @@
----
-title: "Workshop Modules"
-weight: 13
----
-
-This workshop has been broken down into modules.
-
-These modules are designed to be completed in sequence. If you are reading this at a live AWS event, the workshop attendants will give you a high level run down of the labs. Then it’s up to you to follow the instructions below to complete the labs.
-
-
-| Modules | Description |
-| --- | --- |
-| **Module-1** | Saving costs using EC2 spot with Auto Scaling Group Capacity Providers |
-| **Module-2** | Spot Interruption Handling |
-| **Module-3 (Optional)** | Saving costs using AWS Fargate Spot Capacity Providers |
diff --git a/content/ecs-spot-capacity-providers/prerequisites.md b/content/ecs-spot-capacity-providers/prerequisites.md
index b1445579..cbfbc65b 100644
--- a/content/ecs-spot-capacity-providers/prerequisites.md
+++ b/content/ecs-spot-capacity-providers/prerequisites.md
@@ -3,7 +3,7 @@ title: "Prerequisites"
weight: 3
---
-To run through this workshop we expect you to have some familiarity with [Docker](https://en.wikipedia.org/wiki/Docker_(software)), AWS, any container orchestration tools such as [Amazon Elastic Container Service (ECS)](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html), [Amazon Elastic Kubernetes Service (EKS)](https://aws.amazon.com/eks/), [Kubernetes](https://kubernetes.io/). During the workshop you will be using [AWS Cloud9](https://aws.amazon.com/cloud9/) editor and terminal to run [AWS CLI](https://aws.amazon.com/cli/) commands. Use the AWS Region that is specified by the facilitator when running this workshop at AWS hosted event. You may use any AWS Region while running it self-paced mode on your own AWS account.
+To run through this workshop we expect you to have some familiarity with [Docker](https://en.wikipedia.org/wiki/Docker_(software)), AWS, and a container orchestration tool such as [Amazon Elastic Container Service (ECS)](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html), [Amazon Elastic Kubernetes Service (EKS)](https://aws.amazon.com/eks/) or [Kubernetes](https://kubernetes.io/). During the workshop you will be using the [AWS Cloud9](https://aws.amazon.com/cloud9/) IDE to run [AWS CLI](https://aws.amazon.com/cli/) commands. Use the AWS region that is specified by the facilitator when running this workshop at an AWS hosted event. You may use any AWS region when running it in self-paced mode on your own AWS account.
## Conventions:
@@ -17,7 +17,6 @@ The command starts after `$`. Words that are ***UPPER_ITALIC_BOLD*** indicate a
## General requirements and notes:
-1. This workshop is self-paced. The instructions will walk you through achieving the workshop’s goal using the AWS Management Console.
-
-2. While the workshop provides step by step instructions, *please do take a moment to look around and understand what is happening at each step* as this will enhance your learning experience. The workshop is meant as a getting started guide, but you will learn the most by digesting each of the steps and thinking about how they would apply in your own environment and in your own organization. You can even consider experimenting with the steps to challenge yourself.
+1. This workshop is self-paced. The instructions will walk you through achieving the workshop’s learning objective using the AWS Management Console and CLI.
+2. While the workshop provides step by step instructions, *please do take a moment to look around and understand what is happening at each step* as this will enhance your learning experience. The workshop is meant as a getting started guide, but you will learn the most by digesting each of the steps and thinking about how they would apply in your own environment and in your own organization. You can even consider experimenting with the steps to challenge yourself.
\ No newline at end of file
diff --git a/workshops/ecs-spot-capacity-providers/ecs-spot-workshop-cfn.yaml b/workshops/ecs-spot-capacity-providers/ecs-spot-workshop-cfn.yaml
index 5be2f170..2a79be95 100644
--- a/workshops/ecs-spot-capacity-providers/ecs-spot-workshop-cfn.yaml
+++ b/workshops/ecs-spot-capacity-providers/ecs-spot-workshop-cfn.yaml
@@ -1,3312 +1,561 @@