workshop updated and ready to be tested
ruecarlo committed Sep 8, 2022
1 parent 4331633 commit 4828171
Showing 3 changed files with 6 additions and 3 deletions.
4 changes: 3 additions & 1 deletion content/karpenter/050_scaling/fis_experiment.md
Original file line number Diff line number Diff line change
@@ -4,6 +4,8 @@ date: 2022-08-31T13:12:00-07:00
weight: 50
---

During this workshop we have been making extensive use of Spot Instances. A common question from Spot users is how to reproduce the effects of an instance termination, so they can verify whether an application degrades or fails when Spot Instances are terminated and replaced by instances from other pools where capacity is available.

In this section, you're going to create and run an experiment to [trigger the interruption of Amazon EC2 Spot Instances using AWS Fault Injection Simulator (FIS)](https://aws.amazon.com/blogs/compute/implementing-interruption-tolerance-in-amazon-ec2-spot-with-aws-fault-injection-simulator/). When using Spot Instances, you need to be prepared to be interrupted. With FIS, you can test the resiliency of your workload and validate that your application is reacting to the interruption notices that EC2 sends before terminating your instances. You can target individual Spot Instances or a subset of instances in clusters managed by services that tag your instances such as ASG, EC2 Fleet, and EKS.
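The experiment in this section is defined through CloudFormation. As a rough sketch of what such a template contains (the resource names, IAM role, and tag filter below are illustrative placeholders, not necessarily the workshop's actual template), an FIS experiment template that interrupts one tagged Spot Instance looks roughly like:

```yaml
Resources:
  FISSpotInterruption:
    Type: AWS::FIS::ExperimentTemplate
    Properties:
      Description: Interrupt one Spot Instance with a two-minute warning
      RoleArn: !GetAtt FISExperimentRole.Arn       # placeholder IAM role resource
      Tags:
        Name: fis-spot-interruption
      StopConditions:
        - Source: none                             # run to completion
      Targets:
        SpotInstances:
          ResourceType: aws:ec2:spot-instance
          SelectionMode: COUNT(1)                  # pick a single matching instance
          ResourceTags:
            karpenter.sh/provisioner-name: default # placeholder tag filter
      Actions:
        InterruptSpot:
          ActionId: aws:ec2:send-spot-instance-interruptions
          Parameters:
            durationBeforeInterruption: PT2M       # ISO 8601 duration: 2 minutes of notice
          Targets:
            SpotInstances: SpotInstances
```

Once created, such an experiment can be started with `aws fis start-experiment --experiment-template-id <id>`, and the targeted instance receives the standard interruption notice before termination.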

#### What do you need to get started?
@@ -25,7 +27,7 @@ Parameters:
  DurationBeforeInterruption:
    Description: Number of minutes before the interruption
-    Default: 2
+    Default: 3
    Type: Number
Resources:
4 changes: 2 additions & 2 deletions content/karpenter/200_cleanup/_index.md
@@ -27,8 +27,8 @@ kubectl delete -f inflate-spot.yaml
kubectl delete -f inflate.yaml
helm uninstall aws-node-termination-handler --namespace kube-system
helm uninstall karpenter -n karpenter
-helm uninstall kube-ops-view
-kubectl delete -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.6.0/components.yaml
+kubectl delete -k $HOME/environment/kube-ops-view
+kubectl delete -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.6.1/components.yaml
```

## Removing the cluster, managed node groups, and Karpenter prerequisites
1 change: 1 addition & 0 deletions content/karpenter/300_conclusion/conclusion.md
@@ -14,6 +14,7 @@ In the session, we have:
- We learned how Karpenter supports custom AMIs and bootstrapping.
- We learned how Karpenter uses well-known labels and acts on them, procuring capacity that meets criteria such as which architecture and which type of instances (On-Demand or Spot) to use.
- We learned how Karpenter applies best practices for large-scale deployments by diversifying and using allocation strategies for both On-Demand and EC2 Spot Instances. We also learned that applications still have full control and can set node selectors such as `node.kubernetes.io/instance-type: m5.2xlarge` or `topology.kubernetes.io/zone=us-east-1c` to specify explicitly which instance type to use or which AZ an application must be deployed in.
- We learned how deprovisioning works in Karpenter and how to set up the cluster Consolidation option.
- We configured a DaemonSet using **AWS-Node-Termination-Handler** to handle Spot interruptions gracefully. We also learned that in a future version, the integration with the termination controller will proactively handle Spot interruptions and rebalance recommendations.
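As a reminder of how the node selectors above are applied, here is a minimal sketch of a Deployment that pins a workload to a specific instance type and Availability Zone (the deployment name and image are illustrative, not from the workshop):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inflate          # illustrative name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: inflate
  template:
    metadata:
      labels:
        app: inflate
    spec:
      nodeSelector:
        # Karpenter reads these well-known labels when provisioning capacity
        node.kubernetes.io/instance-type: m5.2xlarge
        topology.kubernetes.io/zone: us-east-1c
      containers:
        - name: inflate
          image: public.ecr.aws/eks-distro/kubernetes/pause:3.2
```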

# EC2 Spot Savings
