From 4a47c5e4d22508e6c8e0c46f10fcf492c906e12c Mon Sep 17 00:00:00 2001 From: Vara Bonthu Date: Sun, 18 Apr 2021 18:28:21 +0100 Subject: [PATCH 1/2] Bottlerocket OS nodegroup module added --- README.md | 64 ++++++--- .../eks-with-bottlerocket-nodegroup.tfvars | 133 ++++++++++++++++++ .../bottlerocket_alb_ingress_deployment.yaml | 81 +++++++++++ .../bottlerocket_with_traefik_deployment.yaml | 63 +++++++++ .../eu-west-1/application/dev/base.tfvars | 15 +- modules/launch-templates/main.tf | 17 ++- .../templates/bottlerocket-userdata.sh.tpl | 4 + modules/launch-templates/variables.tf | 17 ++- source/main.tf | 39 +++++ source/variables.tf | 36 +++++ 10 files changed, 440 insertions(+), 29 deletions(-) create mode 100644 examples/eks-with-bottlerocket-nodegroup/eks-with-bottlerocket-nodegroup.tfvars create mode 100644 examples/eks-with-bottlerocket-nodegroup/k8s/bottlerocket_alb_ingress_deployment.yaml create mode 100644 examples/eks-with-bottlerocket-nodegroup/k8s/bottlerocket_with_traefik_deployment.yaml create mode 100644 modules/launch-templates/templates/bottlerocket-userdata.sh.tpl diff --git a/README.md b/README.md index 7103be4a07..736eede21d 100644 --- a/README.md +++ b/README.md @@ -10,30 +10,33 @@ EKS Terraform accelerator module helps you to provision **EKS clusters**, **Mana This folder contains `backend.conf` and `base.tfvars` which are used to create a unique Terraform state for each cluster environment. Terraform backend configuration can be updated in `backend.conf` and cluster common configuration variables in `base.tfvars` -* **source** folder contains main driver file `main.tf` -* **modules** folder contains all the AWS resource modules -* **helm** folder contains all the Helm chart modules -* **examples** folder contains sample template files with `base.tfvars` which can be used to deploy clusters with multiple add-on options +* `source` folder contains main driver file `main.tf` +* `modules` folder contains all the AWS resource modules +* `helm` folder contains all the Helm chart modules +* `examples` folder contains sample template files with `base.tfvars` which can be used to deploy clusters with multiple add-on options # EKS Cluster Deployment Options This module helps you to provision the following EKS resources - 1. VPC, Subnets(Public and Private) and VPC endpoints for fully private EKS Clusters (Optional) - 2. EKS Cluster with multiple networking options - 2.1 Fully private EKS Cluster - 2.2 Public + Private EKS Cluster - 2.3 Public Cluster - 3. AWS Managed Node Groups with on-demand and Spot instances, self-managed node groups and Fargate profiles - 4. AWS Managed node groups with launch templates - 5. AWS SSM agent deployed through launch templates - 6. RBAC for Developers and Administrators with IAM roles - 7. Kubernetes Addons using Helm Charts - 8. Metrics Server - 9. Cluster Autoscaler - 10. AWS LB Ingress Controller - 11. Traefik Ingress Controller - 12. FluentBit to Cloudwatch for Managed Node groups - 13. FluentBit to Cloudwatch for Fargate Containers +1. [VPC and Subnets(Public and Private)](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Subnets.html) +2. [VPC endpoints for fully private EKS Clusters](https://docs.aws.amazon.com/eks/latest/userguide/private-clusters.html) +3. [EKS Cluster with multiple networking options](https://aws.amazon.com/blogs/containers/de-mystifying-cluster-networking-for-amazon-eks-worker-nodes/) + 1. Fully Private EKS Cluster + 2. Public + Private EKS Cluster + 3. Public Cluster +4. 
[Managed Node Groups with On-Demand](https://docs.aws.amazon.com/eks/latest/userguide/managed-node-groups.html) - AWS Managed Node Groups with On-Demand Instances +5. [Managed Node Groups with Spot](https://docs.aws.amazon.com/eks/latest/userguide/managed-node-groups.html) - AWS Managed Node Groups with Spot Instances +6. [Fargate Profiles](https://docs.aws.amazon.com/eks/latest/userguide/fargate-profile.html) +7. [SSM agent](https://aws.amazon.com/blogs/containers/introducing-launch-template-and-custom-ami-support-in-amazon-eks-managed-node-groups/) deployed through launch templates to Managed Node Groups +8. [Bottlerocket OS](https://github.com/bottlerocket-os/bottlerocket) - Managed Node Groups with Bottlerocket OS and Launch Templates +9. [RBAC](https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html) for Developers and Administrators with IAM roles +10. Kubernetes Addons using [Helm Charts](https://helm.sh/docs/topics/charts/) +11. [Metrics Server](https://github.com/kubernetes-sigs/metrics-server) +12. [Cluster Autoscaler](https://github.com/kubernetes/autoscaler) +13. [AWS LB Ingress Controller](https://docs.aws.amazon.com/eks/latest/userguide/alb-ingress.html) +14. [Traefik Ingress Controller](https://doc.traefik.io/traefik/providers/kubernetes-ingress/) +15. [FluentBit to Cloudwatch for Managed Node groups](https://github.com/aws/aws-for-fluent-bit) +16. [FluentBit to Cloudwatch for Fargate Containers](https://aws.amazon.com/blogs/containers/fluent-bit-for-amazon-eks-on-aws-fargate-is-here/) # Helm Charts Modules Helm Chart Module within this framework allows you to deploy kubernetes apps using Terraform helm chart provider with **enabled** conditional parameter in `base.tfvars`. @@ -85,6 +88,27 @@ This modules ships the Fargate Continaer logs to CloudWatch `fargate_fluent_bit_enable = true` +# Bottlerocket OS + +Bottlerocket is an open source operating system specifically designed for running containers. Bottlerocket build system is based on Rust. It's a container host OS and doesn't have additional softwares or package managers other than what is needed for running contianers hence its very light weight and secure. Container optimized operating systems are ideal when you need to run applications in Kubernetes with minimal setup and do not want to worry about security or updates, or want OS support from cloud provider. Container operating systems does updates transactionally. + +Bottlerocket has two contianer runtimes running. Control container **on** by default used for AWS Systems manager and remote API access. Admin container **off** by default for deep debugging and exploration. + +Bottlerocket [Launch templates userdata](modules/launch-templates/templates/bottlerocket-userdata.sh.tpl) uses the TOML format with Key-value pairs. Remote API access API via SSM agent. You can launch trouble shooting continaer via user data `[settings.host-containers.admin] enabled = true`. 
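+
+For reference, a minimal sketch of what the rendered user data could look like is shown below. The endpoint, certificate data and cluster name are illustrative placeholders (in this module they are injected from the launch template variables), and the admin container toggle is optional:
+
+    # TOML user data rendered from bottlerocket-userdata.sh.tpl
+    [settings.kubernetes]
+    api-server = "https://EXAMPLE0123456789ABCDEF.gr7.eu-west-1.eks.amazonaws.com"
+    cluster-certificate = "<base64-encoded cluster CA data>"
+    cluster-name = "aws001-preprod-dev-eks"
+
+    # Optional: enable the admin host container for troubleshooting
+    [settings.host-containers.admin]
+    enabled = true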
+ +### Features +* [Secure](https://github.com/bottlerocket-os/bottlerocket/blob/develop/SECURITY_FEATURES.md) - Opninionated, specialized and highly secured +* **Flexible** - Multi cloud and multi orchestrator +* **Transactional** - Image basesd upgraded and roll backs +* **Isolated** - Seprate Contianer Runtimes + +### Updates +Bottlerocket can be updated automatically via Kubernetes Operator + + $ kubectl apply -f Bottlerocket_k8s.csv.yaml + $ kubectl get ClusterServiceVersion Bottlerocket_k8s | jq.'status' + + # How to Deploy ## Pre-requisites: diff --git a/examples/eks-with-bottlerocket-nodegroup/eks-with-bottlerocket-nodegroup.tfvars b/examples/eks-with-bottlerocket-nodegroup/eks-with-bottlerocket-nodegroup.tfvars new file mode 100644 index 0000000000..07df4dddb3 --- /dev/null +++ b/examples/eks-with-bottlerocket-nodegroup/eks-with-bottlerocket-nodegroup.tfvars @@ -0,0 +1,133 @@ +/* + * Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. + * SPDX-License-Identifier: MIT-0 + * + * Permission is hereby granted, free of charge, to any person obtaining a copy of this + * software and associated documentation files (the "Software"), to deal in the Software + * without restriction, including without limitation the rights to use, copy, modify, + * merge, publish, distribute, sublicense, and/or sell copies of the Software, and to + * permit persons to whom the Software is furnished to do so. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, + * INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A + * PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT + * HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION + * OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE + * SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. + */ + +#---------------------------------------------------------# +# EKS CLUSTER CORE VARIABLES +#---------------------------------------------------------# +#Following fields used in tagging resources and building the name of the cluster +#e.g., eks cluster name will be {tenant}-{environment}-{zone}-{resource} +#---------------------------------------------------------# +org = "aws" # Organization Name. Used to tag resources +tenant = "aws001" # AWS account name or unique id for tenant +environment = "preprod" # Environment area eg., preprod or prod +zone = "dev" # Environment with in one sub_tenant or business unit +terraform_version = "Terraform v0.14.9" +#---------------------------------------------------------# +# VPC and PRIVATE SUBNET DETAILS for EKS Cluster +#---------------------------------------------------------# +#This provides two options Option1 and Option2. You should choose either of one to provide VPC details to the EKS cluster +#Option1: Creates a new VPC, private Subnets and VPC Endpoints by taking the inputs of vpc_cidr_block and private_subnets_cidr. 
VPC Endpoints are S3, SSM , EC2, ECR API, ECR DKR, KMS, CloudWatch Logs, STS, Elastic Load Balancing, Autoscaling +#Option2: Provide an existing vpc_id and private_subnet_ids + +#---------------------------------------------------------# +# OPTION 1 +#---------------------------------------------------------# +create_vpc = true +vpc_cidr_block = "10.1.0.0/18" +private_subnets_cidr = ["10.1.0.0/22", "10.1.4.0/22", "10.1.8.0/22"] +enable_public_subnets = true +public_subnets_cidr = ["10.1.12.0/22", "10.1.16.0/22", "10.1.20.0/22"] + +#---------------------------------------------------------# +# OPTION 2 +#---------------------------------------------------------# +//create_vpc = false +//vpc_id = "xxxxxx" +//private_subnet_ids = ['xxxxxx','xxxxxx','xxxxxx'] + +#---------------------------------------------------------# +# EKS CONTROL PLANE VARIABLES +#---------------------------------------------------------# +kubernetes_version = "1.19" +endpoint_private_access = true +endpoint_public_access = true +enable_irsa = true + +enabled_cluster_log_types = ["api", "audit", "authenticator", "controllerManager", "scheduler"] +cluster_log_retention_period = 7 + +#---------------------------------------------------------# +# MANAGED WORKER NODE INPUT VARIABLES FOR ON DEMAND INSTANCES - Worker Group1 +#---------------------------------------------------------# +on_demand_node_group_name = "mg-m5-on-demand" +on_demand_ami_type = "AL2_x86_64" +on_demand_disk_size = 50 +on_demand_instance_type = ["m5.xlarge"] +on_demand_desired_size = 3 +on_demand_max_size = 3 +on_demand_min_size = 3 + +#---------------------------------------------------------# +# BOTTLEROCKET - Worker Group3 +#---------------------------------------------------------# +# Amazon EKS optimized Bottlerocket AMI ID for a region and Kubernetes version. +bottlerocket_node_group_name = "mg-m5-bottlerocket" +bottlerocket_ami = "ami-0326716ad575410ab" +bottlerocket_disk_size = 50 +bottlerocket_instance_type = ["m5.large"] +bottlerocket_desired_size = 3 +bottlerocket_max_size = 3 +bottlerocket_min_size = 3 +#---------------------------------------------------------# +# MANAGED WORKER NODE INPUT VARIABLES FOR SPOT INSTANCES - Worker Group2 +#---------------------------------------------------------# +spot_node_group_name = "mg-m5-spot" +spot_instance_type = ["m5.large", "m5a.large"] +spot_ami_type = "AL2_x86_64" +spot_desired_size = 3 +spot_max_size = 6 +spot_min_size = 3 + +#---------------------------------------------------------# +# Creates a Fargate profile for default namespace +#---------------------------------------------------------# +fargate_profile_namespace = "default" + +#---------------------------------------------------------# +# ENABLE HELM MODULES +# Please note that you may need to download the docker images for each +# helm module and push it to ECR if you create fully private EKS Clusters with no access to internet to fetch docker images. 
+# README with instructions available in each HELM module under helm/ +#---------------------------------------------------------# +# Enable this if worker Node groups has access to internet to download the docker images + +public_docker_repo = true + +#---------------------------------------------------------# +# ENABLE METRICS SERVER +#---------------------------------------------------------# +metrics_server_enable = true + +#---------------------------------------------------------# +# ENABLE CLUSTER AUTOSCALER +#---------------------------------------------------------# +cluster_autoscaler_enable = true + + +//---------------------------------------------------------// +// ENABLE ALB INGRESS CONTROLLER +//---------------------------------------------------------// +lb_ingress_controller_enable = true + +#---------------------------------------------------------# +# ENABLE AWS_FLUENT-BIT +#---------------------------------------------------------# +aws_for_fluent_bit_enable = true +fargate_fluent_bit_enable = true + +ekslog_retention_in_days = 1 diff --git a/examples/eks-with-bottlerocket-nodegroup/k8s/bottlerocket_alb_ingress_deployment.yaml b/examples/eks-with-bottlerocket-nodegroup/k8s/bottlerocket_alb_ingress_deployment.yaml new file mode 100644 index 0000000000..f2087c79e5 --- /dev/null +++ b/examples/eks-with-bottlerocket-nodegroup/k8s/bottlerocket_alb_ingress_deployment.yaml @@ -0,0 +1,81 @@ +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: bottlerocket-app1-nginx-deployment + labels: + app: bottlerocket-app1-nginx + WorkerType: ON_DEMAND_BOTTLEROCKET +spec: + replicas: 2 + selector: + matchLabels: + app: bottlerocket-app1-nginx + template: + metadata: + labels: + app: bottlerocket-app1-nginx + WorkerType: ON_DEMAND_BOTTLEROCKET + spec: + containers: + - name: app1-nginx + image: stacksimplify/kube-nginxapp1:1.0.0 + # image: 958351136353.dkr.ecr.eu-west-1.amazonaws.com/stacksimplify/kube-nginxapp:1.0.0 + ports: + - containerPort: 80 + nodeSelector: + WorkerType: ON_DEMAND_BOTTLEROCKET +--- +apiVersion: v1 +kind: Service +metadata: + name: bottlerocket-app1-nginx-nodeport-service + labels: + app: bottlerocket-app1-nginx + WorkerType: ON_DEMAND_BOTTLEROCKET + annotations: + #Important Note: Need to add health check path annotations in service level if we are planning to use multiple targets in a load balancer + alb.ingress.kubernetes.io/healthcheck-path: /app1/index.html +spec: + type: NodePort + selector: + app: bottlerocket-app1-nginx + WorkerType: ON_DEMAND_BOTTLEROCKET + ports: + - port: 80 + targetPort: 80 +--- + +apiVersion: networking.k8s.io/v1 +kind: Ingress +metadata: + name: ingress-nginx-path-based + labels: + app: ingress-nginx-path-based + annotations: + # Ingress Core Settings + kubernetes.io/ingress.class: "alb" + alb.ingress.kubernetes.io/scheme: internet-facing + # Health Check Settings + alb.ingress.kubernetes.io/healthcheck-protocol: HTTP + alb.ingress.kubernetes.io/healthcheck-port: traffic-port + #Important Note: Need to add health check path annotations in service level if we are planning to use multiple targets in a load balancer + #alb.ingress.kubernetes.io/healthcheck-path: /usermgmt/health-status + alb.ingress.kubernetes.io/healthcheck-interval-seconds: '15' + alb.ingress.kubernetes.io/healthcheck-timeout-seconds: '5' + alb.ingress.kubernetes.io/success-codes: '200' + alb.ingress.kubernetes.io/healthy-threshold-count: '2' + alb.ingress.kubernetes.io/unhealthy-threshold-count: '2' + # This is required for bottlerocket + 
alb.ingress.kubernetes.io/target-type: ip +spec: + rules: + - http: + paths: + - path: /app1/* + pathType: Prefix + backend: + service: + name: bottlerocket-app1-nginx-nodeport-service + port: + number: 80 \ No newline at end of file diff --git a/examples/eks-with-bottlerocket-nodegroup/k8s/bottlerocket_with_traefik_deployment.yaml b/examples/eks-with-bottlerocket-nodegroup/k8s/bottlerocket_with_traefik_deployment.yaml new file mode 100644 index 0000000000..5a1423a739 --- /dev/null +++ b/examples/eks-with-bottlerocket-nodegroup/k8s/bottlerocket_with_traefik_deployment.yaml @@ -0,0 +1,63 @@ + +# This service can be accessed using NLB DNS e.g., http://>:8000/bottlerocket-greeting +--- +apiVersion: v1 +kind: Service +metadata: + name: bottlerocket-greeting-service + namespace: default +spec: + selector: + app: bottlerocket-greeting-pod + ports: + - name: web + port: 8000 + targetPort: 8080 + #type: NodePort +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: bottlerocket-greeting + namespace: default +spec: + replicas: 3 + selector: + matchLabels: + app: bottlerocket-greeting-pod + template: + metadata: + labels: + app: bottlerocket-greeting-pod + spec: + containers: + - name: bottlerocket-greeting-pod + # NOTE: If you are deploying this to private cluster without Internet access then pull the docker image locally and push it to ECR. refer ECR image location below +# image: 439595162109.dkr.ecr.eu-west-1.amazonaws.com/bottlerocket-greeting:latest + image: pahud/greeting + ports: + - containerPort: 8080 + nodeSelector: + WorkerType: ON_DEMAND_BOTTLEROCKET + +--- +apiVersion: networking.k8s.io/v1 +kind: Ingress +metadata: + name: bottlerocket-greeting-ingress + namespace: default + annotations: + traefik.ingress.kubernetes.io/router.entrypoints: web + traefik.ingress.kubernetes.io/router.pathmatcher: PathPrefix +spec: + rules: + - http: + paths: + - path: "/bottlerocket-greeting" + pathType: Prefix + backend: + service: + name: bottlerocket-greeting-service + port: + number: 8000 + diff --git a/live/preprod/eu-west-1/application/dev/base.tfvars b/live/preprod/eu-west-1/application/dev/base.tfvars index bb54d52165..07df4dddb3 100644 --- a/live/preprod/eu-west-1/application/dev/base.tfvars +++ b/live/preprod/eu-west-1/application/dev/base.tfvars @@ -72,6 +72,17 @@ on_demand_desired_size = 3 on_demand_max_size = 3 on_demand_min_size = 3 +#---------------------------------------------------------# +# BOTTLEROCKET - Worker Group3 +#---------------------------------------------------------# +# Amazon EKS optimized Bottlerocket AMI ID for a region and Kubernetes version. 
+bottlerocket_node_group_name = "mg-m5-bottlerocket" +bottlerocket_ami = "ami-0326716ad575410ab" +bottlerocket_disk_size = 50 +bottlerocket_instance_type = ["m5.large"] +bottlerocket_desired_size = 3 +bottlerocket_max_size = 3 +bottlerocket_min_size = 3 #---------------------------------------------------------# # MANAGED WORKER NODE INPUT VARIABLES FOR SPOT INSTANCES - Worker Group2 #---------------------------------------------------------# @@ -120,7 +131,3 @@ aws_for_fluent_bit_enable = true fargate_fluent_bit_enable = true ekslog_retention_in_days = 1 - - - - diff --git a/modules/launch-templates/main.tf b/modules/launch-templates/main.tf index a04f8fbb67..127b353141 100644 --- a/modules/launch-templates/main.tf +++ b/modules/launch-templates/main.tf @@ -20,6 +20,15 @@ data "template_file" "launch_template_userdata" { template = file("${path.module}/templates/userdata.sh.tpl") } +data "template_file" "launch_template_bottle_rocket_userdata" { + template = file("${path.module}/templates/bottlerocket-userdata.sh.tpl") + vars = { + cluster_endpoint = var.cluster_endpoint + cluster_auth_base64 = var.cluster_auth_base64 + cluster_name = var.cluster_name + } +} + resource "aws_launch_template" "default" { name_prefix = "${var.cluster_name}-${var.node_group_name}" description = "Launch Template for EKS Managed clusters" @@ -37,7 +46,7 @@ resource "aws_launch_template" "default" { ebs_optimized = true - // image_id = var.eks_optimized_ami + image_id = var.self_managed ? var.bottlerocket_ami : "" // instance_type = var.instance_type monitoring { @@ -55,8 +64,10 @@ resource "aws_launch_template" "default" { security_groups = [var.worker_security_group_id] } - user_data = base64encode( - data.template_file.launch_template_userdata.rendered, + user_data = var.self_managed ? base64encode( + data.template_file.launch_template_bottle_rocket_userdata.rendered, + ) : base64encode( + data.template_file.launch_template_userdata.rendered, ) lifecycle { diff --git a/modules/launch-templates/templates/bottlerocket-userdata.sh.tpl b/modules/launch-templates/templates/bottlerocket-userdata.sh.tpl new file mode 100644 index 0000000000..90bae33234 --- /dev/null +++ b/modules/launch-templates/templates/bottlerocket-userdata.sh.tpl @@ -0,0 +1,4 @@ +[settings.kubernetes] +api-server = "${cluster_endpoint}" +cluster-certificate = "${cluster_auth_base64}" +cluster-name = "${cluster_name}" diff --git a/modules/launch-templates/variables.tf b/modules/launch-templates/variables.tf index 1a1a59d5b2..e79a701f8c 100644 --- a/modules/launch-templates/variables.tf +++ b/modules/launch-templates/variables.tf @@ -16,11 +16,24 @@ * SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
*/ -variable "cluster_name" {} +variable "cluster_auth_base64" { +} +variable "cluster_endpoint" { +} +variable "cluster_name" { +} variable "node_group_name" {} //variable "instance_type" {} variable "volume_size" { default = "50" } variable "tags" {} -variable "worker_security_group_id" {} \ No newline at end of file +variable "worker_security_group_id" {} +variable "bottlerocket_ami" { + type = string + default = "ami-0326716ad575410ab" + description = "/aws/service/bottlerocket/aws-k8s-1.19/x86_64/latest/image_id" +} +variable "self_managed" { + default = false +} \ No newline at end of file diff --git a/source/main.tf b/source/main.tf index 252961da12..8a23235f87 100644 --- a/source/main.tf +++ b/source/main.tf @@ -240,6 +240,28 @@ module "eks" { ExtraTag = var.on_demand_node_group_name Name = "${module.eks-label.id}-${var.on_demand_node_group_name}" } + }, + mg-m5-bottlerocket = { + desired_capacity = var.bottlerocket_desired_size + max_capacity = var.bottlerocket_max_size + min_capacity = var.bottlerocket_min_size + subnets = var.create_vpc == false ? var.private_subnet_ids : module.vpc.private_subnets + launch_template_id = module.launch-templates-bottlerocket.launch_template_id + launch_template_version = module.launch-templates-bottlerocket.launch_template_latest_version + instance_types = var.bottlerocket_instance_type + capacity_type = "ON_DEMAND" + // ami_type = var.on_demand_ami_type + + k8s_labels = { + Environment = var.environment + Zone = var.zone + OS = "bottlerocket" + WorkerType = "ON_DEMAND_BOTTLEROCKET" + } + additional_tags = { + ExtraTag = var.bottlerocket_node_group_name + Name = "${module.eks-label.id}-${var.bottlerocket_node_group_name}" + } } } #---------------------------------------------------------------------------------- @@ -291,6 +313,8 @@ module "launch-templates-on-demand" { worker_security_group_id = module.eks.worker_security_group_id node_group_name = var.on_demand_node_group_name tags = module.eks-label.tags + cluster_auth_base64 = module.eks.cluster_certificate_authority_data + cluster_endpoint = module.eks.cluster_endpoint // instance_type = var.instance_type } @@ -301,9 +325,24 @@ module "launch-templates-spot" { worker_security_group_id = module.eks.worker_security_group_id node_group_name = var.spot_node_group_name tags = module.eks-label.tags + cluster_auth_base64 = module.eks.cluster_certificate_authority_data + cluster_endpoint = module.eks.cluster_endpoint // instance_type = var.instance_type } +module "launch-templates-bottlerocket" { + source = "../modules/launch-templates" + cluster_name = module.eks.cluster_id + volume_size = "50" + worker_security_group_id = module.eks.worker_security_group_id + node_group_name = var.bottlerocket_node_group_name + tags = module.eks-label.tags + bottlerocket_ami = var.bottlerocket_ami + self_managed = true + cluster_auth_base64 = module.eks.cluster_certificate_authority_data + cluster_endpoint = module.eks.cluster_endpoint + // instance_type = var.instance_type +} # --------------------------------------------------------------------------------------------------------------------- # IAM Module # --------------------------------------------------------------------------------------------------------------------- diff --git a/source/variables.tf b/source/variables.tf index 762e98bcb3..da7cec036f 100644 --- a/source/variables.tf +++ b/source/variables.tf @@ -154,6 +154,42 @@ variable "map_additional_iam_users" { #---------------------------------------------------------- // EKS WORKER NODES 
#---------------------------------------------------------- + +variable "bottlerocket_ami" { + type = string + default = "ami-0326716ad575410ab" + description = "/aws/service/bottlerocket/aws-k8s-1.19/x86_64/latest/image_id" +} +variable "bottlerocket_node_group_name" { + type = string + default = "mg-m5-bottlerocket" + description = "AWS eks managed node group name" +} +variable "bottlerocket_disk_size" { + type = number + default = 50 + description = "Disk size in GiB for worker nodes. Defaults to 20. Terraform will only perform drift detection if a configuration value is provided" +} +variable "bottlerocket_instance_type" { + type = list(string) + default = ["m5.large"] + description = "Set of instance types associated with the EKS Node Group" +} +variable "bottlerocket_desired_size" { + type = number + default = 3 + description = "Desired number of worker nodes" +} +variable "bottlerocket_max_size" { + type = number + default = 3 + description = "The maximum size of the AutoScaling Group" +} +variable "bottlerocket_min_size" { + type = number + default = 3 + description = "The minimum size of the AutoScaling Group" +} variable "on_demand_node_group_name" { type = string default = "mg-m5-on-demand" From 6bff465fc26da6245d2ebef18d68ae8e6db799bd Mon Sep 17 00:00:00 2001 From: Vara Bonthu Date: Sun, 18 Apr 2021 18:39:01 +0100 Subject: [PATCH 2/2] Updated Readme with Bottlerocket details --- README.md | 68 +++++++++++++++++++++++++++---------------------------- 1 file changed, 34 insertions(+), 34 deletions(-) diff --git a/README.md b/README.md index 736eede21d..cce71c1e41 100644 --- a/README.md +++ b/README.md @@ -4,7 +4,7 @@ The main purpose of this project is to provide a Terraform framework to help you get started on deploying **EKS Clusters** in multi-tenant environments using Hashicorp Terraform with AWS and Helm Providers. # Overview -EKS Terraform accelerator module helps you to provision **EKS clusters**, **Managed node groups** with **on-demand** and **spot instances**, **Fargate profiles** and all the necessary plugins/addons for EKS cluster. Terraform **Helm provider** is used to deploy the common Kubernetes add-ons with publicly available [Helm Charts](https://artifacthub.io/). This project leverages the official [terraform-aws-eks](https://github.com/terraform-aws-modules/terraform-aws-eks) module to create EKS Clusters. This framework helps you to design and create EKS clusters for different environments in various AWS accounts across multiple regions with a **unique Terraform configuration and state file** for each EKS cluster. +EKS Terraform accelerator module helps you to provision **EKS clusters**, **Managed node groups** with **on-demand** and **spot instances**, **Fargate profiles** and all the necessary plugins/addons for EKS cluster. Terraform **Helm provider** is used to deploy the common Kubernetes add-ons with publicly available [Helm Charts](https://artifacthub.io/). This project leverages the official [terraform-aws-eks](https://github.com/terraform-aws-modules/terraform-aws-eks) module to create EKS Clusters. This framework helps you to design and create EKS clusters for different environments in various AWS accounts across multiple regions with a **unique Terraform configuration and state file** for each EKS cluster. * Top level **live** folder contains the configuration setup for each cluster. Each folder under `live//application` represents an EKS cluster environment(e.g., dev, test, load etc.). 
This folder contains `backend.conf` and `base.tfvars` which are used to create a unique Terraform state for each cluster environment. @@ -30,32 +30,32 @@ This module helps you to provision the following EKS resources 7. [SSM agent](https://aws.amazon.com/blogs/containers/introducing-launch-template-and-custom-ami-support-in-amazon-eks-managed-node-groups/) deployed through launch templates to Managed Node Groups 8. [Bottlerocket OS](https://github.com/bottlerocket-os/bottlerocket) - Managed Node Groups with Bottlerocket OS and Launch Templates 9. [RBAC](https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html) for Developers and Administrators with IAM roles -10. Kubernetes Addons using [Helm Charts](https://helm.sh/docs/topics/charts/) -11. [Metrics Server](https://github.com/kubernetes-sigs/metrics-server) -12. [Cluster Autoscaler](https://github.com/kubernetes/autoscaler) +10. Kubernetes Addons using [Helm Charts](https://helm.sh/docs/topics/charts/) +11. [Metrics Server](https://github.com/Kubernetes -sigs/metrics-server) +12. [Cluster Autoscaler](https://github.com/Kubernetes /autoscaler) 13. [AWS LB Ingress Controller](https://docs.aws.amazon.com/eks/latest/userguide/alb-ingress.html) -14. [Traefik Ingress Controller](https://doc.traefik.io/traefik/providers/kubernetes-ingress/) -15. [FluentBit to Cloudwatch for Managed Node groups](https://github.com/aws/aws-for-fluent-bit) -16. [FluentBit to Cloudwatch for Fargate Containers](https://aws.amazon.com/blogs/containers/fluent-bit-for-amazon-eks-on-aws-fargate-is-here/) +14. [Traefik Ingress Controller](https://doc.traefik.io/traefik/providers/Kubernetes -ingress/) +15. [FluentBit to CloudWatch for Managed Node groups](https://github.com/aws/aws-for-fluent-bit) +16. [FluentBit to CloudWatch for Fargate Containers](https://aws.amazon.com/blogs/containers/fluent-bit-for-amazon-eks-on-aws-fargate-is-here/) # Helm Charts Modules -Helm Chart Module within this framework allows you to deploy kubernetes apps using Terraform helm chart provider with **enabled** conditional parameter in `base.tfvars`. +Helm Chart Module within this framework allows you to deploy Kubernetes apps using Terraform helm chart provider with **enabled** conditional parameter in `base.tfvars`. **NOTE**: Docker images used in Helm Charts requires downloading locally and push it to ECR repo for **fully private EKS Clusters**. This project provides both options of public docker hub repo and private ECR repo for all Helm chart modules. You can find the README for each Helm module with instructions on how to download the images from Docker Hub or third-party repos and upload it to your private ECR repo. For example, [ALB Ingress Controller](helm/lb_ingress_controller/README.md) for AWS LB Ingress Controller module. ## Ingress Controller Modules -Ingress is an API object that defines the traffic routing rules (e.g. load balancing, SSL termination, path-based routing, protocol), whereas the Ingress Controller is the component responsible for fulfilling those requests. +Ingress is an API object that defines the traffic routing rules (e.g., load balancing, SSL termination, path-based routing, protocol), whereas the Ingress Controller is the component responsible for fulfilling those requests. * [ALB Ingress Controller](helm/lb_ingress_controller/README.md) can be deployed by specifying the following line in `base.tfvars` file. 
-**AWS ALB Ingress controller** triggers the creation of an ALB and the necessary supporting AWS resources whenever a Kubernetes user declares an Ingress resource in the cluster. -[ALB Docs](https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/) +**AWS ALB Ingress controller** triggers the creation of an ALB and the necessary supporting AWS resources whenever a Kubernetes user declares an Ingress resource in the cluster. +[ALB Docs](https://Kubernetes -sigs.github.io/aws-load-balancer-controller/latest/) `alb_ingress_controller_enable = true` * [Traefik Ingress Controller](helm/traefik_ingress/README.md) can be deployed by specifying the following line in `base.tfvars` file. -**Treafik is an open source Kubernetes Ingress Controller**. The Traefik Kubernetes Ingress provider is a Kubernetes Ingress controller; that is to say, it manages access to cluster services by supporting the Ingress specification. For more detials about [Traefik can be found here](https://doc.traefik.io/traefik/providers/kubernetes-ingress/) +**Traefik is an open source Kubernetes Ingress Controller**. The Traefik Kubernetes Ingress provider is a Kubernetes Ingress controller; that is to say, it manages access to cluster services by supporting the Ingress specification. For more details about [Traefik can be found here](https://doc.traefik.io/traefik/providers/Kubernetes -ingress/) `traefik_ingress_controller_enable = true` @@ -63,14 +63,14 @@ Ingress is an API object that defines the traffic routing rules (e.g. load balan **Cluster Autoscaler** and **Metric Server** Helm Modules gets deployed by default with the EKS Cluster. * [Cluster Autoscaler](helm/cluster_autoscaler/README.md) can be deployed by specifying the following line in `base.tfvars` file. -The Kubernetes Cluster Autoscaler automatically adjusts the number of nodes in your cluster when pods fail or are rescheduled onto other nodes. It's not deployed by default in EKS clusters. -That is, the AWS Cloud Provider implementation within the Kubernetes Cluster Autoscaler controls the **DesiredReplicas** field of Amazon EC2 Auto Scaling groups. +The Kubernetes Cluster Autoscaler automatically adjusts the number of nodes in your cluster when pods fail or are rescheduled onto other nodes. It's not deployed by default in EKS clusters. +That is, the AWS Cloud Provider implementation within the Kubernetes Cluster Autoscaler controls the **DesiredReplicas** field of Amazon EC2 Auto Scaling groups. The Cluster Autoscaler is typically installed as a **Deployment** in your cluster. It uses leader election to ensure high availability, but scaling is one done by a single replica at a time. `cluster_autoscaler_enable = true` * [Metrics Server](helm/metrics_server/README.md) can be deployed by specifying the following line in `base.tfvars` file. -The Kubernetes Metrics Server, used to gather metrics such as cluster CPU and memory usage over time, is not deployed by default in EKS clusters. +The Kubernetes Metrics Server, used to gather metrics such as cluster CPU and memory usage over time, is not deployed by default in EKS clusters. `metrics_server_enable = true` @@ -84,26 +84,26 @@ For more details, see [aws-for-fluent-bit](https://gallery.ecr.aws/aws-observabi `aws-for-fluent-bit_enable = true` * [fargate-fluentbit](helm/fargate_fluentbit) can be deployed by specifying the following line in `base.tfvars` file. 
-This modules ships the Fargate Continaer logs to CloudWatch
+This module ships the Fargate Container logs to CloudWatch

`fargate_fluent_bit_enable = true`

# Bottlerocket OS

Bottlerocket is an open source operating system specifically designed for running containers. The Bottlerocket build system is based on Rust. It's a container host OS and doesn't include additional software or package managers beyond what is needed to run containers, which makes it very lightweight and secure. Container-optimized operating systems are ideal when you need to run applications in Kubernetes with minimal setup, do not want to worry about security or updates, or want OS support from the cloud provider. Container operating systems apply updates transactionally.

Bottlerocket runs two host containers: the control container, **on** by default, is used for AWS Systems Manager and remote API access; the admin container, **off** by default, is used for deep debugging and exploration.

The Bottlerocket [launch template userdata](modules/launch-templates/templates/bottlerocket-userdata.sh.tpl) uses the TOML format with key-value pairs. Remote API access is provided via the SSM agent. You can enable the troubleshooting (admin) container via user data with `[settings.host-containers.admin] enabled = true`.

### Features
* [Secure](https://github.com/bottlerocket-os/bottlerocket/blob/develop/SECURITY_FEATURES.md) - Opinionated, specialized and highly secure
* **Flexible** - Multi-cloud and multi-orchestrator
* **Transactional** - Image-based upgrades and rollbacks
* **Isolated** - Separate container runtimes

### Updates
Bottlerocket can be updated automatically via a Kubernetes operator

    $ kubectl apply -f Bottlerocket_k8s.csv.yaml
    $ kubectl get ClusterServiceVersion Bottlerocket_k8s | jq '.status'


# How to Deploy

## Pre-requisites:
Ensure that you have installed the following tools on your Mac or Windows laptop before starting to work with this module and running Terraform plan and apply

 1. [aws cli](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html)
 2. [aws-iam-authenticator](https://docs.aws.amazon.com/eks/latest/userguide/install-aws-iam-authenticator.html)
 3. [kubectl](https://kubernetes.io/docs/tasks/tools/)
 4. 
wget ## Deployment Steps The following steps walks you through the deployment of example [DEV cluster](live/preprod/eu-west-1/application/dev/base.tfvars) configuration. This config deploys a private EKS cluster with public and private subnets. Two managed worker nodes with On-demand and Spot instances along with one fargate profile for default namespace placed in private subnets. ALB placed in Public subnets created by LB Ingress controller. -It also deploys few kubernetes apps i.e., LB Ingress Controller, Metrics Server, Cluster Autoscaler, aws-for-fluent-bit CloudWatch logging for Managed node groups, FluentBit CloudWatch logging for Fargate etc. +It also deploys few Kubernetes apps i.e., LB Ingress Controller, Metrics Server, Cluster Autoscaler, aws-for-fluent-bit CloudWatch logging for Managed node groups, FluentBit CloudWatch logging for Fargate etc. ### Provision VPC (optional) and EKS cluster with selected Helm modules @@ -148,7 +148,7 @@ It's highly recommended to use remote state in S3 instead of using local backend key = "ekscluster/preprod/application/dev/terraform-main.tfstate" #### Step4: Assume IAM role before creating a EKS cluster. -This role will become the Kubernetes Admin by default. +This role will become the Kubernetes Admin by default. $ aws-mfa --assume-role arn:aws:iam:::role/ @@ -188,7 +188,7 @@ EKS Cluster details can be extracted from terraform output or from AWS Console t `example` folder contains multiple cluster templates with pre-populated `.tfvars` which can be used as a quick start. Reuse the templates from `examples` and follow the above Deployment steps as mentioned above. # EKS Addons update -Amazon EKS doesn't modify any of your Kubernetes add-ons when you update a cluster to newer versions. +Amazon EKS doesn't modify any of your Kubernetes add-ons when you update a cluster to newer versions. It's important to upgrade EKS Addons Amazon VPC CNI plug-in, DNS (CoreDNS) and KubeProxy for each EKS release. This [README](eks_cluster_addons_upgrade/README.md) guides you to update the EKS addons for newer versions that matches with your EKS cluster version @@ -196,11 +196,11 @@ This [README](eks_cluster_addons_upgrade/README.md) guides you to update the EKS Updating a EKS cluster instructions can be found in [AWS documentation](https://docs.aws.amazon.com/eks/latest/userguide/update-cluster.html). # Important note -This module tested only with **Kubernetes v1.19 version**. Helm Charts addon modules aligned with k8s v1.19. If you are looking to use this code to deploy different versions of Kubernetes then ensure Helm charts and docker images aligned with k8s version. +This module tested only with **Kubernetes v1.19 version**. Helm Charts addon modules aligned with k8s v1.19. If you are looking to use this code to deploy different versions of Kubernetes then ensure Helm charts and docker images aligned with k8s version. -The `kubernetes_version="1.19"` is the required variable in `base.tfvars`. Kubernetes is evolving a lot, and each major version includes new features, fixes, or changes. +The `Kubernetes _version="1.19"` is the required variable in `base.tfvars`. Kubernetes is evolving a lot, and each major version includes new features, fixes, or changes. -Always check [Kubernetes Release Notes](https://kubernetes.io/docs/setup/release/notes/) before updating the major version. 
You also need to ensure that your applications and Helm addons are updated,
or workloads could fail after the upgrade is complete.
For actions you may need to take before upgrading, see the steps in the EKS documentation.

# Notes:
If you are using an existing VPC then you may need to ensure that the following tags are added to the VPC and subnet resources (one way to apply them with Terraform is sketched at the end of this section)

Add Tags to VPC

    Key = kubernetes.io/cluster/${local.cluster_name}   Value = Shared

Add Tags to Public Subnets tagging requirement

    public_subnet_tags = {
      "kubernetes.io/cluster/${local.cluster_name}" = "shared"
      "kubernetes.io/role/elb"                      = "1"
    }

Add Tags to Private Subnets tagging requirement

    private_subnet_tags = {
      "kubernetes.io/cluster/${local.cluster_name}" = "shared"
      "kubernetes.io/role/internal-elb"             = "1"
    }

Fully private EKS clusters require the following VPC endpoints to be created so that worker nodes can communicate with AWS services. This module creates these endpoints if you choose to create the VPC. If you are using an existing VPC then you may need to ensure these endpoints are created.
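As a reference for the tagging requirement above, a minimal Terraform sketch of one way to apply these tags to an existing VPC and its private subnets is shown below. The aws_ec2_tag resource names are illustrative and not part of this module, and var.vpc_id / var.private_subnet_ids correspond to the existing-VPC (Option 2) inputs used elsewhere in this project:

    # Tag an existing VPC so EKS can associate it with the cluster
    resource "aws_ec2_tag" "cluster_vpc" {
      resource_id = var.vpc_id
      key         = "kubernetes.io/cluster/${local.cluster_name}"
      value       = "shared"
    }

    # Tag existing private subnets for cluster discovery and internal load balancers
    resource "aws_ec2_tag" "private_subnet_cluster" {
      for_each    = toset(var.private_subnet_ids)
      resource_id = each.value
      key         = "kubernetes.io/cluster/${local.cluster_name}"
      value       = "shared"
    }

    resource "aws_ec2_tag" "private_subnet_internal_elb" {
      for_each    = toset(var.private_subnet_ids)
      resource_id = each.value
      key         = "kubernetes.io/role/internal-elb"
      value       = "1"
    }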