From 9e3f0af14cccd9bfec90315457968f961e60720b Mon Sep 17 00:00:00 2001
From: Reinaldo Martinez
Date: Tue, 29 Aug 2017 09:54:58 -0400
Subject: [PATCH 1/3] Added new practical example with private networking and the use of bastions

---
 .../kops-tests-private-net-bastion-host.md | 379 ++++++++++++++++++
 1 file changed, 379 insertions(+)
 create mode 100644 docs/examples/kops-tests-private-net-bastion-host.md

diff --git a/docs/examples/kops-tests-private-net-bastion-host.md b/docs/examples/kops-tests-private-net-bastion-host.md
new file mode 100644
index 0000000000000..9d53d4c71ab25
--- /dev/null
+++ b/docs/examples/kops-tests-private-net-bastion-host.md
@@ -0,0 +1,379 @@
+# USING KOPS WITH PRIVATE NETWORKING AND A BASTION HOST IN A HIGHLY-AVAILABLE SETUP
+
+## WHAT WE WANT TO ACCOMPLISH HERE?
+
+The exercise described in this document will focus on the following goals:
+
+- Demonstrate how to use a production setup with 3 masters and two workers in different availability zones.
+- Demonstrate how to use a private networking setup with a bastion host.
+- Ensure our masters are deployed on 3 different AWS availability zones.
+- Ensure our nodes are deployed on 2 different AWS availability zones.
+- Add true high-availability to the bastion instance group.
+
+
+## PRE-FLIGHT CHECK:
+
+Before rushing in to replicate this exercise, please ensure your basic environment is correctly set up. See the [KOPS AWS tutorial for more information](https://github.com/kubernetes/kops/blob/master/docs/aws.md).
+
+Ensure that the following points are covered and working in your environment:
+
+- AWS cli fully configured (the AWS account already has the proper permissions/roles needed for kops). Depending on your distro, you can install it directly from packages, or, if you want the most up-to-date version, use "pip" and install awscli by issuing a "pip install awscli" command. Your choice!
+- Local ssh key ready on ~/.ssh/id_rsa / id_rsa.pub. You can generate it using the "ssh-keygen" command: `ssh-keygen -t rsa -f ~/.ssh/id_rsa -P ""`
+- Region set to us-east-1 (az's: us-east-1a, us-east-1b, us-east-1c, us-east-1d and us-east-1e). For this exercise we'll deploy our cluster on US-EAST-1. For real HA at the kubernetes master level, you need 3 masters. If you want to ensure that each master is deployed on a different availability zone, then a region with "at least" 3 availability zones is required here. You can still deploy a multi-master kubernetes setup on regions with just 2 az's, but this means that two masters will be deployed on a single az, and if this az goes offline you'll lose two masters! If possible, always pick a region with at least 3 different availability zones for real H.A. You can always check amazon regions and az's on the link: [AWS Global Infrastructure](https://aws.amazon.com/about-aws/global-infrastructure/)
+- kubectl and kops installed. For this last part, you can do this using the following commands (do this as root, please).
Next commands asume you are running a amd64/x86_64 linux distro: + +```bash +cd ~ +curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl +wget https://github.com/kubernetes/kops/releases/download/1.7.0/kops-linux-amd64 +chmod 755 kubectl kops-linux-amd64 +mv kops-linux-amd64 kops +mv kubectl kops /usr/local/bin +``` + + +## AWS/KOPS ENVIRONMENT SETUP: + +First, using some scripting and asuming you already configured your "aws" environment on your linux system, use the following commands in order to export your AWS access/secret (this will work if you are using the default profile): + +```bash +export AWS_ACCESS_KEY_ID=`grep aws_access_key_id ~/.aws/credentials|awk '{print $3}'` +export AWS_SECRET_ACCESS_KEY=`grep aws_secret_access_key ~/.aws/credentials|awk '{print $3}'` +echo "$AWS_ACCESS_KEY_ID $AWS_SECRET_ACCESS_KEY" +``` + +If you are using multiple profiles (and not the default one), you should use the following command instead in order to export your profile: + +```bash +export AWS_PROFILE=name_of_your_profile +``` + +Create a bucket (if you don't already have one) for your cluster state: + +```bash +aws s3api create-bucket --bucket my-kops-s3-bucket-for-cluster-state --region us-east-1 +``` + +Then export the name of your cluster along with the "S3" URL of your bucket: + +```bash +export NAME=privatekopscluster.k8s.local +export KOPS_STATE_STORE=s3://my-kops-s3-bucket-for-cluster-state +``` + +Some things to note from here: + +- "NAME" will be an environment variable that we'll use from now in order to refer to our cluster name. For this practical exercise, our cluster name is "privatekopscluster.k8s.local". +- Because we'll use gossip DNS instead of a valid DNS domain on AWS ROUTE53 service, our cluster name need to include the string **".k8s.local"** at the end (this is covered on our AWS tutorials). You can see more about this on our [Getting Started Doc.](https://github.com/kubernetes/kops/blob/master/docs/aws.md) + + +## KOPS PRIVATE CLUSTER CREATION: + +Let's first create our cluster ensuring a multi-master setup with 3 masters in a multi-az setup, two worker nodes also in a multi-az setup, and using both private networking and a bastion server: + +```bash +kops create cluster \ +--cloud=aws \ +--master-zones=us-east-1a,us-east-1b,us-east-1c \ +--zones=us-east-1a,us-east-1b,us-east-1c \ +--node-count=2 \ +--topology private \ +--networking kopeio-vxlan \ +--node-size=t2.micro \ +--master-size=t2.micro \ +${NAME} +``` + +A few things to note here: + +- The environment variable ${NAME} was previously exported with our cluster name: privatekopscluster.k8s.local. +- "--cloud=aws": As kops grows and begin to support more clouds, we need to tell the command to use the specific cloud we want for our deployment. In this case: amazon web services (aws). +- For true HA at the master level, we need to pick a region with at least 3 availability zones. For this practical exercise, we are using "us-east-1" AWS region which contains 5 availability zones (az's for short): us-east-1a, us-east-1b, us-east-1c, us-east-1d and us-east-1e. +- The "--master-zones=us-east-1a,us-east-1b,us-east-1c" KOPS argument will actually enforce that we want 3 masters here. "--node-count=2" only applies to the worker nodes (not the masters). +- The "--topology private" argument will ensure that all our instances will have private IP's and no public IP's from amazon. 
+- We are including the arguments "--node-size" and "master-size" to specify the "instance types" for both our masters and worker nodes. +- And finally, the "--networking kopeio-vxlan" argument. With the private networking model, we need to tell kops which networking subsystem to use. More information about kops supported networking models can be obtained from the [KOPS Kubernetes Networking Documentation](https://github.com/kubernetes/kops/blob/master/docs/networking.md). For this exercise we'll use "kopeio-vxlan" (or "kopeio" for short). + +**NOTE**: You can add the "--bastion" argument here if you are not using "gossip dns" and create the bastion from start, but if you are using "gossip-dns" this will make this cluster to fail (this is a bug we are correcting now). For the moment don't use "--bastion" when using gossip DNS. We'll show you how to get around this by first creating the private cluster, then creation the bastion instance group once the cluster is running. + +With those points clarified, let's deploy our cluster: + +```bash +kops update cluster ${NAME} --yes +``` + +Go for a coffee or just take a 10~15 minutes walk. After that, the cluster will be up-and-running. We can check this with the following commands: + +```bash +kops validate cluster + +Using cluster from kubectl context: privatekopscluster.k8s.local + +Validating cluster privatekopscluster.k8s.local + +INSTANCE GROUPS +NAME ROLE MACHINETYPE MIN MAX SUBNETS +master-us-east-1a Master t2.micro 1 1 us-east-1a +master-us-east-1b Master t2.micro 1 1 us-east-1b +master-us-east-1c Master t2.micro 1 1 us-east-1c +nodes Node t2.micro 2 2 us-east-1a,us-east-1b,us-east-1c + +NODE STATUS +NAME ROLE READY +ip-172-20-111-44.ec2.internal master True +ip-172-20-44-102.ec2.internal node True +ip-172-20-53-10.ec2.internal master True +ip-172-20-64-151.ec2.internal node True +ip-172-20-74-55.ec2.internal master True + +Your cluster privatekopscluster.k8s.local is ready +``` + +The ELB created by kops will expose the Kubernetes API trough "https" (configured on our ~/.kube/config file): + +```bash +grep server ~/.kube/config + +server: https://api-privatekopscluster-k8-djl5jb-1946625559.us-east-1.elb.amazonaws.com +``` + +But, all the cluster instances (masters and worker nodes) will have private IP's only (no AWS public IP's). Then, in order to reach our instances, we need to add a "bastion host" to our cluster. + + +## ADDING A BASTION HOST TO OUR CLUSTER. + +We mentioned earlier that we can't add the "--bastion" argument to our "kops create cluster" command if we are using "gossip dns" (a fix it's on the way as we speaks). That forces us to add the bastion afterwards, once the cluster is up and running. + +Let's add a bastion here by using the following command: + +```bash +kops create instancegroup bastions --role Bastion --subnet utility-us-east-1a --name ${NAME} +``` + +**Explanation of this command:** +- This command will add to our cluster definition a new instance group called "bastions" with the "Bastion" role on the aws subnet "utility-us-east-1a". Note that the "Bastion" role need the first letter to be a capital (Bastion=ok, bastion=not ok). +- The subnet "utility-us-east-1a" was created when we created our cluster the first time. KOPS add the "utility-" prefix to all subnets created on all specified AZ's. In other words, if we instructed kops to deploy our instances on us-east-1a, use-east-1b and use-east-1c, kops will create the subnets "utility-us-east-1a", "utility-us-east-1b" and "utility-us-east-1c". 
Because we need to tell kops where to deploy our bastion (or bastions), wee need to specify the subnet. + +You'll see the following output in your editor when you can change your bastion group size and add more networks. + +```bash +apiVersion: kops/v1alpha2 +kind: InstanceGroup +metadata: + creationTimestamp: null + name: bastions +spec: + image: kope.io/k8s-1.7-debian-jessie-amd64-hvm-ebs-2017-07-28 + machineType: t2.micro + maxSize: 1 + minSize: 1 + role: Bastion + subnets: + - utility-us-east-1a +``` + +If want a H.A. setup for your bastions, modify minSize and maxSize and add more subnets. We'll do this later on this exercise. + +Save this and deploy the changes: + +```bash +kops update cluster ${NAME} --yes +``` + +You will see an output like the following: + +```bash +I0828 13:06:33.153920 16528 apply_cluster.go:420] Gossip DNS: skipping DNS validation +I0828 13:06:34.686722 16528 executor.go:91] Tasks: 0 done / 116 total; 40 can run +I0828 13:06:36.181677 16528 executor.go:91] Tasks: 40 done / 116 total; 26 can run +I0828 13:06:37.602302 16528 executor.go:91] Tasks: 66 done / 116 total; 34 can run +I0828 13:06:39.116916 16528 launchconfiguration.go:327] waiting for IAM instance profile "bastions.privatekopscluster.k8s.local" to be ready +I0828 13:06:49.761535 16528 executor.go:91] Tasks: 100 done / 116 total; 9 can run +I0828 13:06:50.897272 16528 executor.go:91] Tasks: 109 done / 116 total; 7 can run +I0828 13:06:51.516158 16528 executor.go:91] Tasks: 116 done / 116 total; 0 can run +I0828 13:06:51.944576 16528 update_cluster.go:247] Exporting kubecfg for cluster +Kops has set your kubectl context to privatekopscluster.k8s.local + +Cluster changes have been applied to the cloud. + + +Changes may require instances to restart: kops rolling-update cluster +``` + +This is "kops" creating the instance group with your bastion instance. Let's validate our cluster: + +```bash +kops validate cluster +Using cluster from kubectl context: privatekopscluster.k8s.local + +Validating cluster privatekopscluster.k8s.local + +INSTANCE GROUPS +NAME ROLE MACHINETYPE MIN MAX SUBNETS +bastions Bastion t2.micro 1 1 utility-us-east-1a +master-us-east-1a Master t2.micro 1 1 us-east-1a +master-us-east-1b Master t2.micro 1 1 us-east-1b +master-us-east-1c Master t2.micro 1 1 us-east-1c +nodes Node t2.micro 2 2 us-east-1a,us-east-1b,us-east-1c + +NODE STATUS +NAME ROLE READY +ip-172-20-111-44.ec2.internal master True +ip-172-20-44-102.ec2.internal node True +ip-172-20-53-10.ec2.internal master True +ip-172-20-64-151.ec2.internal node True +ip-172-20-74-55.ec2.internal master True + +Your cluster privatekopscluster.k8s.local is ready +``` + +Our bastion instance group is there. Also, kops created an ELB for our "bastions" instance group that we can check with the following command: + +```bash +aws elb --output=table describe-load-balancers|grep DNSName.\*bastion|awk '{print $4}' +bastion-privatekopscluste-bgl0hp-1327959377.us-east-1.elb.amazonaws.com +``` + +For this LAB, the "ELB" FQDN is "bastion-privatekopscluste-bgl0hp-1327959377.us-east-1.elb.amazonaws.com" We can "ssh" to it: + +```bash +ssh -i ~/.ssh/id_rsa admin@bastion-privatekopscluste-bgl0hp-1327959377.us-east-1.elb.amazonaws.com + +The programs included with the Debian GNU/Linux system are free software; +the exact distribution terms for each program are described in the +individual files in /usr/share/doc/*/copyright. + +Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent +permitted by applicable law. 
+Last login: Mon Aug 28 18:07:16 2017 from 172.20.0.238 +``` + +Because we really want to use a ssh-agent, start it first (this will : + +```bash +eval `ssh-agent -s` +``` + +And add your key to the agent with "ssh-add": + +```bash +ssh-add ~/.ssh/id_rsa + +Identity added: /home/kops/.ssh/id_rsa (/home/kops/.ssh/id_rsa) +``` + +Then, ssh to your bastion ELB FQDN + +```bash +ssh -A admin@bastion-privatekopscluste-bgl0hp-1327959377.us-east-1.elb.amazonaws.com +``` + +Or if you want to automate it: + +```bash +ssh -A admin@`aws elb --output=table describe-load-balancers|grep DNSName.\*bastion|awk '{print $4}'` +``` + +And from the bastion, you can ssh to your masters or workers: + +```bash +admin@ip-172-20-2-64:~$ ssh admin@ip-172-20-53-10.ec2.internal + +The authenticity of host 'ip-172-20-53-10.ec2.internal (172.20.53.10)' can't be established. +ECDSA key fingerprint is d1:30:c6:5e:77:ff:cd:d2:7d:1f:f9:12:e3:b0:28:e4. +Are you sure you want to continue connecting (yes/no)? yes +Warning: Permanently added 'ip-172-20-53-10.ec2.internal,172.20.53.10' (ECDSA) to the list of known hosts. + +The programs included with the Debian GNU/Linux system are free software; +the exact distribution terms for each program are described in the +individual files in /usr/share/doc/*/copyright. + +Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent +permitted by applicable law. + +admin@ip-172-20-53-10:~$ +``` + +**NOTE:** Remember that you can obtain the local DNS names from your "kops validate cluster" command, or, with the "kubectl get nodes" command. We recommend the first (kops validate cluster) because it will tell you who are the masters and who the worker nodes: + + +```bash +kops validate cluster +Using cluster from kubectl context: privatekopscluster.k8s.local + +Validating cluster privatekopscluster.k8s.local + +INSTANCE GROUPS +NAME ROLE MACHINETYPE MIN MAX SUBNETS +bastions Bastion t2.micro 1 1 utility-us-east-1a +master-us-east-1a Master t2.micro 1 1 us-east-1a +master-us-east-1b Master t2.micro 1 1 us-east-1b +master-us-east-1c Master t2.micro 1 1 us-east-1c +nodes Node t2.micro 2 2 us-east-1a,us-east-1b,us-east-1c + +NODE STATUS +NAME ROLE READY +ip-172-20-111-44.ec2.internal master True +ip-172-20-44-102.ec2.internal node True +ip-172-20-53-10.ec2.internal master True +ip-172-20-64-151.ec2.internal node True +ip-172-20-74-55.ec2.internal master True + +Your cluster privatekopscluster.k8s.local is ready +``` + +## MAKING THE BASTION LAYER "HIGLY AVAILABLE". + +If for any reason "godzilla" decides to destroy the amazon AZ that contains our bastion, we'll basically be unable to enter to our instances. Let's add some H.A. to our bastion layer and force amazon to deploy additional bastion instances on other availability zones. 
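+
+Before editing anything, you may want to confirm which availability zones the bastion autoscaling group currently spans. The optional check below is one way to do it; note that the "bastions." name prefix is simply the naming pattern kops normally uses for this instance group (adjust it if yours differs), and that it relies on the "jq" utility being installed:
+
+```bash
+# List the AZs currently attached to the bastion autoscaling group
+# (assumes the usual "bastions.<clustername>" group name used by kops).
+aws autoscaling describe-auto-scaling-groups --output=json \
+| jq -r '.AutoScalingGroups[] | select(.AutoScalingGroupName | startswith("bastions.")) | .AvailabilityZones[]'
+```
+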
+ +First, let's edit our "bastions" instance group: + +```bash +kops edit ig bastions --name ${NAME} +``` + +And change minSize/maxSize to 3 (3 instances) and add more subnets: + +```bash +apiVersion: kops/v1alpha2 +kind: InstanceGroup +metadata: + creationTimestamp: 2017-08-28T17:05:23Z + labels: + kops.k8s.io/cluster: privatekopscluster.k8s.local + name: bastions +spec: + image: kope.io/k8s-1.7-debian-jessie-amd64-hvm-ebs-2017-07-28 + machineType: t2.micro + maxSize: 3 + minSize: 3 + role: Bastion + subnets: + - utility-us-east-1a + - utility-us-east-1b + - utility-us-east-1c +``` + +Save the changes, and update your cluster: + +```bash +kops update cluster ${NAME} --yes +``` + +**NOTE:** After the update command, you'll see the following recurring error: + +```bash +W0828 15:22:46.461033 5852 executor.go:109] error running task "LoadBalancer/bastion.privatekopscluster.k8s.local" (1m5s remaining to succeed): subnet changes on LoadBalancer not yet implemented: actual=[subnet-c029639a] -> expected=[subnet-23f8a90f subnet-4a24ef2e subnet-c029639a] +``` + +This happens because the original ELB created by "kops" only contained the subnet "utility-us-east-1a" and it can't add the additional subnets. In order to fix this, go to your AWS console and add the remaining subnets in your ELB. Then the recurring error will disappear and your bastion layer will be fully redundant. + +**NOTE:** Always think ahead: If you are creating a fully redundant cluster (with fully redundant bastions), always configure the redundancy from the beginning. + +When you are finished playing with kops, then destroy/delete your cluster: + +Finally, let's destroy our cluster: + +```bash +kops delete cluster ${NAME} --yes +``` From e06b5f9a173ee8329eaf535bf59bdd62c2bfe07d Mon Sep 17 00:00:00 2001 From: Reinaldo Martinez Date: Mon, 11 Sep 2017 14:08:59 -0400 Subject: [PATCH 2/3] Added new example with a route53 subdomain --- docs/examples/kops-test-route53-subdomain.md | 870 +++++++++++++++++++ 1 file changed, 870 insertions(+) create mode 100644 docs/examples/kops-test-route53-subdomain.md diff --git a/docs/examples/kops-test-route53-subdomain.md b/docs/examples/kops-test-route53-subdomain.md new file mode 100644 index 0000000000000..f2497d55eb5b1 --- /dev/null +++ b/docs/examples/kops-test-route53-subdomain.md @@ -0,0 +1,870 @@ +# USING KOPS WITH A ROUTE53 BASED SUBDOMAIN AND SCALING UP THE CLUSTER + +## WHAT WE WANT TO ACOMPLISH HERE ?. + +The exercise described on this document will focus on the following goals: + +- Demonstrate how to use a production-setup with 3 masters and two workers in different availability zones. +- Ensure our masters are deployed on 3 different AWS availability zones. +- Ensure our nodes are deployed on 2 different AWS availability zones. +- Use AWS Route53 service for the cluster DNS sub-domain. +- Show how to properly scale-up our cluster. + + +## PRE-FLIGHT CHECK: + +Before rushing in to replicate this exercise, please ensure your basic environment is correctly setup. See the [KOPS AWS tutorial for more information](https://github.com/kubernetes/kops/blob/master/docs/aws.md). + +Ensure that the following points are covered and working in your environment: + +- "jq" utility installed (this is available on most linux distributions). If you are running on Centos, you'll need to add "epel" repository with `yum -y install epel-release` then install jq with `yum -y install jq`. +- "dig" utility installed (this is also available on most linux distributions). 
We'll need "dig" in order to tests our DNS subdomain. On "centos/rhel" distros, this utility is part of the "bind-utils" package. +- AWS cli fully configured (aws account already with proper permissions/roles needed for kops). Depending on your distro, you can setup directly from packages, or if you want the most updated version, use "pip" and install awscli by issuing a "pip install awscli" command. Your choice !. +- Local ssh key ready on ~/.ssh/id_rsa / id_rsa.pub. You can generate it using "ssh-keygen" command: `ssh-keygen -t rsa -f ~/.ssh/id_rsa -P ""` +- Region set to us-east-1 (az's: us-east-1a, us-east-1b, us-east-1c, us-east-1d and us-east-1e). For this exercise we'll deploy our cluster on US-EAST-1. For real HA at kubernetes master level, you need 3 masters. If you want to ensure that each master is deployed on a different availability zone, then a region with "at least" 3 availabity zones is required here. You can still deploy a multi-master kubenetes setup on regions with just 2 az's, but this mean that two masters will be deployed on a single az, and of this az goes offline then you'll lose two master !. If possible, always pick a region with at least 3 different availability zones for real H.A. You always can check amazon regions and az's on the link: [AWS Global Infrastructure](https://aws.amazon.com/about-aws/global-infrastructure/) +- kubectl and kops installed. For this last part, you can do this with using following commnads (do this as root please). Next commands asume you are running a amd64/x86_64 linux distro: + +```bash +cd ~ +curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl +wget https://github.com/kubernetes/kops/releases/download/1.7.0/kops-linux-amd64 +chmod 755 kubectl kops-linux-amd64 +mv kops-linux-amd64 kops +mv kubectl kops /usr/local/bin +``` + +## DNS Setup - AWS Route53 + +For our setup we already have a hosted DNS domain in AWS: + +```bash + aws route53 list-hosted-zones --output=table +------------------------------------------------------------------------------------------------------------------ +| ListHostedZones | ++----------------------------------------------------------------------------------------------------------------+ +|| HostedZones || +|+---------------------------------------+-----------------------------+--------------+-------------------------+| +|| CallerReference | Id | Name | ResourceRecordSetCount || +|+---------------------------------------+-----------------------------+--------------+-------------------------+| +|| C0461665-01D8-463B-BF2D-62F1747A16DB | /hostedzone/ZTKK4EXR1EWR5 | kopeio.org. | 2 || +|+---------------------------------------+-----------------------------+--------------+-------------------------+| +||| Config ||| +||+-------------------------------------------------------------------+----------------------------------------+|| +||| PrivateZone | False ||| +||+-------------------------------------------------------------------+----------------------------------------+|| +``` + +We can also check our that our domain is reacheable from the Internet using "dig": + +```bash +dig +short kopeio.org soa + +ns-656.awsdns-18.net. awsdns-hostmaster.amazon.com. 1 7200 900 1209600 86400 + +dig +short kopeio.org ns + +ns-1056.awsdns-04.org. +ns-656.awsdns-18.net. +ns-9.awsdns-01.com. +ns-1642.awsdns-13.co.uk. 
+``` + +If both the "soa" and "ns" queries anwers OK, and with the data pointing to amazon, we are set and we can continue. Please always check that your router53 hosted DNS zone is working before doing anything else !. + +Now, let's create a subdomain that we'll use for our cluster: + +```bash +export ID=$(uuidgen) +echo $ID +ae852c68-78b3-41af-85ee-997fc470fd1c + +aws route53 \ +create-hosted-zone \ +--output=json \ +--name kopsclustertest.kopeio.org \ +--caller-reference $ID | \ +jq .DelegationSet.NameServers + +[ + "ns-1383.awsdns-44.org", + "ns-829.awsdns-39.net", + "ns-346.awsdns-43.com", + "ns-1973.awsdns-54.co.uk" +] +``` + +Note that the last command (`aws route53 create-hosted-zone`) will output your name servers for the subdomain: + +```bash +[ + "ns-1383.awsdns-44.org", + "ns-829.awsdns-39.net", + "ns-346.awsdns-43.com", + "ns-1973.awsdns-54.co.uk" +] +``` + +We need the zone parent ID too. We can obtain it with the following command: + +```bash +aws route53 --output=json list-hosted-zones | jq '.HostedZones[] | select(.Name=="kopeio.org.") | .Id' | cut -d/ -f3|cut -d\" -f1 +``` + +It's a good idea if we export this ID as a shell variable by using the following command: + +```bash +export parentzoneid=`aws route53 --output=json list-hosted-zones | jq '.HostedZones[] | select(.Name=="kopeio.org.") | .Id' | cut -d/ -f3|cut -d\" -f1` +``` + +Let's check the var: + +```bash +echo $parentzoneid +ZTKK4EXR1EWR5 +``` + +With the name servers obtained above, we need to construct a "json" file that we'll pass to amazon for our subdomain: + +```bash +cat<~/kopsclustertest.kopeio.org.json +{ + "Comment": "Create a subdomain NS record in the parent domain", + "Changes": [ + { + "Action": "CREATE", + "ResourceRecordSet": { + "Name": "kopsclustertest.kopeio.org", + "Type": "NS", + "TTL": 300, + "ResourceRecords": [ + { + "Value": "ns-1383.awsdns-44.org" + }, + { + "Value": "ns-829.awsdns-39.net" + }, + { + "Value": "ns-346.awsdns-43.com" + }, + { + "Value": "ns-1973.awsdns-54.co.uk" + } + ] + } + } + ] +} +EOF + +``` + +**NOTE:** This step is needed because the subdomain was created, but it does not have "ns" records on it. We are basically adding four NS records to the subdomain here. 
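+
+Before applying it, it doesn't hurt to confirm that the change-batch file is valid JSON. A trivial sanity check with the same "jq" utility we installed earlier (jq will print an error and exit non-zero if the file doesn't parse):
+
+```bash
+# Pretty-print the change batch; any JSON syntax error in the file will be reported here.
+jq . ~/kopsclustertest.kopeio.org.json
+```
+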
+ +With the json file ready, and the parent zone ID exported in the "$parentzoneid" environment variable, we can finish the task and add the NS records to the subdomain using the following command: + +```bash +aws route53 change-resource-record-sets \ +--output=table \ +--hosted-zone-id $parentzoneid \ +--change-batch file://~/kopsclustertest.kopeio.org.json +``` + +The output of the last command will be something like: + +``` +------------------------------------------------------------------------------------------------------------------------- +| ChangeResourceRecordSets | ++-----------------------------------------------------------------------------------------------------------------------+ +|| ChangeInfo || +|+----------------------------------------------------+------------------------+----------+----------------------------+| +|| Comment | Id | Status | SubmittedAt || +|+----------------------------------------------------+------------------------+----------+----------------------------+| +|| Create a subdomain NS record in the parent domain | /change/CJ7FOVJ7U58L0 | PENDING | 2017-09-06T13:28:12.972Z || +|+----------------------------------------------------+------------------------+----------+----------------------------+| +``` + +Finally, check your records with the following command: + +```bash +aws route53 list-resource-record-sets \ +--output=table \ +--hosted-zone-id `aws route53 --output=json list-hosted-zones | jq '.HostedZones[] | select(.Name=="kopsclustertest.kopeio.org.") | .Id' | cut -d/ -f3|cut -d\" -f1` +``` + +The last command will output the following info: + +```bash +--------------------------------------------------------------------------------------- +| ListResourceRecordSets | ++-------------------------------------------------------------------------------------+ +|| ResourceRecordSets || +|+----------------------------------------------------+----------------+-------------+| +|| Name | TTL | Type || +|+----------------------------------------------------+----------------+-------------+| +|| kopsclustertest.kopeio.org. | 172800 | NS || +|+----------------------------------------------------+----------------+-------------+| +||| ResourceRecords ||| +||+---------------------------------------------------------------------------------+|| +||| Value ||| +||+---------------------------------------------------------------------------------+|| +||| ns-1383.awsdns-44.org. ||| +||| ns-829.awsdns-39.net. ||| +||| ns-346.awsdns-43.com. ||| +||| ns-1973.awsdns-54.co.uk. ||| +||+---------------------------------------------------------------------------------+|| +|| ResourceRecordSets || +|+-------------------------------------------------------+------------+--------------+| +|| Name | TTL | Type || +|+-------------------------------------------------------+------------+--------------+| +|| kopsclustertest.kopeio.org. | 900 | SOA || +|+-------------------------------------------------------+------------+--------------+| +||| ResourceRecords ||| +||+---------------------------------------------------------------------------------+|| +||| Value ||| +||+---------------------------------------------------------------------------------+|| +||| ns-1383.awsdns-44.org. awsdns-hostmaster.amazon.com. 
1 7200 900 1209600 86400 ||| +||+---------------------------------------------------------------------------------+|| +``` + +Also, do a "dig" test in order to check the zone availability on the Internet: + +```bash +dig +short kopsclustertest.kopeio.org soa + +ns-1383.awsdns-44.org. awsdns-hostmaster.amazon.com. 1 7200 900 1209600 86400 + +dig +short kopsclustertest.kopeio.org ns + +ns-1383.awsdns-44.org. +ns-829.awsdns-39.net. +ns-1973.awsdns-54.co.uk. +ns-346.awsdns-43.com. +``` + +If both your SOA and NS records are there, then your subdomain is ready to be used by KOPS. + + +## AWS/KOPS ENVIRONMENT INFORMATION SETUP: + +First, using some scripting and asuming you already configured your "aws" environment on your linux system, use the following commands in order to export your AWS access/secret (this will work if you are using the default profile): + +```bash +export AWS_ACCESS_KEY_ID=`grep aws_access_key_id ~/.aws/credentials|awk '{print $3}'` +export AWS_SECRET_ACCESS_KEY=`grep aws_secret_access_key ~/.aws/credentials|awk '{print $3}'` +echo "$AWS_ACCESS_KEY_ID $AWS_SECRET_ACCESS_KEY" +``` + +If you are using multiple profiles (and not the default one), you should use the following command instead in order to export your profile: + +```bash +export AWS_PROFILE=name_of_your_profile +``` + +Create a bucket (if you don't already have one) for your cluster state: + +```bash +aws s3api create-bucket --bucket my-kops-s3-bucket-for-cluster-state --region us-east-1 +``` + +Then export the name of your cluster along with the "S3" URL of your bucket. Add your cluster name to the full subdomain: + +```bash +export NAME=mycluster01.kopsclustertest.kopeio.org +export KOPS_STATE_STORE=s3://my-kops-s3-bucket-for-cluster-state +``` + +Some things to note from here: + +- "NAME" will be an environment variable that we'll use from now in order to refer to our cluster name. For this practical exercise, our cluster name is "mycluster01.kopsclustertest.kopeio.org". + + +## KOPS PRIVATE CLUSTER CREATION: + +Let's first create our cluster ensuring a multi-master setup with 3 masters in a multi-az setup, two worker nodes also in a multi-az setup, and using both private networking and a bastion server: + +```bash +kops create cluster \ +--cloud=aws \ +--master-zones=us-east-1a,us-east-1b,us-east-1c \ +--zones=us-east-1a,us-east-1b,us-east-1c \ +--node-count=2 \ +--node-size=t2.micro \ +--master-size=t2.micro \ +${NAME} +``` + +A few things to note here: + +- The environment variable ${NAME} was previously exported with our cluster name: mycluster01.kopsclustertest.kopeio.org. +- "--cloud=aws": As kops grows and begin to support more clouds, we need to tell the command to use the specific cloud we want for our deployment. In this case: amazon web services (aws). +- For true HA at the master level, we need to pick a region with at least 3 availability zones. For this practical exercise, we are using "us-east-1" AWS region which contains 5 availability zones (az's for short): us-east-1a, us-east-1b, us-east-1c, us-east-1d and us-east-1e. +- The "--master-zones=us-east-1a,us-east-1b,us-east-1c" KOPS argument will actually enforce that we want 3 masters here. "--node-count=2" only applies to the worker nodes (not the masters). +- We are including the arguments "--node-size" and "master-size" to specify the "instance types" for both our masters and worker nodes. 
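+
+**NOTE**: As run above (without "--yes"), "kops create cluster" only records the cluster spec in the state store; nothing is created in AWS until the "update" step below runs with "--yes". If you first want to review what kops plans to create, you can run the update in preview mode (simply omit "--yes") and nothing will be applied:
+
+```bash
+# Preview only: prints the planned changes without applying anything.
+kops update cluster ${NAME}
+```
+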
+ +With those points clarified, let's deploy our cluster: + +```bash +kops update cluster ${NAME} --yes +``` + +The last command will generate the following output: + +```bash +I0906 09:42:09.399908 13538 executor.go:91] Tasks: 0 done / 75 total; 38 can run +I0906 09:42:12.033675 13538 vfs_castore.go:422] Issuing new certificate: "master" +I0906 09:42:12.310586 13538 vfs_castore.go:422] Issuing new certificate: "kube-scheduler" +I0906 09:42:12.791469 13538 vfs_castore.go:422] Issuing new certificate: "kube-proxy" +I0906 09:42:13.312675 13538 vfs_castore.go:422] Issuing new certificate: "kops" +I0906 09:42:13.378500 13538 vfs_castore.go:422] Issuing new certificate: "kubelet" +I0906 09:42:13.398070 13538 vfs_castore.go:422] Issuing new certificate: "kube-controller-manager" +I0906 09:42:13.636134 13538 vfs_castore.go:422] Issuing new certificate: "kubecfg" +I0906 09:42:14.684945 13538 executor.go:91] Tasks: 38 done / 75 total; 14 can run +I0906 09:42:15.997588 13538 executor.go:91] Tasks: 52 done / 75 total; 19 can run +I0906 09:42:17.855959 13538 launchconfiguration.go:327] waiting for IAM instance profile "masters.mycluster01.kopsclustertest.kopeio.org" to be ready +I0906 09:42:17.932515 13538 launchconfiguration.go:327] waiting for IAM instance profile "nodes.mycluster01.kopsclustertest.kopeio.org" to be ready +I0906 09:42:18.602180 13538 launchconfiguration.go:327] waiting for IAM instance profile "masters.mycluster01.kopsclustertest.kopeio.org" to be ready +I0906 09:42:18.682038 13538 launchconfiguration.go:327] waiting for IAM instance profile "masters.mycluster01.kopsclustertest.kopeio.org" to be ready +I0906 09:42:29.215995 13538 executor.go:91] Tasks: 71 done / 75 total; 4 can run +I0906 09:42:30.073417 13538 executor.go:91] Tasks: 75 done / 75 total; 0 can run +I0906 09:42:30.073471 13538 dns.go:152] Pre-creating DNS records +I0906 09:42:32.403909 13538 update_cluster.go:247] Exporting kubecfg for cluster +Kops has set your kubectl context to mycluster01.kopsclustertest.kopeio.org + +Cluster is starting. It should be ready in a few minutes. + +Suggestions: + * validate cluster: kops validate cluster + * list nodes: kubectl get nodes --show-labels + * ssh to the master: ssh -i ~/.ssh/id_rsa admin@api.mycluster01.kopsclustertest.kopeio.org +The admin user is specific to Debian. If not using Debian please use the appropriate user based on your OS. + * read about installing addons: https://github.com/kubernetes/kops/blob/master/docs/addons.md +``` + +Note that KOPS will create a DNS record for your API: api.mycluster01.kopsclustertest.kopeio.org. You can check this record with the following "dig" command: + +```bash +dig +short api.mycluster01.kopsclustertest.kopeio.org A +34.228.219.212 +34.206.72.126 +54.83.144.111 +``` + +KOPS created a DNS round-robin resource record with all the public IP's assigned to the masters. Do you remember we specified 3 masters ?. Well, there are their IP's. 
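+
+If the public "dig" query doesn't answer yet (the record may still be propagating to your resolver), you can also read the record straight from Route53. This just reuses the same zone-id lookup used earlier in this document and assumes "jq" is installed:
+
+```bash
+# Show the "api" A record directly from our Route53 hosted zone.
+aws route53 list-resource-record-sets --output=json \
+--hosted-zone-id `aws route53 --output=json list-hosted-zones | jq '.HostedZones[] | select(.Name=="kopsclustertest.kopeio.org.") | .Id' | cut -d/ -f3|cut -d\" -f1` \
+| jq '.ResourceRecordSets[] | select(.Name=="api.mycluster01.kopsclustertest.kopeio.org.")'
+```
+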
+ +After about 10~15 minutes (depending on how fast or how slow are amazon services during the cluster creation) you can check your cluster: + +```bash +kops validate cluster + +Using cluster from kubectl context: mycluster01.kopsclustertest.kopeio.org + +Validating cluster mycluster01.kopsclustertest.kopeio.org + +INSTANCE GROUPS +NAME ROLE MACHINETYPE MIN MAX SUBNETS +master-us-east-1a Master t2.micro 1 1 us-east-1a +master-us-east-1b Master t2.micro 1 1 us-east-1b +master-us-east-1c Master t2.micro 1 1 us-east-1c +nodes Node t2.micro 2 2 us-east-1a,us-east-1b,us-east-1c + +NODE STATUS +NAME ROLE READY +ip-172-20-125-42.ec2.internal master True +ip-172-20-33-58.ec2.internal master True +ip-172-20-43-160.ec2.internal node True +ip-172-20-64-116.ec2.internal master True +ip-172-20-68-15.ec2.internal node True + +Your cluster mycluster01.kopsclustertest.kopeio.org is ready + +``` + +Also with "kubectl": + +```bash +kubectl get nodes + +NAME STATUS AGE VERSION +ip-172-20-125-42.ec2.internal Ready 6m v1.7.2 +ip-172-20-33-58.ec2.internal Ready 6m v1.7.2 +ip-172-20-43-160.ec2.internal Ready 5m v1.7.2 +ip-172-20-64-116.ec2.internal Ready 6m v1.7.2 +ip-172-20-68-15.ec2.internal Ready 5m v1.7.2 +``` + +Let's try to send a command to our masters using "ssh": + +```bash +ssh -i ~/.ssh/id_rsa admin@api.mycluster01.kopsclustertest.kopeio.org "ec2metadata --public-ipv4" +34.206.72.126 +``` + +Our "api.xxxx" resource record is working OK. + +## DNS RESOURCE RECORDS CREATED BY KOPS ON ROUTE 53 + +Let's do a fast review (using aws cli tools) of the resource records created by KOPS inside our subdomain: + +```bash +aws route53 list-resource-record-sets \ +--output=table \ +--hosted-zone-id `aws route53 --output=json list-hosted-zones | jq '.HostedZones[] | select(.Name=="kopsclustertest.kopeio.org.") | .Id' | cut -d/ -f3|cut -d\" -f1` +``` + +The output: + +``` +--------------------------------------------------------------------------------------- +| ListResourceRecordSets | ++-------------------------------------------------------------------------------------+ +|| ResourceRecordSets || +|+----------------------------------------------------+----------------+-------------+| +|| Name | TTL | Type || +|+----------------------------------------------------+----------------+-------------+| +|| kopsclustertest.kopeio.org. | 172800 | NS || +|+----------------------------------------------------+----------------+-------------+| +||| ResourceRecords ||| +||+---------------------------------------------------------------------------------+|| +||| Value ||| +||+---------------------------------------------------------------------------------+|| +||| ns-1383.awsdns-44.org. ||| +||| ns-829.awsdns-39.net. ||| +||| ns-346.awsdns-43.com. ||| +||| ns-1973.awsdns-54.co.uk. ||| +||+---------------------------------------------------------------------------------+|| +|| ResourceRecordSets || +|+-------------------------------------------------------+------------+--------------+| +|| Name | TTL | Type || +|+-------------------------------------------------------+------------+--------------+| +|| kopsclustertest.kopeio.org. | 900 | SOA || +|+-------------------------------------------------------+------------+--------------+| +||| ResourceRecords ||| +||+---------------------------------------------------------------------------------+|| +||| Value ||| +||+---------------------------------------------------------------------------------+|| +||| ns-1383.awsdns-44.org. awsdns-hostmaster.amazon.com. 
1 7200 900 1209600 86400 ||| +||+---------------------------------------------------------------------------------+|| +|| ResourceRecordSets || +|+--------------------------------------------------------------+---------+----------+| +|| Name | TTL | Type || +|+--------------------------------------------------------------+---------+----------+| +|| api.mycluster01.kopsclustertest.kopeio.org. | 60 | A || +|+--------------------------------------------------------------+---------+----------+| +||| ResourceRecords ||| +||+---------------------------------------------------------------------------------+|| +||| Value ||| +||+---------------------------------------------------------------------------------+|| +||| 34.206.72.126 ||| +||| 34.228.219.212 ||| +||| 54.83.144.111 ||| +||+---------------------------------------------------------------------------------+|| +|| ResourceRecordSets || +|+-----------------------------------------------------------------+-------+---------+| +|| Name | TTL | Type || +|+-----------------------------------------------------------------+-------+---------+| +|| api.internal.mycluster01.kopsclustertest.kopeio.org. | 60 | A || +|+-----------------------------------------------------------------+-------+---------+| +||| ResourceRecords ||| +||+---------------------------------------------------------------------------------+|| +||| Value ||| +||+---------------------------------------------------------------------------------+|| +||| 172.20.125.42 ||| +||| 172.20.33.58 ||| +||| 172.20.64.116 ||| +||+---------------------------------------------------------------------------------+|| +|| ResourceRecordSets || +|+------------------------------------------------------------------+-------+--------+| +|| Name | TTL | Type || +|+------------------------------------------------------------------+-------+--------+| +|| etcd-a.internal.mycluster01.kopsclustertest.kopeio.org. | 60 | A || +|+------------------------------------------------------------------+-------+--------+| +||| ResourceRecords ||| +||+---------------------------------------------------------------------------------+|| +||| Value ||| +||+---------------------------------------------------------------------------------+|| +||| 172.20.33.58 ||| +||+---------------------------------------------------------------------------------+|| +|| ResourceRecordSets || +|+------------------------------------------------------------------+-------+--------+| +|| Name | TTL | Type || +|+------------------------------------------------------------------+-------+--------+| +|| etcd-b.internal.mycluster01.kopsclustertest.kopeio.org. | 60 | A || +|+------------------------------------------------------------------+-------+--------+| +||| ResourceRecords ||| +||+---------------------------------------------------------------------------------+|| +||| Value ||| +||+---------------------------------------------------------------------------------+|| +||| 172.20.64.116 ||| +||+---------------------------------------------------------------------------------+|| +|| ResourceRecordSets || +|+------------------------------------------------------------------+-------+--------+| +|| Name | TTL | Type || +|+------------------------------------------------------------------+-------+--------+| +|| etcd-c.internal.mycluster01.kopsclustertest.kopeio.org. 
| 60 | A || +|+------------------------------------------------------------------+-------+--------+| +||| ResourceRecords ||| +||+---------------------------------------------------------------------------------+|| +||| Value ||| +||+---------------------------------------------------------------------------------+|| +||| 172.20.125.42 ||| +||+---------------------------------------------------------------------------------+|| +|| ResourceRecordSets || +|+-------------------------------------------------------------------+------+--------+| +|| Name | TTL | Type || +|+-------------------------------------------------------------------+------+--------+| +|| etcd-events-a.internal.mycluster01.kopsclustertest.kopeio.org. | 60 | A || +|+-------------------------------------------------------------------+------+--------+| +||| ResourceRecords ||| +||+---------------------------------------------------------------------------------+|| +||| Value ||| +||+---------------------------------------------------------------------------------+|| +||| 172.20.33.58 ||| +||+---------------------------------------------------------------------------------+|| +|| ResourceRecordSets || +|+-------------------------------------------------------------------+------+--------+| +|| Name | TTL | Type || +|+-------------------------------------------------------------------+------+--------+| +|| etcd-events-b.internal.mycluster01.kopsclustertest.kopeio.org. | 60 | A || +|+-------------------------------------------------------------------+------+--------+| +||| ResourceRecords ||| +||+---------------------------------------------------------------------------------+|| +||| Value ||| +||+---------------------------------------------------------------------------------+|| +||| 172.20.64.116 ||| +||+---------------------------------------------------------------------------------+|| +|| ResourceRecordSets || +|+-------------------------------------------------------------------+------+--------+| +|| Name | TTL | Type || +|+-------------------------------------------------------------------+------+--------+| +|| etcd-events-c.internal.mycluster01.kopsclustertest.kopeio.org. | 60 | A || +|+-------------------------------------------------------------------+------+--------+| +||| ResourceRecords ||| +||+---------------------------------------------------------------------------------+|| +||| Value ||| +||+---------------------------------------------------------------------------------+|| +||| 172.20.125.42 ||| +||+---------------------------------------------------------------------------------+|| +``` + +Maybe with json output and some "jq" parsing: + +```bash +aws route53 list-resource-record-sets --output=json --hosted-zone-id `aws route53 --output=json list-hosted-zones | jq '.HostedZones[] | select(.Name=="kopsclustertest.kopeio.org.") | .Id' | cut -d/ -f3|cut -d\" -f1`|jq .ResourceRecordSets[] +``` + +Output: + +``` +{ + "TTL": 172800, + "Name": "kopsclustertest.kopeio.org.", + "Type": "NS", + "ResourceRecords": [ + { + "Value": "ns-1383.awsdns-44.org." + }, + { + "Value": "ns-829.awsdns-39.net." + }, + { + "Value": "ns-346.awsdns-43.com." + }, + { + "Value": "ns-1973.awsdns-54.co.uk." + } + ] +} +{ + "TTL": 900, + "Name": "kopsclustertest.kopeio.org.", + "Type": "SOA", + "ResourceRecords": [ + { + "Value": "ns-1383.awsdns-44.org. awsdns-hostmaster.amazon.com. 
1 7200 900 1209600 86400" + } + ] +} +{ + "TTL": 60, + "Name": "api.mycluster01.kopsclustertest.kopeio.org.", + "Type": "A", + "ResourceRecords": [ + { + "Value": "34.206.72.126" + }, + { + "Value": "34.228.219.212" + }, + { + "Value": "54.83.144.111" + } + ] +} +{ + "TTL": 60, + "Name": "api.internal.mycluster01.kopsclustertest.kopeio.org.", + "Type": "A", + "ResourceRecords": [ + { + "Value": "172.20.125.42" + }, + { + "Value": "172.20.33.58" + }, + { + "Value": "172.20.64.116" + } + ] +} +{ + "TTL": 60, + "Name": "etcd-a.internal.mycluster01.kopsclustertest.kopeio.org.", + "Type": "A", + "ResourceRecords": [ + { + "Value": "172.20.33.58" + } + ] +} +{ + "TTL": 60, + "Name": "etcd-b.internal.mycluster01.kopsclustertest.kopeio.org.", + "Type": "A", + "ResourceRecords": [ + { + "Value": "172.20.64.116" + } + ] +} +{ + "TTL": 60, + "Name": "etcd-c.internal.mycluster01.kopsclustertest.kopeio.org.", + "Type": "A", + "ResourceRecords": [ + { + "Value": "172.20.125.42" + } + ] +} +{ + "TTL": 60, + "Name": "etcd-events-a.internal.mycluster01.kopsclustertest.kopeio.org.", + "Type": "A", + "ResourceRecords": [ + { + "Value": "172.20.33.58" + } + ] +} +{ + "TTL": 60, + "Name": "etcd-events-b.internal.mycluster01.kopsclustertest.kopeio.org.", + "Type": "A", + "ResourceRecords": [ + { + "Value": "172.20.64.116" + } + ] +} +{ + "TTL": 60, + "Name": "etcd-events-c.internal.mycluster01.kopsclustertest.kopeio.org.", + "Type": "A", + "ResourceRecords": [ + { + "Value": "172.20.125.42" + } + ] +} +``` + +## SCALING-UP YOUR CLUSTER. + +Let's see the following scenario: Our load is increasing and we need to add two more worker nodes. First, let's get our instance group names: + +```bash +kops get instancegroups +Using cluster from kubectl context: mycluster01.kopsclustertest.kopeio.org + +NAME ROLE MACHINETYPE MIN MAX SUBNETS +master-us-east-1a Master t2.micro 1 1 us-east-1a +master-us-east-1b Master t2.micro 1 1 us-east-1b +master-us-east-1c Master t2.micro 1 1 us-east-1c +nodes Node t2.micro 2 2 us-east-1a,us-east-1b,us-east-1c +``` + +We can see here that our workers instance group name is "nodes". 
Let's edit the group with the command "kops edit ig nodes" + +```bash +kops edit ig nodes +``` + +An editor (whatever you have on the $EDITOR shell variable) will open with the following text: + +``` +apiVersion: kops/v1alpha2 +kind: InstanceGroup +metadata: + creationTimestamp: 2017-09-06T13:40:39Z + labels: + kops.k8s.io/cluster: mycluster01.kopsclustertest.kopeio.org + name: nodes +spec: + image: kope.io/k8s-1.7-debian-jessie-amd64-hvm-ebs-2017-07-28 + machineType: t2.micro + maxSize: 2 + minSize: 2 + role: Node + subnets: + - us-east-1a + - us-east-1b + - us-east-1c +``` + +Let's change minSize and maxSize to "3" + +``` +apiVersion: kops/v1alpha2 +kind: InstanceGroup +metadata: + creationTimestamp: 2017-09-06T13:40:39Z + labels: + kops.k8s.io/cluster: mycluster01.kopsclustertest.kopeio.org + name: nodes +spec: + image: kope.io/k8s-1.7-debian-jessie-amd64-hvm-ebs-2017-07-28 + machineType: t2.micro + maxSize: 3 + minSize: 3 + role: Node + subnets: + - us-east-1a + - us-east-1b + - us-east-1c +``` + +Save it and review with `kops update cluster $NAME`: + +```bash +kops update cluster $NAME +``` + +The last command will output: + +```bash +I0906 10:16:30.619321 13607 executor.go:91] Tasks: 0 done / 75 total; 38 can run +I0906 10:16:32.703865 13607 executor.go:91] Tasks: 38 done / 75 total; 14 can run +I0906 10:16:33.592807 13607 executor.go:91] Tasks: 52 done / 75 total; 19 can run +I0906 10:16:35.009432 13607 executor.go:91] Tasks: 71 done / 75 total; 4 can run +I0906 10:16:35.320078 13607 executor.go:91] Tasks: 75 done / 75 total; 0 can run +Will modify resources: + AutoscalingGroup/nodes.mycluster01.kopsclustertest.kopeio.org + MinSize 2 -> 3 + MaxSize 2 -> 3 + +Must specify --yes to apply changes +``` + +Now, let's apply the change: + +```bash +kops update cluster $NAME --yes +``` + +Go for another coffee (or maybe a tee) and after some minutes check your cluster again with "kops validate cluster" + +```bash +kops validate cluster + +Using cluster from kubectl context: mycluster01.kopsclustertest.kopeio.org + +Validating cluster mycluster01.kopsclustertest.kopeio.org + +INSTANCE GROUPS +NAME ROLE MACHINETYPE MIN MAX SUBNETS +master-us-east-1a Master t2.micro 1 1 us-east-1a +master-us-east-1b Master t2.micro 1 1 us-east-1b +master-us-east-1c Master t2.micro 1 1 us-east-1c +nodes Node t2.micro 3 3 us-east-1a,us-east-1b,us-east-1c + +NODE STATUS +NAME ROLE READY +ip-172-20-103-68.ec2.internal node True +ip-172-20-125-42.ec2.internal master True +ip-172-20-33-58.ec2.internal master True +ip-172-20-43-160.ec2.internal node True +ip-172-20-64-116.ec2.internal master True +ip-172-20-68-15.ec2.internal node True + +Your cluster mycluster01.kopsclustertest.kopeio.org is ready + +``` + +You can see how your cluster scaled up to 3 nodes. + +**SCALING RECOMMENDATIONS:** +- Always think ahead. If you want to ensure to have the capability to scale-up to all available zones in the region, ensure to add them to the "--zones=" argument when using the "kops create cluster" command. Example: --zones=us-east-1a,us-east-1b,us-east-1c,us-east-1d,us-east-1e. That will make things simpler later. +- For the masters, always consider "odd" numbers starting from 3. Like many other cluster, odd numbers starting from "3" are the proper way to create a fully redundant multi-master solution. In the specific case of "kops", you add masters by adding zones to the "--master-zones" argument on "kops create command". 
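+
+As a quick sanity check after scaling (and assuming the standard zone label that Kubernetes 1.7 applies to nodes on AWS, "failure-domain.beta.kubernetes.io/zone"), you can count how your nodes are spread across availability zones:
+
+```bash
+# Count nodes per AWS availability zone using the standard zone label.
+kubectl get nodes -o json \
+| jq -r '.items[].metadata.labels["failure-domain.beta.kubernetes.io/zone"]' \
+| sort | uniq -c
+```
+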
+ +## DELETING OUR CLUSTER AND CHECKING OUR DNS SUBDOMAIN: + +If we don't need our cluster anymore, let's use a kops command in order to delete it: + +```bash +kops delete cluster ${NAME} --yes +``` + +After a short while, you'll see the following message: + +``` +Deleted kubectl config for mycluster01.kopsclustertest.kopeio.org + +Deleted cluster: "mycluster01.kopsclustertest.kopeio.org" +``` + +Now, let's check our DNS records: + +```bash +aws route53 list-resource-record-sets \ +--output=table \ +--hosted-zone-id `aws route53 --output=json list-hosted-zones | jq '.HostedZones[] | select(.Name=="kopsclustertest.kopeio.org.") | .Id' | cut -d/ -f3|cut -d\" -f1` +``` + +The output: + +``` +--------------------------------------------------------------------------------------- +| ListResourceRecordSets | ++-------------------------------------------------------------------------------------+ +|| ResourceRecordSets || +|+----------------------------------------------------+----------------+-------------+| +|| Name | TTL | Type || +|+----------------------------------------------------+----------------+-------------+| +|| kopsclustertest.kopeio.org. | 172800 | NS || +|+----------------------------------------------------+----------------+-------------+| +||| ResourceRecords ||| +||+---------------------------------------------------------------------------------+|| +||| Value ||| +||+---------------------------------------------------------------------------------+|| +||| ns-1383.awsdns-44.org. ||| +||| ns-829.awsdns-39.net. ||| +||| ns-346.awsdns-43.com. ||| +||| ns-1973.awsdns-54.co.uk. ||| +||+---------------------------------------------------------------------------------+|| +|| ResourceRecordSets || +|+-------------------------------------------------------+------------+--------------+| +|| Name | TTL | Type || +|+-------------------------------------------------------+------------+--------------+| +|| kopsclustertest.kopeio.org. | 900 | SOA || +|+-------------------------------------------------------+------------+--------------+| +||| ResourceRecords ||| +||+---------------------------------------------------------------------------------+|| +||| Value ||| +||+---------------------------------------------------------------------------------+|| +||| ns-1383.awsdns-44.org. awsdns-hostmaster.amazon.com. 1 7200 900 1209600 86400 ||| +||+---------------------------------------------------------------------------------+|| +``` + +All kops-created resource records are deleted too. Only the NS records added by us are still there. + +END.- \ No newline at end of file From cf66ee40c13d78e0e094e78868aa68fc45230a46 Mon Sep 17 00:00:00 2001 From: Reinaldo Martinez Date: Mon, 25 Sep 2017 10:02:38 -0400 Subject: [PATCH 3/3] Changes on examples --- docs/examples/README.md | 11 ++ docs/examples/basic-requirements.md | 59 +++++++ .../examples/coreos-kops-tests-multimaster.md | 36 ++-- docs/examples/kops-test-route53-subdomain.md | 158 ++++++++---------- .../kops-tests-private-net-bastion-host.md | 33 +--- 5 files changed, 160 insertions(+), 137 deletions(-) create mode 100644 docs/examples/README.md create mode 100644 docs/examples/basic-requirements.md diff --git a/docs/examples/README.md b/docs/examples/README.md new file mode 100644 index 0000000000000..d12ffbd071887 --- /dev/null +++ b/docs/examples/README.md @@ -0,0 +1,11 @@ +# KOPS CASE-USE EXAMPLES AND LABORATORY EXERCISES. + +This section of our documentation contains typical use-cases for Kops. 
We'll cover everything from the most basic setups to very advanced use cases, with a lot of technical detail. You will be able to reproduce all the exercises (provided you first read and understand what we did and why we did it), as long as you have access to the proper resources.
+
+All exercises need you to prepare your base environment (with kops and kubectl). See the ["basic requirements"](basic-requirements.md) document, which is a common set of procedures for all our exercises. Please note that all the exercises covered here are production-oriented.
+
+Our exercises are divided into "chapters". Each chapter covers a specific use case for Kops:
+
+- Chapter I: [USING KOPS WITH COREOS - A MULTI-MASTER/MULTI-NODE PRACTICAL EXAMPLE](coreos-kops-tests-multimaster.md).
+- Chapter II: [USING KOPS WITH PRIVATE NETWORKING AND A BASTION HOST IN A HIGHLY-AVAILABLE SETUP](kops-tests-private-net-bastion-host.md).
+- Chapter III: [USING KOPS WITH A ROUTE53 BASED SUBDOMAIN AND SCALING UP THE CLUSTER](kops-test-route53-subdomain.md).
\ No newline at end of file
diff --git a/docs/examples/basic-requirements.md b/docs/examples/basic-requirements.md
new file mode 100644
index 0000000000000..38d1bdfcf32fa
--- /dev/null
+++ b/docs/examples/basic-requirements.md
@@ -0,0 +1,59 @@
+# COMMON BASIC REQUIREMENTS FOR KOPS-RELATED LABS. PRE-FLIGHT CHECK:
+
+Before rushing in to replicate any of the exercises, please ensure your basic environment is correctly set up. See the [KOPS AWS tutorial for more information](../docs/aws.md).
+
+Ensure that the following points are covered and working in your environment:
+
+- AWS cli fully configured (the AWS account already has the proper permissions/roles needed for kops). Depending on your distro, you can install it directly from packages, or, if you want the most up-to-date version, use "pip" and install awscli by issuing a "pip install awscli" command. Your choice!
+- Local ssh key ready on ~/.ssh/id_rsa / id_rsa.pub. You can generate it using the "ssh-keygen" command if you don't have one already: `ssh-keygen -t rsa -f ~/.ssh/id_rsa -P ""`.
+- Region set to us-east-1 (az's: us-east-1a, us-east-1b, us-east-1c, us-east-1d and us-east-1e). For most of our exercises we'll deploy our clusters in "us-east-1". For real HA at the kubernetes master level, you need 3 masters. If you want to ensure that each master is deployed on a different availability zone, then a region with "at least" 3 availability zones is required here. You can still deploy a multi-master kubernetes setup on regions with just 2 az's or even 1 az, but this means that two or all of your masters will be deployed on a single az, and if this az goes offline you'll lose two or all of your masters. If possible, always pick a region with at least 3 different availability zones for real H.A. You can always check amazon regions and az's on the link: [AWS Global Infrastructure](https://aws.amazon.com/about-aws/global-infrastructure/). Remember: the masters are the Kubernetes control plane. If your masters die, you lose control of your Kubernetes cluster.
+- kubectl and kops installed. For this last part, you can do this using the following commands.
Next commands asume you are running a amd64/x86_64 linux distro: + +As root (either ssh directly to root, local root console, or by using "sudo su -" previouslly): + +```bash +cd ~ +curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl +curl -LO https://github.com/kubernetes/kops/releases/download/1.7.0/kops-linux-amd64 +chmod 755 kubectl kops-linux-amd64 +mv kops-linux-amd64 kops +mv kubectl kops /usr/local/bin +``` + +If you are not root and/or do you want to keep the kops/kubectl utilities in your own account: + +```bash +cd ~ +curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl +curl -LO https://github.com/kubernetes/kops/releases/download/1.7.0/kops-linux-amd64 +chmod 755 kubectl kops-linux-amd64 +mv kops-linux-amd64 kops +mkdir ~/bin +export PATH=$PATH:~/bin +mv kubectl kops ~/bin +``` + +Finally, some of our exercises use the "jq" utility that is available on modern linux distributions. Please ensure to install it too. Some examples of how to do it: + +**Centos 7:** + +```bash +yum -y install epel-release +yum -y install jq +``` + +**Debian7/Debian8/Debian9/Ubuntu1404lts/Ubuntu1604lts:** + +```bash +apt-get -y update +apt-get -y install jq +``` + +Also, if you are using **OS X** you can install jq using ["Homebrew"](https://brew.sh): + +```bash +brew install jq +``` + +More information about "jq" on the following site: [https://stedolan.github.io/jq/download/](https://stedolan.github.io/jq/download/) + diff --git a/docs/examples/coreos-kops-tests-multimaster.md b/docs/examples/coreos-kops-tests-multimaster.md index 23a0109d6cce6..74f189e212477 100644 --- a/docs/examples/coreos-kops-tests-multimaster.md +++ b/docs/examples/coreos-kops-tests-multimaster.md @@ -1,8 +1,8 @@ # USING KOPS WITH COREOS - A MULTI-MASTER/MULTI-NODE PRACTICAL EXAMPLE -## WHAT WE WANT TO ACOMPLISH HERE ?. +## WHAT WE WANT TO ACOMPLISH HERE? -The exercise described on this document will focus on the following goals: +The exercise described in this document will focus on the following goals: - Demonstrate how to use a production-setup with 3 masters and multiple working nodes (two). - Change our default base-distro (Debian 8) for CoreOS stable, available too as an AMI on AWS. @@ -12,28 +12,12 @@ The exercise described on this document will focus on the following goals: ## PRE-FLIGHT CHECK: -Before rushing in to replicate this exercise, please ensure your basic environment is correctly setup. See the [KOPS AWS tutorial for more information](https://github.com/kubernetes/kops/blob/master/docs/aws.md). - -Ensure that the following points are covered and working in your environment: - -- AWS cli fully configured (aws account already with proper permissions/roles needed for kops). Depending on your distro, you can setup directly from packages, or if you want the most updated version, use "pip" and install awscli by issuing a "pip install awscli" command. Your choice !. -- Local ssh key ready on ~/.ssh/id_rsa / id_rsa.pub. You can generate it using "ssh-keygen" command: `ssh-keygen -t rsa -f ~/.ssh/id_rsa -P ""` -- Region set to us-east-1 (az's: us-east-1a, us-east-1b, us-east-1c, us-east-1d and us-east-1e). For this exercise we'll deploy our cluster on US-EAST-1. For real HA at kubernetes master level, you need 3 masters. 
If you want to ensure that each master is deployed on a different availability zone, then a region with "at least" 3 availabity zones is required here. You can still deploy a multi-master kubenetes setup on regions with just 2 az's, but this mean that two masters will be deployed on a single az, and of this az goes offline then you'll lose two master !. If possible, always pick a region with at least 3 different availability zones for real H.A. You always can check amazon regions and az's on the link: [AWS Global Infrastructure](https://aws.amazon.com/about-aws/global-infrastructure/) -- kubectl and kops installed. For this last part, you can do this with using following commnads (do this as root please). Next commands asume you are running a amd64/x86_64 linux distro: - -```bash -cd ~ -curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl -wget https://github.com/kubernetes/kops/releases/download/1.7.0/kops-linux-amd64 -chmod 755 kubectl kops-linux-amd64 -mv kops-linux-amd64 kops -mv kubectl kops /usr/local/bin -``` +Please follow our [basic-requirements document](basic-requirements.md) that is common for all our exercises. Ensure the basic requirements are covered before continuing. ## AWS/KOPS ENVIRONMENT INFORMATION SETUP: -First, using some scripting and asuming you already configured your "aws" environment on your linux system, use the following commands in order to export your AWS access/secret (this will work if you are using the default profile): +First, using some scripting and assuming you already configured your "aws" environment on your linux system, use the following commands in order to export your AWS access/secret (this will work if you are using the default profile): ```bash export AWS_ACCESS_KEY_ID=`grep aws_access_key_id ~/.aws/credentials|awk '{print $3}'` @@ -50,7 +34,7 @@ export AWS_PROFILE=name_of_your_profile Create a bucket (if you don't already have one) for your cluster state: ```bash -aws s3 mb s3://my-kops-s3-bucket-for-cluster-state --region us-east-1 +aws s3api create-bucket --bucket my-kops-s3-bucket-for-cluster-state --region us-east-1 ``` Then export the name of your cluster along with the "S3" URL of your bucket: @@ -70,7 +54,7 @@ Some things to note from here: CoreOS webpage includes a "json" with the updated list of lattest images: [https://coreos.com/dist/aws/aws-stable.json](https://coreos.com/dist/aws/aws-stable.json) -If you install the "jq" utility (available on most distros) you can obtain the "ami" for a specific region (change the region "-" for "_" in the following command): +By using "jq" you can obtain the "ami" for a specific region (change the region "-" for "_" in the following command): ```bash @@ -135,7 +119,7 @@ aws ec2 describe-images --region=us-east-1 --owner=595879546273 \ --query 'sort_by(Images,&CreationDate)[-1].{id:ImageLocation}' \ --output table - + --------------------------------------------------- | DescribeImages | +----+--------------------------------------------+ @@ -143,9 +127,9 @@ aws ec2 describe-images --region=us-east-1 --owner=595879546273 \ +----+--------------------------------------------+ ``` -Then, our image for CoreOS, in "AMI" format is "ami-32705b49", or in owner/name format "595879546273/CoreOS-stable-1409.8.0-hvm". 
Note that KOPS default image is a debian-jessie based one (more specifically: "kope.io/k8s-1.6-debian-jessie-amd64-hvm-ebs-2017-07-28" at the moment we are writing this document).
+Then, our image for CoreOS, in "AMI" format is "ami-32705b49", or in owner/name format "595879546273/CoreOS-stable-1409.8.0-hvm". Note that the KOPS default image is a debian-jessie based one (more specifically: "kope.io/k8s-1.6-debian-jessie-amd64-hvm-ebs-2017-05-02" at the moment we are writing this document).
 
-**NOTE:** Always obtain the latest image before deploying KOPS. CoreOS updates it's AWS image very often. Don't rely on the versions included on this document. Always check first !
+**NOTE:** Always obtain the latest image before deploying KOPS. CoreOS updates its AWS image very often. Don't rely on the versions included in this document; always check first.
 
 
 ## KOPS CLUSTER CREATION AND MODIFICATION:
@@ -289,7 +273,7 @@ curl http://54.210.119.98
 
 curl http://34.200.247.63

It works!

-``` +``` **NOTE:** If you are replicating this exercise in a production environment, use a "real" load balancer in order to expose your replicated services. We are here just testing things so we really don't care right now about that, but, if you are doing this for a "real" production environment, either use an AWS ELB service, or an nginx ingress controller as described in our documentation: [NGINX Based ingress controller](https://github.com/kubernetes/kops/tree/master/addons/ingress-nginx). diff --git a/docs/examples/kops-test-route53-subdomain.md b/docs/examples/kops-test-route53-subdomain.md index f2497d55eb5b1..994010c06adee 100644 --- a/docs/examples/kops-test-route53-subdomain.md +++ b/docs/examples/kops-test-route53-subdomain.md @@ -1,8 +1,8 @@ # USING KOPS WITH A ROUTE53 BASED SUBDOMAIN AND SCALING UP THE CLUSTER -## WHAT WE WANT TO ACOMPLISH HERE ?. +## WHAT WE WANT TO ACOMPLISH HERE/ -The exercise described on this document will focus on the following goals: +The exercise described in this document will focus on the following goals: - Demonstrate how to use a production-setup with 3 masters and two workers in different availability zones. - Ensure our masters are deployed on 3 different AWS availability zones. @@ -13,25 +13,8 @@ The exercise described on this document will focus on the following goals: ## PRE-FLIGHT CHECK: -Before rushing in to replicate this exercise, please ensure your basic environment is correctly setup. See the [KOPS AWS tutorial for more information](https://github.com/kubernetes/kops/blob/master/docs/aws.md). +Please follow our [basic-requirements document](basic-requirements.md) that is common for all our exercises. Ensure the basic requirements are covered before continuing. -Ensure that the following points are covered and working in your environment: - -- "jq" utility installed (this is available on most linux distributions). If you are running on Centos, you'll need to add "epel" repository with `yum -y install epel-release` then install jq with `yum -y install jq`. -- "dig" utility installed (this is also available on most linux distributions). We'll need "dig" in order to tests our DNS subdomain. On "centos/rhel" distros, this utility is part of the "bind-utils" package. -- AWS cli fully configured (aws account already with proper permissions/roles needed for kops). Depending on your distro, you can setup directly from packages, or if you want the most updated version, use "pip" and install awscli by issuing a "pip install awscli" command. Your choice !. -- Local ssh key ready on ~/.ssh/id_rsa / id_rsa.pub. You can generate it using "ssh-keygen" command: `ssh-keygen -t rsa -f ~/.ssh/id_rsa -P ""` -- Region set to us-east-1 (az's: us-east-1a, us-east-1b, us-east-1c, us-east-1d and us-east-1e). For this exercise we'll deploy our cluster on US-EAST-1. For real HA at kubernetes master level, you need 3 masters. If you want to ensure that each master is deployed on a different availability zone, then a region with "at least" 3 availabity zones is required here. You can still deploy a multi-master kubenetes setup on regions with just 2 az's, but this mean that two masters will be deployed on a single az, and of this az goes offline then you'll lose two master !. If possible, always pick a region with at least 3 different availability zones for real H.A. You always can check amazon regions and az's on the link: [AWS Global Infrastructure](https://aws.amazon.com/about-aws/global-infrastructure/) -- kubectl and kops installed. 
For this last part, you can do this with using following commnads (do this as root please). Next commands asume you are running a amd64/x86_64 linux distro: - -```bash -cd ~ -curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl -wget https://github.com/kubernetes/kops/releases/download/1.7.0/kops-linux-amd64 -chmod 755 kubectl kops-linux-amd64 -mv kops-linux-amd64 kops -mv kubectl kops /usr/local/bin -``` ## DNS Setup - AWS Route53 @@ -46,7 +29,7 @@ For our setup we already have a hosted DNS domain in AWS: |+---------------------------------------+-----------------------------+--------------+-------------------------+| || CallerReference | Id | Name | ResourceRecordSetCount || |+---------------------------------------+-----------------------------+--------------+-------------------------+| -|| C0461665-01D8-463B-BF2D-62F1747A16DB | /hostedzone/ZTKK4EXR1EWR5 | kopeio.org. | 2 || +|| C0461665-01D8-463B-BF2D-62F1747A16DB | /hostedzone/ZTKK4EXR1EWR5 | example.org. | 2 || |+---------------------------------------+-----------------------------+--------------+-------------------------+| ||| Config ||| ||+-------------------------------------------------------------------+----------------------------------------+|| @@ -54,14 +37,14 @@ For our setup we already have a hosted DNS domain in AWS: ||+-------------------------------------------------------------------+----------------------------------------+|| ``` -We can also check our that our domain is reacheable from the Internet using "dig": +We can also check our that our domain is reacheable from the Internet using "dig". You can use other "dns" tools too, but we recommend to use dig (available on all modern linux distributions and other unix-like operating systems. Normally, dig is part of bind-tools and other bind-related packages): ```bash -dig +short kopeio.org soa +dig +short example.org soa ns-656.awsdns-18.net. awsdns-hostmaster.amazon.com. 1 7200 900 1209600 86400 -dig +short kopeio.org ns +dig +short example.org ns ns-1056.awsdns-04.org. ns-656.awsdns-18.net. @@ -69,7 +52,7 @@ ns-9.awsdns-01.com. ns-1642.awsdns-13.co.uk. ``` -If both the "soa" and "ns" queries anwers OK, and with the data pointing to amazon, we are set and we can continue. Please always check that your router53 hosted DNS zone is working before doing anything else !. +If both the "soa" and "ns" queries anwers OK, and with the data pointing to amazon, we are set and we can continue. Please always check that your Route53 hosted DNS zone is working before doing anything else. Now, let's create a subdomain that we'll use for our cluster: @@ -81,7 +64,7 @@ ae852c68-78b3-41af-85ee-997fc470fd1c aws route53 \ create-hosted-zone \ --output=json \ ---name kopsclustertest.kopeio.org \ +--name kopsclustertest.example.org \ --caller-reference $ID | \ jq .DelegationSet.NameServers @@ -107,13 +90,13 @@ Note that the last command (`aws route53 create-hosted-zone`) will output your n We need the zone parent ID too. 
We can obtain it with the following command: ```bash -aws route53 --output=json list-hosted-zones | jq '.HostedZones[] | select(.Name=="kopeio.org.") | .Id' | cut -d/ -f3|cut -d\" -f1 +aws route53 --output=json list-hosted-zones | jq '.HostedZones[] | select(.Name=="example.org.") | .Id' | cut -d/ -f3|cut -d\" -f1 ``` It's a good idea if we export this ID as a shell variable by using the following command: ```bash -export parentzoneid=`aws route53 --output=json list-hosted-zones | jq '.HostedZones[] | select(.Name=="kopeio.org.") | .Id' | cut -d/ -f3|cut -d\" -f1` +export parentzoneid=`aws route53 --output=json list-hosted-zones | jq '.HostedZones[] | select(.Name=="example.org.") | .Id' | cut -d/ -f3|cut -d\" -f1` ``` Let's check the var: @@ -126,14 +109,14 @@ ZTKK4EXR1EWR5 With the name servers obtained above, we need to construct a "json" file that we'll pass to amazon for our subdomain: ```bash -cat<~/kopsclustertest.kopeio.org.json +cat<~/kopsclustertest.example.org.json { "Comment": "Create a subdomain NS record in the parent domain", "Changes": [ { "Action": "CREATE", "ResourceRecordSet": { - "Name": "kopsclustertest.kopeio.org", + "Name": "kopsclustertest.example.org", "Type": "NS", "TTL": 300, "ResourceRecords": [ @@ -166,7 +149,7 @@ With the json file ready, and the parent zone ID exported in the "$parentzoneid" aws route53 change-resource-record-sets \ --output=table \ --hosted-zone-id $parentzoneid \ ---change-batch file://~/kopsclustertest.kopeio.org.json +--change-batch file://~/kopsclustertest.example.org.json ``` The output of the last command will be something like: @@ -188,7 +171,7 @@ Finally, check your records with the following command: ```bash aws route53 list-resource-record-sets \ --output=table \ ---hosted-zone-id `aws route53 --output=json list-hosted-zones | jq '.HostedZones[] | select(.Name=="kopsclustertest.kopeio.org.") | .Id' | cut -d/ -f3|cut -d\" -f1` +--hosted-zone-id `aws route53 --output=json list-hosted-zones | jq '.HostedZones[] | select(.Name=="kopsclustertest.example.org.") | .Id' | cut -d/ -f3|cut -d\" -f1` ``` The last command will output the following info: @@ -201,7 +184,7 @@ The last command will output the following info: |+----------------------------------------------------+----------------+-------------+| || Name | TTL | Type || |+----------------------------------------------------+----------------+-------------+| -|| kopsclustertest.kopeio.org. | 172800 | NS || +|| kopsclustertest.example.org. | 172800 | NS || |+----------------------------------------------------+----------------+-------------+| ||| ResourceRecords ||| ||+---------------------------------------------------------------------------------+|| @@ -216,7 +199,7 @@ The last command will output the following info: |+-------------------------------------------------------+------------+--------------+| || Name | TTL | Type || |+-------------------------------------------------------+------------+--------------+| -|| kopsclustertest.kopeio.org. | 900 | SOA || +|| kopsclustertest.example.org. | 900 | SOA || |+-------------------------------------------------------+------------+--------------+| ||| ResourceRecords ||| ||+---------------------------------------------------------------------------------+|| @@ -229,11 +212,11 @@ The last command will output the following info: Also, do a "dig" test in order to check the zone availability on the Internet: ```bash -dig +short kopsclustertest.kopeio.org soa +dig +short kopsclustertest.example.org soa ns-1383.awsdns-44.org. 
awsdns-hostmaster.amazon.com. 1 7200 900 1209600 86400 -dig +short kopsclustertest.kopeio.org ns +dig +short kopsclustertest.example.org ns ns-1383.awsdns-44.org. ns-829.awsdns-39.net. @@ -246,7 +229,7 @@ If both your SOA and NS records are there, then your subdomain is ready to be us ## AWS/KOPS ENVIRONMENT INFORMATION SETUP: -First, using some scripting and asuming you already configured your "aws" environment on your linux system, use the following commands in order to export your AWS access/secret (this will work if you are using the default profile): +First, using some scripting and assuming you already configured your "aws" environment on your linux system, use the following commands in order to export your AWS access/secret (this will work if you are using the default profile): ```bash export AWS_ACCESS_KEY_ID=`grep aws_access_key_id ~/.aws/credentials|awk '{print $3}'` @@ -269,16 +252,16 @@ aws s3api create-bucket --bucket my-kops-s3-bucket-for-cluster-state --region us Then export the name of your cluster along with the "S3" URL of your bucket. Add your cluster name to the full subdomain: ```bash -export NAME=mycluster01.kopsclustertest.kopeio.org +export NAME=mycluster01.kopsclustertest.example.org export KOPS_STATE_STORE=s3://my-kops-s3-bucket-for-cluster-state ``` Some things to note from here: -- "NAME" will be an environment variable that we'll use from now in order to refer to our cluster name. For this practical exercise, our cluster name is "mycluster01.kopsclustertest.kopeio.org". +- "NAME" will be an environment variable that we'll use from now in order to refer to our cluster name. For this practical exercise, our cluster name will be "mycluster01.kopsclustertest.example.org". -## KOPS PRIVATE CLUSTER CREATION: +## KOPS CLUSTER CREATION: Let's first create our cluster ensuring a multi-master setup with 3 masters in a multi-az setup, two worker nodes also in a multi-az setup, and using both private networking and a bastion server: @@ -295,11 +278,12 @@ ${NAME} A few things to note here: -- The environment variable ${NAME} was previously exported with our cluster name: mycluster01.kopsclustertest.kopeio.org. +- The environment variable ${NAME} was previously exported with our cluster name: mycluster01.kopsclustertest.example.org. - "--cloud=aws": As kops grows and begin to support more clouds, we need to tell the command to use the specific cloud we want for our deployment. In this case: amazon web services (aws). - For true HA at the master level, we need to pick a region with at least 3 availability zones. For this practical exercise, we are using "us-east-1" AWS region which contains 5 availability zones (az's for short): us-east-1a, us-east-1b, us-east-1c, us-east-1d and us-east-1e. - The "--master-zones=us-east-1a,us-east-1b,us-east-1c" KOPS argument will actually enforce that we want 3 masters here. "--node-count=2" only applies to the worker nodes (not the masters). - We are including the arguments "--node-size" and "master-size" to specify the "instance types" for both our masters and worker nodes. +- Because we are just doing a simple LAB, we are using "t2.micro" machines. Please DONT USE t2.micro on real production systems. Start with "t2.medium" as a minimun realistic/workable machine type. 
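+
+As a quick illustration of the sizing note above, a more production-oriented variant of the same cluster-creation call could look like the following sketch. This is only an example under assumptions: the "MASTER_SIZE"/"NODE_SIZE" variables and the "t2.medium" choice are illustrative, not part of this exercise.
+
+```bash
+# Illustrative sketch only: same multi-az layout, but with more realistic instance sizes.
+# MASTER_SIZE and NODE_SIZE are hypothetical helper variables, not required by kops.
+export MASTER_SIZE=t2.medium
+export NODE_SIZE=t2.medium
+
+kops create cluster \
+--cloud=aws \
+--master-zones=us-east-1a,us-east-1b,us-east-1c \
+--zones=us-east-1a,us-east-1b,us-east-1c \
+--node-count=2 \
+--node-size=${NODE_SIZE} \
+--master-size=${MASTER_SIZE} \
+${NAME}
+```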
With those points clarified, let's deploy our cluster: @@ -320,30 +304,30 @@ I0906 09:42:13.398070 13538 vfs_castore.go:422] Issuing new certificate: "kube I0906 09:42:13.636134 13538 vfs_castore.go:422] Issuing new certificate: "kubecfg" I0906 09:42:14.684945 13538 executor.go:91] Tasks: 38 done / 75 total; 14 can run I0906 09:42:15.997588 13538 executor.go:91] Tasks: 52 done / 75 total; 19 can run -I0906 09:42:17.855959 13538 launchconfiguration.go:327] waiting for IAM instance profile "masters.mycluster01.kopsclustertest.kopeio.org" to be ready -I0906 09:42:17.932515 13538 launchconfiguration.go:327] waiting for IAM instance profile "nodes.mycluster01.kopsclustertest.kopeio.org" to be ready -I0906 09:42:18.602180 13538 launchconfiguration.go:327] waiting for IAM instance profile "masters.mycluster01.kopsclustertest.kopeio.org" to be ready -I0906 09:42:18.682038 13538 launchconfiguration.go:327] waiting for IAM instance profile "masters.mycluster01.kopsclustertest.kopeio.org" to be ready +I0906 09:42:17.855959 13538 launchconfiguration.go:327] waiting for IAM instance profile "masters.mycluster01.kopsclustertest.example.org" to be ready +I0906 09:42:17.932515 13538 launchconfiguration.go:327] waiting for IAM instance profile "nodes.mycluster01.kopsclustertest.example.org" to be ready +I0906 09:42:18.602180 13538 launchconfiguration.go:327] waiting for IAM instance profile "masters.mycluster01.kopsclustertest.example.org" to be ready +I0906 09:42:18.682038 13538 launchconfiguration.go:327] waiting for IAM instance profile "masters.mycluster01.kopsclustertest.example.org" to be ready I0906 09:42:29.215995 13538 executor.go:91] Tasks: 71 done / 75 total; 4 can run I0906 09:42:30.073417 13538 executor.go:91] Tasks: 75 done / 75 total; 0 can run I0906 09:42:30.073471 13538 dns.go:152] Pre-creating DNS records I0906 09:42:32.403909 13538 update_cluster.go:247] Exporting kubecfg for cluster -Kops has set your kubectl context to mycluster01.kopsclustertest.kopeio.org +Kops has set your kubectl context to mycluster01.kopsclustertest.example.org Cluster is starting. It should be ready in a few minutes. Suggestions: * validate cluster: kops validate cluster * list nodes: kubectl get nodes --show-labels - * ssh to the master: ssh -i ~/.ssh/id_rsa admin@api.mycluster01.kopsclustertest.kopeio.org + * ssh to the master: ssh -i ~/.ssh/id_rsa admin@api.mycluster01.kopsclustertest.example.org The admin user is specific to Debian. If not using Debian please use the appropriate user based on your OS. * read about installing addons: https://github.com/kubernetes/kops/blob/master/docs/addons.md ``` -Note that KOPS will create a DNS record for your API: api.mycluster01.kopsclustertest.kopeio.org. You can check this record with the following "dig" command: +Note that KOPS will create a DNS record for your API: api.mycluster01.kopsclustertest.example.org. 
You can check this record with the following "dig" command: ```bash -dig +short api.mycluster01.kopsclustertest.kopeio.org A +dig +short api.mycluster01.kopsclustertest.example.org A 34.228.219.212 34.206.72.126 54.83.144.111 @@ -356,9 +340,9 @@ After about 10~15 minutes (depending on how fast or how slow are amazon services ```bash kops validate cluster -Using cluster from kubectl context: mycluster01.kopsclustertest.kopeio.org +Using cluster from kubectl context: mycluster01.kopsclustertest.example.org -Validating cluster mycluster01.kopsclustertest.kopeio.org +Validating cluster mycluster01.kopsclustertest.example.org INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS @@ -375,7 +359,7 @@ ip-172-20-43-160.ec2.internal node True ip-172-20-64-116.ec2.internal master True ip-172-20-68-15.ec2.internal node True -Your cluster mycluster01.kopsclustertest.kopeio.org is ready +Your cluster mycluster01.kopsclustertest.example.org is ready ``` @@ -395,7 +379,7 @@ ip-172-20-68-15.ec2.internal Ready 5m v1.7.2 Let's try to send a command to our masters using "ssh": ```bash -ssh -i ~/.ssh/id_rsa admin@api.mycluster01.kopsclustertest.kopeio.org "ec2metadata --public-ipv4" +ssh -i ~/.ssh/id_rsa admin@api.mycluster01.kopsclustertest.example.org "ec2metadata --public-ipv4" 34.206.72.126 ``` @@ -408,7 +392,7 @@ Let's do a fast review (using aws cli tools) of the resource records created by ```bash aws route53 list-resource-record-sets \ --output=table \ ---hosted-zone-id `aws route53 --output=json list-hosted-zones | jq '.HostedZones[] | select(.Name=="kopsclustertest.kopeio.org.") | .Id' | cut -d/ -f3|cut -d\" -f1` +--hosted-zone-id `aws route53 --output=json list-hosted-zones | jq '.HostedZones[] | select(.Name=="kopsclustertest.example.org.") | .Id' | cut -d/ -f3|cut -d\" -f1` ``` The output: @@ -421,7 +405,7 @@ The output: |+----------------------------------------------------+----------------+-------------+| || Name | TTL | Type || |+----------------------------------------------------+----------------+-------------+| -|| kopsclustertest.kopeio.org. | 172800 | NS || +|| kopsclustertest.example.org. | 172800 | NS || |+----------------------------------------------------+----------------+-------------+| ||| ResourceRecords ||| ||+---------------------------------------------------------------------------------+|| @@ -436,7 +420,7 @@ The output: |+-------------------------------------------------------+------------+--------------+| || Name | TTL | Type || |+-------------------------------------------------------+------------+--------------+| -|| kopsclustertest.kopeio.org. | 900 | SOA || +|| kopsclustertest.example.org. | 900 | SOA || |+-------------------------------------------------------+------------+--------------+| ||| ResourceRecords ||| ||+---------------------------------------------------------------------------------+|| @@ -448,7 +432,7 @@ The output: |+--------------------------------------------------------------+---------+----------+| || Name | TTL | Type || |+--------------------------------------------------------------+---------+----------+| -|| api.mycluster01.kopsclustertest.kopeio.org. | 60 | A || +|| api.mycluster01.kopsclustertest.example.org. 
| 60 | A || |+--------------------------------------------------------------+---------+----------+| ||| ResourceRecords ||| ||+---------------------------------------------------------------------------------+|| @@ -462,7 +446,7 @@ The output: |+-----------------------------------------------------------------+-------+---------+| || Name | TTL | Type || |+-----------------------------------------------------------------+-------+---------+| -|| api.internal.mycluster01.kopsclustertest.kopeio.org. | 60 | A || +|| api.internal.mycluster01.kopsclustertest.example.org. | 60 | A || |+-----------------------------------------------------------------+-------+---------+| ||| ResourceRecords ||| ||+---------------------------------------------------------------------------------+|| @@ -476,7 +460,7 @@ The output: |+------------------------------------------------------------------+-------+--------+| || Name | TTL | Type || |+------------------------------------------------------------------+-------+--------+| -|| etcd-a.internal.mycluster01.kopsclustertest.kopeio.org. | 60 | A || +|| etcd-a.internal.mycluster01.kopsclustertest.example.org. | 60 | A || |+------------------------------------------------------------------+-------+--------+| ||| ResourceRecords ||| ||+---------------------------------------------------------------------------------+|| @@ -488,7 +472,7 @@ The output: |+------------------------------------------------------------------+-------+--------+| || Name | TTL | Type || |+------------------------------------------------------------------+-------+--------+| -|| etcd-b.internal.mycluster01.kopsclustertest.kopeio.org. | 60 | A || +|| etcd-b.internal.mycluster01.kopsclustertest.example.org. | 60 | A || |+------------------------------------------------------------------+-------+--------+| ||| ResourceRecords ||| ||+---------------------------------------------------------------------------------+|| @@ -500,7 +484,7 @@ The output: |+------------------------------------------------------------------+-------+--------+| || Name | TTL | Type || |+------------------------------------------------------------------+-------+--------+| -|| etcd-c.internal.mycluster01.kopsclustertest.kopeio.org. | 60 | A || +|| etcd-c.internal.mycluster01.kopsclustertest.example.org. | 60 | A || |+------------------------------------------------------------------+-------+--------+| ||| ResourceRecords ||| ||+---------------------------------------------------------------------------------+|| @@ -512,7 +496,7 @@ The output: |+-------------------------------------------------------------------+------+--------+| || Name | TTL | Type || |+-------------------------------------------------------------------+------+--------+| -|| etcd-events-a.internal.mycluster01.kopsclustertest.kopeio.org. | 60 | A || +|| etcd-events-a.internal.mycluster01.kopsclustertest.example.org. | 60 | A || |+-------------------------------------------------------------------+------+--------+| ||| ResourceRecords ||| ||+---------------------------------------------------------------------------------+|| @@ -524,7 +508,7 @@ The output: |+-------------------------------------------------------------------+------+--------+| || Name | TTL | Type || |+-------------------------------------------------------------------+------+--------+| -|| etcd-events-b.internal.mycluster01.kopsclustertest.kopeio.org. | 60 | A || +|| etcd-events-b.internal.mycluster01.kopsclustertest.example.org. 
| 60 | A || |+-------------------------------------------------------------------+------+--------+| ||| ResourceRecords ||| ||+---------------------------------------------------------------------------------+|| @@ -536,7 +520,7 @@ The output: |+-------------------------------------------------------------------+------+--------+| || Name | TTL | Type || |+-------------------------------------------------------------------+------+--------+| -|| etcd-events-c.internal.mycluster01.kopsclustertest.kopeio.org. | 60 | A || +|| etcd-events-c.internal.mycluster01.kopsclustertest.example.org. | 60 | A || |+-------------------------------------------------------------------+------+--------+| ||| ResourceRecords ||| ||+---------------------------------------------------------------------------------+|| @@ -549,7 +533,7 @@ The output: Maybe with json output and some "jq" parsing: ```bash -aws route53 list-resource-record-sets --output=json --hosted-zone-id `aws route53 --output=json list-hosted-zones | jq '.HostedZones[] | select(.Name=="kopsclustertest.kopeio.org.") | .Id' | cut -d/ -f3|cut -d\" -f1`|jq .ResourceRecordSets[] +aws route53 list-resource-record-sets --output=json --hosted-zone-id `aws route53 --output=json list-hosted-zones | jq '.HostedZones[] | select(.Name=="kopsclustertest.example.org.") | .Id' | cut -d/ -f3|cut -d\" -f1`|jq .ResourceRecordSets[] ``` Output: @@ -557,7 +541,7 @@ Output: ``` { "TTL": 172800, - "Name": "kopsclustertest.kopeio.org.", + "Name": "kopsclustertest.example.org.", "Type": "NS", "ResourceRecords": [ { @@ -576,7 +560,7 @@ Output: } { "TTL": 900, - "Name": "kopsclustertest.kopeio.org.", + "Name": "kopsclustertest.example.org.", "Type": "SOA", "ResourceRecords": [ { @@ -586,7 +570,7 @@ Output: } { "TTL": 60, - "Name": "api.mycluster01.kopsclustertest.kopeio.org.", + "Name": "api.mycluster01.kopsclustertest.example.org.", "Type": "A", "ResourceRecords": [ { @@ -602,7 +586,7 @@ Output: } { "TTL": 60, - "Name": "api.internal.mycluster01.kopsclustertest.kopeio.org.", + "Name": "api.internal.mycluster01.kopsclustertest.example.org.", "Type": "A", "ResourceRecords": [ { @@ -618,7 +602,7 @@ Output: } { "TTL": 60, - "Name": "etcd-a.internal.mycluster01.kopsclustertest.kopeio.org.", + "Name": "etcd-a.internal.mycluster01.kopsclustertest.example.org.", "Type": "A", "ResourceRecords": [ { @@ -628,7 +612,7 @@ Output: } { "TTL": 60, - "Name": "etcd-b.internal.mycluster01.kopsclustertest.kopeio.org.", + "Name": "etcd-b.internal.mycluster01.kopsclustertest.example.org.", "Type": "A", "ResourceRecords": [ { @@ -638,7 +622,7 @@ Output: } { "TTL": 60, - "Name": "etcd-c.internal.mycluster01.kopsclustertest.kopeio.org.", + "Name": "etcd-c.internal.mycluster01.kopsclustertest.example.org.", "Type": "A", "ResourceRecords": [ { @@ -648,7 +632,7 @@ Output: } { "TTL": 60, - "Name": "etcd-events-a.internal.mycluster01.kopsclustertest.kopeio.org.", + "Name": "etcd-events-a.internal.mycluster01.kopsclustertest.example.org.", "Type": "A", "ResourceRecords": [ { @@ -658,7 +642,7 @@ Output: } { "TTL": 60, - "Name": "etcd-events-b.internal.mycluster01.kopsclustertest.kopeio.org.", + "Name": "etcd-events-b.internal.mycluster01.kopsclustertest.example.org.", "Type": "A", "ResourceRecords": [ { @@ -668,7 +652,7 @@ Output: } { "TTL": 60, - "Name": "etcd-events-c.internal.mycluster01.kopsclustertest.kopeio.org.", + "Name": "etcd-events-c.internal.mycluster01.kopsclustertest.example.org.", "Type": "A", "ResourceRecords": [ { @@ -684,7 +668,7 @@ Let's see the following scenario: Our load is increasing 
and we need to add two ```bash kops get instancegroups -Using cluster from kubectl context: mycluster01.kopsclustertest.kopeio.org +Using cluster from kubectl context: mycluster01.kopsclustertest.example.org NAME ROLE MACHINETYPE MIN MAX SUBNETS master-us-east-1a Master t2.micro 1 1 us-east-1a @@ -707,7 +691,7 @@ kind: InstanceGroup metadata: creationTimestamp: 2017-09-06T13:40:39Z labels: - kops.k8s.io/cluster: mycluster01.kopsclustertest.kopeio.org + kops.k8s.io/cluster: mycluster01.kopsclustertest.example.org name: nodes spec: image: kope.io/k8s-1.7-debian-jessie-amd64-hvm-ebs-2017-07-28 @@ -729,7 +713,7 @@ kind: InstanceGroup metadata: creationTimestamp: 2017-09-06T13:40:39Z labels: - kops.k8s.io/cluster: mycluster01.kopsclustertest.kopeio.org + kops.k8s.io/cluster: mycluster01.kopsclustertest.example.org name: nodes spec: image: kope.io/k8s-1.7-debian-jessie-amd64-hvm-ebs-2017-07-28 @@ -758,7 +742,7 @@ I0906 10:16:33.592807 13607 executor.go:91] Tasks: 52 done / 75 total; 19 can I0906 10:16:35.009432 13607 executor.go:91] Tasks: 71 done / 75 total; 4 can run I0906 10:16:35.320078 13607 executor.go:91] Tasks: 75 done / 75 total; 0 can run Will modify resources: - AutoscalingGroup/nodes.mycluster01.kopsclustertest.kopeio.org + AutoscalingGroup/nodes.mycluster01.kopsclustertest.example.org MinSize 2 -> 3 MaxSize 2 -> 3 @@ -776,9 +760,9 @@ Go for another coffee (or maybe a tee) and after some minutes check your cluster ```bash kops validate cluster -Using cluster from kubectl context: mycluster01.kopsclustertest.kopeio.org +Using cluster from kubectl context: mycluster01.kopsclustertest.example.org -Validating cluster mycluster01.kopsclustertest.kopeio.org +Validating cluster mycluster01.kopsclustertest.example.org INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS @@ -796,7 +780,7 @@ ip-172-20-43-160.ec2.internal node True ip-172-20-64-116.ec2.internal master True ip-172-20-68-15.ec2.internal node True -Your cluster mycluster01.kopsclustertest.kopeio.org is ready +Your cluster mycluster01.kopsclustertest.example.org is ready ``` @@ -817,9 +801,9 @@ kops delete cluster ${NAME} --yes After a short while, you'll see the following message: ``` -Deleted kubectl config for mycluster01.kopsclustertest.kopeio.org +Deleted kubectl config for mycluster01.kopsclustertest.example.org -Deleted cluster: "mycluster01.kopsclustertest.kopeio.org" +Deleted cluster: "mycluster01.kopsclustertest.example.org" ``` Now, let's check our DNS records: @@ -827,7 +811,7 @@ Now, let's check our DNS records: ```bash aws route53 list-resource-record-sets \ --output=table \ ---hosted-zone-id `aws route53 --output=json list-hosted-zones | jq '.HostedZones[] | select(.Name=="kopsclustertest.kopeio.org.") | .Id' | cut -d/ -f3|cut -d\" -f1` +--hosted-zone-id `aws route53 --output=json list-hosted-zones | jq '.HostedZones[] | select(.Name=="kopsclustertest.example.org.") | .Id' | cut -d/ -f3|cut -d\" -f1` ``` The output: @@ -840,7 +824,7 @@ The output: |+----------------------------------------------------+----------------+-------------+| || Name | TTL | Type || |+----------------------------------------------------+----------------+-------------+| -|| kopsclustertest.kopeio.org. | 172800 | NS || +|| kopsclustertest.example.org. 
| 172800 | NS || |+----------------------------------------------------+----------------+-------------+| ||| ResourceRecords ||| ||+---------------------------------------------------------------------------------+|| @@ -855,7 +839,7 @@ The output: |+-------------------------------------------------------+------------+--------------+| || Name | TTL | Type || |+-------------------------------------------------------+------------+--------------+| -|| kopsclustertest.kopeio.org. | 900 | SOA || +|| kopsclustertest.example.org. | 900 | SOA || |+-------------------------------------------------------+------------+--------------+| ||| ResourceRecords ||| ||+---------------------------------------------------------------------------------+|| diff --git a/docs/examples/kops-tests-private-net-bastion-host.md b/docs/examples/kops-tests-private-net-bastion-host.md index 9d53d4c71ab25..aa5ba2ed7f62d 100644 --- a/docs/examples/kops-tests-private-net-bastion-host.md +++ b/docs/examples/kops-tests-private-net-bastion-host.md @@ -1,8 +1,8 @@ # USING KOPS WITH PRIVATE NETWORKING AND A BASTION HOST IN A HIGLY-AVAILABLE SETUP -## WHAT WE WANT TO ACOMPLISH HERE ?. +## WHAT WE WANT TO ACOMPLISH HERE? -The exercise described on this document will focus on the following goals: +The exercise described in this document will focus on the following goals: - Demonstrate how to use a production-setup with 3 masters and two workers in different availability zones. - Demonstrate how to use a private networking setup with a bastion host. @@ -13,28 +13,12 @@ The exercise described on this document will focus on the following goals: ## PRE-FLIGHT CHECK: -Before rushing in to replicate this exercise, please ensure your basic environment is correctly setup. See the [KOPS AWS tutorial for more information](https://github.com/kubernetes/kops/blob/master/docs/aws.md). - -Ensure that the following points are covered and working in your environment: - -- AWS cli fully configured (aws account already with proper permissions/roles needed for kops). Depending on your distro, you can setup directly from packages, or if you want the most updated version, use "pip" and install awscli by issuing a "pip install awscli" command. Your choice !. -- Local ssh key ready on ~/.ssh/id_rsa / id_rsa.pub. You can generate it using "ssh-keygen" command: `ssh-keygen -t rsa -f ~/.ssh/id_rsa -P ""` -- Region set to us-east-1 (az's: us-east-1a, us-east-1b, us-east-1c, us-east-1d and us-east-1e). For this exercise we'll deploy our cluster on US-EAST-1. For real HA at kubernetes master level, you need 3 masters. If you want to ensure that each master is deployed on a different availability zone, then a region with "at least" 3 availabity zones is required here. You can still deploy a multi-master kubenetes setup on regions with just 2 az's, but this mean that two masters will be deployed on a single az, and of this az goes offline then you'll lose two master !. If possible, always pick a region with at least 3 different availability zones for real H.A. You always can check amazon regions and az's on the link: [AWS Global Infrastructure](https://aws.amazon.com/about-aws/global-infrastructure/) -- kubectl and kops installed. For this last part, you can do this with using following commnads (do this as root please). 
Next commands asume you are running a amd64/x86_64 linux distro: - -```bash -cd ~ -curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl -wget https://github.com/kubernetes/kops/releases/download/1.7.0/kops-linux-amd64 -chmod 755 kubectl kops-linux-amd64 -mv kops-linux-amd64 kops -mv kubectl kops /usr/local/bin -``` +Please follow our [basic-requirements document](basic-requirements.md) that is common for all our exercises. Ensure the basic requirements are covered before continuing. ## AWS/KOPS ENVIRONMENT SETUP: -First, using some scripting and asuming you already configured your "aws" environment on your linux system, use the following commands in order to export your AWS access/secret (this will work if you are using the default profile): +First, using some scripting and assuming you already configured your "aws" environment on your linux system, use the following commands in order to export your AWS access/secret (this will work if you are using the default profile): ```bash export AWS_ACCESS_KEY_ID=`grep aws_access_key_id ~/.aws/credentials|awk '{print $3}'` @@ -88,10 +72,11 @@ A few things to note here: - The environment variable ${NAME} was previously exported with our cluster name: privatekopscluster.k8s.local. - "--cloud=aws": As kops grows and begin to support more clouds, we need to tell the command to use the specific cloud we want for our deployment. In this case: amazon web services (aws). -- For true HA at the master level, we need to pick a region with at least 3 availability zones. For this practical exercise, we are using "us-east-1" AWS region which contains 5 availability zones (az's for short): us-east-1a, us-east-1b, us-east-1c, us-east-1d and us-east-1e. -- The "--master-zones=us-east-1a,us-east-1b,us-east-1c" KOPS argument will actually enforce that we want 3 masters here. "--node-count=2" only applies to the worker nodes (not the masters). +- For true HA (high availability) at the master level, we need to pick a region with 3 availability zones. For this practical exercise, we are using "us-east-1" AWS region which contains 5 availability zones (az's for short): us-east-1a, us-east-1b, us-east-1c, us-east-1d and us-east-1e. We used "us-east-1a,us-east-1b,us-east-1c" for our masters. +- The "--master-zones=us-east-1a,us-east-1b,us-east-1c" KOPS argument will actually enforce we want 3 masters here. "--node-count=2" only applies to the worker nodes (not the masters). Again, real "HA" on Kubernetes control plane requires 3 masters. - The "--topology private" argument will ensure that all our instances will have private IP's and no public IP's from amazon. - We are including the arguments "--node-size" and "master-size" to specify the "instance types" for both our masters and worker nodes. +- Because we are just doing a simple LAB, we are using "t2.micro" machines. Please DONT USE t2.micro on real production systems. Start with "t2.medium" as a minimun realistic/workable machine type. - And finally, the "--networking kopeio-vxlan" argument. With the private networking model, we need to tell kops which networking subsystem to use. More information about kops supported networking models can be obtained from the [KOPS Kubernetes Networking Documentation](https://github.com/kubernetes/kops/blob/master/docs/networking.md). For this exercise we'll use "kopeio-vxlan" (or "kopeio" for short). 
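+
+Before moving on, if you want to double-check what kops has recorded, you can inspect the cluster spec held in the state store. This is only an optional sanity check, and the "grep" filter below is an illustrative assumption rather than a required step of this exercise:
+
+```bash
+# Optional check (illustrative): print the stored cluster spec and look at the
+# topology and networking sections to confirm the private/kopeio-vxlan setup.
+kops get cluster ${NAME} -o yaml | grep -E -A 3 'topology:|networking:'
+```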
**NOTE**: You can add the "--bastion" argument here if you are not using "gossip dns" and create the bastion from start, but if you are using "gossip-dns" this will make this cluster to fail (this is a bug we are correcting now). For the moment don't use "--bastion" when using gossip DNS. We'll show you how to get around this by first creating the private cluster, then creation the bastion instance group once the cluster is running.
@@ -324,7 +309,7 @@ Your cluster privatekopscluster.k8s.local is ready
 
 ## MAKING THE BASTION LAYER "HIGLY AVAILABLE".
 
-If for any reason "godzilla" decides to destroy the amazon AZ that contains our bastion, we'll basically be unable to enter to our instances. Let's add some H.A. to our bastion layer and force amazon to deploy additional bastion instances on other availability zones.
+If for any reason any "legendary monster from the comics" decides to destroy the amazon AZ that contains our bastion, we'll basically be unable to access our instances. Let's add some H.A. to our bastion layer and force amazon to deploy additional bastion instances in other availability zones.
 
 First, let's edit our "bastions" instance group:
 
@@ -376,4 +361,4 @@ Finally, let's destroy our cluster:
 
 ```bash
 kops delete cluster ${NAME} --yes
-```
+```
\ No newline at end of file