Changes on CoreOS related documentation.
- Changed "experimental" to "production-ready" status for CoreOS images.
- Added a complete tutorial/exercise using a multi-master setup with CoreOS.
tigerlinux committed Aug 15, 2017
1 parent 74d0e21 commit 4164c82
Showing 2 changed files with 315 additions and 3 deletions.
310 changes: 310 additions & 0 deletions docs/examples/coreos-kops-tests-multimaster.md
@@ -0,0 +1,310 @@
# USING KOPS WITH COREOS - A MULTI-MASTER/MULTI-NODE PRACTICAL EXAMPLE

## WHAT WE WANT TO ACCOMPLISH HERE:

The exercise described in this document focuses on the following goals:

- Demonstrate how to use a production setup with 3 masters and multiple worker nodes (two in this example).
- Replace our default base distro (Debian 8) with CoreOS stable, which is also available as an AMI on AWS.
- Ensure our masters are deployed on 3 different AWS availability zones.
- Ensure our nodes are deployed on 2 different AWS availability zones.


## PRE-FLIGHT CHECK:

Before rushing in to replicate this exercise, please ensure your basic environment is correctly set up. See the kops tutorial for more information: [KOPS Tutorial](https://github.com/kubernetes/kops/blob/master/docs/aws.md). Ensure that the following is already working in your environment:

- The AWS CLI fully configured (an AWS account with the permissions/roles kops needs). Depending on your distro, you can install it from packages, or, if you want the most up-to-date version, install it with pip: `pip install awscli`. Your choice!
- A local SSH key pair at ~/.ssh/id_rsa / id_rsa.pub. You can generate one with the "ssh-keygen" command: `ssh-keygen -t rsa -f ~/.ssh/id_rsa -P ""`
- Region set to us-east-1 (AZs: us-east-1a, us-east-1b, us-east-1c, us-east-1d and us-east-1e). For this exercise we deployed our cluster in us-east-1. If you want to use another region, feel free to change it; just ensure it has at least 3 availability zones.
- kubectl and kops installed. You can install both with the following commands (run them as root). These commands assume an amd64/x86_64 Linux distro:

```bash
cd ~
# Download the latest stable kubectl release
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
# Download kops 1.7.0
wget https://github.com/kubernetes/kops/releases/download/1.7.0/kops-linux-amd64
chmod 755 kubectl kops-linux-amd64
mv kops-linux-amd64 kops
mv kubectl kops /usr/local/bin
```
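
To verify that both binaries are correctly installed and on your PATH (the reported versions will vary):

```bash
kops version
kubectl version --client
```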


## AWS/KOPS ENVIRONMENT INFORMATION SETUP:

First, assuming you have already configured your "aws" environment on your Linux system, use the following commands (a little scripting included) to export your AWS access/secret keys:

```bash
export AWS_ACCESS_KEY_ID=`grep aws_access_key_id ~/.aws/credentials|awk '{print $3}'`
export AWS_SECRET_ACCESS_KEY=`grep aws_secret_access_key ~/.aws/credentials|awk '{print $3}'`
echo "$AWS_ACCESS_KEY_ID $AWS_SECRET_ACCESS_KEY"
```

Create a bucket (if you don't already have one) for your cluster state:

```bash
aws s3api create-bucket --bucket kops-tigerlinux-cluster-state --region us-east-1
```
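
It is also a good idea (and recommended by the kops documentation) to enable versioning on the state-store bucket, so earlier cluster state can be recovered if something goes wrong:

```bash
aws s3api put-bucket-versioning \
  --bucket kops-tigerlinux-cluster-state \
  --versioning-configuration Status=Enabled
```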

Then export the name of your cluster along with the "S3" URL of your bucket. Note that the `.k8s.local` suffix makes kops build a gossip-based cluster, so no public DNS zone is required:

```bash
export NAME=coreosbasedkopscluster.k8s.local
export KOPS_STATE_STORE=s3://kops-tigerlinux-cluster-state
```


## COREOS IMAGE INFORMATION:

The CoreOS webpage includes a JSON file with an up-to-date list of the latest images: https://coreos.com/dist/aws/aws-stable.json

If you install the "jq" utility (available on most distros), you can obtain the AMI for a specific region:


```bash
curl -s https://coreos.com/dist/aws/aws-stable.json|sed -r 's/-/_/g'|jq '.us_east_1.hvm'|sed -r 's/_/-/g'
"ami-32705b49"
```
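
The sed round-trips above are only needed because jq's plain `.key` syntax does not accept hyphens; quoting the key achieves the same result, and `-r` strips the surrounding quotes:

```bash
curl -s https://coreos.com/dist/aws/aws-stable.json | jq -r '."us-east-1".hvm'
ami-32705b49
```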

So our AMI is "ami-32705b49". More information about the image can be obtained with the following command:

```bash
aws ec2 describe-images --image-id ami-32705b49 --output table

--------------------------------------------------------------------------
| DescribeImages |
+------------------------------------------------------------------------+
|| Images ||
|+---------------------+------------------------------------------------+|
|| Architecture | x86_64 ||
|| CreationDate | 2017-08-10T02:07:16.000Z ||
|| Description | CoreOS Container Linux stable 1409.8.0 (HVM) ||
|| EnaSupport | True ||
|| Hypervisor | xen ||
|| ImageId | ami-32705b49 ||
|| ImageLocation | 595879546273/CoreOS-stable-1409.8.0-hvm ||
|| ImageType | machine ||
|| Name | CoreOS-stable-1409.8.0-hvm ||
|| OwnerId | 595879546273 ||
|| Public | True ||
|| RootDeviceName | /dev/xvda ||
|| RootDeviceType | ebs ||
|| SriovNetSupport | simple ||
|| State | available ||
|| VirtualizationType | hvm ||
|+---------------------+------------------------------------------------+|
||| BlockDeviceMappings |||
||+-----------------------------------+--------------------------------+||
||| DeviceName | /dev/xvda |||
||| VirtualName | |||
||+-----------------------------------+--------------------------------+||
|||| Ebs ||||
|||+------------------------------+-----------------------------------+|||
|||| DeleteOnTermination | True ||||
|||| Encrypted | False ||||
|||| SnapshotId | snap-00d2949d7084cd408 ||||
|||| VolumeSize | 8 ||||
|||| VolumeType | standard ||||
|||+------------------------------+-----------------------------------+|||
||| BlockDeviceMappings |||
||+----------------------------------+---------------------------------+||
||| DeviceName | /dev/xvdb |||
||| VirtualName | ephemeral0 |||
||+----------------------------------+---------------------------------+||
```

You can also obtain the owner/name with the following command (this is what we actually need for kops):

```bash
aws ec2 describe-images --region=us-east-1 --owner=595879546273 \
--filters "Name=virtualization-type,Values=hvm" "Name=name,Values=CoreOS-stable*" \
--query 'sort_by(Images,&CreationDate)[-1].{id:ImageLocation}' \
--output table


---------------------------------------------------
| DescribeImages |
+----+--------------------------------------------+
| id| 595879546273/CoreOS-stable-1409.8.0-hvm |
+----+--------------------------------------------+
```

This "image id" (595879546273/CoreOS-stable-1409.8.0-hvm) is the one we'll need later in order to change KOPS default (kope.io/k8s-1.6-debian-jessie-amd64-hvm-ebs-2017-05-02) for the one running CoreOS.


## KOPS CLUSTER CREATION AND MODIFICATION:

Let's first create our cluster:

```bash
kops create cluster \
--master-zones=us-east-1a,us-east-1b,us-east-1c \
--zones=us-east-1a,us-east-1b,us-east-1c \
--node-count=2 \
${NAME}
```

NOTE: Remember that ${NAME} was exported with our cluster name: coreosbasedkopscluster.k8s.local
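
Before editing anything, you can list the instance groups kops just generated (the same ones we are about to modify):

```bash
kops get ig --name=${NAME}
```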

Now, let's edit our instance groups so we can replace the base image (Debian by default) with the one we want to use. First the nodes:

```bash
kops edit ig --name=${NAME} nodes
```

And then each of the three masters:

```bash
kops edit ig --name=${NAME} master-us-east-1a
kops edit ig --name=${NAME} master-us-east-1b
kops edit ig --name=${NAME} master-us-east-1c
```

What you need to change in all your instance groups (masters and nodes) is this line:

```yaml
image: kope.io/k8s-1.6-debian-jessie-amd64-hvm-ebs-2017-05-02
```

And set it to this:

```yaml
image: 595879546273/CoreOS-stable-1409.8.0-hvm
```
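
Alternatively, `kops create cluster` accepts an `--image` flag, so you could have set the CoreOS image for every instance group at creation time and skipped the edits entirely. A sketch using the same parameters as above:

```bash
kops create cluster \
--master-zones=us-east-1a,us-east-1b,us-east-1c \
--zones=us-east-1a,us-east-1b,us-east-1c \
--node-count=2 \
--image=595879546273/CoreOS-stable-1409.8.0-hvm \
${NAME}
```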

Once your changes are done, deploy the cluster with the following command:

```bash
kops update cluster ${NAME} --yes
```
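
If you want to review what kops is about to create before committing, run the same command without `--yes` first; it prints the planned changes without applying them:

```bash
kops update cluster ${NAME}
```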

Go for a coffee or take a 10-15 minute walk. After that, the cluster should be up and running, which we can check with the following commands:

```bash
kops validate cluster

Using cluster from kubectl context: coreosbasedkopscluster.k8s.local

Validating cluster coreosbasedkopscluster.k8s.local

INSTANCE GROUPS
NAME ROLE MACHINETYPE MIN MAX SUBNETS
master-us-east-1a Master m3.medium 1 1 us-east-1a
master-us-east-1b Master c4.large 1 1 us-east-1b
master-us-east-1c Master m3.medium 1 1 us-east-1c
nodes Node t2.medium 2 2 us-east-1a,us-east-1b,us-east-1c

NODE STATUS
NAME ROLE READY
ip-172-20-125-216.ec2.internal node True
ip-172-20-125-90.ec2.internal master True
ip-172-20-48-12.ec2.internal master True
ip-172-20-79-203.ec2.internal master True
ip-172-20-92-185.ec2.internal node True

Your cluster coreosbasedkopscluster.k8s.local is ready

```

```bash
kubectl get nodes --show-labels

NAME STATUS AGE VERSION LABELS
ip-172-20-125-216.ec2.internal Ready 6m v1.7.0 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=t2.medium,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=us-east-1,failure-domain.beta.kubernetes.io/zone=us-east-1c,kubernetes.io/hostname=ip-172-20-125-216.ec2.internal,kubernetes.io/role=node,node-role.kubernetes.io/node=
ip-172-20-125-90.ec2.internal Ready 7m v1.7.0 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=m3.medium,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=us-east-1,failure-domain.beta.kubernetes.io/zone=us-east-1c,kubernetes.io/hostname=ip-172-20-125-90.ec2.internal,kubernetes.io/role=master,node-role.kubernetes.io/master=
ip-172-20-48-12.ec2.internal Ready 3m v1.7.0 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=m3.medium,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=us-east-1,failure-domain.beta.kubernetes.io/zone=us-east-1a,kubernetes.io/hostname=ip-172-20-48-12.ec2.internal,kubernetes.io/role=master,node-role.kubernetes.io/master=
ip-172-20-79-203.ec2.internal Ready 7m v1.7.0 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=c4.large,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=us-east-1,failure-domain.beta.kubernetes.io/zone=us-east-1b,kubernetes.io/hostname=ip-172-20-79-203.ec2.internal,kubernetes.io/role=master,node-role.kubernetes.io/master=
ip-172-20-92-185.ec2.internal Ready 6m v1.7.0 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=t2.medium,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=us-east-1,failure-domain.beta.kubernetes.io/zone=us-east-1b,kubernetes.io/hostname=ip-172-20-92-185.ec2.internal,kubernetes.io/role=node,node-role.kubernetes.io/node=
```
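
To confirm the machines really are running Container Linux, you can read each node's reported osImage field; a jsonpath sketch (the exact version string will vary):

```bash
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.nodeInfo.osImage}{"\n"}{end}'
```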

```bash
kubectl -n kube-system get pods

NAME READY STATUS RESTARTS AGE
dns-controller-3497129722-rt4nv 1/1 Running 0 7m
etcd-server-events-ip-172-20-125-90.ec2.internal 1/1 Running 0 7m
etcd-server-events-ip-172-20-48-12.ec2.internal 1/1 Running 0 3m
etcd-server-events-ip-172-20-79-203.ec2.internal 1/1 Running 0 7m
etcd-server-ip-172-20-125-90.ec2.internal 1/1 Running 0 7m
etcd-server-ip-172-20-48-12.ec2.internal 1/1 Running 0 3m
etcd-server-ip-172-20-79-203.ec2.internal 1/1 Running 0 7m
kube-apiserver-ip-172-20-125-90.ec2.internal 1/1 Running 0 7m
kube-apiserver-ip-172-20-48-12.ec2.internal 1/1 Running 0 3m
kube-apiserver-ip-172-20-79-203.ec2.internal 1/1 Running 0 7m
kube-controller-manager-ip-172-20-125-90.ec2.internal 1/1 Running 0 7m
kube-controller-manager-ip-172-20-48-12.ec2.internal 1/1 Running 0 3m
kube-controller-manager-ip-172-20-79-203.ec2.internal 1/1 Running 0 7m
kube-dns-479524115-28zqc 3/3 Running 0 8m
kube-dns-479524115-7xv6b 3/3 Running 0 6m
kube-dns-autoscaler-1818915203-zf0gd 1/1 Running 0 8m
kube-proxy-ip-172-20-125-216.ec2.internal 1/1 Running 0 6m
kube-proxy-ip-172-20-125-90.ec2.internal 1/1 Running 0 7m
kube-proxy-ip-172-20-48-12.ec2.internal 1/1 Running 0 3m
kube-proxy-ip-172-20-79-203.ec2.internal 1/1 Running 0 7m
kube-proxy-ip-172-20-92-185.ec2.internal 1/1 Running 0 7m
kube-scheduler-ip-172-20-125-90.ec2.internal 1/1 Running 0 7m
kube-scheduler-ip-172-20-48-12.ec2.internal 1/1 Running 0 3m
kube-scheduler-ip-172-20-79-203.ec2.internal 1/1 Running 0 8m

```


## LAUNCH A SIMPLE REPLICATED APP ON THE CLUSTER:

Before doing the tasks ahead, we created a simple "webservers" security group inside our kops cluster's VPC (using the AWS web console) allowing inbound port 80, and applied it to our two nodes (not the masters); a CLI sketch of that step is included right after the command below. Then, with the following command, we create a simple replicated app on our CoreOS-based, kops-launched cluster:

```bash
kubectl run apache-simple-replicated \
--image=httpd:2.4-alpine \
--replicas=2 \
--port=80 \
--hostport=80
```
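
If you prefer the AWS CLI over the web console for that security-group step, a rough sketch follows; the VPC ID, instance ID, and existing group ID below are placeholders you must replace with your own values:

```bash
# Create the "webservers" security group inside the cluster VPC
# (vpc-xxxxxxxx is a placeholder for your kops cluster's VPC ID)
SG_ID=$(aws ec2 create-security-group \
  --group-name webservers \
  --description "Inbound HTTP for k8s worker nodes" \
  --vpc-id vpc-xxxxxxxx \
  --query 'GroupId' --output text)

# Allow inbound port 80 from anywhere
aws ec2 authorize-security-group-ingress \
  --group-id ${SG_ID} \
  --protocol tcp --port 80 --cidr 0.0.0.0/0

# Attach it to each worker node; --groups REPLACES the instance's group
# list, so keep the group kops already assigned (sg-yyyyyyyy placeholder)
aws ec2 modify-instance-attribute \
  --instance-id i-0123456789abcdef0 \
  --groups sg-yyyyyyyy ${SG_ID}
```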

Now, check the app:

```bash
kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
apache-simple-replicated-1977341696-3hxxx 1/1 Running 0 31s 100.96.2.3 ip-172-20-92-185.ec2.internal
apache-simple-replicated-1977341696-zv4fn 1/1 Running 0 31s 100.96.3.4 ip-172-20-125-216.ec2.internal
```

Using our nodes' public IPs (again, the nodes, not the masters):

```bash
curl http://54.210.119.98
<html><body><h1>It works!</h1></body></html>

curl http://34.200.247.63
<html><body><h1>It works!</h1></body></html>

```
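
By the way, you don't have to dig those public IPs out of the AWS console: on a public-topology kops cluster each node object exposes them as ExternalIP addresses. A jsonpath sketch:

```bash
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.addresses[?(@.type=="ExternalIP")].address}{"\n"}{end}'
```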

Now, let's delete our recently-created deployment:

```bash
kubectl delete deployment apache-simple-replicated
```

NOTE: In the AWS console, we also deleted our "webservers" security group (after detaching it from our node instances).

Check again:

```bash
kubectl get pods -o wide
No resources found.
```

Finally, let's destroy our cluster:

```bash
kops delete cluster ${NAME} --yes
```
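
As with `kops update cluster`, omitting `--yes` gives you a dry run that lists every AWS resource that would be removed:

```bash
kops delete cluster ${NAME}
```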

After a brief time, your cluster will be fully deleted on AWS and you'll see the following output:

```bash
Deleted cluster: "coreosbasedkopscluster.k8s.local"
```

NOTE: Before destroying the cluster, make absolutely sure that any extra security group not created directly by kops (like our "webservers" group) has been removed by you; leftover dependencies can prevent kops from deleting the VPC.


8 changes: 5 additions & 3 deletions docs/images.md
@@ -94,7 +94,7 @@ Be aware of the following limitations:

## CoreOS

-CoreOS support is highly experimental. Please report any issues.
+CoreOS has been tested enough to be considered ready for production with kops, but if you encounter any problems please report them to us.

The following steps are known:

@@ -105,6 +105,8 @@ aws ec2 describe-images --region=us-east-1 --owner=595879546273 \
--query 'sort_by(Images,&CreationDate)[-1].{id:ImageLocation}'
```

-* You can specify the name using the `coreos.com` owner alias, for example `coreos.com/CoreOS-stable-1353.8.0-hvm`
+* You can specify the name using the `coreos.com` owner alias, for example `coreos.com/CoreOS-stable-1409.8.0-hvm`, or leave it as `595879546273/CoreOS-stable-1409.8.0-hvm` if you prefer.

-> Note: SSH username will be `core`
+As part of our documentation, you will find a practical exercise using CoreOS with kops: see the file "coreos-kops-tests-multimaster.md" in the "examples" directory. This exercise covers not only using kops with CoreOS, but also gives a practical view of kops with a multi-master Kubernetes setup.
+
+> Note: The SSH username for CoreOS-based instances will be `core`
