diff --git a/content/ko/docs/setup/_index.md b/content/ko/docs/setup/_index.md
new file mode 100644
index 000000000..57723f9cf
--- /dev/null
+++ b/content/ko/docs/setup/_index.md
@@ -0,0 +1,10 @@
+---
+no_issue: true
+title: Setup
+main_menu: true
+weight: 30
+---
+
+This section provides instructions for installing Kubernetes and setting
+up a Kubernetes cluster. For an overview of the different options, see
+[Picking the Right Solution](/docs/setup/pick-right-solution/).
diff --git a/content/ko/docs/setup/building-from-source.md b/content/ko/docs/setup/building-from-source.md
new file mode 100644
index 000000000..dd31482fb
--- /dev/null
+++ b/content/ko/docs/setup/building-from-source.md
@@ -0,0 +1,21 @@
+---
+title: Building from Source
+---
+
+You can either build a release from source or download a pre-built release. If you do not plan on developing Kubernetes itself, we suggest using a pre-built version of the current release, which can be found in the [Release Notes](/docs/imported/release/notes/).
+
+The Kubernetes source code can be downloaded from the [kubernetes/kubernetes](https://github.com/kubernetes/kubernetes) repo.
+
+## Building from source
+
+If you are simply building a release from source, there is no need to set up a full golang environment, as all building happens in a Docker container.
+
+Building a release is simple.
+
+```shell
+git clone https://github.com/kubernetes/kubernetes.git
+cd kubernetes
+make release
+```
+
+For more details on the release process see the kubernetes/kubernetes [`build`](http://releases.k8s.io/{{< param "githubbranch" >}}/build/) directory.
diff --git a/content/ko/docs/setup/cluster-large.md b/content/ko/docs/setup/cluster-large.md
new file mode 100644
index 000000000..54667db0d
--- /dev/null
+++ b/content/ko/docs/setup/cluster-large.md
@@ -0,0 +1,127 @@
+---
+title: Building Large Clusters
+---
+
+## Support
+
+At {{< param "version" >}}, Kubernetes supports clusters with up to 5000 nodes. More specifically, we support configurations that meet *all* of the following criteria:
+
+* No more than 5000 nodes
+* No more than 150000 total pods
+* No more than 300000 total containers
+* No more than 100 pods per node
+
+
+
+{{< toc >}}
+
+## Setup
+
+A cluster is a set of nodes (physical or virtual machines) running Kubernetes agents, managed by a "master" (the cluster-level control plane).
+
+Normally the number of nodes in a cluster is controlled by the value `NUM_NODES` in the platform-specific `config-default.sh` file (for example, see [GCE's `config-default.sh`](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/gce/config-default.sh)).
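+
+For example, on a GCE kube-up deployment you would typically override the value through the environment rather than editing the script; the node count below is purely illustrative:
+
+```shell
+# Ask kube-up for a 200-node cluster (GCE-style kube-up deployment assumed)
+export NUM_NODES=200
+cluster/kube-up.sh
+```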
+
+Simply changing that value to something very large, however, may cause the setup script to fail for many cloud providers. A GCE deployment, for example, will run into quota issues and fail to bring the cluster up.
+
+When setting up a large Kubernetes cluster, the following issues must be considered.
+
+### Quota Issues
+
+To avoid running into cloud provider quota issues, when creating a cluster with many nodes, consider:
+
+* Increase the quota for things like CPU, IPs, etc.
+  * In [GCE](https://cloud.google.com/compute/docs/resource-quotas), for example, you'll want to increase the quota for:
+ * CPUs
+ * VM instances
+ * Total persistent disk reserved
+ * In-use IP addresses
+ * Firewall Rules
+ * Forwarding rules
+ * Routes
+ * Target pools
+* Gate the setup script so that it brings up new node VMs in smaller batches with waits in between, because some cloud providers rate limit the creation of VMs.
+
+### Etcd storage
+
+To improve performance of large clusters, we store events in a separate dedicated etcd instance.
+
+When creating a cluster, the existing salt scripts:
+
+* start and configure an additional etcd instance
+* configure the api-server to use it for storing events
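+
+Under the hood this boils down to API server flags along the lines of the sketch below; the addresses and ports are assumptions, not necessarily what the salt scripts configure:
+
+```shell
+# Keep most objects in the main etcd, but store events in a dedicated instance
+kube-apiserver \
+  --etcd-servers=http://127.0.0.1:2379 \
+  --etcd-servers-overrides=/events#http://127.0.0.1:4002
+```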
+
+### Size of master and master components
+
+On GCE/Google Kubernetes Engine and AWS, `kube-up` automatically configures the proper VM size for your master depending on the number of nodes
+in your cluster. On other providers, you will need to configure it manually. For reference, the sizes we use on GCE are
+
+* 1-5 nodes: n1-standard-1
+* 6-10 nodes: n1-standard-2
+* 11-100 nodes: n1-standard-4
+* 101-250 nodes: n1-standard-8
+* 251-500 nodes: n1-standard-16
+* more than 500 nodes: n1-standard-32
+
+And the sizes we use on AWS are
+
+* 1-5 nodes: m3.medium
+* 6-10 nodes: m3.large
+* 11-100 nodes: m3.xlarge
+* 101-250 nodes: m3.2xlarge
+* 251-500 nodes: c4.4xlarge
+* more than 500 nodes: c4.8xlarge
+
+{{< note >}}
+On Google Kubernetes Engine, the size of the master node adjusts automatically based on the size of your cluster. For more information, see [this blog post](https://cloudplatform.googleblog.com/2017/11/Cutting-Cluster-Management-Fees-on-Google-Kubernetes-Engine.html).
+
+On AWS, master node sizes are currently set at cluster startup time and do not change, even if you later scale your cluster up or down by manually removing or adding nodes or using a cluster autoscaler.
+{{< /note >}}
+
+### Addon Resources
+
+To prevent memory leaks or other resource issues in [cluster addons](https://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons) from consuming all the resources available on a node, Kubernetes sets resource limits on addon containers to limit the CPU and Memory resources they can consume (See PR [#10653](http://pr.k8s.io/10653/files) and [#10778](http://pr.k8s.io/10778/files)).
+
+For example:
+
+```yaml
+ containers:
+ - name: fluentd-cloud-logging
+ image: k8s.gcr.io/fluentd-gcp:1.16
+ resources:
+ limits:
+ cpu: 100m
+ memory: 200Mi
+```
+
+Except for Heapster, these limits are static and are based on data we collected from addons running on 4-node clusters (see [#10335](http://issue.k8s.io/10335#issuecomment-117861225)). The addons consume a lot more resources when running on large deployment clusters (see [#5880](http://issue.k8s.io/5880#issuecomment-113984085)). So, if a large cluster is deployed without adjusting these values, the addons may continuously get killed because they keep hitting the limits.
+
+To avoid running into cluster addon resource issues, when creating a cluster with many nodes, consider the following:
+
+* Scale memory and CPU limits for each of the following addons, if used, as you scale up the size of cluster (there is one replica of each handling the entire cluster so memory and CPU usage tends to grow proportionally with size/load on cluster):
+ * [InfluxDB and Grafana](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/cluster-monitoring/influxdb/influxdb-grafana-controller.yaml)
+ * [kubedns, dnsmasq, and sidecar](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/dns/kube-dns/kube-dns.yaml.in)
+ * [Kibana](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/fluentd-elasticsearch/kibana-deployment.yaml)
+* Scale number of replicas for the following addons, if used, along with the size of cluster (there are multiple replicas of each so increasing replicas should help handle increased load, but, since load per replica also increases slightly, also consider increasing CPU/memory limits):
+ * [elasticsearch](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/fluentd-elasticsearch/es-statefulset.yaml)
+* Increase memory and CPU limits slightly for each of the following addons, if used, along with the size of cluster (there is one replica per node but CPU/memory usage increases slightly along with cluster load/size as well):
+ * [FluentD with ElasticSearch Plugin](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/fluentd-elasticsearch/fluentd-es-ds.yaml)
+ * [FluentD with GCP Plugin](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/fluentd-gcp/fluentd-gcp-ds.yaml)
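+
+For example, to experiment with larger limits on a running cluster you could patch an addon's resources directly; the deployment and container names below are illustrative, and addons managed by the addon manager may be reconciled back to the manifests on the master, so durable changes belong in the addon manifests themselves:
+
+```shell
+# Raise the limits of a DNS addon container in place (names are examples only)
+kubectl -n kube-system set resources deployment kube-dns \
+  --containers=kubedns --limits=cpu=200m,memory=300Mi
+```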
+
+Heapster's resource limits are set dynamically based on the initial size of your cluster (see [#16185](http://issue.k8s.io/16185)
+and [#22940](http://issue.k8s.io/22940)). If you find that Heapster is running
+out of resources, you should adjust the formulas that compute Heapster's memory request (see those PRs for details).
+
+For directions on how to detect if addon containers are hitting resource limits, see the [Troubleshooting section of Compute Resources](/docs/concepts/configuration/manage-compute-resources-container/#troubleshooting).
+
+In the [future](http://issue.k8s.io/13048), we anticipate setting all cluster addon resource limits based on cluster size, and dynamically adjusting them as you grow or shrink your cluster.
+We welcome PRs that implement those features.
+
+### Allowing minor node failure at startup
+
+For various reasons (see [#18969](https://github.com/kubernetes/kubernetes/issues/18969) for more details), running
+`kube-up.sh` with a very large `NUM_NODES` may fail due to a very small number of nodes not coming up properly.
+Currently you have two choices: restart the cluster (`kube-down.sh` and then `kube-up.sh` again), or, before
+running `kube-up.sh`, set the environment variable `ALLOWED_NOTREADY_NODES` to whatever value you feel comfortable
+with. This will allow `kube-up.sh` to succeed with fewer than `NUM_NODES` coming up. Depending on the
+reason for the failure, those additional nodes may join later or the cluster may remain at a size of
+`NUM_NODES - ALLOWED_NOTREADY_NODES`.
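+
+A minimal sketch of the second option, with purely illustrative numbers:
+
+```shell
+# Tolerate up to 20 nodes failing to register during a 1000-node bring-up
+export NUM_NODES=1000
+export ALLOWED_NOTREADY_NODES=20
+cluster/kube-up.sh
+```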
diff --git a/content/ko/docs/setup/custom-cloud/_index.md b/content/ko/docs/setup/custom-cloud/_index.md
new file mode 100644
index 000000000..b31ce1aca
--- /dev/null
+++ b/content/ko/docs/setup/custom-cloud/_index.md
@@ -0,0 +1,3 @@
+---
+title: Custom Cloud Solutions
+---
diff --git a/content/ko/docs/setup/custom-cloud/coreos.md b/content/ko/docs/setup/custom-cloud/coreos.md
new file mode 100644
index 000000000..4e911f963
--- /dev/null
+++ b/content/ko/docs/setup/custom-cloud/coreos.md
@@ -0,0 +1,91 @@
+---
+title: CoreOS on AWS or GCE
+---
+
+{{< toc >}}
+
+There are multiple guides on running Kubernetes with [CoreOS](https://coreos.com/kubernetes/docs/latest/):
+
+### Official CoreOS Guides
+
+These guides are maintained by CoreOS and deploy Kubernetes the "CoreOS Way" with full TLS, the DNS add-on, and more. These guides pass Kubernetes conformance testing and we encourage you to [test this yourself](https://coreos.com/kubernetes/docs/latest/conformance-tests.html).
+
+[**AWS Multi-Node**](https://coreos.com/kubernetes/docs/latest/kubernetes-on-aws.html)
+
+Guide and CLI tool for setting up a multi-node cluster on AWS. CloudFormation is used to set up a master and multiple workers in auto-scaling groups.
+
+
+
+[**Bare Metal Multi-Node**](https://coreos.com/kubernetes/docs/latest/kubernetes-on-baremetal.html#automated-provisioning)
+
+Guide and HTTP/API service for PXE booting and provisioning a multi-node cluster on bare metal. [Ignition](https://coreos.com/ignition/docs/latest/) is used to provision a master and multiple workers on the first boot from disk.
+
+[**Vagrant Multi-Node**](https://coreos.com/kubernetes/docs/latest/kubernetes-on-vagrant.html)
+
+Guide to setting up a multi-node cluster on Vagrant. The deployer can independently configure the number of etcd nodes, master nodes, and worker nodes to bring up a fully HA control plane.
+
+
+
+[**Vagrant Single-Node**](https://coreos.com/kubernetes/docs/latest/kubernetes-on-vagrant-single.html)
+
+The quickest way to set up a Kubernetes development environment locally. As easy as `git clone`, `vagrant up` and configuring `kubectl`.
+
+
+
+[**Full Step by Step Guide**](https://coreos.com/kubernetes/docs/latest/getting-started.html)
+
+A generic guide to setting up an HA cluster on any cloud or bare metal, with full TLS. Repeat the master or worker steps to configure more machines of that role.
+
+### Community Guides
+
+These guides are maintained by community members, cover specific platforms and use cases, and experiment with different ways of configuring Kubernetes on CoreOS.
+
+[**Easy Multi-node Cluster on Google Compute Engine**](https://github.com/rimusz/coreos-multi-node-k8s-gce/blob/master/README.md)
+
+Scripted installation of a single master, multi-worker cluster on GCE. Kubernetes components are managed by [fleet](https://github.com/coreos/fleet).
+
+
+
+[**Multi-node cluster using cloud-config and Weave on Vagrant**](https://github.com/errordeveloper/weave-demos/blob/master/poseidon/README.md)
+
+Configure a Vagrant-based cluster of 3 machines with networking provided by Weave.
+
+
+
+[**Multi-node cluster using cloud-config and Vagrant**](https://github.com/pires/kubernetes-vagrant-coreos-cluster/blob/master/README.md)
+
+Configure a single master, multi-worker cluster locally, running on your choice of hypervisor: VirtualBox, Parallels, or VMware.
+
+
+
+[**Single-node cluster using a small macOS App**](https://github.com/rimusz/kube-solo-osx/blob/master/README.md)
+
+Guide to running a solo cluster (master + worker) controlled by a macOS menu bar application. Uses xhyve + CoreOS under the hood.
+
+
+
+[**Multi-node cluster with Vagrant and fleet units using a small macOS App**](https://github.com/rimusz/coreos-osx-gui-kubernetes-cluster/blob/master/README.md)
+
+Guide to running a single master, multi-worker cluster controlled by a macOS menu bar application. Uses Vagrant under the hood.
+
+
+
+[**Multi-node cluster using cloud-config, CoreOS and VMware ESXi**](https://github.com/xavierbaude/VMware-coreos-multi-nodes-Kubernetes)
+
+Configure a single master, single worker cluster on VMware ESXi.
+
+
+
+[**Single/Multi-node cluster using cloud-config, CoreOS and Foreman**](https://github.com/johscheuer/theforeman-coreos-kubernetes)
+
+Configure a standalone Kubernetes or a Kubernetes cluster with [Foreman](https://theforeman.org).
+
+## Support Level
+
+
+IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level
+-------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ----------------------------
+GCE | CoreOS | CoreOS | flannel | [docs](/docs/getting-started-guides/coreos) | | Community ([@pires](https://github.com/pires))
+Vagrant | CoreOS | CoreOS | flannel | [docs](/docs/getting-started-guides/coreos) | | Community ([@pires](https://github.com/pires), [@AntonioMeireles](https://github.com/AntonioMeireles))
+
+For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart.
diff --git a/content/ko/docs/setup/custom-cloud/kops.md b/content/ko/docs/setup/custom-cloud/kops.md
new file mode 100644
index 000000000..324ad80dc
--- /dev/null
+++ b/content/ko/docs/setup/custom-cloud/kops.md
@@ -0,0 +1,165 @@
+---
+title: Installing Kubernetes on AWS with kops
+---
+
+## Overview
+
+This quickstart shows you how to easily install a Kubernetes cluster on AWS.
+It uses a tool called [`kops`](https://github.com/kubernetes/kops).
+
+kops is an opinionated provisioning system:
+
+* Fully automated installation
+* Uses DNS to identify clusters
+* Self-healing: everything runs in Auto-Scaling Groups
+* Limited OS support (Debian preferred, Ubuntu 16.04 supported, early support for CentOS & RHEL)
+* High-Availability support
+* Can directly provision, or generate terraform manifests
+
+If your opinions differ from these you may prefer to build your own cluster using [kubeadm](/docs/admin/kubeadm/) as
+a building block. kops builds on the kubeadm work.
+
+## Creating a cluster
+
+### (1/5) Install kops
+
+#### Requirements
+
+You must have [kubectl](/docs/tasks/tools/install-kubectl/) installed in order for kops to work.
+
+#### Installation
+
+Download kops from the [releases page](https://github.com/kubernetes/kops/releases) (it is also easy to build from source):
+
+On macOS:
+
+```
+curl -OL https://github.com/kubernetes/kops/releases/download/1.8.0/kops-darwin-amd64
+chmod +x kops-darwin-amd64
+mv kops-darwin-amd64 /usr/local/bin/kops
+# you can also install using Homebrew
+brew update && brew install kops
+```
+
+On Linux:
+
+```
+wget https://github.com/kubernetes/kops/releases/download/1.8.0/kops-linux-amd64
+chmod +x kops-linux-amd64
+mv kops-linux-amd64 /usr/local/bin/kops
+```
+
+### (2/5) Create a route53 domain for your cluster
+
+kops uses DNS for discovery, both inside the cluster and so that clients can reach the Kubernetes
+API server.
+
+kops has a strong opinion on the cluster name: it should be a valid DNS name. By doing so you will
+no longer get your clusters confused, you can share clusters with your colleagues unambiguously,
+and you can reach them without relying on remembering an IP address.
+
+You can, and probably should, use subdomains to divide your clusters. As our example we will use
+`useast1.dev.example.com`. The API server endpoint will then be `api.useast1.dev.example.com`.
+
+A Route53 hosted zone can serve subdomains. Your hosted zone could be `useast1.dev.example.com`,
+but also `dev.example.com` or even `example.com`. kops works with any of these, so typically
+you choose for organization reasons (e.g. you are allowed to create records under `dev.example.com`,
+but not under `example.com`).
+
+Let's assume you're using `dev.example.com` as your hosted zone. You create that hosted zone using
+the [normal process](http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/CreatingNewSubdomain.html), or
+with a command such as `aws route53 create-hosted-zone --name dev.example.com --caller-reference 1`.
+
+You must then set up your NS records in the parent domain, so that records in the domain will resolve. Here,
+you would create NS records in `example.com` for `dev`. If it is a root domain name you would configure the NS
+records at your domain registrar (e.g. `example.com` would need to be configured where you bought `example.com`).
+
+This step is easy to mess up (it is the #1 cause of problems!). If you have the `dig` tool, you can
+double-check that the delegation is configured correctly by running:
+
+`dig NS dev.example.com`
+
+You should see the 4 NS records that Route53 assigned your hosted zone.
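+
+The output might look something like the following; the name server hostnames here are purely illustrative, and yours will be whatever Route53 assigned:
+
+```
+dig NS dev.example.com +short
+# ns-1234.awsdns-12.org.
+# ns-567.awsdns-34.com.
+# ns-89.awsdns-56.net.
+# ns-1011.awsdns-78.co.uk.
+```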
+
+### (3/5) Create an S3 bucket to store your clusters' state
+
+kops lets you manage your clusters even after installation. To do this, it must keep track of the clusters
+that you have created, along with their configuration, the keys they are using etc. This information is stored
+in an S3 bucket. S3 permissions are used to control access to the bucket.
+
+Multiple clusters can use the same S3 bucket, and you can share an S3 bucket between your colleagues that
+administer the same clusters - this is much easier than passing around kubecfg files. But anyone with access
+to the S3 bucket will have administrative access to all your clusters, so you don't want to share it beyond
+the operations team.
+
+So typically you have one S3 bucket for each ops team (and often the name will correspond
+to the name of the hosted zone above!).
+
+In our example, we chose `dev.example.com` as our hosted zone, so let's pick `clusters.dev.example.com` as
+the S3 bucket name.
+
+* Export `AWS_PROFILE` (if you need to select a profile for the AWS CLI to work)
+
+* Create the S3 bucket using `aws s3 mb s3://clusters.dev.example.com`
+
+* You can `export KOPS_STATE_STORE=s3://clusters.dev.example.com` and then kops will use this location by default.
+ We suggest putting this in your bash profile or similar.
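+
+Taken together, and assuming the names chosen above, those steps might look like this (the profile name is a placeholder):
+
+```
+export AWS_PROFILE=my-kops-profile        # only if you need a non-default AWS CLI profile
+aws s3 mb s3://clusters.dev.example.com   # create the state-store bucket
+export KOPS_STATE_STORE=s3://clusters.dev.example.com
+```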
+
+
+### (4/5) Build your cluster configuration
+
+Run `kops create cluster` to create your cluster configuration:
+
+`kops create cluster --zones=us-east-1c useast1.dev.example.com`
+
+kops will create the configuration for your cluster. Note that it _only_ creates the configuration; it does
+not actually create the cloud resources - you'll do that in the next step with `kops update cluster`. This
+gives you an opportunity to review the configuration or change it.
+
+It prints commands you can use to explore further:
+
+* List your clusters with: `kops get cluster`
+* Edit this cluster with: `kops edit cluster useast1.dev.example.com`
+* Edit your node instance group: `kops edit ig --name=useast1.dev.example.com nodes`
+* Edit your master instance group: `kops edit ig --name=useast1.dev.example.com master-us-east-1c`
+
+If this is your first time using kops, do spend a few minutes to try those out! An instance group is a
+set of instances, which will be registered as kubernetes nodes. On AWS this is implemented via auto-scaling-groups.
+You can have several instance groups, for example, if you want nodes that are a mix of spot and on-demand instances, or
+GPU and non-GPU instances.
+
+
+### (5/5) Create the cluster in AWS
+
+Run `kops update cluster` to create your cluster in AWS:
+
+`kops update cluster useast1.dev.example.com --yes`
+
+That takes a few seconds to run, but then your cluster will likely take a few minutes to actually be ready.
+`kops update cluster` will be the tool you'll use whenever you change the configuration of your cluster; it
+applies the changes you have made to the configuration to your cluster - reconfiguring AWS or kubernetes as needed.
+
+For example, after you run `kops edit ig nodes`, run `kops update cluster --yes` to apply your configuration, and
+sometimes you will also have to run `kops rolling-update cluster` to roll out the configuration immediately.
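+
+Put together, a typical edit-and-apply cycle might look like this, using the example cluster name from above:
+
+```
+kops edit ig --name=useast1.dev.example.com nodes             # e.g. change the node count
+kops update cluster useast1.dev.example.com --yes             # apply the change
+kops rolling-update cluster useast1.dev.example.com --yes     # roll nodes if required
+```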
+
+Without `--yes`, `kops update cluster` will show you a preview of what it is going to do. This is handy
+for production clusters!
+
+### Explore other add-ons
+
+See the [list of add-ons](/docs/concepts/cluster-administration/addons/) to explore other add-ons, including tools for logging, monitoring, network policy, visualization & control of your Kubernetes cluster.
+
+## What's next
+
+* Learn more about Kubernetes [concepts](/docs/concepts/) and [`kubectl`](/docs/user-guide/kubectl-overview/).
+* Learn about `kops` [advanced usage](https://github.com/kubernetes/kops)
+
+## Cleanup
+
+* To delete your cluster: `kops delete cluster useast1.dev.example.com --yes`
+
+## Feedback
+
+* Slack Channel: [#sig-aws](https://kubernetes.slack.com/messages/sig-aws/) has a lot of kops users
+* [GitHub Issues](https://github.com/kubernetes/kops/issues)
+
diff --git a/content/ko/docs/setup/custom-cloud/kubespray.md b/content/ko/docs/setup/custom-cloud/kubespray.md
new file mode 100644
index 000000000..255870822
--- /dev/null
+++ b/content/ko/docs/setup/custom-cloud/kubespray.md
@@ -0,0 +1,104 @@
+---
+title: Installing Kubernetes On-premises/Cloud Providers with Kubespray
+---
+
+## Overview
+
+This quickstart helps you install a Kubernetes cluster hosted on GCE, Azure, OpenStack, AWS, or bare metal with [Kubespray](https://github.com/kubernetes-incubator/kubespray).
+
+Kubespray is a composition of [Ansible](http://docs.ansible.com/) playbooks, [inventory](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/ansible.md), provisioning tools, and domain knowledge for generic OS/Kubernetes clusters configuration management tasks. Kubespray provides:
+
+* a highly available cluster
+* composable attributes
+* support for most popular Linux distributions (CoreOS, Debian Jessie, Ubuntu 16.04, CentOS/RHEL 7, Fedora/CentOS Atomic)
+* continuous integration tests
+
+To choose a tool which best fits your use case, read [this comparison](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/comparisons.md) to [kubeadm](/docs/admin/kubeadm/) and [kops](../kops).
+
+## Creating a cluster
+
+### (1/5) Meet the underlay [requirements](https://github.com/kubernetes-incubator/kubespray#requirements)
+
+Provision servers with the following requirements:
+
+* `Ansible v2.4` (or newer)
+* `Jinja 2.9` (or newer)
+* `python-netaddr` installed on the machine that runs Ansible commands
+* Target servers must have access to the Internet in order to pull docker images
+* Target servers are configured to allow IPv4 forwarding
+* Target servers have SSH connectivity (tcp/22), either directly to your nodes or through a bastion host/SSH jump box
+* Target servers have a privileged user
+* Your SSH key must be copied to all the servers that are part of your inventory
+* Firewall rules configured properly to allow Ansible and Kubernetes components to communicate
+* If using a cloud provider, you must have the appropriate credentials available and exported as environment variables
+
+Kubespray provides the following utilities to help provision your environment:
+
+* [Terraform](https://www.terraform.io/) scripts for the following cloud providers:
+ * [AWS](https://github.com/kubernetes-incubator/kubespray/tree/master/contrib/terraform/aws)
+ * [OpenStack](https://github.com/kubernetes-incubator/kubespray/tree/master/contrib/terraform/openstack)
+
+### (2/5) Compose an inventory file
+
+After you provision your servers, create an [inventory file for Ansible](http://docs.ansible.com/ansible/intro_inventory.html). You can do this manually or via a dynamic inventory script. For more information, see "[Building your own inventory](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/getting-started.md#building-your-own-inventory)".
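+
+As a rough illustration (the file path, hostnames, and IPs below are placeholders, and the group names follow Kubespray's sample inventory), a tiny static inventory might look like this:
+
+```shell
+cat > inventory/mycluster/hosts.ini <<'EOF'
+[kube-master]
+node1 ansible_host=10.0.0.11 ip=10.0.0.11
+
+[etcd]
+node1 ansible_host=10.0.0.11 ip=10.0.0.11
+
+[kube-node]
+node2 ansible_host=10.0.0.12 ip=10.0.0.12
+node3 ansible_host=10.0.0.13 ip=10.0.0.13
+
+[k8s-cluster:children]
+kube-master
+kube-node
+EOF
+```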
+
+### (3/5) Plan your cluster deployment
+
+Kubespray provides the ability to customize many aspects of the deployment:
+
+* CNI (networking) plugins
+* DNS configuration
+* Choice of control plane: native/binary, or containerized (with docker or rkt)
+* Component versions
+* Calico route reflectors
+* Component runtime options
+* Certificate generation methods
+
+Kubespray customizations can be made to a [variable file](http://docs.ansible.com/ansible/playbooks_variables.html). If you are just getting started with Kubespray, consider using the Kubespray defaults to deploy your cluster and explore Kubernetes.
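+
+For example, you might pin a couple of common options in a group variables file; the path and values here are assumptions based on Kubespray's sample inventory layout, so check the repository for the variables your version supports:
+
+```shell
+cat > inventory/mycluster/group_vars/k8s-cluster.yml <<'EOF'
+kube_network_plugin: calico    # CNI plugin to deploy
+kube_version: v1.11.0          # Kubernetes version to install
+EOF
+```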
+
+### (4/5) Deploy a Cluster
+
+Next, deploy your cluster:
+
+Deploy the cluster using [ansible-playbook](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/getting-started.md#starting-custom-deployment):
+
+```console
+ansible-playbook -i your/inventory/hosts.ini cluster.yml -b -v \
+ --private-key=~/.ssh/private_key
+```
+
+
+Large deployments (100+ nodes) may require [specific adjustments](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/large-deployments.md) for best results.
+
+### (5/5) Verify the deployment
+
+Kubespray provides a way to verify inter-pod connectivity and DNS resolution with [Netchecker](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/netcheck.md). Netchecker ensures that the netchecker-agents pods can resolve DNS requests and ping each other within the default namespace. Those pods mimic the behavior of the rest of the workloads and serve as cluster health indicators.
+
+## Cluster operations
+
+Kubespray provides additional playbooks to manage your cluster: _scale_ and _upgrade_.
+
+### Scale your cluster
+
+You can add worker nodes to your cluster by running the scale playbook. For more information, see "[Adding nodes](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/getting-started.md#adding-nodes)".
+You can remove worker nodes from your cluster by running the remove-node playbook. For more information, see "[Remove nodes](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/getting-started.md#remove-nodes)".
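+
+Assuming the same inventory as above, the invocations mirror the cluster deployment; the playbook names below are the ones shipped in the Kubespray repository:
+
+```shell
+# Add the new nodes listed in your inventory
+ansible-playbook -i your/inventory/hosts.ini scale.yml -b -v \
+  --private-key=~/.ssh/private_key
+
+# Remove a node (see the Kubespray docs for the variables this playbook expects)
+ansible-playbook -i your/inventory/hosts.ini remove-node.yml -b -v \
+  --private-key=~/.ssh/private_key
+```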
+
+### Upgrade your cluster
+
+You can upgrade your cluster by running the upgrade-cluster playbook. For more information, see "[Upgrades](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/upgrades.md)".
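+
+For example (the target version is illustrative; check the Kubespray release notes for the versions your checkout supports):
+
+```shell
+ansible-playbook -i your/inventory/hosts.ini upgrade-cluster.yml -b -v \
+  --private-key=~/.ssh/private_key -e kube_version=v1.11.0
+```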
+
+## What's next
+
+Check out planned work on Kubespray's [roadmap](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/roadmap.md).
+
+## Cleanup
+
+You can reset your nodes and wipe out all components installed with Kubespray via the [reset playbook](https://github.com/kubernetes-incubator/kubespray/blob/master/reset.yml).
+
+{{< caution >}}
+**Caution:** When running the reset playbook, be sure not to accidentally target your production cluster!
+{{< /caution >}}
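+
+If you are sure you are targeting the right inventory, the invocation mirrors the other playbooks:
+
+```shell
+ansible-playbook -i your/inventory/hosts.ini reset.yml -b -v \
+  --private-key=~/.ssh/private_key
+```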
+
+## Feedback
+
+* Slack Channel: [#kubespray](https://kubernetes.slack.com/messages/kubespray/)
+* [GitHub Issues](https://github.com/kubernetes-incubator/kubespray/issues)
diff --git a/content/ko/docs/setup/custom-cloud/master.yaml b/content/ko/docs/setup/custom-cloud/master.yaml
new file mode 100644
index 000000000..8fc45040d
--- /dev/null
+++ b/content/ko/docs/setup/custom-cloud/master.yaml
@@ -0,0 +1,142 @@
+#cloud-config
+
+---
+write-files:
+- path: /etc/conf.d/nfs
+ permissions: '0644'
+ content: |
+ OPTS_RPC_MOUNTD=""
+- path: /opt/bin/wupiao
+ permissions: '0755'
+ content: |
+ #!/bin/bash
+ # [w]ait [u]ntil [p]ort [i]s [a]ctually [o]pen
+ [ -n "$1" ] && \
+ until curl -o /dev/null -sIf http://${1}; do \
+ sleep 1 && echo .;
+ done;
+ exit $?
+
+hostname: master
+coreos:
+ etcd2:
+ name: master
+ listen-client-urls: http://0.0.0.0:2379,http://0.0.0.0:4001
+ advertise-client-urls: http://$private_ipv4:2379,http://$private_ipv4:4001
+ initial-cluster-token: k8s_etcd
+ listen-peer-urls: http://$private_ipv4:2380,http://$private_ipv4:7001
+ initial-advertise-peer-urls: http://$private_ipv4:2380
+ initial-cluster: master=http://$private_ipv4:2380
+ initial-cluster-state: new
+ fleet:
+ metadata: "role=master"
+ units:
+ - name: etcd2.service
+ command: start
+ - name: generate-serviceaccount-key.service
+ command: start
+ content: |
+ [Unit]
+ Description=Generate service-account key file
+
+ [Service]
+ ExecStartPre=-/usr/bin/mkdir -p /opt/bin
+ ExecStart=/bin/openssl genrsa -out /opt/bin/kube-serviceaccount.key 2048 2>/dev/null
+ RemainAfterExit=yes
+ Type=oneshot
+ - name: setup-network-environment.service
+ command: start
+ content: |
+ [Unit]
+ Description=Setup Network Environment
+ Documentation=https://github.com/kelseyhightower/setup-network-environment
+ Requires=network-online.target
+ After=network-online.target
+
+ [Service]
+ ExecStartPre=-/usr/bin/mkdir -p /opt/bin
+ ExecStartPre=/usr/bin/curl -L -o /opt/bin/setup-network-environment -z /opt/bin/setup-network-environment https://github.com/kelseyhightower/setup-network-environment/releases/download/v1.0.0/setup-network-environment
+ ExecStartPre=/usr/bin/chmod +x /opt/bin/setup-network-environment
+ ExecStart=/opt/bin/setup-network-environment
+ RemainAfterExit=yes
+ Type=oneshot
+ - name: fleet.service
+ command: start
+ - name: flanneld.service
+ command: start
+ drop-ins:
+ - name: 50-network-config.conf
+ content: |
+ [Unit]
+ Requires=etcd2.service
+ [Service]
+ ExecStartPre=/usr/bin/etcdctl set /coreos.com/network/config '{"Network":"10.244.0.0/16", "Backend": {"Type": "vxlan"}}'
+ - name: docker.service
+ command: start
+ - name: kube-apiserver.service
+ command: start
+ content: |
+ [Unit]
+ Description=Kubernetes API Server
+ Documentation=https://github.com/kubernetes/kubernetes
+ Requires=setup-network-environment.service etcd2.service generate-serviceaccount-key.service
+ After=setup-network-environment.service etcd2.service generate-serviceaccount-key.service
+
+ [Service]
+ EnvironmentFile=/etc/network-environment
+ ExecStartPre=-/usr/bin/mkdir -p /opt/bin
+ ExecStartPre=/usr/bin/curl -L -o /opt/bin/kube-apiserver -z /opt/bin/kube-apiserver https://storage.googleapis.com/kubernetes-release/release/v1.1.2/bin/linux/amd64/kube-apiserver
+ ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-apiserver
+ ExecStartPre=/opt/bin/wupiao 127.0.0.1:2379/v2/machines
+ ExecStart=/opt/bin/kube-apiserver \
+ --service-account-key-file=/opt/bin/kube-serviceaccount.key \
+ --service-account-lookup=false \
+ --enable-admission-plugins=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota \
+ --runtime-config=api/v1 \
+ --allow-privileged=true \
+ --insecure-bind-address=0.0.0.0 \
+ --insecure-port=8080 \
+ --kubelet-https=true \
+ --secure-port=6443 \
+ --service-cluster-ip-range=10.100.0.0/16 \
+ --etcd-servers=http://127.0.0.1:2379 \
+ --public-address-override=${DEFAULT_IPV4} \
+ --logtostderr=true
+ Restart=always
+ RestartSec=10
+ - name: kube-controller-manager.service
+ command: start
+ content: |
+ [Unit]
+ Description=Kubernetes Controller Manager
+ Documentation=https://github.com/kubernetes/kubernetes
+ Requires=kube-apiserver.service
+ After=kube-apiserver.service
+
+ [Service]
+ ExecStartPre=/usr/bin/curl -L -o /opt/bin/kube-controller-manager -z /opt/bin/kube-controller-manager https://storage.googleapis.com/kubernetes-release/release/v1.1.2/bin/linux/amd64/kube-controller-manager
+ ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-controller-manager
+ ExecStart=/opt/bin/kube-controller-manager \
+ --service-account-private-key-file=/opt/bin/kube-serviceaccount.key \
+ --master=127.0.0.1:8080 \
+ --logtostderr=true
+ Restart=always
+ RestartSec=10
+ - name: kube-scheduler.service
+ command: start
+ content: |
+ [Unit]
+ Description=Kubernetes Scheduler
+ Documentation=https://github.com/kubernetes/kubernetes
+ Requires=kube-apiserver.service
+ After=kube-apiserver.service
+
+ [Service]
+ ExecStartPre=/usr/bin/curl -L -o /opt/bin/kube-scheduler -z /opt/bin/kube-scheduler https://storage.googleapis.com/kubernetes-release/release/v1.1.2/bin/linux/amd64/kube-scheduler
+ ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-scheduler
+ ExecStart=/opt/bin/kube-scheduler --master=127.0.0.1:8080
+ Restart=always
+ RestartSec=10
+ update:
+ group: alpha
+ reboot-strategy: off
diff --git a/content/ko/docs/setup/custom-cloud/node.yaml b/content/ko/docs/setup/custom-cloud/node.yaml
new file mode 100644
index 000000000..b5acc29f4
--- /dev/null
+++ b/content/ko/docs/setup/custom-cloud/node.yaml
@@ -0,0 +1,93 @@
+#cloud-config
+write-files:
+- path: /opt/bin/wupiao
+ permissions: '0755'
+ content: |
+ #!/bin/bash
+ # [w]ait [u]ntil [p]ort [i]s [a]ctually [o]pen
+ [ -n "$1" ] && [ -n "$2" ] && while ! curl --output /dev/null \
+ --silent --head --fail \
+ http://${1}:${2}; do sleep 1 && echo -n .; done;
+ exit $?
+coreos:
+ etcd2:
+ listen-client-urls: http://0.0.0.0:2379,http://0.0.0.0:4001
+ advertise-client-urls: http://0.0.0.0:2379,http://0.0.0.0:4001
+    initial-cluster: master=http://<master-private-ip>:2380
+ proxy: on
+ fleet:
+ metadata: "role=node"
+ units:
+ - name: etcd2.service
+ command: start
+ - name: fleet.service
+ command: start
+ - name: flanneld.service
+ command: start
+ - name: docker.service
+ command: start
+ - name: setup-network-environment.service
+ command: start
+ content: |
+ [Unit]
+ Description=Setup Network Environment
+ Documentation=https://github.com/kelseyhightower/setup-network-environment
+ Requires=network-online.target
+ After=network-online.target
+
+ [Service]
+ ExecStartPre=-/usr/bin/mkdir -p /opt/bin
+ ExecStartPre=/usr/bin/curl -L -o /opt/bin/setup-network-environment -z /opt/bin/setup-network-environment https://github.com/kelseyhightower/setup-network-environment/releases/download/v1.0.0/setup-network-environment
+ ExecStartPre=/usr/bin/chmod +x /opt/bin/setup-network-environment
+ ExecStart=/opt/bin/setup-network-environment
+ RemainAfterExit=yes
+ Type=oneshot
+ - name: kube-proxy.service
+ command: start
+ content: |
+ [Unit]
+ Description=Kubernetes Proxy
+ Documentation=https://github.com/kubernetes/kubernetes
+ Requires=setup-network-environment.service
+ After=setup-network-environment.service
+
+ [Service]
+ ExecStartPre=/usr/bin/curl -L -o /opt/bin/kube-proxy -z /opt/bin/kube-proxy https://storage.googleapis.com/kubernetes-release/release/v1.1.2/bin/linux/amd64/kube-proxy
+ ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-proxy
+ # wait for kubernetes master to be up and ready
+      ExecStartPre=/opt/bin/wupiao <master-private-ip> 8080
+ ExecStart=/opt/bin/kube-proxy \
+        --master=<master-private-ip>:8080 \
+ --logtostderr=true
+ Restart=always
+ RestartSec=10
+ - name: kube-kubelet.service
+ command: start
+ content: |
+ [Unit]
+ Description=Kubernetes Kubelet
+ Documentation=https://github.com/kubernetes/kubernetes
+ Requires=setup-network-environment.service
+ After=setup-network-environment.service
+
+ [Service]
+ EnvironmentFile=/etc/network-environment
+ ExecStartPre=/usr/bin/curl -L -o /opt/bin/kubelet -z /opt/bin/kubelet https://storage.googleapis.com/kubernetes-release/release/v1.1.2/bin/linux/amd64/kubelet
+ ExecStartPre=/usr/bin/chmod +x /opt/bin/kubelet
+ # wait for kubernetes master to be up and ready
+      ExecStartPre=/opt/bin/wupiao <master-private-ip> 8080
+ ExecStart=/opt/bin/kubelet \
+ --address=0.0.0.0 \
+ --port=10250 \
+ --hostname-override=${DEFAULT_IPV4} \
+        --api-servers=<master-private-ip>:8080 \
+ --allow-privileged=true \
+ --logtostderr=true \
+ --cadvisor-port=4194 \
+ --healthz-bind-address=0.0.0.0 \
+ --healthz-port=10248
+ Restart=always
+ RestartSec=10
+ update:
+ group: alpha
+ reboot-strategy: off
diff --git a/content/ko/docs/setup/independent/_index.md b/content/ko/docs/setup/independent/_index.md
new file mode 100755
index 000000000..d5332e751
--- /dev/null
+++ b/content/ko/docs/setup/independent/_index.md
@@ -0,0 +1,5 @@
+---
+title: "Bootstrapping Clusters with kubeadm"
+weight: 20
+---
+
diff --git a/content/ko/docs/setup/independent/control-plane-flags.md b/content/ko/docs/setup/independent/control-plane-flags.md
new file mode 100644
index 000000000..ae1e941f2
--- /dev/null
+++ b/content/ko/docs/setup/independent/control-plane-flags.md
@@ -0,0 +1,79 @@
+---
+title: Customizing control plane configuration with kubeadm
+content_template: templates/concept
+weight: 40
+---
+
+{{% capture overview %}}
+
+The kubeadm configuration exposes the following fields that can override the default flags passed to control plane components such as the APIServer, ControllerManager and Scheduler:
+
+- `APIServerExtraArgs`
+- `ControllerManagerExtraArgs`
+- `SchedulerExtraArgs`
+
+These fields consist of `key: value` pairs. To override a flag for a control plane component:
+
+1. Add the appropriate field to your configuration.
+2. Add the flags to override to the field.
+
+For more details on each field in the configuration you can navigate to our
+[API reference pages](https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm#MasterConfiguration).
+
+{{% /capture %}}
+
+{{% capture body %}}
+
+## APIServer flags
+
+For details, see the [reference documentation for kube-apiserver](https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/).
+
+Example usage:
+```yaml
+apiVersion: kubeadm.k8s.io/v1alpha2
+kind: MasterConfiguration
+kubernetesVersion: v1.11.0
+metadata:
+ name: 1.11-sample
+apiServerExtraArgs:
+ advertise-address: 192.168.0.103
+ anonymous-auth: false
+ enable-admission-plugins: AlwaysPullImages,DefaultStorageClass
+ audit-log-path: /home/johndoe/audit.log
+```
+
+## ControllerManager flags
+
+For details, see the [reference documentation for kube-controller-manager](https://kubernetes.io/docs/reference/command-line-tools-reference/kube-controller-manager/).
+
+Example usage:
+```yaml
+apiVersion: kubeadm.k8s.io/v1alpha2
+kind: MasterConfiguration
+kubernetesVersion: v1.11.0
+metadata:
+ name: 1.11-sample
+controllerManagerExtraArgs:
+ cluster-signing-key-file: /home/johndoe/keys/ca.key
+ bind-address: 0.0.0.0
+ deployment-controller-sync-period: 50
+```
+
+## Scheduler flags
+
+For details, see the [reference documentation for kube-scheduler](https://kubernetes.io/docs/reference/command-line-tools-reference/kube-scheduler/).
+
+Example usage:
+```yaml
+apiVersion: kubeadm.k8s.io/v1alpha2
+kind: MasterConfiguration
+kubernetesVersion: v1.11.0
+metadata:
+ name: 1.11-sample
+schedulerExtraArgs:
+ address: 0.0.0.0
+ config: /home/johndoe/schedconfig.yaml
+ kubeconfig: /home/johndoe/kubeconfig.yaml
+```
+
+{{% /capture %}}
diff --git a/content/ko/docs/setup/independent/create-cluster-kubeadm.md b/content/ko/docs/setup/independent/create-cluster-kubeadm.md
new file mode 100644
index 000000000..cbec7e541
--- /dev/null
+++ b/content/ko/docs/setup/independent/create-cluster-kubeadm.md
@@ -0,0 +1,590 @@
+---
+title: Creating a single master cluster with kubeadm
+content_template: templates/task
+weight: 30
+---
+
+{{% capture overview %}}
+
+**kubeadm** helps you bootstrap a minimum viable Kubernetes cluster that conforms to best practices. With kubeadm, your cluster should pass [Kubernetes Conformance tests](https://kubernetes.io/blog/2017/10/software-conformance-certification). Kubeadm also supports other cluster
+lifecycle functions, such as upgrades, downgrades, and managing [bootstrap tokens](/docs/reference/access-authn-authz/bootstrap-tokens/).
+
+Because you can install kubeadm on various types of machine (e.g. laptop, server,
+Raspberry Pi, etc.), it's well suited for integration with provisioning systems
+such as Terraform or Ansible.
+
+kubeadm's simplicity means it can serve a wide range of use cases:
+
+- New users can start with kubeadm to try Kubernetes out for the first time.
+- Users familiar with Kubernetes can spin up clusters with kubeadm and test their applications.
+- Larger projects can include kubeadm as a building block in a more complex system that can also include other installer tools.
+
+kubeadm is designed to be a simple way for new users to start trying
+Kubernetes out, possibly for the first time; a way for existing users to
+test their applications on and stitch together a cluster easily; and a
+building block in other ecosystem and/or installer tools with a larger
+scope.
+
+You can install _kubeadm_ very easily on operating systems that support
+installing deb or rpm packages. The responsible SIG for kubeadm,
+[SIG Cluster Lifecycle](https://github.com/kubernetes/community/tree/master/sig-cluster-lifecycle), provides these packages pre-built for you,
+but you may also build them from source for other OSes.
+
+
+### kubeadm Maturity
+
+| Area | Maturity Level |
+|---------------------------|--------------- |
+| Command line UX | beta |
+| Implementation | beta |
+| Config file API | alpha |
+| Self-hosting | alpha |
+| kubeadm alpha subcommands | alpha |
+| CoreDNS | GA |
+| DynamicKubeletConfig | alpha |
+
+
+kubeadm's overall feature state is **Beta** and will soon be graduated to
+**General Availability (GA)** during 2018. Some sub-features, like self-hosting
+or the configuration file API are still under active development. The
+implementation of creating the cluster may change slightly as the tool evolves,
+but the overall implementation should be pretty stable. Any commands under
+`kubeadm alpha` are, by definition, supported on an alpha level.
+
+
+### Support timeframes
+
+Kubernetes releases are generally supported for nine months, and during that
+period a patch release may be issued from the release branch if a severe bug or
+security issue is found. Here are the latest Kubernetes releases and the support
+timeframe, which also applies to `kubeadm`.
+
+| Kubernetes version | Release month | End-of-life-month |
+|--------------------|----------------|-------------------|
+| v1.6.x | March 2017 | December 2017 |
+| v1.7.x | June 2017 | March 2018 |
+| v1.8.x | September 2017 | June 2018 |
+| v1.9.x | December 2017 | September 2018 |
+| v1.10.x | March 2018 | December 2018 |
+| v1.11.x | June 2018 | March 2019 |
+
+{{% /capture %}}
+
+{{% capture prerequisites %}}
+
+- One or more machines running a deb/rpm-compatible OS, for example Ubuntu or CentOS
+- 2 GB or more of RAM per machine. Any less leaves little room for your
+ apps.
+- 2 CPUs or more on the master
+- Full network connectivity among all machines in the cluster. A public or
+ private network is fine.
+
+{{% /capture %}}
+
+{{% capture steps %}}
+
+## Objectives
+
+* Install a single master Kubernetes cluster or [high availability cluster](https://kubernetes.io/docs/setup/independent/high-availability/)
+* Install a Pod network on the cluster so that your Pods can
+ talk to each other
+
+## Instructions
+
+### Installing kubeadm on your hosts
+
+See ["Installing kubeadm"](/docs/setup/independent/install-kubeadm/).
+
+{{< note >}}
+**Note:** If you have already installed kubeadm, run `apt-get update &&
+apt-get upgrade` or `yum update` to get the latest version of kubeadm.
+
+When you upgrade, the kubelet restarts every few seconds as it waits in a crashloop for
+kubeadm to tell it what to do. This crashloop is expected and normal.
+After you initialize your master, the kubelet runs normally.
+{{< /note >}}
+
+### Initializing your master
+
+The master is the machine where the control plane components run, including
+etcd (the cluster database) and the API server (which the kubectl CLI
+communicates with).
+
+1. Choose a pod network add-on, and verify whether it requires any arguments to
+be passed to kubeadm initialization. Depending on which
+third-party provider you choose, you might need to set the `--pod-network-cidr` to
+a provider-specific value. See [Installing a pod network add-on](#pod-network).
+1. (Optional) Unless otherwise specified, kubeadm uses the network interface associated
+with the default gateway to advertise the master's IP. To use a different
+network interface, specify the `--apiserver-advertise-address=` argument
+to `kubeadm init`. To deploy an IPv6 Kubernetes cluster using IPv6 addressing, you
+must specify an IPv6 address, for example `--apiserver-advertise-address=fd00::101`
+1. (Optional) Run `kubeadm config images pull` prior to `kubeadm init` to verify
+connectivity to gcr.io registries.
+
+Now run:
+
+```bash
+kubeadm init
+```
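+
+If your chosen pod network or environment requires it (see the list above), pass the relevant flags; the values below are examples only:
+
+```bash
+# Illustrative: Flannel-style pod CIDR plus an explicit advertise address
+kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.0.101
+```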
+
+### More information
+
+For more information about `kubeadm init` arguments, see the [kubeadm reference guide](/docs/reference/setup-tools/kubeadm/kubeadm/).
+
+For a complete list of configuration options, see the [configuration file documentation](/docs/reference/setup-tools/kubeadm/kubeadm-init/#config-file).
+
+To customize control plane components, including optional IPv6 assignment to the liveness probes for the control plane components and the etcd server, provide extra arguments to each component as documented in [custom arguments](/docs/admin/kubeadm#custom-args).
+
+To run `kubeadm init` again, you must first [tear down the cluster](#tear-down).
+
+If you join a node with a different architecture to your cluster, create a separate
+Deployment or DaemonSet for `kube-proxy` and `kube-dns` on the node. This is because the Docker images for these
+components do not currently support multi-architecture.
+
+`kubeadm init` first runs a series of prechecks to ensure that the machine
+is ready to run Kubernetes. These prechecks expose warnings and exit on errors. `kubeadm init`
+then downloads and installs the cluster control plane components. This may take several minutes.
+The output should look like:
+
+```none
+[init] Using Kubernetes version: vX.Y.Z
+[preflight] Running pre-flight checks
+[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
+[certificates] Generated ca certificate and key.
+[certificates] Generated apiserver certificate and key.
+[certificates] apiserver serving cert is signed for DNS names [kubeadm-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.138.0.4]
+[certificates] Generated apiserver-kubelet-client certificate and key.
+[certificates] Generated sa key and public key.
+[certificates] Generated front-proxy-ca certificate and key.
+[certificates] Generated front-proxy-client certificate and key.
+[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
+[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
+[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
+[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
+[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
+[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
+[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
+[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
+[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
+[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
+[init] This often takes around a minute; or longer if the control plane images have to be pulled.
+[apiclient] All control plane components are healthy after 39.511972 seconds
+[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
+[markmaster] Will mark node master as master by adding a label and a taint
+[markmaster] Master master tainted and labelled with key/value: node-role.kubernetes.io/master=""
+[bootstraptoken] Using token: <token>
+[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
+[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
+[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
+[addons] Applied essential addon: CoreDNS
+[addons] Applied essential addon: kube-proxy
+
+Your Kubernetes master has initialized successfully!
+
+To start using your cluster, you need to run (as a regular user):
+
+ mkdir -p $HOME/.kube
+ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
+ sudo chown $(id -u):$(id -g) $HOME/.kube/config
+
+You should now deploy a pod network to the cluster.
+Run "kubectl apply -f [podnetwork].yaml" with one of the addon options listed at:
+ http://kubernetes.io/docs/admin/addons/
+
+You can now join any number of machines by running the following on each node
+as root:
+
+  kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash>
+```
+
+To make kubectl work for your non-root user, run these commands, which are
+also part of the `kubeadm init` output:
+
+```bash
+mkdir -p $HOME/.kube
+sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
+sudo chown $(id -u):$(id -g) $HOME/.kube/config
+```
+
+Alternatively, if you are the `root` user, you can run:
+
+```bash
+export KUBECONFIG=/etc/kubernetes/admin.conf
+```
+
+Make a record of the `kubeadm join` command that `kubeadm init` outputs. You
+need this command to [join nodes to your cluster](#join-nodes).
+
+The token is used for mutual authentication between the master and the joining
+nodes. The token included here is secret. Keep it safe, because anyone with this
+token can add authenticated nodes to your cluster. These tokens can be listed,
+created, and deleted with the `kubeadm token` command. See the
+[kubeadm reference guide](/docs/reference/setup-tools/kubeadm/kubeadm-token/).
+
+### Installing a pod network add-on {#pod-network}
+
+{{< caution >}}
+**Caution:** This section contains important information about installation and deployment order. Read it carefully before proceeding.
+{{< /caution >}}
+
+You must install a pod network add-on so that your pods can communicate with
+each other.
+
+**The network must be deployed before any applications. Also, CoreDNS will not start up before a network is installed.
+kubeadm only supports Container Network Interface (CNI) based networks (and does not support kubenet).**
+
+Several projects provide Kubernetes pod networks using CNI, some of which also
+support [Network Policy](/docs/concepts/services-networking/networkpolicies/). See the [add-ons page](/docs/concepts/cluster-administration/addons/) for a complete list of available network add-ons.
+- IPv6 support was added in [CNI v0.6.0](https://github.com/containernetworking/cni/releases/tag/v0.6.0).
+- [CNI bridge](https://github.com/containernetworking/plugins/blob/master/plugins/main/bridge/README.md) and [local-ipam](https://github.com/containernetworking/plugins/blob/master/plugins/ipam/host-local/README.md) are the only supported IPv6 network plugins in Kubernetes version 1.9.
+
+Note that kubeadm sets up a more secure cluster by default and enforces use of [RBAC](/docs/reference/access-authn-authz/rbac/).
+Make sure that your network manifest supports RBAC.
+
+You can install a pod network add-on with the following command:
+
+```bash
+kubectl apply -f <add-on.yaml>
+```
+
+You can install only one pod network per cluster.
+
+{{< tabs name="tabs-pod-install" >}}
+{{% tab name="Choose one..." %}}
+Please select one of the tabs to see installation instructions for the respective third-party Pod Network Provider.
+{{% /tab %}}
+
+{{% tab name="Calico" %}}
+For more information about using Calico, see [Quickstart for Calico on Kubernetes](https://docs.projectcalico.org/latest/getting-started/kubernetes/), [Installing Calico for policy and networking](https://docs.projectcalico.org/latest/getting-started/kubernetes/installation/calico), and other related resources.
+
+In order for Network Policy to work correctly, you need to pass `--pod-network-cidr=192.168.0.0/16` to `kubeadm init`. Note that Calico works on `amd64` only.
+
+```shell
+kubectl apply -f https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
+kubectl apply -f https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
+```
+
+{{% /tab %}}
+{{% tab name="Canal" %}}
+Canal uses Calico for policy and Flannel for networking. Refer to the Calico documentation for the [official getting started guide](https://docs.projectcalico.org/latest/getting-started/kubernetes/installation/flannel).
+
+For Canal to work correctly, `--pod-network-cidr=10.244.0.0/16` has to be passed to `kubeadm init`. Note that Canal works on `amd64` only.
+
+```shell
+kubectl apply -f https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/canal/rbac.yaml
+kubectl apply -f https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/canal/canal.yaml
+```
+
+{{% /tab %}}
+{{% tab name="Flannel" %}}
+
+For `flannel` to work correctly, `--pod-network-cidr=10.244.0.0/16` has to be passed to `kubeadm init`. Note that `flannel` works on `amd64`, `arm`, `arm64` and `ppc64le`. For it to work on a platform other than
+`amd64`, you must manually download the manifest and replace `amd64` occurrences with your chosen platform.
+
+Set `/proc/sys/net/bridge/bridge-nf-call-iptables` to `1` by running `sysctl net.bridge.bridge-nf-call-iptables=1`
+to pass bridged IPv4 traffic to iptables' chains. This is a requirement for some CNI plugins to work, for more information
+please see [here](https://kubernetes.io/docs/concepts/cluster-administration/network-plugins/#network-plugin-requirements).
+
+```shell
+kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml
+```
+
+For more information about `flannel`, see [the CoreOS flannel repository on GitHub
+](https://github.com/coreos/flannel).
+{{% /tab %}}
+
+{{% tab name="Kube-router" %}}
+Set `/proc/sys/net/bridge/bridge-nf-call-iptables` to `1` by running `sysctl net.bridge.bridge-nf-call-iptables=1`
+to pass bridged IPv4 traffic to iptables' chains. This is a requirement for some CNI plugins to work, for more information
+please see [here](https://kubernetes.io/docs/concepts/cluster-administration/network-plugins/#network-plugin-requirements).
+
+Kube-router relies on kube-controller-manager to allocate pod CIDR for the nodes. Therefore, use `kubeadm init` with the `--pod-network-cidr` flag.
+
+Kube-router provides pod networking, network policy, and a high-performing IP Virtual Server (IPVS)/Linux Virtual Server (LVS) based service proxy.
+
+For information on setting up Kubernetes cluster with Kube-router using kubeadm, please see official [setup guide](https://github.com/cloudnativelabs/kube-router/blob/master/docs/kubeadm.md).
+{{% /tab %}}
+
+{{% tab name="Romana" %}}
+Set `/proc/sys/net/bridge/bridge-nf-call-iptables` to `1` by running `sysctl net.bridge.bridge-nf-call-iptables=1`
+to pass bridged IPv4 traffic to iptables' chains. This is a requirement for some CNI plugins to work, for more information
+please see [here](https://kubernetes.io/docs/concepts/cluster-administration/network-plugins/#network-plugin-requirements).
+
+The official Romana set-up guide is [here](https://github.com/romana/romana/tree/master/containerize#using-kubeadm).
+
+Romana works on `amd64` only.
+
+```shell
+kubectl apply -f https://raw.githubusercontent.com/romana/romana/master/containerize/specs/romana-kubeadm.yml
+```
+{{% /tab %}}
+
+{{% tab name="Weave Net" %}}
+Set `/proc/sys/net/bridge/bridge-nf-call-iptables` to `1` by running `sysctl net.bridge.bridge-nf-call-iptables=1`
+to pass bridged IPv4 traffic to iptables' chains. This is a requirement for some CNI plugins to work, for more information
+please see [here](https://kubernetes.io/docs/concepts/cluster-administration/network-plugins/#network-plugin-requirements).
+
+The official Weave Net set-up guide is [here](https://www.weave.works/docs/net/latest/kube-addon/).
+
+Weave Net works on `amd64`, `arm`, `arm64` and `ppc64le` without any extra action required.
+Weave Net sets hairpin mode by default. This allows Pods to access themselves via their Service IP address
+if they don't know their PodIP.
+
+```shell
+kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
+```
+{{% /tab %}}
+
+{{% tab name="JuniperContrail/TungstenFabric" %}}
+Provides an overlay SDN solution, delivering multicloud networking, hybrid cloud networking,
+simultaneous overlay-underlay support, network policy enforcement, network isolation,
+service chaining and flexible load balancing.
+
+There are multiple, flexible ways to install JuniperContrail/TungstenFabric CNI.
+
+Refer to this quickstart: [TungstenFabric](https://tungstenfabric.github.io/website/)
+{{% /tab %}}
+{{< /tabs >}}
+
+
+Once a pod network has been installed, you can confirm that it is working by
+checking that the CoreDNS pod is Running in the output of `kubectl get pods --all-namespaces`.
+And once the CoreDNS pod is up and running, you can continue by joining your nodes.
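+
+For example, you can watch the DNS Pods directly; the `k8s-app=kube-dns` label below is the label CoreDNS is expected to carry (kept for compatibility with kube-dns):
+
+```shell
+kubectl get pods --all-namespaces
+# Or watch only the DNS Pods until they reach the Running state:
+kubectl get pods -n kube-system -l k8s-app=kube-dns --watch
+```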
+
+If your network is not working or CoreDNS is not in the Running state, check
+out our [troubleshooting docs](/docs/setup/independent/troubleshooting-kubeadm/).
+
+### Master Isolation
+
+By default, your cluster will not schedule pods on the master for security
+reasons. If you want to be able to schedule pods on the master, e.g. for a
+single-machine Kubernetes cluster for development, run:
+
+```bash
+kubectl taint nodes --all node-role.kubernetes.io/master-
+```
+
+With output looking something like:
+
+```
+node "test-01" untainted
+taint "node-role.kubernetes.io/master:" not found
+taint "node-role.kubernetes.io/master:" not found
+```
+
+This will remove the `node-role.kubernetes.io/master` taint from any nodes that
+have it, including the master node, meaning that the scheduler will then be able
+to schedule pods everywhere.
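+
+If you later want to restore the default behaviour, you can re-apply the taint; the node name below is illustrative:
+
+```bash
+# Re-add the NoSchedule taint to the master node (replace test-01 with your node's name).
+kubectl taint nodes test-01 node-role.kubernetes.io/master:NoSchedule
+```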
+
+### Joining your nodes {#join-nodes}
+
+The nodes are where your workloads (containers, pods, and so on) run. To add new nodes to your cluster, do the following for each machine:
+
+* SSH to the machine
+* Become root (e.g. `sudo su -`)
+* Run the command that was output by `kubeadm init`. For example:
+
+``` bash
+kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash>
+```
+
+If you do not have the token, you can get it by running the following command on the master node:
+
+``` bash
+kubeadm token list
+```
+
+The output is similar to this:
+
+``` console
+TOKEN                     TTL       EXPIRES                USAGES                   DESCRIPTION                                                EXTRA GROUPS
+8ewj1p.9r9hcjoqgajrj4gi   23h       2018-06-12T02:51:28Z   authentication,signing   The default bootstrap token generated by 'kubeadm init'.  system:bootstrappers:kubeadm:default-node-token
+```
+
+By default, tokens expire after 24 hours. If you are joining a node to the cluster after the current token has expired,
+you can create a new token by running the following command on the master node:
+
+``` bash
+kubeadm token create
+```
+
+The output is similar to this:
+
+``` console
+5didvk.d09sbcov8ph2amjw
+```
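+
+If your version of kubeadm supports it, you can also print a complete join command, with a fresh token and the CA cert hash filled in, in a single step:
+
+```bash
+kubeadm token create --print-join-command
+```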
+
+If you don't have the value of `--discovery-token-ca-cert-hash`, you can get it by running the following command chain on the master node:
+
+``` bash
+openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
+ openssl dgst -sha256 -hex | sed 's/^.* //'
+```
+
+The output is similar to this:
+
+``` console
+8cb2de97839780a412b93877f8507ad6c94f73add17d5d7058e91741c9d5ec78
+```
+
+{{< note >}}
+**Note:** To specify an IPv6 tuple for `<master-ip>:<master-port>`, the IPv6 address must be enclosed in square brackets, for example: `[fd00::101]:2073`.
+{{< /note >}}
+
+The output should look something like:
+
+```
+[preflight] Running pre-flight checks
+
+... (log output of join workflow) ...
+
+Node join complete:
+* Certificate signing request sent to master and response
+ received.
+* Kubelet informed of new secure connection details.
+
+Run 'kubectl get nodes' on the master to see this machine join.
+```
+
+A few seconds later, you should notice this node in the output from `kubectl get
+nodes` when run on the master.
+
+### (Optional) Controlling your cluster from machines other than the master
+
+In order to get a kubectl on some other computer (e.g. laptop) to talk to your
+cluster, you need to copy the administrator kubeconfig file from your master
+to your workstation like this:
+
+``` bash
+scp root@<master ip>:/etc/kubernetes/admin.conf .
+kubectl --kubeconfig ./admin.conf get nodes
+```
+
+{{< note >}}
+**Note:** The example above assumes SSH access is enabled for root. If that is not the
+case, you can copy the `admin.conf` file to be accessible by some other user
+and `scp` using that other user instead.
+
+The `admin.conf` file gives the user _superuser_ privileges over the cluster.
+This file should be used sparingly. For normal users, it's recommended to
+generate a unique credential with whitelisted privileges. You can do
+this with the `kubeadm alpha phase kubeconfig user --client-name <CN>`
+command. That command will print out a KubeConfig file to STDOUT which you
+should save to a file and distribute to your user. After that, whitelist
+privileges by using `kubectl create (cluster)rolebinding`.
+{{< /note >}}
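+
+As a sketch of that flow, assuming a hypothetical user named `alice` and the built-in `view` ClusterRole:
+
+```bash
+# Generate a kubeconfig for the user and save it to a file.
+kubeadm alpha phase kubeconfig user --client-name alice > alice.conf
+# Whitelist read-only access cluster-wide for that user.
+kubectl create clusterrolebinding alice-view --clusterrole=view --user=alice
+```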
+
+### (Optional) Proxying API Server to localhost
+
+If you want to connect to the API Server from outside the cluster you can use
+`kubectl proxy`:
+
+```bash
+scp root@<master ip>:/etc/kubernetes/admin.conf .
+kubectl --kubeconfig ./admin.conf proxy
+```
+
+You can now access the API Server locally at `http://localhost:8001/api/v1`
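+
+For example, with the proxy running you can query the API from another terminal on the same machine:
+
+```bash
+curl http://localhost:8001/api/v1/namespaces/kube-system/pods
+```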
+
+## Tear down {#tear-down}
+
+To undo what kubeadm did, you should first [drain the
+node](/docs/reference/generated/kubectl/kubectl-commands#drain) and make
+sure that the node is empty before shutting it down.
+
+Talking to the master with the appropriate credentials, run:
+
+```bash
+kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
+kubectl delete node <node name>
+```
+
+Then, on the node being removed, reset all kubeadm installed state:
+
+```bash
+kubeadm reset
+```
+
+If you wish to start over simply run `kubeadm init` or `kubeadm join` with the
+appropriate arguments.
+
+More options and information about the [`kubeadm reset`](/docs/reference/setup-tools/kubeadm/kubeadm-reset/)
+command can be found in the reference documentation.
+
+## Maintaining a cluster {#lifecycle}
+
+Instructions for maintaining kubeadm clusters (e.g. upgrades, downgrades, etc.) can be found [here](/docs/tasks/administer-cluster/kubeadm).
+
+## Explore other add-ons {#other-addons}
+
+See the [list of add-ons](/docs/concepts/cluster-administration/addons/) to explore other add-ons,
+including tools for logging, monitoring, network policy, visualization &
+control of your Kubernetes cluster.
+
+## What's next {#whats-next}
+
+* Verify that your cluster is running properly with [Sonobuoy](https://github.com/heptio/sonobuoy)
+* Learn about kubeadm's advanced usage in the [kubeadm reference documentation](/docs/reference/setup-tools/kubeadm/kubeadm)
+* Learn more about Kubernetes [concepts](/docs/concepts/) and [`kubectl`](/docs/user-guide/kubectl-overview/).
+* Configure log rotation. You can use **logrotate** for that. When using Docker, you can specify log rotation options for Docker daemon, for example `--log-driver=json-file --log-opt=max-size=10m --log-opt=max-file=5`. See [Configure and troubleshoot the Docker daemon](https://docs.docker.com/engine/admin/) for more details.
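+
+As a sketch of those Docker log-rotation options, you could instead set them in `/etc/docker/daemon.json` (values are illustrative):
+
+```bash
+cat <<EOF > /etc/docker/daemon.json
+{
+  "log-driver": "json-file",
+  "log-opts": { "max-size": "10m", "max-file": "5" }
+}
+EOF
+systemctl restart docker
+```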
+
+## Feedback {#feedback}
+
+* For bugs, visit [kubeadm Github issue tracker](https://github.com/kubernetes/kubeadm/issues)
+* For support, visit kubeadm Slack Channel:
+ [#kubeadm](https://kubernetes.slack.com/messages/kubeadm/)
+* General SIG Cluster Lifecycle Development Slack Channel:
+ [#sig-cluster-lifecycle](https://kubernetes.slack.com/messages/sig-cluster-lifecycle/)
+* SIG Cluster Lifecycle [SIG information](#TODO)
+* SIG Cluster Lifecycle Mailing List:
+ [kubernetes-sig-cluster-lifecycle](https://groups.google.com/forum/#!forum/kubernetes-sig-cluster-lifecycle)
+
+## Version skew policy {#version-skew-policy}
+
+The kubeadm CLI tool of version vX.Y may deploy clusters with a control plane of version vX.Y or vX.(Y-1).
+kubeadm CLI vX.Y can also upgrade an existing kubeadm-created cluster of version vX.(Y-1).
+
+Because we can't see into the future, kubeadm CLI vX.Y may or may not be able to deploy vX.(Y+1) clusters.
+
+Example: kubeadm v1.8 can deploy both v1.7 and v1.8 clusters and upgrade v1.7 kubeadm-created clusters to
+v1.8.
+
+Please also check our [installation guide](/docs/setup/independent/install-kubeadm/#installing-kubeadm-kubelet-and-kubectl)
+for more information on the version skew between kubelets and the control plane.
+
+## kubeadm works on multiple platforms {#multi-platform}
+
+kubeadm deb/rpm packages and binaries are built for amd64, arm (32-bit), arm64, ppc64le, and s390x
+following the [multi-platform
+proposal](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/multi-platform.md).
+
+Only some of the network providers offer solutions for all platforms. Please consult the list of
+network providers above or the documentation from each provider to figure out whether the provider
+supports your chosen platform.
+
+## Limitations {#limitations}
+
+Please note: kubeadm is a work in progress and these limitations will be
+addressed in due course.
+
+1. The cluster created here has a single master, with a single etcd database
+ running on it. This means that if the master fails, your cluster may lose
+ data and may need to be recreated from scratch. Adding HA support
+ (multiple etcd servers, multiple API servers, etc) to kubeadm is
+ still a work-in-progress.
+
+ Workaround: regularly
+ [back up etcd](https://coreos.com/etcd/docs/latest/admin_guide.html). The
+ etcd data directory configured by kubeadm is at `/var/lib/etcd` on the master.
+
+## Troubleshooting {#troubleshooting}
+
+If you are running into difficulties with kubeadm, please consult our [troubleshooting docs](/docs/setup/independent/troubleshooting-kubeadm/).
+
+
+
+
diff --git a/content/ko/docs/setup/independent/high-availability.md b/content/ko/docs/setup/independent/high-availability.md
new file mode 100644
index 000000000..2c4cb5c74
--- /dev/null
+++ b/content/ko/docs/setup/independent/high-availability.md
@@ -0,0 +1,523 @@
+---
+title: Creating Highly Available Clusters with kubeadm
+content_template: templates/task
+weight: 50
+---
+
+{{% capture overview %}}
+
+This page explains two different approaches to setting up a highly available Kubernetes
+cluster using kubeadm:
+
+- With stacked masters. This approach requires less infrastructure. etcd members
+and control plane nodes are co-located.
+- With an external etcd cluster. This approach requires more infrastructure. The
+control plane nodes and etcd members are separated.
+
+Your clusters must run Kubernetes version 1.11 or later. You should also be aware that
+setting up HA clusters with kubeadm is still experimental. You might encounter issues
+with upgrading your clusters, for example. We encourage you to try either approach,
+and provide feedback.
+
+{{< caution >}}
+**Caution**: This page does not address running your cluster on a cloud provider.
+In a cloud environment, neither approach documented here works with Service objects
+of type LoadBalancer, or with dynamic PersistentVolumes.
+{{< /caution >}}
+
+{{% /capture %}}
+
+{{% capture prerequisites %}}
+
+For both methods you need this infrastructure:
+
+- Three machines that meet [kubeadm's minimum
+ requirements](/docs/setup/independent/install-kubeadm/#before-you-begin) for
+ the masters
+- Three machines that meet [kubeadm's minimum
+ requirements](/docs/setup/independent/install-kubeadm/#before-you-begin) for
+ the workers
+- Full network connectivity between all machines in the cluster (public or
+ private network is fine)
+- SSH access from one device to all nodes in the system
+- sudo privileges on all machines
+
+For the external etcd cluster only, you also need:
+
+- Three additional machines for etcd members
+
+{{< note >}}
+**Note**: The following examples run Calico as the Pod networking provider. If
+you run another networking provider, make sure to replace any default values as
+needed.
+{{< /note >}}
+
+{{% /capture %}}
+
+{{% capture steps %}}
+
+## First steps for both methods
+
+{{< note >}}
+**Note**: All commands in this guide on any control plane or etcd node should be
+run as root.
+{{< /note >}}
+
+- Find your pod CIDR. For details, see [the CNI network
+ documentation](/docs/setup/independent/create-cluster-kubeadm/#pod-network).
+ The example uses Calico, so the pod CIDR is `192.168.0.0/16`.
+
+### Configure SSH
+
+1. Enable ssh-agent on your main device that has access to all other nodes in
+ the system:
+
+ ```
+ eval $(ssh-agent)
+ ```
+
+1. Add your SSH identity to the session:
+
+ ```
+ ssh-add ~/.ssh/path_to_private_key
+ ```
+
+1. SSH between nodes to check that the connection is working correctly.
+
+ - When you SSH to any node, make sure to add the `-A` flag:
+
+ ```
+ ssh -A 10.0.0.7
+ ```
+
+ - When using sudo on any node, make sure to preserve the environment so SSH
+ forwarding works:
+
+ ```
+ sudo -E -s
+ ```
+
+### Create load balancer for kube-apiserver
+
+{{< note >}}
+**Note**: There are many configurations for load balancers. The following
+example is only one option. Your cluster requirements may need a
+different configuration.
+{{< /note >}}
+
+1. Create a kube-apiserver load balancer with a name that resolves to DNS.
+
+ - In a cloud environment you should place your control plane nodes behind a TCP
+ forwarding load balancer. This load balancer distributes traffic to all
+ healthy control plane nodes in its target list. The health check for
+ an apiserver is a TCP check on the port the kube-apiserver listens on
+ (default value `:6443`).
+
+ - It is not recommended to use an IP address directly in a cloud environment.
+
+ - The load balancer must be able to communicate with all control plane nodes
+ on the apiserver port. It must also allow incoming traffic on its
+ listening port.
+
+1. Add the first control plane nodes to the load balancer and test the
+ connection:
+
+ ```sh
+ nc -v LOAD_BALANCER_IP PORT
+ ```
+
+ - A connection refused error is expected because the apiserver is not yet
+ running. A timeout, however, means the load balancer cannot communicate
+ with the control plane node. If a timeout occurs, reconfigure the load
+ balancer to communicate with the control plane node.
+
+1. Add the remaining control plane nodes to the load balancer target group.
+
+## Stacked control plane nodes
+
+### Bootstrap the first stacked control plane node
+
+1. Create a `kubeadm-config.yaml` template file:
+
+ apiVersion: kubeadm.k8s.io/v1alpha2
+ kind: MasterConfiguration
+ kubernetesVersion: v1.11.0
+ apiServerCertSANs:
+ - "LOAD_BALANCER_DNS"
+ api:
+ controlPlaneEndpoint: "LOAD_BALANCER_DNS:LOAD_BALANCER_PORT"
+ etcd:
+ local:
+ extraArgs:
+ listen-client-urls: "https://127.0.0.1:2379,https://CP0_IP:2379"
+ advertise-client-urls: "https://CP0_IP:2379"
+ listen-peer-urls: "https://CP0_IP:2380"
+ initial-advertise-peer-urls: "https://CP0_IP:2380"
+ initial-cluster: "CP0_HOSTNAME=https://CP0_IP:2380"
+ serverCertSANs:
+ - CP0_HOSTNAME
+ - CP0_IP
+ peerCertSANs:
+ - CP0_HOSTNAME
+ - CP0_IP
+ networking:
+ # This CIDR is a Calico default. Substitute or remove for your CNI provider.
+ podSubnet: "192.168.0.0/16"
+
+
+1. Replace the following variables in the template with the appropriate
+ values for your cluster:
+
+ * `LOAD_BALANCER_DNS`
+ * `LOAD_BALANCER_PORT`
+ * `CP0_HOSTNAME`
+ * `CP0_IP`
+
+1. Run `kubeadm init --config kubeadm-config.yaml`
+
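+For example, with illustrative values filled in for the first control plane node, the last two steps might look like this:
+
+```sh
+# Example values only; substitute your own load balancer address, port, and node details.
+sed -i "s/LOAD_BALANCER_DNS/lb.example.com/g; s/LOAD_BALANCER_PORT/6443/g; s/CP0_HOSTNAME/cp0/g; s/CP0_IP/10.0.0.7/g" kubeadm-config.yaml
+kubeadm init --config kubeadm-config.yaml
+```
+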
+### Copy required files to other control plane nodes
+
+The following certificates and other required files were created when you ran `kubeadm init`.
+Copy these files to your other control plane nodes:
+
+- `/etc/kubernetes/pki/ca.crt`
+- `/etc/kubernetes/pki/ca.key`
+- `/etc/kubernetes/pki/sa.key`
+- `/etc/kubernetes/pki/sa.pub`
+- `/etc/kubernetes/pki/front-proxy-ca.crt`
+- `/etc/kubernetes/pki/front-proxy-ca.key`
+- `/etc/kubernetes/pki/etcd/ca.crt`
+- `/etc/kubernetes/pki/etcd/ca.key`
+
+Copy the admin kubeconfig to the other control plane nodes:
+
+- `/etc/kubernetes/admin.conf`
+
+In the following example, replace
+`CONTROL_PLANE_IPS` with the IP addresses of the other control plane nodes.
+
+```sh
+USER=ubuntu # customizable
+CONTROL_PLANE_IPS="10.0.0.7 10.0.0.8"
+for host in ${CONTROL_PLANE_IPS}; do
+ scp /etc/kubernetes/pki/ca.crt "${USER}"@$host:
+ scp /etc/kubernetes/pki/ca.key "${USER}"@$host:
+ scp /etc/kubernetes/pki/sa.key "${USER}"@$host:
+ scp /etc/kubernetes/pki/sa.pub "${USER}"@$host:
+ scp /etc/kubernetes/pki/front-proxy-ca.crt "${USER}"@$host:
+ scp /etc/kubernetes/pki/front-proxy-ca.key "${USER}"@$host:
+ scp /etc/kubernetes/pki/etcd/ca.crt "${USER}"@$host:etcd-ca.crt
+ scp /etc/kubernetes/pki/etcd/ca.key "${USER}"@$host:etcd-ca.key
+ scp /etc/kubernetes/admin.conf "${USER}"@$host:
+done
+```
+
+{{< note >}}
+**Note**: Remember that your config may differ from this example.
+{{< /note >}}
+
+### Add the second stacked control plane node
+
+1. Create a second, different `kubeadm-config.yaml` template file:
+
+ apiVersion: kubeadm.k8s.io/v1alpha2
+ kind: MasterConfiguration
+ kubernetesVersion: v1.11.0
+ apiServerCertSANs:
+ - "LOAD_BALANCER_DNS"
+ api:
+ controlPlaneEndpoint: "LOAD_BALANCER_DNS:LOAD_BALANCER_PORT"
+ etcd:
+ local:
+ extraArgs:
+ listen-client-urls: "https://127.0.0.1:2379,https://CP1_IP:2379"
+ advertise-client-urls: "https://CP1_IP:2379"
+ listen-peer-urls: "https://CP1_IP:2380"
+ initial-advertise-peer-urls: "https://CP1_IP:2380"
+ initial-cluster: "CP0_HOSTNAME=https://CP0_IP:2380,CP1_HOSTNAME=https://CP1_IP:2380"
+ initial-cluster-state: existing
+ serverCertSANs:
+ - CP1_HOSTNAME
+ - CP1_IP
+ peerCertSANs:
+ - CP1_HOSTNAME
+ - CP1_IP
+ networking:
+ # This CIDR is a calico default. Substitute or remove for your CNI provider.
+ podSubnet: "192.168.0.0/16"
+
+1. Replace the following variables in the template with the appropriate values for your cluster:
+
+ - `LOAD_BALANCER_DNS`
+ - `LOAD_BALANCER_PORT`
+ - `CP0_HOSTNAME`
+ - `CP0_IP`
+ - `CP1_HOSTNAME`
+ - `CP1_IP`
+
+1. Move the copied files to the correct locations:
+
+ ```sh
+ USER=ubuntu # customizable
+ mkdir -p /etc/kubernetes/pki/etcd
+ mv /home/${USER}/ca.crt /etc/kubernetes/pki/
+ mv /home/${USER}/ca.key /etc/kubernetes/pki/
+ mv /home/${USER}/sa.pub /etc/kubernetes/pki/
+ mv /home/${USER}/sa.key /etc/kubernetes/pki/
+ mv /home/${USER}/front-proxy-ca.crt /etc/kubernetes/pki/
+ mv /home/${USER}/front-proxy-ca.key /etc/kubernetes/pki/
+ mv /home/${USER}/etcd-ca.crt /etc/kubernetes/pki/etcd/ca.crt
+ mv /home/${USER}/etcd-ca.key /etc/kubernetes/pki/etcd/ca.key
+ mv /home/${USER}/admin.conf /etc/kubernetes/admin.conf
+ ```
+
+1. Run the kubeadm phase commands to bootstrap the kubelet:
+
+ ```sh
+ kubeadm alpha phase certs all --config kubeadm-config.yaml
+ kubeadm alpha phase kubelet config write-to-disk --config kubeadm-config.yaml
+ kubeadm alpha phase kubelet write-env-file --config kubeadm-config.yaml
+ kubeadm alpha phase kubeconfig kubelet --config kubeadm-config.yaml
+ systemctl start kubelet
+ ```
+
+1. Run the commands to add the node to the etcd cluster:
+
+ ```sh
+ export CP0_IP=10.0.0.7
+ export CP0_HOSTNAME=cp0
+ export CP1_IP=10.0.0.8
+ export CP1_HOSTNAME=cp1
+
+ export KUBECONFIG=/etc/kubernetes/admin.conf
+ kubectl exec -n kube-system etcd-${CP0_HOSTNAME} -- etcdctl --ca-file /etc/kubernetes/pki/etcd/ca.crt --cert-file /etc/kubernetes/pki/etcd/peer.crt --key-file /etc/kubernetes/pki/etcd/peer.key --endpoints=https://${CP0_IP}:2379 member add ${CP1_HOSTNAME} https://${CP1_IP}:2380
+ kubeadm alpha phase etcd local --config kubeadm-config.yaml
+ ```
+
+ - This command causes the etcd cluster to become unavailable for a
+ brief period, after the node is added to the running cluster, and before the
+ new node is joined to the etcd cluster.
+
+1. Deploy the control plane components and mark the node as a master:
+
+ ```sh
+ kubeadm alpha phase kubeconfig all --config kubeadm-config.yaml
+ kubeadm alpha phase controlplane all --config kubeadm-config.yaml
+ kubeadm alpha phase mark-master --config kubeadm-config.yaml
+ ```
+
+### Add the third stacked control plane node
+
+1. Create a third, different `kubeadm-config.yaml` template file:
+
+ apiVersion: kubeadm.k8s.io/v1alpha2
+ kind: MasterConfiguration
+ kubernetesVersion: v1.11.0
+ apiServerCertSANs:
+ - "LOAD_BALANCER_DNS"
+ api:
+ controlPlaneEndpoint: "LOAD_BALANCER_DNS:LOAD_BALANCER_PORT"
+ etcd:
+ local:
+ extraArgs:
+ listen-client-urls: "https://127.0.0.1:2379,https://CP2_IP:2379"
+ advertise-client-urls: "https://CP2_IP:2379"
+ listen-peer-urls: "https://CP2_IP:2380"
+ initial-advertise-peer-urls: "https://CP2_IP:2380"
+ initial-cluster: "CP0_HOSTNAME=https://CP0_IP:2380,CP1_HOSTNAME=https://CP1_IP:2380,CP2_HOSTNAME=https://CP2_IP:2380"
+ initial-cluster-state: existing
+ serverCertSANs:
+ - CP2_HOSTNAME
+ - CP2_IP
+ peerCertSANs:
+ - CP2_HOSTNAME
+ - CP2_IP
+ networking:
+ # This CIDR is a calico default. Substitute or remove for your CNI provider.
+ podSubnet: "192.168.0.0/16"
+
+1. Replace the following variables in the template with the appropriate values for your cluster:
+
+ - `LOAD_BALANCER_DNS`
+ - `LOAD_BALANCER_PORT`
+ - `CP0_HOSTNAME`
+ - `CP0_IP`
+ - `CP1_HOSTNAME`
+ - `CP1_IP`
+ - `CP2_HOSTNAME`
+ - `CP2_IP`
+
+1. Move the copied files to the correct locations:
+
+ ```sh
+ USER=ubuntu # customizable
+ mkdir -p /etc/kubernetes/pki/etcd
+ mv /home/${USER}/ca.crt /etc/kubernetes/pki/
+ mv /home/${USER}/ca.key /etc/kubernetes/pki/
+ mv /home/${USER}/sa.pub /etc/kubernetes/pki/
+ mv /home/${USER}/sa.key /etc/kubernetes/pki/
+ mv /home/${USER}/front-proxy-ca.crt /etc/kubernetes/pki/
+ mv /home/${USER}/front-proxy-ca.key /etc/kubernetes/pki/
+ mv /home/${USER}/etcd-ca.crt /etc/kubernetes/pki/etcd/ca.crt
+ mv /home/${USER}/etcd-ca.key /etc/kubernetes/pki/etcd/ca.key
+ mv /home/${USER}/admin.conf /etc/kubernetes/admin.conf
+ ```
+
+1. Run the kubeadm phase commands to bootstrap the kubelet:
+
+ ```sh
+ kubeadm alpha phase certs all --config kubeadm-config.yaml
+ kubeadm alpha phase kubelet config write-to-disk --config kubeadm-config.yaml
+ kubeadm alpha phase kubelet write-env-file --config kubeadm-config.yaml
+ kubeadm alpha phase kubeconfig kubelet --config kubeadm-config.yaml
+ systemctl start kubelet
+ ```
+
+1. Run the commands to add the node to the etcd cluster:
+
+ ```sh
+ export CP0_IP=10.0.0.7
+ export CP0_HOSTNAME=cp0
+ export CP2_IP=10.0.0.9
+ export CP2_HOSTNAME=cp2
+
+ export KUBECONFIG=/etc/kubernetes/admin.conf
+ kubectl exec -n kube-system etcd-${CP0_HOSTNAME} -- etcdctl --ca-file /etc/kubernetes/pki/etcd/ca.crt --cert-file /etc/kubernetes/pki/etcd/peer.crt --key-file /etc/kubernetes/pki/etcd/peer.key --endpoints=https://${CP0_IP}:2379 member add ${CP2_HOSTNAME} https://${CP2_IP}:2380
+ kubeadm alpha phase etcd local --config kubeadm-config.yaml
+ ```
+
+1. Deploy the control plane components and mark the node as a master:
+
+ ```sh
+ kubeadm alpha phase kubeconfig all --config kubeadm-config.yaml
+ kubeadm alpha phase controlplane all --config kubeadm-config.yaml
+ kubeadm alpha phase mark-master --config kubeadm-config.yaml
+ ```
+
+## External etcd
+
+### Set up the cluster
+
+- Follow [these instructions](/docs/setup/independent/setup-ha-etcd-with-kubeadm/)
+ to set up the etcd cluster.
+
+### Copy required files to other control plane nodes
+
+The following certificates were created when you created the cluster. Copy them
+to your other control plane nodes:
+
+- `/etc/kubernetes/pki/etcd/ca.crt`
+- `/etc/kubernetes/pki/apiserver-etcd-client.crt`
+- `/etc/kubernetes/pki/apiserver-etcd-client.key`
+
+In the following example, replace `USER` and `CONTROL_PLANE_HOSTS` values with values
+for your environment.
+
+```sh
+USER=ubuntu
+CONTROL_PLANE_HOSTS="10.0.0.7 10.0.0.8 10.0.0.9"
+for host in $CONTROL_PLANE_HOSTS; do
+ scp /etc/kubernetes/pki/etcd/ca.crt "${USER}"@$host:
+ scp /etc/kubernetes/pki/apiserver-etcd-client.crt "${USER}"@$host:
+ scp /etc/kubernetes/pki/apiserver-etcd-client.key "${USER}"@$host:
+done
+```
+
+### Set up the first control plane node
+
+1. Create a `kubeadm-config.yaml` template file:
+
+ apiVersion: kubeadm.k8s.io/v1alpha2
+ kind: MasterConfiguration
+ kubernetesVersion: v1.11.0
+ apiServerCertSANs:
+ - "LOAD_BALANCER_DNS"
+ api:
+ controlPlaneEndpoint: "LOAD_BALANCER_DNS:LOAD_BALANCER_PORT"
+ etcd:
+ external:
+ endpoints:
+ - https://ETCD_0_IP:2379
+ - https://ETCD_1_IP:2379
+ - https://ETCD_2_IP:2379
+ caFile: /etc/kubernetes/pki/etcd/ca.crt
+ certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
+ keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key
+ networking:
+ # This CIDR is a calico default. Substitute or remove for your CNI provider.
+ podSubnet: "192.168.0.0/16"
+
+1. Replace the following variables in the template with the appropriate values for your cluster:
+
+ - `LOAD_BALANCER_DNS`
+ - `LOAD_BALANCER_PORT`
+ - `ETCD_0_IP`
+ - `ETCD_1_IP`
+ - `ETCD_2_IP`
+
+1. Run `kubeadm init --config kubeadm-config.yaml`
+
+### Copy required files to the correct locations
+
+The following certificates and other required files were created when you ran `kubeadm init`.
+Copy these files to your other control plane nodes:
+
+- `/etc/kubernetes/pki/ca.crt`
+- `/etc/kubernetes/pki/ca.key`
+- `/etc/kubernetes/pki/sa.key`
+- `/etc/kubernetes/pki/sa.pub`
+- `/etc/kubernetes/pki/front-proxy-ca.crt`
+- `/etc/kubernetes/pki/front-proxy-ca.key`
+
+In the following example, replace the list of
+`CONTROL_PLANE_IPS` values with the IP addresses of the other control plane nodes.
+
+```sh
+USER=ubuntu # customizable
+CONTROL_PLANE_IPS="10.0.0.7 10.0.0.8"
+for host in ${CONTROL_PLANE_IPS}; do
+ scp /etc/kubernetes/pki/ca.crt "${USER}"@$host:
+ scp /etc/kubernetes/pki/ca.key "${USER}"@$host:
+ scp /etc/kubernetes/pki/sa.key "${USER}"@$host:
+ scp /etc/kubernetes/pki/sa.pub "${USER}"@$host:
+ scp /etc/kubernetes/pki/front-proxy-ca.crt "${USER}"@$host:
+ scp /etc/kubernetes/pki/front-proxy-ca.key "${USER}"@$host:
+done
+```
+
+{{< note >}}
+**Note**: Remember that your config may differ from this example.
+{{< /note >}}
+
+### Set up the other control plane nodes
+
+1. Verify the location of the copied files.
+ Your `/etc/kubernetes` directory should look like this:
+
+ - `/etc/kubernetes/pki/apiserver-etcd-client.crt`
+ - `/etc/kubernetes/pki/apiserver-etcd-client.key`
+ - `/etc/kubernetes/pki/ca.crt`
+ - `/etc/kubernetes/pki/ca.key`
+ - `/etc/kubernetes/pki/front-proxy-ca.crt`
+ - `/etc/kubernetes/pki/front-proxy-ca.key`
+ - `/etc/kubernetes/pki/sa.key`
+ - `/etc/kubernetes/pki/sa.pub`
+ - `/etc/kubernetes/pki/etcd/ca.crt`
+
+1. Run `kubeadm init --config kubeadm-config.yaml` on each control plane node, where
+ `kubeadm-config.yaml` is the file you already created.
+
+## Common tasks after bootstrapping control plane
+
+### Install a pod network
+
+[Follow these instructions](/docs/setup/independent/create-cluster-kubeadm/#pod-network) to install
+the pod network. Make sure this corresponds to whichever pod CIDR you provided
+in the master configuration file.
+
+### Install workers
+
+Each worker node can now be joined to the cluster with the command returned from any of the
+`kubeadm init` commands.
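+
+That join command typically targets the load balancer endpoint configured earlier and has the usual form; all values below are placeholders printed by `kubeadm init`:
+
+```sh
+kubeadm join LOAD_BALANCER_DNS:LOAD_BALANCER_PORT --token <token> --discovery-token-ca-cert-hash sha256:<hash>
+```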
+
+{{% /capture %}}
diff --git a/content/ko/docs/setup/independent/install-kubeadm.md b/content/ko/docs/setup/independent/install-kubeadm.md
new file mode 100644
index 000000000..2ecc270b3
--- /dev/null
+++ b/content/ko/docs/setup/independent/install-kubeadm.md
@@ -0,0 +1,280 @@
+---
+title: Installing kubeadm
+content_template: templates/task
+weight: 20
+---
+
+{{% capture overview %}}
+
+This page shows how to install the `kubeadm` toolbox.
+For information on how to create a cluster with kubeadm once you have performed this installation process,
+see the [Using kubeadm to Create a Cluster](/docs/setup/independent/create-cluster-kubeadm/) page.
+
+{{% /capture %}}
+
+{{% capture prerequisites %}}
+
+* One or more machines running one of:
+ - Ubuntu 16.04+
+ - Debian 9
+ - CentOS 7
+ - RHEL 7
+ - Fedora 25/26 (best-effort)
+ - HypriotOS v1.0.1+
+ - Container Linux (tested with 1576.4.0)
+* 2 GB or more of RAM per machine (any less will leave little room for your apps)
+* 2 CPUs or more
+* Full network connectivity between all machines in the cluster (public or private network is fine)
+* Unique hostname, MAC address, and product_uuid for every node. See [here](#verify-the-mac-address-and-product-uuid-are-unique-for-every-node) for more details.
+* Certain ports are open on your machines. See [here](#check-required-ports) for more details.
+* Swap disabled. You **MUST** disable swap in order for the kubelet to work properly.
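+
+For example, on a typical Linux host you can turn swap off like this; the `/etc/fstab` edit, which keeps swap off across reboots, is a common approach that may need adjusting for your setup:
+
+```bash
+swapoff -a
+# Comment out swap entries so swap stays disabled after a reboot.
+sed -i '/ swap / s/^/#/' /etc/fstab
+```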
+
+{{% /capture %}}
+
+{{% capture steps %}}
+
+## Verify the MAC address and product_uuid are unique for every node
+
+* You can get the MAC address of the network interfaces using the command `ip link` or `ifconfig -a`
+* The product_uuid can be checked by using the command `sudo cat /sys/class/dmi/id/product_uuid`
+
+It is very likely that hardware devices will have unique addresses, although some virtual machines may have
+identical values. Kubernetes uses these values to uniquely identify the nodes in the cluster.
+If these values are not unique to each node, the installation process
+may [fail](https://github.com/kubernetes/kubeadm/issues/31).
+
+## Check network adapters
+
+If you have more than one network adapter, and your Kubernetes components are not reachable on the default
+route, we recommend you add IP route(s) so Kubernetes cluster addresses go via the appropriate adapter.
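+
+For example, you might add an explicit route for the cluster's service CIDR; the CIDR and device name below are assumptions, so use the values for your environment:
+
+```bash
+ip route add 10.96.0.0/12 dev eth1
+```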
+
+## Check required ports
+
+### Master node(s)
+
+| Protocol | Direction | Port Range | Purpose | Used By |
+|----------|-----------|------------|-------------------------|---------------------------|
+| TCP | Inbound | 6443* | Kubernetes API server | All |
+| TCP | Inbound | 2379-2380 | etcd server client API | kube-apiserver, etcd |
+| TCP | Inbound | 10250 | Kubelet API | Self, Control plane |
+| TCP | Inbound | 10251 | kube-scheduler | Self |
+| TCP | Inbound | 10252 | kube-controller-manager | Self |
+
+### Worker node(s)
+
+| Protocol | Direction | Port Range | Purpose | Used By |
+|----------|-----------|-------------|-----------------------|-------------------------|
+| TCP | Inbound | 10250 | Kubelet API | Self, Control plane |
+| TCP | Inbound | 30000-32767 | NodePort Services** | All |
+
+** Default port range for [NodePort Services](/docs/concepts/services-networking/service/).
+
+Any port numbers marked with * are overridable, so you will need to ensure any
+custom ports you provide are also open.
+
+Although etcd ports are included in master nodes, you can also host your own
+etcd cluster externally or on custom ports.
+
+The pod network plugin you use (see below) may also require certain ports to be
+open. Since this differs with each pod network plugin, please see the
+documentation for your plugin about which port(s) it needs.
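+
+A quick way to confirm that a required port is reachable from another machine is with `nc`; the address below is a placeholder:
+
+```bash
+nc -v MASTER_IP 6443
+```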
+
+## Installing Docker
+
+On each of your machines, install Docker.
+Version 17.03 is recommended, but 1.11, 1.12 and 1.13 are known to work as well.
+Versions 17.06+ _might work_, but have not yet been tested and verified by the Kubernetes node team.
+Keep track of the latest verified Docker version in the Kubernetes release notes.
+
+Run the following commands for your OS as root. You can become the root user by executing `sudo -i` after SSH-ing to each host.
+
+If you already have the required version of Docker installed, you can move on to the next section.
+If not, you can use the following commands to install Docker on your system:
+
+{{< tabs name="docker_install" >}}
+{{% tab name="Ubuntu, Debian or HypriotOS" %}}
+Install Docker from Ubuntu's repositories:
+
+```bash
+apt-get update
+apt-get install -y docker.io
+```
+
+or install Docker CE 17.03 from Docker's repositories for Ubuntu or Debian:
+
+```bash
+apt-get update
+apt-get install -y apt-transport-https ca-certificates curl software-properties-common
+curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
+add-apt-repository "deb https://download.docker.com/linux/$(. /etc/os-release; echo "$ID") $(lsb_release -cs) stable"
+apt-get update && apt-get install -y docker-ce=$(apt-cache madison docker-ce | grep 17.03 | head -1 | awk '{print $3}')
+```
+{{% /tab %}}
+{{% tab name="CentOS, RHEL or Fedora" %}}
+Install Docker using your operating system's bundled package:
+
+```bash
+yum install -y docker
+systemctl enable docker && systemctl start docker
+```
+{{% /tab %}}
+{{% tab name="Container Linux" %}}
+Enable and start Docker:
+
+```bash
+systemctl enable docker && systemctl start docker
+```
+{{% /tab %}}
+{{< /tabs >}}
+
+
+Refer to the [official Docker installation guides](https://docs.docker.com/engine/installation/)
+for more information.
+
+## Installing kubeadm, kubelet and kubectl
+
+You will install these packages on all of your machines:
+
+* `kubeadm`: the command to bootstrap the cluster.
+
+* `kubelet`: the component that runs on all of the machines in your cluster
+ and does things like starting pods and containers.
+
+* `kubectl`: the command line util to talk to your cluster.
+
+kubeadm **will not** install or manage `kubelet` or `kubectl` for you, so you will
+need to ensure they match the version of the Kubernetes control plane you want
+kubeadm to install for you. If you do not, there is a risk of a version skew occurring that
+can lead to unexpected, buggy behaviour. However, _one_ minor version skew between the
+kubelet and the control plane is supported, but the kubelet version may never exceed the API
+server version. For example, kubelets running 1.7.0 should be fully compatible with a 1.8.0 API server,
+but not vice versa.
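+
+If you need all three packages at one specific version, most package managers let you pin them explicitly. For example, on Debian-based systems, once the apt repository below is configured, and with an illustrative version string:
+
+```bash
+apt-get install -y kubelet=1.11.2-00 kubeadm=1.11.2-00 kubectl=1.11.2-00
+```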
+
+{{< warning >}}
+These instructions exclude all Kubernetes packages from any system upgrades.
+This is because kubeadm and Kubernetes require
+[special attention to upgrade](/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-11/).
+{{< /warning >}}
+
+For more information on version skews, please read our
+[version skew policy](/docs/setup/independent/create-cluster-kubeadm/#version-skew-policy).
+
+{{< tabs name="k8s_install" >}}
+{{% tab name="Ubuntu, Debian or HypriotOS" %}}
+```bash
+apt-get update && apt-get install -y apt-transport-https curl
+curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
+cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
+deb http://apt.kubernetes.io/ kubernetes-xenial main
+EOF
+apt-get update
+apt-get install -y kubelet kubeadm kubectl
+apt-mark hold kubelet kubeadm kubectl
+```
+{{% /tab %}}
+{{% tab name="CentOS, RHEL or Fedora" %}}
+```bash
+cat <<EOF > /etc/yum.repos.d/kubernetes.repo
+[kubernetes]
+name=Kubernetes
+baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
+enabled=1
+gpgcheck=1
+repo_gpgcheck=1
+gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
+exclude=kube*
+EOF
+setenforce 0
+yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
+systemctl enable kubelet && systemctl start kubelet
+```
+
+ **Note:**
+
+ - Disabling SELinux by running `setenforce 0` is required to allow containers to access the host filesystem, which is required by pod networks for example.
+ You have to do this until SELinux support is improved in the kubelet.
+ - Some users on RHEL/CentOS 7 have reported issues with traffic being routed incorrectly due to iptables being bypassed. You should ensure
+ `net.bridge.bridge-nf-call-iptables` is set to 1 in your `sysctl` config, e.g.
+
+ ```bash
+  cat <<EOF > /etc/sysctl.d/k8s.conf
+ net.bridge.bridge-nf-call-ip6tables = 1
+ net.bridge.bridge-nf-call-iptables = 1
+ EOF
+ sysctl --system
+ ```
+{{% /tab %}}
+{{% tab name="Container Linux" %}}
+Install CNI plugins (required for most pod networks):
+
+```bash
+CNI_VERSION="v0.6.0"
+mkdir -p /opt/cni/bin
+curl -L "https://github.com/containernetworking/plugins/releases/download/${CNI_VERSION}/cni-plugins-amd64-${CNI_VERSION}.tgz" | tar -C /opt/cni/bin -xz
+```
+
+Install `kubeadm`, `kubelet`, `kubectl` and add a `kubelet` systemd service:
+
+```bash
+RELEASE="$(curl -sSL https://dl.k8s.io/release/stable.txt)"
+
+mkdir -p /opt/bin
+cd /opt/bin
+curl -L --remote-name-all https://storage.googleapis.com/kubernetes-release/release/${RELEASE}/bin/linux/amd64/{kubeadm,kubelet,kubectl}
+chmod +x {kubeadm,kubelet,kubectl}
+
+curl -sSL "https://raw.githubusercontent.com/kubernetes/kubernetes/${RELEASE}/build/debs/kubelet.service" | sed "s:/usr/bin:/opt/bin:g" > /etc/systemd/system/kubelet.service
+mkdir -p /etc/systemd/system/kubelet.service.d
+curl -sSL "https://raw.githubusercontent.com/kubernetes/kubernetes/${RELEASE}/build/debs/10-kubeadm.conf" | sed "s:/usr/bin:/opt/bin:g" > /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
+```
+
+Enable and start `kubelet`:
+
+```bash
+systemctl enable kubelet && systemctl start kubelet
+```
+{{% /tab %}}
+{{< /tabs >}}
+
+
+The kubelet is now restarting every few seconds, as it waits in a crashloop for
+kubeadm to tell it what to do.
+
+## Configure cgroup driver used by kubelet on Master Node
+
+When using Docker, kubeadm will automatically detect the cgroup driver for the kubelet
+and set it in the `/var/lib/kubelet/kubeadm-flags.env` file during runtime.
+
+If you are using a different CRI, you have to modify the file
+`/etc/default/kubelet` with your `cgroup-driver` value, like so:
+
+```bash
+KUBELET_KUBEADM_EXTRA_ARGS=--cgroup-driver=<value>
+```
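+
+For example, if your CRI reports the `systemd` cgroup driver, a minimal version of that file could be written like this:
+
+```bash
+echo 'KUBELET_KUBEADM_EXTRA_ARGS=--cgroup-driver=systemd' > /etc/default/kubelet
+```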
+
+This file will be used by `kubeadm init` and `kubeadm join` to source extra
+user defined arguments for the kubelet.
+
+Please note that you **only** have to do this if the cgroup driver of your CRI
+is not `cgroupfs`, because that is already the default value in the kubelet.
+
+Restarting the kubelet is required:
+
+```bash
+systemctl daemon-reload
+systemctl restart kubelet
+```
+
+## Troubleshooting
+
+If you are running into difficulties with kubeadm, please consult our [troubleshooting docs](/docs/setup/independent/troubleshooting-kubeadm/).
+
+{{% capture whatsnext %}}
+
+* [Using kubeadm to Create a Cluster](/docs/setup/independent/create-cluster-kubeadm/)
+
+{{% /capture %}}
+
+
+
+
diff --git a/content/ko/docs/setup/independent/setup-ha-etcd-with-kubeadm.md b/content/ko/docs/setup/independent/setup-ha-etcd-with-kubeadm.md
new file mode 100644
index 000000000..b11a24d29
--- /dev/null
+++ b/content/ko/docs/setup/independent/setup-ha-etcd-with-kubeadm.md
@@ -0,0 +1,263 @@
+---
+title: Set up a Highly Available etcd Cluster With kubeadm
+content_template: templates/task
+weight: 60
+---
+
+{{% capture overview %}}
+
+Kubeadm defaults to running a single member etcd cluster in a static pod managed
+by the kubelet on the control plane node. This is not a highly available setup
+as the etcd cluster contains only one member and cannot sustain any members
+becoming unavailable. This task walks through the process of creating a highly
+available etcd cluster of three members that can be used as an external etcd
+cluster when using kubeadm to set up a Kubernetes cluster.
+
+{{% /capture %}}
+
+{{% capture prerequisites %}}
+
+* Three hosts that can talk to each other over ports 2379 and 2380. This
+ document assumes these default ports. However, they are configurable through
+ the kubeadm config file.
+* Each host must [have docker, kubelet, and kubeadm installed][toolbox].
+* Some infrastructure to copy files between hosts. For example `ssh` and `scp`
+ can satisfy this requirement.
+
+[toolbox]: /docs/setup/independent/install-kubeadm/
+
+{{% /capture %}}
+
+{{% capture steps %}}
+
+## Setting up the cluster
+
+The general approach is to generate all certs on one node and only distribute
+the *necessary* files to the other nodes.
+
+{{< note >}}
+**Note:** kubeadm contains all the necessary cryptographic machinery to generate
+the certificates described below; no other cryptographic tooling is required for
+this example.
+{{< /note >}}
+
+
+1. Configure the kubelet to be a service manager for etcd.
+
+ Running etcd is simpler than running kubernetes so you must override the
+ kubeadm-provided kubelet unit file by creating a new one with a higher
+ precedence.
+
+ ```sh
+ cat << EOF > /etc/systemd/system/kubelet.service.d/20-etcd-service-manager.conf
+ [Service]
+ ExecStart=
+ ExecStart=/usr/bin/kubelet --pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true
+ Restart=always
+ EOF
+
+ systemctl daemon-reload
+ systemctl restart kubelet
+ ```
+
+1. Create configuration files for kubeadm.
+
+ Generate one kubeadm configuration file for each host that will have an etcd
+ member running on it using the following script.
+
+ ```sh
+ # Update HOST0, HOST1, and HOST2 with the IPs or resolvable names of your hosts
+ export HOST0=10.0.0.6
+ export HOST1=10.0.0.7
+ export HOST2=10.0.0.8
+
+ # Create temp directories to store files that will end up on other hosts.
+ mkdir -p /tmp/${HOST0}/ /tmp/${HOST1}/ /tmp/${HOST2}/
+
+ ETCDHOSTS=(${HOST0} ${HOST1} ${HOST2})
+ NAMES=("infra0" "infra1" "infra2")
+
+ for i in "${!ETCDHOSTS[@]}"; do
+ HOST=${ETCDHOSTS[$i]}
+ NAME=${NAMES[$i]}
+ cat << EOF > /tmp/${HOST}/kubeadmcfg.yaml
+ apiVersion: "kubeadm.k8s.io/v1alpha2"
+ kind: MasterConfiguration
+ etcd:
+ localEtcd:
+ serverCertSANs:
+ - "${HOST}"
+ peerCertSANs:
+ - "${HOST}"
+ extraArgs:
+ initial-cluster: infra0=https://${ETCDHOSTS[0]}:2380,infra1=https://${ETCDHOSTS[1]}:2380,infra2=https://${ETCDHOSTS[2]}:2380
+ initial-cluster-state: new
+ name: ${NAME}
+ listen-peer-urls: https://${HOST}:2380
+ listen-client-urls: https://${HOST}:2379
+ advertise-client-urls: https://${HOST}:2379
+ initial-advertise-peer-urls: https://${HOST}:2380
+ EOF
+ done
+ ```
+
+1. Generate the certificate authority
+
+   If you already have a CA, then the only required action is copying the CA's `crt` and
+ `key` file to `/etc/kubernetes/pki/etcd/ca.crt` and
+ `/etc/kubernetes/pki/etcd/ca.key`. After those files have been copied, please
+ skip this step.
+
+ If you do not already have a CA then run this command on `$HOST0` (where you
+ generated the configuration files for kubeadm).
+
+ ```
+ kubeadm alpha phase certs etcd-ca
+ ```
+
+ This creates two files
+
+ - `/etc/kubernetes/pki/etcd/ca.crt`
+ - `/etc/kubernetes/pki/etcd/ca.key`
+
+1. Create certificates for each member
+
+ ```sh
+ kubeadm alpha phase certs etcd-server --config=/tmp/${HOST2}/kubeadmcfg.yaml
+ kubeadm alpha phase certs etcd-peer --config=/tmp/${HOST2}/kubeadmcfg.yaml
+ kubeadm alpha phase certs etcd-healthcheck-client --config=/tmp/${HOST2}/kubeadmcfg.yaml
+ kubeadm alpha phase certs apiserver-etcd-client --config=/tmp/${HOST2}/kubeadmcfg.yaml
+ cp -R /etc/kubernetes/pki /tmp/${HOST2}/
+ # cleanup non-reusable certificates
+ find /etc/kubernetes/pki -not -name ca.crt -not -name ca.key -type f -delete
+
+ kubeadm alpha phase certs etcd-server --config=/tmp/${HOST1}/kubeadmcfg.yaml
+ kubeadm alpha phase certs etcd-peer --config=/tmp/${HOST1}/kubeadmcfg.yaml
+ kubeadm alpha phase certs etcd-healthcheck-client --config=/tmp/${HOST1}/kubeadmcfg.yaml
+ kubeadm alpha phase certs apiserver-etcd-client --config=/tmp/${HOST1}/kubeadmcfg.yaml
+ cp -R /etc/kubernetes/pki /tmp/${HOST1}/
+ find /etc/kubernetes/pki -not -name ca.crt -not -name ca.key -type f -delete
+
+ kubeadm alpha phase certs etcd-server --config=/tmp/${HOST0}/kubeadmcfg.yaml
+ kubeadm alpha phase certs etcd-peer --config=/tmp/${HOST0}/kubeadmcfg.yaml
+ kubeadm alpha phase certs etcd-healthcheck-client --config=/tmp/${HOST0}/kubeadmcfg.yaml
+ kubeadm alpha phase certs apiserver-etcd-client --config=/tmp/${HOST0}/kubeadmcfg.yaml
+ # No need to move the certs because they are for HOST0
+
+ # clean up certs that should not be copied off this host
+ find /tmp/${HOST2} -name ca.key -type f -delete
+ find /tmp/${HOST1} -name ca.key -type f -delete
+ ```
+
+1. Copy certificates and kubeadm configs
+
+ The certificates have been generated and now they must be moved to their
+ respective hosts.
+
+ ```sh
+ USER=ubuntu
+ HOST=${HOST1}
+ scp -r /tmp/${HOST}/* ${USER}@${HOST}:
+ ssh ${USER}@${HOST}
+ USER@HOST $ sudo -Es
+ root@HOST $ chown -R root:root pki
+ root@HOST $ mv pki /etc/kubernetes/
+ ```
+
+1. Ensure all expected files exist
+
+ The complete list of required files on `$HOST0` is:
+
+ ```
+ /tmp/${HOST0}
+ └── kubeadmcfg.yaml
+ ---
+ /etc/kubernetes/pki
+ ├── apiserver-etcd-client.crt
+ ├── apiserver-etcd-client.key
+ └── etcd
+ ├── ca.crt
+ ├── ca.key
+ ├── healthcheck-client.crt
+ ├── healthcheck-client.key
+ ├── peer.crt
+ ├── peer.key
+ ├── server.crt
+ └── server.key
+ ```
+
+ On `$HOST1`:
+
+ ```
+ $HOME
+ └── kubeadmcfg.yaml
+ ---
+ /etc/kubernetes/pki
+ ├── apiserver-etcd-client.crt
+ ├── apiserver-etcd-client.key
+ └── etcd
+ ├── ca.crt
+ ├── healthcheck-client.crt
+ ├── healthcheck-client.key
+ ├── peer.crt
+ ├── peer.key
+ ├── server.crt
+ └── server.key
+ ```
+
+ On `$HOST2`
+
+ ```
+ $HOME
+ └── kubeadmcfg.yaml
+ ---
+ /etc/kubernetes/pki
+ ├── apiserver-etcd-client.crt
+ ├── apiserver-etcd-client.key
+ └── etcd
+ ├── ca.crt
+ ├── healthcheck-client.crt
+ ├── healthcheck-client.key
+ ├── peer.crt
+ ├── peer.key
+ ├── server.crt
+ └── server.key
+ ```
+
+1. Create the static pod manifests
+
+ Now that the certificates and configs are in place it's time to create the
+ manifests. On each host run the `kubeadm` command to generate a static manifest
+ for etcd.
+
+ ```sh
+ root@HOST0 $ kubeadm alpha phase etcd local --config=/tmp/${HOST0}/kubeadmcfg.yaml
+ root@HOST1 $ kubeadm alpha phase etcd local --config=/home/ubuntu/kubeadmcfg.yaml
+ root@HOST2 $ kubeadm alpha phase etcd local --config=/home/ubuntu/kubeadmcfg.yaml
+ ```
+
+1. Optional: Check the cluster health
+
+ ```sh
+ docker run --rm -it \
+ --net host \
+ -v /etc/kubernetes:/etc/kubernetes quay.io/coreos/etcd:v3.2.18 etcdctl \
+ --cert-file /etc/kubernetes/pki/etcd/peer.crt \
+ --key-file /etc/kubernetes/pki/etcd/peer.key \
+ --ca-file /etc/kubernetes/pki/etcd/ca.crt \
+ --endpoints https://${HOST0}:2379 cluster-health
+ ...
+ cluster is healthy
+ ```
+
+{{% /capture %}}
+
+{{% capture whatsnext %}}
+
+Once you have a working 3-member etcd cluster, you can continue setting up a
+highly available control plane using the [external etcd method with
+kubeadm](/docs/setup/independent/high-availability/).
+
+{{% /capture %}}
+
+
diff --git a/content/ko/docs/setup/independent/troubleshooting-kubeadm.md b/content/ko/docs/setup/independent/troubleshooting-kubeadm.md
new file mode 100644
index 000000000..63effc467
--- /dev/null
+++ b/content/ko/docs/setup/independent/troubleshooting-kubeadm.md
@@ -0,0 +1,241 @@
+---
+title: Troubleshooting kubeadm
+content_template: templates/concept
+weight: 70
+---
+
+{{% capture overview %}}
+
+As with any program, you might run into an error installing or running kubeadm.
+This page lists some common failure scenarios and provides steps that can help you understand and fix the problem.
+
+If your problem is not listed below, please take the following steps:
+
+- If you think your problem is a bug with kubeadm:
+ - Go to [github.com/kubernetes/kubeadm](https://github.com/kubernetes/kubeadm/issues) and search for existing issues.
+ - If no issue exists, please [open one](https://github.com/kubernetes/kubeadm/issues/new) and follow the issue template.
+
+- If you are unsure about how kubeadm works, you can ask on Slack in #kubeadm, or open a question on StackOverflow. Please include
+ relevant tags like `#kubernetes` and `#kubeadm` so folks can help you.
+
+{{% /capture %}}
+
+{{% capture body %}}
+
+## `ebtables` or some similar executable not found during installation
+
+If you see the following warnings while running `kubeadm init`
+
+```sh
+[preflight] WARNING: ebtables not found in system path
+[preflight] WARNING: ethtool not found in system path
+```
+
+Then you may be missing `ebtables`, `ethtool` or a similar executable on your node. You can install them with the following commands:
+
+- For Ubuntu/Debian users, run `apt install ebtables ethtool`.
+- For CentOS/Fedora users, run `yum install ebtables ethtool`.
+
+## kubeadm blocks waiting for control plane during installation
+
+If you notice that `kubeadm init` hangs after printing out the following line:
+
+```sh
+[apiclient] Created API client, waiting for the control plane to become ready
+```
+
+This may be caused by a number of problems. The most common are:
+
+- network connection problems. Check that your machine has full network connectivity before continuing.
+- the default cgroup driver configuration for the kubelet differs from that used by Docker.
+ Check the system log file (e.g. `/var/log/message`) or examine the output from `journalctl -u kubelet`. If you see something like the following:
+
+ ```shell
+ error: failed to run Kubelet: failed to create kubelet:
+ misconfiguration: kubelet cgroup driver: "systemd" is different from docker cgroup driver: "cgroupfs"
+ ```
+
+ There are two common ways to fix the cgroup driver problem:
+
+ 1. Install docker again following instructions
+ [here](/docs/setup/independent/install-kubeadm/#installing-docker).
+ 1. Change the kubelet config to match the Docker cgroup driver manually, you can refer to
+ [Configure cgroup driver used by kubelet on Master Node](/docs/setup/independent/install-kubeadm/#configure-cgroup-driver-used-by-kubelet-on-master-node)
+ for detailed instructions.
+
+- control plane Docker containers are crashlooping or hanging. You can check this by running `docker ps` and investigating each container by running `docker logs`.
+
+## kubeadm blocks when removing managed containers
+
+The following could happen if Docker halts and does not remove any Kubernetes-managed containers:
+
+```bash
+sudo kubeadm reset
+[preflight] Running pre-flight checks
+[reset] Stopping the kubelet service
+[reset] Unmounting mounted directories in "/var/lib/kubelet"
+[reset] Removing kubernetes-managed containers
+(block)
+```
+
+A possible solution is to restart the Docker service and then re-run `kubeadm reset`:
+
+```bash
+sudo systemctl restart docker.service
+sudo kubeadm reset
+```
+
+Inspecting the logs for docker may also be useful:
+
+```sh
+journalctl -ul docker
+```
+
+## Pods in `RunContainerError`, `CrashLoopBackOff` or `Error` state
+
+Right after `kubeadm init` there should not be any pods in these states.
+
+- If there are pods in one of these states _right after_ `kubeadm init`, please open an
+ issue in the kubeadm repo. `coredns` (or `kube-dns`) should be in the `Pending` state
+ until you have deployed the network solution.
+- If you see Pods in the `RunContainerError`, `CrashLoopBackOff` or `Error` state
+  after deploying the network solution and nothing happens to `coredns` (or `kube-dns`),
+  it's very likely that the Pod Network solution that you installed is somehow broken. You
+  might have to grant it more RBAC privileges or use a newer version. Please file
+  an issue in the Pod Network providers' issue tracker and get the issue triaged there.
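+
+To investigate such Pods, the usual starting point is to describe them and read their logs; the pod name below is a placeholder:
+
+```sh
+kubectl -n kube-system describe pod <pod-name>
+kubectl -n kube-system logs <pod-name>
+```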
+
+## `coredns` (or `kube-dns`) is stuck in the `Pending` state
+
+This is **expected** and part of the design. kubeadm is network provider-agnostic, so the admin
+should [install the pod network solution](/docs/concepts/cluster-administration/addons/)
+of choice. You have to install a Pod Network
+before CoreDNS can be fully deployed. Hence the `Pending` state before the network is set up.
+
+## `HostPort` services do not work
+
+The `HostPort` and `HostIP` functionality is available depending on your Pod Network
+provider. Please contact the author of the Pod Network solution to find out whether
+`HostPort` and `HostIP` functionality are available.
+
+Calico, Canal, and Flannel CNI providers are verified to support HostPort.
+
+For more information, see the [CNI portmap documentation](https://github.com/containernetworking/plugins/blob/master/plugins/meta/portmap/README.md).
+
+If your network provider does not support the portmap CNI plugin, you may need to use the [NodePort feature of
+services](/docs/concepts/services-networking/service/#type-nodeport) or use `HostNetwork=true`.
+
+## Pods are not accessible via their Service IP
+
+- Many network add-ons do not yet enable [hairpin mode](https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service/#a-pod-cannot-reach-itself-via-service-ip)
+ which allows pods to access themselves via their Service IP. This is an issue related to
+ [CNI](https://github.com/containernetworking/cni/issues/476). Please contact the network
+ add-on provider to get the latest status of their support for hairpin mode.
+
+- If you are using VirtualBox (directly or via Vagrant), you will need to
+ ensure that `hostname -i` returns a routable IP address. By default the first
+  interface is connected to a non-routable host-only network. A workaround
+ is to modify `/etc/hosts`, see this [Vagrantfile](https://github.com/errordeveloper/k8s-playground/blob/22dd39dfc06111235620e6c4404a96ae146f26fd/Vagrantfile#L11)
+ for an example.
+
+## TLS certificate errors
+
+The following error indicates a possible certificate mismatch.
+
+```none
+# kubectl get pods
+Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")
+```
+
+- Verify that the `$HOME/.kube/config` file contains a valid certificate, and
+ regenerate a certificate if necessary. The certificates in a kubeconfig file
+ are base64 encoded. The `base64 -d` command can be used to decode the certificate
+ and `openssl x509 -text -noout` can be used for viewing the certificate information.
+- Another workaround is to overwrite the existing `kubeconfig` for the "admin" user:
+
+ ```sh
+ mv $HOME/.kube $HOME/.kube.bak
+ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
+ sudo chown $(id -u):$(id -g) $HOME/.kube/config
+ ```
+
+## Default NIC When using flannel as the pod network in Vagrant
+
+The following error might indicate that something was wrong in the pod network:
+
+```sh
+Error from server (NotFound): the server could not find the requested resource
+```
+
+- If you're using flannel as the pod network inside Vagrant, then you will have to specify the default interface name for flannel.
+
+ Vagrant typically assigns two interfaces to all VMs. The first, for which all hosts are assigned the IP address `10.0.2.15`, is for external traffic that gets NATed.
+
+ This may lead to problems with flannel, which defaults to the first interface on a host. This leads to all hosts thinking they have the same public IP address. To prevent this, pass the `--iface eth1` flag to flannel so that the second interface is chosen.
+
+## Non-public IP used for containers
+
+In some situations `kubectl logs` and `kubectl run` commands may return with the following errors in an otherwise functional cluster:
+
+```sh
+Error from server: Get https://10.19.0.41:10250/containerLogs/default/mysql-ddc65b868-glc5m/mysql: dial tcp 10.19.0.41:10250: getsockopt: no route to host
+```
+
+- This may be due to Kubernetes using an IP that can not communicate with other IPs on the seemingly same subnet, possibly by policy of the machine provider.
+- Digital Ocean assigns a public IP to `eth0` as well as a private one to be used internally as anchor for their floating IP feature, yet `kubelet` will pick the latter as the node's `InternalIP` instead of the public one.
+
+  Use `ip addr show` to check for this scenario instead of `ifconfig` because `ifconfig` will not display the offending alias IP address. Alternatively, an API endpoint specific to Digital Ocean allows you to query for the anchor IP from the droplet:
+
+ ```sh
+ curl http://169.254.169.254/metadata/v1/interfaces/public/0/anchor_ipv4/address
+ ```
+
+  The workaround is to tell `kubelet` which IP to use with the `--node-ip` flag. When using Digital Ocean, it can be the public one (assigned to `eth0`) or the private one (assigned to `eth1`) should you want to use the optional private network. The [KubeletExtraArgs section of the MasterConfiguration file](https://github.com/kubernetes/kubernetes/blob/master/cmd/kubeadm/app/apis/kubeadm/v1alpha2/types.go#L147) can be used for this.
+
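+  For example, a minimal sketch of such a configuration, assuming the kubeadm
+  `v1alpha2` API (field names may differ in other kubeadm versions, and the
+  address below is only an example):
+
+  ```sh
+  cat <<EOF > kubeadm-config.yaml
+  apiVersion: kubeadm.k8s.io/v1alpha2
+  kind: MasterConfiguration
+  nodeRegistration:
+    kubeletExtraArgs:
+      node-ip: 203.0.113.10   # replace with the IP the node should advertise
+  EOF
+  ```
+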
+ Then restart `kubelet`:
+
+ ```sh
+ systemctl daemon-reload
+ systemctl restart kubelet
+ ```
+
+## Services with externalTrafficPolicy=Local are not reachable
+
+On nodes where the hostname for the kubelet is overridden using the `--hostname-override` option, kube-proxy will default to treating 127.0.0.1 as the node IP, which results in rejecting connections for Services configured for `externalTrafficPolicy=Local`. This situation can be verified by checking the output of `kubectl -n kube-system logs <kube-proxy pod name>`:
+
+```sh
+W0507 22:33:10.372369 1 server.go:586] Failed to retrieve node info: nodes "ip-10-0-23-78" not found
+W0507 22:33:10.372474 1 proxier.go:463] invalid nodeIP, initializing kube-proxy with 127.0.0.1 as nodeIP
+```
+
+A workaround for this is to modify the kube-proxy DaemonSet in the following way:
+
+```sh
+kubectl -n kube-system patch --type json daemonset kube-proxy -p "$(cat <<'EOF'
+[
+ {
+ "op": "add",
+ "path": "/spec/template/spec/containers/0/env",
+ "value": [
+ {
+ "name": "NODE_NAME",
+ "valueFrom": {
+ "fieldRef": {
+ "apiVersion": "v1",
+ "fieldPath": "spec.nodeName"
+ }
+ }
+ }
+ ]
+ },
+ {
+ "op": "add",
+ "path": "/spec/template/spec/containers/0/command/-",
+ "value": "--hostname-override=${NODE_NAME}"
+ }
+]
+EOF
+)"
+
+```
+{{% /capture %}}
\ No newline at end of file
diff --git a/content/ko/docs/setup/minikube.md b/content/ko/docs/setup/minikube.md
new file mode 100644
index 000000000..70956a643
--- /dev/null
+++ b/content/ko/docs/setup/minikube.md
@@ -0,0 +1,360 @@
+---
+title: Running Kubernetes Locally via Minikube
+---
+
+Minikube is a tool that makes it easy to run Kubernetes locally. Minikube runs a single-node Kubernetes cluster inside a VM on your laptop for users looking to try out Kubernetes or develop with it day-to-day.
+
+{{< toc >}}
+
+### Minikube Features
+
+* Minikube supports Kubernetes features such as:
+ * DNS
+ * NodePorts
+ * ConfigMaps and Secrets
+ * Dashboards
+ * Container Runtime: Docker, [rkt](https://github.com/rkt/rkt) and [CRI-O](https://github.com/kubernetes-incubator/cri-o)
+ * Enabling CNI (Container Network Interface)
+ * Ingress
+
+## Installation
+
+See [Installing Minikube](/docs/tasks/tools/install-minikube/).
+
+## Quickstart
+
+Here's a brief demo of minikube usage.
+If you want to change the VM driver, add the appropriate `--vm-driver=xxx` flag to `minikube start`. Minikube supports
+the following drivers:
+
+* virtualbox
+* vmwarefusion
+* kvm2 ([driver installation](https://git.k8s.io/minikube/docs/drivers.md#kvm2-driver))
+* kvm ([driver installation](https://git.k8s.io/minikube/docs/drivers.md#kvm-driver))
+* hyperkit ([driver installation](https://git.k8s.io/minikube/docs/drivers.md#hyperkit-driver))
+* xhyve ([driver installation](https://git.k8s.io/minikube/docs/drivers.md#xhyve-driver)) (deprecated)
+
+Note that the IP below is dynamic and can change. It can be retrieved with `minikube ip`.
+
+```shell
+$ minikube start
+Starting local Kubernetes cluster...
+Running pre-create checks...
+Creating machine...
+Starting local Kubernetes cluster...
+
+$ kubectl run hello-minikube --image=k8s.gcr.io/echoserver:1.10 --port=8080
+deployment.apps/hello-minikube created
+$ kubectl expose deployment hello-minikube --type=NodePort
+service/hello-minikube exposed
+
+# We have now launched an echoserver pod but we have to wait until the pod is up before curling/accessing it
+# via the exposed service.
+# To check whether the pod is up and running we can use the following:
+$ kubectl get pod
+NAME READY STATUS RESTARTS AGE
+hello-minikube-3383150820-vctvh 0/1 ContainerCreating 0 3s
+# We can see that the pod is still being created from the ContainerCreating status
+$ kubectl get pod
+NAME READY STATUS RESTARTS AGE
+hello-minikube-3383150820-vctvh 1/1 Running 0 13s
+# We can see that the pod is now Running and we will now be able to curl it:
+$ curl $(minikube service hello-minikube --url)
+CLIENT VALUES:
+client_address=192.168.99.1
+command=GET
+real path=/
+...
+$ kubectl delete services hello-minikube
+service "hello-minikube" deleted
+$ kubectl delete deployment hello-minikube
+deployment.extensions "hello-minikube" deleted
+$ minikube stop
+Stopping local Kubernetes cluster...
+Stopping "minikube"...
+```
+
+### Alternative Container Runtimes
+
+#### CRI-O
+
+To use [CRI-O](https://github.com/kubernetes-incubator/cri-o) as the container runtime, run:
+
+```bash
+$ minikube start \
+ --network-plugin=cni \
+ --container-runtime=cri-o \
+ --bootstrapper=kubeadm
+```
+
+Or you can use the extended version:
+
+```bash
+$ minikube start \
+ --network-plugin=cni \
+ --extra-config=kubelet.container-runtime=remote \
+ --extra-config=kubelet.container-runtime-endpoint=/var/run/crio.sock \
+ --extra-config=kubelet.image-service-endpoint=/var/run/crio.sock \
+ --bootstrapper=kubeadm
+```
+
+#### rkt container engine
+
+To use [rkt](https://github.com/rkt/rkt) as the container runtime, run:
+
+```shell
+$ minikube start \
+ --network-plugin=cni \
+ --container-runtime=rkt
+```
+
+This will use an alternative minikube ISO image containing both rkt and Docker, and enable CNI networking.
+
+### Driver plugins
+
+See [DRIVERS](https://git.k8s.io/minikube/docs/drivers.md) for details on supported drivers and how to install
+plugins, if required.
+
+### Reusing the Docker daemon
+
+When using a single VM for Kubernetes, it's useful to reuse minikube's built-in Docker daemon: this way you don't have to build a Docker registry on your host machine and push images into it; instead you can build inside the same Docker daemon that minikube uses, which speeds up local experiments. Just make sure you tag your Docker image with something other than `latest` and use that tag when you pull the image. Otherwise, if you do not specify a version for your image, it is assumed to be `:latest`, with a pull image policy of `Always`, which may eventually result in `ErrImagePull` because you may not have any versions of your Docker image in the default Docker registry (usually DockerHub) yet.
+
+To be able to work with the Docker daemon on your Mac/Linux host, use the `docker-env` command in your shell:
+
+```
+eval $(minikube docker-env)
+```
+You should now be able to use Docker on the command line of your host Mac/Linux machine, talking to the Docker daemon inside the minikube VM:
+
+```
+docker ps
+```
+
+On CentOS 7, Docker may report the following error:
+
+```
+Could not read CA certificate "/etc/docker/ca.pem": open /etc/docker/ca.pem: no such file or directory
+```
+
+The fix is to update `/etc/sysconfig/docker` to ensure that minikube's environment changes are respected:
+
+```
+< DOCKER_CERT_PATH=/etc/docker
+---
+> if [ -z "${DOCKER_CERT_PATH}" ]; then
+> DOCKER_CERT_PATH=/etc/docker
+> fi
+```
+
+Remember to set `imagePullPolicy` to something other than `Always`, as otherwise Kubernetes won't use the images you built locally.
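+
+For example, a quick local-build workflow might look like this (a sketch; the image
+name `my-image` and tag `v1` are placeholders):
+
+```shell
+# Point the docker CLI at minikube's Docker daemon
+eval $(minikube docker-env)
+
+# Build the image inside the minikube VM and run it without pulling
+docker build -t my-image:v1 .
+kubectl run my-app --image=my-image:v1 --image-pull-policy=Never
+```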
+
+## Managing your Cluster
+
+### Starting a Cluster
+
+The `minikube start` command can be used to start your cluster.
+This command creates and configures a virtual machine that runs a single-node Kubernetes cluster.
+This command also configures your [kubectl](/docs/user-guide/kubectl-overview/) installation to communicate with this cluster.
+
+If you are behind a web proxy, you will need to pass this information to minikube, e.g. via:
+
+```
+https_proxy=<my proxy> minikube start --docker-env http_proxy=<my proxy> --docker-env https_proxy=<my proxy> --docker-env no_proxy=192.168.99.0/24
+```
+
+Unfortunately just setting the environment variables will not work.
+
+Minikube will also create a "minikube" context, and set it to default in kubectl.
+To switch back to this context later, run this command: `kubectl config use-context minikube`.
+
+#### Specifying the Kubernetes version
+
+Minikube supports running multiple different versions of Kubernetes. You can
+access a list of all available versions via
+
+```
+minikube get-k8s-versions
+```
+
+You can specify the specific version of Kubernetes for Minikube to use by
+adding the `--kubernetes-version` string to the `minikube start` command. For
+example, to run version `v1.7.3`, you would run the following:
+
+```
+minikube start --kubernetes-version v1.7.3
+```
+
+### Configuring Kubernetes
+
+Minikube has a "configurator" feature that allows users to configure the Kubernetes components with arbitrary values.
+To use this feature, you can use the `--extra-config` flag on the `minikube start` command.
+
+This flag is repeated, so you can pass it several times with several different values to set multiple options.
+
+This flag takes a string of the form `component.key=value`, where `component` is one of the strings from the list below, `key` is a field on the
+configuration struct, and `value` is the value to set.
+
+Valid keys can be found by examining the documentation for the Kubernetes `componentconfigs` for each component.
+Here is the documentation for each supported configuration:
+
+* [kubelet](https://godoc.org/k8s.io/kubernetes/pkg/kubelet/apis/kubeletconfig#KubeletConfiguration)
+* [apiserver](https://godoc.org/k8s.io/kubernetes/cmd/kube-apiserver/app/options#ServerRunOptions)
+* [proxy](https://godoc.org/k8s.io/kubernetes/pkg/proxy/apis/kubeproxyconfig#KubeProxyConfiguration)
+* [controller-manager](https://godoc.org/k8s.io/kubernetes/pkg/apis/componentconfig#KubeControllerManagerConfiguration)
+* [etcd](https://godoc.org/github.com/coreos/etcd/etcdserver#ServerConfig)
+* [scheduler](https://godoc.org/k8s.io/kubernetes/pkg/apis/componentconfig#KubeSchedulerConfiguration)
+
+#### Examples
+
+To change the `MaxPods` setting to 5 on the Kubelet, pass this flag: `--extra-config=kubelet.MaxPods=5`.
+
+This feature also supports nested structs. To change the `LeaderElection.LeaderElect` setting to `true` on the scheduler, pass this flag: `--extra-config=scheduler.LeaderElection.LeaderElect=true`.
+
+To set the `AuthorizationMode` on the `apiserver` to `RBAC`, you can use: `--extra-config=apiserver.Authorization.Mode=RBAC`.
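+
+For example, these flags can be combined in a single `minikube start` invocation (a
+sketch using the settings above):
+
+```shell
+minikube start \
+  --extra-config=kubelet.MaxPods=5 \
+  --extra-config=scheduler.LeaderElection.LeaderElect=true \
+  --extra-config=apiserver.Authorization.Mode=RBAC
+```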
+
+### Stopping a Cluster
+The `minikube stop` command can be used to stop your cluster.
+This command shuts down the minikube virtual machine, but preserves all cluster state and data.
+Starting the cluster again will restore it to its previous state.
+
+### Deleting a Cluster
+The `minikube delete` command can be used to delete your cluster.
+This command shuts down and deletes the minikube virtual machine. No data or state is preserved.
+
+## Interacting With your Cluster
+
+### Kubectl
+
+The `minikube start` command creates a "[kubectl context](/docs/reference/generated/kubectl/kubectl-commands/#-em-set-context-em-)" called "minikube".
+This context contains the configuration to communicate with your minikube cluster.
+
+Minikube sets this context as the default automatically, but if you need to switch back to it in the future, run:
+
+`kubectl config use-context minikube`
+
+Alternatively, pass the context on each command like this: `kubectl get pods --context=minikube`.
+
+### Dashboard
+
+To access the [Kubernetes Dashboard](/docs/tasks/access-application-cluster/web-ui-dashboard/), run this command in a shell after starting minikube to get the address:
+
+```shell
+minikube dashboard
+```
+
+### Services
+
+To access a service exposed via a node port, run this command in a shell after starting minikube to get the address:
+
+```shell
+minikube service [-n NAMESPACE] [--url] NAME
+```
+
+## Networking
+
+The minikube VM is exposed to the host system via a host-only IP address that can be obtained with the `minikube ip` command.
+Any services of type `NodePort` can be accessed over that IP address, on the NodePort.
+
+To determine the NodePort for your service, you can use a `kubectl` command like this:
+
+`kubectl get service $SERVICE --output='jsonpath="{.spec.ports[0].nodePort}"'`
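+
+For example, a sketch of reaching a NodePort service from the host (assuming the
+`hello-minikube` service from the quickstart above):
+
+```shell
+NODEPORT=$(kubectl get service hello-minikube --output='jsonpath={.spec.ports[0].nodePort}')
+curl http://$(minikube ip):$NODEPORT/
+```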
+
+## Persistent Volumes
+Minikube supports [PersistentVolumes](/docs/concepts/storage/persistent-volumes/) of type `hostPath`.
+These PersistentVolumes are mapped to a directory inside the minikube VM.
+
+The Minikube VM boots into a tmpfs, so most directories will not be persisted across reboots (`minikube stop`).
+However, Minikube is configured to persist files stored under the following host directories:
+
+* `/data`
+* `/var/lib/localkube`
+* `/var/lib/docker`
+
+Here is an example PersistentVolume config to persist data in the `/data` directory:
+
+```yaml
+apiVersion: v1
+kind: PersistentVolume
+metadata:
+ name: pv0001
+spec:
+ accessModes:
+ - ReadWriteOnce
+ capacity:
+ storage: 5Gi
+ hostPath:
+ path: /data/pv0001/
+```
+
+## Mounted Host Folders
+Some drivers will mount a host folder within the VM so that you can easily share files between the VM and the host. These are not configurable at the moment and differ depending on the driver and OS you are using.
+
+**Note:** Host folder sharing is not implemented in the KVM driver yet.
+
+| Driver | OS | HostFolder | VM |
+| --- | --- | --- | --- |
+| VirtualBox | Linux | /home | /hosthome |
+| VirtualBox | macOS | /Users | /Users |
+| VirtualBox | Windows | C://Users | /c/Users |
+| VMware Fusion | macOS | /Users | /Users |
+| Xhyve | macOS | /Users | /Users |
+
+
+## Private Container Registries
+
+To access a private container registry, follow the steps on [this page](/docs/concepts/containers/images/).
+
+We recommend you use `ImagePullSecrets`, but if you would like to configure access on the minikube VM you can place the `.dockercfg` in the `/home/docker` directory or the `config.json` in the `/home/docker/.docker` directory.
+
+## Add-ons
+
+In order to have minikube properly start or restart custom addons,
+place the addons you wish to be launched with minikube in the `~/.minikube/addons`
+directory. Addons in this folder will be moved to the minikube VM and
+launched each time minikube is started or restarted.
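+
+For example (a sketch; `my-addon.yaml` is a placeholder for your own manifest):
+
+```shell
+mkdir -p ~/.minikube/addons
+cp my-addon.yaml ~/.minikube/addons/
+minikube start
+```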
+
+## Using Minikube with an HTTP Proxy
+
+Minikube creates a Virtual Machine that includes Kubernetes and a Docker daemon.
+When Kubernetes attempts to schedule containers using Docker, the Docker daemon may require external network access to pull containers.
+
+If you are behind an HTTP proxy, you may need to supply Docker with the proxy settings.
+To do this, pass the required environment variables as flags during `minikube start`.
+
+For example:
+
+```shell
+$ minikube start --docker-env http_proxy=http://$YOURPROXY:PORT \
+ --docker-env https_proxy=https://$YOURPROXY:PORT
+```
+
+If your Virtual Machine address is 192.168.99.100, then chances are your proxy settings will prevent kubectl from directly reaching it.
+To bypass the proxy configuration for this IP address, modify your `no_proxy` settings. You can do so with:
+
+```shell
+$ export no_proxy=$no_proxy,$(minikube ip)
+```
+
+## Known Issues
+* Features that require a Cloud Provider will not work in Minikube. These include:
+ * LoadBalancers
+* Features that require multiple nodes will not work. These include:
+ * Advanced scheduling policies
+
+## Design
+
+Minikube uses [libmachine](https://github.com/docker/machine/tree/master/libmachine) for provisioning VMs, and [localkube](https://git.k8s.io/minikube/pkg/localkube) (originally written and donated to this project by [RedSpread](https://redspread.com/)) for running the cluster.
+
+For more information about minikube, see the [proposal](https://git.k8s.io/community/contributors/design-proposals/cluster-lifecycle/local-cluster-ux.md).
+
+## Additional Links
+* **Goals and Non-Goals**: For the goals and non-goals of the minikube project, please see our [roadmap](https://git.k8s.io/minikube/docs/contributors/roadmap.md).
+* **Development Guide**: See [CONTRIBUTING.md](https://git.k8s.io/minikube/CONTRIBUTING.md) for an overview of how to send pull requests.
+* **Building Minikube**: For instructions on how to build/test minikube from source, see the [build guide](https://git.k8s.io/minikube/docs/contributors/build_guide.md)
+* **Adding a New Dependency**: For instructions on how to add a new dependency to minikube see the [adding dependencies guide](https://git.k8s.io/minikube/docs/contributors/adding_a_dependency.md)
+* **Adding a New Addon**: For instruction on how to add a new addon for minikube see the [adding an addon guide](https://git.k8s.io/minikube/docs/contributors/adding_an_addon.md)
+* **Updating Kubernetes**: For instructions on how to update kubernetes see the [updating Kubernetes guide](https://git.k8s.io/minikube/docs/contributors/updating_kubernetes.md)
+
+## Community
+
+Contributions, questions, and comments are all welcomed and encouraged! minikube developers hang out on [Slack](https://kubernetes.slack.com) in the #minikube channel (get an invitation [here](http://slack.kubernetes.io/)). We also have the [kubernetes-dev Google Groups mailing list](https://groups.google.com/forum/#!forum/kubernetes-dev). If you are posting to the list please prefix your subject with "minikube: ".
diff --git a/content/ko/docs/setup/multiple-zones.md b/content/ko/docs/setup/multiple-zones.md
new file mode 100644
index 000000000..ffbd5346d
--- /dev/null
+++ b/content/ko/docs/setup/multiple-zones.md
@@ -0,0 +1,328 @@
+---
+title: Running in Multiple Zones
+---
+
+## Introduction
+
+Kubernetes 1.2 adds support for running a single cluster in multiple failure zones
+(GCE calls them simply "zones", AWS calls them "availability zones", here we'll refer to them as "zones").
+This is a lightweight version of a broader Cluster Federation feature (previously referred to by the affectionate
+nickname ["Ubernetes"](https://github.com/kubernetes/community/blob/{{< param "githubbranch" >}}/contributors/design-proposals/multicluster/federation.md)).
+Full Cluster Federation allows combining separate
+Kubernetes clusters running in different regions or cloud providers
+(or on-premises data centers). However, many
+users simply want to run a more available Kubernetes cluster in multiple zones
+of their single cloud provider, and this is what the multizone support in 1.2 allows
+(this previously went by the nickname "Ubernetes Lite").
+
+Multizone support is deliberately limited: a single Kubernetes cluster can run
+in multiple zones, but only within the same region (and cloud provider). Only
+GCE and AWS are currently supported automatically (though it is easy to
+add similar support for other clouds or even bare metal, by simply arranging
+for the appropriate labels to be added to nodes and volumes).
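+
+For example, on bare metal you might apply the zone and region labels to a node by hand
+(a sketch; the node name and label values are placeholders):
+
+```shell
+kubectl label node my-node \
+  failure-domain.beta.kubernetes.io/region=my-region \
+  failure-domain.beta.kubernetes.io/zone=my-zone-1
+```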
+
+
+{{< toc >}}
+
+## Functionality
+
+When nodes are started, the kubelet automatically adds labels to them with
+zone information.
+
+Kubernetes will automatically spread the pods in a replication controller
+or service across nodes in a single-zone cluster (to reduce the impact of
+failures.) With multiple-zone clusters, this spreading behavior is
+extended across zones (to reduce the impact of zone failures.) (This is
+achieved via `SelectorSpreadPriority`). This is a best-effort
+placement, and so if the zones in your cluster are heterogeneous
+(e.g. different numbers of nodes, different types of nodes, or
+different pod resource requirements), this might prevent perfectly
+even spreading of your pods across zones. If desired, you can use
+homogeneous zones (same number and types of nodes) to reduce the
+probability of unequal spreading.
+
+When persistent volumes are created, the `PersistentVolumeLabel`
+admission controller automatically adds zone labels to them. The scheduler (via the
+`VolumeZonePredicate` predicate) will then ensure that pods that claim a
+given volume are only placed into the same zone as that volume, as volumes
+cannot be attached across zones.
+
+## Limitations
+
+There are some important limitations of the multizone support:
+
+* We assume that the different zones are located close to each other in the
+network, so we don't perform any zone-aware routing. In particular, traffic
+that goes via services might cross zones (even if some pods backing that service
+exist in the same zone as the client), and this may incur additional latency and cost.
+
+* Volume zone-affinity will only work with a `PersistentVolume`, and will not
+work if you directly specify an EBS volume in the pod spec (for example).
+
+* Clusters cannot span clouds or regions (this functionality will require full
+federation support).
+
+* Although your nodes are in multiple zones, kube-up currently builds
+a single master node by default. While services are highly
+available and can tolerate the loss of a zone, the control plane is
+located in a single zone. Users that want a highly available control
+plane should follow the [high availability](/docs/admin/high-availability) instructions.
+
+* StatefulSet volume zone spreading when using dynamic provisioning is currently not compatible with
+pod affinity or anti-affinity policies.
+
+* If the name of the StatefulSet contains dashes ("-"), volume zone spreading
+may not provide a uniform distribution of storage across zones.
+
+* When specifying multiple PVCs in a Deployment or Pod spec, the StorageClass
+needs to be configured for a specific, single zone, or the PVs need to be
+statically provisioned in a specific zone. Another workaround is to use a
+StatefulSet, which will ensure that all the volumes for a replica are
+provisioned in the same zone.
+
+
+## Walkthrough
+
+We're now going to walk through setting up and using a multi-zone
+cluster on both GCE & AWS. To do so, you bring up a full cluster
+(specifying `MULTIZONE=true`), and then you add nodes in additional zones
+by running `kube-up` again (specifying `KUBE_USE_EXISTING_MASTER=true`).
+
+### Bringing up your cluster
+
+Create the cluster as normal, but pass `MULTIZONE=true` to tell the cluster to manage multiple zones, creating the initial nodes in us-central1-a.
+
+GCE:
+
+```shell
+curl -sS https://get.k8s.io | MULTIZONE=true KUBERNETES_PROVIDER=gce KUBE_GCE_ZONE=us-central1-a NUM_NODES=3 bash
+```
+
+AWS:
+
+```shell
+curl -sS https://get.k8s.io | MULTIZONE=true KUBERNETES_PROVIDER=aws KUBE_AWS_ZONE=us-west-2a NUM_NODES=3 bash
+```
+
+This step brings up a cluster as normal, still running in a single zone
+(but `MULTIZONE=true` has enabled multi-zone capabilities).
+
+### Nodes are labeled
+
+View the nodes; you can see that they are labeled with zone information.
+They are all in `us-central1-a` (GCE) or `us-west-2a` (AWS) so far. The
+labels are `failure-domain.beta.kubernetes.io/region` for the region,
+and `failure-domain.beta.kubernetes.io/zone` for the zone:
+
+```shell
+> kubectl get nodes --show-labels
+
+
+NAME STATUS AGE VERSION LABELS
+kubernetes-master Ready,SchedulingDisabled 6m v1.6.0+fff5156 beta.kubernetes.io/instance-type=n1-standard-1,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-master
+kubernetes-minion-87j9 Ready 6m v1.6.0+fff5156 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-87j9
+kubernetes-minion-9vlv Ready 6m v1.6.0+fff5156 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-9vlv
+kubernetes-minion-a12q Ready 6m v1.6.0+fff5156 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-a12q
+```
+
+### Add more nodes in a second zone
+
+Let's add another set of nodes to the existing cluster, reusing the
+existing master, running in a different zone (us-central1-b or us-west-2b).
+We run kube-up again; by specifying `KUBE_USE_EXISTING_MASTER=true`,
+kube-up will not create a new master, but will instead reuse the one that was
+previously created.
+
+GCE:
+
+```shell
+KUBE_USE_EXISTING_MASTER=true MULTIZONE=true KUBERNETES_PROVIDER=gce KUBE_GCE_ZONE=us-central1-b NUM_NODES=3 kubernetes/cluster/kube-up.sh
+```
+
+On AWS we also need to specify the network CIDR for the additional
+subnet, along with the master internal IP address:
+
+```shell
+KUBE_USE_EXISTING_MASTER=true MULTIZONE=true KUBERNETES_PROVIDER=aws KUBE_AWS_ZONE=us-west-2b NUM_NODES=3 KUBE_SUBNET_CIDR=172.20.1.0/24 MASTER_INTERNAL_IP=172.20.0.9 kubernetes/cluster/kube-up.sh
+```
+
+
+View the nodes again; 3 more nodes should have launched and be tagged
+in us-central1-b:
+
+```shell
+> kubectl get nodes --show-labels
+
+NAME STATUS AGE VERSION LABELS
+kubernetes-master Ready,SchedulingDisabled 16m v1.6.0+fff5156 beta.kubernetes.io/instance-type=n1-standard-1,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-master
+kubernetes-minion-281d Ready 2m v1.6.0+fff5156 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-b,kubernetes.io/hostname=kubernetes-minion-281d
+kubernetes-minion-87j9 Ready 16m v1.6.0+fff5156 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-87j9
+kubernetes-minion-9vlv Ready 16m v1.6.0+fff5156 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-9vlv
+kubernetes-minion-a12q Ready 17m v1.6.0+fff5156 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-a12q
+kubernetes-minion-pp2f Ready 2m v1.6.0+fff5156 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-b,kubernetes.io/hostname=kubernetes-minion-pp2f
+kubernetes-minion-wf8i Ready 2m v1.6.0+fff5156 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-b,kubernetes.io/hostname=kubernetes-minion-wf8i
+```
+
+### Volume affinity
+
+Create a volume using dynamic volume creation (only PersistentVolumes are supported for zone affinity).
+
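+A sketch of such a claim follows; the claim name (`claim1`) and size (`5Gi`) match the
+output further below, while the remaining fields are a minimal illustrative example:
+
+```shell
+kubectl create -f - <<EOF
+{
+  "kind": "PersistentVolumeClaim",
+  "apiVersion": "v1",
+  "metadata": { "name": "claim1" },
+  "spec": {
+    "accessModes": ["ReadWriteOnce"],
+    "resources": { "requests": { "storage": "5Gi" } }
+  }
+}
+EOF
+```
+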
+The dynamically created PersistentVolume is labeled with the zone and region it was created in:
+
+```shell
+> kubectl get pv --show-labels
+NAME           CAPACITY   ACCESSMODES   STATUS    CLAIM            REASON    AGE       LABELS
+pv-gce-mj4gm   5Gi        RWO           Bound     default/claim1             46s       failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a
+```
+
+So now we will create a pod that uses the persistent volume claim.
+Because GCE PDs / AWS EBS volumes cannot be attached across zones,
+this means that this pod can only be created in the same zone as the volume.
+
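+A sketch of such a pod follows; the pod name (`mypod`) and the claim it mounts (`claim1`)
+match the output further below, while the container image and mount path are illustrative
+placeholders:
+
+```shell
+kubectl create -f - <<EOF
+apiVersion: v1
+kind: Pod
+metadata:
+  name: mypod
+spec:
+  containers:
+    - name: frontend
+      image: nginx          # any image works for this example
+      volumeMounts:
+        - mountPath: "/var/www/html"
+          name: mypd
+  volumes:
+    - name: mypd
+      persistentVolumeClaim:
+        claimName: claim1
+EOF
+```
+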
+Note that the pod was scheduled onto a node in the same zone as the volume:
+
+```shell
+> kubectl describe pod mypod | grep Node
+Node:        kubernetes-minion-9vlv/10.240.0.5
+> kubectl get node kubernetes-minion-9vlv --show-labels
+NAME                     STATUS    AGE    VERSION          LABELS
+kubernetes-minion-9vlv   Ready     22m    v1.6.0+fff5156   beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-9vlv
+```
+
+### Pods are spread across zones
+
+Pods in a replication controller or service are automatically spread
+across zones. First, let's launch more nodes in a third zone:
+
+GCE:
+
+```shell
+KUBE_USE_EXISTING_MASTER=true MULTIZONE=true KUBERNETES_PROVIDER=gce KUBE_GCE_ZONE=us-central1-f NUM_NODES=3 kubernetes/cluster/kube-up.sh
+```
+
+AWS:
+
+```shell
+KUBE_USE_EXISTING_MASTER=true MULTIZONE=true KUBERNETES_PROVIDER=aws KUBE_AWS_ZONE=us-west-2c NUM_NODES=3 KUBE_SUBNET_CIDR=172.20.2.0/24 MASTER_INTERNAL_IP=172.20.0.9 kubernetes/cluster/kube-up.sh
+```
+
+Verify that you now have nodes in 3 zones:
+
+```shell
+kubectl get nodes --show-labels
+```
+
+Create the guestbook-go example, which includes an RC of size 3, running a simple web app:
+
+```shell
+find kubernetes/examples/guestbook-go/ -name '*.json' | xargs -I {} kubectl create -f {}
+```
+
+The pods should be spread across all 3 zones:
+
+```shell
+> kubectl describe pod -l app=guestbook | grep Node
+Node: kubernetes-minion-9vlv/10.240.0.5
+Node: kubernetes-minion-281d/10.240.0.8
+Node: kubernetes-minion-olsh/10.240.0.11
+
+ > kubectl get node kubernetes-minion-9vlv kubernetes-minion-281d kubernetes-minion-olsh --show-labels
+NAME STATUS AGE VERSION LABELS
+kubernetes-minion-9vlv Ready 34m v1.6.0+fff5156 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-9vlv
+kubernetes-minion-281d Ready 20m v1.6.0+fff5156 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-b,kubernetes.io/hostname=kubernetes-minion-281d
+kubernetes-minion-olsh Ready 3m v1.6.0+fff5156 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-f,kubernetes.io/hostname=kubernetes-minion-olsh
+```
+
+
+Load-balancers span all zones in a cluster; the guestbook-go example
+includes an example load-balanced service:
+
+```shell
+> kubectl describe service guestbook | grep LoadBalancer.Ingress
+LoadBalancer Ingress: 130.211.126.21
+
+> ip=130.211.126.21
+
+> curl -s http://${ip}:3000/env | grep HOSTNAME
+ "HOSTNAME": "guestbook-44sep",
+
+> (for i in `seq 20`; do curl -s http://${ip}:3000/env | grep HOSTNAME; done) | sort | uniq
+ "HOSTNAME": "guestbook-44sep",
+ "HOSTNAME": "guestbook-hum5n",
+ "HOSTNAME": "guestbook-ppm40",
+```
+
+The load balancer correctly targets all the pods, even though they are in multiple zones.
+
+### Shutting down the cluster
+
+When you're done, clean up:
+
+GCE:
+
+```shell
+KUBERNETES_PROVIDER=gce KUBE_USE_EXISTING_MASTER=true KUBE_GCE_ZONE=us-central1-f kubernetes/cluster/kube-down.sh
+KUBERNETES_PROVIDER=gce KUBE_USE_EXISTING_MASTER=true KUBE_GCE_ZONE=us-central1-b kubernetes/cluster/kube-down.sh
+KUBERNETES_PROVIDER=gce KUBE_GCE_ZONE=us-central1-a kubernetes/cluster/kube-down.sh
+```
+
+AWS:
+
+```shell
+KUBERNETES_PROVIDER=aws KUBE_USE_EXISTING_MASTER=true KUBE_AWS_ZONE=us-west-2c kubernetes/cluster/kube-down.sh
+KUBERNETES_PROVIDER=aws KUBE_USE_EXISTING_MASTER=true KUBE_AWS_ZONE=us-west-2b kubernetes/cluster/kube-down.sh
+KUBERNETES_PROVIDER=aws KUBE_AWS_ZONE=us-west-2a kubernetes/cluster/kube-down.sh
+```
diff --git a/content/ko/docs/setup/node-conformance.md b/content/ko/docs/setup/node-conformance.md
new file mode 100644
index 000000000..73717175c
--- /dev/null
+++ b/content/ko/docs/setup/node-conformance.md
@@ -0,0 +1,97 @@
+---
+title: Validate Node Setup
+---
+
+{{< toc >}}
+
+## Node Conformance Test
+
+*Node conformance test* is a containerized test framework that provides a system
+verification and functionality test for a node. The test validates whether the
+node meets the minimum requirements for Kubernetes; a node that passes the test
+is qualified to join a Kubernetes cluster.
+
+## Limitations
+
+In Kubernetes version 1.5, node conformance test has the following limitations:
+
+* Node conformance test only supports Docker as the container runtime.
+
+## Node Prerequisite
+
+To run node conformance test, a node must satisfy the same prerequisites as a
+standard Kubernetes node. At a minimum, the node should have the following
+daemons installed:
+
+* Container Runtime (Docker)
+* Kubelet
+
+## Running Node Conformance Test
+
+To run the node conformance test, perform the following steps:
+
+1. Point your Kubelet to localhost with `--api-servers="http://localhost:8080"`,
+because the test framework starts a local master to test the Kubelet (a sketch of
+such a Kubelet invocation follows the `docker run` command below). There are some
+other Kubelet flags you may care about:
+ * `--pod-cidr`: If you are using `kubenet`, you should specify an arbitrary CIDR
+ to Kubelet, for example `--pod-cidr=10.180.0.0/24`.
+ * `--cloud-provider`: If you are using `--cloud-provider=gce`, you should
+ remove the flag to run the test.
+
+2. Run the node conformance test with command:
+
+```shell
+# $CONFIG_DIR is the pod manifest path of your Kubelet.
+# $LOG_DIR is the test output path.
+sudo docker run -it --rm --privileged --net=host \
+ -v /:/rootfs -v $CONFIG_DIR:$CONFIG_DIR -v $LOG_DIR:/var/result \
+ k8s.gcr.io/node-test:0.2
+```
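+
+For step 1, a sketch of such a standalone Kubelet invocation might look like the following
+(flags and paths are examples only; adjust them to your environment, and omit `--pod-cidr`
+if you are not using `kubenet`):
+
+```shell
+kubelet --api-servers="http://localhost:8080" \
+  --pod-manifest-path=$CONFIG_DIR \
+  --pod-cidr=10.180.0.0/24
+```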
+
+## Running Node Conformance Test for Other Architectures
+
+Kubernetes also provides node conformance test docker images for other
+architectures:
+
+ Arch | Image |
+--------|:-----------------:|
+ amd64 | node-test-amd64 |
+ arm | node-test-arm |
+ arm64 | node-test-arm64 |
+
+## Running Selected Test
+
+To run specific tests, overwrite the environment variable `FOCUS` with the
+regular expression of tests you want to run.
+
+```shell
+# Only run the MirrorPod test
+sudo docker run -it --rm --privileged --net=host \
+  -v /:/rootfs:ro -v $CONFIG_DIR:$CONFIG_DIR -v $LOG_DIR:/var/result \
+  -e FOCUS=MirrorPod \
+  k8s.gcr.io/node-test:0.2
+```
+
+To skip specific tests, overwrite the environment variable `SKIP` with the
+regular expression of tests you want to skip.
+
+```shell
+# Run all conformance tests but skip the MirrorPod test
+sudo docker run -it --rm --privileged --net=host \
+  -v /:/rootfs:ro -v $CONFIG_DIR:$CONFIG_DIR -v $LOG_DIR:/var/result \
+  -e SKIP=MirrorPod \
+  k8s.gcr.io/node-test:0.2
+```
+
+Node conformance test is a containerized version of [node e2e test](https://github.com/kubernetes/community/blob/{{< param "githubbranch" >}}/contributors/devel/e2e-node-tests.md).
+By default, it runs all conformance tests.
+
+Theoretically, you can run any node e2e test if you configure the container and
+mount the required volumes properly. However, **it is strongly recommended to run only the
+conformance tests**, because running non-conformance tests requires much more complex configuration.
+
+## Caveats
+
+* The test leaves some docker images on the node, including the node conformance
+ test image and images of containers used in the functionality
+ test.
+* The test leaves dead containers on the node. These containers are created
+ during the functionality test.
diff --git a/content/ko/docs/setup/on-premises-vm/_index.md b/content/ko/docs/setup/on-premises-vm/_index.md
new file mode 100644
index 000000000..d824259ab
--- /dev/null
+++ b/content/ko/docs/setup/on-premises-vm/_index.md
@@ -0,0 +1,3 @@
+---
+title: On-Premises VMs
+---
diff --git a/content/ko/docs/setup/on-premises-vm/cloudstack.md b/content/ko/docs/setup/on-premises-vm/cloudstack.md
new file mode 100644
index 000000000..fa2924927
--- /dev/null
+++ b/content/ko/docs/setup/on-premises-vm/cloudstack.md
@@ -0,0 +1,120 @@
+---
+title: Cloudstack
+content_template: templates/concept
+---
+
+{{% capture overview %}}
+
+[CloudStack](https://cloudstack.apache.org/) is software for building public and private clouds based on hardware virtualization principles (traditional IaaS). To deploy Kubernetes on CloudStack there are several possibilities depending on the cloud being used and what images are made available. CloudStack also has a Vagrant plugin available, hence Vagrant could be used to deploy Kubernetes either using the existing shell provisioner or using new Salt based recipes.
+
+[CoreOS](http://coreos.com) templates for CloudStack are built [nightly](http://stable.release.core-os.net/amd64-usr/current/). CloudStack operators need to [register](http://docs.cloudstack.apache.org/projects/cloudstack-administration/en/latest/templates.html) this template in their cloud before proceeding with these Kubernetes deployment instructions.
+
+This guide uses a single [Ansible playbook](https://github.com/apachecloudstack/k8s), which is completely automated and can deploy Kubernetes on a CloudStack based cloud using CoreOS images. The playbook creates an SSH key pair, creates a security group and associated rules, and finally starts CoreOS instances configured via cloud-init.
+
+{{% /capture %}}
+
+{{% capture body %}}
+
+## Prerequisites
+
+```shell
+sudo apt-get install -y python-pip libssl-dev
+sudo pip install cs
+sudo pip install sshpubkeys
+sudo apt-get install software-properties-common
+sudo apt-add-repository ppa:ansible/ansible
+sudo apt-get update
+sudo apt-get install ansible
+```
+
+On the CloudStack server you also have to install libselinux-python:
+
+```shell
+yum install libselinux-python
+```
+
+[_cs_](https://github.com/exoscale/cs) is a python module for the CloudStack API.
+
+Set your CloudStack endpoint, API keys and HTTP method used.
+
+You can define them as environment variables: `CLOUDSTACK_ENDPOINT`, `CLOUDSTACK_KEY`, `CLOUDSTACK_SECRET` and `CLOUDSTACK_METHOD`.
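+
+For example (a sketch; replace the placeholder values with your own):
+
+```shell
+export CLOUDSTACK_ENDPOINT=<your CloudStack API endpoint>
+export CLOUDSTACK_KEY=<your API key>
+export CLOUDSTACK_SECRET=<your API secret key>
+export CLOUDSTACK_METHOD=post
+```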
+
+Or create a `~/.cloudstack.ini` file:
+
+```none
+[cloudstack]
+endpoint = <your CloudStack API endpoint>
+key = <your API key>
+secret = <your API secret key>
+method = post
+```
+
+We need to use the HTTP POST method to pass the _large_ userdata to the CoreOS instances.
+
+### Clone the playbook
+
+```shell
+git clone https://github.com/apachecloudstack/k8s kubernetes-cloudstack
+cd kubernetes-cloudstack
+```
+
+### Create a Kubernetes cluster
+
+You simply need to run the playbook.
+
+```shell
+ansible-playbook k8s.yml
+```
+
+Some variables can be edited in the `k8s.yml` file.
+
+```none
+vars:
+ ssh_key: k8s
+ k8s_num_nodes: 2
+ k8s_security_group_name: k8s
+ k8s_node_prefix: k8s2
+ k8s_template: <templatename>
+ k8s_instance_type: <serviceofferingname>
+```
+
+This will start a Kubernetes master node and a number of compute nodes (by default 2).
+The `k8s_instance_type` and `k8s_template` values are cloud specific; edit them to specify the template and instance type (i.e. service offering) of your CloudStack cloud.
+
+Check the tasks and templates in `roles/k8s` if you want to modify anything.
+
+Once the playbook has finished, it will print out the IP of the Kubernetes master:
+
+```none
+TASK: [k8s | debug msg='k8s master IP is {{ k8s_master.default_ip }}'] ********
+```
+
+SSH to it using the key that was created and using the _core_ user.
+
+```shell
+ssh -i ~/.ssh/id_rsa_k8s core@<k8s master IP>
+```
+
+And you can list the machines in your cluster:
+
+```shell
+fleetctl list-machines
+```
+
+```none
+MACHINE         IP              METADATA
+a017c422...     <node #1 IP>    role=node
+ad13bf84...     <master IP>     role=master
+e9af8293...     <node #2 IP>    role=node
+```
+
+## Support Level
+
+
+IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level
+-------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ----------------------------
+CloudStack | Ansible | CoreOS | flannel | [docs](/docs/setup/on-premises-vm/cloudstack/) | | Community ([@Guiques](https://github.com/ltupin/))
+
+For support level information on all solutions, see the [Table of solutions](/docs/setup/pick-right-solution/#table-of-solutions) chart.
+
+{{% /capture %}}
diff --git a/content/ko/docs/setup/on-premises-vm/dcos.md b/content/ko/docs/setup/on-premises-vm/dcos.md
new file mode 100644
index 000000000..f9cb4177f
--- /dev/null
+++ b/content/ko/docs/setup/on-premises-vm/dcos.md
@@ -0,0 +1,23 @@
+---
+title: Kubernetes on DC/OS
+content_template: templates/concept
+---
+
+{{% capture overview %}}
+
+Mesosphere provides an easy option to provision Kubernetes onto [DC/OS](https://mesosphere.com/product/), offering:
+
+* Pure upstream Kubernetes
+* Single-click cluster provisioning
+* Highly available and secure by default
+* Kubernetes running alongside fast-data platforms (e.g. Akka, Cassandra, Kafka, Spark)
+
+{{% /capture %}}
+
+{{% capture body %}}
+
+## Official Mesosphere Guide
+
+The canonical source of getting started on DC/OS is located in the [quickstart repo](https://github.com/mesosphere/dcos-kubernetes-quickstart).
+
+{{% /capture %}}
diff --git a/content/ko/docs/setup/on-premises-vm/ovirt.md b/content/ko/docs/setup/on-premises-vm/ovirt.md
new file mode 100644
index 000000000..8f0aa4383
--- /dev/null
+++ b/content/ko/docs/setup/on-premises-vm/ovirt.md
@@ -0,0 +1,70 @@
+---
+title: oVirt
+content_template: templates/concept
+---
+
+{{% capture overview %}}
+
+oVirt is a virtual datacenter manager that delivers powerful management of multiple virtual machines on multiple hosts. Using KVM and libvirt, oVirt can be installed on Fedora, CentOS, or Red Hat Enterprise Linux hosts to set up and manage your virtual data center.
+
+{{% /capture %}}
+
+{{% capture body %}}
+
+## oVirt Cloud Provider Deployment
+
+The oVirt cloud provider allows you to easily discover and automatically add new VM instances as nodes to your Kubernetes cluster.
+At the moment there are no community-supported or pre-loaded VM images including Kubernetes, but it is possible to [import] or [install] Project Atomic (or Fedora) in a VM to [generate a template]. Any other distribution that includes Kubernetes may work as well.
+
+It is mandatory to [install the ovirt-guest-agent] in the guests for the VM IP address and hostname to be reported to ovirt-engine and ultimately to Kubernetes.
+
+Once the Kubernetes template is available it is possible to start instantiating VMs that can be discovered by the cloud provider.
+
+[import]: http://ovedou.blogspot.it/2014/03/importing-glance-images-as-ovirt.html
+[install]: https://www.ovirt.org/documentation/quickstart/quickstart-guide/#create-virtual-machines
+[generate a template]: https://www.ovirt.org/documentation/quickstart/quickstart-guide/#using-templates
+[install the ovirt-guest-agent]: http://www.ovirt.org/documentation/how-to/guest-agent/install-the-guest-agent-in-fedora/
+
+## Using the oVirt Cloud Provider
+
+The oVirt Cloud Provider requires access to the oVirt REST API to gather the proper information; the required credentials should be specified in the `ovirt-cloud.conf` file:
+
+```none
+[connection]
+uri = https://localhost:8443/ovirt-engine/api
+username = admin@internal
+password = admin
+```
+
+In the same file it is possible to specify (using the `filters` section) what search query to use to identify the VMs to be reported to Kubernetes:
+
+```none
+[filters]
+# Search query used to find nodes
+vms = tag=kubernetes
+```
+
+In the above example all the VMs tagged with the `kubernetes` label will be reported as nodes to Kubernetes.
+
+The `ovirt-cloud.conf` file must then be specified to kube-controller-manager:
+
+```shell
+kube-controller-manager ... --cloud-provider=ovirt --cloud-config=/path/to/ovirt-cloud.conf ...
+```
+
+## oVirt Cloud Provider Screencast
+
+This short screencast demonstrates how the oVirt Cloud Provider can be used to dynamically add VMs to your Kubernetes cluster.
+
+[![Screencast](https://img.youtube.com/vi/JyyST4ZKne8/0.jpg)](https://www.youtube.com/watch?v=JyyST4ZKne8)
+
+## Support Level
+
+
+IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level
+-------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ----------------------------
+oVirt | | | | [docs](/docs/setup/on-premises-vm/ovirt/) | | Community ([@simon3z](https://github.com/simon3z))
+
+For support level information on all solutions, see the [Table of solutions](/docs/setup/pick-right-solution/#table-of-solutions) chart.
+
+{{% /capture %}}
diff --git a/content/ko/docs/setup/pick-right-solution.md b/content/ko/docs/setup/pick-right-solution.md
new file mode 100644
index 000000000..809921237
--- /dev/null
+++ b/content/ko/docs/setup/pick-right-solution.md
@@ -0,0 +1,239 @@
+---
+title: Picking the Right Solution
+content_template: templates/concept
+---
+
+{{% capture overview %}}
+
+Kubernetes can run on various platforms: from your laptop, to VMs on a cloud provider, to a rack of
+bare metal servers. The effort required to set up a cluster varies from running a single command to
+crafting your own customized cluster. Use this guide to choose a solution that fits your needs.
+
+If you just want to "kick the tires" on Kubernetes, use the [local Docker-based solutions](#local-machine-solutions).
+
+When you are ready to scale up to more machines and higher availability, a [hosted solution](#hosted-solutions) is the easiest to create and maintain.
+
+[Turnkey cloud solutions](#turnkey-cloud-solutions) require only a few commands to create
+and cover a wide range of cloud providers. [On-Premises turnkey cloud solutions](#on-premises-turnkey-cloud-solutions) have the simplicity of the turnkey cloud solution combined with the security of your own private network.
+
+If you already have a way to configure hosting resources, use [kubeadm](/docs/setup/independent/create-cluster-kubeadm/) to easily bring up a cluster with a single command per machine.
+
+[Custom solutions](#custom-solutions) vary from step-by-step instructions to general advice for setting up
+a Kubernetes cluster from scratch.
+
+{{% /capture %}}
+
+{{% capture body %}}
+
+## Local-machine Solutions
+
+* [Minikube](/docs/setup/minikube/) is the recommended method for creating a local, single-node Kubernetes cluster for development and testing. Setup is completely automated and doesn't require a cloud provider account.
+
+* [Kubeadm-dind](https://github.com/kubernetes-sigs/kubeadm-dind-cluster) is a multi-node Kubernetes cluster (whereas minikube is single-node) that only requires a Docker daemon. It uses the docker-in-docker technique to spawn the Kubernetes cluster.
+
+* [Ubuntu on LXD](/docs/getting-started-guides/ubuntu/local/) supports a nine-instance deployment on localhost.
+
+* [IBM Cloud Private-CE (Community Edition)](https://github.com/IBM/deploy-ibm-cloud-private) can use VirtualBox on your machine to deploy Kubernetes to one or more VMs for development and test scenarios. Scales to full multi-node cluster.
+
+* [IBM Cloud Private-CE (Community Edition) on Linux Containers](https://github.com/HSBawa/icp-ce-on-linux-containers) provides Terraform/Packer/BASH based Infrastructure as Code (IaC) scripts to create a seven node (1 Boot, 1 Master, 1 Management, 1 Proxy and 3 Workers) LXD cluster on a Linux host.
+
+## Hosted Solutions
+
+* [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine/) offers managed Kubernetes clusters.
+
+* [Amazon Elastic Container Service for Kubernetes](https://aws.amazon.com/eks/) offers managed Kubernetes service.
+
+* [Azure Container Service](https://azure.microsoft.com/services/container-service/) offers managed Kubernetes clusters.
+
+* [Stackpoint.io](https://stackpoint.io) provides Kubernetes infrastructure automation and management for multiple public clouds.
+
+* [AppsCode.com](https://appscode.com/products/cloud-deployment/) provides managed Kubernetes clusters for various public clouds, including AWS and Google Cloud Platform.
+
+* [Madcore.Ai](https://madcore.ai) is a devops-focused CLI tool for deploying Kubernetes infrastructure in AWS. It sets up a master, an auto-scaling group of nodes with spot-instances, ingress-ssl-lego, Heapster, and Grafana.
+
+* [Platform9](https://platform9.com/products/kubernetes/) offers managed Kubernetes on-premises or on any public cloud, and provides 24/7 health monitoring and alerting. (Kube2go, a web-UI driven Kubernetes cluster deployment service that Platform9 released, has been integrated into the Platform9 Sandbox.)
+
+* [OpenShift Dedicated](https://www.openshift.com/dedicated/) offers managed Kubernetes clusters powered by OpenShift.
+
+* [OpenShift Online](https://www.openshift.com/features/) provides free hosted access for Kubernetes applications.
+
+* [IBM Cloud Container Service](https://console.bluemix.net/docs/containers/container_index.html) offers managed Kubernetes clusters with isolation choice, operational tools, integrated security insight into images and containers, and integration with Watson, IoT, and data.
+
+* [Giant Swarm](https://giantswarm.io/product/) offers managed Kubernetes clusters in their own datacenter, on-premises, or on public clouds.
+
+* [Kubermatic](https://www.loodse.com) provides managed Kubernetes clusters for various public clouds, including AWS and Digital Ocean, as well as on-premises with OpenStack integration.
+
+* [Pivotal Container Service](https://pivotal.io/platform/pivotal-container-service) provides enterprise-grade Kubernetes for both on-premises and public clouds. PKS enables on-demand provisioning of Kubernetes clusters, multi-tenancy and fully automated day-2 operations.
+
+* [Oracle Container Engine for Kubernetes](https://docs.us-phoenix-1.oraclecloud.com/Content/ContEng/Concepts/contengoverview.htm) is a fully-managed, scalable, and highly available service that you can use to deploy your containerized applications to the cloud.
+
+* [Kublr](https://kublr.com) offers enterprise-grade secure, scalable, highly reliable Kubernetes clusters on AWS, Azure, GCP, and on-premise. It includes out-of-the-box backup and disaster recovery, multi-cluster centralized logging and monitoring, and built-in alerting.
+
+## Turnkey Cloud Solutions
+
+These solutions allow you to create Kubernetes clusters on a range of Cloud IaaS providers with only a
+few commands. These solutions are actively developed and have active community support.
+
+* [Conjure-up Kubernetes with Ubuntu on AWS, Azure, Google Cloud, Oracle Cloud](/docs/getting-started-guides/ubuntu/)
+* [Google Compute Engine (GCE)](/docs/setup/turnkey/gce/)
+* [AWS](/docs/setup/turnkey/aws/)
+* [Azure](/docs/setup/turnkey/azure/)
+* [Tectonic by CoreOS](https://coreos.com/tectonic)
+* [CenturyLink Cloud](/docs/setup/turnkey/clc/)
+* [IBM Cloud](https://github.com/patrocinio/kubernetes-softlayer)
+* [Stackpoint.io](/docs/setup/turnkey/stackpoint/)
+* [Madcore.Ai](https://madcore.ai/)
+* [Kubermatic](https://cloud.kubermatic.io)
+* [Rancher 2.0](https://rancher.com/docs/rancher/v2.x/en/)
+* [Oracle Container Engine for K8s](https://docs.us-phoenix-1.oraclecloud.com/Content/ContEng/Concepts/contengprerequisites.htm)
+* [Gardener](https://gardener.cloud/)
+* [Kontena Pharos](https://kontena.io/pharos/)
+* [Kublr](https://kublr.com/)
+* [Alibaba Cloud](/docs/setup/turnkey/alibaba-cloud/)
+
+## On-Premises turnkey cloud solutions
+These solutions allow you to create Kubernetes clusters on your internal, secure, cloud network with only a
+few commands.
+
+* [IBM Cloud Private](https://www.ibm.com/cloud-computing/products/ibm-cloud-private/)
+* [Kubermatic](https://www.loodse.com)
+* [SUSE CaaS Platform](https://www.suse.com/products/caas-platform)
+* [SUSE Cloud Application Platform](https://www.suse.com/products/cloud-application-platform/)
+* [Rancher 2.0](https://rancher.com/docs/rancher/v2.x/en/)
+* [Kontena Pharos](https://kontena.io/pharos/)
+* [Kublr](https://kublr.com/)
+
+## Custom Solutions
+
+Kubernetes can run on a wide range of Cloud providers and bare-metal environments, and with many
+base operating systems.
+
+If you can find a guide below that matches your needs, use it. It may be a little out of date, but
+it will be easier than starting from scratch. If you do want to start from scratch, either because you
+have special requirements, or just because you want to understand what is underneath a Kubernetes
+cluster, try the [Getting Started from Scratch](/docs/setup/scratch/) guide.
+
+If you are interested in supporting Kubernetes on a new platform, see
+[Writing a Getting Started Guide](https://git.k8s.io/community/contributors/devel/writing-a-getting-started-guide.md).
+
+### Universal
+
+If you already have a way to configure hosting resources, use
+[kubeadm](/docs/setup/independent/create-cluster-kubeadm/) to easily bring up a cluster
+with a single command per machine.
+
+### Cloud
+
+These solutions are combinations of cloud providers and operating systems not covered by the above solutions.
+
+* [CoreOS on AWS or GCE](/docs/setup/custom-cloud/coreos/)
+* [Kubernetes on Ubuntu](/docs/getting-started-guides/ubuntu/)
+* [Kubespray](/docs/setup/custom-cloud/kubespray/)
+* [Rancher Kubernetes Engine (RKE)](https://github.com/rancher/rke)
+* [Gardener](https://gardener.cloud/)
+* [Kublr](https://kublr.com/)
+
+### On-Premises VMs
+
+* [Vagrant](/docs/setup/custom-cloud/coreos/) (uses CoreOS and flannel)
+* [CloudStack](/docs/setup/on-premises-vm/cloudstack/) (uses Ansible, CoreOS and flannel)
+* [VMware vSphere](https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/)
+* [VMware vSphere, OpenStack, or Bare Metal](/docs/getting-started-guides/ubuntu/) (uses Juju, Ubuntu and flannel)
+* [VMware](/docs/setup/custom-cloud/coreos/) (uses CoreOS and flannel)
+* [oVirt](/docs/setup/on-premises-vm/ovirt/)
+* [Fedora (Multi Node)](/docs/getting-started-guides/fedora/flannel_multi_node_cluster/) (uses Fedora and flannel)
+
+### Bare Metal
+
+* [Fedora (Single Node)](/docs/getting-started-guides/fedora/fedora_manual_config/)
+* [Fedora (Multi Node)](/docs/getting-started-guides/fedora/flannel_multi_node_cluster/)
+* [Kubernetes on Ubuntu](/docs/getting-started-guides/ubuntu/)
+* [CoreOS](/docs/setup/custom-cloud/coreos/)
+
+### Integrations
+
+These solutions provide integration with third-party schedulers, resource managers, and/or lower level platforms.
+
+* [DCOS](/docs/setup/on-premises-vm/dcos/)
+ * Community Edition DCOS uses AWS
+ * Enterprise Edition DCOS supports cloud hosting, on-premises VMs, and bare metal
+
+## Table of Solutions
+
+Below is a table of all of the solutions listed above.
+
+IaaS Provider | Config. Mgmt. | OS | Networking | Docs | Support Level
+-------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ----------------------------
+any | any | multi-support | any CNI | [docs](/docs/setup/independent/create-cluster-kubeadm/) | Project ([SIG-cluster-lifecycle](https://git.k8s.io/community/sig-cluster-lifecycle))
+Google Kubernetes Engine | | | GCE | [docs](https://cloud.google.com/kubernetes-engine/docs/) | Commercial
+Stackpoint.io | | multi-support | multi-support | [docs](https://stackpoint.io/) | Commercial
+AppsCode.com | Saltstack | Debian | multi-support | [docs](https://appscode.com/products/cloud-deployment/) | Commercial
+Madcore.Ai | Jenkins DSL | Ubuntu | flannel | [docs](https://madcore.ai) | Community ([@madcore-ai](https://github.com/madcore-ai))
+Platform9 | | multi-support | multi-support | [docs](https://platform9.com/managed-kubernetes/) | Commercial
+Kublr | custom | multi-support | multi-support | [docs](http://docs.kublr.com/) | Commercial
+Kubermatic | | multi-support | multi-support | [docs](http://docs.kubermatic.io/) | Commercial
+Giant Swarm | | CoreOS | flannel and/or Calico | [docs](https://docs.giantswarm.io/) | Commercial
+GCE | Saltstack | Debian | GCE | [docs](/docs/setup/turnkey/gce/) | Project
+Azure Container Service | | Ubuntu | Azure | [docs](https://azure.microsoft.com/en-us/services/container-service/) | Commercial
+Azure (IaaS) | | Ubuntu | Azure | [docs](/docs/setup/turnkey/azure/) | [Community (Microsoft)](https://github.com/Azure/acs-engine)
+Bare-metal | custom | Fedora | _none_ | [docs](/docs/getting-started-guides/fedora/fedora_manual_config/) | Project
+Bare-metal | custom | Fedora | flannel | [docs](/docs/getting-started-guides/fedora/flannel_multi_node_cluster/) | Community ([@aveshagarwal](https://github.com/aveshagarwal))
+libvirt | custom | Fedora | flannel | [docs](/docs/getting-started-guides/fedora/flannel_multi_node_cluster/) | Community ([@aveshagarwal](https://github.com/aveshagarwal))
+KVM | custom | Fedora | flannel | [docs](/docs/getting-started-guides/fedora/flannel_multi_node_cluster/) | Community ([@aveshagarwal](https://github.com/aveshagarwal))
+DCOS | Marathon | CoreOS/Alpine | custom | [docs](/docs/getting-started-guides/dcos/) | Community ([Kubernetes-Mesos Authors](https://github.com/mesosphere/kubernetes-mesos/blob/master/AUTHORS.md))
+AWS | CoreOS | CoreOS | flannel | [docs](/docs/setup/turnkey/aws/) | Community
+GCE | CoreOS | CoreOS | flannel | [docs](/docs/getting-started-guides/coreos/) | Community ([@pires](https://github.com/pires))
+Vagrant | CoreOS | CoreOS | flannel | [docs](/docs/getting-started-guides/coreos/) | Community ([@pires](https://github.com/pires), [@AntonioMeireles](https://github.com/AntonioMeireles))
+CloudStack | Ansible | CoreOS | flannel | [docs](/docs/getting-started-guides/cloudstack/) | Community ([@sebgoa](https://github.com/sebgoa))
+VMware vSphere | any | multi-support | multi-support | [docs](https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/) | [Community](https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/contactus.html)
+Bare-metal | custom | CentOS | flannel | [docs](/docs/getting-started-guides/centos/centos_manual_config/) | Community ([@coolsvap](https://github.com/coolsvap))
+lxd | Juju | Ubuntu | flannel/canal | [docs](/docs/getting-started-guides/ubuntu/local/) | [Commercial](https://www.ubuntu.com/kubernetes) and [Community](https://jujucharms.com/kubernetes)
+AWS | Juju | Ubuntu | flannel/calico/canal | [docs](/docs/getting-started-guides/ubuntu/) | [Commercial](https://www.ubuntu.com/kubernetes) and [Community](https://jujucharms.com/kubernetes)
+Azure | Juju | Ubuntu | flannel/calico/canal | [docs](/docs/getting-started-guides/ubuntu/) | [Commercial](https://www.ubuntu.com/kubernetes) and [Community](https://jujucharms.com/kubernetes)
+GCE | Juju | Ubuntu | flannel/calico/canal | [docs](/docs/getting-started-guides/ubuntu/) | [Commercial](https://www.ubuntu.com/kubernetes) and [Community](https://jujucharms.com/kubernetes)
+Oracle Cloud | Juju | Ubuntu | flannel/calico/canal | [docs](/docs/getting-started-guides/ubuntu/) | [Commercial](https://www.ubuntu.com/kubernetes) and [Community](https://jujucharms.com/kubernetes)
+Rackspace | Juju | Ubuntu | flannel/calico/canal | [docs](/docs/getting-started-guides/ubuntu/) | [Commercial](https://www.ubuntu.com/kubernetes) and [Community](https://jujucharms.com/kubernetes)
+VMware vSphere | Juju | Ubuntu | flannel/calico/canal | [docs](/docs/getting-started-guides/ubuntu/) | [Commercial](https://www.ubuntu.com/kubernetes) and [Community](https://jujucharms.com/kubernetes)
+Bare Metal | Juju | Ubuntu | flannel/calico/canal | [docs](/docs/getting-started-guides/ubuntu/) | [Commercial](https://www.ubuntu.com/kubernetes) and [Community](https://jujucharms.com/kubernetes)
+AWS | Saltstack | Debian | AWS | [docs](/docs/setup/turnkey/aws/) | Community ([@justinsb](https://github.com/justinsb))
+AWS | kops | Debian | AWS | [docs](https://github.com/kubernetes/kops/) | Community ([@justinsb](https://github.com/justinsb))
+Bare-metal | custom | Ubuntu | flannel | [docs](/docs/getting-started-guides/ubuntu/) | Community ([@resouer](https://github.com/resouer), [@WIZARD-CXY](https://github.com/WIZARD-CXY))
+oVirt | | | | [docs](/docs/setup/on-premises-vm/ovirt/) | Community ([@simon3z](https://github.com/simon3z))
+any | any | any | any | [docs](/docs/setup/scratch/) | Community ([@erictune](https://github.com/erictune))
+any | any | any | any | [docs](http://docs.projectcalico.org/v2.2/getting-started/kubernetes/installation/) | Commercial and Community
+any | RKE | multi-support | flannel or canal | [docs](https://rancher.com/docs/rancher/v2.x/en/quick-start-guide/) | [Commercial](https://rancher.com/what-is-rancher/overview/) and [Community](https://github.com/rancher/rancher)
+any | [Gardener Cluster-Operator](https://kubernetes.io/blog/2018/05/17/gardener/) | multi-support | multi-support | [docs](https://gardener.cloud) | [Project/Community](https://github.com/gardener) and [Commercial]( https://cloudplatform.sap.com/)
+Alibaba Cloud Container Service For Kubernetes | ROS | CentOS | flannel/Terway | [docs](https://www.aliyun.com/product/containerservice) | Commercial
+
+{{< note >}}
+**Note:** The above table is ordered by the version tested and used in nodes, followed by support level.
+{{< /note >}}
+
+### Definition of columns
+
+* **IaaS Provider** is the product or organization which provides the virtual or physical machines (nodes) that Kubernetes runs on.
+* **OS** is the base operating system of the nodes.
+* **Config. Mgmt.** is the configuration management system that helps install and maintain Kubernetes on the
+ nodes.
+* **Networking** is what implements the [networking model](/docs/concepts/cluster-administration/networking/). Those with networking type
+ _none_ may not support more than a single node, or may support multiple VM nodes in a single physical node.
+* **Conformance** indicates whether a cluster created with this configuration has passed the project's conformance
+ tests for supporting the API and base features of Kubernetes v1.0.0.
+* **Support Levels**
+ * **Project**: Kubernetes committers regularly use this configuration, so it usually works with the latest release
+ of Kubernetes.
+ * **Commercial**: A commercial offering with its own support arrangements.
+ * **Community**: Actively supported by community contributions. May not work with recent releases of Kubernetes.
+ * **Inactive**: Not actively maintained. Not recommended for first-time Kubernetes users, and may be removed.
+* **Notes** has other relevant information, such as the version of Kubernetes used.
+
+
+
+
+[1]: https://gist.github.com/erictune/4cabc010906afbcc5061
+
+[2]: https://gist.github.com/derekwaynecarr/505e56036cdf010bf6b6
+
+[3]: https://gist.github.com/erictune/2f39b22f72565365e59b
+
+{{% /capture %}}
diff --git a/content/ko/docs/setup/salt.md b/content/ko/docs/setup/salt.md
new file mode 100644
index 000000000..718b6fcfc
--- /dev/null
+++ b/content/ko/docs/setup/salt.md
@@ -0,0 +1,99 @@
+---
+title: Configuring Kubernetes with Salt
+---
+
+The Kubernetes cluster can be configured using Salt.
+
+The Salt scripts are shared across multiple hosting providers, and depending on where you host your Kubernetes cluster, you may be using different operating systems and networking configurations. As a result, it's important to understand some background before making Salt changes, in order to avoid introducing failures for other hosting providers.
+
+## Salt cluster setup
+
+The **salt-master** service runs on the kubernetes-master [(except on the default GCE and OpenStack-Heat setup)](#standalone-salt-configuration-on-gce-and-others).
+
+The **salt-minion** service runs on the kubernetes-master and each kubernetes-node in the cluster.
+
+Each salt-minion service is configured to interact with the **salt-master** service hosted on the kubernetes-master via the **master.conf** file [(except on GCE and OpenStack-Heat)](#standalone-salt-configuration-on-gce-and-others).
+
+```shell
+[root@kubernetes-master] $ cat /etc/salt/minion.d/master.conf
+master: kubernetes-master
+```
+
+Each salt-minion contacts the salt-master, which, depending on the machine information presented, provisions the machine as either a kubernetes-master or a kubernetes-node with all the capabilities required to run Kubernetes.
+
+If you are running the Vagrant based environment, the **salt-api** service is running on the kubernetes-master. It is configured to enable the vagrant user to introspect the salt cluster in order to find out about machines in the Vagrant environment via a REST API.
+
+## Standalone Salt Configuration on GCE and others
+
+On GCE and OpenStack, using the Openstack-Heat provider, the master and nodes are all configured as [standalone minions](http://docs.saltstack.com/en/latest/topics/tutorials/standalone_minion.html). The configuration for each VM is derived from the VM's [instance metadata](https://cloud.google.com/compute/docs/metadata) and then stored in Salt grains (`/etc/salt/minion.d/grains.conf`) and pillars (`/srv/salt-overlay/pillar/cluster-params.sls`) that local Salt uses to enforce state.
+
+All remaining sections that refer to master/minion setups should be ignored for GCE and OpenStack. One fallout of this setup is that the Salt mine doesn't exist - there is no sharing of configuration amongst nodes.
+
+## Salt security
+
+*(Not applicable on default GCE and OpenStack-Heat setup.)*
+
+Security is not enabled on the salt-master, and the salt-master is configured to auto-accept incoming requests from minions. It is not recommended to use this security configuration in production environments without deeper study. (In some environments this isn't as bad as it might sound if the salt master port isn't externally accessible and you trust everyone on your network.)
+
+```shell
+[root@kubernetes-master] $ cat /etc/salt/master.d/auto-accept.conf
+open_mode: True
+auto_accept: True
+```
+
+## Salt minion configuration
+
+Each minion in the salt cluster has an associated configuration that instructs the salt-master how to provision the required resources on the machine.
+
+An example file is presented below using the Vagrant based environment.
+
+```shell
+[root@kubernetes-master] $ cat /etc/salt/minion.d/grains.conf
+grains:
+ etcd_servers: $MASTER_IP
+ cloud: vagrant
+ roles:
+ - kubernetes-master
+```
+
+Each hosting environment has a slightly different grains.conf file that is used to build conditional logic where required in the Salt files.
+
+The following enumerates the set of defined key/value pairs that are supported today. If you add new ones, please make sure to update this list.
+
+Key | Value
+-----------------------------------|----------------------------------------------------------------
+`api_servers` | (Optional) The IP address / host name where a kubelet can get read-only access to kube-apiserver
+`cbr-cidr` | (Optional) The minion IP address range used for the docker container bridge.
+`cloud` | (Optional) Which IaaS platform is used to host Kubernetes, *gce*, *azure*, *aws*, *vagrant*
+`etcd_servers` | (Optional) Comma-delimited list of IP addresses the kube-apiserver and kubelet use to reach etcd. Uses the IP of the first machine in the kubernetes_master role, or 127.0.0.1 on GCE.
+`hostnamef` | (Optional) The full host name of the machine, i.e. the output of `uname -n`
+`node_ip` | (Optional) The IP address to use to address this node
+`hostname_override` | (Optional) Mapped to the kubelet hostname-override
+`network_mode` | (Optional) Networking model to use among nodes: *openvswitch*
+`networkInterfaceName` | (Optional) Networking interface to use to bind addresses, default value *eth0*
+`publicAddressOverride` | (Optional) The IP address the kube-apiserver should use to bind against for external read-only access
+`roles` | (Required) 1. `kubernetes-master` means this machine is the master in the Kubernetes cluster. 2. `kubernetes-pool` means this machine is a kubernetes-node. Depending on the role, the Salt scripts will provision different resources on the machine.
+
+These keys may be leveraged by the Salt sls files to branch behavior.
+
+In addition, a cluster may be running a Debian-based or Red Hat-based operating system (CentOS, Fedora, RHEL, etc.). As a result, it's sometimes important to distinguish behavior based on the operating system, using conditional branches like the following.
+
+```liquid
+
+{% if grains['os_family'] == 'RedHat' %}
+// something specific to a RedHat environment (Centos, Fedora, RHEL) where you may use yum, systemd, etc.
+{% else %}
+// something specific to Debian environment (apt-get, initd)
+{% endif %}
+
+```
+
+## Best Practices
+
+When configuring default arguments for processes, it's best to avoid the use of EnvironmentFiles (Systemd in Red Hat environments) or init.d files (Debian distributions) to hold default values that should be common across operating system environments. This helps keep our Salt template files easy to understand for editors who may not be familiar with the particulars of each distribution.
+
+## Future enhancements (Networking)
+
+Per pod IP configuration is provider-specific, so when making networking changes, it's important to sandbox these as all providers may not use the same mechanisms (iptables, openvswitch, etc.)
+
+We should define a grains.conf key that captures more specifically what network configuration environment is being used to avoid future confusion across providers.
diff --git a/content/ko/docs/setup/scratch.md b/content/ko/docs/setup/scratch.md
new file mode 100644
index 000000000..9c1ef325e
--- /dev/null
+++ b/content/ko/docs/setup/scratch.md
@@ -0,0 +1,879 @@
+---
+title: Creating a Custom Cluster from Scratch
+---
+
+This guide is for people who want to craft a custom Kubernetes cluster. If you
+can find an existing Getting Started Guide that meets your needs on [this
+list](/docs/setup/), then we recommend using it, as you will be able to benefit
+from the experience of others. However, if you have specific IaaS, networking,
+configuration management, or operating system requirements not met by any of
+those guides, then this guide will provide an outline of the steps you need to
+take. Note that it requires considerably more effort than using one of the
+pre-defined guides.
+
+This guide is also useful for those wanting to understand at a high level some of the
+steps that existing cluster setup scripts are making.
+
+{{< toc >}}
+
+## Designing and Preparing
+
+### Learning
+
+ 1. You should be familiar with using Kubernetes already. We suggest you set
+ up a temporary cluster by following one of the other Getting Started Guides.
+ This will help you become familiar with the CLI ([kubectl](/docs/user-guide/kubectl/)) and concepts ([pods](/docs/user-guide/pods/), [services](/docs/concepts/services-networking/service/), etc.) first.
+ 1. You should have `kubectl` installed on your desktop. This will happen as a side
+ effect of completing one of the other Getting Started Guides. If not, follow the instructions
+ [here](/docs/tasks/kubectl/install/).
+
+### Cloud Provider
+
+Kubernetes has the concept of a Cloud Provider, which is a module which provides
+an interface for managing TCP Load Balancers, Nodes (Instances) and Networking Routes.
+The interface is defined in `pkg/cloudprovider/cloud.go`. It is possible to
+create a custom cluster without implementing a cloud provider (for example if using
+bare-metal), and not all parts of the interface need to be implemented, depending
+on how flags are set on various components.
+
+### Nodes
+
+- You can use virtual or physical machines.
+- While you can build a cluster with 1 machine, in order to run all the examples and tests you
+ need at least 4 nodes.
+- Many Getting-started-guides make a distinction between the master node and regular nodes. This
+ is not strictly necessary.
+- Nodes will need to run some version of Linux with the x86_64 architecture. It may be possible
+ to run on other OSes and Architectures, but this guide does not try to assist with that.
+- Apiserver and etcd together are fine on a machine with 1 core and 1GB RAM for clusters with 10s of nodes.
+ Larger or more active clusters may benefit from more cores.
+- Other nodes can have any reasonable amount of memory and any number of cores. They need not
+ have identical configurations.
+
+### Network
+
+#### Network Connectivity
+
+Kubernetes has a distinctive [networking model](/docs/concepts/cluster-administration/networking/).
+
+Kubernetes allocates an IP address to each pod. When creating a cluster, you
+need to allocate a block of IPs for Kubernetes to use as Pod IPs. The simplest
+approach is to allocate a different block of IPs to each node in the cluster as
+the node is added. A process in one pod should be able to communicate with
+another pod using the IP of the second pod. This connectivity can be
+accomplished in two ways:
+
+- **Using an overlay network**
+ - An overlay network obscures the underlying network architecture from the
+ pod network through traffic encapsulation (for example vxlan).
+ - Encapsulation reduces performance, though exactly how much depends on your solution.
+- **Without an overlay network**
+ - Configure the underlying network fabric (switches, routers, etc.) to be aware of pod IP addresses.
+ - This does not require the encapsulation provided by an overlay, and so can achieve
+ better performance.
+
+Which method you choose depends on your environment and requirements. There are various ways
+to implement one of the above options:
+
+- **Use a network plugin which is called by Kubernetes**
+ - Kubernetes supports the [CNI](https://github.com/containernetworking/cni) network plugin interface.
+ - There are a number of solutions which provide plugins for Kubernetes (listed alphabetically):
+ - [Calico](http://docs.projectcalico.org/)
+ - [Flannel](https://github.com/coreos/flannel)
+ - [Open vSwitch (OVS)](http://openvswitch.org/)
+ - [Romana](http://romana.io/)
+ - [Weave](http://weave.works/)
+ - [More found here](/docs/admin/networking#how-to-achieve-this/)
+ - You can also write your own.
+- **Compile support directly into Kubernetes**
+ - This can be done by implementing the "Routes" interface of a Cloud Provider module.
+ - The Google Compute Engine ([GCE](/docs/setup/turnkey/gce/)) and [AWS](/docs/setup/turnkey/aws/) guides use this approach.
+- **Configure the network external to Kubernetes**
+ - This can be done by manually running commands, or through a set of externally maintained scripts.
+ - You have to implement this yourself, but it can give you an extra degree of flexibility.
+
+You will need to select an address range for the Pod IPs.
+
+- Various approaches:
+ - GCE: each project has its own `10.0.0.0/8`. Carve off a `/16` for each
+ Kubernetes cluster from that space, which leaves room for several clusters.
+ Each node gets a further subdivision of this space.
+ - AWS: use one VPC for whole organization, carve off a chunk for each
+ cluster, or use different VPC for different clusters.
+- Allocate one CIDR subnet for each node's PodIPs, or a single large CIDR
+ from which smaller CIDRs are automatically allocated to each node.
+ - You need max-pods-per-node * max-number-of-nodes IPs in total. A `/24` per
+ node supports 254 pods per machine and is a common choice. If IPs are
+ scarce, a `/26` (62 pods per machine) or even a `/27` (30 pods) may be sufficient.
+ - For example, use `10.10.0.0/16` as the range for the cluster, with up to 256 nodes
+ using `10.10.0.0/24` through `10.10.255.0/24`, respectively.
+ - Need to make these routable or connect with overlay.
+
+Kubernetes also allocates an IP to each [service](/docs/concepts/services-networking/service/). However,
+service IPs do not necessarily need to be routable. The kube-proxy takes care
+of translating Service IPs to Pod IPs before traffic leaves the node. You do
+need to allocate a block of IPs for services. Call this
+`SERVICE_CLUSTER_IP_RANGE`. For example, you could set
+`SERVICE_CLUSTER_IP_RANGE="10.0.0.0/16"`, allowing 65534 distinct services to
+be active at once. Note that you can grow the end of this range, but you
+cannot move it without disrupting the services and pods that already use it.
+
+Also, you need to pick a static IP for the master node.
+
+- Call this `MASTER_IP`.
+- Open any firewalls to allow access to the apiserver ports 80 and/or 443.
+- Enable ipv4 forwarding sysctl, `net.ipv4.ip_forward = 1`
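+
+For later steps, it can help to capture the network-related choices above as environment variables. A minimal sketch with purely illustrative values (`POD_CIDR` is just a name used here; the other variables are the ones introduced above):
+
+```shell
+# Illustrative values only; substitute the ranges and addresses chosen for your cluster.
+export CLUSTER_NAME="mycluster"
+export MASTER_IP="203.0.113.10"                  # static IP picked for the master
+export SERVICE_CLUSTER_IP_RANGE="10.0.0.0/16"    # block of IPs for services
+export POD_CIDR="10.10.0.0/16"                   # overall pod IP range, e.g. a /24 per node
+
+# Enable IPv4 forwarding on each node.
+sysctl -w net.ipv4.ip_forward=1
+```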
+
+#### Network Policy
+
+Kubernetes enables the definition of fine-grained network policy between Pods using the [NetworkPolicy](/docs/concepts/services-networking/network-policies/) resource.
+
+Not all networking providers support the Kubernetes NetworkPolicy API, see [Using Network Policy](/docs/tasks/configure-pod-container/declare-network-policy/) for more information.
+
+### Cluster Naming
+
+You should pick a name for your cluster. Pick a short name for each cluster
+which is unique from future cluster names. This will be used in several ways:
+
+ - by kubectl to distinguish between various clusters you have access to. You will probably want a
+ second one sometime later, such as for testing new Kubernetes releases, running in a different
+   region of the world, etc.
+ - Kubernetes clusters can create cloud provider resources (for example, AWS ELBs) and different clusters
+ need to distinguish which resources each created. Call this `CLUSTER_NAME`.
+
+### Software Binaries
+
+You will need binaries for:
+
+ - etcd
+ - A container runner, one of:
+ - docker
+ - rkt
+ - Kubernetes
+ - kubelet
+ - kube-proxy
+ - kube-apiserver
+ - kube-controller-manager
+ - kube-scheduler
+
+#### Downloading and Extracting Kubernetes Binaries
+
+A Kubernetes binary release includes all the Kubernetes binaries as well as the supported release of etcd.
+You can use a Kubernetes binary release (recommended) or build your Kubernetes binaries following the instructions in the
+[Developer Documentation](https://git.k8s.io/community/contributors/devel/). Only using a binary release is covered in this guide.
+
+Download the [latest binary release](https://github.com/kubernetes/kubernetes/releases/latest) and unzip it.
+Server binary tarballs are no longer included in the Kubernetes final tarball, so you will need to locate and run
+`./kubernetes/cluster/get-kube-binaries.sh` to download the client and server binaries.
+Then locate `./kubernetes/server/kubernetes-server-linux-amd64.tar.gz` and unzip *that*.
+Then, within the second set of unzipped files, locate `./kubernetes/server/bin`, which contains
+all the necessary binaries.
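+
+As a rough sketch, the download-and-extract sequence described above might look like this (assuming the release tarball is named `kubernetes.tar.gz` and follows the standard release layout):
+
+```shell
+# Unpack the release, fetch the server binaries, then unpack those as well.
+tar -xzf kubernetes.tar.gz
+cd kubernetes
+./cluster/get-kube-binaries.sh                     # downloads the client and server tarballs
+tar -xzf server/kubernetes-server-linux-amd64.tar.gz
+ls kubernetes/server/bin                           # kube-apiserver, kubelet, kube-proxy, ...
+```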
+
+#### Selecting Images
+
+You will run docker, kubelet, and kube-proxy outside of a container, the same way you would run any system daemon, so
+you just need the bare binaries. For etcd, kube-apiserver, kube-controller-manager, and kube-scheduler,
+we recommend that you run these as containers, so you need an image to be built.
+
+You have several choices for Kubernetes images:
+
+- Use images hosted on Google Container Registry (GCR):
+ - For example `k8s.gcr.io/hyperkube:$TAG`, where `TAG` is the latest
+ release tag, which can be found on the [latest releases page](https://github.com/kubernetes/kubernetes/releases/latest).
+ - Ensure $TAG is the same tag as the release tag you are using for kubelet and kube-proxy.
+ - The [hyperkube](https://releases.k8s.io/{{< param "githubbranch" >}}/cmd/hyperkube) binary is an all in one binary
+ - `hyperkube kubelet ...` runs the kubelet, `hyperkube apiserver ...` runs an apiserver, etc.
+- Build your own images.
+ - Useful if you are using a private registry.
+ - The release contains files such as `./kubernetes/server/bin/kube-apiserver.tar` which
+ can be converted into docker images using a command like
+ `docker load -i kube-apiserver.tar`
+  - You can verify that the image was loaded successfully with the right repository and tag using
+    a command like `docker images`.
+
+For etcd, you can:
+
+- Use images hosted on Google Container Registry (GCR), such as `k8s.gcr.io/etcd:2.2.1`
+- Use images hosted on [Docker Hub](https://hub.docker.com/search/?q=etcd) or [Quay.io](https://quay.io/repository/coreos/etcd), such as `quay.io/coreos/etcd:v2.2.1`
+- Use etcd binary included in your OS distro.
+- Build your own image
+ - You can do: `cd kubernetes/cluster/images/etcd; make`
+
+We recommend that you use the etcd version which is provided in the Kubernetes binary distribution. The Kubernetes binaries in the release
+were tested extensively with this version of etcd and not with any other version.
+The recommended version number can also be found as the value of `TAG` in `kubernetes/cluster/images/etcd/Makefile`.
+
+The remainder of the document assumes that the image identifiers have been chosen and stored in corresponding env vars. Examples (replace with latest tags and appropriate registry):
+
+ - `HYPERKUBE_IMAGE=k8s.gcr.io/hyperkube:$TAG`
+ - `ETCD_IMAGE=k8s.gcr.io/etcd:$ETCD_VERSION`
+
+### Security Models
+
+There are two main options for security:
+
+- Access the apiserver using HTTP.
+ - Use a firewall for security.
+  - This is easier to set up.
+- Access the apiserver using HTTPS
+ - Use https with certs, and credentials for user.
+ - This is the recommended approach.
+ - Configuring certs can be tricky.
+
+If following the HTTPS approach, you will need to prepare certs and credentials.
+
+#### Preparing Certs
+
+You need to prepare several certs:
+
+- The master needs a cert to act as an HTTPS server.
+- The kubelets optionally need certs to identify themselves as clients of the master, and to serve
+  their own API over HTTPS.
+
+Unless you plan to have a real CA generate your certs, you will need
+to generate a root cert and use that to sign the master, kubelet, and
+kubectl certs. How to do this is described in the [authentication
+documentation](/docs/concepts/cluster-administration/certificates/).
+
+You will end up with the following files (we will use these variables later on):
+
+- `CA_CERT`
+  - put it on the node where the apiserver runs, for example in `/srv/kubernetes/ca.crt`.
+- `MASTER_CERT`
+  - signed by CA_CERT
+  - put it on the node where the apiserver runs, for example in `/srv/kubernetes/server.crt`
+- `MASTER_KEY`
+  - put it on the node where the apiserver runs, for example in `/srv/kubernetes/server.key`
+- `KUBELET_CERT`
+ - optional
+- `KUBELET_KEY`
+ - optional
+
+#### Preparing Credentials
+
+The admin user (and any other users) need:
+
+  - a token or a password to identify them.
+  - tokens are just long alphanumeric strings; for example, 32 characters. To generate one:
+    - `TOKEN=$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64 | tr -d "=+/[:space:]" | dd bs=32 count=1 2>/dev/null)`
+
+Your tokens and passwords need to be stored in a file for the apiserver
+to read. This guide uses `/var/lib/kube-apiserver/known_tokens.csv`.
+The format for this file is described in the [authentication documentation](/docs/reference/access-authn-authz/authentication/#static-token-file).
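+
+Assuming the static token file format described in the linked authentication documentation (`token,user,uid` per line), a minimal sketch of creating this file might be:
+
+```shell
+# Hypothetical example; $TOKEN is the token generated above, and "admin" is an illustrative user name/uid.
+mkdir -p /var/lib/kube-apiserver
+echo "${TOKEN},admin,admin" > /var/lib/kube-apiserver/known_tokens.csv
+```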
+
+For distributing credentials to clients, the convention in Kubernetes is to put the credentials
+into a [kubeconfig file](/docs/concepts/cluster-administration/authenticate-across-clusters-kubeconfig/).
+
+The kubeconfig file for the administrator can be created as follows:
+
+ - If you have already used Kubernetes with a non-custom cluster (for example, used a Getting Started
+ Guide), you will already have a `$HOME/.kube/config` file.
+ - You need to add certs, keys, and the master IP to the kubeconfig file:
+ - If using the firewall-only security option, set the apiserver this way:
+ - `kubectl config set-cluster $CLUSTER_NAME --server=http://$MASTER_IP --insecure-skip-tls-verify=true`
+ - Otherwise, do this to set the apiserver ip, client certs, and user credentials.
+ - `kubectl config set-cluster $CLUSTER_NAME --certificate-authority=$CA_CERT --embed-certs=true --server=https://$MASTER_IP`
+ - `kubectl config set-credentials $USER --client-certificate=$CLI_CERT --client-key=$CLI_KEY --embed-certs=true --token=$TOKEN`
+ - Set your cluster as the default cluster to use:
+ - `kubectl config set-context $CONTEXT_NAME --cluster=$CLUSTER_NAME --user=$USER`
+ - `kubectl config use-context $CONTEXT_NAME`
+
+Next, make a kubeconfig file for the kubelets and kube-proxy. There are a couple of options for how
+many distinct files to make:
+
+ 1. Use the same credential as the admin
+    - This is simplest to set up.
+ 1. One token and kubeconfig file for all kubelets, one for all kube-proxy, one for admin.
+ - This mirrors what is done on GCE today
+ 1. Different credentials for every kubelet, etc.
+ - We are working on this but all the pieces are not ready yet.
+
+You can make the files by copying the `$HOME/.kube/config` or by using the following template:
+
+```yaml
+apiVersion: v1
+kind: Config
+users:
+- name: kubelet
+ user:
+ token: ${KUBELET_TOKEN}
+clusters:
+- name: local
+ cluster:
+ certificate-authority: /srv/kubernetes/ca.crt
+contexts:
+- context:
+ cluster: local
+ user: kubelet
+ name: service-account-context
+current-context: service-account-context
+```
+
+Put the kubeconfig(s) on every node. The examples later in this
+guide assume that there are kubeconfigs in `/var/lib/kube-proxy/kubeconfig` and
+`/var/lib/kubelet/kubeconfig`.
+
+## Configuring and Installing Base Software on Nodes
+
+This section discusses how to configure machines to be Kubernetes nodes.
+
+You should run three daemons on every node:
+
+ - docker or rkt
+ - kubelet
+ - kube-proxy
+
+You will also need to do assorted other configuration on top of a
+base OS install.
+
+Tip: One possible starting point is to setup a cluster using an existing Getting
+Started Guide. After getting a cluster running, you can then copy the init.d scripts or systemd unit files from that
+cluster, and then modify them for use on your custom cluster.
+
+### Docker
+
+The minimum required Docker version will vary as the kubelet version changes. The newest stable release is a good choice. Kubelet will log a warning and refuse to start pods if the version is too old, so pick a version and try it.
+
+If you previously had Docker installed on a node without setting Kubernetes-specific
+options, you may have a Docker-created bridge and iptables rules. You may want to remove these
+as follows before proceeding to configure Docker for Kubernetes.
+
+```shell
+iptables -t nat -F
+ip link set docker0 down
+ip link delete docker0
+```
+
+The way you configure docker will depend on whether you have chosen the routable-vip or overlay-network approach for your network.
+Some suggested docker options:
+
+ - create your own bridge for the per-node CIDR ranges, call it cbr0, and set `--bridge=cbr0` option on docker.
+ - set `--iptables=false` so docker will not manipulate iptables for host-ports (too coarse on older docker versions, may be fixed in newer versions),
+   so that kube-proxy can manage iptables instead of docker.
+ - `--ip-masq=false`
+   - if you have set up PodIPs to be routable, then you want this false; otherwise, docker will
+     rewrite the PodIP source-address to a NodeIP.
+ - some environments (for example GCE) still need you to masquerade out-bound traffic when it leaves the cloud environment. This is very environment specific.
+ - if you are using an overlay network, consult those instructions.
+ - `--mtu=`
+ - may be required when using Flannel, because of the extra packet size due to udp encapsulation
+ - `--insecure-registry $CLUSTER_SUBNET`
+ - to connect to a private registry, if you set one up, without using SSL.
+
+You may want to increase the number of open files for docker:
+
+ - `DOCKER_NOFILE=1000000`
+
+Where this config goes depends on your node OS. For example, GCE's Debian-based distro uses `/etc/default/docker`.
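+
+Putting these suggestions together, here is a sketch of what `/etc/default/docker` might contain on a Debian-style node (the file location, option set, and values are all illustrative and depend on your network design):
+
+```shell
+# /etc/default/docker (illustrative)
+DOCKER_OPTS="--bridge=cbr0 --iptables=false --ip-masq=false"
+DOCKER_NOFILE=1000000
+```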
+
+Ensure docker is working correctly on your system before proceeding with the rest of the
+installation, by following examples given in the Docker documentation.
+
+### rkt
+
+[rkt](https://github.com/coreos/rkt) is an alternative to Docker. You only need to install one of Docker or rkt.
+The minimum version required is [v0.5.6](https://github.com/coreos/rkt/releases/tag/v0.5.6).
+
+[systemd](http://www.freedesktop.org/wiki/Software/systemd/) is required on your node to run rkt. The
+minimum version required to match rkt v0.5.6 is
+[systemd 215](http://lists.freedesktop.org/archives/systemd-devel/2014-July/020903.html).
+
+The [rkt metadata service](https://github.com/coreos/rkt/blob/master/Documentation/networking/overview.md) is also required
+for rkt networking support. You can start the rkt metadata service with a command like
+`sudo systemd-run rkt metadata-service`.
+
+Then you need to configure your kubelet with the flag:
+
+ - `--container-runtime=rkt`
+
+### kubelet
+
+All nodes should run kubelet. See [Software Binaries](#software-binaries).
+
+Arguments to consider:
+
+ - If following the HTTPS security approach:
+ - `--kubeconfig=/var/lib/kubelet/kubeconfig`
+ - Otherwise, if taking the firewall-based security approach
+ - `--config=/etc/kubernetes/manifests`
+ - `--cluster-dns=` to the address of the DNS server you will setup (see [Starting Cluster Services](#starting-cluster-services).)
+ - `--cluster-domain=` to the dns domain prefix to use for cluster DNS addresses.
+ - `--docker-root=`
+ - `--root-dir=`
+ - `--pod-cidr=` The CIDR to use for pod IP addresses, only used in standalone mode. In cluster mode, this is obtained from the master.
+ - `--register-node` (described in [Node](/docs/admin/node/) documentation.)
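+
+As a sketch, a kubelet run directly on the node with the HTTPS approach might be invoked roughly as follows; the binary path and flag values are illustrative only:
+
+```shell
+# Illustrative kubelet invocation (HTTPS approach); adjust paths and values for your cluster.
+/usr/local/bin/kubelet \
+  --kubeconfig=/var/lib/kubelet/kubeconfig \
+  --cluster-dns=10.0.0.10 \
+  --cluster-domain=cluster.local \
+  --register-node=true
+# On the master you would also point --config=/etc/kubernetes/manifests at the
+# manifest directory used for the master component pods described below.
+```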
+
+### kube-proxy
+
+All nodes should run kube-proxy. (Running kube-proxy on a "master" node is not
+strictly required, but being consistent is easier.) Obtain a binary as described for
+kubelet.
+
+Arguments to consider:
+
+ - If following the HTTPS security approach:
+ - `--master=https://$MASTER_IP`
+ - `--kubeconfig=/var/lib/kube-proxy/kubeconfig`
+ - Otherwise, if taking the firewall-based security approach
+ - `--master=http://$MASTER_IP`
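+
+A corresponding sketch for kube-proxy with the HTTPS approach (again, paths and values are illustrative):
+
+```shell
+# Illustrative kube-proxy invocation; adjust for your environment.
+/usr/local/bin/kube-proxy \
+  --master=https://${MASTER_IP} \
+  --kubeconfig=/var/lib/kube-proxy/kubeconfig
+```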
+
+Note that on some Linux platforms, you may need to manually install the
+`conntrack` package which is a dependency of kube-proxy, or else kube-proxy
+cannot be started successfully.
+
+For more details on debugging kube-proxy problems, please refer to
+[Debug Services](/docs/tasks/debug-application-cluster/debug-service/)
+
+### Networking
+
+Each node needs to be allocated its own CIDR range for pod networking.
+Call this `NODE_X_POD_CIDR`.
+
+A bridge called `cbr0` needs to be created on each node. The bridge is explained
+further in the [networking documentation](/docs/concepts/cluster-administration/networking/). The bridge itself
+needs an address from `$NODE_X_POD_CIDR` - by convention the first IP. Call
+this `NODE_X_BRIDGE_ADDR`. For example, if `NODE_X_POD_CIDR` is `10.0.0.0/16`,
+then `NODE_X_BRIDGE_ADDR` is `10.0.0.1/16`. NOTE: this retains the `/16` suffix
+because of how this is used later.
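+
+One way to create the bridge by hand with standard iproute2 tooling (a sketch; your configuration management system or network plugin may do this for you):
+
+```shell
+# Create cbr0 and give it the first address of this node's pod CIDR.
+ip link add name cbr0 type bridge
+ip link set dev cbr0 mtu 1460                # optional; match your network/overlay MTU
+ip addr add ${NODE_X_BRIDGE_ADDR} dev cbr0   # e.g. 10.0.0.1/16
+ip link set dev cbr0 up
+```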
+
+If you have turned off Docker's IP masquerading to allow pods to talk to each
+other, then you may need to do masquerading just for destination IPs outside
+the cluster network. For example:
+
+```shell
+iptables -t nat -A POSTROUTING ! -d ${CLUSTER_SUBNET} -m addrtype ! --dst-type LOCAL -j MASQUERADE
+```
+
+This will rewrite the source address from
+the PodIP to the Node IP for traffic bound outside the cluster, and kernel
+[connection tracking](http://www.iptables.info/en/connection-state.html)
+will ensure that responses destined to the node still reach
+the pod.
+
+NOTE: This is environment specific. Some environments will not need
+any masquerading at all. Others, such as GCE, will not allow pod IPs to send
+traffic to the internet, but have no problem with them inside your GCE Project.
+
+### Other
+
+- Enable auto-upgrades for your OS package manager, if desired.
+- Configure log rotation for all node components (for example using [logrotate](http://linux.die.net/man/8/logrotate)).
+- Setup liveness-monitoring (for example using [supervisord](http://supervisord.org/)).
+- Setup volume plugin support (optional)
+ - Install any client binaries for optional volume types, such as `glusterfs-client` for GlusterFS
+ volumes.
+
+### Using Configuration Management
+
+The previous steps all involved "conventional" system administration techniques for setting up
+machines. You may want to use a Configuration Management system to automate the node configuration
+process. There are examples of [Saltstack](/docs/setup/salt/), Ansible, Juju, and CoreOS Cloud Config in the
+various Getting Started Guides.
+
+## Bootstrapping the Cluster
+
+While the basic node services (kubelet, kube-proxy, docker) are typically started and managed using
+traditional system administration/automation approaches, the remaining *master* components of Kubernetes are
+all configured and managed *by Kubernetes*:
+
+ - Their options are specified in a Pod spec (yaml or json) rather than an /etc/init.d file or
+ systemd unit.
+ - They are kept running by Kubernetes rather than by init.
+
+### etcd
+
+You will need to run one or more instances of etcd.
+
+ - Highly available and easy to restore - Run 3 or 5 etcd instances with their logs written to a directory backed
+   by durable storage (RAID, GCE PD).
+ - Not highly available, but easy to restore - Run one etcd instance, with its log written to a directory backed
+ by durable storage (RAID, GCE PD).
+
+ {{< note >}}**Note:** May result in an operational outage if the
+ instance fails. {{< /note >}}
+ - Highly available - Run 3 or 5 etcd instances with non-durable storage.
+
+ {{< note >}}**Note:** Log can be written to non-durable storage
+ because storage is replicated.{{< /note >}}
+
+See [cluster-troubleshooting](/docs/admin/cluster-troubleshooting/) for more discussion on factors affecting cluster
+availability.
+
+To run an etcd instance:
+
+1. Copy [`cluster/gce/manifests/etcd.manifest`](https://github.com/kubernetes/kubernetes/blob/master/cluster/gce/manifests/etcd.manifest)
+1. Make any modifications needed
+1. Start the pod by putting it into the kubelet manifest directory
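+
+A minimal sketch of these steps, assuming the raw GitHub URL for the manifest linked above and the default kubelet manifest directory:
+
+```shell
+# Fetch the example manifest, edit as needed, then drop it into the kubelet manifest directory.
+curl -o /etc/kubernetes/manifests/etcd.manifest \
+  https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/gce/manifests/etcd.manifest
+```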
+
+### Apiserver, Controller Manager, and Scheduler
+
+The apiserver, controller manager, and scheduler will each run as a pod on the master node.
+
+For each of these components, the steps to start them running are similar:
+
+1. Start with a provided template for a pod.
+1. Set the `HYPERKUBE_IMAGE` to the values chosen in [Selecting Images](#selecting-images).
+1. Determine which flags are needed for your cluster, using the advice below each template.
+1. Set the flags to be individual strings in the command array (for example $ARGN below)
+1. Start the pod by putting the completed template into the kubelet manifest directory.
+1. Verify that the pod is started.
+
+#### Apiserver pod template
+
+```json
+{
+ "kind": "Pod",
+ "apiVersion": "v1",
+ "metadata": {
+ "name": "kube-apiserver"
+ },
+ "spec": {
+ "hostNetwork": true,
+ "containers": [
+ {
+ "name": "kube-apiserver",
+ "image": "${HYPERKUBE_IMAGE}",
+ "command": [
+ "/hyperkube",
+ "apiserver",
+ "$ARG1",
+ "$ARG2",
+ ...
+ "$ARGN"
+ ],
+ "ports": [
+ {
+ "name": "https",
+ "hostPort": 443,
+ "containerPort": 443
+ },
+ {
+ "name": "local",
+ "hostPort": 8080,
+ "containerPort": 8080
+ }
+ ],
+ "volumeMounts": [
+ {
+ "name": "srvkube",
+ "mountPath": "/srv/kubernetes",
+ "readOnly": true
+ },
+ {
+ "name": "etcssl",
+ "mountPath": "/etc/ssl",
+ "readOnly": true
+ }
+ ],
+ "livenessProbe": {
+ "httpGet": {
+ "scheme": "HTTP",
+ "host": "127.0.0.1",
+ "port": 8080,
+ "path": "/healthz"
+ },
+ "initialDelaySeconds": 15,
+ "timeoutSeconds": 15
+ }
+ }
+ ],
+ "volumes": [
+ {
+ "name": "srvkube",
+ "hostPath": {
+ "path": "/srv/kubernetes"
+ }
+ },
+ {
+ "name": "etcssl",
+ "hostPath": {
+ "path": "/etc/ssl"
+ }
+ }
+ ]
+ }
+}
+```
+
+Here are some apiserver flags you may need to set:
+
+- `--cloud-provider=` see [cloud providers](#cloud-providers)
+- `--cloud-config=` see [cloud providers](#cloud-providers)
+- `--address=${MASTER_IP}` *or* `--bind-address=127.0.0.1` and `--address=127.0.0.1` if you want to run a proxy on the master node.
+- `--service-cluster-ip-range=$SERVICE_CLUSTER_IP_RANGE`
+- `--etcd-servers=http://127.0.0.1:4001`
+- `--tls-cert-file=/srv/kubernetes/server.cert`
+- `--tls-private-key-file=/srv/kubernetes/server.key`
+- `--enable-admission-plugins=$RECOMMENDED_LIST`
+ - See [admission controllers](/docs/reference/access-authn-authz/admission-controllers/) for recommended arguments.
+- `--allow-privileged=true`, only if you trust your cluster user to run pods as root.
+
+If you are following the firewall-only security approach, then use these arguments:
+
+- `--token-auth-file=/dev/null`
+- `--insecure-bind-address=$MASTER_IP`
+- `--advertise-address=$MASTER_IP`
+
+If you are using the HTTPS approach, then set:
+
+- `--client-ca-file=/srv/kubernetes/ca.crt`
+- `--token-auth-file=/srv/kubernetes/known_tokens.csv`
+- `--basic-auth-file=/srv/kubernetes/basic_auth.csv`
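+
+Putting the flags together, the completed `command` array for the HTTPS approach might expand to something like the following command line (values are illustrative; each flag becomes its own string in the array):
+
+```shell
+# Illustrative apiserver flag set for the HTTPS approach.
+/hyperkube apiserver \
+  --address=${MASTER_IP} \
+  --etcd-servers=http://127.0.0.1:4001 \
+  --service-cluster-ip-range=${SERVICE_CLUSTER_IP_RANGE} \
+  --tls-cert-file=/srv/kubernetes/server.cert \
+  --tls-private-key-file=/srv/kubernetes/server.key \
+  --client-ca-file=/srv/kubernetes/ca.crt \
+  --token-auth-file=/srv/kubernetes/known_tokens.csv \
+  --basic-auth-file=/srv/kubernetes/basic_auth.csv \
+  --allow-privileged=true
+```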
+
+This pod mounts several node file system directories using the `hostPath` volumes. Their purposes are:
+
+- The `/etc/ssl` mount allows the apiserver to find the SSL root certs so it can
+ authenticate external services, such as a cloud provider.
+ - This is not required if you do not use a cloud provider (bare-metal for example).
+- The `/srv/kubernetes` mount allows the apiserver to read certs and credentials stored on the
+ node disk. These could instead be stored on a persistent disk, such as a GCE PD, or baked into the image.
+- Optionally, you may want to mount `/var/log` as well and redirect output there (not shown in template).
+ - Do this if you prefer your logs to be accessible from the root filesystem with tools like journalctl.
+
+*TODO* document proxy-ssh setup.
+
+##### Cloud Providers
+
+Apiserver supports several cloud providers.
+
+- options for `--cloud-provider` flag are `aws`, `azure`, `cloudstack`, `fake`, `gce`, `mesos`, `openstack`, `ovirt`, `rackspace`, `vsphere`, or unset.
+- Leave the flag unset for bare-metal setups.
+- support for new IaaS is added by contributing code [here](https://releases.k8s.io/{{< param "githubbranch" >}}/pkg/cloudprovider/providers)
+
+Some cloud providers require a config file. If so, you need to put the config file into the apiserver image or mount it through hostPath.
+
+- `--cloud-config=` set if cloud provider requires a config file.
+- Used by `aws`, `gce`, `mesos`, `openstack`, `ovirt` and `rackspace`.
+- You must put the config file into the apiserver image or mount it through hostPath.
+- Cloud config file syntax is [Gcfg](https://code.google.com/p/gcfg/).
+- AWS format defined by type [AWSCloudConfig](https://releases.k8s.io/{{< param "githubbranch" >}}/pkg/cloudprovider/providers/aws/aws.go)
+- There is a similar type in the corresponding file for other cloud providers.
+
+#### Scheduler pod template
+
+Complete this template for the scheduler pod:
+
+```json
+{
+ "kind": "Pod",
+ "apiVersion": "v1",
+ "metadata": {
+ "name": "kube-scheduler"
+ },
+ "spec": {
+ "hostNetwork": true,
+ "containers": [
+ {
+ "name": "kube-scheduler",
+ "image": "$HYPERKUBE_IMAGE",
+ "command": [
+ "/hyperkube",
+ "scheduler",
+ "--master=127.0.0.1:8080",
+ "$SCHEDULER_FLAG1",
+ ...
+ "$SCHEDULER_FLAGN"
+ ],
+ "livenessProbe": {
+ "httpGet": {
+ "scheme": "HTTP",
+ "host": "127.0.0.1",
+ "port": 10251,
+ "path": "/healthz"
+ },
+ "initialDelaySeconds": 15,
+ "timeoutSeconds": 15
+ }
+ }
+ ]
+ }
+}
+```
+
+Typically, no additional flags are required for the scheduler.
+
+Optionally, you may want to mount `/var/log` as well and redirect output there.
+
+#### Controller Manager Template
+
+Template for controller manager pod:
+
+```json
+{
+ "kind": "Pod",
+ "apiVersion": "v1",
+ "metadata": {
+ "name": "kube-controller-manager"
+ },
+ "spec": {
+ "hostNetwork": true,
+ "containers": [
+ {
+ "name": "kube-controller-manager",
+ "image": "$HYPERKUBE_IMAGE",
+ "command": [
+ "/hyperkube",
+ "controller-manager",
+ "$CNTRLMNGR_FLAG1",
+ ...
+ "$CNTRLMNGR_FLAGN"
+ ],
+ "volumeMounts": [
+ {
+ "name": "srvkube",
+ "mountPath": "/srv/kubernetes",
+ "readOnly": true
+ },
+ {
+ "name": "etcssl",
+ "mountPath": "/etc/ssl",
+ "readOnly": true
+ }
+ ],
+ "livenessProbe": {
+ "httpGet": {
+ "scheme": "HTTP",
+ "host": "127.0.0.1",
+ "port": 10252,
+ "path": "/healthz"
+ },
+ "initialDelaySeconds": 15,
+ "timeoutSeconds": 15
+ }
+ }
+ ],
+ "volumes": [
+ {
+ "name": "srvkube",
+ "hostPath": {
+ "path": "/srv/kubernetes"
+ }
+ },
+ {
+ "name": "etcssl",
+ "hostPath": {
+ "path": "/etc/ssl"
+ }
+ }
+ ]
+ }
+}
+```
+
+Flags to consider using with controller manager:
+
+ - `--cluster-cidr=`, the CIDR range for pods in cluster.
+ - `--allocate-node-cidrs=`, if you are using `--cloud-provider=`, allocate and set the CIDRs for pods on the cloud provider.
+ - `--cloud-provider=` and `--cloud-config` as described in apiserver section.
+ - `--service-account-private-key-file=/srv/kubernetes/server.key`, used by the [service account](/docs/user-guide/service-accounts) feature.
+ - `--master=127.0.0.1:8080`
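+
+As with the apiserver, the completed `command` array might expand to roughly the following (a sketch; add `--cloud-provider`/`--cloud-config` and `--allocate-node-cidrs` as appropriate for your environment, and note that `${POD_CIDR}` is just the illustrative variable name for the pod range chosen earlier):
+
+```shell
+# Illustrative controller-manager flag set.
+/hyperkube controller-manager \
+  --master=127.0.0.1:8080 \
+  --cluster-cidr=${POD_CIDR} \
+  --service-account-private-key-file=/srv/kubernetes/server.key
+```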
+
+#### Starting and Verifying Apiserver, Scheduler, and Controller Manager
+
+Place each completed pod template into the kubelet config dir
+(whatever the `--config=` argument of the kubelet is set to, typically
+`/etc/kubernetes/manifests`). The order does not matter: scheduler and
+controller manager will retry reaching the apiserver until it is up.
+
+Use `ps` or `docker ps` to verify that each process has started. For example, verify that kubelet has started a container for the apiserver like this:
+
+```shell
+$ sudo docker ps | grep apiserver
+5783290746d5 k8s.gcr.io/kube-apiserver:e36bf367342b5a80d7467fd7611ad873 "/bin/sh -c '/usr/lo'" 10 seconds ago Up 9 seconds k8s_kube-apiserver.feb145e7_kube-apiserver-kubernetes-master_default_eaebc600cf80dae59902b44225f2fc0a_225a4695
+```
+
+Then try to connect to the apiserver:
+
+```shell
+$ echo $(curl -s http://localhost:8080/healthz)
+ok
+$ curl -s http://localhost:8080/api
+{
+ "versions": [
+ "v1"
+ ]
+}
+```
+
+If you have selected the `--register-node=true` option for kubelets, they will now begin self-registering with the apiserver.
+You should soon be able to see all your nodes by running the `kubectl get nodes` command.
+Otherwise, you will need to manually create node objects.
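+
+A hypothetical example of creating a node object by hand (the node name and label value are illustrative):
+
+```shell
+cat <<EOF | kubectl create -f -
+apiVersion: v1
+kind: Node
+metadata:
+  name: node1.example.com
+  labels:
+    kubernetes.io/hostname: node1.example.com
+EOF
+```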
+
+### Starting Cluster Services
+
+You will want to complete your Kubernetes clusters by adding cluster-wide
+services. These are sometimes called *addons*, and [an overview
+of their purpose is in the admin guide](/docs/admin/cluster-components/#addons).
+
+Notes for setting up each cluster service are given below:
+
+* Cluster DNS:
+ * Required for many Kubernetes examples
+ * [Setup instructions](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/dns/)
+ * [Admin Guide](/docs/concepts/services-networking/dns-pod-service/)
+* Cluster-level Logging
+ * [Cluster-level Logging Overview](/docs/user-guide/logging/overview/)
+ * [Cluster-level Logging with Elasticsearch](/docs/user-guide/logging/elasticsearch/)
+ * [Cluster-level Logging with Stackdriver Logging](/docs/user-guide/logging/stackdriver/)
+* Container Resource Monitoring
+ * [Setup instructions](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/cluster-monitoring/)
+* GUI
+ * [Setup instructions](https://github.com/kubernetes/dashboard)
+
+## Troubleshooting
+
+### Running validate-cluster
+
+`cluster/validate-cluster.sh` is used by `cluster/kube-up.sh` to determine if
+the cluster start succeeded.
+
+Example usage and output:
+
+```shell
+KUBECTL_PATH=$(which kubectl) NUM_NODES=3 KUBERNETES_PROVIDER=local cluster/validate-cluster.sh
+Found 3 node(s).
+NAME STATUS AGE VERSION
+node1.local Ready 1h v1.6.9+a3d1dfa6f4335
+node2.local Ready 1h v1.6.9+a3d1dfa6f4335
+node3.local Ready 1h v1.6.9+a3d1dfa6f4335
+Validate output:
+NAME STATUS MESSAGE ERROR
+controller-manager Healthy ok
+scheduler Healthy ok
+etcd-1 Healthy {"health": "true"}
+etcd-2 Healthy {"health": "true"}
+etcd-0 Healthy {"health": "true"}
+Cluster validation succeeded
+```
+
+### Inspect pods and services
+
+Try to run through the "Inspect your cluster" section in one of the other Getting Started Guides, such as [GCE](/docs/setup/turnkey/gce/#inspect-your-cluster).
+You should see some services. You should also see "mirror pods" for the apiserver, scheduler and controller-manager, plus any add-ons you started.
+
+### Try Examples
+
+At this point you should be able to run through one of the basic examples, such as the [nginx example](/examples/application/deployment.yaml).
+
+### Running the Conformance Test
+
+You may want to try to run the [Conformance test](http://releases.k8s.io/{{< param "githubbranch" >}}/test/e2e_node/conformance/run_test.sh). Any failures may give a hint as to areas that need more attention.
+
+### Networking
+
+The nodes must be able to connect to each other using their private IP. Verify this by
+pinging or SSH-ing from one node to another.
+
+### Getting Help
+
+If you run into trouble, please see the section on [troubleshooting](/docs/setup/turnkey/gce/#troubleshooting), post to the
+[kubernetes-users group](https://groups.google.com/forum/#!forum/kubernetes-users), or come ask questions on [Slack](/docs/troubleshooting#slack).
+
+## Support Level
+
+
+IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level
+-------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ----------------------------
+any | any | any | any | [docs](/docs/getting-started-guides/scratch/) | | Community ([@erictune](https://github.com/erictune))
+
+
+For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions/) chart.
diff --git a/content/ko/docs/setup/turnkey/_index.md b/content/ko/docs/setup/turnkey/_index.md
new file mode 100644
index 000000000..da6af45f1
--- /dev/null
+++ b/content/ko/docs/setup/turnkey/_index.md
@@ -0,0 +1,3 @@
+---
+title: Turnkey Cloud Solutions
+---
diff --git a/content/ko/docs/setup/turnkey/alibaba-cloud.md b/content/ko/docs/setup/turnkey/alibaba-cloud.md
new file mode 100644
index 000000000..a15951551
--- /dev/null
+++ b/content/ko/docs/setup/turnkey/alibaba-cloud.md
@@ -0,0 +1,20 @@
+---
+reviewers:
+- colemickens
+- brendandburns
+title: Running Kubernetes on Alibaba Cloud
+---
+
+## Alibaba Cloud Container Service
+
+The [Alibaba Cloud Container Service](https://www.aliyun.com/product/containerservice) lets you run and manage Docker applications on a cluster of Alibaba Cloud ECS instances. It supports the popular open source container orchestrators: Docker Swarm and Kubernetes.
+
+To simplify cluster deployment and management, use [Kubernetes Support for Alibaba Cloud Container Service](https://www.aliyun.com/solution/kubernetes/). You can get started quickly by following the [Kubernetes walk-through](https://help.aliyun.com/document_detail/53751.html), and there are some [tutorials for Kubernetes Support on Alibaba Cloud](https://yq.aliyun.com/teams/11/type_blog-cid_200-page_1) in Chinese.
+
+To use custom binaries or open source Kubernetes, follow the instructions below.
+
+## Custom Deployments
+
+The source code for [Kubernetes with Alibaba Cloud provider implementation](https://github.com/AliyunContainerService/kubernetes) is open source and available on GitHub.
+
+For more information, see "[Quick deployment of Kubernetes - VPC environment on Alibaba Cloud](https://www.alibabacloud.com/forum/read-830)" in English and [Chinese](https://yq.aliyun.com/articles/66474).
diff --git a/content/ko/docs/setup/turnkey/aws.md b/content/ko/docs/setup/turnkey/aws.md
new file mode 100644
index 000000000..128209fb7
--- /dev/null
+++ b/content/ko/docs/setup/turnkey/aws.md
@@ -0,0 +1,80 @@
+---
+reviewers:
+- justinsb
+- clove
+title: Running Kubernetes on AWS EC2
+---
+
+{{< toc >}}
+
+
+## Supported Production Grade Tools
+
+* [conjure-up](/docs/getting-started-guides/ubuntu/) is an open-source installer for Kubernetes that creates Kubernetes clusters with native AWS integrations on Ubuntu.
+
+* [Kubernetes Operations](https://github.com/kubernetes/kops) - Production Grade K8s Installation, Upgrades, and Management. Supports running Debian, Ubuntu, CentOS, and RHEL in AWS.
+
+* [CoreOS Tectonic](https://coreos.com/tectonic/) includes the open-source [Tectonic Installer](https://github.com/coreos/tectonic-installer) that creates Kubernetes clusters with Container Linux nodes on AWS.
+
+* CoreOS originated and the Kubernetes Incubator maintains [a CLI tool, `kube-aws`](https://github.com/kubernetes-incubator/kube-aws), that creates and manages Kubernetes clusters with [Container Linux](https://coreos.com/why/) nodes, using AWS tools: EC2, CloudFormation and Autoscaling.
+
+---
+
+## Getting started with your cluster
+
+### Command line administration tool: `kubectl`
+
+The cluster startup script will leave you with a `kubernetes` directory on your workstation.
+Alternately, you can download the latest Kubernetes release from [this page](https://github.com/kubernetes/kubernetes/releases).
+
+Next, add the appropriate binary folder to your `PATH` to access kubectl:
+
+```shell
+# macOS
+export PATH=<path/to/kubernetes-directory>/platforms/darwin/amd64:$PATH
+
+# Linux
+export PATH=<path/to/kubernetes-directory>/platforms/linux/amd64:$PATH
+```
+
+An up-to-date documentation page for this tool is available here: [kubectl manual](/docs/user-guide/kubectl/)
+
+By default, `kubectl` will use the `kubeconfig` file generated during the cluster startup for authenticating against the API.
+For more information, please read [kubeconfig files](/docs/tasks/access-application-cluster/configure-access-multiple-clusters/)
+
+### Examples
+
+See [a simple nginx example](/docs/tasks/run-application/run-stateless-application-deployment/) to try out your new cluster.
+
+The "Guestbook" application is another popular example to get started with Kubernetes: [guestbook example](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/guestbook/)
+
+For more complete applications, please look in the [examples directory](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/)
+
+## Scaling the cluster
+
+Adding and removing nodes through `kubectl` is not supported. You can still scale the number of nodes manually by adjusting the 'Desired' and 'Max' properties within the [Auto Scaling Group](http://docs.aws.amazon.com/autoscaling/latest/userguide/as-manual-scaling.html), which was created during the installation.
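+
+For example, with the AWS CLI you could adjust the desired capacity of that Auto Scaling Group directly (the group name and sizes below are illustrative):
+
+```shell
+# Substitute the Auto Scaling Group name created by your installation.
+aws autoscaling update-auto-scaling-group \
+  --auto-scaling-group-name kubernetes-minion-group \
+  --desired-capacity 5 \
+  --max-size 10
+```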
+
+## Tearing down the cluster
+
+Make sure the environment variables you used to provision your cluster are still exported, then call the following script inside the
+`kubernetes` directory:
+
+```shell
+cluster/kube-down.sh
+```
+
+## Support Level
+
+
+IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level
+-------------------- | ------------ | ------------- | ---------- | --------------------------------------------- | ---------| ----------------------------
+AWS | kops | Debian | k8s (VPC) | [docs](https://github.com/kubernetes/kops) | | Community ([@justinsb](https://github.com/justinsb))
+AWS | CoreOS | CoreOS | flannel | [docs](/docs/getting-started-guides/aws) | | Community
+AWS | Juju | Ubuntu | flannel, calico, canal | [docs](/docs/getting-started-guides/ubuntu) | 100% | Commercial, Community
+
+For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart.
+
+## Further reading
+
+Please see the [Kubernetes docs](/docs/) for more details on administering
+and using a Kubernetes cluster.
diff --git a/content/ko/docs/setup/turnkey/azure.md b/content/ko/docs/setup/turnkey/azure.md
new file mode 100644
index 000000000..534f1a553
--- /dev/null
+++ b/content/ko/docs/setup/turnkey/azure.md
@@ -0,0 +1,39 @@
+---
+reviewers:
+- colemickens
+- brendandburns
+title: Running Kubernetes on Azure
+---
+
+## Azure Container Service
+
+The [Azure Container Service](https://azure.microsoft.com/en-us/services/container-service/) offers simple
+deployments of one of three open source orchestrators: DC/OS, Swarm, and Kubernetes clusters.
+
+For an example of deploying a Kubernetes cluster onto Azure via the Azure Container Service:
+
+**[Microsoft Azure Container Service - Kubernetes Walkthrough](https://docs.microsoft.com/en-us/azure/aks/intro-kubernetes)**
+
+## Custom Deployments: ACS-Engine
+
+The core of the Azure Container Service is **open source** and available on GitHub for the community
+to use and contribute to: **[ACS-Engine](https://github.com/Azure/acs-engine)**.
+
+ACS-Engine is a good choice if you need to make customizations to the deployment beyond what the Azure Container
+Service officially supports. These customizations include deploying into existing virtual networks, utilizing multiple
+agent pools, and more. Some community contributions to ACS-Engine may even become features of the Azure Container Service.
+
+The input to ACS-Engine is similar to the ARM template syntax used to deploy a cluster directly with the Azure Container Service.
+The resulting output is an Azure Resource Manager Template that can then be checked into source control and can then be used
+to deploy Kubernetes clusters into Azure.
+
+You can get started quickly by following the **[ACS-Engine Kubernetes Walkthrough](https://github.com/Azure/acs-engine/blob/master/docs/kubernetes.md)**.
+
+## CoreOS Tectonic for Azure
+
+The CoreOS Tectonic Installer for Azure is **open source** and available on GitHub for the community to use and contribute to: **[Tectonic Installer](https://github.com/coreos/tectonic-installer)**.
+
+Tectonic Installer is a good choice when you need to make cluster customizations as it is built on [Hashicorp's Terraform](https://www.terraform.io/docs/providers/azurerm/) Azure Resource Manager (ARM) provider. This enables users to customize or integrate using familiar Terraform tooling.
+
+You can get started using the [Tectonic Installer for Azure Guide](https://coreos.com/tectonic/docs/latest/install/azure/azure-terraform.html).
+
diff --git a/content/ko/docs/setup/turnkey/clc.md b/content/ko/docs/setup/turnkey/clc.md
new file mode 100644
index 000000000..e21e5dc05
--- /dev/null
+++ b/content/ko/docs/setup/turnkey/clc.md
@@ -0,0 +1,341 @@
+---
+title: Running Kubernetes on CenturyLink Cloud
+---
+
+{{< toc >}}
+
+These scripts handle the creation, deletion and expansion of Kubernetes clusters on CenturyLink Cloud.
+
+You can accomplish all these tasks with a single command. We have made the Ansible playbooks used to perform these tasks available [here](https://github.com/CenturyLinkCloud/adm-kubernetes-on-clc/blob/master/ansible/README.md).
+
+## Find Help
+
+If you run into any problems or want help with anything, we are here to help. Reach out to us in any of the following ways:
+
+- Submit a github issue
+- Send an email to Kubernetes AT ctl DOT io
+- Visit [http://info.ctl.io/kubernetes](http://info.ctl.io/kubernetes)
+
+## Clusters of VMs or Physical Servers, Your Choice
+
+- We support Kubernetes clusters on both Virtual Machines and Physical Servers. If you want to use physical servers for the worker nodes (minions), simply use the `--minion_type=bareMetal` flag.
+- For more information on physical servers, visit: [https://www.ctl.io/bare-metal/](https://www.ctl.io/bare-metal/)
+- Physical servers are only available in the VA1 and GB3 data centers.
+- VMs are available in all 13 of our public cloud locations.
+
+## Requirements
+
+The requirements to run this script are:
+
+- A Linux administrative host (tested on Ubuntu and macOS)
+- Python 2 (tested on 2.7.11)
+  - pip (installed with Python as of 2.7.9)
+- git
+- A CenturyLink Cloud account with rights to create new hosts
+- An active VPN connection to the CenturyLink Cloud from your Linux host
+
+## Script Installation
+
+After you have all the requirements met, please follow these instructions to install this script.
+
+1) Clone this repository and cd into it.
+
+```shell
+git clone https://github.com/CenturyLinkCloud/adm-kubernetes-on-clc
+cd adm-kubernetes-on-clc
+```
+
+2) Install all requirements, including
+
+ * Ansible
+ * CenturyLink Cloud SDK
+ * Ansible Modules
+
+```shell
+sudo pip install -r ansible/requirements.txt
+```
+
+3) Create the credentials file from the template and use it to set your ENV variables
+
+```shell
+cp ansible/credentials.sh.template ansible/credentials.sh
+vi ansible/credentials.sh
+source ansible/credentials.sh
+
+```
+
+4) Grant your machine access to the CenturyLink Cloud network by using a VM inside the network or [configuring a VPN connection to the CenturyLink Cloud network](https://www.ctl.io/knowledge-base/network/how-to-configure-client-vpn/).
+
+
+#### Script Installation Example: Ubuntu 14 Walkthrough
+
+If you are using Ubuntu 14, for your convenience we have provided a step-by-step
+guide to install the requirements and the script.
+
+```shell
+# system
+apt-get update
+apt-get install -y git python python-crypto
+curl -O https://bootstrap.pypa.io/get-pip.py
+python get-pip.py
+
+# installing this repository
+mkdir -p ~/k8s-on-clc
+cd ~/k8s-on-clc
+git clone https://github.com/CenturyLinkCloud/adm-kubernetes-on-clc.git
+cd adm-kubernetes-on-clc/
+pip install -r requirements.txt
+
+# getting started
+cd ansible
+cp credentials.sh.template credentials.sh; vi credentials.sh
+source credentials.sh
+```
+
+
+
+## Cluster Creation
+
+To create a new Kubernetes cluster, simply run the ```kube-up.sh``` script. A complete
+list of script options and some examples are listed below.
+
+```shell
+CLC_CLUSTER_NAME=[name of kubernetes cluster]
+cd ./adm-kubernetes-on-clc
+bash kube-up.sh -c="$CLC_CLUSTER_NAME"
+```
+
+It takes about 15 minutes to create the cluster. Once the script completes, it
+will output some commands that will help you set up kubectl on your machine to
+point to the new cluster.
+
+When the cluster creation is complete, the configuration files for it are stored
+locally on your administrative host, in the following directory:
+
+```shell
+> CLC_CLUSTER_HOME=$HOME/.clc_kube/$CLC_CLUSTER_NAME/
+```
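+
+As a quick sanity check, you can list that directory; assuming the default layout, the subdirectories described later in this guide should be present:
+
+```shell
+# List the locally stored cluster artifacts
+ls ${CLC_CLUSTER_HOME}
+# expected (roughly): config/  hosts/  kube/  pki/  ssh/
+```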
+
+
+#### Cluster Creation: Script Options
+
+```shell
+Usage: kube-up.sh [OPTIONS]
+Create servers in the CenturyLinkCloud environment and initialize a Kubernetes cluster
+Environment variables CLC_V2_API_USERNAME and CLC_V2_API_PASSWD must be set in
+order to access the CenturyLinkCloud API
+
+All options (both short and long form) require arguments, and must include "="
+between option name and option value.
+
+ -h (--help) display this help and exit
+ -c= (--clc_cluster_name=) set the name of the cluster, as used in CLC group names
+ -t= (--minion_type=) standard -> VM (default), bareMetal -> physical]
+ -d= (--datacenter=) VA1 (default)
+ -m= (--minion_count=) number of kubernetes minion nodes
+ -mem= (--vm_memory=) number of GB ram for each minion
+ -cpu= (--vm_cpu=) number of virtual CPUs for each minion node
+ -phyid= (--server_conf_id=) physical server configuration id, one of
+ physical_server_20_core_conf_id
+ physical_server_12_core_conf_id
+ physical_server_4_core_conf_id (default)
+ -etcd_separate_cluster=yes create a separate cluster of three etcd nodes,
+ otherwise run etcd on the master node
+```
+
+## Cluster Expansion
+
+To expand an existing Kubernetes cluster, run the ```add-kube-node.sh```
+script. A complete list of script options and some examples are listed [below](#cluster-expansion-script-options).
+This script must be run from the same host that created the cluster (or a host
+that has the cluster artifact files stored in ```~/.clc_kube/$cluster_name```).
+
+```shell
+cd ./adm-kubernetes-on-clc
+bash add-kube-node.sh -c="name_of_kubernetes_cluster" -m=2
+```
+
+#### Cluster Expansion: Script Options
+
+```shell
+Usage: add-kube-node.sh [OPTIONS]
+Create servers in the CenturyLinkCloud environment and add to an
+existing CLC kubernetes cluster
+
+Environment variables CLC_V2_API_USERNAME and CLC_V2_API_PASSWD must be set in
+order to access the CenturyLinkCloud API
+
+ -h (--help) display this help and exit
+ -c= (--clc_cluster_name=) set the name of the cluster, as used in CLC group names
+ -m= (--minion_count=) number of kubernetes minion nodes to add
+```
+
+## Cluster Deletion
+
+There are two ways to delete an existing cluster:
+
+1) Use our python script:
+
+```shell
+python delete_cluster.py --cluster=clc_cluster_name --datacenter=DC1
+```
+
+2) Use the CenturyLink Cloud UI. To delete a cluster, log into the CenturyLink
+Cloud control portal and delete the parent server group that contains the
+Kubernetes Cluster. We hope to add a scripted option to do this soon.
+
+## Examples
+
+Create a cluster named k8s_1, with 1 master node and 3 worker minions (on physical machines), in VA1:
+
+```shell
+bash kube-up.sh --clc_cluster_name=k8s_1 --minion_type=bareMetal --minion_count=3 --datacenter=VA1
+```
+
+Create a cluster named k8s_2, with an HA etcd cluster on 3 VMs and 6 worker minions (on VMs), in VA1:
+
+```shell
+bash kube-up.sh --clc_cluster_name=k8s_2 --minion_type=standard --minion_count=6 --datacenter=VA1 --etcd_separate_cluster=yes
+```
+
+Create a cluster named k8s_3, with 1 master node and 10 worker minions (on VMs) with higher memory/CPU, in UC1:
+
+```shell
+bash kube-up.sh --clc_cluster_name=k8s_3 --minion_type=standard --minion_count=10 --datacenter=UC1 -mem=6 -cpu=4
+```
+
+
+
+## Cluster Features and Architecture
+
+We configure the Kubernetes cluster with the following features:
+
+* KubeDNS: DNS resolution and service discovery
+* Heapster/InfluxDB: For metric collection. Needed for Grafana and auto-scaling.
+* Grafana: Kubernetes/Docker metric dashboard
+* KubeUI: Simple web interface to view Kubernetes state
+* Kube Dashboard: New web interface to interact with your cluster
+
+We use the following to create the Kubernetes cluster:
+
+* Kubernetes 1.1.7
+* Ubuntu 14.04
+* Flannel 0.5.4
+* Docker 1.9.1-0~trusty
+* Etcd 2.2.2
+
+## Optional add-ons
+
+* Logging: We offer an integrated centralized logging ELK platform so that all
+ Kubernetes and docker logs get sent to the ELK stack. To install the ELK stack
+ and configure Kubernetes to send logs to it, follow [the log
+ aggregation documentation](https://github.com/CenturyLinkCloud/adm-kubernetes-on-clc/blob/master/log_aggregration.md). Note: We don't install this by default as
+ the footprint isn't trivial.
+
+## Cluster management
+
+The most widely used tool for managing a Kubernetes cluster is the command-line
+utility ```kubectl```. If you do not already have a copy of this binary on your
+administrative machine, you may run the script ```install_kubectl.sh```, which
+downloads it and installs it in ```/usr/local/bin```.
+
+The script requires that the environment variable ```CLC_CLUSTER_NAME``` be defined.
+
+```install_kubectl.sh``` also writes a configuration file which embeds the necessary
+authentication certificates for the particular cluster. The configuration file is
+written to the ```${CLC_CLUSTER_HOME}/kube``` directory.
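+
+A minimal invocation might look like the following sketch, assuming the cluster was created with one of the example names used earlier:
+
+```shell
+# Illustrative: the cluster name must match the one passed to kube-up.sh
+export CLC_CLUSTER_NAME=k8s_1
+export CLC_CLUSTER_HOME=$HOME/.clc_kube/$CLC_CLUSTER_NAME
+bash install_kubectl.sh
+```
+
+After the script completes, you can point ```kubectl``` at the generated configuration: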
+
+```shell
+export KUBECONFIG=${CLC_CLUSTER_HOME}/kube/config
+kubectl version
+kubectl cluster-info
+```
+
+### Accessing the cluster programmatically
+
+It's possible to use the locally stored client certificates to access the apiserver. For example, you may want to use any of the [Kubernetes API client libraries](/docs/reference/using-api/client-libraries/) to program against your Kubernetes cluster in the programming language of your choice.
+
+To demonstrate how to use these locally stored certificates, we provide the following example of using ```curl``` to communicate with the master apiserver over HTTPS:
+
+```shell
+curl \
+ --cacert ${CLC_CLUSTER_HOME}/pki/ca.crt \
+ --key ${CLC_CLUSTER_HOME}/pki/kubecfg.key \
+ --cert ${CLC_CLUSTER_HOME}/pki/kubecfg.crt https://${MASTER_IP}:6443
+```
+
+But please note, this *does not* work out of the box with the ```curl``` binary
+distributed with macOS.
+
+### Accessing the cluster with a browser
+
+We install [the kubernetes dashboard](/docs/tasks/web-ui-dashboard/). When you
+create a cluster, the script should output URLs for these interfaces like this:
+
+kubernetes-dashboard is running at ```https://${MASTER_IP}:6443/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy```.
+
+Note on Authentication to the UIs: The cluster is set up to use basic
+authentication for the user _admin_. Hitting the URL at
+```https://${MASTER_IP}:6443``` will require accepting the self-signed certificate
+from the apiserver, and then presenting the admin password written to the file at:
+
+```${CLC_CLUSTER_HOME}/kube/admin_password.txt```
+
+
+### Configuration files
+
+Various configuration files are written into the home directory *CLC_CLUSTER_HOME* under
+```.clc_kube/${CLC_CLUSTER_NAME}``` in several subdirectories. You can use these files
+to access the cluster from machines other than the one where you created it, as shown in the example after the list below.
+
+* ```config/```: Ansible variable files containing parameters describing the master and minion hosts
+* ```hosts/```: hosts files listing access information for the ansible playbooks
+* ```kube/```: ```kubectl``` configuration files, and the basic-authentication password for admin access to the Kubernetes API
+* ```pki/```: public key infrastructure files enabling TLS communication in the cluster
+* ```ssh/```: SSH keys for root access to the hosts
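+
+For example, to drive the cluster from a second administrative host, you might copy the ```kube/``` directory over and point ```kubectl``` at it. This is only a sketch; the host name is illustrative:
+
+```shell
+# On the new host, create the target directory
+ssh other-admin-host "mkdir -p ~/.clc_kube/${CLC_CLUSTER_NAME}"
+# From the original host, copy the kubectl configuration over
+scp -r ${CLC_CLUSTER_HOME}/kube other-admin-host:~/.clc_kube/${CLC_CLUSTER_NAME}/
+# Then, on the new host:
+export KUBECONFIG=~/.clc_kube/${CLC_CLUSTER_NAME}/kube/config
+kubectl get nodes
+```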
+
+
+## ```kubectl``` usage examples
+
+There are a great many features of _kubectl_. Here are a few examples:
+
+List existing nodes, pods, services and more, in all namespaces, or in just one:
+
+```shell
+kubectl get nodes
+kubectl get --all-namespaces services
+kubectl get --namespace=kube-system replicationcontrollers
+```
+
+The Kubernetes API server exposes services on web URLs, which are protected by requiring
+client certificates. If you run a kubectl proxy locally, ```kubectl``` will provide
+the necessary certificates and serve locally over http.
+
+```shell
+kubectl proxy -p 8001
+```
+
+Then, you can access URLs like ```http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy/``` without the need for client certificates in your browser.
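+
+For instance, with the proxy from the previous command still running, the same endpoint can be fetched from the command line without presenting client certificates:
+
+```shell
+curl http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy/
+```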
+
+
+## What Kubernetes features do not work on CenturyLink Cloud
+
+These are the known items that don't work on CenturyLink Cloud but do work on other cloud providers:
+
+- At this time, there is no support for Services of type [LoadBalancer](/docs/tasks/access-application-cluster/create-external-load-balancer/). We are actively working on this and hope to publish the changes sometime around April 2016.
+
+- At this time, there is no support for persistent storage volumes provided by
+ CenturyLink Cloud. However, customers can bring their own persistent storage
+ offering. We ourselves use Gluster.
+
+
+## Ansible Files
+
+If you want more information about our Ansible files, please [read this file](https://github.com/CenturyLinkCloud/adm-kubernetes-on-clc/blob/master/ansible/README.md).
+
+## Further reading
+
+Please see the [Kubernetes docs](/docs/) for more details on administering
+and using a Kubernetes cluster.
+
+
+
diff --git a/content/ko/docs/setup/turnkey/gce.md b/content/ko/docs/setup/turnkey/gce.md
new file mode 100644
index 000000000..3e51581a6
--- /dev/null
+++ b/content/ko/docs/setup/turnkey/gce.md
@@ -0,0 +1,216 @@
+---
+reviewers:
+- brendandburns
+- jbeda
+- mikedanese
+- thockin
+title: Running Kubernetes on Google Compute Engine
+---
+
+The example below creates a Kubernetes cluster with 4 worker node Virtual Machines and a master Virtual Machine (i.e. 5 VMs in your cluster). This cluster is set up and controlled from your workstation (or wherever you find convenient).
+
+{{< toc >}}
+
+### Before you start
+
+If you want a simplified getting started experience and GUI for managing clusters, please consider trying [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine/) for hosted cluster installation and management.
+
+For an easy way to experiment with the Kubernetes development environment, click the button below
+to open a Google Cloud Shell with an auto-cloned copy of the Kubernetes source repo.
+
+[![Open in Cloud Shell](https://gstatic.com/cloudssh/images/open-btn.png)](https://console.cloud.google.com/cloudshell/open?git_repo=https://github.com/kubernetes/kubernetes&page=editor&open_in_editor=README.md)
+
+If you want to use custom binaries or pure open source Kubernetes, please continue with the instructions below.
+
+### Prerequisites
+
+1. You need a Google Cloud Platform account with billing enabled. Visit the [Google Developers Console](https://console.cloud.google.com) for more details.
+1. Install `gcloud` as necessary. `gcloud` can be installed as a part of the [Google Cloud SDK](https://cloud.google.com/sdk/).
+1. Enable the [Compute Engine Instance Group Manager API](https://console.developers.google.com/apis/api/replicapool.googleapis.com/overview) in the [Google Cloud developers console](https://console.developers.google.com/apis/library).
+1. Make sure that gcloud is set to use the Google Cloud Platform project you want. You can check the current project using `gcloud config list project` and change it via `gcloud config set project <project-id>` (see the example after this list).
+1. Make sure you have credentials for GCloud by running `gcloud auth login`.
+1. (Optional) In order to make API calls against GCE, you must also run `gcloud auth application-default login`.
+1. Make sure you can start up a GCE VM from the command line. At least make sure you can do the [Create an instance](https://cloud.google.com/compute/docs/instances/#startinstancegcloud) part of the GCE Quickstart.
+1. Make sure you can SSH into the VM without interactive prompts. See the [Log in to the instance](https://cloud.google.com/compute/docs/instances/#sshing) part of the GCE Quickstart.
+
+### Starting a cluster
+
+You can install a client and start a cluster with either one of these commands (we list both in case only one is installed on your machine):
+
+
+```shell
+curl -sS https://get.k8s.io | bash
+```
+
+or
+
+```shell
+wget -q -O - https://get.k8s.io | bash
+```
+
+Once this command completes, you will have a master VM and four worker VMs, running as a Kubernetes cluster.
+
+By default, some containers will already be running on your cluster. Containers like `fluentd` provide [logging](/docs/concepts/cluster-administration/logging/), while `heapster` provides [monitoring](http://releases.k8s.io/master/cluster/addons/cluster-monitoring/README.md) services.
+
+The script run by the commands above creates a cluster with the name/prefix "kubernetes". It defines one specific cluster config, so you can't run it more than once.
+
+Alternately, you can download and install the latest Kubernetes release from [this page](https://github.com/kubernetes/kubernetes/releases), then run the `/cluster/kube-up.sh` script to start the cluster:
+
+```shell
+cd kubernetes
+cluster/kube-up.sh
+```
+
+If you want more than one cluster running in your project, want to use a different name, or want a different number of worker nodes, see the `/cluster/gce/config-default.sh` file for more fine-grained configuration before you start up your cluster.
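+
+As a sketch, assuming the variable names in `config-default.sh` (such as `NUM_NODES`) are unchanged, you could override selected defaults in your shell before starting the cluster:
+
+```shell
+# Override selected defaults from cluster/gce/config-default.sh
+export NUM_NODES=2   # start 2 worker nodes instead of the default 4
+cluster/kube-up.sh
+```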
+
+If you run into trouble, please see the section on [troubleshooting](/docs/setup/turnkey/gce/#troubleshooting), post to the
+[kubernetes-users group](https://groups.google.com/forum/#!forum/kubernetes-users), or come ask questions on [Slack](/docs/troubleshooting/#slack).
+
+The next few steps will show you:
+
+1. How to set up the command line client on your workstation to manage the cluster
+1. Examples of how to use the cluster
+1. How to delete the cluster
+1. How to start clusters with non-default options (like larger clusters)
+
+### Installing the Kubernetes command line tools on your workstation
+
+The cluster startup script will leave you with a running cluster and a `kubernetes` directory on your workstation.
+
+The [kubectl](/docs/user-guide/kubectl/) tool controls the Kubernetes cluster
+manager. It lets you inspect your cluster resources, create, delete, and update
+components, and much more. You will use it to look at your new cluster and bring
+up example apps.
+
+You can use `gcloud` to install the `kubectl` command-line tool on your workstation:
+
+ gcloud components install kubectl
+
+**Note:** The kubectl version bundled with `gcloud` may be older than the one
+downloaded by the get.k8s.io install script. See the [Installing kubectl](/docs/tasks/kubectl/install/)
+document for how to set up the latest `kubectl` on your workstation.
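+
+You can check which `kubectl` binary is on your `PATH` and which client version it reports:
+
+```shell
+which kubectl
+kubectl version --client
+```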
+
+### Getting started with your cluster
+
+#### Inspect your cluster
+
+Once `kubectl` is in your path, you can use it to look at your cluster. E.g., running:
+
+```shell
+$ kubectl get --all-namespaces services
+```
+
+should show a set of [services](/docs/user-guide/services) that look something like this:
+
+```shell
+NAMESPACE     NAME         TYPE        CLUSTER_IP   EXTERNAL_IP   PORT(S)         AGE
+default       kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP         1d
+kube-system   kube-dns     ClusterIP   10.0.0.2     <none>        53/TCP,53/UDP   1d
+kube-system   kube-ui      ClusterIP   10.0.0.3     <none>        80/TCP          1d
+...
+```
+
+Similarly, you can take a look at the set of [pods](/docs/user-guide/pods) that were created during cluster startup.
+You can do this via the
+
+```shell
+$ kubectl get --all-namespaces pods
+```
+
+command.
+
+You'll see a list of pods that looks something like this (the name specifics will be different):
+
+```shell
+NAMESPACE NAME READY STATUS RESTARTS AGE
+kube-system fluentd-cloud-logging-kubernetes-minion-63uo 1/1 Running 0 14m
+kube-system fluentd-cloud-logging-kubernetes-minion-c1n9 1/1 Running 0 14m
+kube-system fluentd-cloud-logging-kubernetes-minion-c4og 1/1 Running 0 14m
+kube-system fluentd-cloud-logging-kubernetes-minion-ngua 1/1 Running 0 14m
+kube-system kube-dns-v5-7ztia 3/3 Running 0 15m
+kube-system kube-ui-v1-curt1 1/1 Running 0 15m
+kube-system monitoring-heapster-v5-ex4u3 1/1 Running 1 15m
+kube-system monitoring-influx-grafana-v1-piled 2/2 Running 0 15m
+```
+
+Some of the pods may take a few seconds to start up (during this time they'll show `Pending`), but check that they all show as `Running` after a short period.
+
+#### Run some examples
+
+Then, see [a simple nginx example](/docs/user-guide/simple-nginx) to try out your new cluster.
+
+For more complete applications, please look in the [examples directory](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/). The [guestbook example](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/guestbook/) is a good "getting started" walkthrough.
+
+### Tearing down the cluster
+
+To remove/delete/teardown the cluster, use the `kube-down.sh` script.
+
+```shell
+cd kubernetes
+cluster/kube-down.sh
+```
+
+Likewise, the `kube-up.sh` in the same directory will bring it back up. You do not need to rerun the `curl` or `wget` command: everything needed to set up the Kubernetes cluster is now on your workstation.
+
+### Customizing
+
+The script above relies on Google Storage to stage the Kubernetes release. It
+will then start (by default) a single master VM along with 4 worker VMs. You
+can tweak some of these parameters by editing `kubernetes/cluster/gce/config-default.sh`.
+You can view a transcript of a successful cluster creation
+[here](https://gist.github.com/satnam6502/fc689d1b46db9772adea).
+
+### Troubleshooting
+
+#### Project settings
+
+You need to have the Google Cloud Storage API and the Google Cloud Storage
+JSON API enabled. They are activated by default for new projects; otherwise, they
+can be enabled in the Google Cloud Console. See the [Google Cloud Storage JSON
+API Overview](https://cloud.google.com/storage/docs/json_api/) for more
+details.
+
+Also ensure that, as listed in the [Prerequisites section](#prerequisites), you've enabled the `Compute Engine Instance Group Manager API`, and can start up a GCE VM from the command line as in the [GCE Quickstart](https://cloud.google.com/compute/docs/quickstart) instructions.
+
+#### Cluster initialization hang
+
+If the Kubernetes startup script hangs waiting for the API to be reachable, you can troubleshoot by SSHing into the master and node VMs and looking at logs such as `/var/log/startupscript.log`.
+
+**Once you fix the issue, you should run `kube-down.sh` to cleanup** after the partial cluster creation, before running `kube-up.sh` to try again.
+
+#### SSH
+
+If you're having trouble SSHing into your instances, ensure the GCE firewall
+isn't blocking port 22 to your VMs. By default, this should work, but if you
+have edited firewall rules or created a new non-default network, you'll need to
+expose it: `gcloud compute firewall-rules create default-ssh --network=<network-name>
+--description "SSH allowed from anywhere" --allow tcp:22`
+
+Additionally, your GCE SSH key must either have no passcode or you need to be
+using `ssh-agent`.
+
+#### Networking
+
+The instances must be able to connect to each other using their private IP. The
+script uses the "default" network, which should have a firewall rule called
+"default-allow-internal" that allows traffic on any port on the private IPs.
+If this rule is missing from the default network, or if you change the network
+being used in `cluster/config-default.sh`, create a new rule with the following
+field values (an example `gcloud` command follows the list):
+
+* Source Ranges: `10.0.0.0/8`
+* Allowed Protocols and Port: `tcp:1-65535;udp:1-65535;icmp`
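+
+If you need to recreate the rule on a custom network, a command along these lines should work; the rule and network names are illustrative:
+
+```shell
+# Recreate the internal-traffic firewall rule on a custom network
+gcloud compute firewall-rules create my-network-allow-internal \
+  --network=my-network \
+  --source-ranges=10.0.0.0/8 \
+  --allow=tcp:1-65535,udp:1-65535,icmp
+```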
+
+## Support Level
+
+
+IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level
+-------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ----------------------------
+GCE | Saltstack | Debian | GCE | [docs](/docs/setup/turnkey/gce/) | | Project
+
+For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart.
+
+## Further reading
+
+Please see the [Kubernetes docs](/docs/) for more details on administering
+and using a Kubernetes cluster.
diff --git a/content/ko/docs/setup/turnkey/stackpoint.md b/content/ko/docs/setup/turnkey/stackpoint.md
new file mode 100644
index 000000000..cebe925f5
--- /dev/null
+++ b/content/ko/docs/setup/turnkey/stackpoint.md
@@ -0,0 +1,189 @@
+---
+reviewers:
+- baldwinspc
+title: Running Kubernetes on Multiple Clouds with Stackpoint.io
+content_template: templates/concept
+---
+
+{{% capture overview %}}
+
+[StackPointCloud](https://stackpoint.io/) is the universal control plane for Kubernetes Anywhere. StackPointCloud allows you to deploy and manage a Kubernetes cluster to the cloud provider of your choice in 3 steps using a web-based interface.
+
+{{% /capture %}}
+
+{{% capture body %}}
+
+## AWS
+
+To create a Kubernetes cluster on AWS, you will need an Access Key ID and a Secret Access Key from AWS.
+
+1. Choose a Provider
+
+ a. Log in to [stackpoint.io](https://stackpoint.io) with a GitHub, Google, or Twitter account.
+
+ b. Click **+ADD A CLUSTER NOW**.
+
+ c. Click to select Amazon Web Services (AWS).
+
+1. Configure Your Provider
+
+ a. Add your Access Key ID and a Secret Access Key from AWS. Select your default StackPointCloud SSH keypair, or click **ADD SSH KEY** to add a new keypair.
+
+ b. Click **SUBMIT** to submit the authorization information.
+
+1. Configure Your Cluster
+
+ Choose any extra options you may want to include with your cluster, then click **SUBMIT** to create the cluster.
+
+1. Run the Cluster
+
+ You can monitor the status of your cluster and suspend or delete it from [your stackpoint.io dashboard](https://stackpoint.io/#/clusters).
+
+ For information on using and managing a Kubernetes cluster on AWS, [consult the Kubernetes documentation](/docs/getting-started-guides/aws/).
+
+
+## GCE
+
+To create a Kubernetes cluster on GCE, you will need the Service Account JSON Data from Google.
+
+1. Choose a Provider
+
+ a. Log in to [stackpoint.io](https://stackpoint.io) with a GitHub, Google, or Twitter account.
+
+ b. Click **+ADD A CLUSTER NOW**.
+
+ c. Click to select Google Compute Engine (GCE).
+
+1. Configure Your Provider
+
+ a. Add your Service Account JSON Data from Google. Select your default StackPointCloud SSH keypair, or click **ADD SSH KEY** to add a new keypair.
+
+ b. Click **SUBMIT** to submit the authorization information.
+
+1. Configure Your Cluster
+
+ Choose any extra options you may want to include with your cluster, then click **SUBMIT** to create the cluster.
+
+1. Run the Cluster
+
+ You can monitor the status of your cluster and suspend or delete it from [your stackpoint.io dashboard](https://stackpoint.io/#/clusters).
+
+ For information on using and managing a Kubernetes cluster on GCE, [consult the Kubernetes documentation](/docs/getting-started-guides/gce/).
+
+
+## Google Kubernetes Engine
+
+To create a Kubernetes cluster on Google Kubernetes Engine, you will need the Service Account JSON Data from Google.
+
+1. Choose a Provider
+
+ a. Log in to [stackpoint.io](https://stackpoint.io) with a GitHub, Google, or Twitter account.
+
+ b. Click **+ADD A CLUSTER NOW**.
+
+ c. Click to select Google Kubernetes Engine.
+
+1. Configure Your Provider
+
+ a. Add your Service Account JSON Data from Google. Select your default StackPointCloud SSH keypair, or click **ADD SSH KEY** to add a new keypair.
+
+ b. Click **SUBMIT** to submit the authorization information.
+
+1. Configure Your Cluster
+
+ Choose any extra options you may want to include with your cluster, then click **SUBMIT** to create the cluster.
+
+1. Run the Cluster
+
+ You can monitor the status of your cluster and suspend or delete it from [your stackpoint.io dashboard](https://stackpoint.io/#/clusters).
+
+ For information on using and managing a Kubernetes cluster on Google Kubernetes Engine, consult [the official documentation](/docs/home/).
+
+
+## DigitalOcean
+
+To create a Kubernetes cluster on DigitalOcean, you will need a DigitalOcean API Token.
+
+1. Choose a Provider
+
+ a. Log in to [stackpoint.io](https://stackpoint.io) with a GitHub, Google, or Twitter account.
+
+ b. Click **+ADD A CLUSTER NOW**.
+
+ c. Click to select DigitalOcean.
+
+1. Configure Your Provider
+
+ a. Add your DigitalOcean API Token. Select your default StackPointCloud SSH keypair, or click **ADD SSH KEY** to add a new keypair.
+
+ b. Click **SUBMIT** to submit the authorization information.
+
+1. Configure Your Cluster
+
+ Choose any extra options you may want to include with your cluster, then click **SUBMIT** to create the cluster.
+
+1. Run the Cluster
+
+ You can monitor the status of your cluster and suspend or delete it from [your stackpoint.io dashboard](https://stackpoint.io/#/clusters).
+
+ For information on using and managing a Kubernetes cluster on DigitalOcean, consult [the official documentation](/docs/home/).
+
+
+## Microsoft Azure
+
+To create a Kubernetes cluster on Microsoft Azure, you will need an Azure Subscription ID, Username/Email, and Password.
+
+1. Choose a Provider
+
+ a. Log in to [stackpoint.io](https://stackpoint.io) with a GitHub, Google, or Twitter account.
+
+ b. Click **+ADD A CLUSTER NOW**.
+
+ c. Click to select Microsoft Azure.
+
+1. Configure Your Provider
+
+ a. Add your Azure Subscription ID, Username/Email, and Password. Select your default StackPointCloud SSH keypair, or click **ADD SSH KEY** to add a new keypair.
+
+ b. Click **SUBMIT** to submit the authorization information.
+
+1. Configure Your Cluster
+
+ Choose any extra options you may want to include with your cluster, then click **SUBMIT** to create the cluster.
+
+1. Run the Cluster
+
+ You can monitor the status of your cluster and suspend or delete it from [your stackpoint.io dashboard](https://stackpoint.io/#/clusters).
+
+ For information on using and managing a Kubernetes cluster on Azure, [consult the Kubernetes documentation](/docs/getting-started-guides/azure/).
+
+
+## Packet
+
+To create a Kubernetes cluster on Packet, you will need a Packet API Key.
+
+1. Choose a Provider
+
+ a. Log in to [stackpoint.io](https://stackpoint.io) with a GitHub, Google, or Twitter account.
+
+ b. Click **+ADD A CLUSTER NOW**.
+
+ c. Click to select Packet.
+
+1. Configure Your Provider
+
+ a. Add your Packet API Key. Select your default StackPointCloud SSH keypair, or click **ADD SSH KEY** to add a new keypair.
+
+ b. Click **SUBMIT** to submit the authorization information.
+
+1. Configure Your Cluster
+
+ Choose any extra options you may want to include with your cluster, then click **SUBMIT** to create the cluster.
+
+1. Run the Cluster
+
+ You can monitor the status of your cluster and suspend or delete it from [your stackpoint.io dashboard](https://stackpoint.io/#/clusters).
+
+ For information on using and managing a Kubernetes cluster on Packet, consult [the official documentation](/docs/home/).
+
+{{% /capture %}}
diff --git a/content/ko/docs/tutorials/_index.md b/content/ko/docs/tutorials/_index.md
new file mode 100644
index 000000000..04013216c
--- /dev/null
+++ b/content/ko/docs/tutorials/_index.md
@@ -0,0 +1,77 @@
+---
+title: Tutorials
+main_menu: true
+weight: 60
+content_template: templates/concept
+---
+
+{{% capture overview %}}
+
+This section of the Kubernetes documentation contains tutorials.
+A tutorial shows how to accomplish a goal that is larger than a single
+[task](/docs/tasks/). Typically a tutorial has several sections,
+each of which has a sequence of steps.
+Before walking through each tutorial, you may want to bookmark the
+[Standardized Glossary](/docs/reference/glossary/) page for later reference.
+
+{{% /capture %}}
+
+{{% capture body %}}
+
+## Basics
+
+* [Kubernetes Basics](/docs/tutorials/kubernetes-basics/) is an in-depth interactive tutorial that helps you understand the Kubernetes system and try out some basic Kubernetes features.
+
+* [Scalable Microservices with Kubernetes (Udacity)](https://www.udacity.com/course/scalable-microservices-with-kubernetes--ud615)
+
+* [Introduction to Kubernetes (edX)](https://www.edx.org/course/introduction-kubernetes-linuxfoundationx-lfs158x#)
+
+* [Hello Minikube](/docs/tutorials/hello-minikube/)
+
+## Configuration
+
+* [Configuring Redis Using a ConfigMap](/docs/tutorials/configuration/configure-redis-using-configmap/)
+
+## Stateless Applications
+
+* [Exposing an External IP Address to Access an Application in a Cluster](/docs/tutorials/stateless-application/expose-external-ip-address/)
+
+* [Example: Deploying PHP Guestbook application with Redis](/docs/tutorials/stateless-application/guestbook/)
+
+## Stateful Applications
+
+* [StatefulSet Basics](/docs/tutorials/stateful-application/basic-stateful-set/)
+
+* [Example: WordPress and MySQL with Persistent Volumes](/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/)
+
+* [Example: Deploying Cassandra with Stateful Sets](/docs/tutorials/stateful-application/cassandra/)
+
+* [Running ZooKeeper, A CP Distributed System](/docs/tutorials/stateful-application/zookeeper/)
+
+## CI/CD Pipeline
+
+* [Set Up a CI/CD Pipeline with Kubernetes Part 1: Overview](https://www.linux.com/blog/learn/chapter/Intro-to-Kubernetes/2017/5/set-cicd-pipeline-kubernetes-part-1-overview)
+
+* [Set Up a CI/CD Pipeline with a Jenkins Pod in Kubernetes (Part 2)](https://www.linux.com/blog/learn/chapter/Intro-to-Kubernetes/2017/6/set-cicd-pipeline-jenkins-pod-kubernetes-part-2)
+
+* [Run and Scale a Distributed Crossword Puzzle App with CI/CD on Kubernetes (Part 3)](https://www.linux.com/blog/learn/chapter/intro-to-kubernetes/2017/6/run-and-scale-distributed-crossword-puzzle-app-cicd-kubernetes-part-3)
+
+* [Set Up CI/CD for a Distributed Crossword Puzzle App on Kubernetes (Part 4)](https://www.linux.com/blog/learn/chapter/intro-to-kubernetes/2017/6/set-cicd-distributed-crossword-puzzle-app-kubernetes-part-4)
+
+## Clusters
+
+* [AppArmor](/docs/tutorials/clusters/apparmor/)
+
+## Services
+
+* [Using Source IP](/docs/tutorials/services/source-ip/)
+
+{{% /capture %}}
+
+{{% capture whatsnext %}}
+
+If you would like to write a tutorial, see
+[Using Page Templates](/docs/home/contribute/page-templates/)
+for information about the tutorial page type and the tutorial template.
+
+{{% /capture %}}
diff --git a/content/ko/docs/tutorials/hello-minikube.md b/content/ko/docs/tutorials/hello-minikube.md
new file mode 100644
index 000000000..7f02a333a
--- /dev/null
+++ b/content/ko/docs/tutorials/hello-minikube.md
@@ -0,0 +1,425 @@
+---
+title: Hello Minikube
+content_template: templates/tutorial
+weight: 5
+---
+
+{{% capture overview %}}
+
+The goal of this tutorial is for you to turn a simple Hello World Node.js app
+into an application running on Kubernetes. The tutorial shows you how to
+take code that you have developed on your machine, turn it into a Docker
+container image and then run that image on [Minikube](/docs/getting-started-guides/minikube).
+Minikube provides a simple way of running Kubernetes on your local machine for free.
+
+{{% /capture %}}
+
+{{% capture objectives %}}
+
+* Run a hello world Node.js application.
+* Deploy the application to Minikube.
+* View application logs.
+* Update the application image.
+
+
+{{% /capture %}}
+
+{{% capture prerequisites %}}
+
+* For macOS, you can use [Homebrew](https://brew.sh) to install Minikube.
+
+ {{< note >}}
+ **Note:** If you see the following Homebrew error when you run `brew update` after you update your computer to macOS 10.13:
+
+ ```
+ Error: /usr/local is not writable. You should change the ownership
+ and permissions of /usr/local back to your user account:
+ sudo chown -R $(whoami) /usr/local
+ ```
+ You can resolve the issue by reinstalling Homebrew:
+ ```
+ /usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
+ ```
+ {{< /note >}}
+
+* [NodeJS](https://nodejs.org/en/) is required to run the sample application.
+
+* Install Docker. On macOS, we recommend
+[Docker for Mac](https://docs.docker.com/engine/installation/mac/).
+
+
+{{% /capture %}}
+
+{{% capture lessoncontent %}}
+
+## Create a Minikube cluster
+
+This tutorial uses [Minikube](https://github.com/kubernetes/minikube) to
+create a local cluster. This tutorial also assumes you are using
+[Docker for Mac](https://docs.docker.com/engine/installation/mac/)
+on macOS. If you are on a different platform like Linux, or using VirtualBox
+instead of Docker for Mac, the instructions to install Minikube may be
+slightly different. For general Minikube installation instructions, see
+the [Minikube installation guide](/docs/getting-started-guides/minikube/).
+
+Use Homebrew to install the latest Minikube release:
+```shell
+brew cask install minikube
+```
+
+Install the HyperKit driver, as described by the
+[Minikube driver installation guide](https://github.com/kubernetes/minikube/blob/master/docs/drivers.md#hyperkit-driver).
+
+Use Homebrew to download the `kubectl` command-line tool, which you can
+use to interact with Kubernetes clusters:
+
+```shell
+brew install kubernetes-cli
+```
+
+Determine whether you can access sites like [https://cloud.google.com/container-registry/](https://cloud.google.com/container-registry/) directly without a proxy, by opening a new terminal and using
+
+```shell
+curl --proxy "" https://cloud.google.com/container-registry/
+```
+
+Make sure that the Docker daemon is started. You can determine if docker is running by using a command such as:
+
+```shell
+docker images
+```
+
+If NO proxy is required, start the Minikube cluster:
+
+```shell
+minikube start --vm-driver=hyperkit
+```
+If a proxy server is required, use the following command to start the Minikube cluster with the proxy settings:
+
+```shell
+minikube start --vm-driver=hyperkit --docker-env HTTP_PROXY=http://your-http-proxy-host:your-http-proxy-port --docker-env HTTPS_PROXY=http(s)://your-https-proxy-host:your-https-proxy-port
+```
+
+The `--vm-driver=hyperkit` flag specifies that you are using the HyperKit hypervisor that ships with Docker for Mac. The
+default VM driver is VirtualBox.
+
+Now set the Minikube context. The context is what determines which cluster
+`kubectl` is interacting with. You can see all your available contexts in the
+`~/.kube/config` file.
+
+```shell
+kubectl config use-context minikube
+```
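+
+You can also list every context that `kubectl` knows about:
+
+```shell
+kubectl config get-contexts
+```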
+
+Verify that `kubectl` is configured to communicate with your cluster:
+
+```shell
+kubectl cluster-info
+```
+
+Open the Kubernetes dashboard in a browser:
+
+```shell
+minikube dashboard
+```
+
+## Create your Node.js application
+
+The next step is to write the application. Save this code in a folder named `hellonode`
+with the filename `server.js`:
+
+{{< codenew language="js" file="minikube/server.js" >}}
+
+Run your application:
+
+```shell
+node server.js
+```
+
+You should be able to see your "Hello World!" message at http://localhost:8080/.
+
+Stop the running Node.js server by pressing **Ctrl-C**.
+
+The next step is to package your application in a Docker container.
+
+## Create a Docker container image
+
+Create a file, also in the `hellonode` folder, named `Dockerfile`. A Dockerfile describes
+the image that you want to build. You can build a Docker container image by extending an
+existing image. The image in this tutorial extends an existing Node.js image.
+
+{{< codenew language="conf" file="minikube/Dockerfile" >}}
+
+This recipe for the Docker image starts from the official Node.js LTS image
+found in the Docker registry, exposes port 8080, copies your `server.js` file
+to the image and starts the Node.js server.
+
+Because this tutorial uses Minikube, instead of pushing your Docker image to a
+registry, you can simply build the image using the same Docker host as
+the Minikube VM, so that the images are automatically present. To do so, make
+sure you are using the Minikube Docker daemon:
+
+```shell
+eval $(minikube docker-env)
+```
+
+{{< note >}}
+**Note:** Later, when you no longer wish to use the Minikube host, you can undo
+this change by running `eval $(minikube docker-env -u)`.
+{{< /note >}}
+
+Build your Docker image, using the Minikube Docker daemon (mind the trailing dot):
+
+```shell
+docker build -t hello-node:v1 .
+```
+
+Now the Minikube VM can run the image you built.
+
+## Create a Deployment
+
+A Kubernetes [*Pod*](/docs/concepts/workloads/pods/pod/) is a group of one or more Containers,
+tied together for the purposes of administration and networking. The Pod in this
+tutorial has only one Container. A Kubernetes
+[*Deployment*](/docs/concepts/workloads/controllers/deployment/) checks on the health of your
+Pod and restarts the Pod's Container if it terminates. Deployments are the
+recommended way to manage the creation and scaling of Pods.
+
+Use the `kubectl run` command to create a Deployment that manages a Pod. The
+Pod runs a Container based on your `hello-node:v1` Docker image. Set the
+`--image-pull-policy` flag to `Never` to always use the local image, rather than
+pulling it from your Docker registry (since you haven't pushed it there):
+
+```shell
+kubectl run hello-node --image=hello-node:v1 --port=8080 --image-pull-policy=Never
+```
+
+View the Deployment:
+
+
+```shell
+kubectl get deployments
+```
+
+Output:
+
+
+```shell
+NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
+hello-node 1 1 1 1 3m
+```
+
+View the Pod:
+
+
+```shell
+kubectl get pods
+```
+
+Output:
+
+
+```shell
+NAME READY STATUS RESTARTS AGE
+hello-node-714049816-ztzrb 1/1 Running 0 6m
+```
+
+View cluster events:
+
+```shell
+kubectl get events
+```
+
+View the `kubectl` configuration:
+
+```shell
+kubectl config view
+```
+
+For more information about `kubectl` commands, see the
+[kubectl overview](/docs/user-guide/kubectl-overview/).
+
+## Create a Service
+
+By default, the Pod is only accessible by its internal IP address within the
+Kubernetes cluster. To make the `hello-node` Container accessible from outside the
+Kubernetes virtual network, you have to expose the Pod as a
+Kubernetes [*Service*](/docs/concepts/services-networking/service/).
+
+From your development machine, you can expose the Pod to the public internet
+using the `kubectl expose` command:
+
+```shell
+kubectl expose deployment hello-node --type=LoadBalancer
+```
+
+View the Service you just created:
+
+```shell
+kubectl get services
+```
+
+Output:
+
+```shell
+NAME         CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
+hello-node   10.0.0.71    <pending>     8080/TCP   6m
+kubernetes   10.0.0.1     <none>        443/TCP    14d
+```
+
+The `--type=LoadBalancer` flag indicates that you want to expose your Service
+outside of the cluster. On cloud providers that support load balancers,
+an external IP address would be provisioned to access the Service. On Minikube,
+the `LoadBalancer` type makes the Service accessible through the `minikube service`
+command.
+
+```shell
+minikube service hello-node
+```
+
+This automatically opens up a browser window using a local IP address that
+serves your app and shows the "Hello World" message.
+
+Assuming you've sent requests to your new web service using the browser or curl,
+you should now be able to see some logs (substitute the Pod name reported by `kubectl get pods`):
+
+```shell
+kubectl logs <POD-NAME>
+```
+
+## Update your app
+
+Edit your `server.js` file to return a new message:
+
+```javascript
+response.end('Hello World Again!');
+
+```
+
+Build a new version of your image (mind the trailing dot):
+
+```shell
+docker build -t hello-node:v2 .
+```
+
+Update the image of your Deployment:
+
+```shell
+kubectl set image deployment/hello-node hello-node=hello-node:v2
+```
+
+Run your app again to view the new message:
+
+```shell
+minikube service hello-node
+```
+
+## Enable addons
+
+Minikube has a set of built-in addons that can be enabled, disabled and opened in the local Kubernetes environment.
+
+First list the currently supported addons:
+
+```shell
+minikube addons list
+```
+
+Output:
+
+```shell
+- storage-provisioner: enabled
+- kube-dns: enabled
+- registry: disabled
+- registry-creds: disabled
+- addon-manager: enabled
+- dashboard: disabled
+- default-storageclass: enabled
+- coredns: disabled
+- heapster: disabled
+- efk: disabled
+- ingress: disabled
+```
+
+Minikube must be running for these commands to take effect. To enable the `heapster` addon, for example:
+
+```shell
+minikube addons enable heapster
+```
+
+Output:
+
+```shell
+heapster was successfully enabled
+```
+
+View the Pod and Service you just created:
+
+```shell
+kubectl get po,svc -n kube-system
+```
+
+Output:
+
+```shell
+NAME READY STATUS RESTARTS AGE
+po/heapster-zbwzv 1/1 Running 0 2m
+po/influxdb-grafana-gtht9 2/2 Running 0 2m
+
+NAME                      TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)             AGE
+svc/heapster              NodePort    10.0.0.52    <none>        80:31655/TCP        2m
+svc/monitoring-grafana    NodePort    10.0.0.33    <none>        80:30002/TCP        2m
+svc/monitoring-influxdb   ClusterIP   10.0.0.43    <none>        8083/TCP,8086/TCP   2m
+```
+
+Open the endpoint for interacting with heapster in a browser:
+
+```shell
+minikube addons open heapster
+```
+
+Output:
+
+```shell
+Opening kubernetes service kube-system/monitoring-grafana in default browser...
+```
+
+## Clean up
+
+Now you can clean up the resources you created in your cluster:
+
+```shell
+kubectl delete service hello-node
+kubectl delete deployment hello-node
+```
+
+Optionally, force removal of the Docker images created:
+
+```shell
+docker rmi hello-node:v1 hello-node:v2 -f
+```
+
+Optionally, stop the Minikube VM:
+
+```shell
+minikube stop
+eval $(minikube docker-env -u)
+```
+
+Optionally, delete the Minikube VM:
+
+```shell
+minikube delete
+```
+
+{{% /capture %}}
+
+
+{{% capture whatsnext %}}
+
+* Learn more about [Deployment objects](/docs/concepts/workloads/controllers/deployment/).
+* Learn more about [Deploying applications](/docs/user-guide/deploying-applications/).
+* Learn more about [Service objects](/docs/concepts/services-networking/service/).
+
+{{% /capture %}}
+
+
diff --git a/content/ko/docs/tutorials/kubernetes-basics/_index.md b/content/ko/docs/tutorials/kubernetes-basics/_index.md
new file mode 100644
index 000000000..da5e3cb31
--- /dev/null
+++ b/content/ko/docs/tutorials/kubernetes-basics/_index.md
@@ -0,0 +1,5 @@
+---
+title: Kubernetes Basics
+weight: 10
+---
+
diff --git a/content/ko/docs/tutorials/kubernetes-basics/create-cluster/_index.md b/content/ko/docs/tutorials/kubernetes-basics/create-cluster/_index.md
new file mode 100644
index 000000000..b81792654
--- /dev/null
+++ b/content/ko/docs/tutorials/kubernetes-basics/create-cluster/_index.md
@@ -0,0 +1,4 @@
+---
+title: Create a Cluster
+weight: 10
+---
diff --git a/content/ko/docs/tutorials/kubernetes-basics/create-cluster/cluster-interactive.html b/content/ko/docs/tutorials/kubernetes-basics/create-cluster/cluster-interactive.html
new file mode 100644
index 000000000..0e3d2f5bc
--- /dev/null
+++ b/content/ko/docs/tutorials/kubernetes-basics/create-cluster/cluster-interactive.html
@@ -0,0 +1,41 @@
+---
+title: Interactive Tutorial - Creating a Cluster
+weight: 20
+---
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ To interact with the Terminal, please use the desktop/tablet version
+
+
+
+
diff --git a/content/ko/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro.html b/content/ko/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro.html
new file mode 100644
index 000000000..09f6b5c95
--- /dev/null
+++ b/content/ko/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro.html
@@ -0,0 +1,135 @@
+---
+title: Using Minikube to Create a Cluster
+weight: 10
+---
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
Objectives
+
+
Learn what a Kubernetes cluster is.
+
Learn what Minikube is.
+
Start a Kubernetes cluster using an online terminal.
+
+
+
+
+
Kubernetes Clusters
+
+ Kubernetes coordinates a highly available cluster of computers that are connected to work as a
+ single unit. The abstractions in Kubernetes allow you to deploy containerized applications
+ to a cluster without tying them specifically to individual machines. To make use of this new model
+ of deployment, applications need to be packaged in a way that decouples them from individual hosts:
+ they need to be containerized. Containerized applications are more flexible and available than in
+ past deployment models, where applications were installed directly onto specific machines as
+ packages deeply integrated into the host. Kubernetes automates the distribution and scheduling of
+ application containers across a cluster in a more efficient way. Kubernetes is an open-source platform and is
+ production-ready.
+
+
A Kubernetes cluster consists of two types of resources:
+
+
The Master coordinates the cluster
+
Nodes are the workers that run applications
+
+
+
+
+
+
+
Summary:
+
+
Kubernetes cluster
+
Minikube
+
+
+
+
+ Kubernetes is a production-grade, open-source platform that orchestrates the placement
+ (scheduling) and execution of application containers within and across computer clusters.
+
+
+
+
+
+
+
+
+
Cluster Diagram
+
+
+
+
+
+
+
+
+
+
+
+
+
The Master is responsible for managing the cluster. The master coordinates all activities in
+ your cluster, such as scheduling applications, maintaining applications' desired state, scaling
+ applications, and rolling out new updates.
+
A node is a VM or a physical computer that serves as a worker machine in a Kubernetes cluster.
+ Each node has a Kubelet, which is an agent for managing the node and communicating with the
+ Kubernetes master. The node should also have tools for handling container operations, such as Docker or rkt. A
+ Kubernetes cluster that handles production traffic should have a minimum of three nodes.
+
+
+
+
+
Masters manage the cluster and the nodes are used to host the running applications.
+
+
+
+
+
+
+
When you deploy applications on Kubernetes, you tell the master to start the application containers.
+ The master schedules the containers to run on the cluster's nodes. The nodes communicate with the
+ master using the Kubernetes API, which the master exposes. End users can also use the
+ Kubernetes API directly to interact with the cluster.
+
+
A Kubernetes cluster can be deployed on either physical or virtual machines. To get started with
+ Kubernetes development, you can use Minikube.
+ Minikube is a lightweight Kubernetes implementation that creates a VM on your local machine and
+ deploys a simple cluster containing only one node. Minikube is available for Linux, macOS, and
+ Windows systems. The Minikube CLI provides basic bootstrapping operations for working with your
+ cluster, including start, stop, status, and delete. For this tutorial, however, you'll use a
+ provided online terminal with Minikube pre-installed.
+
+
Now that you know what Kubernetes is, let's go to the online tutorial and start our first
+ cluster!
+
+
+
diff --git a/content/ko/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html b/content/ko/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html
new file mode 100644
index 000000000..ba80da88b
--- /dev/null
+++ b/content/ko/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html
@@ -0,0 +1,132 @@
+---
+title: Using kubectl to Create a Deployment
+weight: 10
+---
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
Objectives
+
+
Learn about application Deployments.
+
Deploy your first app on Kubernetes with kubectl.
+
+
+
+
+
Kubernetes Deployments
+
+ Once you have a running Kubernetes cluster, you can deploy your containerized applications on top of
+ it.
+ To do so, you create a Kubernetes Deployment configuration. The Deployment instructs
+ Kubernetes
+ how to create and update instances of your application. Once you've created a Deployment, the
+ Kubernetes
+ master schedules mentioned application instances onto individual Nodes in the cluster.
+
+
+
Once the application instances are created, a Kubernetes Deployment Controller continuously monitors
+ those instances. If the Node hosting an instance goes down or is deleted, the Deployment controller
+ replaces it. This provides a self-healing mechanism to address machine failure or
+ maintenance.
+
+
In a pre-orchestration world, installation scripts would often be used to start applications, but
+ they did not allow recovery from machine failure. By both creating your application instances and
+ keeping them running across Nodes, Kubernetes Deployments provide a fundamentally different approach
+ to application management.
+
+
+
+
+
+
Summary:
+
+
Deployments
+
Kubectl
+
+
+
+
+ A Deployment is responsible for creating and updating instances of your application
+
+
+
+
+
+
+
+
+
Deploying your first app on Kubernetes
+
+
+
+
+
+
+
+
+
+
+
+
+
+
You can create and manage a Deployment by using the Kubernetes command line interface, Kubectl.
+ Kubectl uses the Kubernetes API to interact with the cluster. In this module, you'll learn the most
+ common Kubectl commands needed to create Deployments that run your applications on a Kubernetes
+ cluster.
+
+
When you create a Deployment, you'll need to specify the container image for your application and the
+ number of replicas that you want to run. You can change that information later by updating your
+ Deployment; Modules 5 and 6 of the bootcamp discuss how you
+ can scale and update your Deployments.
+
+
+
+
+
+
Applications need to be packaged into one of the supported container formats in order to be
+ deployed on Kubernetes
+
+
+
+
+
+
+
+
For our first Deployment, we'll use a Node.js application packaged
+ in a Docker container. The source code and the Dockerfile are available in the GitHub
+ repository for the Kubernetes Basics.
+
+
Now that you know what Deployments are, let's go to the online tutorial and deploy our first app!
When you created a Deployment in Module 2, Kubernetes created a Pod to
+ host your application instance. A Pod is a Kubernetes abstraction that represents a group of one or
+ more application containers (such as Docker or rkt), and some shared resources for those containers.
+ Those resources include:
+
+
Shared storage, as Volumes
+
Networking, as a unique cluster IP address
+
Information about how to run each container, such as the container image version or specific
+ ports to use
+
+
+
A Pod models an application-specific "logical host" and can contain different application containers
+ which are relatively tightly coupled. For example, a Pod might include both the container with your
+ Node.js app as well as a different container that feeds the data to be published by the Node.js
+ webserver. The containers in a Pod share an IP Address and port space, are always co-located and
+ co-scheduled, and run in a shared context on the same Node.
+
+
Pods are the atomic unit on the Kubernetes platform. When we create a Deployment on Kubernetes, that
+ Deployment creates Pods with containers inside them (as opposed to creating containers directly).
+ Each Pod is tied to the Node where it is scheduled, and remains there until termination (according
+ to restart policy) or deletion. In case of a Node failure, identical Pods are scheduled on other
+ available Nodes in the cluster.
+
+
+
+
+
Summary:
+
+
Pods
+
Nodes
+
Kubectl main commands
+
+
+
+
+ A Pod is a group of one or more application containers (such as Docker or rkt) and includes
+ shared storage (volumes), IP address and information about how to run them.
+
+
+
+
+
+
+
+
+
Pods overview
+
+
+
+
+
+
+
+
+
+
+
+
+
Nodes
+
A Pod always runs on a Node. A Node is a worker machine in Kubernetes and may be either a
+ virtual or a physical machine, depending on the cluster. Each Node is managed by the Master. A Node
+ can have multiple pods, and the Kubernetes master automatically handles scheduling the pods across
+ the Nodes in the cluster. The Master's automatic scheduling takes into account the available
+ resources on each Node.
+
+
Every Kubernetes Node runs at least:
+
+
Kubelet, a process responsible for communication between the Kubernetes Master and the Node; it
+ manages the Pods and the containers running on a machine.
+
+
A container runtime (like Docker, rkt) responsible for pulling the container image from a
+ registry, unpacking the container, and running the application.
+
+
+
+
+
+
+
Containers should only be scheduled together in a single Pod if they are tightly coupled and
+ need to share resources such as disk.
+
+
+
+
+
+
+
+
+
Node overview
+
+
+
+
+
+
+
+
+
+
+
+
+
Troubleshooting with kubectl
+
In Module 2, you used Kubectl
+ command-line interface. You'll continue to use it in Module 3 to get information about deployed
+ applications and their environments. The most common operations can be done with the following
+ kubectl commands:
+
+
kubectl get - list resources
+
kubectl describe - show detailed information about a resource
+
kubectl logs - print the logs from a container in a pod
+
kubectl exec - execute a command on a container in a pod
+
+
+
You can use these commands to see when applications were deployed, what their current statuses are,
+ where they are running and what their configurations are.
+
+
Now that we know more about our cluster components and the command line, let's explore our
+ application.
+
+
+
+
+
A node is a worker machine in Kubernetes and may be a VM or physical machine, depending on
+ the cluster. Multiple Pods can run on one Node.
+
+
+
diff --git a/content/ko/docs/tutorials/kubernetes-basics/expose/expose-intro.html b/content/ko/docs/tutorials/kubernetes-basics/expose/expose-intro.html
new file mode 100644
index 000000000..1bdb24f41
--- /dev/null
+++ b/content/ko/docs/tutorials/kubernetes-basics/expose/expose-intro.html
@@ -0,0 +1,157 @@
+---
+title: Using a Service to Expose Your App
+weight: 10
+---
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
Objectives
+
+
Learn about a Service in Kubernetes
+
Understand how labels and LabelSelector objects relate to a Service
+
Expose an application outside a Kubernetes cluster using a Service
+
+
+
+
+
Overview of Kubernetes Services
+
+
Kubernetes Pods are mortal. Pods in fact
+ have a lifecycle. When a worker node
+ dies, the Pods running on the Node are also lost. A ReplicationController
+ might then dynamically drive the cluster back to the desired state via creation of new Pods to keep your
+ application running. As another example, consider an image-processing backend with 3 replicas. Those
+ replicas are fungible; the front-end system should not care about backend replicas or even if a Pod
+ is lost and recreated. That said, each Pod in a Kubernetes cluster has a unique IP address, even
+ Pods on the same Node, so there needs to be a way of automatically reconciling changes among Pods so
+ that your applications continue to function.
+
+
A Service in Kubernetes is an abstraction which defines a logical set of Pods and a policy by which
+ to access them. Services enable a loose coupling between dependent Pods. A Service is defined using
+ YAML (preferred) or JSON,
+ like all Kubernetes objects. The set of Pods targeted by a Service is usually determined by a LabelSelector
+ (see below for why you might want a Service without including selector in the spec).
+
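+ For illustration only, a minimal Service manifest might look like the sketch below; the Service
+ name, selector label, and ports are assumptions and should match your own Deployment:
+
+```shell
+# Apply an illustrative Service manifest from a heredoc; the name, selector
+# label, and ports are placeholders, not values defined by this tutorial
+cat <<EOF | kubectl apply -f -
+apiVersion: v1
+kind: Service
+metadata:
+  name: my-service
+spec:
+  selector:
+    app: my-app
+  ports:
+  - port: 80
+    targetPort: 8080
+EOF
+```
+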
+
+
Although each Pod has a unique IP address, those IPs are not exposed outside the cluster without a
+ Service. Services allow your applications to receive traffic. Services can be exposed in different
+ ways by specifying a type in the ServiceSpec:
+
+
ClusterIP (default) - Exposes the Service on an internal IP in the cluster. This type
+ makes the Service only reachable from within the cluster.
+
+
NodePort - Exposes the Service on the same port of each selected Node in the cluster
+ using NAT. Makes a Service accessible from outside the cluster using <NodeIP>:<NodePort>.
+ Superset of ClusterIP.
+
+
LoadBalancer - Creates an external load balancer in the current cloud (if supported) and
+ assigns a fixed, external IP to the Service. Superset of NodePort.
+
+
ExternalName - Exposes the Service using an arbitrary name (specified by externalName
+ in the spec) by returning a CNAME record with the name. No proxy is used. This type requires
+ v1.7 or higher of kube-dns.
+
Additionally, note that there are some use cases with Services that involve not defining selector
+ in the spec. A Service created without selector will also not create the corresponding
+ Endpoints object. This allows users to manually map a Service to specific endpoints. Another
+ reason there may be no selector is that you are strictly using type: ExternalName.
+
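+ As a hedged example, an existing Deployment could be exposed outside the cluster with a NodePort
+ Service roughly like this (the Deployment name and port are placeholders):
+
+```shell
+# Expose an existing Deployment through a NodePort Service;
+# "my-deployment" and port 8080 are placeholders
+kubectl expose deployment/my-deployment --type=NodePort --port=8080
+
+# Check the Service and the node port that was assigned to it
+kubectl get services
+kubectl describe service my-deployment
+```
+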
+
+
+
+
Summary
+
+
Exposing Pods to external traffic
+
Load balancing traffic across multiple Pods
+
Using labels
+
+
+
+
A Kubernetes Service is an abstraction layer which defines a logical set of Pods and enables
+ external traffic exposure, load balancing and service discovery for those Pods.
+
+
Services and Labels
+
A Service routes traffic across a set of Pods. Services are the abstraction that allows Pods to die
+ and replicate in Kubernetes without impacting your application. Discovery and routing among
+ dependent Pods (such as the frontend and backend components in an application) is handled by
+ Kubernetes Services.
+
Services match a set of Pods using labels
+ and selectors, a grouping primitive that allows logical operations on objects in Kubernetes.
+ Labels are key/value pairs attached to objects and can be used in any number of ways:
+
+
Designate objects for development, test, and production
+
Embed version tags
+
Classify an object using tags
+
+
+
+
+
+
You can create a Service at the same time you create a Deployment by
+ using --expose in kubectl.
+
+
Labels can be attached to objects at creation time or later on. They can be modified at any time.
+ Let's expose our application now using a Service and apply some labels.
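+
+ As a small sketch of working with labels (the Pod name and the version=v1 label are illustrative
+ placeholders, not values defined by this tutorial):
+
+```shell
+# Show the labels currently attached to the Pods
+kubectl get pods --show-labels
+
+# Attach an additional label to a Pod; "version=v1" is only an example
+kubectl label pod POD_NAME version=v1
+
+# Select objects by label using a selector
+kubectl get pods -l version=v1
+```
+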
This tutorial provides a walkthrough of the basics of the Kubernetes cluster orchestration system.
+ Each module contains some background information on major Kubernetes features and concepts, and
+ includes an interactive online tutorial. These interactive tutorials let you manage a simple cluster
+ and its containerized
+ applications for yourself.
+
Using the interactive tutorials, you can learn to:
+
+
Deploy a containerized application on a cluster
+
Scale the deployment
+
Update the containerized application with a new software version
+
Debug the containerized application
+
+
The tutorials use Katacoda to run a virtual terminal in your web browser that runs Minikube, a
+ small-scale local deployment of Kubernetes that can run anywhere. There's no need to install any
+ software or configure anything; each interactive tutorial runs directly out of your web browser
+ itself.
+
What can Kubernetes do for you?
+
With modern web services, users expect applications to be available 24/7, and developers expect to
+ deploy new versions of those applications several times a day. Containerization helps package
+ software to serve these goals, enabling applications to be released and updated in an easy and fast
+ way without downtime. Kubernetes helps you make sure those containerized applications run where and
+ when you want, and helps them find the resources and tools they need to work. Kubernetes is a production-ready,
+ open source platform designed with Google's accumulated experience in container orchestration,
+ combined with best-of-breed ideas from the community.
In the previous modules we created a Deployment,
+ and then exposed it publicly via a Service. The
+ Deployment created only one Pod for running our application. When traffic increases, we will need to
+ scale the application to keep up with user demand.
+
+
Scaling is accomplished by changing the number of replicas in a Deployment
+
+
+
+
+
Summary:
+
+
Scaling a Deployment
+
+
+
+
You can create a Deployment with multiple instances from the start by using the --replicas
+ parameter of the kubectl run command.
Scaling out a Deployment will ensure new Pods are created and scheduled to Nodes with available
+ resources. Scaling will increase the number of Pods to the new desired state. Kubernetes also
+ supports autoscaling
+ of Pods, but it is outside of the scope of this tutorial. Scaling to zero is also possible, and it
+ will terminate all Pods of the specified Deployment.
+
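+ A hedged sketch of scaling with kubectl (the Deployment name and replica counts are placeholders):
+
+```shell
+# Scale the Deployment up to 4 replicas; "my-deployment" is a placeholder
+kubectl scale deployments/my-deployment --replicas=4
+
+# Confirm the new desired and current number of Pods
+kubectl get deployments
+kubectl get pods -o wide
+
+# Scaling down works the same way
+kubectl scale deployments/my-deployment --replicas=2
+```
+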
+
Running multiple instances of an application will require a way to distribute the traffic to all of
+ them. Services have an integrated load-balancer that will distribute network traffic to all Pods of
+ an exposed Deployment. Services will continuously monitor the running Pods using endpoints, to
+ ensure the traffic is sent only to available Pods.
+
+
+
+
+
Scaling is accomplished by changing the number of replicas in a Deployment.
+
Once you have multiple instances of an application running, you will be able to perform rolling
+ updates without downtime. We'll cover that in the next module. Now, let's go to the online terminal
+ and scale our application.
+
diff --git a/content/ko/docs/tutorials/kubernetes-basics/update/update-intro.html b/content/ko/docs/tutorials/kubernetes-basics/update/update-intro.html
new file mode 100644
index 000000000..6219f4c4d
--- /dev/null
+++ b/content/ko/docs/tutorials/kubernetes-basics/update/update-intro.html
@@ -0,0 +1,149 @@
+---
+title: Performing a Rolling Update
+weight: 10
+---
+
Objectives
+
+
Perform a rolling update using kubectl.
+
+
+
+
+
Updating an application
+
+
Users expect applications to be available all the time and developers are expected to deploy new
+ versions of them several times a day. In Kubernetes this is done with rolling updates. Rolling
+ updates allow Deployment updates to take place with zero downtime by incrementally updating
+ Pod instances with new ones. The new Pods will be scheduled on Nodes with available resources.
+
+
In the previous module we scaled our application to run multiple instances. This is a requirement for
+ performing updates without affecting application availability. By default, the maximum number of
+ Pods that can be unavailable during the update and the maximum number of new Pods that can be
+ created is one. Both options can be configured as either numbers or percentages (of Pods).
+ In Kubernetes, updates are versioned and any Deployment update can be reverted to a previous
+ (stable) version.
+
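+ As an illustrative sketch (the Deployment name, container name, and image tag are placeholders):
+
+```shell
+# Update the image of the Deployment's container to a new version;
+# the Deployment name, container name, and image tag are placeholders
+kubectl set image deployments/my-deployment my-container=my-image:v2
+
+# Follow the rollout until it completes
+kubectl rollout status deployments/my-deployment
+
+# Revert to the previously deployed (stable) version if needed
+kubectl rollout undo deployments/my-deployment
+```
+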
+
+
+
+
Summary:
+
+
Updating an app
+
+
+
+
Rolling updates allow Deployment updates to take place with zero downtime by incrementally
+ updating Pod instances with new ones.
Similar to application scaling, if a Deployment is exposed publicly, the Service will load-balance
+ the traffic only to available Pods during the update. An available Pod is an instance that is
+ available to the users of the application.
+
+
Rolling updates allow the following actions:
+
+
Promote an application from one environment to another (via container image updates)
+
Roll back to previous versions
+
Continuous Integration and Continuous Delivery of applications with zero downtime
+
+
+
+
+
+
+
If a Deployment is exposed publicly, the Service will load-balance the traffic only to
+ available Pods during the update.
+
In the following interactive tutorial, we'll update our application to a new version, and also
+ perform a rollback.