From a4f0df635bbdbebae4f1c00238ba78ea8ce993b5 Mon Sep 17 00:00:00 2001
From: "Claudia J.Kang"
Date: Mon, 1 Oct 2018 14:12:22 +0900
Subject: [PATCH] Translate the setup pages' headers and subheaders into Korean. (#51)
---
 content/ko/docs/setup/independent/_index.md |  2 +-
 content/ko/docs/setup/minikube.md | 56 ++++++++---------
 content/ko/docs/setup/multiple-zones.md | 22 +++----
 content/ko/docs/setup/node-conformance.md | 16 ++---
 content/ko/docs/setup/scratch.md | 70 ++++++++++-----------
 5 files changed, 83 insertions(+), 83 deletions(-)

diff --git a/content/ko/docs/setup/independent/_index.md b/content/ko/docs/setup/independent/_index.md
index f5698df38..e87c31872 100755
--- a/content/ko/docs/setup/independent/_index.md
+++ b/content/ko/docs/setup/independent/_index.md
@@ -1,5 +1,5 @@
---
-title: "Bootstrapping Clusters with kubeadm"
+title: "kubeadm으로 클러스터 부트스트래핑 하기"
weight: 30
---

diff --git a/content/ko/docs/setup/minikube.md b/content/ko/docs/setup/minikube.md
index c0f2a421c..1507cce2d 100644
--- a/content/ko/docs/setup/minikube.md
+++ b/content/ko/docs/setup/minikube.md
@@ -1,12 +1,12 @@
---
-title: Running Kubernetes Locally via Minikube
+title: Minikube로 로컬 상에서 쿠버네티스 구동
---

Minikube is a tool that makes it easy to run Kubernetes locally. Minikube runs a single-node Kubernetes cluster inside a VM on your laptop for users looking to try out Kubernetes or develop with it day-to-day.

{{< toc >}}

-### Minikube Features
+### Minikube 특징

* Minikube supports Kubernetes features such as:
  * DNS
  * NodePorts
  * ConfigMaps and Secrets
  * Dashboards
  * Container Runtime: Docker, [rkt](https://github.com/rkt/rkt), [CRI-O](https://github.com/kubernetes-incubator/cri-o), and [containerd](https://github.com/containerd/containerd)
  * Enabling CNI (Container Network Interface)
  * Ingress

-## Installation
+## 설치

See [Installing Minikube](/docs/tasks/tools/install-minikube/).

-## Quickstart
+## 빠른 시작

Here's a brief demo of minikube usage. If you want to change the VM driver, add the appropriate `--vm-driver=xxx` flag to `minikube start`. Minikube supports
@@ -74,7 +74,7 @@ Stopping local Kubernetes cluster...
Stopping "minikube"...
```

-### Alternative Container Runtimes
+### 다른 컨테이너 런타임

#### containerd
@@ -120,7 +120,7 @@ $ minikube start \
    --bootstrapper=kubeadm
```

-#### rkt container engine
+#### rkt 컨테이너 엔진

To use [rkt](https://github.com/rkt/rkt) as the container runtime, run:
@@ -132,12 +132,12 @@ $ minikube start \
This will use an alternative minikube ISO image containing both rkt and Docker, and enables CNI networking.

-### Driver plugins
+### 드라이버 플러그인

See [DRIVERS](https://git.k8s.io/minikube/docs/drivers.md) for details on supported drivers and how to install plugins, if required.

-### Reusing the Docker daemon
+### 도커 데몬 재사용

When using a single VM of Kubernetes, it's really handy to reuse minikube's built-in Docker daemon, as this means you don't have to build a Docker registry on your host machine and push images into it; you can just build inside the same Docker daemon as minikube, which speeds up local experiments. Just make sure you tag your Docker image with something other than 'latest' and use that tag when you pull the image. Otherwise, if you do not specify the version of your image, it will be assumed to be `:latest`, with an image pull policy of `Always`, which may eventually result in `ErrImagePull` because you may not have any versions of your Docker image in the default Docker registry (usually DockerHub) yet.
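A minimal sketch of that workflow, assuming minikube's `docker-env` helper is available (the image name `my-image:v1` is only an example):

```shell
# Point this shell's Docker client at the Docker daemon inside the minikube VM
eval $(minikube docker-env)

# Build directly inside the VM with an explicit, non-latest tag;
# the resulting image is immediately visible to the cluster
docker build -t my-image:v1 .
```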
@@ -170,9 +170,9 @@ The fix is to update /etc/sysconfig/docker to ensure that minikube's environment
Remember to turn off the imagePullPolicy:Always, as otherwise Kubernetes won't use images you built locally.

-## Managing your Cluster
+## 클러스터 관리

-### Starting a Cluster
+### 클러스터 시작

The `minikube start` command can be used to start your cluster.
This command creates and configures a virtual machine that runs a single-node Kubernetes cluster.
@@ -189,7 +189,7 @@ Unfortunately just setting the environment variables will not work.

Minikube will also create a "minikube" context, and set it to default in kubectl.
To switch back to this context later, run this command: `kubectl config use-context minikube`.

-#### Specifying the Kubernetes version
+#### 쿠버네티스 버전 지정

Minikube supports running multiple different versions of Kubernetes. You can
access a list of all available versions via
@@ -206,7 +206,7 @@ example, to run version `v1.7.3`, you would run the following:
minikube start --kubernetes-version v1.7.3
```

-### Configuring Kubernetes
+### 쿠버네티스 구성

Minikube has a "configurator" feature that allows users to configure the Kubernetes components with arbitrary values.
To use this feature, you can use the `--extra-config` flag on the `minikube start` command.
@@ -226,7 +226,7 @@ Here is the documentation for each supported configuration:
* [etcd](https://godoc.org/github.com/coreos/etcd/etcdserver#ServerConfig)
* [scheduler](https://godoc.org/k8s.io/kubernetes/pkg/apis/componentconfig#KubeSchedulerConfiguration)

-#### Examples
+#### 예제

To change the `MaxPods` setting to 5 on the Kubelet, pass this flag: `--extra-config=kubelet.MaxPods=5`.

This feature also supports nested structs. To change the `LeaderElection.LeaderE
To set the `AuthorizationMode` on the `apiserver` to `RBAC`, you can use: `--extra-config=apiserver.Authorization.Mode=RBAC`.

-### Stopping a Cluster
+### 클러스터 중지

The `minikube stop` command can be used to stop your cluster.
This command shuts down the minikube virtual machine, but preserves all cluster state and data.
Starting the cluster again will restore it to its previous state.

-### Deleting a Cluster
+### 클러스터 삭제

The `minikube delete` command can be used to delete your cluster.
This command shuts down and deletes the minikube virtual machine. No data or state is preserved.

-## Interacting with Your Cluster
+## 클러스터와 상호 작용

### Kubectl

@@ -256,7 +256,7 @@ Minikube sets this context to default automatically, but if you need to switch b

Or pass the context on each command like this: `kubectl get pods --context=minikube`.

-### Dashboard
+### 대시보드

To access the [Kubernetes Dashboard](/docs/tasks/access-application-cluster/web-ui-dashboard/), run this command in a shell after starting minikube to get the address:
@@ -264,7 +264,7 @@ To access the [Kubernetes Dashboard](/docs/tasks/access-application-cluster/web-
minikube dashboard
```

-### Services
+### 서비스

To access a service exposed via a node port, run this command in a shell after starting minikube to get the address:
@@ -272,7 +272,7 @@ To access a service exposed via a node port, run this command in a shell after s
minikube service [-n NAMESPACE] [--url] NAME
```

-## Networking
+## 네트워킹

The minikube VM is exposed to the host system via a host-only IP address, which can be obtained with the `minikube ip` command.
Any services of type `NodePort` can be accessed over that IP address, on the NodePort.
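As a hedged end-to-end sketch (the deployment name `hello-node` and port are hypothetical):

```shell
# Expose a deployment as a NodePort service
kubectl expose deployment hello-node --type=NodePort --port=8080

# Print a ready-made URL combining the host-only IP and the assigned NodePort
minikube service hello-node --url
```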
@@ -281,7 +281,7 @@ To determine the NodePort for your service, you can use a `kubectl` command like

`kubectl get service $SERVICE --output='jsonpath="{.spec.ports[0].nodePort}"'`

-## Persistent Volumes
+## 퍼시스턴트 볼륨

Minikube supports [PersistentVolumes](/docs/concepts/storage/persistent-volumes/) of type `hostPath`.
These PersistentVolumes are mapped to a directory inside the minikube VM.

@@ -308,7 +308,7 @@ spec:
  path: /data/pv0001/
```

-## Mounted Host Folders
+## 호스트 폴더 마운트

Some drivers will mount a host folder within the VM so that you can easily share files between the VM and host. These are not configurable at the moment and differ depending on the driver and OS you are using.

**Note:** Host folder sharing is not implemented in the KVM driver yet.

@@ -322,20 +322,20 @@ Some drivers will mount a host folder within the VM so that you can easily share

| Xhyve | macOS | /Users | /Users |

-## Private Container Registries
+## 프라이빗 컨테이너 레지스트리

To access a private container registry, follow the steps on [this page](/docs/concepts/containers/images/).

We recommend you use `ImagePullSecrets`, but if you would like to configure access on the minikube VM, you can place the `.dockercfg` in the `/home/docker` directory or the `config.json` in the `/home/docker/.docker` directory.

-## Add-ons
+## 애드온

In order to have minikube properly start or restart custom addons, place the addons you wish to be launched with minikube in the `~/.minikube/addons` directory. Addons in this folder will be moved to the minikube VM and launched each time minikube is started or restarted.

-## Using Minikube with an HTTP Proxy
+## HTTP 프록시 환경에서 Minikube 사용

Minikube creates a Virtual Machine that includes Kubernetes and a Docker daemon.
When Kubernetes attempts to schedule containers using Docker, the Docker daemon may require external network access to pull containers.

@@ -357,19 +357,19 @@ To by-pass proxy configuration for this IP address, you should modify your no_pr

```shell
$ export no_proxy=$no_proxy,$(minikube ip)
```

-## Known Issues
+## 알려진 이슈

* Features that require a Cloud Provider will not work in Minikube. These include:
  * LoadBalancers
* Features that require multiple nodes. These include:
  * Advanced scheduling policies

-## Design
+## 설계

Minikube uses [libmachine](https://github.com/docker/machine/tree/master/libmachine) for provisioning VMs, and [localkube](https://git.k8s.io/minikube/pkg/localkube) (originally written and donated to this project by [RedSpread](https://github.com/redspread)) for running the cluster.

For more information about minikube, see the [proposal](https://git.k8s.io/community/contributors/design-proposals/cluster-lifecycle/local-cluster-ux.md).

-## Additional Links:
+## 추가적인 링크:

* **Goals and Non-Goals**: For the goals and non-goals of the minikube project, please see our [roadmap](https://git.k8s.io/minikube/docs/contributors/roadmap.md).
* **Development Guide**: See [CONTRIBUTING.md](https://git.k8s.io/minikube/CONTRIBUTING.md) for an overview of how to send pull requests.
* **Building Minikube**: For instructions on how to build/test minikube from source, see the [build guide](https://git.k8s.io/minikube/docs/contributors/build_guide.md)
@@ -377,6 +377,6 @@ For more information about minikube, see the [proposal](https://git.k8s.io/commu
* **Adding a New Addon**: For instructions on how to add a new addon for minikube, see the [adding an addon guide](https://git.k8s.io/minikube/docs/contributors/adding_an_addon.md)
* **Updating Kubernetes**: For instructions on how to update Kubernetes, see the [updating Kubernetes guide](https://git.k8s.io/minikube/docs/contributors/updating_kubernetes.md)

-## Community
+## 커뮤니티

Contributions, questions, and comments are all welcomed and encouraged! minikube developers hang out on [Slack](https://kubernetes.slack.com) in the #minikube channel (get an invitation [here](http://slack.kubernetes.io/)). We also have the [kubernetes-dev Google Groups mailing list](https://groups.google.com/forum/#!forum/kubernetes-dev). If you are posting to the list, please prefix your subject with "minikube: ".

diff --git a/content/ko/docs/setup/multiple-zones.md b/content/ko/docs/setup/multiple-zones.md
index e41ff66ba..ea1d053fc 100644
--- a/content/ko/docs/setup/multiple-zones.md
+++ b/content/ko/docs/setup/multiple-zones.md
@@ -1,9 +1,9 @@
---
-title: Running in Multiple Zones
+title: 여러 영역에서 구동
weight: 90
---

-## Introduction
+## 소개

Kubernetes 1.2 adds support for running a single cluster in multiple failure zones
(GCE calls them simply "zones", AWS calls them "availability zones"; here we'll refer to them as "zones").
@@ -25,7 +25,7 @@ for the appropriate labels to be added to nodes and volumes).

{{< toc >}}

-## Functionality
+## 기능

When nodes are started, the kubelet automatically adds labels to them with
zone information.
@@ -48,7 +48,7 @@ admission controller automatically adds zone labels to them. The scheduler (via
given volume are only placed into the same zone as that volume, as volumes
cannot be attached across zones.

-## Limitations
+## 제한 사항

There are some important limitations of the multizone support:

@@ -84,14 +84,14 @@ The following limitations are addressed with [topology-aware volume binding](/do
StatefulSet, which will ensure that all the volumes for a replica
are provisioned in the same zone.

-## Walkthrough
+## 연습

We're now going to walk through setting up and using a multi-zone
cluster on both GCE & AWS. To do so, you bring up a full cluster
(specifying `MULTIZONE=true`), and then you add nodes in additional zones
by running `kube-up` again (specifying `KUBE_USE_EXISTING_MASTER=true`).

-### Bringing up your cluster
+### 클러스터 시작하기

Create the cluster as normal, but pass MULTIZONE to tell the cluster to manage multiple zones, creating nodes in us-central1-a.

@@ -110,7 +110,7 @@ curl -sS https://get.k8s.io | MULTIZONE=true KUBERNETES_PROVIDER=aws KUBE_AWS_ZO

This step brings up a cluster as normal, still running in a single zone
(but `MULTIZONE=true` has enabled multi-zone capabilities).

-### Nodes are labeled
+### 라벨이 지정된 노드 확인

View the nodes; you can see that they are labeled with zone information.
They are all in `us-central1-a` (GCE) or `us-west-2a` (AWS) so far.
The
@@ -128,7 +128,7 @@ kubernetes-minion-9vlv Ready 6m v1.11.1
kubernetes-minion-a12q Ready 6m v1.11.1 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-a12q
```

-### Add more nodes in a second zone
+### 두 번째 영역에 더 많은 노드 추가하기

Let's add another set of nodes to the existing cluster, reusing the
existing master, running in a different zone (us-central1-b or us-west-2b).

@@ -166,7 +166,7 @@ kubernetes-minion-pp2f Ready 2m v1.11.1
kubernetes-minion-wf8i Ready 2m v1.11.1 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-b,kubernetes.io/hostname=kubernetes-minion-wf8i
```

-### Volume affinity
+### 볼륨 어피니티

Create a volume using dynamic volume creation (only PersistentVolumes are supported for zone affinity):

@@ -245,7 +245,7 @@ NAME STATUS AGE VERSION LABELS
kubernetes-minion-9vlv Ready 22m v1.6.0+fff5156 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-9vlv
```

-### Pods are spread across zones
+### 여러 영역에 파드 분배하기

Pods in a replication controller or service are automatically spread
across zones. First, let's launch more nodes in a third zone:

@@ -310,7 +310,7 @@ LoadBalancer Ingress: 130.211.126.21

The load balancer correctly targets all the pods, even though they are in multiple zones.

-### Shutting down the cluster
+### 클러스터 종료

When you're done, clean up:

diff --git a/content/ko/docs/setup/node-conformance.md b/content/ko/docs/setup/node-conformance.md
index 73717175c..c59e89e8a 100644
--- a/content/ko/docs/setup/node-conformance.md
+++ b/content/ko/docs/setup/node-conformance.md
@@ -1,23 +1,23 @@
---
-title: Validate Node Setup
+title: 노드 구성 검증하기
---

{{< toc >}}

-## Node Conformance Test
+## 노드 적합성 테스트

*Node conformance test* is a containerized test framework that provides a system
verification and functionality test for a node. The test validates whether the
node meets the minimum requirements for Kubernetes; a node that passes the test
is qualified to join a Kubernetes cluster.

-## Limitations
+## 제한 사항

In Kubernetes version 1.5, the node conformance test has the following limitations:

* The node conformance test only supports Docker as the container runtime.

-## Node Prerequisite
+## 노드 필수 구성 요소

To run the node conformance test, a node must satisfy the same prerequisites as a
standard Kubernetes node. At a minimum, the node should have the following
@@ -26,7 +26,7 @@ daemons installed:
* Container Runtime (Docker)
* Kubelet

-## Running Node Conformance Test
+## 노드 적합성 테스트 실행

To run the node conformance test, perform the following steps:

@@ -48,7 +48,7 @@ sudo docker run -it --rm --privileged --net=host \
  k8s.gcr.io/node-test:0.2
```

-## Running Node Conformance Test for Other Architectures
+## 다른 아키텍처에서 노드 적합성 테스트 실행

Kubernetes also provides node conformance test Docker images for other
architectures:

@@ -59,7 +59,7 @@ architectures:
 arm   | node-test-arm   |
 arm64 | node-test-arm64 |

-## Running Selected Test
+## 선택된 테스트 실행

To run specific tests, overwrite the environment variable `FOCUS` with the
regular expression of tests you want to run.

@@ -88,7 +88,7 @@ Theoretically, you can run any node e2e test if you configure the container and
mount required volumes properly.
But **it is strongly recommended to run only the conformance test**, because running a non-conformance test requires much more complex configuration.

-## Caveats
+## 주의 사항

* The test leaves some Docker images on the node, including the node conformance
  test image and images of containers used in the functionality

diff --git a/content/ko/docs/setup/scratch.md b/content/ko/docs/setup/scratch.md
index 42005bedb..6b9c481f3 100644
--- a/content/ko/docs/setup/scratch.md
+++ b/content/ko/docs/setup/scratch.md
@@ -1,5 +1,5 @@
---
-title: Creating a Custom Cluster from Scratch
+title: 맨 처음부터 사용자 지정 클러스터 생성
---

This guide is for people who want to craft a custom Kubernetes cluster. If you
@@ -16,9 +16,9 @@ steps that existing cluster setup scripts are making.

{{< toc >}}

-## Designing and Preparing
+## 설계 및 준비

-### Learning
+### 학습 계획

1. You should be familiar with using Kubernetes already. We suggest you set
   up a temporary cluster by following one of the other Getting Started Guides.
@@ -27,7 +27,7 @@ effect of completing one of the other Getting Started Guides. If not, follow
   the instructions [here](/docs/tasks/kubectl/install/).

-### Cloud Provider
+### 클라우드 공급자

Kubernetes has the concept of a Cloud Provider, which is a module that provides
an interface for managing TCP Load Balancers, Nodes (Instances) and Networking
Routes.
@@ -36,7 +36,7 @@ create a custom cluster without implementing a cloud provider (for example if us
bare-metal), and not all parts of the interface need to be implemented, depending
on how flags are set on various components.

-### Nodes
+### 노드

- You can use virtual or physical machines.
- While you can build a cluster with 1 machine, in order to run all the examples and tests you
@@ -50,9 +50,9 @@ on how flags are set on various components.
  - Other nodes can have any reasonable amount of memory and any number of cores. They need not
    have identical configurations.

-### Network
+### 네트워크

-#### Network Connectivity
+#### 네트워크 연결

Kubernetes has a distinctive [networking model](/docs/concepts/cluster-administration/networking/).

Kubernetes allocates an IP address to each pod. When creating a cluster, you
@@ -123,13 +123,13 @@ Also, you need to pick a static IP for master node.
- Open any firewalls to allow access to the apiserver ports 80 and/or 443.
- Enable ipv4 forwarding sysctl, `net.ipv4.ip_forward = 1`

-#### Network Policy
+#### 네트워크 폴리시

Kubernetes enables the definition of fine-grained network policy between Pods using the [NetworkPolicy](/docs/concepts/services-networking/network-policies/) resource.

Not all networking providers support the Kubernetes NetworkPolicy API; see [Using Network Policy](/docs/tasks/configure-pod-container/declare-network-policy/) for more information.

-### Cluster Naming
+### 클러스터 이름 구성

You should pick a name for your cluster. Pick a short name for each cluster
that is distinct from future cluster names. This will be used in several ways:
@@ -140,7 +140,7 @@ region of the world, etc.
- Kubernetes clusters can create cloud provider resources (for example, AWS ELBs) and different clusters
  need to distinguish which resources each created. Call this `CLUSTER_NAME`.
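For example (the value is illustrative only):

```shell
# A short, stable identifier; later steps and cloud resource tags reuse it
export CLUSTER_NAME="dev-cluster-1"
```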
-### Software Binaries +### 소프트웨어 바이너리 You will need binaries for: @@ -155,7 +155,7 @@ You will need binaries for: - kube-controller-manager - kube-scheduler -#### Downloading and Extracting Kubernetes Binaries +#### 쿠버네티스 바이너리 다운로드 및 압축 해제 A Kubernetes binary release includes all the Kubernetes binaries as well as the supported release of etcd. You can use a Kubernetes binary release (recommended) or build your Kubernetes binaries following the instructions in the @@ -168,7 +168,7 @@ Then locate `./kubernetes/server/kubernetes-server-linux-amd64.tar.gz` and unzip Then, within the second set of unzipped files, locate `./kubernetes/server/bin`, which contains all the necessary binaries. -#### Selecting Images +#### 이미지 선택 You will run docker, kubelet, and kube-proxy outside of a container, the same way you would run any system daemon, so you just need the bare binaries. For etcd, kube-apiserver, kube-controller-manager, and kube-scheduler, @@ -202,7 +202,7 @@ The remainder of the document assumes that the image identifiers have been chose - `HYPERKUBE_IMAGE=k8s.gcr.io/hyperkube:$TAG` - `ETCD_IMAGE=k8s.gcr.io/etcd:$ETCD_VERSION` -### Security Models +### 보안 모델 There are two main options for security: @@ -216,7 +216,7 @@ There are two main options for security: If following the HTTPS approach, you will need to prepare certs and credentials. -#### Preparing Certs +#### 인증서 준비 You need to prepare several certs: @@ -243,7 +243,7 @@ You will end up with the following files (we will use these variables later on) - `KUBELET_KEY` - optional -#### Preparing Credentials +#### 자격 증명 준비 The admin user (and any users) need: @@ -307,7 +307,7 @@ Put the kubeconfig(s) on every node. The examples later in this guide assume that there are kubeconfigs in `/var/lib/kube-proxy/kubeconfig` and `/var/lib/kubelet/kubeconfig`. -## Configuring and Installing Base Software on Nodes +## 노드의 기본 소프트웨어 구성 및 설치 This section discusses how to configure machines to be Kubernetes nodes. @@ -418,7 +418,7 @@ cannot be started successfully. For more details about debugging kube-proxy problems, refer to [Debug Services](/docs/tasks/debug-application-cluster/debug-service/) -### Networking +### 네트워킹 Each node needs to be allocated its own CIDR range for pod networking. Call this `NODE_X_POD_CIDR`. @@ -448,7 +448,7 @@ NOTE: This is environment specific. Some environments will not need any masquerading at all. Others, such as GCE, will not allow pod IPs to send traffic to the internet, but have no problem with them inside your GCE Project. -### Other +### 기타 - Enable auto-upgrades for your OS package manager, if desired. - Configure log rotation for all node components (for example using [logrotate](http://linux.die.net/man/8/logrotate)). @@ -457,14 +457,14 @@ traffic to the internet, but have no problem with them inside your GCE Project. - Install any client binaries for optional volume types, such as `glusterfs-client` for GlusterFS volumes. -### Using Configuration Management +### 구성 관리 사용 The previous steps all involved "conventional" system administration techniques for setting up machines. You may want to use a Configuration Management system to automate the node configuration process. There are examples of [Saltstack](/docs/setup/salt/), Ansible, Juju, and CoreOS Cloud Config in the various Getting Started Guides. 
-## Bootstrapping the Cluster +## 클러스터 부트스트랩 While the basic node services (kubelet, kube-proxy, docker) are typically started and managed using traditional system administration/automation approaches, the remaining *master* components of Kubernetes are @@ -499,7 +499,7 @@ To run an etcd instance: 1. Make any modifications needed 1. Start the pod by putting it into the kubelet manifest directory -### Apiserver, Controller Manager, and Scheduler +### API 서버, 컨트롤러 관리자, 스케줄러 The apiserver, controller manager, and scheduler will each run as a pod on the master node. @@ -512,7 +512,7 @@ For each of these components, the steps to start them running are similar: 1. Start the pod by putting the completed template into the kubelet manifest directory. 1. Verify that the pod is started. -#### Apiserver pod template +#### API 서버 파드 템플릿 ```json { @@ -626,7 +626,7 @@ This pod mounts several node file system directories using the `hostPath` volum *TODO* document proxy-ssh setup. -##### Cloud Providers +##### 클라우드 공급자 Apiserver supports several cloud providers. @@ -643,7 +643,7 @@ Some cloud providers require a config file. If so, you need to put config file i - AWS format defined by type [AWSCloudConfig](https://releases.k8s.io/{{< param "githubbranch" >}}/pkg/cloudprovider/providers/aws/aws.go) - There is a similar type in the corresponding file for other cloud providers. -#### Scheduler pod template +#### 스케줄러 파드 템플릿 Complete this template for the scheduler pod: @@ -688,7 +688,7 @@ Typically, no additional flags are required for the scheduler. Optionally, you may want to mount `/var/log` as well and redirect output there. -#### Controller Manager Template +#### 컨트롤러 관리자 템플릿 Template for controller manager pod: @@ -762,7 +762,7 @@ Flags to consider using with controller manager: - `--service-account-private-key-file=/srv/kubernetes/server.key`, used by the [service account](/docs/user-guide/service-accounts) feature. - `--master=127.0.0.1:8080` -#### Starting and Verifying Apiserver, Scheduler, and Controller Manager +#### API 서버, 스케줄러, 컨트롤러 관리자 시작 및 확인 Place each completed pod template into the kubelet config dir (whatever `--config=` argument of kubelet is set to, typically @@ -793,7 +793,7 @@ If you have selected the `--register-node=true` option for kubelets, they will n You should soon be able to see all your nodes by running the `kubectl get nodes` command. Otherwise, you will need to manually create node objects. -### Starting Cluster Services +### 클러스터 서비스 시작 You will want to complete your Kubernetes clusters by adding cluster-wide services. These are sometimes called *addons*, and [an overview @@ -814,9 +814,9 @@ Notes for setting up each cluster service are given below: * GUI * [Setup instructions](https://github.com/kubernetes/dashboard) -## Troubleshooting +## 문제 해결 -### Running validate-cluster +### validate-cluster 명령 실행 `cluster/validate-cluster.sh` is used by `cluster/kube-up.sh` to determine if the cluster start succeeded. @@ -840,30 +840,30 @@ etcd-0 Healthy {"health": "true"} Cluster validation succeeded ``` -### Inspect pods and services +### 파드와 서비스 검사 Try to run through the "Inspect your cluster" section in one of the other Getting Started Guides, such as [GCE](/docs/setup/turnkey/gce/#inspect-your-cluster). You should see some services. You should also see "mirror pods" for the apiserver, scheduler and controller-manager, plus any add-ons you started. 
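A quick spot-check might look like the following (standard kubectl commands; output varies by cluster):

```shell
# Cluster-wide services, including any addons you started
kubectl get services --all-namespaces

# System pods, including the mirror pods for the apiserver,
# scheduler, and controller-manager defined in the kubelet manifests
kubectl get pods --namespace=kube-system
```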
-### Try Examples +### 예제 실행하기 At this point you should be able to run through one of the basic examples, such as the [nginx example](/examples/application/deployment.yaml). -### Running the Conformance Test +### 적합성 테스트 실행 You may want to try to run the [Conformance test](http://releases.k8s.io/{{< param "githubbranch" >}}/test/e2e_node/conformance/run_test.sh). Any failures may give a hint as to areas that need more attention. -### Networking +### 네트워킹 The nodes must be able to connect to each other using their private IP. Verify this by pinging or SSH-ing from one node to another. -### Getting Help +### 도움말 얻기 If you run into trouble, see the section on [troubleshooting](/docs/setup/turnkey/gce/#troubleshooting), post to the [kubernetes-users group](https://groups.google.com/forum/#!forum/kubernetes-users), or come ask questions on [Slack](/docs/troubleshooting#slack). -## Support Level +## 지원 레벨 IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level